public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-05-25 17:49 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-05-25 17:49 UTC
  To: gentoo-commits

commit:     149451916045d65e0f2e6944027ab2a23825630c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue May 25 17:48:25 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue May 25 17:48:25 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=14945191

Genpatches Updates

Patch to enable link security restrictions by default.
Support for namespace user.pax.* on tmpfs
Bluetooth: Check key sizes only when Secure Simple Pairing
is enabled. See bug #686758
tmp513 requires REGMAP_I2C to build.  Select it by default in
Kconfig. See bug #710790. Thanks to Phil Stracchino
sign-file: full functionality with modern LibreSSL
Add Gentoo Linux support config settings and defaults.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  25 +
 1500_XATTR_USER_PREFIX.patch                       |  67 ++
 ...ble-link-security-restrictions-by-default.patch |  20 +
 ...zes-only-if-Secure-Simple-Pairing-enabled.patch |  37 ++
 ...3-Fix-build-issue-by-selecting-CONFIG_REG.patch |  30 +
 2920_sign-file-patch-for-libressl.patch            |  16 +
 5010_enable-cpu-optimizations-universal.patch      | 684 +++++++++++++++++++++
 7 files changed, 879 insertions(+)

diff --git a/0000_README b/0000_README
index 9018993..2bffadc 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,31 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1500_XATTR_USER_PREFIX.patch
+From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc:   Support for namespace user.pax.* on tmpfs.
+
+Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
+From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc:   Enable link security restrictions by default.
+
+Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
+From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
+Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+
+Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
+From:   https://bugs.gentoo.org/710790
+Desc:   tmp513 requires REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
+
+Patch:  2920_sign-file-patch-for-libressl.patch
+From:   https://bugs.gentoo.org/717166
+Desc:   sign-file: full functionality with modern LibreSSL
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
+
+Patch:  5010_enable-cpu-optimizations-universal.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel >= 5.8 patch enables gcc = v9+ optimizations for additional CPUs.
+
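
Patches in this tree are applied to the kernel sources in numeric order; done
by hand, the loop is roughly (hypothetical path, from an unpacked source tree):

  for p in /path/to/linux-patches/[0-9]*.patch; do patch -p1 < "$p"; done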

diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 0000000..245dcc2
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,67 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on
+tmpfs filesystem used to house PaX flags.  The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed on all Gentoo systems so that XATTR_PAX flags
+are preserved for users who might build packages using portage on
+a tmpfs system with a non-hardened kernel and then switch to a
+hardened kernel with XATTR_PAX enabled.
+
+The namespace is added to any user with Extended Attribute support
+enabled for tmpfs.  Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index 1590c49..5eab462 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -73,5 +73,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT  "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+ 
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+ 
+ #endif /* _UAPI_LINUX_XATTR_H */
+--- a/mm/shmem.c	2020-05-04 15:30:27.042035334 -0400
++++ b/mm/shmem.c	2020-05-04 15:34:57.013881725 -0400
+@@ -3238,6 +3238,14 @@ static int shmem_xattr_handler_set(const
+ 	struct shmem_inode_info *info = SHMEM_I(inode);
+ 
+ 	name = xattr_full_name(handler, name);
++
++	if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++		if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++			return -EOPNOTSUPP;
++		if (size > 8)
++			return -EINVAL;
++	}
++
+ 	return simple_xattr_set(&info->xattrs, name, value, size, flags, NULL);
+ }
+ 
+@@ -3253,6 +3261,12 @@ static const struct xattr_handler shmem_
+ 	.set = shmem_xattr_handler_set,
+ };
+ 
++static const struct xattr_handler shmem_user_xattr_handler = {
++	.prefix = XATTR_USER_PREFIX,
++	.get = shmem_xattr_handler_get,
++	.set = shmem_xattr_handler_set,
++};
++
+ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #ifdef CONFIG_TMPFS_POSIX_ACL
+ 	&posix_acl_access_xattr_handler,
+@@ -3260,6 +3274,7 @@ static const struct xattr_handler *shmem
+ #endif
+ 	&shmem_security_xattr_handler,
+ 	&shmem_trusted_xattr_handler,
++	&shmem_user_xattr_handler,
+ 	NULL
+ };
+ 
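
For reference, the restriction this patch implements can be exercised from
userspace once it is applied; a minimal sketch, assuming a tmpfs mount at
/dev/shm and the setfattr/getfattr tools from the attr package:

  # the value must fit in 8 bytes, per the size check in shmem_xattr_handler_set()
  touch /dev/shm/testfile
  setfattr -n user.pax.flags -v "em" /dev/shm/testfile
  getfattr -n user.pax.flags /dev/shm/testfile
  # any other user.* attribute on tmpfs is rejected with EOPNOTSUPP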

diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 0000000..f0ed144
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,20 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+--- a/fs/namei.c	2018-09-28 07:56:07.770005006 -0400
++++ b/fs/namei.c	2018-09-28 07:56:43.370349204 -0400
+@@ -885,8 +885,8 @@ static inline void put_link(struct namei
+ 		path_put(&last->link);
+ }
+ 
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+ int sysctl_protected_fifos __read_mostly;
+ int sysctl_protected_regular __read_mostly;
+ 
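
The runtime effect is easy to confirm; a hypothetical session on a kernel
carrying this patch:

  sysctl fs.protected_symlinks fs.protected_hardlinks
  fs.protected_symlinks = 1
  fs.protected_hardlinks = 1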

diff --git a/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
new file mode 100644
index 0000000..394ad48
--- /dev/null
+++ b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
@@ -0,0 +1,37 @@
+The encryption is only mandatory to be enforced when both sides are using
+Secure Simple Pairing and this means the key size check makes only sense
+in that case.
+
+On legacy Bluetooth 2.0 and earlier devices like mice the encryption was
+optional and thus causing an issue if the key size check is not bound to
+using Secure Simple Pairing.
+
+Fixes: d5bb334a8e17 ("Bluetooth: Align minimum encryption key size for LE and BR/EDR connections")
+Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
+Cc: stable@vger.kernel.org
+---
+ net/bluetooth/hci_conn.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 3cf0764d5793..7516cdde3373 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1272,8 +1272,13 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ 			return 0;
+ 	}
+ 
+-	if (hci_conn_ssp_enabled(conn) &&
+-	    !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
++	/* If Secure Simple Pairing is not enabled, then legacy connection
++	 * setup is used and no encryption or key sizes can be enforced.
++	 */
++	if (!hci_conn_ssp_enabled(conn))
++		return 1;
++
++	if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ 		return 0;
+ 
+ 	/* The minimum encryption key size needs to be enforced by the
+-- 
+2.20.1

diff --git a/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
new file mode 100644
index 0000000..4335685
--- /dev/null
+++ b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
@@ -0,0 +1,30 @@
+From dc328d75a6f37f4ff11a81ae16b1ec88c3197640 Mon Sep 17 00:00:00 2001
+From: Mike Pagano <mpagano@gentoo.org>
+Date: Mon, 23 Mar 2020 08:20:06 -0400
+Subject: [PATCH 1/1] This driver requires REGMAP_I2C to build.  Select it by
+ default in Kconfig. Reported at gentoo bugzilla:
+ https://bugs.gentoo.org/710790
+Cc: mpagano@gentoo.org
+
+Reported-by: Phil Stracchino <phils@caerllewys.net>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/hwmon/Kconfig | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 47ac20aee06f..530b4f29ba85 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -1769,6 +1769,7 @@ config SENSORS_TMP421
+ config SENSORS_TMP513
+ 	tristate "Texas Instruments TMP513 and compatibles"
+ 	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  If you say yes here you get support for Texas Instruments TMP512,
+ 	  and TMP513 temperature and power supply sensor chips.
+-- 
+2.24.1
+
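
Because select, unlike depends on, forces the chosen symbol on, enabling the
driver now pulls in the regmap helper automatically; a hypothetical .config
excerpt after re-running oldconfig:

  grep -E 'SENSORS_TMP513|REGMAP_I2C' .config
  CONFIG_SENSORS_TMP513=m
  CONFIG_REGMAP_I2C=m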

diff --git a/2920_sign-file-patch-for-libressl.patch b/2920_sign-file-patch-for-libressl.patch
new file mode 100644
index 0000000..e6ec017
--- /dev/null
+++ b/2920_sign-file-patch-for-libressl.patch
@@ -0,0 +1,16 @@
+--- a/scripts/sign-file.c	2020-05-20 18:47:21.282820662 -0400
++++ b/scripts/sign-file.c	2020-05-20 18:48:37.991081899 -0400
+@@ -41,9 +41,10 @@
+  * signing with anything other than SHA1 - so we're stuck with that if such is
+  * the case.
+  */
+-#if defined(LIBRESSL_VERSION_NUMBER) || \
+-	OPENSSL_VERSION_NUMBER < 0x10000000L || \
+-	defined(OPENSSL_NO_CMS)
++#if defined(OPENSSL_NO_CMS) || \
++	( defined(LIBRESSL_VERSION_NUMBER) \
++	&& (LIBRESSL_VERSION_NUMBER < 0x3010000fL) ) || \
++	OPENSSL_VERSION_NUMBER < 0x10000000L
+ #define USE_PKCS7
+ #endif
+ #ifndef USE_PKCS7
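
The hex constants follow the classic OpenSSL MNNFFPPS version layout, which
LIBRESSL_VERSION_NUMBER mirrors, so 0x3010000fL reads as LibreSSL 3.1.0
(status nibble f = release); a quick shell-arithmetic check of that reading:

  printf '%d.%d.%d\n' $((0x3010000f >> 28)) $(( (0x3010000f >> 20) & 0xff )) $(( (0x3010000f >> 12) & 0xff ))
  3.1.0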

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
new file mode 100644
index 0000000..1868f23
--- /dev/null
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -0,0 +1,684 @@
+From 59db769ad69e080c512b3890e1d27d6120f4a1a4 Mon Sep 17 00:00:00 2001
+From: graysky <graysky@archlinux.us>
+Date: Mon, 12 Apr 2021 07:09:27 -0400
+Subject: [PATCH] more uarches for kernel 5.8+
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+WARNING
+This patch works with all gcc versions 9.0+ and with kernel version 5.8+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features  --->
+  Processor family --->
+
+With the release of gcc 11.0, several generic 64-bit levels are offered which
+are good for supported Intel or AMD CPUs:
+• x86-64-v2
+• x86-64-v3
+• x86-64-v4
+
+Users of glibc 2.33 and above can see which level is supported by current
+hardware by running:
+  /lib/ld-linux-x86-64.so.2 --help | grep supported
+
+Alternatively, compare the flags from /proc/cpuinfo to this list.[2]
+
+CPU-specific microarchitectures include:
+• AMD Improved K8-family
+• AMD K10-family
+• AMD Family 10h (Barcelona)
+• AMD Family 14h (Bobcat)
+• AMD Family 16h (Jaguar)
+• AMD Family 15h (Bulldozer)
+• AMD Family 15h (Piledriver)
+• AMD Family 15h (Steamroller)
+• AMD Family 15h (Excavator)
+• AMD Family 17h (Zen)
+• AMD Family 17h (Zen 2)
+• AMD Family 19h (Zen 3)†
+• Intel Silvermont low-power processors
+• Intel Goldmont low-power processors (Apollo Lake and Denverton)
+• Intel Goldmont Plus low-power processors (Gemini Lake)
+• Intel 1st Gen Core i3/i5/i7 (Nehalem)
+• Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+• Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+• Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+• Intel 4th Gen Core i3/i5/i7 (Haswell)
+• Intel 5th Gen Core i3/i5/i7 (Broadwell)
+• Intel 6th Gen Core i3/i5/i7 (Skylake)
+• Intel 6th Gen Core i7/i9 (Skylake X)
+• Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+• Intel 10th Gen Core i7/i9 (Ice Lake)
+• Intel Xeon (Cascade Lake)
+• Intel Xeon (Cooper Lake)*
+• Intel 3rd Gen 10nm++ i3/i5/i7/i9-family (Tiger Lake)*
+• Intel 3rd Gen 10nm++ Xeon (Sapphire Rapids)‡
+• Intel 11th Gen i3/i5/i7/i9-family (Rocket Lake)‡
+• Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)‡
+
+Notes: If not otherwise noted, gcc >=9.1 is required for support.
+       *Requires gcc >=10.1  †Required gcc >=10.3  ‡Required gcc >=11.0
+
+It also offers to compile passing the 'native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[3]
+
+Users of Intel CPUs should select the 'Intel-Native' option and users of AMD
+CPUs should select the 'AMD-Native' option.
+
+MINOR NOTES RELATING TO INTEL ATOM PROCESSORS
+This patch also changes -march=atom to -march=bonnell in accordance with the
+gcc v4.9 changes. Upstream is using the deprecated -march=atom flags when I
+believe it should use the newer -march=bonnell flag for atom processors.[4]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[5] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make endpoint comparing
+a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=5.8
+gcc version >=9.0
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[6]
+
+REFERENCES
+1.  https://gcc.gnu.org/gcc-4.9/changes.html
+2.  https://gitlab.com/x86-psABIs/x86-64-ABI/-/commit/77566eb03bc6a326811cb7e9
+3.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
+4.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
+5.  https://github.com/graysky2/kernel_gcc_patch/issues/15
+6.  http://www.linuxforge.net/docs/linux/linux-gcc.php
+---
+ arch/x86/Kconfig.cpu            | 332 ++++++++++++++++++++++++++++++--
+ arch/x86/Makefile               |  47 ++++-
+ arch/x86/include/asm/vermagic.h |  66 +++++++
+ 3 files changed, 428 insertions(+), 17 deletions(-)
+
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index 814fe0d349b0..872b9cf598e3 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -157,7 +157,7 @@ config MPENTIUM4
+ 
+ 
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	help
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -165,7 +165,7 @@ config MK6
+ 	  flags to GCC.
+ 
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	help
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -173,12 +173,98 @@ config MK7
+ 	  flags to GCC.
+ 
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	help
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
+ 
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	help
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	help
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++	  Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	help
++	  Select this for AMD Family 10h Barcelona processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	help
++	  Select this for AMD Family 14h Bobcat processors.
++
++	  Enables -march=btver1
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	help
++	  Select this for AMD Family 16h Jaguar processors.
++
++	  Enables -march=btver2
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	help
++	  Select this for AMD Family 15h Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	help
++	  Select this for AMD Family 15h Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MSTEAMROLLER
++	bool "AMD Steamroller"
++	help
++	  Select this for AMD Family 15h Steamroller processors.
++
++	  Enables -march=bdver3
++
++config MEXCAVATOR
++	bool "AMD Excavator"
++	help
++	  Select this for AMD Family 15h Excavator processors.
++
++	  Enables -march=bdver4
++
++config MZEN
++	bool "AMD Zen"
++	help
++	  Select this for AMD Family 17h Zen processors.
++
++	  Enables -march=znver1
++
++config MZEN2
++	bool "AMD Zen 2"
++	help
++	  Select this for AMD Family 17h Zen 2 processors.
++
++	  Enables -march=znver2
++
++config MZEN3
++	bool "AMD Zen 3"
++	depends on GCC_VERSION > 100300
++	help
++	  Select this for AMD Family 19h Zen 3 processors.
++
++	  Enables -march=znver3
++
+ config MCRUSOE
+ 	bool "Crusoe"
+ 	depends on X86_32
+@@ -270,7 +356,7 @@ config MPSC
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+ 
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
+ 	help
+ 
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -278,6 +364,8 @@ config MCORE2
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
+ 
++	  Enables -march=core2
++
+ config MATOM
+ 	bool "Intel Atom"
+ 	help
+@@ -287,6 +375,182 @@ config MATOM
+ 	  accordingly optimized code. Use a recent GCC with specific Atom
+ 	  support in order to fully benefit from selecting this option.
+ 
++config MNEHALEM
++	bool "Intel Nehalem"
++	select X86_P6_NOP
++	help
++
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Westmere formerly Nehalem-C family.
++
++	  Enables -march=westmere
++
++config MSILVERMONT
++	bool "Intel Silvermont"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
++config MGOLDMONT
++	bool "Intel Goldmont"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++	  Enables -march=goldmont
++
++config MGOLDMONTPLUS
++	bool "Intel Goldmont Plus"
++	select X86_P6_NOP
++	help
++
++	  Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++	  Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	select X86_P6_NOP
++	help
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	select X86_P6_NOP
++	help
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	select X86_P6_NOP
++	help
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	select X86_P6_NOP
++	help
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
++
++config MSKYLAKE
++	bool "Intel Skylake"
++	select X86_P6_NOP
++	help
++
++	  Select this for 6th Gen Core processors in the Skylake family.
++
++	  Enables -march=skylake
++
++config MSKYLAKEX
++	bool "Intel Skylake X"
++	select X86_P6_NOP
++	help
++
++	  Select this for 6th Gen Core processors in the Skylake X family.
++
++	  Enables -march=skylake-avx512
++
++config MCANNONLAKE
++	bool "Intel Cannon Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for 8th Gen Core processors
++
++	  Enables -march=cannonlake
++
++config MICELAKE
++	bool "Intel Ice Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for 10th Gen Core processors in the Ice Lake family.
++
++	  Enables -march=icelake-client
++
++config MCASCADELAKE
++	bool "Intel Cascade Lake"
++	select X86_P6_NOP
++	help
++
++	  Select this for Xeon processors in the Cascade Lake family.
++
++	  Enables -march=cascadelake
++
++config MCOOPERLAKE
++	bool "Intel Cooper Lake"
++	depends on GCC_VERSION > 100100
++	select X86_P6_NOP
++	help
++
++	  Select this for Xeon processors in the Cooper Lake family.
++
++	  Enables -march=cooperlake
++
++config MTIGERLAKE
++	bool "Intel Tiger Lake"
++	depends on GCC_VERSION > 100100
++	select X86_P6_NOP
++	help
++
++	  Select this for third-generation 10 nm process processors in the Tiger Lake family.
++
++	  Enables -march=tigerlake
++
++config MSAPPHIRERAPIDS
++	bool "Intel Sapphire Rapids"
++	depends on GCC_VERSION > 110000
++	select X86_P6_NOP
++	help
++
++	  Select this for third-generation 10 nm process processors in the Sapphire Rapids family.
++
++	  Enables -march=sapphirerapids
++
++config MROCKETLAKE
++	bool "Intel Rocket Lake"
++	depends on GCC_VERSION > 110000
++	select X86_P6_NOP
++	help
++
++	  Select this for eleventh-generation processors in the Rocket Lake family.
++
++	  Enables -march=rocketlake
++
++config MALDERLAKE
++	bool "Intel Alder Lake"
++	depends on GCC_VERSION > 110000
++	select X86_P6_NOP
++	help
++
++	  Select this for twelfth-generation processors in the Alder Lake family.
++
++	  Enables -march=alderlake
++
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+ 	depends on X86_64
+@@ -294,6 +558,50 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+ 
++config GENERIC_CPU2
++	bool "Generic-x86-64-v2"
++	depends on GCC_VERSION > 110000
++	depends on X86_64
++	help
++	  Generic x86-64 CPU.
++	  Run equally well on all x86-64 CPUs with min support of x86-64-v2.
++
++config GENERIC_CPU3
++	bool "Generic-x86-64-v3"
++	depends on GCC_VERSION > 110000
++	depends on X86_64
++	help
++	  Generic x86-64-v3 CPU with v3 instructions.
++	  Run equally well on all x86-64 CPUs with min support of x86-64-v3.
++
++config GENERIC_CPU4
++	bool "Generic-x86-64-v4"
++	depends on GCC_VERSION > 110000
++	depends on X86_64
++	help
++	  Generic x86-64 CPU with v4 instructions.
++	  Run equally well on all x86-64 CPUs with min support of x86-64-v4.
++
++config MNATIVE_INTEL
++	bool "Intel-Native optimizations autodetected by GCC"
++	help
++
++	  GCC 4.2 and above support -march=native, which automatically detects
++	  the optimum settings to use based on your processor. Do NOT use this
++	  for AMD CPUs.  Intel Only!
++
++	  Enables -march=native
++
++config MNATIVE_AMD
++	bool "AMD-Native optimizations autodetected by GCC"
++	help
++
++	  GCC 4.2 and above support -march=native, which automatically detects
++	  the optimum settings to use based on your processor. Do NOT use this
++	  for Intel CPUs.  AMD Only!
++
++	  Enables -march=native
++
+ endchoice
+ 
+ config X86_GENERIC
+@@ -318,7 +626,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 || GENERIC_CPU3 || GENERIC_CPU4
+ 	default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -336,11 +644,11 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
+ 
+ config X86_USE_3DNOW
+ 	def_bool y
+@@ -360,26 +668,26 @@ config X86_USE_3DNOW
+ config X86_P6_NOP
+ 	def_bool y
+ 	depends on X86_64
+-	depends on (MCORE2 || MPENTIUM4 || MPSC)
++	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL)
+ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
+ 
+ config X86_CMPXCHG64
+ 	def_bool y
+-	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
++	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
+ 
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
+ 
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+ 	default "64" if X86_64
+-	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8)
++	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
+ 	default "5" if X86_32 && X86_CMPXCHG64
+ 	default "4"
+ 
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 9a85eae37b17..facf9a278fe3 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -113,11 +113,48 @@ else
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+-
+-        cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
++        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
++        cflags-$(CONFIG_MZEN3) += $(call cc-option,-march=znver3)
++
++        cflags-$(CONFIG_MNATIVE_INTEL) += $(call cc-option,-march=native)
++        cflags-$(CONFIG_MNATIVE_AMD) += $(call cc-option,-march=native)
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell)
++        cflags-$(CONFIG_MCORE2) += $(call cc-option,-march=core2)
++        cflags-$(CONFIG_MNEHALEM) += $(call cc-option,-march=nehalem)
++        cflags-$(CONFIG_MWESTMERE) += $(call cc-option,-march=westmere)
++        cflags-$(CONFIG_MSILVERMONT) += $(call cc-option,-march=silvermont)
++        cflags-$(CONFIG_MGOLDMONT) += $(call cc-option,-march=goldmont)
++        cflags-$(CONFIG_MGOLDMONTPLUS) += $(call cc-option,-march=goldmont-plus)
++        cflags-$(CONFIG_MSANDYBRIDGE) += $(call cc-option,-march=sandybridge)
++        cflags-$(CONFIG_MIVYBRIDGE) += $(call cc-option,-march=ivybridge)
++        cflags-$(CONFIG_MHASWELL) += $(call cc-option,-march=haswell)
++        cflags-$(CONFIG_MBROADWELL) += $(call cc-option,-march=broadwell)
++        cflags-$(CONFIG_MSKYLAKE) += $(call cc-option,-march=skylake)
++        cflags-$(CONFIG_MSKYLAKEX) += $(call cc-option,-march=skylake-avx512)
++        cflags-$(CONFIG_MCANNONLAKE) += $(call cc-option,-march=cannonlake)
++        cflags-$(CONFIG_MICELAKE) += $(call cc-option,-march=icelake-client)
++        cflags-$(CONFIG_MCASCADELAKE) += $(call cc-option,-march=cascadelake)
++        cflags-$(CONFIG_MCOOPERLAKE) += $(call cc-option,-march=cooperlake)
++        cflags-$(CONFIG_MTIGERLAKE) += $(call cc-option,-march=tigerlake)
++        cflags-$(CONFIG_MSAPPHIRERAPIDS) += $(call cc-option,-march=sapphirerapids)
++        cflags-$(CONFIG_MROCKETLAKE) += $(call cc-option,-march=rocketlake)
++        cflags-$(CONFIG_MALDERLAKE) += $(call cc-option,-march=alderlake)
++        cflags-$(CONFIG_GENERIC_CPU2) += $(call cc-option,-march=x86-64-v2)
++        cflags-$(CONFIG_GENERIC_CPU3) += $(call cc-option,-march=x86-64-v3)
++        cflags-$(CONFIG_GENERIC_CPU4) += $(call cc-option,-march=x86-64-v4)
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         KBUILD_CFLAGS += $(cflags-y)
+ 
+diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
+index 75884d2cdec3..4e6a08d4c7e5 100644
+--- a/arch/x86/include/asm/vermagic.h
++++ b/arch/x86/include/asm/vermagic.h
+@@ -17,6 +17,48 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE_INTEL
++#define MODULE_PROC_FAMILY "NATIVE_INTEL "
++#elif defined CONFIG_MNATIVE_AMD
++#define MODULE_PROC_FAMILY "NATIVE_AMD "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
++#elif defined CONFIG_MCOOPERLAKE
++#define MODULE_PROC_FAMILY "COOPERLAKE "
++#elif defined CONFIG_MTIGERLAKE
++#define MODULE_PROC_FAMILY "TIGERLAKE "
++#elif defined CONFIG_MSAPPHIRERAPIDS
++#define MODULE_PROC_FAMILY "SAPPHIRERAPIDS "
++#elif defined CONFIG_ROCKETLAKE
++#define MODULE_PROC_FAMILY "ROCKETLAKE "
++#elif defined CONFIG_MALDERLAKE
++#define MODULE_PROC_FAMILY "ALDERLAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -35,6 +77,30 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
++#elif defined CONFIG_MZEN3
++#define MODULE_PROC_FAMILY "ZEN3 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+-- 
+2.31.1
+
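
All of the Makefile hunks go through cc-option, so an unsupported -march
value falls away harmlessly; the same probe can be run by hand before
choosing one of the new options (hypothetical session):

  gcc -march=znver3 -E -x c /dev/null -o /dev/null && echo supported
  supported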



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-06-13 20:14 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-06-13 20:14 UTC
  To: gentoo-commits

commit:     7fc6a7b27e47caef4143b2b9f84b80a2634f5fe5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jun 13 20:14:22 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jun 13 20:14:22 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7fc6a7b2

Update to Kernel Self Protection patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 177 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 171 insertions(+), 6 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index e754a3e..337ba12 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,14 +1,14 @@
---- a/Kconfig	2020-04-15 11:05:30.202413863 -0400
-+++ b/Kconfig	2020-04-15 10:37:45.683952949 -0400
-@@ -32,3 +32,5 @@ source "lib/Kconfig"
+--- a/Kconfig	2021-06-04 19:03:33.646823432 -0400
++++ b/Kconfig	2021-06-04 19:03:40.508892817 -0400
+@@ -30,3 +30,5 @@ source "lib/Kconfig"
  source "lib/Kconfig.debug"
  
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2020-09-24 03:06:47.590000000 -0400
-+++ b/distro/Kconfig	2020-09-24 11:31:29.403150624 -0400
-@@ -0,0 +1,158 @@
+--- /dev/null	2021-06-08 16:56:49.698138501 -0400
++++ b/distro/Kconfig	2021-06-08 17:11:33.377999003 -0400
+@@ -0,0 +1,263 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -166,4 +166,169 @@
 +
 +endmenu
 +
++menu "Enable Kernel Self Protection Project Recommendations"
++	visible if GENTOO_LINUX
++
++config GENTOO_KERNEL_SELF_PROTECTION
++	bool "Architecture Independent Kernel Self Protection Project Recommendations"
++
++	help
++  	Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
++	See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
++	Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
++	to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for 
++	dependency information on your specific architecture.
++	Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
++	for X86_64
++
++	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL
++
++	select BUG
++	select STRICT_KERNEL_RWX
++	select DEBUG_WX
++	select STACKPROTECTOR
++	select STACKPROTECTOR_STRONG
++	select STRICT_DEVMEM
++	select IO_STRICT_DEVMEM
++	select SYN_COOKIES
++	select DEBUG_CREDENTIALS
++	select DEBUG_NOTIFIERS
++	select DEBUG_LIST
++	select DEBUG_SG
++	select BUG_ON_DATA_CORRUPTION
++	select SCHED_STACK_END_CHECK
++	select SECCOMP
++	select SECCOMP_FILTER
++	select SECURITY_YAMA
++	select SLAB_FREELIST_RANDOM
++	select SLAB_FREELIST_HARDENED
++	select SHUFFLE_PAGE_ALLOCATOR
++	select SLUB_DEBUG
++	select PAGE_POISONING
++	select PAGE_POISONING_NO_SANITY
++	select PAGE_POISONING_ZERO
++	select INIT_ON_ALLOC_DEFAULT_ON
++	select INIT_ON_FREE_DEFAULT_ON
++	select VMAP_STACK
++	select REFCOUNT_FULL
++	select FORTIFY_SOURCE
++	select SECURITY_DMESG_RESTRICT
++	select PANIC_ON_OOPS
++	select GCC_PLUGINS
++	select GCC_PLUGIN_LATENT_ENTROPY
++	select GCC_PLUGIN_STRUCTLEAK
++	select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL
++	select GCC_PLUGIN_STACKLEAK
++	select GCC_PLUGIN_RANDSTRUCT
++	select GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
++
++menu "Architecture Specific Self Protection Project Recommendations"
++
++config GENTOO_KERNEL_SELF_PROTECTION_X86_64
++	bool "X86_64 KSPP Settings"
++
++	depends on !X86_MSR && X86_64
++	default n
++	
++	select RANDOMIZE_BASE
++	select RANDOMIZE_MEMORY
++	select LEGACY_VSYSCALL_NONE
++	select PAGE_TABLE_ISOLATION
++
++
++config GENTOO_KERNEL_SELF_PROTECTION_ARM64
++	bool "ARM64 KSPP Settings"
++
++	depends on ARM64
++	default n
++
++	select RANDOMIZE_BASE
++	select ARM64_SW_TTBR0_PAN
++	select UNMAP_KERNEL_AT_EL0
++
++config GENTOO_KERNEL_SELF_PROTECTION_X86_32
++	bool "X86_32 KSPP Settings"
++
++	depends on !X86_MSR && !MODIFY_LDT_SYSCALL && !M486 && X86_32
++	default n
++
++	select HIGHMEM64G
++	select X86_PAE
++	select RANDOMIZE_BASE
++	select PAGE_TABLE_ISOLATION
++
++config GENTOO_KERNEL_SELF_PROTECTION_ARM
++	bool "ARM KSPP Settings"
++
++	depends on !OABI_COMPAT && ARM
++	default n
++
++	select VMSPLIT_3G
++	select STRICT_MEMORY_RWX
++	select CPU_SW_DOMAIN_PAN
++
++endmenu
++
++endmenu
++
 +endmenu
+diff --git a/security/Kconfig b/security/Kconfig
+index 7561f6f99..01f0bf73f 100644
+--- a/security/Kconfig
++++ b/security/Kconfig
+@@ -166,6 +166,7 @@ config HARDENED_USERCOPY
+ config HARDENED_USERCOPY_FALLBACK
+ 	bool "Allow usercopy whitelist violations to fallback to object size"
+ 	depends on HARDENED_USERCOPY
++	depends on !GENTOO_KERNEL_SELF_PROTECTION
+ 	default y
+ 	help
+ 	  This is a temporary option that allows missing usercopy whitelists
+@@ -181,6 +182,7 @@ config HARDENED_USERCOPY_PAGESPAN
+ 	bool "Refuse to copy allocations that span multiple pages"
+ 	depends on HARDENED_USERCOPY
+ 	depends on EXPERT
++	depends on !GENTOO_KERNEL_SELF_PROTECTION
+ 	help
+ 	  When a multi-page allocation is done without __GFP_COMP,
+ 	  hardened usercopy will reject attempts to copy it. There are,
+diff --git a/security/selinux/Kconfig b/security/selinux/Kconfig
+index 9e921fc72..f29bc13fa 100644
+--- a/security/selinux/Kconfig
++++ b/security/selinux/Kconfig
+@@ -26,6 +26,7 @@ config SECURITY_SELINUX_BOOTPARAM
+ config SECURITY_SELINUX_DISABLE
+ 	bool "NSA SELinux runtime disable"
+ 	depends on SECURITY_SELINUX
++	depends on !GENTOO_KERNEL_SELF_PROTECTION
+ 	select SECURITY_WRITABLE_HOOKS
+ 	default n
+ 	help
+-- 
+2.31.1
+
+From bd3ff0b16792c18c0614c2b95e148943209f460a Mon Sep 17 00:00:00 2001
+From: Georgy Yakovlev <gyakovlev@gentoo.org>
+Date: Tue, 8 Jun 2021 13:59:57 -0700
+Subject: [PATCH 2/2] set DEFAULT_MMAP_MIN_ADDR by default
+
+---
+ mm/Kconfig | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/mm/Kconfig b/mm/Kconfig
+index 24c045b24..e13fc740c 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -321,6 +321,8 @@ config KSM
+ config DEFAULT_MMAP_MIN_ADDR
+ 	int "Low address space to protect from user allocation"
+ 	depends on MMU
++	default 65536 if ( X86_64 || X86_32 || PPC64 || IA64 ) && GENTOO_KERNEL_SELF_PROTECTION
++	default 32768 if ( ARM64 || ARM ) && GENTOO_KERNEL_SELF_PROTECTION
+ 	default 4096
+ 	help
+ 	  This is the portion of low virtual memory which should be protected
+-- 
+2.31.1
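
The new default is visible at runtime; a hypothetical session on an x86_64
kernel built with GENTOO_KERNEL_SELF_PROTECTION enabled:

  sysctl vm.mmap_min_addr
  vm.mmap_min_addr = 65536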
+```



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-01 14:29 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-07-01 14:29 UTC
  To: gentoo-commits

commit:     bb94f673a30433c53477c8bedb4d23006859169a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jul  1 14:29:16 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jul  1 14:29:16 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bb94f673

Update CPU Opt Patch 06062021

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5010_enable-cpu-optimizations-universal.patch | 74 +++++++++++++--------------
 1 file changed, 36 insertions(+), 38 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index 1868f23..c45d13b 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,23 +1,18 @@
-From 59db769ad69e080c512b3890e1d27d6120f4a1a4 Mon Sep 17 00:00:00 2001
+From 4af44fbc97bc51eb742f0d6555bde23cf580d4e3 Mon Sep 17 00:00:00 2001
 From: graysky <graysky@archlinux.us>
-Date: Mon, 12 Apr 2021 07:09:27 -0400
+Date: Sun, 6 Jun 2021 09:41:36 -0400
 Subject: [PATCH] more uarches for kernel 5.8+
 MIME-Version: 1.0
 Content-Type: text/plain; charset=UTF-8
 Content-Transfer-Encoding: 8bit
 
-WARNING
-This patch works with all gcc versions 9.0+ and with kernel version 5.8+ and should
-NOT be applied when compiling on older versions of gcc due to key name changes
-of the march flags introduced with the version 4.9 release of gcc.[1]
-
 FEATURES
 This patch adds additional CPU options to the Linux kernel accessible under:
  Processor type and features  --->
   Processor family --->
 
-With the release of gcc 11.0, several generic 64-bit levels are offered which
-are good for supported Intel or AMD CPUs:
+With the release of gcc 11.1 and clang 12.0, several generic 64-bit levels are
+offered which are good for supported Intel or AMD CPUs:
 • x86-64-v2
 • x86-64-v3
 • x86-64-v4
@@ -26,7 +21,7 @@ Users of glibc 2.33 and above can see which level is supported by current
 hardware by running:
   /lib/ld-linux-x86-64.so.2 --help | grep supported
 
-Alternatively, compare the flags from /proc/cpuinfo to this list.[2]
+Alternatively, compare the flags from /proc/cpuinfo to this list.[1]
 
 CPU-specific microarchitectures include:
 • AMD Improved K8-family
@@ -62,13 +57,15 @@ CPU-specific microarchitectures include:
 • Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)‡
 
 Notes: If not otherwise noted, gcc >=9.1 is required for support.
-       *Requires gcc >=10.1  †Required gcc >=10.3  ‡Required gcc >=11.0
+       *Requires gcc >=10.1 or clang >=10.0
+       †Required gcc >=10.3 or clang >=12.0
+       ‡Required gcc >=11.1 or clang >=12.0
 
 It also offers to compile passing the 'native' option which, "selects the CPU
 to generate code for at compilation time by determining the processor type of
 the compiling machine. Using -march=native enables all instruction subsets
 supported by the local machine and will produce code optimized for the local
-machine under the constraints of the selected instruction set."[3]
+machine under the constraints of the selected instruction set."[2]
 
 Users of Intel CPUs should select the 'Intel-Native' option and users of AMD
 CPUs should select the 'AMD-Native' option.
@@ -76,9 +73,9 @@ CPUs should select the 'AMD-Native' option.
 MINOR NOTES RELATING TO INTEL ATOM PROCESSORS
 This patch also changes -march=atom to -march=bonnell in accordance with the
 gcc v4.9 changes. Upstream is using the deprecated -march=atom flags when I
-believe it should use the newer -march=bonnell flag for atom processors.[4]
+believe it should use the newer -march=bonnell flag for atom processors.[3]
 
-It is not recommended to compile on Atom-CPUs with the 'native' option.[5] The
+It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
 recommendation is to use the 'atom' option instead.
 
 BENEFITS
@@ -90,18 +87,19 @@ https://github.com/graysky2/kernel_gcc_patch
 
 REQUIREMENTS
 linux version >=5.8
-gcc version >=9.0
+gcc version >=9.0 or clang version >=9.0
 
 ACKNOWLEDGMENTS
-This patch builds on the seminal work by Jeroen.[6]
+This patch builds on the seminal work by Jeroen.[5]
 
 REFERENCES
-1.  https://gcc.gnu.org/gcc-4.9/changes.html
-2.  https://gitlab.com/x86-psABIs/x86-64-ABI/-/commit/77566eb03bc6a326811cb7e9
-3.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
-4.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
-5.  https://github.com/graysky2/kernel_gcc_patch/issues/15
-6.  http://www.linuxforge.net/docs/linux/linux-gcc.php
+1.  https://gitlab.com/x86-psABIs/x86-64-ABI/-/commit/77566eb03bc6a326811cb7e9
+2.  https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
+3.  https://bugzilla.kernel.org/show_bug.cgi?id=77461
+4.  https://github.com/graysky2/kernel_gcc_patch/issues/15
+5.  http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+Signed-off-by: graysky <graysky@archlinux.us>
 ---
  arch/x86/Kconfig.cpu            | 332 ++++++++++++++++++++++++++++++--
  arch/x86/Makefile               |  47 ++++-
@@ -109,7 +107,7 @@ REFERENCES
  3 files changed, 428 insertions(+), 17 deletions(-)
 
 diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 814fe0d349b0..872b9cf598e3 100644
+index 814fe0d349b0..8acf6519d279 100644
 --- a/arch/x86/Kconfig.cpu
 +++ b/arch/x86/Kconfig.cpu
 @@ -157,7 +157,7 @@ config MPENTIUM4
@@ -221,7 +219,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config MZEN3
 +	bool "AMD Zen 3"
-+	depends on GCC_VERSION > 100300
++	depends on ( CC_IS_GCC && GCC_VERSION >= 100300 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
 +	help
 +	  Select this for AMD Family 19h Zen 3 processors.
 +
@@ -380,7 +378,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config MCOOPERLAKE
 +	bool "Intel Cooper Lake"
-+	depends on GCC_VERSION > 100100
++	depends on ( CC_IS_GCC && GCC_VERSION > 100100 ) || ( CC_IS_CLANG && CLANG_VERSION >= 100000 )
 +	select X86_P6_NOP
 +	help
 +
@@ -390,7 +388,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config MTIGERLAKE
 +	bool "Intel Tiger Lake"
-+	depends on GCC_VERSION > 100100
++	depends on  ( CC_IS_GCC && GCC_VERSION > 100100 ) || ( CC_IS_CLANG && CLANG_VERSION >= 100000 )
 +	select X86_P6_NOP
 +	help
 +
@@ -400,7 +398,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config MSAPPHIRERAPIDS
 +	bool "Intel Sapphire Rapids"
-+	depends on GCC_VERSION > 110000
++	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
 +	select X86_P6_NOP
 +	help
 +
@@ -410,7 +408,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config MROCKETLAKE
 +	bool "Intel Rocket Lake"
-+	depends on GCC_VERSION > 110000
++	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
 +	select X86_P6_NOP
 +	help
 +
@@ -420,7 +418,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config MALDERLAKE
 +	bool "Intel Alder Lake"
-+	depends on GCC_VERSION > 110000
++	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
 +	select X86_P6_NOP
 +	help
 +
@@ -437,7 +435,7 @@ index 814fe0d349b0..872b9cf598e3 100644
  
 +config GENERIC_CPU2
 +	bool "Generic-x86-64-v2"
-+	depends on GCC_VERSION > 110000
++	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
 +	depends on X86_64
 +	help
 +	  Generic x86-64 CPU.
@@ -445,7 +443,7 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config GENERIC_CPU3
 +	bool "Generic-x86-64-v3"
-+	depends on GCC_VERSION > 110000
++	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
 +	depends on X86_64
 +	help
 +	  Generic x86-64-v3 CPU with v3 instructions.
@@ -453,27 +451,27 @@ index 814fe0d349b0..872b9cf598e3 100644
 +
 +config GENERIC_CPU4
 +	bool "Generic-x86-64-v4"
-+	depends on GCC_VERSION > 110000
++	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
 +	depends on X86_64
 +	help
 +	  Generic x86-64 CPU with v4 instructions.
 +	  Run equally well on all x86-64 CPUs with min support of x86-64-v4.
 +
 +config MNATIVE_INTEL
-+	bool "Intel-Native optimizations autodetected by GCC"
++	bool "Intel-Native optimizations autodetected by the compiler"
 +	help
 +
-+	  GCC 4.2 and above support -march=native, which automatically detects
++	  Clang 3.8, GCC 4.2 and above support -march=native, which automatically detects
 +	  the optimum settings to use based on your processor. Do NOT use this
 +	  for AMD CPUs.  Intel Only!
 +
 +	  Enables -march=native
 +
 +config MNATIVE_AMD
-+	bool "AMD-Native optimizations autodetected by GCC"
++	bool "AMD-Native optimizations autodetected by the compiler"
 +	help
 +
-+	  GCC 4.2 and above support -march=native, which automatically detects
++	  Clang 3.8, GCC 4.2 and above support -march=native, which automatically detects
 +	  the optimum settings to use based on your processor. Do NOT use this
 +	  for Intel CPUs.  AMD Only!
 +
@@ -538,10 +536,10 @@ index 814fe0d349b0..872b9cf598e3 100644
  	default "4"
  
 diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 9a85eae37b17..facf9a278fe3 100644
+index 78faf9c7e3ae..ee0cd507af8b 100644
 --- a/arch/x86/Makefile
 +++ b/arch/x86/Makefile
-@@ -113,11 +113,48 @@ else
+@@ -114,11 +114,48 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
          cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
          cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-04 15:43 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-07-04 15:43 UTC
  To: gentoo-commits

commit:     4ff35782b50a4bbf5cf725d39f56e25ca93a41c4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jul  4 15:16:10 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jul  4 15:43:30 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4ff35782

Fix DEVMEM Select and move help text

Thanks to Peter for reporting

Bug: https://bugs.gentoo.org/798315

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 337ba12..c063c6d 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,8 +6,8 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-06-08 16:56:49.698138501 -0400
-+++ b/distro/Kconfig	2021-06-08 17:11:33.377999003 -0400
+--- /dev/null	2021-07-04 10:53:51.006624416 -0400
++++ b/distro/Kconfig	2021-07-04 11:07:33.534248860 -0400
 @@ -0,0 +1,263 @@
 +menu "Gentoo Linux"
 +
@@ -172,15 +172,6 @@
 +config GENTOO_KERNEL_SELF_PROTECTION
 +	bool "Architecture Independent Kernel Self Protection Project Recommendations"
 +
-+	help
-+  	Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
-+	See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
-+	Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
-+	to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for 
-+	dependency information on your specific architecture.
-+	Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
-+	for X86_64
-+
 +	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL
 +
 +	select BUG
@@ -188,8 +179,8 @@
 +	select DEBUG_WX
 +	select STACKPROTECTOR
 +	select STACKPROTECTOR_STRONG
-+	select STRICT_DEVMEM
-+	select IO_STRICT_DEVMEM
++	select STRICT_DEVMEM if DEVMEM=y
++	select IO_STRICT_DEVMEM if DEVMEM=y
 +	select SYN_COOKIES
 +	select DEBUG_CREDENTIALS
 +	select DEBUG_NOTIFIERS
@@ -222,6 +213,15 @@
 +	select GCC_PLUGIN_RANDSTRUCT
 +	select GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
 +
++	help
++  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
++		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
++		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
++		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for 
++		dependency information on your specific architecture.
++		Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
++		for X86_64
++
 +menu "Architecture Specific Self Protection Project Recommendations"
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_X86_64


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-05 14:11 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-07-05 14:11 UTC (permalink / raw
  To: gentoo-commits

commit:     6ac83a2778d17a95e61f53637085d2c776a2f728
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jul  5 14:10:17 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jul  5 14:10:17 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6ac83a27

mm/page_alloc: correct return value of populated elements if bulk array is populated

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 ++
 1800_mm-page-alloc-fix-ret-val-on-alloc-fail.patch | 50 ++++++++++++++++++++++
 2 files changed, 54 insertions(+)
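
The failure mode is easiest to see from the caller's side: __alloc_pages_bulk()
reports how many entries of the array hold pages, and callers treat 0 as
"nothing allocated". Before this fix, the early-exit path for an already-full
array returned 0 instead of nr_populated, which callers (the NFSD receive path
in Dave's report) read as an allocation failure. A simplified userspace model
of that contract, with hypothetical names standing in for the kernel API:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for __alloc_pages_bulk(): fill NULL slots in the
 * array and return how many slots are populated afterwards. */
static unsigned long bulk_alloc(void **array, unsigned long nr)
{
	unsigned long populated = 0;

	for (unsigned long i = 0; i < nr; i++) {
		if (!array[i])
			array[i] = malloc(64);	/* stands in for a page */
		if (array[i])
			populated++;
	}
	/* The kernel bug: an equivalent early-exit path returned 0 here
	 * when every slot was already populated, which callers read as
	 * "0 pages allocated" = failure. */
	return populated;
}

int main(void)
{
	void *pages[4] = { NULL };

	/* First call fills the array and reports 4. */
	printf("first:  %lu\n", bulk_alloc(pages, 4));
	/* Second call has nothing to do but must still report 4, not 0,
	 * or the caller wrongly concludes the allocation failed. */
	printf("second: %lu\n", bulk_alloc(pages, 4));
	return 0;
}

The one-line hunk below restores the invariant that the return value is always
the number of populated entries.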

diff --git a/0000_README b/0000_README
index 2bffadc..0f74eb1 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  1800_mm-page-alloc-fix-ret-val-on-alloc-fail.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=ff4b2b4014cbffb3d32b22629252f4dc8616b0fe
+Desc:   mm/page_alloc: correct return value of populated elements if bulk array is populated
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requires REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/1800_mm-page-alloc-fix-ret-val-on-alloc-fail.patch b/1800_mm-page-alloc-fix-ret-val-on-alloc-fail.patch
new file mode 100644
index 0000000..2f1b4dc
--- /dev/null
+++ b/1800_mm-page-alloc-fix-ret-val-on-alloc-fail.patch
@@ -0,0 +1,50 @@
+From ff4b2b4014cbffb3d32b22629252f4dc8616b0fe Mon Sep 17 00:00:00 2001
+From: Mel Gorman <mgorman@techsingularity.net>
+Date: Mon, 28 Jun 2021 19:33:29 -0700
+Subject: mm/page_alloc: correct return value of populated elements if bulk
+ array is populated
+
+Dave Jones reported the following
+
+	This made it into 5.13 final, and completely breaks NFSD for me
+	(Serving tcp v3 mounts).  Existing mounts on clients hang, as do
+	new mounts from new clients.  Rebooting the server back to rc7
+	everything recovers.
+
+The commit b3b64ebd3822 ("mm/page_alloc: do bulk array bounds check after
+checking populated elements") returns the wrong value if the array is
+already populated which is interpreted as an allocation failure.  Dave
+reported this fixes his problem and it also passed a test running dbench
+over NFS.
+
+Link: https://lkml.kernel.org/r/20210628150219.GC3840@techsingularity.net
+Fixes: b3b64ebd3822 ("mm/page_alloc: do bulk array bounds check after checking populated elements")
+Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
+Reported-by: Dave Jones <davej@codemonkey.org.uk>
+Tested-by: Dave Jones <davej@codemonkey.org.uk>
+Cc: Dan Carpenter <dan.carpenter@oracle.com>
+Cc: Jesper Dangaard Brouer <brouer@redhat.com>
+Cc: Vlastimil Babka <vbabka@suse.cz>
+Cc: <stable@vger.kernel.org> [5.13+]
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+---
+ mm/page_alloc.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 5b5c9f5813b9a..2bf03c76504b0 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -5058,7 +5058,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
+ 
+ 	/* Already populated array? */
+ 	if (unlikely(page_array && nr_pages - nr_populated == 0))
+-		return 0;
++		return nr_populated;
+ 
+ 	/* Use the single page allocator for one page. */
+ 	if (nr_pages - nr_populated == 1)
+-- 
+cgit 1.2.3-1.el7
+


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-07 13:11 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-07-07 13:11 UTC (permalink / raw
  To: gentoo-commits

commit:     06e76a486944933220307d40297a443876ffb3b5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul  7 13:10:29 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul  7 13:10:29 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=06e76a48

Linux patch 5.13.1

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |  4 ++++
 1001_linux-5.13.1.patch | 50 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)
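
Besides the version bump, the 5.13.1 hunks below carry two fixes: the same
__alloc_pages_bulk() return-value correction shipped earlier as
1800_mm-page-alloc-fix-ret-val-on-alloc-fail.patch, and a KVM MMU fix that adds
CR4.LA57 to kvm_mmu_extended_role. That union is packed so cached MMU roles can
be compared as plain integers; a CR4 bit missing from it means two guest
configurations differing only in that bit compare as equal, so stale cached
state can be reused. A compact model of the pattern (illustrative, not KVM's
actual code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Modeled on kvm_mmu_extended_role: bits packed into one word so roles
 * can be compared with a single integer comparison. */
union mmu_role {
	uint32_t word;
	struct {
		uint32_t cr4_smep:1;
		uint32_t cr4_smap:1;
		uint32_t cr4_la57:1;	/* the bit 5.13.1 adds */
	};
};

static bool role_matches(union mmu_role a, union mmu_role b)
{
	return a.word == b.word;	/* one compare covers every tracked bit */
}

int main(void)
{
	union mmu_role cached = { .word = 0 };
	union mmu_role now = { .word = 0 };

	now.cr4_la57 = 1;	/* guest toggled CR4.LA57 */

	/* Without cr4_la57 in the union, both .word values would still be
	 * equal here and the stale cached role would be reused. */
	printf("reuse cached role? %s\n",
	       role_matches(cached, now) ? "yes" : "no");
	return 0;
}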

diff --git a/0000_README b/0000_README
index 0f74eb1..51feb2b 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-5.13.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-5.13.1.patch b/1001_linux-5.13.1.patch
new file mode 100644
index 0000000..21b2223
--- /dev/null
+++ b/1001_linux-5.13.1.patch
@@ -0,0 +1,50 @@
+diff --git a/Makefile b/Makefile
+index 0565caea0362a..069607cfe2836 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 9c7ced0e31718..682e82956ea5a 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -320,6 +320,7 @@ union kvm_mmu_extended_role {
+ 		unsigned int cr4_pke:1;
+ 		unsigned int cr4_smap:1;
+ 		unsigned int cr4_smep:1;
++		unsigned int cr4_la57:1;
+ 		unsigned int maxphyaddr:6;
+ 	};
+ };
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 8d5876dfc6b71..a54f72c31be90 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -4476,6 +4476,7 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu)
+ 	ext.cr4_smap = !!kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
+ 	ext.cr4_pse = !!is_pse(vcpu);
+ 	ext.cr4_pke = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE);
++	ext.cr4_la57 = !!kvm_read_cr4_bits(vcpu, X86_CR4_LA57);
+ 	ext.maxphyaddr = cpuid_maxphyaddr(vcpu);
+ 
+ 	ext.valid = 1;
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index ef2265f86b913..04220581579cd 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -5058,7 +5058,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
+ 
+ 	/* Already populated array? */
+ 	if (unlikely(page_array && nr_pages - nr_populated == 0))
+-		return 0;
++		return nr_populated;
+ 
+ 	/* Use the single page allocator for one page. */
+ 	if (nr_pages - nr_populated == 1)


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-07 13:26 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-07-07 13:26 UTC (permalink / raw
  To: gentoo-commits

commit:     d0396ce4d5ae709ae93c8ea0962b2b36ba4230ca
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul  7 13:26:20 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul  7 13:26:20 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d0396ce4

Remove redundant patch (mm page alloc fix)

Removed: 1800_mm-page-alloc-fix-ret-val-on-alloc-fail.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 --
 1800_mm-page-alloc-fix-ret-val-on-alloc-fail.patch | 50 ----------------------
 2 files changed, 54 deletions(-)

diff --git a/0000_README b/0000_README
index 51feb2b..d3e2ab4 100644
--- a/0000_README
+++ b/0000_README
@@ -59,10 +59,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  1800_mm-page-alloc-fix-ret-val-on-alloc-fail.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=ff4b2b4014cbffb3d32b22629252f4dc8616b0fe
-Desc:   mm/page_alloc: correct return value of populated elements if bulk array is populated
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requires REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/1800_mm-page-alloc-fix-ret-val-on-alloc-fail.patch b/1800_mm-page-alloc-fix-ret-val-on-alloc-fail.patch
deleted file mode 100644
index 2f1b4dc..0000000
--- a/1800_mm-page-alloc-fix-ret-val-on-alloc-fail.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-From ff4b2b4014cbffb3d32b22629252f4dc8616b0fe Mon Sep 17 00:00:00 2001
-From: Mel Gorman <mgorman@techsingularity.net>
-Date: Mon, 28 Jun 2021 19:33:29 -0700
-Subject: mm/page_alloc: correct return value of populated elements if bulk
- array is populated
-
-Dave Jones reported the following
-
-	This made it into 5.13 final, and completely breaks NFSD for me
-	(Serving tcp v3 mounts).  Existing mounts on clients hang, as do
-	new mounts from new clients.  Rebooting the server back to rc7
-	everything recovers.
-
-The commit b3b64ebd3822 ("mm/page_alloc: do bulk array bounds check after
-checking populated elements") returns the wrong value if the array is
-already populated which is interpreted as an allocation failure.  Dave
-reported this fixes his problem and it also passed a test running dbench
-over NFS.
-
-Link: https://lkml.kernel.org/r/20210628150219.GC3840@techsingularity.net
-Fixes: b3b64ebd3822 ("mm/page_alloc: do bulk array bounds check after checking populated elements")
-Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
-Reported-by: Dave Jones <davej@codemonkey.org.uk>
-Tested-by: Dave Jones <davej@codemonkey.org.uk>
-Cc: Dan Carpenter <dan.carpenter@oracle.com>
-Cc: Jesper Dangaard Brouer <brouer@redhat.com>
-Cc: Vlastimil Babka <vbabka@suse.cz>
-Cc: <stable@vger.kernel.org> [5.13+]
-Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
----
- mm/page_alloc.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/mm/page_alloc.c b/mm/page_alloc.c
-index 5b5c9f5813b9a..2bf03c76504b0 100644
---- a/mm/page_alloc.c
-+++ b/mm/page_alloc.c
-@@ -5058,7 +5058,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
- 
- 	/* Already populated array? */
- 	if (unlikely(page_array && nr_pages - nr_populated == 0))
--		return 0;
-+		return nr_populated;
- 
- 	/* Use the single page allocator for one page. */
- 	if (nr_pages - nr_populated == 1)
--- 
-cgit 1.2.3-1.el7
-


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-12 11:36 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-07-12 11:36 UTC (permalink / raw
  To: gentoo-commits

commit:     5104561ec6c463e866a47f10f183292270afe519
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jul 12 11:35:40 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jul 12 11:35:40 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5104561e

Add BMQ v5.13-r1 and Gentoo defaults patch (BMQ off by default)

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |    8 +
 5020_BMQ-and-PDS-io-scheduler-v5.13-r1.patch | 9523 ++++++++++++++++++++++++++
 5021_BMQ-and-PDS-gentoo-defaults.patch       |   13 +
 3 files changed, 9544 insertions(+)

diff --git a/0000_README b/0000_README
index d3e2ab4..f92bd44 100644
--- a/0000_README
+++ b/0000_README
@@ -75,3 +75,11 @@ Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel >= 5.8 patch enables gcc = v9+ optimizations for additional CPUs.
 
+Patch:  5020_BMQ-and-PDS-io-scheduler-v5.13-r1.patch
+From:   https://gitlab.com/alfredchen/linux-prjc
+Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
+
+Patch:  5021_BMQ-and-PDS-gentoo-defaults.patch
+From:   https://gitweb.gentoo.org/proj/linux-patches.git/
+Desc:   Set defaults for BMQ. Add archs as people test, default to N
+
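
The "BitMap Queue" in the name is literal: further down in this patch,
sched_queue_init() pairs a bitmap with an array of list heads, one per priority
level, and sched_rq_first_task() picks the next task with find_first_bit() over
that bitmap. A small userspace sketch of the structure (hypothetical names, a
single task per level, no locking):

#include <stdio.h>
#include <strings.h>	/* ffs() */

#define NLEVELS 32	/* toy priority levels; BMQ uses SCHED_BITS */

struct toy_rq {
	unsigned int bitmap;	/* bit i set => queue i is non-empty */
	int head[NLEVELS];	/* toy "task" at the head of each queue */
};

static void toy_enqueue(struct toy_rq *rq, int prio, int task)
{
	rq->head[prio] = task;
	rq->bitmap |= 1u << prio;	/* mirrors set_bit() on queue.bitmap */
}

/* Mirrors sched_rq_first_task(): find the first set bit in the bitmap,
 * then take the head of that priority level's list. */
static int toy_pick_next(const struct toy_rq *rq)
{
	int prio = ffs(rq->bitmap) - 1;	/* lowest set bit = highest prio */
	return prio < 0 ? -1 : rq->head[prio];
}

int main(void)
{
	struct toy_rq rq = { 0 };

	toy_enqueue(&rq, 7, 101);	/* background task at level 7 */
	toy_enqueue(&rq, 2, 202);	/* interactive task at level 2 */
	printf("next task: %d\n", toy_pick_next(&rq));	/* prints 202 */
	return 0;
}

In the patch itself the same find-first-bit walk also drives the per-rq
watermarks via update_sched_rq_watermark().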

diff --git a/5020_BMQ-and-PDS-io-scheduler-v5.13-r1.patch b/5020_BMQ-and-PDS-io-scheduler-v5.13-r1.patch
new file mode 100644
index 0000000..82d7f5a
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v5.13-r1.patch
@@ -0,0 +1,9523 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index cb89dbdedc46..37192ffbd3f8 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4878,6 +4878,12 @@
+ 
+ 	sbni=		[NET] Granch SBNI12 leased line adapter
+ 
++	sched_timeslice=
++			[KNL] Time slice in us for BMQ/PDS scheduler.
++			Format: <int> (must be >= 1000)
++			Default: 4000
++			See Documentation/scheduler/sched-BMQ.txt
++
+ 	sched_verbose	[KNL] Enables verbose scheduler debug messages.
+ 
+ 	schedstats=	[KNL,X86] Enable or disable scheduled statistics.
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index 68b21395a743..0c14a4544fd6 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1527,3 +1527,13 @@ is 10 seconds.
+ 
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines what type of yield a call
++to sched_yield() will perform.
++
++  0 - No yield.
++  1 - Deboost and requeue task. (default)
++  2 - Set run queue skip task.
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++                         BitMap queue CPU Scheduler
++                         --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++   Overview
++   Task policy
++   Priority management
++   BitMap Queue
++   CPU Assignment and Migration
++
++
++Background
++==========
++
++The BitMap Queue CPU scheduler, referred to as BMQ from here on, is an
++evolution of the previous Priority and Deadline based Skiplist multiple
++queue scheduler (PDS), and is inspired by the Zircon scheduler. Its goal is
++to keep the scheduler code simple, while staying efficient and scalable for
++interactive tasks such as desktop use, movie playback and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run
++queue and is responsible for scheduling the tasks that are put into that
++run queue.
++
++The run queue is a set of priority queues. Note that, as data structures,
++these queues are FIFO queues for non-rt tasks and priority queues for rt
++tasks; see BitMap Queue below for details. BMQ is optimized for non-rt
++tasks, since most applications are non-rt tasks. Whether the queue is FIFO
++or priority, each queue is an ordered list of runnable tasks awaiting
++execution and the data structures are the same. When it is time for a new
++task to run, the scheduler simply finds the lowest numbered queue that
++contains a task and runs the first task from the head of that queue. The
++per-CPU idle task is also kept in the run queue, so the scheduler can
++always find a task to run from its run queue.
++
++Each task is assigned the same timeslice (default 4ms) when it is picked to
++start running. A task is reinserted at the end of the appropriate priority
++queue when it uses up its whole timeslice. When the scheduler selects a new
++task from the priority queue it sets the CPU's preemption timer for the
++remainder of the previous timeslice. When that timer fires the scheduler
++stops execution of that task, selects another task and starts over again.
++
++If a task blocks waiting for a shared resource then it's taken out of its
++priority queue and is placed in a wait queue for the shared resource. When it
++is unblocked it will be reinserted in the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies,
++like the mainline CFS scheduler. But BMQ is heavily optimized for non-rt
++tasks, that is, NORMAL/BATCH/IDLE policy tasks. Below are the
++implementation details of each policy.
++
++DEADLINE
++	It is squashed as priority 0 FIFO task.
++
++FIFO/RR
++	All RT tasks share one single priority queue in the BMQ run queue design.
++The complexity of the insert operation is O(n). BMQ is not designed for
++systems that run mostly rt policy tasks.
++
++NORMAL/BATCH/IDLE
++	BATCH and IDLE tasks are treated as the same policy. They compete for CPU
++with NORMAL policy tasks, but they just don't boost. To control the priority
++of NORMAL/BATCH/IDLE tasks, simply use nice levels.
++
++ISO
++	ISO policy is not supported in BMQ. Please use nice level -20 NORMAL policy
++task instead.
++
++Priority management
++-------------------
++
++RT tasks have priorities from 0-99. For non-rt tasks, there are three
++different factors used to determine the effective priority of a task; the
++effective priority is what is used to determine which queue it will be in.
++
++The first factor is simply the task's static priority, which is assigned
++from the task's nice level, within [-20, 19] from userland's point of view
++and [0, 39] internally.
++
++The second factor is the priority boost. This is a value bounded within
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] used to offset the base priority; it is
++modified in the following cases:
++
++*When a thread has used up its entire timeslice, always deboost it by
++increasing its boost value by one.
++*When a thread gives up cpu control (voluntarily or not) to reschedule, and
++its switch-in time (time between last switch-in and run) is below the
++threshold based on its priority boost, boost it by decreasing its boost
++value by one, capped at 0 (it won't go negative).
++
++The intent in this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 9cbd915025ad..f4f05b4cb2af 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -476,7 +476,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ 		seq_puts(m, "0 0 0\n");
+ 	else
+ 		seq_printf(m, "%llu %llu %lu\n",
+-		   (unsigned long long)task->se.sum_exec_runtime,
++		   (unsigned long long)tsk_seruntime(task),
+ 		   (unsigned long long)task->sched_info.run_delay,
+ 		   task->sched_info.pcount);
+ 
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ 	[RLIMIT_LOCKS]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ 	[RLIMIT_SIGPENDING]	= { 		0,	       0 },	\
+ 	[RLIMIT_MSGQUEUE]	= {   MQ_BYTES_MAX,   MQ_BYTES_MAX },	\
+-	[RLIMIT_NICE]		= { 0, 0 },				\
++	[RLIMIT_NICE]		= { 30, 30 },				\
+ 	[RLIMIT_RTPRIO]		= { 0, 0 },				\
+ 	[RLIMIT_RTTIME]		= {  RLIM_INFINITY,  RLIM_INFINITY },	\
+ }
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 32813c345115..35f7cfe6539a 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -678,12 +678,18 @@ struct task_struct {
+ 	unsigned int			ptrace;
+ 
+ #ifdef CONFIG_SMP
+-	int				on_cpu;
+ 	struct __call_single_node	wake_entry;
++#endif
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
++	int				on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
+ #ifdef CONFIG_THREAD_INFO_IN_TASK
+ 	/* Current CPU: */
+ 	unsigned int			cpu;
+ #endif
++#ifndef CONFIG_SCHED_ALT
+ 	unsigned int			wakee_flips;
+ 	unsigned long			wakee_flip_decay_ts;
+ 	struct task_struct		*last_wakee;
+@@ -697,6 +703,7 @@ struct task_struct {
+ 	 */
+ 	int				recent_used_cpu;
+ 	int				wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ 	int				on_rq;
+ 
+@@ -705,13 +712,28 @@ struct task_struct {
+ 	int				normal_prio;
+ 	unsigned int			rt_priority;
+ 
++#ifdef CONFIG_SCHED_ALT
++	u64				last_ran;
++	s64				time_slice;
++	int				sq_idx;
++	struct list_head		sq_node;
++#ifdef CONFIG_SCHED_BMQ
++	int				boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++	u64				deadline;
++#endif /* CONFIG_SCHED_PDS */
++	/* sched_clock time spent running */
++	u64				sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ 	const struct sched_class	*sched_class;
+ 	struct sched_entity		se;
+ 	struct sched_rt_entity		rt;
++	struct sched_dl_entity		dl;
++#endif
+ #ifdef CONFIG_CGROUP_SCHED
+ 	struct task_group		*sched_task_group;
+ #endif
+-	struct sched_dl_entity		dl;
+ 
+ #ifdef CONFIG_UCLAMP_TASK
+ 	/*
+@@ -1407,6 +1429,15 @@ struct task_struct {
+ 	 */
+ };
+ 
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t)		((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t)		(0UL)
++#else /* CFS */
++#define tsk_seruntime(t)	((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t)	((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ static inline struct pid *task_pid(struct task_struct *task)
+ {
+ 	return task->thread_pid;
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index 1aff00b65f3c..216fdf2fe90c 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -1,5 +1,24 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ 
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++	return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p)	(0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p)	((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p)	((p)->dl.deadline)
++
+ /*
+  * SCHED_DEADLINE tasks has negative priorities, reflecting
+  * the fact that any of them has higher prio than RT and
+@@ -19,6 +38,7 @@ static inline int dl_task(struct task_struct *p)
+ {
+ 	return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index ab83d85e1183..6af9ae681116 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -18,6 +18,32 @@
+ #define MAX_PRIO		(MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO		(MAX_RT_PRIO + NICE_WIDTH / 2)
+ 
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ	(7)
++
++#define MIN_NORMAL_PRIO		(MAX_RT_PRIO)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH)
++#define DEFAULT_PRIO		(MIN_NORMAL_PRIO + NICE_WIDTH / 2)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ	(0)
++
++#define MIN_NORMAL_PRIO		(128)
++#define NORMAL_PRIO_NUM		(64)
++#define MAX_PRIO		(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO		(MAX_PRIO - NICE_WIDTH / 2)
++#endif
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+  * Convert user-nice values [ -20 ... 0 ... 19 ]
+  * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index e5af028c08b4..0a7565d0d3cf 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -24,8 +24,10 @@ static inline bool task_is_realtime(struct task_struct *tsk)
+ 
+ 	if (policy == SCHED_FIFO || policy == SCHED_RR)
+ 		return true;
++#ifndef CONFIG_SCHED_ALT
+ 	if (policy == SCHED_DEADLINE)
+ 		return true;
++#endif
+ 	return false;
+ }
+ 
+diff --git a/init/Kconfig b/init/Kconfig
+index a61c92066c2e..7746c8d4610b 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -783,9 +783,39 @@ config GENERIC_SCHED_CLOCK
+ 
+ menu "Scheduler features"
+ 
++menuconfig SCHED_ALT
++	bool "Alternative CPU Schedulers"
++	default y
++	help
++	  This feature enables alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++	prompt "Alternative CPU Scheduler"
++	default SCHED_BMQ
++
++config SCHED_BMQ
++	bool "BMQ CPU scheduler"
++	help
++	  The BitMap Queue CPU scheduler for excellent interactivity and
++	  responsiveness on the desktop and solid scalability on normal
++	  hardware and commodity servers.
++
++config SCHED_PDS
++	bool "PDS CPU scheduler"
++	help
++	  The Priority and Deadline based Skip list multiple queue CPU
++	  Scheduler.
++
++endchoice
++
++endif
++
+ config UCLAMP_TASK
+ 	bool "Enable utilization clamping for RT/FAIR tasks"
+ 	depends on CPU_FREQ_GOV_SCHEDUTIL
++	depends on !SCHED_ALT
+ 	help
+ 	  This feature enables the scheduler to track the clamped utilization
+ 	  of each CPU based on RUNNABLE tasks scheduled on that CPU.
+@@ -871,6 +901,7 @@ config NUMA_BALANCING
+ 	depends on ARCH_SUPPORTS_NUMA_BALANCING
+ 	depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ 	depends on SMP && NUMA && MIGRATION
++	depends on !SCHED_ALT
+ 	help
+ 	  This option adds support for automatic NUMA aware memory/task placement.
+ 	  The mechanism is quite primitive and is based on migrating memory when
+@@ -963,6 +994,7 @@ config FAIR_GROUP_SCHED
+ 	depends on CGROUP_SCHED
+ 	default CGROUP_SCHED
+ 
++if !SCHED_ALT
+ config CFS_BANDWIDTH
+ 	bool "CPU bandwidth provisioning for FAIR_GROUP_SCHED"
+ 	depends on FAIR_GROUP_SCHED
+@@ -985,6 +1017,7 @@ config RT_GROUP_SCHED
+ 	  realtime bandwidth for them.
+ 	  See Documentation/scheduler/sched-rt-group.rst for more information.
+ 
++endif #!SCHED_ALT
+ endif #CGROUP_SCHED
+ 
+ config UCLAMP_TASK_GROUP
+@@ -1228,6 +1261,7 @@ config CHECKPOINT_RESTORE
+ 
+ config SCHED_AUTOGROUP
+ 	bool "Automatic process group scheduling"
++	depends on !SCHED_ALT
+ 	select CGROUPS
+ 	select CGROUP_SCHED
+ 	select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index 8b08c2e19cbb..0dfa1a63dc4e 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -75,9 +75,15 @@ struct task_struct init_task
+ 	.stack		= init_stack,
+ 	.usage		= REFCOUNT_INIT(2),
+ 	.flags		= PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++	.prio		= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
++	.static_prio	= DEFAULT_PRIO,
++	.normal_prio	= DEFAULT_PRIO + MAX_PRIORITY_ADJ,
++#else
+ 	.prio		= MAX_PRIO - 20,
+ 	.static_prio	= MAX_PRIO - 20,
+ 	.normal_prio	= MAX_PRIO - 20,
++#endif
+ 	.policy		= SCHED_NORMAL,
+ 	.cpus_ptr	= &init_task.cpus_mask,
+ 	.cpus_mask	= CPU_MASK_ALL,
+@@ -87,6 +93,17 @@ struct task_struct init_task
+ 	.restart_block	= {
+ 		.fn = do_no_restart_syscall,
+ 	},
++#ifdef CONFIG_SCHED_ALT
++	.sq_node	= LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++	.boost_prio	= 0,
++	.sq_idx		= 15,
++#endif
++#ifdef CONFIG_SCHED_PDS
++	.deadline	= 0,
++#endif
++	.time_slice	= HZ,
++#else
+ 	.se		= {
+ 		.group_node 	= LIST_HEAD_INIT(init_task.se.group_node),
+ 	},
+@@ -94,6 +111,7 @@ struct task_struct init_task
+ 		.run_list	= LIST_HEAD_INIT(init_task.rt.run_list),
+ 		.time_slice	= RR_TIMESLICE,
+ 	},
++#endif
+ 	.tasks		= LIST_HEAD_INIT(init_task.tasks),
+ #ifdef CONFIG_SMP
+ 	.pushable_tasks	= PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index adb5190c4429..8c02bce63146 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -636,7 +636,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * Helper routine for generate_sched_domains().
+  * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1032,7 +1032,7 @@ static void rebuild_sched_domains_locked(void)
+ 	/* Have scheduler rebuild the domains */
+ 	partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ static void rebuild_sched_domains_locked(void)
+ {
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index 27725754ac99..769d773c7182 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -106,7 +106,7 @@ int __delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ 	 */
+ 	t1 = tsk->sched_info.pcount;
+ 	t2 = tsk->sched_info.run_delay;
+-	t3 = tsk->se.sum_exec_runtime;
++	t3 = tsk_seruntime(tsk);
+ 
+ 	d->cpu_count += t1;
+ 
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 65809fac3038..9504db57d878 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -122,7 +122,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 			sig->curr_target = next_thread(tsk);
+ 	}
+ 
+-	add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++	add_device_randomness((const void*) &tsk_seruntime(tsk),
+ 			      sizeof(unsigned long long));
+ 
+ 	/*
+@@ -143,7 +143,7 @@ static void __exit_signal(struct task_struct *tsk)
+ 	sig->inblock += task_io_get_inblock(tsk);
+ 	sig->oublock += task_io_get_oublock(tsk);
+ 	task_io_accounting_add(&sig->ioac, &tsk->ioac);
+-	sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++	sig->sum_sched_runtime += tsk_seruntime(tsk);
+ 	sig->nr_threads--;
+ 	__unhash_process(tsk, group_dead);
+ 	write_sequnlock(&sig->stats_lock);
+diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
+index 3a4beb9395c4..98a709628cb3 100644
+--- a/kernel/livepatch/transition.c
++++ b/kernel/livepatch/transition.c
+@@ -307,7 +307,11 @@ static bool klp_try_switch_task(struct task_struct *task)
+ 	 */
+ 	rq = task_rq_lock(task, &flags);
+ 
++#ifdef	CONFIG_SCHED_ALT
++	if (task_running(task) && task != current) {
++#else
+ 	if (task_running(rq, task) && task != current) {
++#endif
+ 		snprintf(err_buf, STACK_ERR_BUF_SIZE,
+ 			 "%s: %s:%d is running\n", __func__, task->comm,
+ 			 task->pid);
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 406818196a9f..31c46750fa94 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -227,14 +227,18 @@ static __always_inline bool unlock_rt_mutex_safe(struct rt_mutex *lock,
+  * Only use with rt_mutex_waiter_{less,equal}()
+  */
+ #define task_to_waiter(p)	\
+-	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = (p)->dl.deadline }
++	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = __tsk_deadline(p) }
+ 
+ static __always_inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left,
+ 						struct rt_mutex_waiter *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline < right->deadline);
++#else
+ 	if (left->prio < right->prio)
+ 		return 1;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -243,16 +247,22 @@ static __always_inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return dl_time_before(left->deadline, right->deadline);
++#endif
+ 
+ 	return 0;
++#endif
+ }
+ 
+ static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
+ 						 struct rt_mutex_waiter *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++	return (left->deadline == right->deadline);
++#else
+ 	if (left->prio != right->prio)
+ 		return 0;
+ 
++#ifndef CONFIG_SCHED_BMQ
+ 	/*
+ 	 * If both waiters have dl_prio(), we check the deadlines of the
+ 	 * associated tasks.
+@@ -261,8 +271,10 @@ static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
+ 	 */
+ 	if (dl_prio(left->prio))
+ 		return left->deadline == right->deadline;
++#endif
+ 
+ 	return 1;
++#endif
+ }
+ 
+ #define __node_2_waiter(node) \
+@@ -654,7 +666,7 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task,
+ 	 * the values of the node being removed.
+ 	 */
+ 	waiter->prio = task->prio;
+-	waiter->deadline = task->dl.deadline;
++	waiter->deadline = __tsk_deadline(task);
+ 
+ 	rt_mutex_enqueue(lock, waiter);
+ 
+@@ -925,7 +937,7 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex *lock,
+ 	waiter->task = task;
+ 	waiter->lock = lock;
+ 	waiter->prio = task->prio;
+-	waiter->deadline = task->dl.deadline;
++	waiter->deadline = __tsk_deadline(task);
+ 
+ 	/* Get the top priority waiter on the lock */
+ 	if (rt_mutex_has_waiters(lock))
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 5fc9c9b70862..06b60d612535 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -22,14 +22,21 @@ ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y)
+ CFLAGS_core.o := $(PROFILING) -fno-omit-frame-pointer
+ endif
+ 
+-obj-y += core.o loadavg.o clock.o cputime.o
+-obj-y += idle.o fair.o rt.o deadline.o
+-obj-y += wait.o wait_bit.o swait.o completion.o
+-
+-obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o pelt.o
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
++obj-y += core.o
++obj-y += fair.o rt.o deadline.o
++obj-$(CONFIG_SMP) += cpudeadline.o stop_task.o
+ obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o
+-obj-$(CONFIG_SCHEDSTATS) += stats.o
++endif
+ obj-$(CONFIG_SCHED_DEBUG) += debug.o
++obj-y += loadavg.o clock.o cputime.o
++obj-y += idle.o
++obj-y += wait.o wait_bit.o swait.o completion.o
++obj-$(CONFIG_SMP) += cpupri.o pelt.o topology.o
++obj-$(CONFIG_SCHEDSTATS) += stats.o
+ obj-$(CONFIG_CGROUP_CPUACCT) += cpuacct.o
+ obj-$(CONFIG_CPU_FREQ) += cpufreq.o
+ obj-$(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) += cpufreq_schedutil.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..b65b12c6014f
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,7249 @@
++/*
++ *  kernel/sched/alt_core.c
++ *
++ *  Core alternative kernel scheduler code and related syscalls
++ *
++ *  Copyright (C) 1991-2002  Linus Torvalds
++ *
++ *  2009-08-13	Brainfuck deadline scheduling policy by Con Kolivas deletes
++ *		a whole lot of those previous things.
++ *  2017-09-06	Priority and Deadline based Skip list multiple queue kernel
++ *		scheduler by Alfred Chen.
++ *  2019-02-20	BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++
++#include <linux/sched/rt.h>
++
++#include <linux/context_tracking.h>
++#include <linux/compat.h>
++#include <linux/blkdev.h>
++#include <linux/delayacct.h>
++#include <linux/freezer.h>
++#include <linux/init_task.h>
++#include <linux/kprobes.h>
++#include <linux/mmu_context.h>
++#include <linux/nmi.h>
++#include <linux/profile.h>
++#include <linux/rcupdate_wait.h>
++#include <linux/security.h>
++#include <linux/syscalls.h>
++#include <linux/wait_bit.h>
++
++#include <linux/kcov.h>
++#include <linux/scs.h>
++
++#include <asm/switch_to.h>
++
++#include "../workqueue_internal.h"
++#include "../../fs/io-wq.h"
++#include "../smpboot.h"
++
++#include "pelt.h"
++#include "smp.h"
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x)	(1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x)	(0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v5.13-r1"
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p)		rt_prio((p)->prio)
++#define rt_policy(policy)	((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p)	(rt_policy((p)->policy))
++
++#define STOP_PRIO		(MAX_RT_PRIO - 1)
++
++/* Default time slice is 4 in ms, can be set via kernel parameter "sched_timeslice" */
++u64 sched_timeslice_ns __read_mostly = (4 << 20);
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq);
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++static int __init sched_timeslice(char *str)
++{
++	int timeslice_ms;
++
++	get_option(&str, &timeslice_ms);
++	if (2 != timeslice_ms)
++		timeslice_ms = 4;
++	sched_timeslice_ns = timeslice_ms << 20;
++	sched_timeslice_imp(timeslice_ms);
++
++	return 0;
++}
++early_param("sched_timeslice", sched_timeslice);
++
++/* Reschedule if less than this many μs left */
++#define RESCHED_NS		(100 << 10)
++
++/**
++ * sched_yield_type - Choose what sort of yield sched_yield will perform.
++ * 0: No yield.
++ * 1: Deboost and requeue task. (default)
++ * 2: Set rq skip task.
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_affinity_masks);
++DEFINE_PER_CPU(cpumask_t *, sched_cpu_affinity_end_mask);
++
++DEFINE_PER_CPU(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU(cpumask_t *, sched_cpu_llc_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPUs number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++static DEFINE_MUTEX(sched_hotcpu_mutex);
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++#define IDLE_WM	(IDLE_TASK_SCHED_PRIO)
++
++#ifdef CONFIG_SCHED_SMT
++static cpumask_t sched_sg_idle_mask ____cacheline_aligned_in_smp;
++#endif
++static cpumask_t sched_rq_watermark[SCHED_BITS] ____cacheline_aligned_in_smp;
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++	int i;
++
++	bitmap_zero(q->bitmap, SCHED_BITS);
++	for(i = 0; i < SCHED_BITS; i++)
++		INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init idle task and put into queue structure of rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++					 struct task_struct *idle)
++{
++	idle->sq_idx = IDLE_TASK_SCHED_PRIO;
++	INIT_LIST_HEAD(&q->heads[idle->sq_idx]);
++	list_add(&idle->sq_node, &q->heads[idle->sq_idx]);
++}
++
++/* water mark related functions */
++static inline void update_sched_rq_watermark(struct rq *rq)
++{
++	unsigned long watermark = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	unsigned long last_wm = rq->watermark;
++	unsigned long i;
++	int cpu;
++
++	if (watermark == last_wm)
++		return;
++
++	rq->watermark = watermark;
++	cpu = cpu_of(rq);
++	if (watermark < last_wm) {
++		for (i = watermark + 1; i <= last_wm; i++)
++			cpumask_andnot(&sched_rq_watermark[i],
++				       &sched_rq_watermark[i], cpumask_of(cpu));
++#ifdef CONFIG_SCHED_SMT
++		if (static_branch_likely(&sched_smt_present) &&
++		    IDLE_WM == last_wm)
++			cpumask_andnot(&sched_sg_idle_mask,
++				       &sched_sg_idle_mask, cpu_smt_mask(cpu));
++#endif
++		return;
++	}
++	/* last_wm < watermark */
++	for (i = last_wm + 1; i <= watermark; i++)
++		cpumask_set_cpu(cpu, &sched_rq_watermark[i]);
++#ifdef CONFIG_SCHED_SMT
++	if (static_branch_likely(&sched_smt_present) && IDLE_WM == watermark) {
++		cpumask_t tmp;
++
++		cpumask_and(&tmp, cpu_smt_mask(cpu), &sched_rq_watermark[IDLE_WM]);
++		if (cpumask_equal(&tmp, cpu_smt_mask(cpu)))
++			cpumask_or(&sched_sg_idle_mask, cpu_smt_mask(cpu),
++				   &sched_sg_idle_mask);
++	}
++#endif
++}
++
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++	unsigned long idx = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++	const struct list_head *head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++	return list_first_entry(head, struct task_struct, sq_node);
++}
++
++static inline struct task_struct *
++sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++	unsigned long idx = p->sq_idx;
++	struct list_head *head = &rq->queue.heads[idx];
++
++	if (list_is_last(&p->sq_node, head)) {
++		idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++				    sched_idx2prio(idx, rq) + 1);
++		head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++		return list_first_entry(head, struct task_struct, sq_node);
++	}
++
++	return list_next_entry(p, sq_node);
++}
++
++static inline struct task_struct *rq_runnable_task(struct rq *rq)
++{
++	struct task_struct *next = sched_rq_first_task(rq);
++
++	if (unlikely(next == rq->skip))
++		next = sched_rq_next_task(next, rq);
++
++	return next;
++}
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ *   p->pi_lock
++ *     rq->lock
++ *       hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ *  rq1->lock
++ *    rq2->lock  where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ *  - sched_setaffinity()/
++ *    set_cpus_allowed_ptr():	p->cpus_ptr, p->nr_cpus_allowed
++ *  - set_user_nice():		p->se.load, p->*prio
++ *  - __sched_setscheduler():	p->sched_class, p->policy, p->*prio,
++ *				p->se.load, p->rt_priority,
++ *				p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ *  - sched_setnuma():		p->numa_preferred_nid
++ *  - sched_move_task()/
++ *    cpu_cgroup_fork():	p->sched_task_group
++ *  - uclamp_update_active()	p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ *   is changed locklessly using set_current_state(), __set_current_state() or
++ *   set_special_state(), see their respective comments, or by
++ *   try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ *   concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ *   is set by activate_task() and cleared by deactivate_task(), under
++ *   rq->lock. Non-zero indicates the task is runnable, the special
++ *   ON_RQ_MIGRATING state is used for migration without holding both
++ *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ *   is set by prepare_task() and cleared by finish_task() such that it will be
++ *   set before p is scheduled-in and cleared after p is scheduled-out, both
++ *   under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ *   [ The astute reader will observe that it is possible for two tasks on one
++ *     CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ *  - Don't call set_task_cpu() on a blocked task:
++ *
++ *    We don't care what CPU we're not running on, this simplifies hotplug,
++ *    the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ *  - for try_to_wake_up(), called under p->pi_lock:
++ *
++ *    This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ *  - for migration called under rq->lock:
++ *    [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ *    o move_queued_task()
++ *    o detach_task()
++ *
++ *  - for migration called under double_rq_lock():
++ *
++ *    o __migrate_swap_task()
++ *    o push_rt_task() / pull_rt_task()
++ *    o push_dl_task() / pull_dl_task()
++ *    o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
++static inline struct rq
++*__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock(&rq->lock);
++			if (likely((p->on_cpu || task_on_rq_queued(p))
++				   && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock(&rq->lock);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			*plock = NULL;
++			return rq;
++		}
++	}
++}
++
++static inline void
++__task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++	if (NULL != lock)
++		raw_spin_unlock(lock);
++}
++
++static inline struct rq
++*task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock,
++			  unsigned long *flags)
++{
++	struct rq *rq;
++	for (;;) {
++		rq = task_rq(p);
++		if (p->on_cpu || task_on_rq_queued(p)) {
++			raw_spin_lock_irqsave(&rq->lock, *flags);
++			if (likely((p->on_cpu || task_on_rq_queued(p))
++				   && rq == task_rq(p))) {
++				*plock = &rq->lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&rq->lock, *flags);
++		} else if (task_on_rq_migrating(p)) {
++			do {
++				cpu_relax();
++			} while (unlikely(task_on_rq_migrating(p)));
++		} else {
++			raw_spin_lock_irqsave(&p->pi_lock, *flags);
++			if (likely(!p->on_cpu && !p->on_rq &&
++				   rq == task_rq(p))) {
++				*plock = &p->pi_lock;
++				return rq;
++			}
++			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++		}
++	}
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock,
++			      unsigned long *flags)
++{
++	raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	lockdep_assert_held(&p->pi_lock);
++
++	for (;;) {
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++			return rq;
++		raw_spin_unlock(&rq->lock);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	for (;;) {
++		raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++		rq = task_rq(p);
++		raw_spin_lock(&rq->lock);
++		/*
++		 *	move_queued_task()		task_rq_lock()
++		 *
++		 *	ACQUIRE (rq->lock)
++		 *	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
++		 *	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
++		 *	[S] ->cpu = new_cpu		[L] task_rq()
++		 *					[L] ->on_rq
++		 *	RELEASE (rq->lock)
++		 *
++		 * If we observe the old CPU in task_rq_lock(), the acquire of
++		 * the old rq->lock will fully serialize against the stores.
++		 *
++		 * If we observe the new CPU in task_rq_lock(), the address
++		 * dependency headed by '[L] rq = task_rq()' and the acquire
++		 * will pair with the WMB to ensure we then also see migrating.
++		 */
++		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++			return rq;
++		}
++		raw_spin_unlock(&rq->lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++		while (unlikely(task_on_rq_migrating(p)))
++			cpu_relax();
++	}
++}
++
++static inline void
++rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void
++rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compile should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++	s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++	/*
++	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
++	 * this case when a previous update_rq_clock() happened inside a
++	 * {soft,}irq region.
++	 *
++	 * When this happens, we stop ->clock_task and only update the
++	 * prev_irq_time stamp to account for the part that fit, so that a next
++	 * update will consume the rest. This ensures ->clock_task is
++	 * monotonic.
++	 *
++	 * It does however cause some slight miss-attribution of {soft,}irq
++	 * time, a more accurate solution would be to update the irq_time using
++	 * the current rq->clock timestamp, except that would require using
++	 * atomic ops.
++	 */
++	if (irq_delta > delta)
++		irq_delta = delta;
++
++	rq->prev_irq_time += irq_delta;
++	delta -= irq_delta;
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	if (static_key_false((&paravirt_steal_rq_enabled))) {
++		steal = paravirt_steal_clock(cpu_of(rq));
++		steal -= rq->prev_steal_time_rq;
++
++		if (unlikely(steal > delta))
++			steal = delta;
++
++		rq->prev_steal_time_rq += steal;
++		delta -= steal;
++	}
++#endif
++
++	rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	if ((irq_delta + steal))
++		update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++	s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++	if (unlikely(delta <= 0))
++		return;
++	rq->clock += delta;
++	update_rq_time_edge(rq);
++	update_rq_clock_task(rq, delta);
++}
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If tick is needed, lets send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++	int cpu = cpu_of(rq);
++
++	if (!tick_nohz_full_cpu(cpu))
++		return;
++
++	if (rq->nr_running < 2)
++		tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++	else
++		tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func)		\
++	psi_dequeue(p, flags & DEQUEUE_SLEEP);			\
++	sched_info_dequeued(rq, p);				\
++								\
++	list_del(&p->sq_node);					\
++	if (list_empty(&rq->queue.heads[p->sq_idx])) {		\
++		clear_bit(sched_idx2prio(p->sq_idx, rq),	\
++			  rq->queue.bitmap);			\
++		func;						\
++	}
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags)				\
++	sched_info_queued(rq, p);					\
++	psi_enqueue(p, flags);						\
++									\
++	p->sq_idx = task_sched_prio_idx(p, rq);				\
++	list_add_tail(&p->sq_node, &rq->queue.heads[p->sq_idx]);	\
++	set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->priodl);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++
++	__SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_rq_watermark(rq));
++	--rq->nr_running;
++#ifdef CONFIG_SMP
++	if (1 == rq->nr_running)
++		cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++	lockdep_assert_held(&rq->lock);
++
++	/*printk(KERN_INFO "sched: enqueue(%d) %px %016llx\n", cpu_of(rq), p, p->priodl);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++		  task_cpu(p), cpu_of(rq));
++
++	__SCHED_ENQUEUE_TASK(p, rq, flags);
++	update_sched_rq_watermark(rq);
++	++rq->nr_running;
++#ifdef CONFIG_SMP
++	if (2 == rq->nr_running)
++		cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++	sched_update_tick_dependency(rq);
++}
++
++static inline void requeue_task(struct task_struct *p, struct rq *rq)
++{
++	int idx;
++
++	lockdep_assert_held(&rq->lock);
++	/*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->priodl);*/
++	WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++		  cpu_of(rq), task_cpu(p));
++
++	idx = task_sched_prio_idx(p, rq);
++
++	list_del(&p->sq_node);
++	list_add_tail(&p->sq_node, &rq->queue.heads[idx]);
++	if (idx != p->sq_idx) {
++		if (list_empty(&rq->queue.heads[p->sq_idx]))
++			clear_bit(sched_idx2prio(p->sq_idx, rq),
++				  rq->queue.bitmap);
++		p->sq_idx = idx;
++		set_bit(sched_idx2prio(p->sq_idx, rq), rq->queue.bitmap);
++		update_sched_rq_watermark(rq);
++	}
++}
++
++/*
++ * cmpxchg based fetch_or, macro so it works for different integer types
++ */
++#define fetch_or(ptr, mask)						\
++	({								\
++		typeof(ptr) _ptr = (ptr);				\
++		typeof(mask) _mask = (mask);				\
++		typeof(*_ptr) _old, _val = *_ptr;			\
++									\
++		for (;;) {						\
++			_old = cmpxchg(_ptr, _val, _val | _mask);	\
++			if (_old == _val)				\
++				break;					\
++			_val = _old;					\
++		}							\
++	_old;								\
++})
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static bool set_nr_and_not_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++	struct thread_info *ti = task_thread_info(p);
++	typeof(ti->flags) old, val = READ_ONCE(ti->flags);
++
++	for (;;) {
++		if (!(val & _TIF_POLLING_NRFLAG))
++			return false;
++		if (val & _TIF_NEED_RESCHED)
++			return true;
++		old = cmpxchg(&ti->flags, val, val | _TIF_NEED_RESCHED);
++		if (old == val)
++			break;
++		val = old;
++	}
++	return true;
++}
++
++#else
++static bool set_nr_and_not_polling(struct task_struct *p)
++{
++	set_tsk_need_resched(p);
++	return true;
++}
++
++#ifdef CONFIG_SMP
++static bool set_nr_if_polling(struct task_struct *p)
++{
++	return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	struct wake_q_node *node = &task->wake_q;
++
++	/*
++	 * Atomically grab the task, if ->wake_q is !nil already it means
++	 * it's already queued (either by us or someone else) and will get the
++	 * wakeup due to that.
++	 *
++	 * In order to ensure that a pending wakeup will observe our pending
++	 * state, even in the failed case, an explicit smp_mb() must be used.
++	 */
++	smp_mb__before_atomic();
++	if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++		return false;
++
++	/*
++	 * The head is context local, there can be no concurrency.
++	 */
++	*head->lastp = node;
++	head->lastp = &node->next;
++	return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++	if (__wake_q_add(head, task))
++		get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold a reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending on whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++	if (!__wake_q_add(head, task))
++		put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++	struct wake_q_node *node = head->first;
++
++	while (node != WAKE_Q_TAIL) {
++		struct task_struct *task;
++
++		task = container_of(node, struct task_struct, wake_q);
++		BUG_ON(!task);
++		/* task can safely be re-inserted now: */
++		node = node->next;
++		task->wake_q.next = NULL;
++
++		/*
++		 * wake_up_process() executes a full barrier, which pairs with
++		 * the queueing in wake_q_add() so as not to miss wakeups.
++		 */
++		wake_up_process(task);
++		put_task_struct(task);
++	}
++}
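++
++/*
++ * Typical usage (sketch; 'lock', 'waiters' and 'w' are illustrative and not
++ * part of this patch): collect wakeups under a lock, then issue them after
++ * dropping it:
++ *
++ *	DEFINE_WAKE_Q(wake_q);
++ *
++ *	spin_lock(&lock);
++ *	list_for_each_entry(w, &waiters, node)
++ *		wake_q_add(&wake_q, w->task);
++ *	spin_unlock(&lock);
++ *	wake_up_q(&wake_q);
++ */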
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++void resched_curr(struct rq *rq)
++{
++	struct task_struct *curr = rq->curr;
++	int cpu;
++
++	lockdep_assert_held(&rq->lock);
++
++	if (test_tsk_need_resched(curr))
++		return;
++
++	cpu = cpu_of(rq);
++	if (cpu == smp_processor_id()) {
++		set_tsk_need_resched(curr);
++		set_preempt_need_resched();
++		return;
++	}
++
++	if (set_nr_and_not_polling(curr))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++void resched_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (cpu_online(cpu) || cpu == smp_processor_id())
++		resched_curr(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++void nohz_balance_enter_idle(int cpu) {}
++
++void select_nohz_load_balancer(int stop_tick) {}
++
++void set_cpu_sd_state_idle(void) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU.  This is good for power-savings.
++ *
++ * We don't do a similar optimization for a completely idle system, as
++ * selecting an idle CPU will add more delay to the timers than intended
++ * (as that CPU's timer base may not be up to date wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++	int i, cpu = smp_processor_id(), default_cpu = -1;
++	struct cpumask *mask;
++
++	if (housekeeping_cpu(cpu, HK_FLAG_TIMER)) {
++		if (!idle_cpu(cpu))
++			return cpu;
++		default_cpu = cpu;
++	}
++
++	for (mask = per_cpu(sched_cpu_affinity_masks, cpu) + 1;
++	     mask < per_cpu(sched_cpu_affinity_end_mask, cpu); mask++)
++		for_each_cpu_and(i, mask, housekeeping_cpumask(HK_FLAG_TIMER))
++			if (!idle_cpu(i))
++				return i;
++
++	if (default_cpu == -1)
++		default_cpu = housekeeping_any_cpu(HK_FLAG_TIMER);
++	cpu = default_cpu;
++
++	return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (cpu == smp_processor_id())
++		return;
++
++	if (set_nr_and_not_polling(rq->idle))
++		smp_send_reschedule(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++	/*
++	 * We just need the target to call irq_exit() and re-evaluate
++	 * the next tick. The nohz full kick at least implies that.
++	 * If needed we can still optimize that later with an
++	 * empty IRQ.
++	 */
++	if (cpu_is_offline(cpu))
++		return true;  /* Don't try to wake offline CPUs. */
++	if (tick_nohz_full_cpu(cpu)) {
++		if (cpu != smp_processor_id() ||
++		    tick_nohz_tick_stopped())
++			tick_nohz_full_kick_cpu(cpu);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++	if (!wake_up_full_nohz_cpu(cpu))
++		wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++	struct rq *rq = info;
++	int cpu = cpu_of(rq);
++	unsigned int flags;
++
++	/*
++	 * Release the rq::nohz_csd.
++	 */
++	flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++	WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++	rq->idle_balance = idle_cpu(cpu);
++	if (rq->idle_balance && !need_resched()) {
++		rq->nohz_idle_balance = flags;
++		raise_softirq_irqoff(SCHED_SOFTIRQ);
++	}
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void check_preempt_curr(struct rq *rq)
++{
++	if (sched_rq_first_task(rq) != rq->curr)
++		resched_curr(rq);
++}
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++	if (hrtimer_active(&rq->hrtick_timer))
++		hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++	struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++	WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++	raw_spin_lock(&rq->lock);
++	resched_curr(rq);
++	raw_spin_unlock(&rq->lock);
++
++	return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ *  - enabled by features
++ *  - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	/**
++	 * Alt schedule FW doesn't support sched_feat yet
++	if (!sched_feat(HRTICK))
++		return 0;
++	*/
++	if (!cpu_active(cpu_of(rq)))
++		return 0;
++	return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	ktime_t time = rq->hrtick_time;
++
++	hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++	struct rq *rq = arg;
++
++	raw_spin_lock(&rq->lock);
++	__hrtick_restart(rq);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++void hrtick_start(struct rq *rq, u64 delay)
++{
++	struct hrtimer *timer = &rq->hrtick_timer;
++	s64 delta;
++
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense and can cause timer DoS.
++	 */
++	delta = max_t(s64, delay, 10000LL);
++
++	rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++	if (rq == this_rq())
++		__hrtick_restart(rq);
++	else
++		smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and irqs disabled
++ */
++void hrtick_start(struct rq *rq, u64 delay)
++{
++	/*
++	 * Don't schedule slices shorter than 10000ns, that just
++	 * doesn't make sense. Rely on vruntime for fairness.
++	 */
++	delay = max_t(u64, delay, 10000LL);
++	hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++		      HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++	rq->hrtick_timer.function = hrtick;
++}
++#else	/* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++	return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif	/* CONFIG_SCHED_HRTICK */
++
++/*
++ * Calculate the expected normal priority: i.e. priority
++ * without taking RT-inheritance into account. Might be
++ * boosted by interactivity modifiers. Changes upon fork,
++ * setprio syscalls, and whenever the interactivity
++ * estimator recalculates.
++ */
++static inline int normal_prio(struct task_struct *p)
++{
++	return task_has_rt_policy(p) ? (MAX_RT_PRIO - 1 - p->rt_priority) :
++		p->static_prio + MAX_PRIORITY_ADJ;
++}
++
++/*
++ * Calculate the current priority, i.e. the priority
++ * taken into account by the scheduler. This value might
++ * be boosted by RT tasks as it will be RT if the task got
++ * RT-boosted. If not then it returns p->normal_prio.
++ */
++static int effective_prio(struct task_struct *p)
++{
++	p->normal_prio = normal_prio(p);
++	/*
++	 * If we are RT tasks or we were boosted to RT priority,
++	 * keep the priority unchanged. Otherwise, update priority
++	 * to the normal priority:
++	 */
++	if (!rt_prio(p->prio))
++		return p->normal_prio;
++	return p->prio;
++}
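++
++/*
++ * Worked example (sketch): with MAX_RT_PRIO == 100, a SCHED_FIFO task with
++ * rt_priority 50 gets normal_prio == 100 - 1 - 50 == 49, while a nice-0
++ * SCHED_NORMAL task (static_prio == 120) gets 120 + MAX_PRIORITY_ADJ;
++ * numerically lower values mean higher priority.
++ */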
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++	enqueue_task(p, rq, ENQUEUE_WAKEUP);
++	p->on_rq = TASK_ON_RQ_QUEUED;
++
++	/*
++	 * If in_iowait is set, the code below may not trigger any cpufreq
++	 * utilization updates, so do it here explicitly with the IOWAIT flag
++	 * passed.
++	 */
++	cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++/*
++ * deactivate_task - remove a task from the runqueue.
++ *
++ * Context: rq->lock
++ */
++static inline void deactivate_task(struct task_struct *p, struct rq *rq)
++{
++	dequeue_task(p, rq, DEQUEUE_SLEEP);
++	p->on_rq = 0;
++	cpufreq_update_util(rq, 0);
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++	 * successfully executed on another CPU. We must ensure that updates of
++	 * per-task data have been completed by this moment.
++	 */
++	smp_wmb();
++
++#ifdef CONFIG_THREAD_INFO_IN_TASK
++	WRITE_ONCE(p->cpu, cpu);
++#else
++	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++#endif
++}
++
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++	return p->migration_disabled;
++#else
++	return false;
++#endif
++}
++
++#define SCA_CHECK		0x01
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++	/*
++	 * We should never call set_task_cpu() on a blocked task,
++	 * ttwu() will sort out the placement.
++	 */
++	WARN_ON_ONCE(p->state != TASK_RUNNING && p->state != TASK_WAKING &&
++		     !p->on_rq);
++#ifdef CONFIG_LOCKDEP
++	/*
++	 * The caller should hold either p->pi_lock or rq->lock, when changing
++	 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++	 *
++	 * sched_move_task() holds both and thus holding either pins the cgroup,
++	 * see task_group().
++	 */
++	WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++				      lockdep_is_held(&task_rq(p)->lock)));
++#endif
++	/*
++	 * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++	 */
++	WARN_ON_ONCE(!cpu_online(new_cpu));
++
++	WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++	if (task_cpu(p) == new_cpu)
++		return;
++	trace_sched_migrate_task(p, new_cpu);
++	rseq_migrate(p);
++	perf_event_task_migrate(p);
++
++	__set_task_cpu(p, new_cpu);
++}
++
++#define MDF_FORCE_ENABLED	0x80
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	/*
++	 * This here violates the locking rules for affinity, since we're only
++	 * supposed to change these variables while holding both rq->lock and
++	 * p->pi_lock.
++	 *
++	 * HOWEVER, it magically works, because ttwu() is the only code that
++	 * accesses these variables under p->pi_lock and only does so after
++	 * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++	 * before finish_task().
++	 *
++	 * XXX do further audits, this smells like something putrid.
++	 */
++	SCHED_WARN_ON(!p->on_cpu);
++	p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++	struct task_struct *p = current;
++	int cpu;
++
++	if (p->migration_disabled) {
++		p->migration_disabled++;
++		return;
++	}
++
++	preempt_disable();
++	cpu = smp_processor_id();
++	if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++		cpu_rq(cpu)->nr_pinned++;
++		p->migration_disabled = 1;
++		p->migration_flags &= ~MDF_FORCE_ENABLED;
++
++		/*
++		 * Violates locking rules! see comment in __do_set_cpus_ptr().
++		 */
++		if (p->cpus_ptr == &p->cpus_mask)
++			__do_set_cpus_ptr(p, cpumask_of(cpu));
++	}
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++	struct task_struct *p = current;
++
++	if (!p->migration_disabled)
++		return;
++
++	if (p->migration_disabled > 1) {
++		p->migration_disabled--;
++		return;
++	}
++
++	/*
++	 * Ensure stop_task runs either before or after this, and that
++	 * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++	 */
++	preempt_disable();
++	/*
++	 * Assumption: current should be running on allowed cpu
++	 */
++	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++	if (p->cpus_ptr != &p->cpus_mask)
++		__do_set_cpus_ptr(p, &p->cpus_mask);
++	/*
++	 * Mustn't clear migration_disabled() until cpus_ptr points back at the
++	 * regular cpus_mask, otherwise things that race (eg.
++	 * select_fallback_rq) get confused.
++	 */
++	barrier();
++	p->migration_disabled = 0;
++	this_rq()->nr_pinned--;
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
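++
++/*
++ * Usage sketch: migrate_disable()/migrate_enable() nest and pin the current
++ * task to its CPU across a preemptible section ('my_counter' is an
++ * illustrative per-CPU variable, not part of this patch):
++ *
++ *	migrate_disable();
++ *	this_cpu_inc(my_counter);
++ *	migrate_enable();
++ */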
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++	/* When not in the task's cpumask, no point in looking further. */
++	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++		return false;
++
++	/* migrate_disabled() must be allowed to finish. */
++	if (is_migration_disabled(p))
++		return cpu_online(cpu);
++
++	/* Non kernel threads are not allowed during either online or offline. */
++	if (!(p->flags & PF_KTHREAD))
++		return cpu_active(cpu);
++
++	/* KTHREAD_IS_PER_CPU is always allowed. */
++	if (kthread_is_per_cpu(p))
++		return cpu_online(cpu);
++
++	/* Regular kernel threads don't get to stay during offline. */
++	if (cpu_dying(cpu))
++		return false;
++
++	/* But are allowed during online. */
++	return cpu_online(cpu);
++}
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ *    stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ *    off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ *    it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ *    is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int
++				   new_cpu)
++{
++	lockdep_assert_held(&rq->lock);
++
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++	dequeue_task(p, rq, 0);
++	set_task_cpu(p, new_cpu);
++	raw_spin_unlock(&rq->lock);
++
++	rq = cpu_rq(new_cpu);
++
++	raw_spin_lock(&rq->lock);
++	BUG_ON(task_cpu(p) != new_cpu);
++	sched_task_sanity_check(p, rq);
++	enqueue_task(p, rq, 0);
++	p->on_rq = TASK_ON_RQ_QUEUED;
++	check_preempt_curr(rq);
++
++	return rq;
++}
++
++struct migration_arg {
++	struct task_struct *task;
++	int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int
++				 dest_cpu)
++{
++	/* Affinity changed (again). */
++	if (!is_cpu_allowed(p, dest_cpu))
++		return rq;
++
++	update_rq_clock(rq);
++	return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a highprio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++	struct migration_arg *arg = data;
++	struct task_struct *p = arg->task;
++	struct rq *rq = this_rq();
++	unsigned long flags;
++
++	/*
++	 * The original target CPU might have gone down and we might
++	 * be on another CPU but it doesn't matter.
++	 */
++	local_irq_save(flags);
++	/*
++	 * We need to explicitly wake pending tasks before running
++	 * __migrate_task() such that we will not miss enforcing cpus_ptr
++	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++	 */
++	flush_smp_call_function_from_idle();
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++	/*
++	 * If task_rq(p) != rq, it cannot be migrated here, because we're
++	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++	 * we're holding p->pi_lock.
++	 */
++	if (task_rq(p) == rq && task_on_rq_queued(p))
++		rq = __migrate_task(rq, p, arg->dest_cpu);
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	return 0;
++}
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask)
++{
++	cpumask_copy(&p->cpus_mask, new_mask);
++	p->nr_cpus_allowed = cpumask_weight(new_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++	lockdep_assert_held(&p->pi_lock);
++	set_cpus_allowed_common(p, new_mask);
++}
++
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++	__do_set_cpus_allowed(p, new_mask);
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++	return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * If @match_state is nonzero, it's the @p->state value just checked and
++ * not expected to change.  If it changes, i.e. @p might have woken up,
++ * then return zero.  When we succeed in waiting for @p to be off its CPU,
++ * we return a positive number (its total switch count).  If a second call
++ * a short while later returns the same number, the caller can be sure that
++ * @p has remained unscheduled the whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, long match_state)
++{
++	unsigned long flags;
++	bool running, on_rq;
++	unsigned long ncsw;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	for (;;) {
++		rq = task_rq(p);
++
++		/*
++		 * If the task is actively running on another CPU
++		 * still, just relax and busy-wait without holding
++		 * any locks.
++		 *
++		 * NOTE! Since we don't hold any locks, it's not
++		 * even sure that "rq" stays as the right runqueue!
++		 * But we don't care, since this will return false
++		 * if the runqueue has changed and p is actually now
++		 * running somewhere else!
++		 */
++		while (task_running(p) && p == rq->curr) {
++			if (match_state && unlikely(p->state != match_state))
++				return 0;
++			cpu_relax();
++		}
++
++		/*
++		 * Ok, time to look more closely! We need the rq
++		 * lock now, to be *sure*. If we're wrong, we'll
++		 * just go back and repeat.
++		 */
++		task_access_lock_irqsave(p, &lock, &flags);
++		trace_sched_wait_task(p);
++		running = task_running(p);
++		on_rq = p->on_rq;
++		ncsw = 0;
++		if (!match_state || p->state == match_state)
++			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++		task_access_unlock_irqrestore(p, lock, &flags);
++
++		/*
++		 * If it changed from the expected state, bail out now.
++		 */
++		if (unlikely(!ncsw))
++			break;
++
++		/*
++		 * Was it really running after all now that we
++		 * checked with the proper locks actually held?
++		 *
++		 * Oops. Go back and try again..
++		 */
++		if (unlikely(running)) {
++			cpu_relax();
++			continue;
++		}
++
++		/*
++		 * It's not enough that it's not actively running,
++		 * it must be off the runqueue _entirely_, and not
++		 * preempted!
++		 *
++		 * So if it was still runnable (but just not actively
++		 * running right now), it's preempted, and we should
++		 * yield - it could be a while.
++		 */
++		if (unlikely(on_rq)) {
++			ktime_t to = NSEC_PER_SEC / HZ;
++
++			set_current_state(TASK_UNINTERRUPTIBLE);
++			schedule_hrtimeout(&to, HRTIMER_MODE_REL);
++			continue;
++		}
++
++		/*
++		 * Ahh, all good. It wasn't running, and it wasn't
++		 * runnable, which means that it will never become
++		 * running in the future either. We're all done!
++		 */
++		break;
++	}
++
++	return ncsw;
++}
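++
++/*
++ * Example (sketch) of the double-call pattern described above, assuming @p
++ * was last seen in TASK_TRACED:
++ *
++ *	unsigned long ncsw = wait_task_inactive(p, TASK_TRACED);
++ *	(... some time later ...)
++ *	if (ncsw && wait_task_inactive(p, TASK_TRACED) == ncsw)
++ *		(... @p has remained unscheduled the whole time ...)
++ */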
++
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++	int cpu;
++
++	preempt_disable();
++	cpu = task_cpu(p);
++	if ((cpu != smp_processor_id()) && task_curr(p))
++		smp_send_reschedule(cpu);
++	preempt_enable();
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ *  - cpu_active must be a subset of cpu_online
++ *
++ *  - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ *    see __set_cpus_allowed_ptr(). At this point the newly online
++ *    CPU isn't yet part of the sched domains, and balancing will not
++ *    see it.
++ *
++ *  - on cpu-down we clear cpu_active() to mask the sched domains and
++ *    avoid the load balancer to place new tasks on the to be removed
++ *    CPU. Existing tasks will remain running there and will be taken
++ *    off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++	int nid = cpu_to_node(cpu);
++	const struct cpumask *nodemask = NULL;
++	enum { cpuset, possible, fail } state = cpuset;
++	int dest_cpu;
++
++	/*
++	 * If the node that the CPU is on has been offlined, cpu_to_node()
++	 * will return -1. There is no CPU on the node, and we should
++	 * select a CPU on another node.
++	 */
++	if (nid != -1) {
++		nodemask = cpumask_of_node(nid);
++
++		/* Look for allowed, online CPU in same node. */
++		for_each_cpu(dest_cpu, nodemask) {
++			if (!cpu_active(dest_cpu))
++				continue;
++			if (cpumask_test_cpu(dest_cpu, p->cpus_ptr))
++				return dest_cpu;
++		}
++	}
++
++	for (;;) {
++		/* Any allowed, online CPU? */
++		for_each_cpu(dest_cpu, p->cpus_ptr) {
++			if (!is_cpu_allowed(p, dest_cpu))
++				continue;
++			goto out;
++		}
++
++		/* No more Mr. Nice Guy. */
++		switch (state) {
++		case cpuset:
++			if (IS_ENABLED(CONFIG_CPUSETS)) {
++				cpuset_cpus_allowed_fallback(p);
++				state = possible;
++				break;
++			}
++			fallthrough;
++		case possible:
++			/*
++			 * XXX When called from select_task_rq() we only
++			 * hold p->pi_lock and again violate locking order.
++			 *
++			 * More yuck to audit.
++			 */
++			do_set_cpus_allowed(p, cpu_possible_mask);
++			state = fail;
++			break;
++
++		case fail:
++			BUG();
++			break;
++		}
++	}
++
++out:
++	if (state != cpuset) {
++		/*
++		 * Don't tell them about moving exiting tasks or
++		 * kernel threads (both mm NULL), since they never
++		 * leave kernel.
++		 */
++		if (p->mm && printk_ratelimit()) {
++			printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++					task_pid_nr(p), p->comm, cpu);
++		}
++	}
++
++	return dest_cpu;
++}
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	cpumask_t chk_mask, tmp;
++
++	if (unlikely(!cpumask_and(&chk_mask, p->cpus_ptr, cpu_active_mask)))
++		return select_fallback_rq(task_cpu(p), p);
++
++	if (
++#ifdef CONFIG_SCHED_SMT
++	    cpumask_and(&tmp, &chk_mask, &sched_sg_idle_mask) ||
++#endif
++	    cpumask_and(&tmp, &chk_mask, &sched_rq_watermark[IDLE_WM]) ||
++	    cpumask_and(&tmp, &chk_mask,
++			&sched_rq_watermark[task_sched_prio(p) + 1]))
++		return best_mask_cpu(task_cpu(p), &tmp);
++
++	return best_mask_cpu(task_cpu(p), &chk_mask);
++}
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++	static struct lock_class_key stop_pi_lock;
++	struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++	struct sched_param start_param = { .sched_priority = 0 };
++	struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++	if (stop) {
++		/*
++		 * Make it appear like a SCHED_FIFO task, it's something
++		 * userspace knows about and won't get confused about.
++		 *
++		 * Also, it will make PI more or less work without too
++		 * much confusion -- but then, stop work should not
++		 * rely on PI working anyway.
++		 */
++		sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++		/*
++		 * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++		 * adjust the effective priority of a task. As a result,
++		 * rt_mutex_setprio() can trigger (RT) balancing operations,
++		 * which can then trigger wakeups of the stop thread to push
++		 * around the current task.
++		 *
++		 * The stop task itself will never be part of the PI-chain, it
++		 * never blocks, therefore that ->pi_lock recursion is safe.
++		 * Tell lockdep about this by placing the stop->pi_lock in its
++		 * own class.
++		 */
++		lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++	}
++
++	cpu_rq(cpu)->stop = stop;
++
++	if (old_stop) {
++		/*
++		 * Reset it back to a normal scheduling policy so that
++		 * it can die in pieces.
++		 */
++		sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++	}
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it's executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++static int __set_cpus_allowed_ptr(struct task_struct *p,
++				  const struct cpumask *new_mask,
++				  u32 flags)
++{
++	const struct cpumask *cpu_valid_mask = cpu_active_mask;
++	int dest_cpu;
++	unsigned long irq_flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	int ret = 0;
++
++	raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++	rq = __task_access_lock(p, &lock);
++
++	if (p->flags & PF_KTHREAD || is_migration_disabled(p)) {
++		/*
++		 * Kernel threads are allowed on online && !active CPUs,
++		 * however, during cpu-hot-unplug, even these might get pushed
++		 * away if not KTHREAD_IS_PER_CPU.
++		 *
++		 * Specifically, migration_disabled() tasks must not fail the
++		 * cpumask_any_and_distribute() pick below, esp. so on
++		 * SCA_MIGRATE_ENABLE, otherwise we'll not call
++		 * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++		 */
++		cpu_valid_mask = cpu_online_mask;
++	}
++
++	/*
++	 * Must re-check here, to close a race against __kthread_bind(),
++	 * sched_setaffinity() is not guaranteed to observe the flag.
++	 */
++	if ((flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	if (cpumask_equal(&p->cpus_mask, new_mask))
++		goto out;
++
++	dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
++	if (dest_cpu >= nr_cpu_ids) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	__do_set_cpus_allowed(p, new_mask);
++
++	/* Can the task run on the task's current CPU? If so, we're done */
++	if (cpumask_test_cpu(task_cpu(p), new_mask))
++		goto out;
++
++	if (p->migration_disabled) {
++		if (likely(p->cpus_ptr != &p->cpus_mask))
++			__do_set_cpus_ptr(p, &p->cpus_mask);
++		p->migration_disabled = 0;
++		p->migration_flags |= MDF_FORCE_ENABLED;
++		/* When p is migrate_disabled, rq->lock should be held */
++		rq->nr_pinned--;
++	}
++
++	if (task_running(p) || p->state == TASK_WAKING) {
++		struct migration_arg arg = { p, dest_cpu };
++
++		/* Need help from migration thread: drop lock and wait. */
++		__task_access_unlock(p, lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++		stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++		return 0;
++	}
++	if (task_on_rq_queued(p)) {
++		/*
++		 * OK, since we're going to drop the lock immediately
++		 * afterwards anyway.
++		 */
++		update_rq_clock(rq);
++		rq = move_queued_task(rq, p, dest_cpu);
++		lock = &rq->lock;
++	}
++
++out:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++	return ret;
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++	return __set_cpus_allowed_ptr(p, new_mask, 0);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
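++
++/*
++ * Example (sketch): pin @p to a single CPU; -EINVAL is returned if the new
++ * mask does not intersect the valid CPUs:
++ *
++ *	ret = set_cpus_allowed_ptr(p, cpumask_of(cpu));
++ */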
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++	return 0;
++}
++
++static inline int
++__set_cpus_allowed_ptr(struct task_struct *p,
++		       const struct cpumask *new_mask,
++		       u32 flags)
++{
++	return set_cpus_allowed_ptr(p, new_mask);
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++	return false;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq;
++
++	if (!schedstat_enabled())
++		return;
++
++	rq = this_rq();
++
++#ifdef CONFIG_SMP
++	if (cpu == rq->cpu)
++		__schedstat_inc(rq->ttwu_local);
++	else {
++		/** Alt schedule FW ToDo:
++		 * How to do ttwu_wake_remote
++		 */
++	}
++#endif /* CONFIG_SMP */
++
++	__schedstat_inc(rq->ttwu_count);
++}
++
++/*
++ * Mark the task runnable and perform wakeup-preemption.
++ */
++static inline void
++ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++	check_preempt_curr(rq);
++	p->state = TASK_RUNNING;
++	trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++	if (p->sched_contributes_to_load)
++		rq->nr_uninterruptible--;
++
++	if (
++#ifdef CONFIG_SMP
++	    !(wake_flags & WF_MIGRATED) &&
++#endif
++	    p->in_iowait) {
++		delayacct_blkio_end(p);
++		atomic_dec(&task_rq(p)->nr_iowait);
++	}
++
++	activate_task(p, rq);
++	ttwu_do_wakeup(rq, p, 0);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ *   for (;;) {
++ *      set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ *      if (CONDITION)
++ *         break;
++ *
++ *      schedule();
++ *   }
++ *   __set_current_state(TASK_RUNNING);
++ *
++ * between set_current_state() and schedule(). In this case @p is still
++ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
++ * an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ *          %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	int ret = 0;
++
++	rq = __task_access_lock(p, &lock);
++	if (task_on_rq_queued(p)) {
++		/* check_preempt_curr() may use rq clock */
++		update_rq_clock(rq);
++		ttwu_do_wakeup(rq, p, wake_flags);
++		ret = 1;
++	}
++	__task_access_unlock(p, lock);
++
++	return ret;
++}
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++	struct llist_node *llist = arg;
++	struct rq *rq = this_rq();
++	struct task_struct *p, *t;
++	struct rq_flags rf;
++
++	if (!llist)
++		return;
++
++	/*
++	 * rq::ttwu_pending is a racy indication of outstanding wakeups.
++	 * Races such that false negatives are possible, since they
++	 * are shorter lived than false positives would be.
++	 */
++	WRITE_ONCE(rq->ttwu_pending, 0);
++
++	rq_lock_irqsave(rq, &rf);
++	update_rq_clock(rq);
++
++	llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++		if (WARN_ON_ONCE(p->on_cpu))
++			smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++		if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++			set_task_cpu(p, cpu_of(rq));
++
++		ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++	}
++
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++void send_call_function_single_ipi(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (!set_nr_if_polling(rq->idle))
++		arch_send_call_function_single_ipi(cpu);
++	else
++		trace_sched_wake_idle_without_ipi(cpu);
++}
++
++/*
++ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_pending() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++	WRITE_ONCE(rq->ttwu_pending, 1);
++	__smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(int cpu, int wake_flags)
++{
++	/*
++	 * Do not complicate things with the async wake_list while the CPU is
++	 * in hotplug state.
++	 */
++	if (!cpu_active(cpu))
++		return false;
++
++	/*
++	 * If the CPU does not share cache, then queue the task on the
++	 * remote rq's wakelist to avoid accessing remote data.
++	 */
++	if (!cpus_share_cache(smp_processor_id(), cpu))
++		return true;
++
++	/*
++	 * If the task is descheduling and is the only running task on the
++	 * CPU, then use the wakelist to offload the task activation to
++	 * the soon-to-be-idle CPU as the current CPU is likely busy.
++	 * nr_running is checked to avoid unnecessary task stacking.
++	 */
++	if ((wake_flags & WF_ON_CPU) && cpu_rq(cpu)->nr_running <= 1)
++		return true;
++
++	return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(cpu, wake_flags)) {
++		if (WARN_ON_ONCE(cpu == smp_processor_id()))
++			return false;
++
++		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++		__ttwu_queue_wakelist(p, cpu, wake_flags);
++		return true;
++	}
++
++	return false;
++}
++
++void wake_up_if_idle(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	rcu_read_lock();
++
++	if (!is_idle_task(rcu_dereference(rq->curr)))
++		goto out;
++
++	if (set_nr_if_polling(rq->idle)) {
++		trace_sched_wake_idle_without_ipi(cpu);
++	} else {
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		if (is_idle_task(rq->curr))
++			smp_send_reschedule(cpu);
++		/* Else CPU is not idle, do nothing here */
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	}
++
++out:
++	rcu_read_unlock();
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++	return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (ttwu_queue_wakelist(p, cpu, wake_flags))
++		return;
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++	ttwu_do_activate(rq, p, wake_flags);
++	raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ *  MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ *  A) UNLOCK of the rq(c0)->lock scheduling out task t
++ *  B) migration for t is required to synchronize *both* rq(c0)->lock and
++ *     rq(c1)->lock (if not at the same time, then in that order).
++ *  C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ *   CPU0            CPU1            CPU2
++ *
++ *   LOCK rq(0)->lock
++ *   sched-out X
++ *   sched-in Y
++ *   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(0)->lock // orders against CPU0
++ *                                   dequeue X
++ *                                   UNLOCK rq(0)->lock
++ *
++ *                                   LOCK rq(1)->lock
++ *                                   enqueue X
++ *                                   UNLOCK rq(1)->lock
++ *
++ *                   LOCK rq(1)->lock // orders against CPU2
++ *                   sched-out Z
++ *                   sched-in X
++ *                   UNLOCK rq(1)->lock
++ *
++ *
++ *  BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ *   1) smp_store_release(X->on_cpu, 0)   -- finish_task()
++ *   2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ *   LOCK rq(0)->lock LOCK X->pi_lock
++ *   dequeue X
++ *   sched-out X
++ *   smp_store_release(X->on_cpu, 0);
++ *
++ *                    smp_cond_load_acquire(&X->on_cpu, !VAL);
++ *                    X->state = WAKING
++ *                    set_task_cpu(X,2)
++ *
++ *                    LOCK rq(2)->lock
++ *                    enqueue X
++ *                    X->state = RUNNING
++ *                    UNLOCK rq(2)->lock
++ *
++ *                                          LOCK rq(2)->lock // orders against CPU1
++ *                                          sched-out Z
++ *                                          sched-in X
++ *                                          UNLOCK rq(2)->lock
++ *
++ *                    UNLOCK X->pi_lock
++ *   UNLOCK rq(0)->lock
++ *
++ *
++ * However, for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ *   If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ *  - p->sched_class
++ *  - p->cpus_ptr
++ *  - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ *  - ttwu_runnable()    -- old rq, unavoidable, see comment there;
++ *  - ttwu_queue()       -- new rq, for enqueue of the task;
++ *  - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ *	   %false otherwise.
++ */
++static int try_to_wake_up(struct task_struct *p, unsigned int state,
++			  int wake_flags)
++{
++	unsigned long flags;
++	int cpu, success = 0;
++
++	preempt_disable();
++	if (p == current) {
++		/*
++		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++		 * == smp_processor_id()'. Together this means we can special
++		 * case the whole 'p->on_rq && ttwu_runnable()' case below
++		 * without taking any locks.
++		 *
++		 * In particular:
++		 *  - we rely on Program-Order guarantees for all the ordering,
++		 *  - we're serialized against set_special_state() by virtue of
++		 *    it disabling IRQs (this allows not taking ->pi_lock).
++		 */
++		if (!(p->state & state))
++			goto out;
++
++		success = 1;
++		trace_sched_waking(p);
++		p->state = TASK_RUNNING;
++		trace_sched_wakeup(p);
++		goto out;
++	}
++
++	/*
++	 * If we are going to wake up a thread waiting for CONDITION we
++	 * need to ensure that CONDITION=1 done by the caller can not be
++	 * reordered with p->state check below. This pairs with smp_store_mb()
++	 * in set_current_state() that the waiting thread does.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	smp_mb__after_spinlock();
++	if (!(p->state & state))
++		goto unlock;
++
++	trace_sched_waking(p);
++
++	/* We're going to change ->state: */
++	success = 1;
++
++	/*
++	 * Ensure we load p->on_rq _after_ p->state, otherwise it would
++	 * be possible to, falsely, observe p->on_rq == 0 and get stuck
++	 * in smp_cond_load_acquire() below.
++	 *
++	 * sched_ttwu_pending()			try_to_wake_up()
++	 *   STORE p->on_rq = 1			  LOAD p->state
++	 *   UNLOCK rq->lock
++	 *
++	 * __schedule() (switch to task 'p')
++	 *   LOCK rq->lock			  smp_rmb();
++	 *   smp_mb__after_spinlock();
++	 *   UNLOCK rq->lock
++	 *
++	 * [task p]
++	 *   STORE p->state = UNINTERRUPTIBLE	  LOAD p->on_rq
++	 *
++	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++	 * __schedule().  See the comment for smp_mb__after_spinlock().
++	 *
++	 * A similar smp_rmb() lives in try_invoke_on_locked_down_task().
++	 */
++	smp_rmb();
++	if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++		goto unlock;
++
++#ifdef CONFIG_SMP
++	/*
++	 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++	 * possible to, falsely, observe p->on_cpu == 0.
++	 *
++	 * One must be running (->on_cpu == 1) in order to remove oneself
++	 * from the runqueue.
++	 *
++	 * __schedule() (switch to task 'p')	try_to_wake_up()
++	 *   STORE p->on_cpu = 1		  LOAD p->on_rq
++	 *   UNLOCK rq->lock
++	 *
++	 * __schedule() (put 'p' to sleep)
++	 *   LOCK rq->lock			  smp_rmb();
++	 *   smp_mb__after_spinlock();
++	 *   STORE p->on_rq = 0			  LOAD p->on_cpu
++	 *
++	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++	 * __schedule().  See the comment for smp_mb__after_spinlock().
++	 *
++	 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++	 * schedule()'s deactivate_task() has 'happened' and p will no longer
++	 * care about its own p->state. See the comment in __schedule().
++	 */
++	smp_acquire__after_ctrl_dep();
++
++	/*
++	 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++	 * == 0), which means we need to do an enqueue, change p->state to
++	 * TASK_WAKING such that we can unlock p->pi_lock before doing the
++	 * enqueue, such as ttwu_queue_wakelist().
++	 */
++	p->state = TASK_WAKING;
++
++	/*
++	 * If the owning (remote) CPU is still in the middle of schedule() with
++	 * this task as prev, consider queueing p on the remote CPU's wake_list
++	 * which potentially sends an IPI instead of spinning on p->on_cpu to
++	 * let the waker make forward progress. This is safe because IRQs are
++	 * disabled and the IPI will deliver after on_cpu is cleared.
++	 *
++	 * Ensure we load task_cpu(p) after p->on_cpu:
++	 *
++	 * set_task_cpu(p, cpu);
++	 *   STORE p->cpu = @cpu
++	 * __schedule() (switch to task 'p')
++	 *   LOCK rq->lock
++	 *   smp_mb__after_spin_lock()          smp_cond_load_acquire(&p->on_cpu)
++	 *   STORE p->on_cpu = 1                LOAD p->cpu
++	 *
++	 * to ensure we observe the correct CPU on which the task is currently
++	 * scheduling.
++	 */
++	if (smp_load_acquire(&p->on_cpu) &&
++	    ttwu_queue_wakelist(p, task_cpu(p), wake_flags | WF_ON_CPU))
++		goto unlock;
++
++	/*
++	 * If the owning (remote) CPU is still in the middle of schedule() with
++	 * this task as prev, wait until it's done referencing the task.
++	 *
++	 * Pairs with the smp_store_release() in finish_task().
++	 *
++	 * This ensures that tasks getting woken will be fully ordered against
++	 * their previous state and preserve Program Order.
++	 */
++	smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++	sched_task_ttwu(p);
++
++	cpu = select_task_rq(p);
++
++	if (cpu != task_cpu(p)) {
++		if (p->in_iowait) {
++			delayacct_blkio_end(p);
++			atomic_dec(&task_rq(p)->nr_iowait);
++		}
++
++		wake_flags |= WF_MIGRATED;
++		psi_ttwu_dequeue(p);
++		set_task_cpu(p, cpu);
++	}
++#else
++	cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++	ttwu_queue(p, cpu, wake_flags);
++unlock:
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++out:
++	if (success)
++		ttwu_stat(p, task_cpu(p), wake_flags);
++	preempt_enable();
++
++	return success;
++}
++
++/**
++ * try_invoke_on_locked_down_task - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * If the specified task can be quickly locked into a definite state
++ * (either sleeping or on a given runqueue), arrange to keep it in that
++ * state while invoking @func(@arg).  This function can use ->on_rq and
++ * task_curr() to work out what the state is, if required.  Given that
++ * @func can be invoked with a runqueue lock held, it had better be quite
++ * lightweight.
++ *
++ * Returns:
++ *	@false if the task slipped out from under the locks.
++ *	@true if the task was locked onto a runqueue or is sleeping.
++ *		However, @func can override this by returning @false.
++ */
++bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct task_struct *t, void *arg), void *arg)
++{
++	struct rq_flags rf;
++	bool ret = false;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++	if (p->on_rq) {
++		rq = __task_rq_lock(p, &rf);
++		if (task_rq(p) == rq)
++			ret = func(p, arg);
++		__task_rq_unlock(rq, &rf);
++	} else {
++		switch (p->state) {
++		case TASK_RUNNING:
++		case TASK_WAKING:
++			break;
++		default:
++			smp_rmb(); // See smp_rmb() comment in try_to_wake_up().
++			if (!p->on_rq)
++				ret = func(p, arg);
++		}
++	}
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++	return ret;
++}
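++
++/*
++ * Example (sketch): a lightweight callback suitable for
++ * try_invoke_on_locked_down_task(); 'task_is_off_rq' is illustrative:
++ *
++ *	static bool task_is_off_rq(struct task_struct *t, void *arg)
++ *	{
++ *		*(bool *)arg = !t->on_rq;
++ *		return true;
++ *	}
++ */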
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++	return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++	return try_to_wake_up(p, state, 0);
++}
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup used by init_idle() too:
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	p->on_rq			= 0;
++	p->on_cpu			= 0;
++	p->utime			= 0;
++	p->stime			= 0;
++	p->sched_time			= 0;
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++	INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++	p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++	p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	__sched_fork(clone_flags, p);
++	/*
++	 * We mark the process as NEW here. This guarantees that
++	 * nobody will actually run it, and a signal or other external
++	 * event cannot wake it up and insert it on the runqueue either.
++	 */
++	p->state = TASK_NEW;
++
++	/*
++	 * Make sure we do not leak PI boosting priority to the child.
++	 */
++	p->prio = current->normal_prio;
++
++	/*
++	 * Revert to default priority/policy on fork if requested.
++	 */
++	if (unlikely(p->sched_reset_on_fork)) {
++		if (task_has_rt_policy(p)) {
++			p->policy = SCHED_NORMAL;
++			p->static_prio = NICE_TO_PRIO(0);
++			p->rt_priority = 0;
++		} else if (PRIO_TO_NICE(p->static_prio) < 0)
++			p->static_prio = NICE_TO_PRIO(0);
++
++		p->prio = p->normal_prio = normal_prio(p);
++
++		/*
++		 * We don't need the reset flag anymore after the fork. It has
++		 * fulfilled its duty:
++		 */
++		p->sched_reset_on_fork = 0;
++	}
++
++	/*
++	 * The child is not yet in the pid-hash so no cgroup attach races,
++	 * and the cgroup is pinned to this child because cgroup_fork()
++	 * runs before sched_fork().
++	 *
++	 * Silence PROVE_RCU.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	/*
++	 * Share the timeslice between parent and child, so that the
++	 * total amount of pending timeslices in the system doesn't change,
++	 * resulting in more scheduling fairness.
++	 */
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	rq->curr->time_slice /= 2;
++	p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++	hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++	if (p->time_slice < RESCHED_NS) {
++		p->time_slice = sched_timeslice_ns;
++		resched_curr(rq);
++	}
++	sched_task_fork(p, rq);
++	raw_spin_unlock(&rq->lock);
++
++	rseq_migrate(p);
++	/*
++	 * We're setting the CPU for the first time, we don't migrate,
++	 * so use __set_task_cpu().
++	 */
++	__set_task_cpu(p, cpu_of(rq));
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++#ifdef CONFIG_SCHED_INFO
++	if (unlikely(sched_info_on()))
++		memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++	init_task_preempt_count(p);
++
++	return 0;
++}
++
++void sched_post_fork(struct task_struct *p) {}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++static bool __initdata __sched_schedstats = false;
++
++static void set_schedstats(bool enabled)
++{
++	if (enabled)
++		static_branch_enable(&sched_schedstats);
++	else
++		static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++	if (!schedstat_enabled()) {
++		pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++		static_branch_enable(&sched_schedstats);
++	}
++}
++
++static int __init setup_schedstats(char *str)
++{
++	int ret = 0;
++	if (!str)
++		goto out;
++
++	/*
++	 * This code is called before jump labels have been set up, so we can't
++	 * change the static branch directly just yet.  Instead set a temporary
++	 * variable so init_schedstats() can do it later.
++	 */
++	if (!strcmp(str, "enable")) {
++		__sched_schedstats = true;
++		ret = 1;
++	} else if (!strcmp(str, "disable")) {
++		__sched_schedstats = false;
++		ret = 1;
++	}
++out:
++	if (!ret)
++		pr_warn("Unable to parse schedstats=\n");
++
++	return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
++static void __init init_schedstats(void)
++{
++	set_schedstats(__sched_schedstats);
++}
++
++#ifdef CONFIG_PROC_SYSCTL
++int sysctl_schedstats(struct ctl_table *table, int write,
++			 void __user *buffer, size_t *lenp, loff_t *ppos)
++{
++	struct ctl_table t;
++	int err;
++	int state = static_branch_likely(&sched_schedstats);
++
++	if (write && !capable(CAP_SYS_ADMIN))
++		return -EPERM;
++
++	t = *table;
++	t.data = &state;
++	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++	if (err < 0)
++		return err;
++	if (write)
++		set_schedstats(state);
++	return err;
++}
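++
++/*
++ * At runtime the same switch is exposed via sysctl, e.g.:
++ *
++ *	sysctl kernel.sched_schedstats=1
++ */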
++#endif /* CONFIG_PROC_SYSCTL */
++#else  /* !CONFIG_SCHEDSTATS */
++static inline void init_schedstats(void) {}
++#endif /* CONFIG_SCHEDSTATS */
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	p->state = TASK_RUNNING;
++	rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++	rseq_migrate(p);
++	/*
++	 * Fork balancing, do it here and not earlier because:
++	 * - cpus_ptr can change in the fork path
++	 * - any previously selected CPU might disappear through hotplug
++	 *
++	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++	 * as we're not fully set-up yet.
++	 */
++	__set_task_cpu(p, cpu_of(rq));
++#endif
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	activate_task(p, rq);
++	trace_sched_wakeup_new(p);
++	check_preempt_curr(rq);
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++	static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++	static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++	if (!static_branch_unlikely(&preempt_notifier_key))
++		WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++	hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++	hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				   struct task_struct *next)
++{
++	struct preempt_notifier *notifier;
++
++	hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++		notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++	if (static_branch_unlikely(&preempt_notifier_key))
++		__fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++				 struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++	/*
++	 * Claim the task as running, we do this before switching to it
++	 * such that any running task will have this set.
++	 *
++	 * See the ttwu() WF_ON_CPU case and its ordering comment.
++	 */
++	WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++	/*
++	 * This must be the very last reference to @prev from this CPU. After
++	 * p->on_cpu is cleared, the task can be moved to a different CPU. We
++	 * must ensure this doesn't happen until the switch is completely
++	 * finished.
++	 *
++	 * In particular, the load of prev->state in finish_task_switch() must
++	 * happen before this.
++	 *
++	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++	 */
++	smp_store_release(&prev->on_cpu, 0);
++#else
++	prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
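++/*
++ * Run a spliced list of balance callbacks with rq->lock held, clearing
++ * each link before the call so a callback may safely requeue itself.
++ */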
++static void do_balance_callbacks(struct rq *rq, struct callback_head *head)
++{
++	void (*func)(struct rq *rq);
++	struct callback_head *next;
++
++	lockdep_assert_held(&rq->lock);
++
++	while (head) {
++		func = (void (*)(struct rq *))head->func;
++		next = head->next;
++		head->next = NULL;
++		head = next;
++
++		func(rq);
++	}
++}
++
++static void balance_push(struct rq *rq);
++
++struct callback_head balance_push_callback = {
++	.next = NULL,
++	.func = (void (*)(struct callback_head *))balance_push,
++};
++
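++/*
++ * Detach the queued balance callbacks from @rq under rq->lock; the
++ * caller runs them later via do_balance_callbacks().
++ */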
++static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
++{
++	struct callback_head *head = rq->balance_callback;
++
++	if (head) {
++		lockdep_assert_held(&rq->lock);
++		rq->balance_callback = NULL;
++	}
++
++	return head;
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++	do_balance_callbacks(rq, splice_balance_callbacks(rq));
++}
++
++static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
++{
++	unsigned long flags;
++
++	if (unlikely(head)) {
++		raw_spin_lock_irqsave(&rq->lock, flags);
++		do_balance_callbacks(rq, head);
++		raw_spin_unlock_irqrestore(&rq->lock, flags);
++	}
++}
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++
++static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
++{
++	return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
++{
++}
++
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++	/*
++	 * The runqueue lock will be released by the next
++	 * task (which is an invalid locking op but in the case
++	 * of the scheduler it's an obvious special-case), so we
++	 * do an early lockdep release here:
++	 */
++	spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++	/* this is a valid case when another task releases the spinlock */
++	rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++	/*
++	 * If we are tracking spinlock dependencies then we have to
++	 * fix up the runqueue lock - which gets 'carried over' from
++	 * prev into current:
++	 */
++	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++	__balance_callbacks(rq);
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next)	do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch()	do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++	if (unlikely(current->kmap_ctrl.idx))
++		__kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++		    struct task_struct *next)
++{
++	kcov_prepare_switch(prev);
++	sched_info_switch(rq, prev, next);
++	perf_event_task_sched_out(prev, next);
++	rseq_preempt(prev);
++	fire_sched_out_preempt_notifiers(prev, next);
++	kmap_local_sched_out();
++	prepare_task(next);
++	prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock.  (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. prev == current is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	struct rq *rq = this_rq();
++	struct mm_struct *mm = rq->prev_mm;
++	long prev_state;
++
++	/*
++	 * The previous task will have left us with a preempt_count of 2
++	 * because it left us after:
++	 *
++	 *	schedule()
++	 *	  preempt_disable();			// 1
++	 *	  __schedule()
++	 *	    raw_spin_lock_irq(&rq->lock)	// 2
++	 *
++	 * Also, see FORK_PREEMPT_COUNT.
++	 */
++	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++		      "corrupted preempt_count: %s/%d/0x%x\n",
++		      current->comm, current->pid, preempt_count()))
++		preempt_count_set(FORK_PREEMPT_COUNT);
++
++	rq->prev_mm = NULL;
++
++	/*
++	 * A task struct has one reference for the use as "current".
++	 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++	 * schedule one last time. The schedule call will never return, and
++	 * the scheduled task must drop that reference.
++	 *
++	 * We must observe prev->state before clearing prev->on_cpu (in
++	 * finish_task), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could race with its RUNNING -> DEAD
++	 * transition, resulting in a double drop.
++	 */
++	prev_state = prev->state;
++	vtime_task_switch(prev);
++	perf_event_task_sched_in(prev, current);
++	finish_task(prev);
++	finish_lock_switch(rq);
++	finish_arch_post_lock_switch();
++	kcov_finish_switch(current);
++	/*
++	 * kmap_local_sched_out() is invoked with rq::lock held and
++	 * interrupts disabled. There is no requirement for that, but the
++	 * sched out code does not have an interrupt enabled section.
++	 * Restoring the maps on sched in does not require interrupts being
++	 * disabled either.
++	 */
++	kmap_local_sched_in();
++
++	fire_sched_in_preempt_notifiers(current);
++	/*
++	 * When switching through a kernel thread, the loop in
++	 * membarrier_{private,global}_expedited() may have observed that
++	 * kernel thread and not issued an IPI. It is therefore possible to
++	 * schedule between user->kernel->user threads without passing through
++	 * switch_mm(). Membarrier requires a barrier after storing to
++	 * rq->curr, before returning to userspace, so provide them here:
++	 *
++	 * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++	 *   provided by mmdrop(),
++	 * - a sync_core for SYNC_CORE.
++	 */
++	if (mm) {
++		membarrier_mm_sync_core_before_usermode(mm);
++		mmdrop(mm);
++	}
++	if (unlikely(prev_state == TASK_DEAD)) {
++		/*
++		 * Remove function-return probe instances associated with this
++		 * task and put them back on the free list.
++		 */
++		kprobe_flush_task(prev);
++
++		/* Task is done with its stack. */
++		put_task_stack(prev);
++
++		put_task_struct_rcu_user(prev);
++	}
++
++	tick_nohz_task_switch();
++	return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++	__releases(rq->lock)
++{
++	/*
++	 * New tasks start with FORK_PREEMPT_COUNT, see there and
++	 * finish_task_switch() for details.
++	 *
++	 * finish_task_switch() will drop rq->lock and lower preempt_count
++	 * and the preempt_enable() will end up enabling preemption (on
++	 * PREEMPT_COUNT kernels).
++	 */
++
++	finish_task_switch(prev);
++	preempt_enable();
++
++	if (current->set_child_tid)
++		put_user(task_pid_vnr(current), current->set_child_tid);
++
++	calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++	       struct task_struct *next)
++{
++	prepare_task_switch(rq, prev, next);
++
++	/*
++	 * For paravirt, this is coupled with an exit in switch_to to
++	 * combine the page table reload and the switch backend into
++	 * one hypercall.
++	 */
++	arch_start_context_switch(prev);
++
++	/*
++	 * kernel -> kernel   lazy + transfer active
++	 *   user -> kernel   lazy + mmgrab() active
++	 *
++	 * kernel ->   user   switch + mmdrop() active
++	 *   user ->   user   switch
++	 */
++	if (!next->mm) {                                // to kernel
++		enter_lazy_tlb(prev->active_mm, next);
++
++		next->active_mm = prev->active_mm;
++		if (prev->mm)                           // from user
++			mmgrab(prev->active_mm);
++		else
++			prev->active_mm = NULL;
++	} else {                                        // to user
++		membarrier_switch_mm(rq, prev->active_mm, next->mm);
++		/*
++		 * sys_membarrier() requires an smp_mb() between setting
++		 * rq->curr / membarrier_switch_mm() and returning to userspace.
++		 *
++		 * The below provides this either through switch_mm(), or in
++		 * case 'prev->active_mm == next->mm' through
++		 * finish_task_switch()'s mmdrop().
++		 */
++		switch_mm_irqs_off(prev->active_mm, next->mm, next);
++
++		if (!prev->mm) {                        // from kernel
++			/* will mmdrop() in finish_task_switch(). */
++			rq->prev_mm = prev->active_mm;
++			prev->active_mm = NULL;
++		}
++	}
++
++	prepare_lock_switch(rq, next);
++
++	/* Here we just switch the register state and the stack. */
++	switch_to(prev, next, prev);
++	barrier();
++
++	return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned long nr_running(void)
++{
++	unsigned long i, sum = 0;
++
++	for_each_online_cpu(i)
++		sum += cpu_rq(i)->nr_running;
++
++	return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race.  The caller is responsible for using it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++	return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
++
++unsigned long long nr_context_switches(void)
++{
++	int i;
++	unsigned long long sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += cpu_rq(i)->nr_switches;
++
++	return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: they prefer shallow idle state
++ * selection for a CPU with IO-waiters, even though the waiting task might
++ * not end up running on that CPU when it does become runnable.
++ */
++
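++/* Number of tasks blocked on IO that are accounted to @cpu's runqueue. */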
++unsigned long nr_iowait_cpu(int cpu)
++{
++	return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait accounting is to account the idle time that we
++ * could have spent running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU, only the one
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means that, when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, due to under-accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU; it can wake up on a CPU other
++ * than the one it blocked on. This means the per-CPU IO-wait number is
++ * meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned long nr_iowait(void)
++{
++	unsigned long i, sum = 0;
++
++	for_each_possible_cpu(i)
++		sum += nr_iowait_cpu(i);
++
++	return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++	struct task_struct *p = current;
++	unsigned long flags;
++	int dest_cpu;
++	struct rq *rq;
++
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	rq = this_rq();
++
++	if (rq != task_rq(p) || rq->nr_running < 2)
++		goto unlock;
++
++	dest_cpu = select_task_rq(p);
++	if (dest_cpu == smp_processor_id())
++		goto unlock;
++
++	if (likely(cpu_active(dest_cpu))) {
++		struct migration_arg arg = { p, dest_cpu };
++
++		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++		stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
++		return;
++	}
++unlock:
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
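++/*
++ * Charge the time @p has run since last_ran to its accounting fields
++ * and shrink its remaining time slice accordingly.
++ */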
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++	s64 ns = rq->clock_task - p->last_ran;
++
++	p->sched_time += ns;
++	account_group_exec_runtime(p, ns);
++
++	p->time_slice -= ns;
++	p->last_ran = rq->clock_task;
++}
++
++/*
++ * Return accounted runtime for the task.
++ * Return separately the current task's pending runtime that has not been
++ * accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++	u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++	/*
++	 * 64-bit doesn't need locks to atomically read a 64-bit value.
++	 * So we have an optimization chance when the task's delta_exec is 0.
++	 * Reading ->on_cpu is racy, but this is ok.
++	 *
++	 * If we race with it leaving CPU, we'll take a lock. So we're correct.
++	 * If we race with it entering CPU, unaccounted time is 0. This is
++	 * indistinguishable from the read occurring a few cycles earlier.
++	 * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++	 * been accounted, so we're correct here as well.
++	 */
++	if (!p->on_cpu || !task_on_rq_queued(p))
++		return tsk_seruntime(p);
++#endif
++
++	rq = task_access_lock_irqsave(p, &lock, &flags);
++	/*
++	 * Must be ->curr _and_ ->on_rq.  If dequeued, we would
++	 * project cycles that may never be accounted to this
++	 * thread, breaking clock_gettime().
++	 */
++	if (p == rq->curr && task_on_rq_queued(p)) {
++		update_rq_clock(rq);
++		update_curr(rq, p);
++	}
++	ns = tsk_seruntime(p);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++	return ns;
++}
++
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++	struct task_struct *p = rq->curr;
++
++	if (is_idle_task(p))
++		return;
++
++	update_curr(rq, p);
++	cpufreq_update_util(rq, 0);
++
++	/*
++	 * Tasks that have less than RESCHED_NS of time slice left will be
++	 * rescheduled.
++	 */
++	if (p->time_slice >= RESCHED_NS)
++		return;
++	set_tsk_need_resched(p);
++	set_preempt_need_resched();
++}
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++	int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++	u64 resched_latency, now = rq_clock(rq);
++	static bool warned_once;
++
++	if (sysctl_resched_latency_warn_once && warned_once)
++		return 0;
++
++	if (!need_resched() || !latency_warn_ms)
++		return 0;
++
++	if (system_state == SYSTEM_BOOTING)
++		return 0;
++
++	if (!rq->last_seen_need_resched_ns) {
++		rq->last_seen_need_resched_ns = now;
++		rq->ticks_without_resched = 0;
++		return 0;
++	}
++
++	rq->ticks_without_resched++;
++	resched_latency = now - rq->last_seen_need_resched_ns;
++	if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++		return 0;
++
++	warned_once = true;
++
++	return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++	long val;
++
++	if (kstrtol(str, 0, &val)) {
++		pr_warn("Unable to set resched_latency_warn_ms\n");
++		return 1;
++	}
++
++	sysctl_resched_latency_warn_ms = val;
++	return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void scheduler_tick(void)
++{
++	int cpu __maybe_unused = smp_processor_id();
++	struct rq *rq = cpu_rq(cpu);
++	u64 resched_latency;
++
++	arch_scale_freq_tick();
++	sched_clock_tick();
++
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	scheduler_task_tick(rq);
++	if (sched_feat(LATENCY_WARN))
++		resched_latency = cpu_resched_latency(rq);
++	calc_global_load_tick(rq);
++
++	rq->last_tick = rq->clock;
++	raw_spin_unlock(&rq->lock);
++
++	if (sched_feat(LATENCY_WARN) && resched_latency)
++		resched_latency_warn(cpu, resched_latency);
++
++	perf_event_task_tick();
++}
++
++#ifdef CONFIG_SCHED_SMT
++static inline int active_load_balance_cpu_stop(void *data)
++{
++	struct rq *rq = this_rq();
++	struct task_struct *p = data;
++	cpumask_t tmp;
++	unsigned long flags;
++
++	local_irq_save(flags);
++
++	raw_spin_lock(&p->pi_lock);
++	raw_spin_lock(&rq->lock);
++
++	rq->active_balance = 0;
++	/* _something_ may have changed the task, double check again */
++	if (task_on_rq_queued(p) && task_rq(p) == rq &&
++	    cpumask_and(&tmp, p->cpus_ptr, &sched_sg_idle_mask) &&
++	    !is_migration_disabled(p)) {
++		int cpu = cpu_of(rq);
++		int dcpu = __best_mask_cpu(cpu, &tmp,
++					   per_cpu(sched_cpu_llc_mask, cpu));
++		rq = move_queued_task(rq, p, dcpu);
++	}
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock(&p->pi_lock);
++
++	local_irq_restore(flags);
++
++	return 0;
++}
++
++/* sg_balance_trigger - trigger sibling group balance for @cpu */
++static inline int sg_balance_trigger(const int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++	struct task_struct *curr;
++	int res;
++
++	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++		return 0;
++	curr = rq->curr;
++	res = (!is_idle_task(curr)) && (1 == rq->nr_running) &&
++	      cpumask_intersects(curr->cpus_ptr, &sched_sg_idle_mask) &&
++	      !is_migration_disabled(curr) && (!rq->active_balance);
++
++	if (res)
++		rq->active_balance = 1;
++
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	if (res)
++		stop_one_cpu_nowait(cpu, active_load_balance_cpu_stop,
++				    curr, &rq->active_balance_work);
++	return res;
++}
++
++/*
++ * sg_balance_check - sibling group balance check for run queue @rq
++ */
++static inline void sg_balance_check(struct rq *rq)
++{
++	cpumask_t chk;
++	int cpu;
++
++	/* exit when no sg in idle */
++	if (cpumask_empty(&sched_sg_idle_mask))
++		return;
++
++	/* exit when cpu is offline */
++	if (unlikely(!rq->online))
++		return;
++
++	cpu = cpu_of(rq);
++	/*
++	 * Only a cpu in the sibling idle group will do the checking and then
++	 * find potential cpus to which the currently running task can migrate
++	 */
++	if (cpumask_test_cpu(cpu, &sched_sg_idle_mask) &&
++	    cpumask_andnot(&chk, cpu_online_mask, &sched_rq_pending_mask) &&
++	    cpumask_andnot(&chk, &chk, &sched_rq_watermark[IDLE_WM])) {
++		int i, tried = 0;
++
++		for_each_cpu_wrap(i, &chk, cpu) {
++			if (cpumask_subset(cpu_smt_mask(i), &chk)) {
++				if (sg_balance_trigger(i))
++					return;
++				if (tried)
++					return;
++				tried++;
++			}
++		}
++	}
++}
++#endif /* CONFIG_SCHED_SMT */
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++	int			cpu;
++	atomic_t		state;
++	struct delayed_work	work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE	0
++#define TICK_SCHED_REMOTE_OFFLINING	1
++#define TICK_SCHED_REMOTE_RUNNING	2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ *          TICK_SCHED_REMOTE_OFFLINE
++ *                    |   ^
++ *                    |   |
++ *                    |   | sched_tick_remote()
++ *                    |   |
++ *                    |   |
++ *                    +--TICK_SCHED_REMOTE_OFFLINING
++ *                    |   ^
++ *                    |   |
++ * sched_tick_start() |   | sched_tick_stop()
++ *                    |   |
++ *                    V   |
++ *          TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
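++/*
++ * Offload the scheduler tick of a nohz_full CPU: run the per-rq tick
++ * bookkeeping from an unbound workqueue, roughly once per second.
++ */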
++static void sched_tick_remote(struct work_struct *work)
++{
++	struct delayed_work *dwork = to_delayed_work(work);
++	struct tick_work *twork = container_of(dwork, struct tick_work, work);
++	int cpu = twork->cpu;
++	struct rq *rq = cpu_rq(cpu);
++	struct task_struct *curr;
++	unsigned long flags;
++	u64 delta;
++	int os;
++
++	/*
++	 * Handle the tick only if it appears the remote CPU is running in full
++	 * dynticks mode. The check is racy by nature, but missing a tick or
++	 * having one too many is no big deal because the scheduler tick updates
++	 * statistics and checks timeslices in a time-independent way, regardless
++	 * of when exactly it is running.
++	 */
++	if (!tick_nohz_tick_stopped_cpu(cpu))
++		goto out_requeue;
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	curr = rq->curr;
++	if (cpu_is_offline(cpu))
++		goto out_unlock;
++
++	update_rq_clock(rq);
++	if (!is_idle_task(curr)) {
++		/*
++		 * Make sure the next tick runs within a reasonable
++		 * amount of time.
++		 */
++		delta = rq_clock_task(rq) - curr->last_ran;
++		WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++	}
++	scheduler_task_tick(rq);
++
++	calc_load_nohz_remote(rq);
++out_unlock:
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++out_requeue:
++	/*
++	 * Run the remote tick once per second (1Hz). This arbitrary
++	 * frequency is large enough to avoid overload but short enough
++	 * to keep scheduler internal stats reasonably up to date.  But
++	 * first update state to reflect hotplug activity if required.
++	 */
++	os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++	if (os == TICK_SCHED_REMOTE_RUNNING)
++		queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++	int os;
++	struct tick_work *twork;
++
++	if (housekeeping_cpu(cpu, HK_FLAG_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++	WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++	if (os == TICK_SCHED_REMOTE_OFFLINE) {
++		twork->cpu = cpu;
++		INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++		queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++	}
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++	struct tick_work *twork;
++
++	if (housekeeping_cpu(cpu, HK_FLAG_TICK))
++		return;
++
++	WARN_ON_ONCE(!tick_work_cpu);
++
++	twork = per_cpu_ptr(tick_work_cpu, cpu);
++	cancel_delayed_work_sync(&twork->work);
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++	tick_work_cpu = alloc_percpu(struct tick_work);
++	BUG_ON(!tick_work_cpu);
++	return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++				defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++	if (preempt_count() == val) {
++		unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++		current->preempt_disable_ip = ip;
++#endif
++		trace_preempt_off(CALLER_ADDR0, ip);
++	}
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++		return;
++#endif
++	__preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Spinlock count overflowing soon?
++	 */
++	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++				PREEMPT_MASK - 10);
++#endif
++	preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in equals the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++	if (preempt_count() == val)
++		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	/*
++	 * Underflow?
++	 */
++	if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++		return;
++	/*
++	 * Is the spinlock portion underflowing?
++	 */
++	if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++			!(preempt_count() & PREEMPT_MASK)))
++		return;
++#endif
++
++	preempt_latency_stop(val);
++	__preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++	return p->preempt_disable_ip;
++#else
++	return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++	/* Save this before calling printk(), since that will clobber it */
++	unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++	if (oops_in_progress)
++		return;
++
++	printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++		prev->comm, prev->pid, preempt_count());
++
++	debug_show_held_locks(prev);
++	print_modules();
++	if (irqs_disabled())
++		print_irqtrace_events(prev);
++	if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)
++	    && in_atomic_preempt_off()) {
++		pr_err("Preemption disabled at:");
++		print_ip_sym(KERN_ERR, preempt_disable_ip);
++	}
++	if (panic_on_warn)
++		panic("scheduling while atomic\n");
++
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++	if (task_stack_end_corrupted(prev))
++		panic("corrupted stack end detected inside scheduler\n");
++
++	if (task_scs_end_corrupted(prev))
++		panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++	if (!preempt && prev->state && prev->non_block_count) {
++		printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++			prev->comm, prev->pid, prev->non_block_count);
++		dump_stack();
++		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++	}
++#endif
++
++	if (unlikely(in_atomic_preempt_off())) {
++		__schedule_bug(prev);
++		preempt_count_set(PREEMPT_DISABLED);
++	}
++	rcu_sleep_check();
++	SCHED_WARN_ON(ct_state() == CONTEXT_USER);
++
++	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++	schedstat_inc(this_rq()->sched_count);
++}
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
++	       sched_rq_pending_mask.bits[0],
++	       sched_rq_watermark[IDLE_WM].bits[0],
++	       sched_sg_idle_mask.bits[0]);
++}
++#else
++inline void alt_sched_debug(void) {}
++#endif
++
++#ifdef	CONFIG_SMP
++
++#define SCHED_RQ_NR_MIGRATION (32U)
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ * Will try to migrate the smaller of half of @rq's nr_running tasks and
++ * SCHED_RQ_NR_MIGRATION tasks to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++	struct task_struct *p, *skip = rq->curr;
++	int nr_migrated = 0;
++	int nr_tries = min(rq->nr_running / 2, SCHED_RQ_NR_MIGRATION);
++
++	while (skip != rq->idle && nr_tries &&
++	       (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++		skip = sched_rq_next_task(p, rq);
++		if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++			__SCHED_DEQUEUE_TASK(p, rq, 0, );
++			set_task_cpu(p, dest_cpu);
++			__SCHED_ENQUEUE_TASK(p, dest_rq, 0);
++			nr_migrated++;
++		}
++		nr_tries--;
++	}
++
++	return nr_migrated;
++}
++
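++/*
++ * Try to pull pending tasks from other runqueues onto @rq, walking this
++ * cpu's affinity mask levels outwards from the nearest one.
++ */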
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++	struct cpumask *affinity_mask, *end_mask;
++
++	if (unlikely(!rq->online))
++		return 0;
++
++	if (cpumask_empty(&sched_rq_pending_mask))
++		return 0;
++
++	affinity_mask = per_cpu(sched_cpu_affinity_masks, cpu) + 1;
++	end_mask = per_cpu(sched_cpu_affinity_end_mask, cpu);
++	do {
++		int i;
++		for_each_cpu_and(i, &sched_rq_pending_mask, affinity_mask) {
++			int nr_migrated;
++			struct rq *src_rq;
++
++			src_rq = cpu_rq(i);
++			if (!do_raw_spin_trylock(&src_rq->lock))
++				continue;
++			spin_acquire(&src_rq->lock.dep_map,
++				     SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++			if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++				src_rq->nr_running -= nr_migrated;
++				if (src_rq->nr_running < 2)
++					cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++				rq->nr_running += nr_migrated;
++				if (rq->nr_running > 1)
++					cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++				update_sched_rq_watermark(rq);
++				cpufreq_update_util(rq, 0);
++
++				spin_release(&src_rq->lock.dep_map, _RET_IP_);
++				do_raw_spin_unlock(&src_rq->lock);
++
++				return 1;
++			}
++
++			spin_release(&src_rq->lock.dep_map, _RET_IP_);
++			do_raw_spin_unlock(&src_rq->lock);
++		}
++	} while (++affinity_mask < end_mask);
++
++	return 0;
++}
++#endif
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired as there's no
++ * point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++	if (unlikely(rq->idle == p))
++		return;
++
++	update_curr(rq, p);
++
++	if (p->time_slice < RESCHED_NS)
++		time_slice_expired(p, rq);
++}
++
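++/*
++ * Pick the next task for @rq: honour a one-shot skip hint when set, and
++ * if only the idle task is left, try pulling work from other runqueues
++ * before going idle.
++ */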
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu, struct task_struct *prev)
++{
++	struct task_struct *next;
++
++	if (unlikely(rq->skip)) {
++		next = rq_runnable_task(rq);
++		if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++			if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++				rq->skip = NULL;
++				schedstat_inc(rq->sched_goidle);
++				return next;
++#ifdef	CONFIG_SMP
++			}
++			next = rq_runnable_task(rq);
++#endif
++		}
++		rq->skip = NULL;
++#ifdef CONFIG_HIGH_RES_TIMERS
++		hrtick_start(rq, next->time_slice);
++#endif
++		return next;
++	}
++
++	next = sched_rq_first_task(rq);
++	if (next == rq->idle) {
++#ifdef	CONFIG_SMP
++		if (!take_other_rq_tasks(rq, cpu)) {
++#endif
++			schedstat_inc(rq->sched_goidle);
++			/*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++			return next;
++#ifdef	CONFIG_SMP
++		}
++		next = sched_rq_first_task(rq);
++#endif
++	}
++#ifdef CONFIG_HIGH_RES_TIMERS
++	hrtick_start(rq, next->time_slice);
++#endif
++	/*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu,
++	 * next);*/
++	return next;
++}
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ *   1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ *   2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ *      paths. For example, see arch/x86/entry_64.S.
++ *
++ *      To drive preemption between tasks, the scheduler sets the flag in timer
++ *      interrupt handler scheduler_tick().
++ *
++ *   3. Wakeups don't really cause entry into schedule(). They add a
++ *      task to the run-queue and that's it.
++ *
++ *      Now, if the new task added to the run-queue preempts the current
++ *      task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ *      called on the nearest possible occasion:
++ *
++ *       - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ *         - in syscall or exception context, at the next outermost
++ *           preempt_enable(). (this might be as soon as the wake_up()'s
++ *           spin_unlock()!)
++ *
++ *         - in IRQ context, return from interrupt-handler to
++ *           preemptible context
++ *
++ *       - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ *         then at the next:
++ *
++ *          - cond_resched() call
++ *          - explicit schedule() call
++ *          - return from syscall or exception to user-space
++ *          - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(bool preempt)
++{
++	struct task_struct *prev, *next;
++	unsigned long *switch_count;
++	unsigned long prev_state;
++	struct rq *rq;
++	int cpu;
++
++	cpu = smp_processor_id();
++	rq = cpu_rq(cpu);
++	prev = rq->curr;
++
++	schedule_debug(prev, preempt);
++
++	/* bypass sched_feat(HRTICK) checking, which Alt schedule FW doesn't support */
++	hrtick_clear(rq);
++
++	local_irq_disable();
++	rcu_note_context_switch(preempt);
++
++	/*
++	 * Make sure that signal_pending_state()->signal_pending() below
++	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++	 * done by the caller to avoid the race with signal_wake_up():
++	 *
++	 * __set_current_state(@state)		signal_wake_up()
++	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
++	 *					  wake_up_state(p, state)
++	 *   LOCK rq->lock			    LOCK p->pi_state
++	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
++	 *     if (signal_pending_state())	    if (p->state & @state)
++	 *
++	 * Also, the membarrier system call requires a full memory barrier
++	 * after coming from user-space, before storing to rq->curr.
++	 */
++	raw_spin_lock(&rq->lock);
++	smp_mb__after_spinlock();
++
++	update_rq_clock(rq);
++
++	switch_count = &prev->nivcsw;
++	/*
++	 * We must load prev->state once (task_struct::state is volatile), such
++	 * that:
++	 *
++	 *  - we form a control dependency vs deactivate_task() below.
++	 *  - ptrace_{,un}freeze_traced() can change ->state underneath us.
++	 */
++	prev_state = prev->state;
++	if (!preempt && prev_state && prev_state == prev->state) {
++		if (signal_pending_state(prev_state, prev)) {
++			prev->state = TASK_RUNNING;
++		} else {
++			prev->sched_contributes_to_load =
++				(prev_state & TASK_UNINTERRUPTIBLE) &&
++				!(prev_state & TASK_NOLOAD) &&
++				!(prev->flags & PF_FROZEN);
++
++			if (prev->sched_contributes_to_load)
++				rq->nr_uninterruptible++;
++
++			/*
++			 * __schedule()			ttwu()
++			 *   prev_state = prev->state;    if (p->on_rq && ...)
++			 *   if (prev_state)		    goto out;
++			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
++			 *				  p->state = TASK_WAKING
++			 *
++			 * Where __schedule() and ttwu() have matching control dependencies.
++			 *
++			 * After this, schedule() must not care about p->state any more.
++			 */
++			sched_task_deactivate(prev, rq);
++			deactivate_task(prev, rq);
++
++			if (prev->in_iowait) {
++				atomic_inc(&rq->nr_iowait);
++				delayacct_blkio_start();
++			}
++		}
++		switch_count = &prev->nvcsw;
++	}
++
++	check_curr(prev, rq);
++
++	next = choose_next_task(rq, cpu, prev);
++	clear_tsk_need_resched(prev);
++	clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++	rq->last_seen_need_resched_ns = 0;
++#endif
++
++	if (likely(prev != next)) {
++		next->last_ran = rq->clock_task;
++		rq->last_ts_switch = rq->clock;
++
++		rq->nr_switches++;
++		/*
++		 * RCU users of rcu_dereference(rq->curr) may not see
++		 * changes to task_struct made by pick_next_task().
++		 */
++		RCU_INIT_POINTER(rq->curr, next);
++		/*
++		 * The membarrier system call requires each architecture
++		 * to have a full memory barrier after updating
++		 * rq->curr, before returning to user-space.
++		 *
++		 * Here are the schemes providing that barrier on the
++		 * various architectures:
++		 * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC.
++		 *   switch_mm() rely on membarrier_arch_switch_mm() on PowerPC.
++		 * - finish_lock_switch() for weakly-ordered
++		 *   architectures where spin_unlock is a full barrier,
++		 * - switch_to() for arm64 (weakly-ordered, spin_unlock
++		 *   is a RELEASE barrier),
++		 */
++		++*switch_count;
++
++		psi_sched_switch(prev, next, !task_on_rq_queued(prev));
++
++		trace_sched_switch(preempt, prev, next);
++
++		/* Also unlocks the rq: */
++		rq = context_switch(rq, prev, next);
++	} else {
++		__balance_callbacks(rq);
++		raw_spin_unlock_irq(&rq->lock);
++	}
++
++#ifdef CONFIG_SCHED_SMT
++	sg_balance_check(rq);
++#endif
++}
++
++void __noreturn do_task_dead(void)
++{
++	/* Causes final put_task_struct in finish_task_switch(): */
++	set_special_state(TASK_DEAD);
++
++	/* Tell freezer to ignore us: */
++	current->flags |= PF_NOFREEZE;
++
++	__schedule(false);
++	BUG();
++
++	/* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++	for (;;)
++		cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++	unsigned int task_flags;
++
++	if (!tsk->state)
++		return;
++
++	task_flags = tsk->flags;
++	/*
++	 * If a worker went to sleep, notify and ask workqueue whether
++	 * it wants to wake up a task to maintain concurrency.
++	 * As this function is called inside the schedule() context,
++	 * we disable preemption to avoid it calling schedule() again
++	 * in the possible wakeup of a kworker and because wq_worker_sleeping()
++	 * requires it.
++	 */
++	if (task_flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
++		preempt_disable();
++		if (task_flags & PF_WQ_WORKER)
++			wq_worker_sleeping(tsk);
++		else
++			io_wq_worker_sleeping(tsk);
++		preempt_enable_no_resched();
++	}
++
++	if (tsk_is_pi_blocked(tsk))
++		return;
++
++	/*
++	 * If we are going to sleep and we have plugged IO queued,
++	 * make sure to submit it to avoid deadlocks.
++	 */
++	if (blk_needs_flush_plug(tsk))
++		blk_schedule_flush_plug(tsk);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++	if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER)) {
++		if (tsk->flags & PF_WQ_WORKER)
++			wq_worker_running(tsk);
++		else
++			io_wq_worker_running(tsk);
++	}
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++	struct task_struct *tsk = current;
++
++	sched_submit_work(tsk);
++	do {
++		preempt_disable();
++		__schedule(false);
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++	sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++	/*
++	 * As this skips calling sched_submit_work(), which the idle task does
++	 * regardless because that function is a nop when the task is in a
++	 * TASK_RUNNING state, make sure this isn't used someplace that the
++	 * current task can be in any other state. Note, idle is always in the
++	 * TASK_RUNNING state.
++	 */
++	WARN_ON_ONCE(current->state);
++	do {
++		__schedule(false);
++	} while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++	/*
++	 * If we come here after a random call to set_need_resched(),
++	 * or we have been woken up remotely but the IPI has not yet arrived,
++	 * we haven't yet exited the RCU idle mode. Do it here manually until
++	 * we find a better solution.
++	 *
++	 * NB: There are buggy callers of this function.  Ideally we
++	 * should warn if prev_state != CONTEXT_USER, but that will trigger
++	 * too frequently to make sense yet.
++	 */
++	enum ctx_state prev_state = exception_enter();
++	schedule();
++	exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++	sched_preempt_enable_no_resched();
++	schedule();
++	preempt_disable();
++}
++
++static void __sched notrace preempt_schedule_common(void)
++{
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		__schedule(true);
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++
++		/*
++		 * Check again in case we missed a preemption opportunity
++		 * between schedule and now.
++		 */
++	} while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++	/*
++	 * If there is a non-zero preempt_count or interrupts are disabled,
++	 * we do not want to preempt the current task. Just return..
++	 */
++	if (likely(!preemptible()))
++		return;
++
++	preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++	enum ctx_state prev_ctx;
++
++	if (likely(!preemptible()))
++		return;
++
++	do {
++		/*
++		 * Because the function tracer can trace preempt_count_sub()
++		 * and it also uses preempt_enable/disable_notrace(), if
++		 * NEED_RESCHED is set, the preempt_enable_notrace() called
++		 * by the function tracer will call this function again and
++		 * cause infinite recursion.
++		 *
++		 * Preemption must be disabled here before the function
++		 * tracer can trace. Break up preempt_disable() into two
++		 * calls. One to disable preemption without fear of being
++		 * traced. The other to still record the preemption latency,
++		 * which can also be traced by the function tracer.
++		 */
++		preempt_disable_notrace();
++		preempt_latency_start(1);
++		/*
++		 * Needs preempt disabled in case user_exit() is traced
++		 * and the tracer calls preempt_enable_notrace() causing
++		 * an infinite recursion.
++		 */
++		prev_ctx = exception_enter();
++		__schedule(true);
++		exception_exit(prev_ctx);
++
++		preempt_latency_stop(1);
++		preempt_enable_no_resched_notrace();
++	} while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#include <linux/entry-common.h>
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * VOLUNTARY:
++ *   cond_resched               <- __cond_resched
++ *   might_resched              <- __cond_resched
++ *   preempt_schedule           <- NOP
++ *   preempt_schedule_notrace   <- NOP
++ *   irqentry_exit_cond_resched <- NOP
++ *
++ * FULL:
++ *   cond_resched               <- RET0
++ *   might_resched              <- RET0
++ *   preempt_schedule           <- preempt_schedule
++ *   preempt_schedule_notrace   <- preempt_schedule_notrace
++ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ */
++
++enum {
++	preempt_dynamic_none = 0,
++	preempt_dynamic_voluntary,
++	preempt_dynamic_full,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_full;
++
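++/* Map a "none"/"voluntary"/"full" string to a preempt_dynamic_* mode. */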
++int sched_dynamic_mode(const char *str)
++{
++	if (!strcmp(str, "none"))
++		return preempt_dynamic_none;
++
++	if (!strcmp(str, "voluntary"))
++		return preempt_dynamic_voluntary;
++
++	if (!strcmp(str, "full"))
++		return preempt_dynamic_full;
++
++	return -EINVAL;
++}
++
++void sched_dynamic_update(int mode)
++{
++	/*
++	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++	 * the ZERO state, which is invalid.
++	 */
++	static_call_update(cond_resched, __cond_resched);
++	static_call_update(might_resched, __cond_resched);
++	static_call_update(preempt_schedule, __preempt_schedule_func);
++	static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
++	static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
++
++	switch (mode) {
++	case preempt_dynamic_none:
++		static_call_update(cond_resched, __cond_resched);
++		static_call_update(might_resched, (void *)&__static_call_return0);
++		static_call_update(preempt_schedule, NULL);
++		static_call_update(preempt_schedule_notrace, NULL);
++		static_call_update(irqentry_exit_cond_resched, NULL);
++		pr_info("Dynamic Preempt: none\n");
++		break;
++
++	case preempt_dynamic_voluntary:
++		static_call_update(cond_resched, __cond_resched);
++		static_call_update(might_resched, __cond_resched);
++		static_call_update(preempt_schedule, NULL);
++		static_call_update(preempt_schedule_notrace, NULL);
++		static_call_update(irqentry_exit_cond_resched, NULL);
++		pr_info("Dynamic Preempt: voluntary\n");
++		break;
++
++	case preempt_dynamic_full:
++		static_call_update(cond_resched, (void *)&__static_call_return0);
++		static_call_update(might_resched, (void *)&__static_call_return0);
++		static_call_update(preempt_schedule, __preempt_schedule_func);
++		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
++		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
++		pr_info("Dynamic Preempt: full\n");
++		break;
++	}
++
++	preempt_dynamic_mode = mode;
++}
++
++static int __init setup_preempt_mode(char *str)
++{
++	int mode = sched_dynamic_mode(str);
++	if (mode < 0) {
++		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++		return 1;
++	}
++
++	sched_dynamic_update(mode);
++	return 0;
++}
++__setup("preempt=", setup_preempt_mode);
++
++#endif /* CONFIG_PREEMPT_DYNAMIC */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of irq context.
++ * Note that this is called and returns with irqs disabled. This will
++ * protect us against recursive calling from irq.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++	enum ctx_state prev_state;
++
++	/* Catch callers which need to be fixed */
++	BUG_ON(preempt_count() || !irqs_disabled());
++
++	prev_state = exception_enter();
++
++	do {
++		preempt_disable();
++		local_irq_enable();
++		__schedule(true);
++		local_irq_disable();
++		sched_preempt_enable_no_resched();
++	} while (need_resched());
++
++	exception_exit(prev_state);
++}
++
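++/* Default wait-queue wake function: simply try to wake the waiting task. */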
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++			  void *key)
++{
++	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~WF_SYNC);
++	return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++static inline void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++	/* Trigger resched if task sched_prio has been modified. */
++	if (task_on_rq_queued(p) && task_sched_prio_idx(p, rq) != p->sq_idx) {
++		requeue_task(p, rq);
++		check_preempt_curr(rq);
++	}
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++	if (pi_task)
++		prio = min(prio, pi_task->prio);
++
++	return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++	return __rt_effective_prio(pi_task, prio);
++}
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++	int prio;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	/* XXX used to be waiter->prio, not waiter->task->prio */
++	prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++	/*
++	 * If nothing changed; bail early.
++	 */
++	if (p->pi_top_task == pi_task && prio == p->prio)
++		return;
++
++	rq = __task_access_lock(p, &lock);
++	/*
++	 * Set under pi_lock && rq->lock, such that the value can be used under
++	 * either lock.
++	 *
++	 * Note that there is a lot of trickery involved in making this
++	 * pointer cache work right. rt_mutex_slowunlock()+rt_mutex_postunlock()
++	 * work together to
++	 * ensure a task is de-boosted (pi_task is set to NULL) before the
++	 * task is allowed to run again (and can exit). This ensures the pointer
++	 * points to a blocked task -- which guarantees the task is present.
++	 */
++	p->pi_top_task = pi_task;
++
++	/*
++	 * For FIFO/RR we only need to set prio, if that matches we're done.
++	 */
++	if (prio == p->prio)
++		goto out_unlock;
++
++	/*
++	 * Idle task boosting is a no-no in general. There is one
++	 * exception, when PREEMPT_RT and NOHZ is active:
++	 *
++	 * The idle task calls get_next_timer_interrupt() and holds
++	 * the timer wheel base->lock on the CPU and another CPU wants
++	 * to access the timer (probably to cancel it). We can safely
++	 * ignore the boosting request, as the idle CPU runs this code
++	 * with interrupts disabled and will complete the lock
++	 * protected section without being interrupted. So there is no
++	 * real need to boost.
++	 */
++	if (unlikely(p == rq->idle)) {
++		WARN_ON(p != rq->curr);
++		WARN_ON(p->pi_blocked_on);
++		goto out_unlock;
++	}
++
++	trace_sched_pi_setprio(p, pi_task);
++	p->prio = prio;
++
++	check_task_changed(p, rq);
++out_unlock:
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++
++	__balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++
++	preempt_enable();
++}
++#else
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++	return prio;
++}
++#endif
++
++void set_user_nice(struct task_struct *p, long nice)
++{
++	unsigned long flags;
++	struct rq *rq;
++	raw_spinlock_t *lock;
++
++	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++		return;
++	/*
++	 * We have to be careful, if called from sys_setpriority(),
++	 * the task might be in the middle of scheduling on another CPU.
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++	rq = __task_access_lock(p, &lock);
++
++	p->static_prio = NICE_TO_PRIO(nice);
++	/*
++	 * The RT priorities are set via sched_setscheduler(), but we still
++	 * allow the 'normal' nice value to be set - but as expected
++	 * it won't have any effect on scheduling while the task's policy
++	 * is not SCHED_NORMAL/SCHED_BATCH:
++	 */
++	if (task_has_rt_policy(p))
++		goto out_unlock;
++
++	p->prio = effective_prio(p);
++
++	check_task_changed(p, rq);
++out_unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++EXPORT_SYMBOL(set_user_nice);
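++
++/*
++ * Usage sketch for set_user_nice(): a module spawning a background
++ * kthread and weighting it below nice-0 tasks. worker_fn and the nice
++ * value are hypothetical; error handling is minimal.
++ *
++ *	#include <linux/kthread.h>
++ *	#include <linux/sched.h>
++ *
++ *	struct task_struct *t = kthread_run(worker_fn, NULL, "nice-demo");
++ *	if (!IS_ERR(t))
++ *		set_user_nice(t, 10);
++ */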
++
++/*
++ * can_nice - check if a task can reduce its nice value
++ * @p: task
++ * @nice: nice value
++ */
++int can_nice(const struct task_struct *p, const int nice)
++{
++	/* Convert nice value [19,-20] to rlimit style value [1,40] */
++	int nice_rlim = nice_to_rlimit(nice);
++
++	return (nice_rlim <= task_rlimit(p, RLIMIT_NICE) ||
++		capable(CAP_SYS_NICE));
++}
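++
++/*
++ * Worked example of the rlimit conversion above: nice_to_rlimit() maps
++ * nice 19 -> 1 and nice -20 -> 40, i.e. rlim = 20 - nice. So a request
++ * for nice -10 needs a soft RLIMIT_NICE of at least 30 (e.g. `ulimit -e 30`)
++ * or CAP_SYS_NICE to succeed.
++ */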
++
++#ifdef __ARCH_WANT_SYS_NICE
++
++/*
++ * sys_nice - change the priority of the current process.
++ * @increment: priority increment
++ *
++ * sys_setpriority is a more generic, but much slower function that
++ * does similar things.
++ */
++SYSCALL_DEFINE1(nice, int, increment)
++{
++	long nice, retval;
++
++	/*
++	 * Setpriority might change our priority at the same moment.
++	 * We don't have to worry. Conceptually one call occurs first
++	 * and we have a single winner.
++	 */
++
++	increment = clamp(increment, -NICE_WIDTH, NICE_WIDTH);
++	nice = task_nice(current) + increment;
++
++	nice = clamp_val(nice, MIN_NICE, MAX_NICE);
++	if (increment < 0 && !can_nice(current, nice))
++		return -EPERM;
++
++	retval = security_task_setnice(current, nice);
++	if (retval)
++		return retval;
++
++	set_user_nice(current, nice);
++	return 0;
++}
++
++#endif
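++
++/*
++ * Userspace view of the syscall above, via the libc wrapper (a sketch;
++ * nice(2) returns the new nice value, so errno must be cleared first to
++ * tell -1-as-value apart from -1-as-error):
++ *
++ *	#include <unistd.h>
++ *	#include <errno.h>
++ *	#include <stdio.h>
++ *
++ *	errno = 0;
++ *	int nv = nice(5);
++ *	if (nv == -1 && errno != 0)
++ *		perror("nice");
++ */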
++
++/**
++ * task_prio - return the priority value of a given task.
++ * @p: the task in question.
++ *
++ * Return: The priority value as seen by users in /proc.
++ *
++ * sched policy              return value  kernel prio   user prio/nice
++ *
++ * (BMQ) normal, batch, idle [0 ... 53]    [100 ... 139] 0/[-20 ... 19]/[-7 ... 7]
++ * (PDS) normal, batch, idle [0 ... 39]    100           0/[-20 ... 19]
++ * fifo, rr                  [-1 ... -100] [99 ... 0]    [0 ... 99]
++ */
++int task_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++		task_sched_prio_normal(p, task_rq(p));
++}
++
++/**
++ * idle_cpu - is a given CPU idle currently?
++ * @cpu: the processor in question.
++ *
++ * Return: 1 if the CPU is currently idle. 0 otherwise.
++ */
++int idle_cpu(int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	if (rq->curr != rq->idle)
++		return 0;
++
++	if (rq->nr_running)
++		return 0;
++
++#ifdef CONFIG_SMP
++	if (rq->ttwu_pending)
++		return 0;
++#endif
++
++	return 1;
++}
++
++/**
++ * idle_task - return the idle task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * Return: The idle task for the cpu @cpu.
++ */
++struct task_struct *idle_task(int cpu)
++{
++	return cpu_rq(cpu)->idle;
++}
++
++/**
++ * find_process_by_pid - find a process with a matching PID value.
++ * @pid: the pid in question.
++ *
++ * The task of @pid, if found. %NULL otherwise.
++ */
++static inline struct task_struct *find_process_by_pid(pid_t pid)
++{
++	return pid ? find_task_by_vpid(pid) : current;
++}
++
++/*
++ * sched_setparam() passes in -1 for its policy, to let the functions
++ * it calls know not to change it.
++ */
++#define SETPARAM_POLICY -1
++
++static void __setscheduler_params(struct task_struct *p,
++		const struct sched_attr *attr)
++{
++	int policy = attr->sched_policy;
++
++	if (policy == SETPARAM_POLICY)
++		policy = p->policy;
++
++	p->policy = policy;
++
++	/*
++	 * Allow the normal nice value to be set, but it will have no
++	 * effect on scheduling while the task's policy is not
++	 * SCHED_NORMAL/SCHED_BATCH.
++	 */
++	p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++
++	/*
++	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
++	 * !rt_policy. Always setting this ensures that things like
++	 * getparam()/getattr() don't report silly values for !rt tasks.
++	 */
++	p->rt_priority = attr->sched_priority;
++	p->normal_prio = normal_prio(p);
++}
++
++/* Actually do priority change: must hold rq lock. */
++static void __setscheduler(struct rq *rq, struct task_struct *p,
++			   const struct sched_attr *attr, bool keep_boost)
++{
++	__setscheduler_params(p, attr);
++
++	/*
++	 * Keep a potential priority boosting if called from
++	 * sched_setscheduler().
++	 */
++	p->prio = normal_prio(p);
++	if (keep_boost)
++		p->prio = rt_effective_prio(p, p->prio);
++}
++
++/*
++ * check the target process has a UID that matches the current process's
++ */
++static bool check_same_owner(struct task_struct *p)
++{
++	const struct cred *cred = current_cred(), *pcred;
++	bool match;
++
++	rcu_read_lock();
++	pcred = __task_cred(p);
++	match = (uid_eq(cred->euid, pcred->euid) ||
++		 uid_eq(cred->euid, pcred->uid));
++	rcu_read_unlock();
++	return match;
++}
++
++static int __sched_setscheduler(struct task_struct *p,
++				const struct sched_attr *attr,
++				bool user, bool pi)
++{
++	const struct sched_attr dl_squash_attr = {
++		.size		= sizeof(struct sched_attr),
++		.sched_policy	= SCHED_FIFO,
++		.sched_nice	= 0,
++		.sched_priority = 99,
++	};
++	int newprio = MAX_RT_PRIO - 1 - attr->sched_priority;
++	int retval, oldpolicy = -1;
++	int policy = attr->sched_policy;
++	struct callback_head *head;
++	unsigned long flags;
++	struct rq *rq;
++	int reset_on_fork;
++	raw_spinlock_t *lock;
++
++	/* The pi code expects interrupts enabled */
++	BUG_ON(pi && in_interrupt());
++
++	/*
++	 * Alt schedule FW supports SCHED_DEADLINE by squashing it into prio 0 SCHED_FIFO
++	 */
++	if (unlikely(SCHED_DEADLINE == policy)) {
++		attr = &dl_squash_attr;
++		policy = attr->sched_policy;
++		newprio = MAX_RT_PRIO - 1 - attr->sched_priority;
++	}
++recheck:
++	/* Double check policy once rq lock held */
++	if (policy < 0) {
++		reset_on_fork = p->sched_reset_on_fork;
++		policy = oldpolicy = p->policy;
++	} else {
++		reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++		if (policy > SCHED_IDLE)
++			return -EINVAL;
++	}
++
++	if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++		return -EINVAL;
++
++	/*
++	 * Valid priorities for SCHED_FIFO and SCHED_RR are
++	 * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++	 * SCHED_BATCH and SCHED_IDLE is 0.
++	 */
++	if (attr->sched_priority < 0 ||
++	    (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++	    (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++		return -EINVAL;
++	if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++	    (attr->sched_priority != 0))
++		return -EINVAL;
++
++	/*
++	 * Allow unprivileged RT tasks to decrease priority:
++	 */
++	if (user && !capable(CAP_SYS_NICE)) {
++		if (SCHED_FIFO == policy || SCHED_RR == policy) {
++			unsigned long rlim_rtprio =
++					task_rlimit(p, RLIMIT_RTPRIO);
++
++			/* Can't set/change the rt policy */
++			if (policy != p->policy && !rlim_rtprio)
++				return -EPERM;
++
++			/* Can't increase priority */
++			if (attr->sched_priority > p->rt_priority &&
++			    attr->sched_priority > rlim_rtprio)
++				return -EPERM;
++		}
++
++		/* Can't change other user's priorities */
++		if (!check_same_owner(p))
++			return -EPERM;
++
++		/* Normal users shall not reset the sched_reset_on_fork flag */
++		if (p->sched_reset_on_fork && !reset_on_fork)
++			return -EPERM;
++	}
++
++	if (user) {
++		retval = security_task_setscheduler(p);
++		if (retval)
++			return retval;
++	}
++
++	if (pi)
++		cpuset_read_lock();
++
++	/*
++	 * Make sure no PI-waiters arrive (or leave) while we are
++	 * changing the priority of the task:
++	 */
++	raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++	/*
++	 * To be able to change p->policy safely, task_access_lock()
++	 * must be called.
++	 * If task_access_lock() is used here: for a task p which is
++	 * not running, reading rq->stop is racy but acceptable, as
++	 * ->stop doesn't change much.
++	 * An enhancement can be made to read rq->stop safely.
++	 */
++	rq = __task_access_lock(p, &lock);
++
++	/*
++	 * Changing the policy of the stop threads is a very bad idea
++	 */
++	if (p == rq->stop) {
++		retval = -EINVAL;
++		goto unlock;
++	}
++
++	/*
++	 * If not changing anything there's no need to proceed further:
++	 */
++	if (unlikely(policy == p->policy)) {
++		if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++			goto change;
++		if (!rt_policy(policy) &&
++		    NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++			goto change;
++
++		p->sched_reset_on_fork = reset_on_fork;
++		retval = 0;
++		goto unlock;
++	}
++change:
++
++	/* Re-check policy now with rq lock held */
++	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++		policy = oldpolicy = -1;
++		__task_access_unlock(p, lock);
++		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++		if (pi)
++			cpuset_read_unlock();
++		goto recheck;
++	}
++
++	p->sched_reset_on_fork = reset_on_fork;
++
++	if (pi) {
++		/*
++		 * Take priority boosted tasks into account. If the new
++		 * effective priority is unchanged, we just store the new
++		 * normal parameters and do not touch the scheduler class and
++		 * the runqueue. This will be done when the task deboosts
++		 * itself.
++		 */
++		if (rt_effective_prio(p, newprio) == p->prio) {
++			__setscheduler_params(p, attr);
++			retval = 0;
++			goto unlock;
++		}
++	}
++
++	__setscheduler(rq, p, attr, pi);
++
++	check_task_changed(p, rq);
++
++	/* Avoid rq from going away on us: */
++	preempt_disable();
++	head = splice_balance_callbacks(rq);
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++	if (pi) {
++		cpuset_read_unlock();
++		rt_mutex_adjust_pi(p);
++	}
++
++	/* Run balance callbacks after we've adjusted the PI chain: */
++	balance_callbacks(rq, head);
++	preempt_enable();
++
++	return 0;
++
++unlock:
++	__task_access_unlock(p, lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++	if (pi)
++		cpuset_read_unlock();
++	return retval;
++}
++
++static int _sched_setscheduler(struct task_struct *p, int policy,
++			       const struct sched_param *param, bool check)
++{
++	struct sched_attr attr = {
++		.sched_policy   = policy,
++		.sched_priority = param->sched_priority,
++		.sched_nice     = PRIO_TO_NICE(p->static_prio),
++	};
++
++	/* Fixup the legacy SCHED_RESET_ON_FORK hack. */
++	if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
++		attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++		policy &= ~SCHED_RESET_ON_FORK;
++		attr.sched_policy = policy;
++	}
++
++	return __sched_setscheduler(p, &attr, check, true);
++}
++
++/**
++ * sched_setscheduler - change the scheduling policy and/or RT priority of a thread.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Use sched_set_fifo(), read its comment.
++ *
++ * Return: 0 on success. An error code otherwise.
++ *
++ * NOTE that the task may already be dead.
++ */
++int sched_setscheduler(struct task_struct *p, int policy,
++		       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, true);
++}
++
++int sched_setattr(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, true, true);
++}
++
++int sched_setattr_nocheck(struct task_struct *p, const struct sched_attr *attr)
++{
++	return __sched_setscheduler(p, attr, false, true);
++}
++
++/**
++ * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
++ * @p: the task in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Just like sched_setscheduler, only don't bother checking if the
++ * current context has permission.  For example, this is needed in
++ * stop_machine(): we create temporary high priority worker threads,
++ * but our caller might not have that capability.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++int sched_setscheduler_nocheck(struct task_struct *p, int policy,
++			       const struct sched_param *param)
++{
++	return _sched_setscheduler(p, policy, param, false);
++}
++
++/*
++ * SCHED_FIFO is a broken scheduler model; that is, it is fundamentally
++ * incapable of resource management, which is the one thing an OS really should
++ * be doing.
++ *
++ * This is of course the reason it is limited to privileged users only.
++ *
++ * Worse still; it is fundamentally impossible to compose static priority
++ * workloads. You cannot take two correctly working static prio workloads
++ * and smash them together and still expect them to work.
++ *
++ * For this reason 'all' FIFO tasks the kernel creates are basically at:
++ *
++ *   MAX_RT_PRIO / 2
++ *
++ * The administrator _MUST_ configure the system, the kernel simply doesn't
++ * know enough information to make a sensible choice.
++ */
++void sched_set_fifo(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = MAX_RT_PRIO / 2 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo);
++
++/*
++ * For when you don't much care about FIFO, but want to be above SCHED_NORMAL.
++ */
++void sched_set_fifo_low(struct task_struct *p)
++{
++	struct sched_param sp = { .sched_priority = 1 };
++	WARN_ON_ONCE(sched_setscheduler_nocheck(p, SCHED_FIFO, &sp) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_fifo_low);
++
++void sched_set_normal(struct task_struct *p, int nice)
++{
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++		.sched_nice = nice,
++	};
++	WARN_ON_ONCE(sched_setattr_nocheck(p, &attr) != 0);
++}
++EXPORT_SYMBOL_GPL(sched_set_normal);
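++
++/*
++ * Usage sketch for the three in-kernel helpers above, e.g. from a driver
++ * that owns a kthread (irq_thread_fn is hypothetical):
++ *
++ *	struct task_struct *t = kthread_create(irq_thread_fn, NULL, "demo");
++ *	if (!IS_ERR(t)) {
++ *		sched_set_fifo(t);	// mid RT prio, per the comment above
++ *		wake_up_process(t);
++ *	}
++ *	// ... later, drop back to SCHED_NORMAL at nice 0:
++ *	sched_set_normal(t, 0);
++ */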
++
++static int
++do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param)
++{
++	struct sched_param lparam;
++	struct task_struct *p;
++	int retval;
++
++	if (!param || pid < 0)
++		return -EINVAL;
++	if (copy_from_user(&lparam, param, sizeof(struct sched_param)))
++		return -EFAULT;
++
++	rcu_read_lock();
++	retval = -ESRCH;
++	p = find_process_by_pid(pid);
++	if (likely(p))
++		get_task_struct(p);
++	rcu_read_unlock();
++
++	if (likely(p)) {
++		retval = sched_setscheduler(p, policy, &lparam);
++		put_task_struct(p);
++	}
++
++	return retval;
++}
++
++/*
++ * Mimics kernel/events/core.c perf_copy_attr().
++ */
++static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *attr)
++{
++	u32 size;
++	int ret;
++
++	/* Zero the full structure, so that a short copy will be nice: */
++	memset(attr, 0, sizeof(*attr));
++
++	ret = get_user(size, &uattr->size);
++	if (ret)
++		return ret;
++
++	/* ABI compatibility quirk: */
++	if (!size)
++		size = SCHED_ATTR_SIZE_VER0;
++
++	if (size < SCHED_ATTR_SIZE_VER0 || size > PAGE_SIZE)
++		goto err_size;
++
++	ret = copy_struct_from_user(attr, sizeof(*attr), uattr, size);
++	if (ret) {
++		if (ret == -E2BIG)
++			goto err_size;
++		return ret;
++	}
++
++	/*
++	 * XXX: Do we want to be lenient like existing syscalls; or do we want
++	 * to be strict and return an error on out-of-bounds values?
++	 */
++	attr->sched_nice = clamp(attr->sched_nice, -20, 19);
++
++	/* sched/core.c uses zero here but we already know ret is zero */
++	return 0;
++
++err_size:
++	put_user(sizeof(*attr), &uattr->size);
++	return -E2BIG;
++}
++
++/**
++ * sys_sched_setscheduler - set/change the scheduler policy and RT priority
++ * @pid: the pid in question.
++ * @policy: new policy.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setscheduler, pid_t, pid, int, policy, struct sched_param __user *, param)
++{
++	if (policy < 0)
++		return -EINVAL;
++
++	return do_sched_setscheduler(pid, policy, param);
++}
++
++/**
++ * sys_sched_setparam - set/change the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the new RT priority.
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param)
++{
++	return do_sched_setscheduler(pid, SETPARAM_POLICY, param);
++}
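++
++/*
++ * Userspace sketch driving the two syscalls above through the libc
++ * wrappers (priority values are arbitrary examples):
++ *
++ *	#include <sched.h>
++ *	#include <stdio.h>
++ *
++ *	struct sched_param sp = { .sched_priority = 10 };
++ *
++ *	// SCHED_RESET_ON_FORK exercises the legacy-flag fixup in
++ *	// _sched_setscheduler(): children fork back to SCHED_NORMAL.
++ *	if (sched_setscheduler(0, SCHED_FIFO | SCHED_RESET_ON_FORK, &sp) == -1)
++ *		perror("sched_setscheduler");
++ *
++ *	sp.sched_priority = 20;
++ *	if (sched_setparam(0, &sp) == -1)	// prio only, policy unchanged
++ *		perror("sched_setparam");
++ */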
++
++/**
++ * sys_sched_setattr - same as above, but with extended sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
++			       unsigned int, flags)
++{
++	struct sched_attr attr;
++	struct task_struct *p;
++	int retval;
++
++	if (!uattr || pid < 0 || flags)
++		return -EINVAL;
++
++	retval = sched_copy_attr(uattr, &attr);
++	if (retval)
++		return retval;
++
++	if ((int)attr.sched_policy < 0)
++		return -EINVAL;
++
++	rcu_read_lock();
++	retval = -ESRCH;
++	p = find_process_by_pid(pid);
++	if (likely(p))
++		get_task_struct(p);
++	rcu_read_unlock();
++
++	if (likely(p)) {
++		retval = sched_setattr(p, &attr);
++		put_task_struct(p);
++	}
++
++	return retval;
++}
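++
++/*
++ * sched_setattr() has no glibc wrapper, so a userspace sketch goes
++ * through syscall(2). The struct layout is assumed to mirror the UAPI
++ * sched_attr (include/uapi/linux/sched/types.h); only the leading
++ * fields are shown:
++ *
++ *	#include <linux/types.h>
++ *	#include <sys/syscall.h>
++ *	#include <unistd.h>
++ *
++ *	struct sched_attr {
++ *		__u32 size, sched_policy;
++ *		__u64 sched_flags;
++ *		__s32 sched_nice;
++ *		__u32 sched_priority;
++ *		__u64 sched_runtime, sched_deadline, sched_period;
++ *	};
++ *
++ *	struct sched_attr a = {
++ *		.size = sizeof(a),	// sched_copy_attr() keys off this
++ *		.sched_policy = 0,	// SCHED_NORMAL
++ *		.sched_nice = 5,
++ *	};
++ *	syscall(SYS_sched_setattr, 0, &a, 0);
++ */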
++
++/**
++ * sys_sched_getscheduler - get the policy (scheduling class) of a thread
++ * @pid: the pid in question.
++ *
++ * Return: On success, the policy of the thread. Otherwise, a negative error
++ * code.
++ */
++SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
++{
++	struct task_struct *p;
++	int retval = -EINVAL;
++
++	if (pid < 0)
++		goto out_nounlock;
++
++	retval = -ESRCH;
++	rcu_read_lock();
++	p = find_process_by_pid(pid);
++	if (p) {
++		retval = security_task_getscheduler(p);
++		if (!retval)
++			retval = p->policy;
++	}
++	rcu_read_unlock();
++
++out_nounlock:
++	return retval;
++}
++
++/**
++ * sys_sched_getparam - get the RT priority of a thread
++ * @pid: the pid in question.
++ * @param: structure containing the RT priority.
++ *
++ * Return: On success, 0 and the RT priority is in @param. Otherwise, an error
++ * code.
++ */
++SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
++{
++	struct sched_param lp = { .sched_priority = 0 };
++	struct task_struct *p;
++	int retval = -EINVAL;
++
++	if (!param || pid < 0)
++		goto out_nounlock;
++
++	rcu_read_lock();
++	p = find_process_by_pid(pid);
++	retval = -ESRCH;
++	if (!p)
++		goto out_unlock;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		goto out_unlock;
++
++	if (task_has_rt_policy(p))
++		lp.sched_priority = p->rt_priority;
++	rcu_read_unlock();
++
++	/*
++	 * This one might sleep, we cannot do it with a spinlock held ...
++	 */
++	retval = copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;
++
++out_nounlock:
++	return retval;
++
++out_unlock:
++	rcu_read_unlock();
++	return retval;
++}
++
++/*
++ * Copy the kernel size attribute structure (which might be larger
++ * than what user-space knows about) to user-space.
++ *
++ * Note that all cases are valid: user-space buffer can be larger or
++ * smaller than the kernel-space buffer. The usual case is that both
++ * have the same size.
++ */
++static int
++sched_attr_copy_to_user(struct sched_attr __user *uattr,
++			struct sched_attr *kattr,
++			unsigned int usize)
++{
++	unsigned int ksize = sizeof(*kattr);
++
++	if (!access_ok(uattr, usize))
++		return -EFAULT;
++
++	/*
++	 * sched_getattr() ABI forwards and backwards compatibility:
++	 *
++	 * If usize == ksize then we just copy everything to user-space and all is good.
++	 *
++	 * If usize < ksize then we only copy as much as user-space has space for,
++	 * this keeps ABI compatibility as well. We skip the rest.
++	 *
++	 * If usize > ksize then user-space is using a newer version of the ABI,
++	 * which part the kernel doesn't know about. Just ignore it - tooling can
++	 * detect the kernel's knowledge of attributes from the attr->size value
++	 * which is set to ksize in this case.
++	 */
++	kattr->size = min(usize, ksize);
++
++	if (copy_to_user(uattr, kattr, kattr->size))
++		return -EFAULT;
++
++	return 0;
++}
++
++/**
++ * sys_sched_getattr - similar to sched_getparam, but with sched_attr
++ * @pid: the pid in question.
++ * @uattr: structure containing the extended parameters.
++ * @usize: sizeof(attr) for fwd/bwd comp.
++ * @flags: for future extension.
++ */
++SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
++		unsigned int, usize, unsigned int, flags)
++{
++	struct sched_attr kattr = { };
++	struct task_struct *p;
++	int retval;
++
++	if (!uattr || pid < 0 || usize > PAGE_SIZE ||
++	    usize < SCHED_ATTR_SIZE_VER0 || flags)
++		return -EINVAL;
++
++	rcu_read_lock();
++	p = find_process_by_pid(pid);
++	retval = -ESRCH;
++	if (!p)
++		goto out_unlock;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		goto out_unlock;
++
++	kattr.sched_policy = p->policy;
++	if (p->sched_reset_on_fork)
++		kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK;
++	if (task_has_rt_policy(p))
++		kattr.sched_priority = p->rt_priority;
++	else
++		kattr.sched_nice = task_nice(p);
++
++#ifdef CONFIG_UCLAMP_TASK
++	kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value;
++	kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value;
++#endif
++
++	rcu_read_unlock();
++
++	return sched_attr_copy_to_user(uattr, &kattr, usize);
++
++out_unlock:
++	rcu_read_unlock();
++	return retval;
++}
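++
++/*
++ * Matching userspace read-back sketch: pass the size the caller knows
++ * about, and the kernel truncates or annotates via kattr.size as
++ * described in sched_attr_copy_to_user() above (struct sched_attr as in
++ * the sched_setattr sketch):
++ *
++ *	#include <stdio.h>
++ *
++ *	struct sched_attr a;
++ *	if (syscall(SYS_sched_getattr, 0, &a, sizeof(a), 0) == 0)
++ *		printf("policy=%u nice=%d\n", a.sched_policy, a.sched_nice);
++ */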
++
++long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
++{
++	cpumask_var_t cpus_allowed, new_mask;
++	struct task_struct *p;
++	int retval;
++
++	rcu_read_lock();
++
++	p = find_process_by_pid(pid);
++	if (!p) {
++		rcu_read_unlock();
++		return -ESRCH;
++	}
++
++	/* Prevent p going away */
++	get_task_struct(p);
++	rcu_read_unlock();
++
++	if (p->flags & PF_NO_SETAFFINITY) {
++		retval = -EINVAL;
++		goto out_put_task;
++	}
++	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) {
++		retval = -ENOMEM;
++		goto out_put_task;
++	}
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
++		retval = -ENOMEM;
++		goto out_free_cpus_allowed;
++	}
++	retval = -EPERM;
++	if (!check_same_owner(p)) {
++		rcu_read_lock();
++		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE)) {
++			rcu_read_unlock();
++			goto out_free_new_mask;
++		}
++		rcu_read_unlock();
++	}
++
++	retval = security_task_setscheduler(p);
++	if (retval)
++		goto out_free_new_mask;
++
++	cpuset_cpus_allowed(p, cpus_allowed);
++	cpumask_and(new_mask, in_mask, cpus_allowed);
++
++again:
++	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
++
++	if (!retval) {
++		cpuset_cpus_allowed(p, cpus_allowed);
++		if (!cpumask_subset(new_mask, cpus_allowed)) {
++			/*
++			 * We must have raced with a concurrent cpuset
++			 * update. Just reset the cpus_allowed to the
++			 * cpuset's cpus_allowed
++			 */
++			cpumask_copy(new_mask, cpus_allowed);
++			goto again;
++		}
++	}
++out_free_new_mask:
++	free_cpumask_var(new_mask);
++out_free_cpus_allowed:
++	free_cpumask_var(cpus_allowed);
++out_put_task:
++	put_task_struct(p);
++	return retval;
++}
++
++static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
++			     struct cpumask *new_mask)
++{
++	if (len < cpumask_size())
++		cpumask_clear(new_mask);
++	else if (len > cpumask_size())
++		len = cpumask_size();
++
++	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
++}
++
++/**
++ * sys_sched_setaffinity - set the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to the new CPU mask
++ *
++ * Return: 0 on success. An error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	cpumask_var_t new_mask;
++	int retval;
++
++	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
++	if (retval == 0)
++		retval = sched_setaffinity(pid, new_mask);
++	free_cpumask_var(new_mask);
++	return retval;
++}
++
++long sched_getaffinity(pid_t pid, cpumask_t *mask)
++{
++	struct task_struct *p;
++	raw_spinlock_t *lock;
++	unsigned long flags;
++	int retval;
++
++	rcu_read_lock();
++
++	retval = -ESRCH;
++	p = find_process_by_pid(pid);
++	if (!p)
++		goto out_unlock;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		goto out_unlock;
++
++	task_access_lock_irqsave(p, &lock, &flags);
++	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
++	task_access_unlock_irqrestore(p, lock, &flags);
++
++out_unlock:
++	rcu_read_unlock();
++
++	return retval;
++}
++
++/**
++ * sys_sched_getaffinity - get the CPU affinity of a process
++ * @pid: pid of the process
++ * @len: length in bytes of the bitmask pointed to by user_mask_ptr
++ * @user_mask_ptr: user-space pointer to hold the current CPU mask
++ *
++ * Return: size of CPU mask copied to user_mask_ptr on success. An
++ * error code otherwise.
++ */
++SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
++		unsigned long __user *, user_mask_ptr)
++{
++	int ret;
++	cpumask_var_t mask;
++
++	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
++		return -EINVAL;
++	if (len & (sizeof(unsigned long)-1))
++		return -EINVAL;
++
++	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
++		return -ENOMEM;
++
++	ret = sched_getaffinity(pid, mask);
++	if (ret == 0) {
++		unsigned int retlen = min_t(size_t, len, cpumask_size());
++
++		if (copy_to_user(user_mask_ptr, mask, retlen))
++			ret = -EFAULT;
++		else
++			ret = retlen;
++	}
++	free_cpumask_var(mask);
++
++	return ret;
++}
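++
++/*
++ * Userspace sketch for the affinity pair above, using the glibc
++ * wrappers and cpu_set_t macros (pinning the calling thread to CPU 0):
++ *
++ *	#define _GNU_SOURCE
++ *	#include <sched.h>
++ *	#include <stdio.h>
++ *
++ *	cpu_set_t set;
++ *	CPU_ZERO(&set);
++ *	CPU_SET(0, &set);
++ *	if (sched_setaffinity(0, sizeof(set), &set) == -1)
++ *		perror("sched_setaffinity");
++ *
++ *	// read-back; note the raw syscall returns the copied length,
++ *	// while the glibc wrapper returns 0 on success
++ *	sched_getaffinity(0, sizeof(set), &set);
++ */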
++
++static void do_sched_yield(void)
++{
++	struct rq *rq;
++	struct rq_flags rf;
++
++	if (!sched_yield_type)
++		return;
++
++	rq = this_rq_lock_irq(&rf);
++
++	schedstat_inc(rq->yld_count);
++
++	if (1 == sched_yield_type) {
++		if (!rt_task(current))
++			do_sched_yield_type_1(current, rq);
++	} else if (2 == sched_yield_type) {
++		if (rq->nr_running > 1)
++			rq->skip = current;
++	}
++
++	preempt_disable();
++	raw_spin_unlock_irq(&rq->lock);
++	sched_preempt_enable_no_resched();
++
++	schedule();
++}
++
++/**
++ * sys_sched_yield - yield the current processor to other threads.
++ *
++ * This function yields the current CPU to other tasks. If there are no
++ * other threads running on this CPU then this function will return.
++ *
++ * Return: 0.
++ */
++SYSCALL_DEFINE0(sched_yield)
++{
++	do_sched_yield();
++	return 0;
++}
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++	if (should_resched(0)) {
++		preempt_schedule_common();
++		return 1;
++	}
++#ifndef CONFIG_PREEMPT_RCU
++	rcu_all_qs();
++#endif
++	return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
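++
++/*
++ * Typical in-kernel pattern served by __cond_resched(): a long loop in
++ * process context offering voluntary preemption points (process_item
++ * and the bounds are hypothetical):
++ *
++ *	for (i = 0; i < nr_items; i++) {
++ *		process_item(i);
++ *		cond_resched();	// reschedule here if needed, see above
++ *	}
++ */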
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION.  We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held(lock);
++
++	if (spin_needbreak(lock) || resched) {
++		spin_unlock(lock);
++		if (resched)
++			preempt_schedule_common();
++		else
++			cpu_relax();
++		ret = 1;
++		spin_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
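++
++/*
++ * Sketch of the cond_resched_lock() pattern this implements: drain a
++ * list under a spinlock without hogging the CPU. A non-zero return
++ * means the lock was dropped and retaken, so cached state must be
++ * revalidated (handle_one is a hypothetical helper):
++ *
++ *	spin_lock(&q->lock);
++ *	while (!list_empty(&q->head)) {
++ *		handle_one(q);
++ *		if (cond_resched_lock(&q->lock))
++ *			continue;	// list may have changed meanwhile
++ *	}
++ *	spin_unlock(&q->lock);
++ */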
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_read(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		read_unlock(lock);
++		if (resched)
++			preempt_schedule_common();
++		else
++			cpu_relax();
++		ret = 1;
++		read_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++	int resched = should_resched(PREEMPT_LOCK_OFFSET);
++	int ret = 0;
++
++	lockdep_assert_held_write(lock);
++
++	if (rwlock_needbreak(lock) || resched) {
++		write_unlock(lock);
++		if (resched)
++			preempt_schedule_common();
++		else
++			cpu_relax();
++		ret = 1;
++		write_lock(lock);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++/**
++ * yield - yield the current processor to other threads.
++ *
++ * Do not ever use this function, there's a 99% chance you're doing it wrong.
++ *
++ * The scheduler is at all times free to pick the calling task as the most
++ * eligible task to run, if removing the yield() call from your code breaks
++ * it, it's already broken.
++ *
++ * Typical broken usage is:
++ *
++ * while (!event)
++ * 	yield();
++ *
++ * where one assumes that yield() will let 'the other' process run that will
++ * make event true. If the current task is a SCHED_FIFO task that will never
++ * happen. Never use yield() as a progress guarantee!!
++ *
++ * If you want to use yield() to wait for something, use wait_event().
++ * If you want to use yield() to be 'nice' for others, use cond_resched().
++ * If you still want to use yield(), do not!
++ */
++void __sched yield(void)
++{
++	set_current_state(TASK_RUNNING);
++	do_sched_yield();
++}
++EXPORT_SYMBOL(yield);
++
++/**
++ * yield_to - yield the current processor to another thread in
++ * your thread group, or accelerate that thread toward the
++ * processor it's on.
++ * @p: target task
++ * @preempt: whether task preemption is allowed or not
++ *
++ * It's the caller's job to ensure that the target task struct
++ * can't go away on us before we can do any checks.
++ *
++ * In Alt schedule FW, yield_to is not supported.
++ *
++ * Return:
++ *	true (>0) if we indeed boosted the target task.
++ *	false (0) if we failed to boost the target.
++ *	-ESRCH if there's no task to yield to.
++ */
++int __sched yield_to(struct task_struct *p, bool preempt)
++{
++	return 0;
++}
++EXPORT_SYMBOL_GPL(yield_to);
++
++int io_schedule_prepare(void)
++{
++	int old_iowait = current->in_iowait;
++
++	current->in_iowait = 1;
++	blk_schedule_flush_plug(current);
++
++	return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++	current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO.  Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++	int token;
++	long ret;
++
++	token = io_schedule_prepare();
++	ret = schedule_timeout(timeout);
++	io_schedule_finish(token);
++
++	return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++	int token;
++
++	token = io_schedule_prepare();
++	schedule();
++	io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
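++
++/*
++ * Sketch of how a caller would use the io_schedule*() helpers above
++ * while waiting for a device (wait loop simplified, req->done is a
++ * hypothetical completion flag):
++ *
++ *	while (!READ_ONCE(req->done)) {
++ *		set_current_state(TASK_UNINTERRUPTIBLE);
++ *		if (!READ_ONCE(req->done))
++ *			io_schedule();	// sleep accounted as iowait
++ *	}
++ *	__set_current_state(TASK_RUNNING);
++ */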
++
++/**
++ * sys_sched_get_priority_max - return maximum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the maximum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = MAX_RT_PRIO - 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
++
++/**
++ * sys_sched_get_priority_min - return minimum RT priority.
++ * @policy: scheduling class.
++ *
++ * Return: On success, this syscall returns the minimum
++ * rt_priority that can be used by a given scheduling class.
++ * On failure, a negative error code is returned.
++ */
++SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
++{
++	int ret = -EINVAL;
++
++	switch (policy) {
++	case SCHED_FIFO:
++	case SCHED_RR:
++		ret = 1;
++		break;
++	case SCHED_NORMAL:
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		ret = 0;
++		break;
++	}
++	return ret;
++}
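++
++/*
++ * Userspace sketch querying the range reported above; for SCHED_FIFO
++ * this kernel reports [1, MAX_RT_PRIO - 1], i.e. [1, 99]:
++ *
++ *	#include <sched.h>
++ *
++ *	int lo = sched_get_priority_min(SCHED_FIFO);	// 1
++ *	int hi = sched_get_priority_max(SCHED_FIFO);	// 99
++ */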
++
++static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
++{
++	struct task_struct *p;
++	int retval;
++
++	alt_sched_debug();
++
++	if (pid < 0)
++		return -EINVAL;
++
++	retval = -ESRCH;
++	rcu_read_lock();
++	p = find_process_by_pid(pid);
++	if (!p)
++		goto out_unlock;
++
++	retval = security_task_getscheduler(p);
++	if (retval)
++		goto out_unlock;
++	rcu_read_unlock();
++
++	*t = ns_to_timespec64(sched_timeslice_ns);
++	return 0;
++
++out_unlock:
++	rcu_read_unlock();
++	return retval;
++}
++
++/**
++ * sys_sched_rr_get_interval - return the default timeslice of a process.
++ * @pid: pid of the process.
++ * @interval: userspace pointer to the timeslice value.
++ *
++ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
++ * an error code.
++ */
++SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
++		struct __kernel_timespec __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_timespec64(&t, interval);
++
++	return retval;
++}
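++
++/*
++ * Userspace sketch; note that in this scheduler the reported interval
++ * is the fixed sched_timeslice_ns for any pid, as implemented above:
++ *
++ *	#include <sched.h>
++ *	#include <stdio.h>
++ *	#include <time.h>
++ *
++ *	struct timespec ts;
++ *	if (sched_rr_get_interval(0, &ts) == 0)
++ *		printf("timeslice: %ld.%09ld s\n", ts.tv_sec, ts.tv_nsec);
++ */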
++
++#ifdef CONFIG_COMPAT_32BIT_TIME
++SYSCALL_DEFINE2(sched_rr_get_interval_time32, pid_t, pid,
++		struct old_timespec32 __user *, interval)
++{
++	struct timespec64 t;
++	int retval = sched_rr_get_interval(pid, &t);
++
++	if (retval == 0)
++		retval = put_old_timespec32(&t, interval);
++	return retval;
++}
++#endif
++
++void sched_show_task(struct task_struct *p)
++{
++	unsigned long free = 0;
++	int ppid;
++
++	if (!try_get_task_stack(p))
++		return;
++
++	pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++	if (p->state == TASK_RUNNING)
++		pr_cont("  running task    ");
++#ifdef CONFIG_DEBUG_STACK_USAGE
++	free = stack_not_used(p);
++#endif
++	ppid = 0;
++	rcu_read_lock();
++	if (pid_alive(p))
++		ppid = task_pid_nr(rcu_dereference(p->real_parent));
++	rcu_read_unlock();
++	pr_cont(" stack:%5lu pid:%5d ppid:%6d flags:0x%08lx\n",
++		free, task_pid_nr(p), ppid,
++		(unsigned long)task_thread_info(p)->flags);
++
++	print_worker_info(KERN_INFO, p);
++	print_stop_info(KERN_INFO, p);
++	show_stack(p, NULL, KERN_INFO);
++	put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++	/* no filter, everything matches */
++	if (!state_filter)
++		return true;
++
++	/* filter, but doesn't match */
++	if (!(p->state & state_filter))
++		return false;
++
++	/*
++	 * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++	 * TASK_KILLABLE).
++	 */
++	if (state_filter == TASK_UNINTERRUPTIBLE && p->state == TASK_IDLE)
++		return false;
++
++	return true;
++}
++
++void show_state_filter(unsigned long state_filter)
++{
++	struct task_struct *g, *p;
++
++	rcu_read_lock();
++	for_each_process_thread(g, p) {
++		/*
++		 * reset the NMI-timeout, listing all files on a slow
++		 * console might take a lot of time:
++		 * Also, reset softlockup watchdogs on all CPUs, because
++		 * another CPU might be blocked waiting for us to process
++		 * an IPI.
++		 */
++		touch_nmi_watchdog();
++		touch_all_softlockup_watchdogs();
++		if (state_filter_match(state_filter, p))
++			sched_show_task(p);
++	}
++
++#ifdef CONFIG_SCHED_DEBUG
++	/* TODO: Alt schedule FW should support this
++	if (!state_filter)
++		sysrq_sched_debug_show();
++	*/
++#endif
++	rcu_read_unlock();
++	/*
++	 * Only show locks if all tasks are dumped:
++	 */
++	if (!state_filter)
++		debug_show_all_locks();
++}
++
++void dump_cpu_task(int cpu)
++{
++	pr_info("Task dump for CPU %d:\n", cpu);
++	sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void init_idle(struct task_struct *idle, int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	__sched_fork(0, idle);
++
++	raw_spin_lock_irqsave(&idle->pi_lock, flags);
++	raw_spin_lock(&rq->lock);
++	update_rq_clock(rq);
++
++	idle->last_ran = rq->clock_task;
++	idle->state = TASK_RUNNING;
++	idle->flags |= PF_IDLE;
++	sched_queue_init_idle(&rq->queue, idle);
++
++	scs_task_reset(idle);
++	kasan_unpoison_task_stack(idle);
++
++#ifdef CONFIG_SMP
++	/*
++	 * It's possible that init_idle() gets called multiple times on a task,
++	 * in that case do_set_cpus_allowed() will not do the right thing.
++	 *
++	 * And since this is boot we can forgo the serialisation.
++	 */
++	set_cpus_allowed_common(idle, cpumask_of(cpu));
++#endif
++
++	/* Silence PROVE_RCU */
++	rcu_read_lock();
++	__set_task_cpu(idle, cpu);
++	rcu_read_unlock();
++
++	rq->idle = idle;
++	rcu_assign_pointer(rq->curr, idle);
++	idle->on_cpu = 1;
++
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++	/* Set the preempt count _outside_ the spinlocks! */
++	init_idle_preempt_count(idle, cpu);
++
++	ftrace_graph_init_idle_task(idle, cpu);
++	vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++			      const struct cpumask __maybe_unused *trial)
++{
++	return 1;
++}
++
++int task_can_attach(struct task_struct *p,
++		    const struct cpumask *cs_cpus_allowed)
++{
++	int ret = 0;
++
++	/*
++	 * Kthreads which disallow setaffinity shouldn't be moved
++	 * to a new cpuset; we don't want to change their CPU
++	 * affinity and isolating such threads by their set of
++	 * allowed nodes is unnecessary.  Thus, cpusets are not
++	 * applicable for such threads.  This prevents checking for
++	 * success of set_cpus_allowed_ptr() on all attached tasks
++	 * before cpus_mask may be changed.
++	 */
++	if (p->flags & PF_NO_SETAFFINITY)
++		ret = -EINVAL;
++
++	return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Ensures that the idle task is using init_mm right before its CPU goes
++ * offline.
++ */
++void idle_task_exit(void)
++{
++	struct mm_struct *mm = current->active_mm;
++
++	BUG_ON(current != this_rq()->idle);
++
++	if (mm != &init_mm) {
++		switch_mm(mm, &init_mm, current);
++		finish_arch_post_lock_switch();
++	}
++
++	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++	struct task_struct *p = arg;
++	struct rq *rq = this_rq();
++	struct rq_flags rf;
++	int cpu;
++
++	raw_spin_lock_irq(&p->pi_lock);
++	rq_lock(rq, &rf);
++
++	update_rq_clock(rq);
++
++	if (task_rq(p) == rq && task_on_rq_queued(p)) {
++		cpu = select_fallback_rq(rq->cpu, p);
++		rq = __migrate_task(rq, p, cpu);
++	}
++
++	rq_unlock(rq, &rf);
++	raw_spin_unlock_irq(&p->pi_lock);
++
++	put_task_struct(p);
++
++	return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE; when !cpu_active(), but it is only
++ * effective while the hotplug state machine is going down.
++ */
++static void balance_push(struct rq *rq)
++{
++	struct task_struct *push_task = rq->curr;
++
++	lockdep_assert_held(&rq->lock);
++	SCHED_WARN_ON(rq->cpu != smp_processor_id());
++
++	/*
++	 * Ensure the thing is persistent until balance_push_set(.on = false);
++	 */
++	rq->balance_callback = &balance_push_callback;
++
++	/*
++	 * Only active while going offline.
++	 */
++	if (!cpu_dying(rq->cpu))
++		return;
++
++	/*
++	 * Both the cpu-hotplug and stop task are in this case and are
++	 * required to complete the hotplug process.
++	 *
++	 * XXX: the idle task does not match kthread_is_per_cpu() due to
++	 * histerical raisins.
++	 */
++	if (rq->idle == push_task ||
++	    kthread_is_per_cpu(push_task) ||
++	    is_migration_disabled(push_task)) {
++
++		/*
++		 * If this is the idle task on the outgoing CPU try to wake
++		 * up the hotplug control thread which might wait for the
++		 * last task to vanish. The rcuwait_active() check is
++		 * accurate here because the waiter is pinned on this CPU
++		 * and can't obviously be running in parallel.
++		 *
++		 * On RT kernels this also has to check whether there are
++		 * pinned and scheduled out tasks on the runqueue. They
++		 * need to leave the migrate disabled section first.
++		 */
++		if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++		    rcuwait_active(&rq->hotplug_wait)) {
++			raw_spin_unlock(&rq->lock);
++			rcuwait_wake_up(&rq->hotplug_wait);
++			raw_spin_lock(&rq->lock);
++		}
++		return;
++	}
++
++	get_task_struct(push_task);
++	/*
++	 * Temporarily drop rq->lock such that we can wake-up the stop task.
++	 * Both preemption and IRQs are still disabled.
++	 */
++	raw_spin_unlock(&rq->lock);
++	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++			    this_cpu_ptr(&push_work));
++	/*
++	 * At this point need_resched() is true and we'll take the loop in
++	 * schedule(). The next pick is obviously going to be the stop task
++	 * which kthread_is_per_cpu() and will push this task away.
++	 */
++	raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++	struct rq *rq = cpu_rq(cpu);
++	struct rq_flags rf;
++
++	rq_lock_irqsave(rq, &rf);
++	if (on) {
++		WARN_ON_ONCE(rq->balance_callback);
++		rq->balance_callback = &balance_push_callback;
++	} else if (rq->balance_callback == &balance_push_callback) {
++		rq->balance_callback = NULL;
++	}
++	rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPU's hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++	struct rq *rq = this_rq();
++
++	rcuwait_wait_event(&rq->hotplug_wait,
++			   rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++			   TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++	if (rq->online)
++		rq->online = false;
++}
++
++static void set_rq_online(struct rq *rq)
++{
++	if (!rq->online)
++		rq->online = true;
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask.  If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++	if (cpuhp_tasks_frozen) {
++		/*
++		 * num_cpus_frozen tracks how many CPUs are involved in suspend
++		 * resume sequence. As long as this is not the last online
++		 * operation in the resume sequence, just build a single sched
++		 * domain, ignoring cpusets.
++		 */
++		partition_sched_domains(1, NULL, NULL);
++		if (--num_cpus_frozen)
++			return;
++		/*
++		 * This is the last CPU online operation. So fall through and
++		 * restore the original sched domains by considering the
++		 * cpuset configurations.
++		 */
++		cpuset_force_rebuild();
++	}
++
++	cpuset_update_active_cpus();
++}
++
++static int cpuset_cpu_inactive(unsigned int cpu)
++{
++	if (!cpuhp_tasks_frozen) {
++		cpuset_update_active_cpus();
++	} else {
++		num_cpus_frozen++;
++		partition_sched_domains(1, NULL, NULL);
++	}
++	return 0;
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/*
++	 * Clear the balance_push callback and prepare to schedule
++	 * regular tasks.
++	 */
++	balance_push_set(cpu, false);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going up, increment the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
++		static_branch_inc_cpuslocked(&sched_smt_present);
++#endif
++	set_cpu_active(cpu, true);
++
++	if (sched_smp_initialized)
++		cpuset_cpu_active();
++
++	/*
++	 * Put the rq online, if not already. This happens:
++	 *
++	 * 1) In the early boot process, because we build the real domains
++	 *    after all cpus have been brought up.
++	 *
++	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++	 *    domains.
++	 */
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	set_rq_online(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++	int ret;
++
++	set_cpu_active(cpu, false);
++
++	/*
++	 * From this point forward, this CPU will refuse to run any task that
++	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++	 * push those tasks away until this gets cleared, see
++	 * sched_cpu_dying().
++	 */
++	balance_push_set(cpu, true);
++
++	/*
++	 * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++	 * users of this state to go away such that all new such users will
++	 * observe it.
++	 *
++	 * Specifically, we rely on ttwu to no longer target this CPU, see
++	 * ttwu_queue_cond() and is_cpu_allowed().
++	 *
++	 * Do the sync before parking smpboot threads, to take care of the rcu boost case.
++	 */
++	synchronize_rcu();
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	update_rq_clock(rq);
++	set_rq_offline(rq);
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * When going down, decrement the number of cores with SMT present.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++		static_branch_dec_cpuslocked(&sched_smt_present);
++		if (!static_branch_likely(&sched_smt_present))
++			cpumask_clear(&sched_sg_idle_mask);
++	}
++#endif
++
++	if (!sched_smp_initialized)
++		return 0;
++
++	ret = cpuset_cpu_inactive(cpu);
++	if (ret) {
++		balance_push_set(cpu, false);
++		set_cpu_active(cpu, true);
++		return ret;
++	}
++
++	return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++
++	rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++	sched_rq_cpu_starting(cpu);
++	sched_tick_start(cpu);
++	return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++	balance_hotplug_wait();
++	return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the teardown thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++	long delta = calc_load_fold_active(rq, 1);
++
++	if (delta)
++		atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++	struct task_struct *g, *p;
++	int cpu = cpu_of(rq);
++
++	lockdep_assert_held(&rq->lock);
++
++	printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++	for_each_process_thread(g, p) {
++		if (task_cpu(p) != cpu)
++			continue;
++
++		if (!task_on_rq_queued(p))
++			continue;
++
++		printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++	}
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++	struct rq *rq = cpu_rq(cpu);
++	unsigned long flags;
++
++	/* Handle pending wakeups and then migrate everything off */
++	sched_tick_stop(cpu);
++
++	raw_spin_lock_irqsave(&rq->lock, flags);
++	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++		WARN(true, "Dying CPU not properly vacated!");
++		dump_rq_tasks(rq, KERN_WARNING);
++	}
++	raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++	calc_load_migrate(rq);
++	hrtick_clear(rq);
++	return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
++static void sched_init_topology_cpumask_early(void)
++{
++	int cpu;
++	cpumask_t *tmp;
++
++	for_each_possible_cpu(cpu) {
++		/* init affinity masks */
++		tmp = per_cpu(sched_cpu_affinity_masks, cpu);
++
++		cpumask_copy(tmp, cpumask_of(cpu));
++		tmp++;
++		cpumask_copy(tmp, cpu_possible_mask);
++		cpumask_clear_cpu(cpu, tmp);
++		per_cpu(sched_cpu_affinity_end_mask, cpu) = ++tmp;
++		/* init topo masks */
++		tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++		cpumask_copy(tmp, cpumask_of(cpu));
++		tmp++;
++		cpumask_copy(tmp, cpu_possible_mask);
++		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++		/*per_cpu(sd_llc_id, cpu) = cpu;*/
++	}
++}
++
++#define TOPOLOGY_CPUMASK(name, mask, last) \
++	if (cpumask_and(chk, chk, mask)) {					\
++		cpumask_copy(topo, mask);					\
++		printk(KERN_INFO "sched: cpu#%02d affinity: 0x%08lx topo: 0x%08lx - "#name,\
++		       cpu, (chk++)->bits[0], (topo++)->bits[0]);		\
++	}									\
++	if (!last)								\
++		cpumask_complement(chk, mask)
++
++static void sched_init_topology_cpumask(void)
++{
++	int cpu;
++	cpumask_t *chk, *topo;
++
++	for_each_online_cpu(cpu) {
++		/* take the chance to reset the time slice for idle tasks */
++		cpu_rq(cpu)->idle->time_slice = sched_timeslice_ns;
++
++		chk = per_cpu(sched_cpu_affinity_masks, cpu) + 1;
++		topo = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++
++		cpumask_complement(chk, cpumask_of(cpu));
++#ifdef CONFIG_SCHED_SMT
++		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++		per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++		per_cpu(sched_cpu_llc_mask, cpu) = topo;
++		TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++		TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++		per_cpu(sched_cpu_affinity_end_mask, cpu) = chk;
++		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++		       cpu, per_cpu(sd_llc_id, cpu),
++		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++			      per_cpu(sched_cpu_topo_masks, cpu)));
++	}
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++	/* Move init over to a non-isolated CPU */
++	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_DOMAIN)) < 0)
++		BUG();
++
++	sched_init_topology_cpumask();
++
++	sched_smp_initialized = true;
++}
++#else
++void __init sched_init_smp(void)
++{
++	cpu_rq(0)->idle->time_slice = sched_timeslice_ns;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++	return in_lock_functions(addr) ||
++		(addr >= (unsigned long)__sched_text_start
++		&& addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++	struct cgroup_subsys_state css;
++
++	struct rcu_head rcu;
++	struct list_head list;
++
++	struct task_group *parent;
++	struct list_head siblings;
++	struct list_head children;
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	unsigned long		shares;
++#endif
++};
++
++/*
++ * Default task group.
++ * Every task in the system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __read_mostly;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++	int i;
++	struct rq *rq;
++
++	printk(KERN_INFO ALT_SCHED_VERSION_MSG);
++
++	wait_bit_init();
++
++#ifdef CONFIG_SMP
++	for (i = 0; i < SCHED_BITS; i++)
++		cpumask_copy(&sched_rq_watermark[i], cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++	task_group_cache = KMEM_CACHE(task_group, 0);
++
++	list_add(&root_task_group.list, &task_groups);
++	INIT_LIST_HEAD(&root_task_group.children);
++	INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++	for_each_possible_cpu(i) {
++		rq = cpu_rq(i);
++
++		sched_queue_init(&rq->queue);
++		rq->watermark = IDLE_WM;
++		rq->skip = NULL;
++
++		raw_spin_lock_init(&rq->lock);
++		rq->nr_running = rq->nr_uninterruptible = 0;
++		rq->calc_load_active = 0;
++		rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++		rq->online = false;
++		rq->cpu = i;
++
++#ifdef CONFIG_SCHED_SMT
++		rq->active_balance = 0;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++		INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++		rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++		rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++		rq->nr_switches = 0;
++
++		hrtick_rq_init(rq);
++		atomic_set(&rq->nr_iowait, 0);
++	}
++#ifdef CONFIG_SMP
++	/* Set rq->online for cpu 0 */
++	cpu_rq(0)->online = true;
++#endif
++	/*
++	 * The boot idle thread does lazy MMU switching as well:
++	 */
++	mmgrab(&init_mm);
++	enter_lazy_tlb(&init_mm, current);
++
++	/*
++	 * Make us the idle thread. Technically, schedule() should not be
++	 * called from this thread, however somewhere below it might be,
++	 * but because we are the idle thread, we just pick up running again
++	 * when this runqueue becomes "idle".
++	 */
++	init_idle(current, smp_processor_id());
++
++	calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++	idle_thread_set_boot_cpu();
++	balance_push_set(smp_processor_id(), false);
++
++	sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++	init_schedstats();
++
++	psi_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++static inline int preempt_count_equals(int preempt_offset)
++{
++	int nested = preempt_count() + rcu_preempt_depth();
++
++	return (nested == preempt_offset);
++}
++
++void __might_sleep(const char *file, int line, int preempt_offset)
++{
++	/*
++	 * Blocking primitives will set (and therefore destroy) current->state,
++	 * since we will exit with TASK_RUNNING make sure we enter with it,
++	 * otherwise we will destroy state.
++	 */
++	WARN_ONCE(current->state != TASK_RUNNING && current->task_state_change,
++			"do not call blocking ops when !TASK_RUNNING; "
++			"state=%lx set at [<%p>] %pS\n",
++			current->state,
++			(void *)current->task_state_change,
++			(void *)current->task_state_change);
++
++	___might_sleep(file, line, preempt_offset);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++void ___might_sleep(const char *file, int line, int preempt_offset)
++{
++	/* Ratelimiting timestamp: */
++	static unsigned long prev_jiffy;
++
++	unsigned long preempt_disable_ip;
++
++	/* WARN_ON_ONCE() by default, no rate limit required: */
++	rcu_sleep_check();
++
++	if ((preempt_count_equals(preempt_offset) && !irqs_disabled() &&
++	     !is_idle_task(current) && !current->non_block_count) ||
++	    system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++	    oops_in_progress)
++		return;
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	/* Save this before calling printk(), since that will clobber it: */
++	preempt_disable_ip = get_preempt_disable_ip(current);
++
++	printk(KERN_ERR
++		"BUG: sleeping function called from invalid context at %s:%d\n",
++			file, line);
++	printk(KERN_ERR
++		"in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++			in_atomic(), irqs_disabled(), current->non_block_count,
++			current->pid, current->comm);
++
++	if (task_stack_end_corrupted(current))
++		printk(KERN_EMERG "Thread overran stack, or stack corrupted\n");
++
++	debug_show_held_locks(current);
++	if (irqs_disabled())
++		print_irqtrace_events(current);
++#ifdef CONFIG_DEBUG_PREEMPT
++	if (!preempt_count_equals(preempt_offset)) {
++		pr_err("Preemption disabled at:");
++		print_ip_sym(KERN_ERR, preempt_disable_ip);
++	}
++#endif
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(___might_sleep);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > preempt_offset)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++			in_atomic(), irqs_disabled(),
++			current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++	static unsigned long prev_jiffy;
++
++	if (irqs_disabled())
++		return;
++
++	if (is_migration_disabled(current))
++		return;
++
++	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++		return;
++
++	if (preempt_count() > 0)
++		return;
++
++	if (current->migration_flags & MDF_FORCE_ENABLED)
++		return;
++
++	if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++		return;
++	prev_jiffy = jiffies;
++
++	pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++	pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++	       in_atomic(), irqs_disabled(), is_migration_disabled(current),
++	       current->pid, current->comm);
++
++	debug_show_held_locks(current);
++	dump_stack();
++	add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++	struct task_struct *g, *p;
++	struct sched_attr attr = {
++		.sched_policy = SCHED_NORMAL,
++	};
++
++	read_lock(&tasklist_lock);
++	for_each_process_thread(g, p) {
++		/*
++		 * Only normalize user tasks:
++		 */
++		if (p->flags & PF_KTHREAD)
++			continue;
++
++		if (!rt_task(p)) {
++			/*
++			 * Renice negative nice level userspace
++			 * tasks back to 0:
++			 */
++			if (task_nice(p) < 0)
++				set_user_nice(p, 0);
++			continue;
++		}
++
++		__sched_setscheduler(p, &attr, false, false);
++	}
++	read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_IA64) || defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for the IA64 MCA handling, or kdb.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++	return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_IA64) || defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_IA64
++/**
++ * ia64_set_curr_task - set the current task for a given CPU.
++ * @cpu: the processor in question.
++ * @p: the task pointer to set.
++ *
++ * Description: This function must only be used when non-maskable interrupts
++ * are serviced on a separate stack.  It allows the architecture to switch the
++ * notion of the current task on a CPU in a non-blocking manner.  This function
++ * must be called with all CPUs synchronised and interrupts disabled; the
++ * caller must save the original value of the current task (see
++ * curr_task() above) and restore that value before reenabling interrupts and
++ * re-starting the system.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ */
++void ia64_set_curr_task(int cpu, struct task_struct *p)
++{
++	cpu_curr(cpu) = p;
++}
++
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++	kmem_cache_free(task_group_cache, tg);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++	struct task_group *tg;
++
++	tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++	if (!tg)
++		return ERR_PTR(-ENOMEM);
++
++	return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* rcu callback to free various structures associated with a task group */
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++	/* Now it should be safe to free those cfs_rqs */
++	sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++	/* Wait for possible concurrent references to cfs_rqs to complete */
++	call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++void sched_offline_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++	return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++	struct task_group *parent = css_tg(parent_css);
++	struct task_group *tg;
++
++	if (!parent) {
++		/* This is early initialization for the top cgroup */
++		return &root_task_group.css;
++	}
++
++	tg = sched_create_group(parent);
++	if (IS_ERR(tg))
++		return ERR_PTR(-ENOMEM);
++	return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++	struct task_group *parent = css_tg(css->parent);
++
++	if (parent)
++		sched_online_group(tg, parent);
++	return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	sched_offline_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++	struct task_group *tg = css_tg(css);
++
++	/*
++	 * Relies on the RCU grace period between css_released() and this.
++	 */
++	sched_free_group(tg);
++}
++
++static void cpu_cgroup_fork(struct task_struct *task)
++{
++}
++
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++	return 0;
++}
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++static DEFINE_MUTEX(shares_mutex);
++
++int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++	/*
++	 * We can't change the weight of the root cgroup.
++	 */
++	if (&root_task_group == tg)
++		return -EINVAL;
++
++	shares = clamp(shares, scale_load(MIN_SHARES), scale_load(MAX_SHARES));
++
++	mutex_lock(&shares_mutex);
++	if (tg->shares == shares)
++		goto done;
++
++	tg->shares = shares;
++done:
++	mutex_unlock(&shares_mutex);
++	return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++				struct cftype *cftype, u64 shareval)
++{
++	if (shareval > scale_load_down(ULONG_MAX))
++		shareval = MAX_SHARES;
++	return sched_group_set_shares(css_tg(css), scale_load(shareval));
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++			       struct cftype *cft)
++{
++	struct task_group *tg = css_tg(css);
++
++	return (u64) scale_load_down(tg->shares);
++}
++#endif
++
++static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_FAIR_GROUP_SCHED
++	{
++		.name = "shares",
++		.read_u64 = cpu_shares_read_u64,
++		.write_u64 = cpu_shares_write_u64,
++	},
++#endif
++	{ }	/* Terminate */
++};
++
++
++static struct cftype cpu_files[] = {
++	{ }	/* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++			       struct cgroup_subsys_state *css)
++{
++	return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++	.css_alloc	= cpu_cgroup_css_alloc,
++	.css_online	= cpu_cgroup_css_online,
++	.css_released	= cpu_cgroup_css_released,
++	.css_free	= cpu_cgroup_css_free,
++	.css_extra_stat_show = cpu_extra_stat_show,
++	.fork		= cpu_cgroup_fork,
++	.can_attach	= cpu_cgroup_can_attach,
++	.attach		= cpu_cgroup_attach,
++	.legacy_cftypes	= cpu_legacy_files,
++	.dfl_cftypes	= cpu_files,
++	.early_init	= true,
++	.threaded	= true,
++};
++#endif	/* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
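
The sched_group_set_shares() path above clamps the cgroup weight between
scale_load(MIN_SHARES) and scale_load(MAX_SHARES) before storing it. A
minimal standalone sketch of that fixed-point clamp, assuming the 64-bit
SCHED_FIXEDPOINT_SHIFT of 10 and the MIN/MAX_SHARES values defined later
in alt_sched.h (the value in main() is illustrative only):

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT	10
#define MIN_SHARES		(1UL << 1)
#define MAX_SHARES		(1UL << 18)

/* scale_load() widens a user-visible weight into fixed point. */
static unsigned long scale_load(unsigned long w)
{
	return w << SCHED_FIXEDPOINT_SHIFT;
}

/* Same bounds check sched_group_set_shares() applies via clamp(). */
static unsigned long clamp_shares(unsigned long shares)
{
	unsigned long lo = scale_load(MIN_SHARES);
	unsigned long hi = scale_load(MAX_SHARES);

	return shares < lo ? lo : shares > hi ? hi : shares;
}

int main(void)
{
	/* The default cpu.shares of 1024 sits comfortably inside the range. */
	printf("%lu\n", clamp_shares(scale_load(1024)));
	return 0;
}
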
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1212a031700e
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,31 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date  : 2020
++ */
++#include "sched.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...)			\
++ do {						\
++	if (m)					\
++		seq_printf(m, x);		\
++	else					\
++		pr_cont(x);			\
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++			  struct seq_file *m)
++{
++	SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++						get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
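
The SEQ_printf() macro above is the usual kernel trick for letting one
formatting routine feed both a seq_file and the console. A userspace
analogue of the same dispatch, sketched with a FILE * standing in for the
seq_file (names here are illustrative):

#include <stdio.h>
#include <stdarg.h>

/* Write to the given stream if there is one, otherwise fall back to
 * stderr -- the same either/or that SEQ_printf() does with
 * seq_printf() versus pr_cont(). */
static void seq_or_console(FILE *m, const char *fmt, ...)
{
	va_list ap;

	va_start(ap, fmt);
	vfprintf(m ? m : stderr, fmt, ap);
	va_end(ap);
}

int main(void)
{
	seq_or_console(stdout, "to the \"seq_file\"\n");
	seq_or_console(NULL, "to the console\n");
	return 0;
}
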
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..f9f79422bf0e
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,710 @@
++#ifndef ALT_SCHED_H
++#define ALT_SCHED_H
++
++#include <linux/sched.h>
++
++#include <linux/sched/clock.h>
++#include <linux/sched/cpufreq.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/init.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/signal.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/sysctl.h>
++#include <linux/sched/task.h>
++#include <linux/sched/topology.h>
++#include <linux/sched/wake_q.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <linux/cgroup.h>
++#include <linux/cpufreq.h>
++#include <linux/cpuidle.h>
++#include <linux/cpuset.h>
++#include <linux/ctype.h>
++#include <linux/debugfs.h>
++#include <linux/kthread.h>
++#include <linux/livepatch.h>
++#include <linux/membarrier.h>
++#include <linux/proc_fs.h>
++#include <linux/psi.h>
++#include <linux/slab.h>
++#include <linux/stop_machine.h>
++#include <linux/suspend.h>
++#include <linux/swait.h>
++#include <linux/syscalls.h>
++#include <linux/tsacct_kern.h>
++
++#include <asm/tlb.h>
++
++#ifdef CONFIG_PARAVIRT
++# include <asm/paravirt.h>
++#endif
++
++#include "cpupri.h"
++
++#include <trace/events/sched.h>
++
++#ifdef CONFIG_SCHED_BMQ
++/* bits:
++ * RT(0-99), (Low prio adj range, nice width, high prio adj range) / 2, cpu idle task */
++#define SCHED_BITS	(MAX_RT_PRIO + NICE_WIDTH / 2 + MAX_PRIORITY_ADJ + 1)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++/* bits: RT(0-99), reserved(100-127), NORMAL_PRIO_NUM, cpu idle task */
++#define SCHED_BITS	(MIN_NORMAL_PRIO + NORMAL_PRIO_NUM + 1)
++#endif /* CONFIG_SCHED_PDS */
++
++#define IDLE_TASK_SCHED_PRIO	(SCHED_BITS - 1)
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x)	WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x)	({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (e.g. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++	unsigned long __w = (w); \
++	if (__w) \
++		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++	__w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w)		(w)
++# define scale_load_down(w)	(w)
++#endif
++
++#ifdef CONFIG_FAIR_GROUP_SCHED
++#define ROOT_TASK_GROUP_LOAD	NICE_0_LOAD
++
++/*
++ * A weight of 0 or 1 can cause arithmetic problems.
++ * The weight of a cfs_rq is the sum of the weights of the entities
++ * queued on it, so the weight of an entity should not be too large,
++ * and neither should the shares value of a task group.
++ * (The default weight is 1024 - so there's no practical
++ *  limitation from this.)
++ */
++#define MIN_SHARES		(1UL <<  1)
++#define MAX_SHARES		(1UL << 18)
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED	1
++#define TASK_ON_RQ_MIGRATING	2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++	return p->on_rq == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/*
++ * wake flags
++ */
++#define WF_SYNC		0x01		/* waker goes to sleep after wakeup */
++#define WF_FORK		0x02		/* child wakeup after fork */
++#define WF_MIGRATED	0x04		/* internal use, task got migrated */
++#define WF_ON_CPU	0x08		/* Wakee is on_rq */
++
++#define SCHED_QUEUE_BITS	(SCHED_BITS - 1)
++
++struct sched_queue {
++	DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++	struct list_head heads[SCHED_BITS];
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++	/* runqueue lock: */
++	raw_spinlock_t lock;
++
++	struct task_struct __rcu *curr;
++	struct task_struct *idle, *stop, *skip;
++	struct mm_struct *prev_mm;
++
++	struct sched_queue	queue;
++#ifdef CONFIG_SCHED_PDS
++	u64			time_edge;
++#endif
++	unsigned long watermark;
++
++	/* switch count */
++	u64 nr_switches;
++
++	atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++	u64 last_seen_need_resched_ns;
++	int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++	int membarrier_state;
++#endif
++
++#ifdef CONFIG_SMP
++	int cpu;		/* cpu of this runqueue */
++	bool online;
++
++	unsigned int		ttwu_pending;
++	unsigned char		nohz_idle_balance;
++	unsigned char		idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++	struct sched_avg	avg_irq;
++#endif
++
++#ifdef CONFIG_SCHED_SMT
++	int active_balance;
++	struct cpu_stop_work	active_balance_work;
++#endif
++	struct callback_head	*balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++	struct rcuwait		hotplug_wait;
++#endif
++	unsigned int		nr_pinned;
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++	u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* calc_load related fields */
++	unsigned long calc_load_update;
++	long calc_load_active;
++
++	u64 clock, last_tick;
++	u64 last_ts_switch;
++	u64 clock_task;
++
++	unsigned int  nr_running;
++	unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++	call_single_data_t hrtick_csd;
++#endif
++	struct hrtimer		hrtick_timer;
++	ktime_t			hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++	/* latency stats */
++	struct sched_info rq_sched_info;
++	unsigned long long rq_cpu_time;
++	/* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++	/* sys_sched_yield() stats */
++	unsigned int yld_count;
++
++	/* schedule() stats */
++	unsigned int sched_switch;
++	unsigned int sched_count;
++	unsigned int sched_goidle;
++
++	/* try_to_wake_up() stats */
++	unsigned int ttwu_count;
++	unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++	/* Must be inspected within an RCU lock section */
++	struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++	call_single_data_t	nohz_csd;
++#endif
++	atomic_t		nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++};
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
++#define this_rq()		this_cpu_ptr(&runqueues)
++#define task_rq(p)		cpu_rq(task_cpu(p))
++#define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
++#define raw_rq()		raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++	ITSELF_LEVEL_SPACE_HOLDER,
++#ifdef CONFIG_SCHED_SMT
++	SMT_LEVEL_SPACE_HOLDER,
++#endif
++	COREGROUP_LEVEL_SPACE_HOLDER,
++	CORE_LEVEL_SPACE_HOLDER,
++	OTHER_LEVEL_SPACE_HOLDER,
++	NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DECLARE_PER_CPU(cpumask_t *, sched_cpu_llc_mask);
++
++static inline int __best_mask_cpu(int cpu, const cpumask_t *cpumask,
++				  const cpumask_t *mask)
++{
++#if NR_CPUS <= 64
++	unsigned long t;
++
++	while ((t = cpumask->bits[0] & mask->bits[0]) == 0UL)
++		mask++;
++
++	return __ffs(t);
++#else
++	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++		mask++;
++	return cpu;
++#endif
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++#if NR_CPUS <= 64
++	unsigned long llc_match;
++	cpumask_t *chk = per_cpu(sched_cpu_llc_mask, cpu);
++
++	if ((llc_match = mask->bits[0] & chk->bits[0])) {
++		unsigned long match;
++
++		chk = per_cpu(sched_cpu_topo_masks, cpu);
++		if (mask->bits[0] & chk->bits[0])
++			return cpu;
++
++#ifdef CONFIG_SCHED_SMT
++		chk++;
++		if ((match = mask->bits[0] & chk->bits[0]))
++			return __ffs(match);
++#endif
++
++		return __ffs(llc_match);
++	}
++
++	return __best_mask_cpu(cpu, mask, chk + 1);
++#else
++	cpumask_t llc_match;
++	cpumask_t *chk = per_cpu(sched_cpu_llc_mask, cpu);
++
++	if (cpumask_and(&llc_match, mask, chk)) {
++		cpumask_t tmp;
++
++		chk = per_cpu(sched_cpu_topo_masks, cpu);
++		if (cpumask_test_cpu(cpu, mask))
++			return cpu;
++
++#ifdef CONFIG_SCHED_SMT
++		chk++;
++		if (cpumask_and(&tmp, mask, chk))
++			return cpumask_any(&tmp);
++#endif
++
++		return cpumask_any(&llc_match);
++	}
++
++	return __best_mask_cpu(cpu, mask, chk + 1);
++#endif
++}
++
++extern void flush_smp_call_function_from_idle(void);
++
++#else  /* !CONFIG_SMP */
++static inline void flush_smp_call_function_from_idle(void) { }
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++	return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++	return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ: calls to
++	 * sched_info_xxxx() may be made without holding rq->lock.
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++	/*
++	 * Relax lockdep_assert_held() checking as in VRQ: calls to
++	 * sched_info_xxxx() may be made without holding rq->lock.
++	 * lockdep_assert_held(&rq->lock);
++	 */
++	return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP  - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP		0x01
++
++#define ENQUEUE_WAKEUP		0x01
++
++
++/*
++ * Below are the scheduler APIs used by other kernel code.
++ * They use a dummy rq_flags.
++ * TODO: BMQ needs to support these APIs for compatibility with the
++ * mainline scheduler code.
++ */
++struct rq_flags {
++	unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++	__acquires(p->pi_lock)
++	__acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++	__releases(rq->lock)
++	__releases(p->pi_lock)
++{
++	raw_spin_unlock(&rq->lock);
++	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++	__releases(rq->lock)
++{
++	raw_spin_unlock(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++	__acquires(rq->lock)
++{
++	struct rq *rq;
++
++	local_irq_disable();
++	rq = this_rq();
++	raw_spin_lock(&rq->lock);
++
++	return rq;
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++	return rq->curr == p;
++}
++
++static inline bool task_running(struct task_struct *p)
++{
++	return p->on_cpu;
++}
++
++extern int task_running_nice(struct task_struct *p);
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++	rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	WARN_ON(!rcu_read_lock_held());
++	return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++				  struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++	return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++	return rq->cpu;
++#else
++	return 0;
++#endif
++}
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT	0
++#define NOHZ_STATS_KICK_BIT	1
++
++#define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK	(NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu)	(&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++	u64			total;
++	u64			tick_delta;
++	u64			irq_start_time;
++	struct u64_stats_sync	sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
++ * subtracted and would never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++	struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++	unsigned int seq;
++	u64 total;
++
++	do {
++		seq = __u64_stats_fetch_begin(&irqtime->sync);
++		total = irqtime->total;
++	} while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++	return total;
++}
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid.  Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++	struct update_util_data *data;
++
++	data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data,
++						  cpu_of(rq)));
++	if (data)
++		data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant()	(true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant()	(false)
++#endif
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching will be
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV	0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++	int membarrier_state;
++
++	if (prev_mm == next_mm)
++		return;
++
++	membarrier_state = atomic_read(&next_mm->membarrier_state);
++	if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++		return;
++
++	WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++					struct mm_struct *prev_mm,
++					struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++#endif /* ALT_SCHED_H */
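
On machines with at most 64 CPUs, __best_mask_cpu() above reduces each
affinity level to one word-wide AND plus __ffs(). A standalone sketch of
that cascading scan; the three-level topology in main() is a made-up
example, not the patch's real mask layout:

#include <stdio.h>

/* First set bit, like the kernel's __ffs() on a non-zero word. */
static int first_bit(unsigned long w)
{
	return __builtin_ctzl(w);
}

/* Walk topology masks from the closest level outward and return the
 * first allowed CPU, mirroring the NR_CPUS <= 64 __best_mask_cpu().
 * Assumes the last level covers all CPUs, so the loop terminates. */
static int best_cpu(unsigned long allowed, const unsigned long *level)
{
	while ((allowed & *level) == 0UL)
		level++;
	return first_bit(allowed & *level);
}

int main(void)
{
	/* Hypothetical 8-CPU box seen from CPU 0: SMT siblings first,
	 * then the LLC, then everything. */
	unsigned long levels[] = { 0x03UL, 0x0fUL, 0xffUL };

	printf("%d\n", best_cpu(0xf0UL, levels)); /* -> 4 */
	return 0;
}
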
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..7635c00dde7f
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,111 @@
++#define ALT_SCHED_VERSION_MSG "sched/bmq: BMQ CPU Scheduler "ALT_SCHED_VERSION" by Alfred Chen.\n"
++
++/*
++ * BMQ only routines
++ */
++#define rq_switch_time(rq)	((rq)->clock - (rq)->last_ts_switch)
++#define boost_threshold(p)	(sched_timeslice_ns >>\
++				 (15 - MAX_PRIORITY_ADJ -  (p)->boost_prio))
++
++static inline void boost_task(struct task_struct *p)
++{
++	int limit;
++
++	switch (p->policy) {
++	case SCHED_NORMAL:
++		limit = -MAX_PRIORITY_ADJ;
++		break;
++	case SCHED_BATCH:
++	case SCHED_IDLE:
++		limit = 0;
++		break;
++	default:
++		return;
++	}
++
++	if (p->boost_prio > limit)
++		p->boost_prio--;
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++	if (p->boost_prio < MAX_PRIORITY_ADJ)
++		p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	return p->prio + p->boost_prio - MAX_RT_PRIO;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio : MAX_RT_PRIO / 2 + (p->prio + p->boost_prio) / 2;
++}
++
++static inline int
++task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
++{
++	return task_sched_prio(p);
++}
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++	return idx;
++}
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sched_timeslice_ns;
++
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p)) {
++		if (SCHED_RR != p->policy)
++			deboost_task(p);
++		requeue_task(p, rq);
++	}
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++
++inline int task_running_nice(struct task_struct *p)
++{
++	return (p->prio + p->boost_prio > DEFAULT_PRIO + MAX_PRIORITY_ADJ);
++}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = (p->boost_prio < 0) ?
++		p->boost_prio + MAX_PRIORITY_ADJ : MAX_PRIORITY_ADJ;
++}
++
++static void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++#ifdef CONFIG_SMP
++static void sched_task_ttwu(struct task_struct *p)
++{
++	if (this_rq()->clock_task - p->last_ran > sched_timeslice_ns)
++		boost_task(p);
++}
++#endif
++
++static void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++	if (rq_switch_time(rq) < boost_threshold(p))
++		boost_task(p);
++}
++
++static inline void update_rq_time_edge(struct rq *rq) {}
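
task_sched_prio() above folds the static priority and the dynamic
boost_prio into a single queue index. A quick standalone check of that
arithmetic, using mainline's MAX_RT_PRIO of 100; the boost of -7 in
main() is hypothetical, since the real bound is MAX_PRIORITY_ADJ,
defined elsewhere in the series:

#include <stdio.h>

#define MAX_RT_PRIO	100

/* Same mapping as task_sched_prio() in bmq.h above. */
static int bmq_prio(int prio, int boost_prio)
{
	return prio < MAX_RT_PRIO ? prio
				  : MAX_RT_PRIO / 2 + (prio + boost_prio) / 2;
}

int main(void)
{
	/* nice 0 maps to prio 120; a negative boost moves the task to a
	 * hotter queue. */
	printf("unboosted %d, boosted %d\n",
	       bmq_prio(120, 0), bmq_prio(120, -7));
	return 0;
}
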
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 4f09afd2f321..805b54e517ff 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -57,6 +57,13 @@ struct sugov_cpu {
+ 	unsigned long		bw_dl;
+ 	unsigned long		max;
+ 
++#ifdef CONFIG_SCHED_ALT
++	/* For general cpu load util */
++	s32			load_history;
++	u64			load_block;
++	u64			load_stamp;
++#endif
++
+ 	/* The field below is for single-CPU policies only: */
+ #ifdef CONFIG_NO_HZ_COMMON
+ 	unsigned long		saved_idle_calls;
+@@ -160,6 +167,7 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
+ 	return cpufreq_driver_resolve_freq(policy, freq);
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static void sugov_get_util(struct sugov_cpu *sg_cpu)
+ {
+ 	struct rq *rq = cpu_rq(sg_cpu->cpu);
+@@ -171,6 +179,55 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
+ 					  FREQUENCY_UTIL, NULL);
+ }
+ 
++#else /* CONFIG_SCHED_ALT */
++
++#define SG_CPU_LOAD_HISTORY_BITS	(sizeof(s32) * 8ULL)
++#define SG_CPU_UTIL_SHIFT		(8)
++#define SG_CPU_LOAD_HISTORY_SHIFT	(SG_CPU_LOAD_HISTORY_BITS - 1 - SG_CPU_UTIL_SHIFT)
++#define SG_CPU_LOAD_HISTORY_TO_UTIL(l)	(((l) >> SG_CPU_LOAD_HISTORY_SHIFT) & 0xff)
++
++#define LOAD_BLOCK(t)		((t) >> 17)
++#define LOAD_HALF_BLOCK(t)	((t) >> 16)
++#define BLOCK_MASK(t)		((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b)	(1UL << (SG_CPU_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT	LOAD_BLOCK_BIT(0)
++
++static void sugov_get_util(struct sugov_cpu *sg_cpu)
++{
++	unsigned long max = arch_scale_cpu_capacity(sg_cpu->cpu);
++
++	sg_cpu->max = max;
++	sg_cpu->bw_dl = 0;
++	sg_cpu->util = SG_CPU_LOAD_HISTORY_TO_UTIL(sg_cpu->load_history) *
++		(max >> SG_CPU_UTIL_SHIFT);
++}
++
++static inline void sugov_cpu_load_update(struct sugov_cpu *sg_cpu, u64 time)
++{
++	u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(sg_cpu->load_stamp),
++			SG_CPU_LOAD_HISTORY_BITS - 1);
++	u64 prev = !!(sg_cpu->load_history & CURRENT_LOAD_BIT);
++	u64 curr = !!cpu_rq(sg_cpu->cpu)->nr_running;
++
++	if (delta) {
++		sg_cpu->load_history = sg_cpu->load_history >> delta;
++
++		if (delta <= SG_CPU_UTIL_SHIFT) {
++			sg_cpu->load_block += (~BLOCK_MASK(sg_cpu->load_stamp)) * prev;
++			if (!!LOAD_HALF_BLOCK(sg_cpu->load_block) ^ curr)
++				sg_cpu->load_history ^= LOAD_BLOCK_BIT(delta);
++		}
++
++		sg_cpu->load_block = BLOCK_MASK(time) * prev;
++	} else {
++		sg_cpu->load_block += (time - sg_cpu->load_stamp) * prev;
++	}
++	if (prev ^ curr)
++		sg_cpu->load_history ^= CURRENT_LOAD_BIT;
++	sg_cpu->load_stamp = time;
++}
++#endif /* CONFIG_SCHED_ALT */
++
+ /**
+  * sugov_iowait_reset() - Reset the IO boost status of a CPU.
+  * @sg_cpu: the sugov data for the CPU to boost
+@@ -311,13 +368,19 @@ static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
+  */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
+ 		sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+ 
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+ 					      u64 time, unsigned int flags)
+ {
++#ifdef CONFIG_SCHED_ALT
++	sugov_cpu_load_update(sg_cpu, time);
++#endif /* CONFIG_SCHED_ALT */
++
+ 	sugov_iowait_boost(sg_cpu, time, flags);
+ 	sg_cpu->last_update = time;
+ 
+@@ -438,6 +501,10 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
+ 
+ 	raw_spin_lock(&sg_policy->update_lock);
+ 
++#ifdef CONFIG_SCHED_ALT
++	sugov_cpu_load_update(sg_cpu, time);
++#endif /* CONFIG_SCHED_ALT */
++
+ 	sugov_iowait_boost(sg_cpu, time, flags);
+ 	sg_cpu->last_update = time;
+ 
+@@ -598,6 +665,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ 	}
+ 
+ 	ret = sched_setattr_nocheck(thread, &attr);
++
+ 	if (ret) {
+ 		kthread_stop(thread);
+ 		pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+@@ -832,7 +900,9 @@ cpufreq_governor_init(schedutil_gov);
+ #ifdef CONFIG_ENERGY_MODEL
+ static void rebuild_sd_workfn(struct work_struct *work)
+ {
++#ifndef CONFIG_SCHED_ALT
+ 	rebuild_sched_domains_energy();
++#endif /* CONFIG_SCHED_ALT */
+ }
+ static DECLARE_WORK(rebuild_sd_work, rebuild_sd_workfn);
+ 
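
Under CONFIG_SCHED_ALT the sugov hook above keeps a 32-bit history of
per-block busy bits and takes the top eight as the load figure. A
standalone decode of that packing, with an arbitrary history word and a
stand-in capacity as inputs:

#include <stdio.h>
#include <stdint.h>

#define HISTORY_BITS	32
#define UTIL_SHIFT	8
#define HISTORY_SHIFT	(HISTORY_BITS - 1 - UTIL_SHIFT)	/* 23 */

/* Mirrors SG_CPU_LOAD_HISTORY_TO_UTIL(): the eight most recent
 * busy/idle bits become a 0..255 load value. */
static unsigned int history_to_util(uint32_t history)
{
	return (history >> HISTORY_SHIFT) & 0xff;
}

int main(void)
{
	uint32_t half_busy = 0xaaaaaaaa;	/* alternating busy/idle blocks */
	unsigned long max = 1024;		/* stand-in for arch_scale_cpu_capacity() */

	/* Same scaling sugov_get_util() applies: util * (max >> 8). */
	printf("util = %lu\n",
	       (unsigned long)history_to_util(half_busy) * (max >> UTIL_SHIFT));
	return 0;
}
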
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 872e481d5098..f920c8b48ec1 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -123,7 +123,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ 	p->utime += cputime;
+ 	account_group_user_time(p, cputime);
+ 
+-	index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++	index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+ 
+ 	/* Add user time to cpustat. */
+ 	task_group_account_field(p, index, cputime);
+@@ -147,7 +147,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ 	p->gtime += cputime;
+ 
+ 	/* Add guest time to cpustat. */
+-	if (task_nice(p) > 0) {
++	if (task_running_nice(p)) {
+ 		cpustat[CPUTIME_NICE] += cputime;
+ 		cpustat[CPUTIME_GUEST_NICE] += cputime;
+ 	} else {
+@@ -270,7 +270,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+-	return t->se.sum_exec_runtime;
++	return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -280,7 +280,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ 	struct rq *rq;
+ 
+ 	rq = task_rq_lock(t, &rf);
+-	ns = t->se.sum_exec_runtime;
++	ns = tsk_seruntime(t);
+ 	task_rq_unlock(rq, t, &rf);
+ 
+ 	return ns;
+@@ -612,7 +612,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ 	struct task_cputime cputime = {
+-		.sum_exec_runtime = p->se.sum_exec_runtime,
++		.sum_exec_runtime = tsk_seruntime(p),
+ 	};
+ 
+ 	task_cputime(p, &cputime.utime, &cputime.stime);
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index c5aacbd492a1..105433c36b5f 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -8,6 +8,7 @@
+  */
+ #include "sched.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * This allows printing both to /proc/sched_debug and
+  * to the console
+@@ -210,6 +211,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+ 
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 
+@@ -273,6 +275,7 @@ static const struct file_operations sched_dynamic_fops = {
+ 
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+ 
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+ 
+ static const struct seq_operations sched_debug_sops;
+@@ -288,6 +291,7 @@ static const struct file_operations sched_debug_fops = {
+ 	.llseek		= seq_lseek,
+ 	.release	= seq_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ static struct dentry *debugfs_sched;
+ 
+@@ -297,12 +301,15 @@ static __init int sched_init_debug(void)
+ 
+ 	debugfs_sched = debugfs_create_dir("sched", NULL);
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ 	debugfs_create_bool("verbose", 0644, debugfs_sched, &sched_debug_verbose);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ 	debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	debugfs_create_u32("latency_ns", 0644, debugfs_sched, &sysctl_sched_latency);
+ 	debugfs_create_u32("min_granularity_ns", 0644, debugfs_sched, &sysctl_sched_min_granularity);
+ 	debugfs_create_u32("wakeup_granularity_ns", 0644, debugfs_sched, &sysctl_sched_wakeup_granularity);
+@@ -330,11 +337,13 @@ static __init int sched_init_debug(void)
+ #endif
+ 
+ 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ 	return 0;
+ }
+ late_initcall(sched_init_debug);
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ 
+ static cpumask_var_t		sd_sysctl_cpus;
+@@ -1047,6 +1056,7 @@ void proc_sched_set_task(struct task_struct *p)
+ 	memset(&p->se.statistics, 0, sizeof(p->se.statistics));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+ 
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 7ca3d3d86c2a..23e890141939 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -403,6 +403,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ 		do_idle();
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * idle-task scheduling class.
+  */
+@@ -516,3 +517,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ 	.switched_to		= switched_to_idle,
+ 	.update_curr		= update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..06d88e72b543
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,129 @@
++#define ALT_SCHED_VERSION_MSG "sched/pds: PDS CPU Scheduler "ALT_SCHED_VERSION" by Alfred Chen.\n"
++
++static int sched_timeslice_shift = 22;
++
++#define NORMAL_PRIO_MOD(x)	((x) & (NORMAL_PRIO_NUM - 1))
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms)
++{
++	if (2 == timeslice_ms)
++		sched_timeslice_shift = 21;
++}
++
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++	s64 delta = p->deadline - rq->time_edge + NORMAL_PRIO_NUM - NICE_WIDTH;
++
++	if (unlikely(delta > NORMAL_PRIO_NUM - 1)) {
++		pr_info("pds: task_sched_prio_normal delta %lld, deadline %llu, time_edge %llu\n",
++			delta, p->deadline, rq->time_edge);
++		return NORMAL_PRIO_NUM - 1;
++	}
++
++	return (delta < 0) ? 0 : delta;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio :
++		MIN_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++static inline int
++task_sched_prio_idx(const struct task_struct *p, const struct rq *rq)
++{
++	return (p->prio < MAX_RT_PRIO) ? p->prio : MIN_NORMAL_PRIO +
++		NORMAL_PRIO_MOD(task_sched_prio_normal(p, rq) + rq->time_edge);
++}
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++	return (IDLE_TASK_SCHED_PRIO == prio || prio < MAX_RT_PRIO) ? prio :
++		MIN_NORMAL_PRIO + NORMAL_PRIO_MOD((prio - MIN_NORMAL_PRIO) +
++						  rq->time_edge);
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++	return (idx < MAX_RT_PRIO) ? idx : MIN_NORMAL_PRIO +
++		NORMAL_PRIO_MOD((idx - MIN_NORMAL_PRIO) + NORMAL_PRIO_NUM -
++				NORMAL_PRIO_MOD(rq->time_edge));
++}
++
++static inline void sched_renew_deadline(struct task_struct *p, const struct rq *rq)
++{
++	if (p->prio >= MAX_RT_PRIO)
++		p->deadline = (rq->clock >> sched_timeslice_shift) +
++			p->static_prio - (MAX_PRIO - NICE_WIDTH);
++}
++
++int task_running_nice(struct task_struct *p)
++{
++	return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void update_rq_time_edge(struct rq *rq)
++{
++	struct list_head head;
++	u64 old = rq->time_edge;
++	u64 now = rq->clock >> sched_timeslice_shift;
++	u64 prio, delta;
++
++	if (now == old)
++		return;
++
++	delta = min_t(u64, NORMAL_PRIO_NUM, now - old);
++	INIT_LIST_HEAD(&head);
++
++	for_each_set_bit(prio, &rq->queue.bitmap[2], delta)
++		list_splice_tail_init(rq->queue.heads + MIN_NORMAL_PRIO +
++				      NORMAL_PRIO_MOD(prio + old), &head);
++
++	rq->queue.bitmap[2] = (NORMAL_PRIO_NUM == delta) ? 0UL :
++		rq->queue.bitmap[2] >> delta;
++	rq->time_edge = now;
++	if (!list_empty(&head)) {
++		u64 idx = MIN_NORMAL_PRIO + NORMAL_PRIO_MOD(now);
++		struct task_struct *p;
++
++		list_for_each_entry(p, &head, sq_node)
++			p->sq_idx = idx;
++
++		list_splice(&head, rq->queue.heads + idx);
++		rq->queue.bitmap[2] |= 1UL;
++	}
++}
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++	p->time_slice = sched_timeslice_ns;
++	sched_renew_deadline(p, rq);
++	if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++		requeue_task(p, rq);
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++	u64 max_dl = rq->time_edge + NICE_WIDTH - 1;
++	if (unlikely(p->deadline > max_dl))
++		p->deadline = max_dl;
++}
++
++static void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++	sched_renew_deadline(p, rq);
++}
++
++static void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++	time_slice_expired(p, rq);
++}
++
++#ifdef CONFIG_SMP
++static void sched_task_ttwu(struct task_struct *p) {}
++#endif
++static void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
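
PDS derives a task's queue slot from how far its deadline sits past the
runqueue's time_edge. A standalone sketch of the task_sched_prio_normal()
clamp above, assuming mainline's NICE_WIDTH of 40 and the 64-slot normal
range this series uses (both constants are assumptions here):

#include <stdio.h>

#define NORMAL_PRIO_NUM	64	/* assumed, per the series' SCHED_BITS layout */
#define NICE_WIDTH	40	/* mainline nice range width */

/* Same clamp as task_sched_prio_normal() in pds.h above. */
static int pds_prio_normal(long long deadline, long long time_edge)
{
	long long delta = deadline - time_edge + NORMAL_PRIO_NUM - NICE_WIDTH;

	if (delta > NORMAL_PRIO_NUM - 1)
		delta = NORMAL_PRIO_NUM - 1;
	return delta < 0 ? 0 : (int)delta;
}

int main(void)
{
	/* A deadline ten slots past the edge lands mid-range. */
	printf("%d\n", pds_prio_normal(110, 100)); /* -> 34 */
	return 0;
}
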
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index a554e3bbab2b..3e56f5e6ff5c 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -270,6 +270,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ 	WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ /*
+  * sched_entity:
+  *
+@@ -387,8 +388,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ 
+ 	return 0;
+ }
++#endif
+ 
+-#ifdef CONFIG_SCHED_THERMAL_PRESSURE
++#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+  * thermal:
+  *
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index cfe94ffd2b38..8a33dc6124aa 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,13 +1,15 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
++#endif
+ 
+-#ifdef CONFIG_SCHED_THERMAL_PRESSURE
++#if defined(CONFIG_SCHED_THERMAL_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity);
+ 
+ static inline u64 thermal_load_avg(struct rq *rq)
+@@ -42,6 +44,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ 	return LOAD_AVG_MAX - 1024 + avg->period_contrib;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ 	unsigned int enqueued;
+@@ -153,9 +156,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ 	return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+ 
+ #else
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -173,6 +178,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ 	return 0;
+ }
++#endif
+ 
+ static inline int
+ update_thermal_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index a189bec13729..02e4234cbc1f 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -2,6 +2,10 @@
+ /*
+  * Scheduler internal types and methods:
+  */
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched.h>
+ 
+ #include <linux/sched/autogroup.h>
+@@ -2749,3 +2753,8 @@ extern int sched_dynamic_mode(const char *str);
+ extern void sched_dynamic_update(int mode);
+ #endif
+ 
++static inline int task_running_nice(struct task_struct *p)
++{
++	return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index 3f93fc3b5648..528b71e144e9 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -22,8 +22,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 	} else {
+ 		struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		struct sched_domain *sd;
+ 		int dcount = 0;
++#endif
+ #endif
+ 		cpu = (unsigned long)(v - 2);
+ 		rq = cpu_rq(cpu);
+@@ -40,6 +42,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 		seq_printf(seq, "\n");
+ 
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ 		/* domain-specific stats */
+ 		rcu_read_lock();
+ 		for_each_domain(cpu, sd) {
+@@ -68,6 +71,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ 			    sd->ttwu_move_balance);
+ 		}
+ 		rcu_read_unlock();
++#endif
+ #endif
+ 	}
+ 	return 0;
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 55a0a243e871..fda2e8fe6ffe 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -4,6 +4,7 @@
+  */
+ #include "sched.h"
+ 
++#ifndef CONFIG_SCHED_ALT
+ DEFINE_MUTEX(sched_domains_mutex);
+ 
+ /* Protected by sched_domains_mutex: */
+@@ -1272,8 +1273,10 @@ static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
+  */
+ 
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+ 
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ 	if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1503,6 +1506,7 @@ sd_init(struct sched_domain_topology_level *tl,
+ 
+ 	return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+ 
+ /*
+  * Topology list, bottom-up.
+@@ -1532,6 +1536,7 @@ void set_sched_topology(struct sched_domain_topology_level *tl)
+ 	sched_domain_topology = tl;
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+ 
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2398,3 +2403,17 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ 	partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ 	mutex_unlock(&sched_domains_mutex);
+ }
++#else /* CONFIG_SCHED_ALT */
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++			     struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int __read_mostly		node_reclaim_distance = RECLAIM_DISTANCE;
++
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++	return best_mask_cpu(cpu, cpus);
++}
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index d4a78e08f6d8..403bd33e5880 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -120,6 +120,10 @@ static unsigned long long_max = LONG_MAX;
+ static int one_hundred = 100;
+ static int two_hundred = 200;
+ static int one_thousand = 1000;
++#ifdef CONFIG_SCHED_ALT
++static int __maybe_unused zero = 0;
++extern int sched_yield_type;
++#endif
+ #ifdef CONFIG_PRINTK
+ static int ten_thousand = 10000;
+ #endif
+@@ -1729,6 +1733,24 @@ int proc_do_static_key(struct ctl_table *table, int write,
+ }
+ 
+ static struct ctl_table kern_table[] = {
++#ifdef CONFIG_SCHED_ALT
++/* In ALT, only "sched_schedstats" is supported */
++#ifdef CONFIG_SCHED_DEBUG
++#ifdef CONFIG_SMP
++#ifdef CONFIG_SCHEDSTATS
++	{
++		.procname	= "sched_schedstats",
++		.data		= NULL,
++		.maxlen		= sizeof(unsigned int),
++		.mode		= 0644,
++		.proc_handler	= sysctl_schedstats,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_ONE,
++	},
++#endif /* CONFIG_SCHEDSTATS */
++#endif /* CONFIG_SMP */
++#endif /* CONFIG_SCHED_DEBUG */
++#else  /* !CONFIG_SCHED_ALT */
+ 	{
+ 		.procname	= "sched_child_runs_first",
+ 		.data		= &sysctl_sched_child_runs_first,
+@@ -1848,6 +1870,7 @@ static struct ctl_table kern_table[] = {
+ 		.extra2		= SYSCTL_ONE,
+ 	},
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PROVE_LOCKING
+ 	{
+ 		.procname	= "prove_locking",
+@@ -2424,6 +2447,17 @@ static struct ctl_table kern_table[] = {
+ 		.proc_handler	= proc_dointvec,
+ 	},
+ #endif
++#ifdef CONFIG_SCHED_ALT
++	{
++		.procname	= "yield_type",
++		.data		= &sched_yield_type,
++		.maxlen		= sizeof (int),
++		.mode		= 0644,
++		.proc_handler	= &proc_dointvec_minmax,
++		.extra1		= &zero,
++		.extra2		= &two,
++	},
++#endif
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ 	{
+ 		.procname	= "spin_retry",
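
The hunk above registers a kernel.yield_type knob, bounded to 0-2 by
proc_dointvec_minmax. A small userspace check of it; the proc file only
exists on kernels built with CONFIG_SCHED_ALT:

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/yield_type", "r");
	int v;

	if (!f) {
		perror("yield_type (CONFIG_SCHED_ALT kernels only)");
		return 1;
	}
	if (fscanf(f, "%d", &v) == 1)
		printf("yield_type = %d\n", v);	/* valid range is 0..2 */
	fclose(f);
	return 0;
}
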
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 4a66725b1d4a..cb80ed5c1f5c 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -1940,8 +1940,10 @@ long hrtimer_nanosleep(ktime_t rqtp, const enum hrtimer_mode mode,
+ 	int ret = 0;
+ 	u64 slack;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	slack = current->timer_slack_ns;
+ 	if (dl_task(current) || rt_task(current))
++#endif
+ 		slack = 0;
+ 
+ 	hrtimer_init_sleeper_on_stack(&t, clockid, mode);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 3bb96a8b49c9..11509fcf1d8a 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -216,7 +216,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ 	u64 stime, utime;
+ 
+ 	task_cputime(p, &utime, &stime);
+-	store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++	store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+ 
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -801,6 +801,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ 	}
+ }
+ 
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ 	if (tsk->dl.dl_overrun) {
+@@ -808,6 +809,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ 		__group_send_sig_info(SIGXCPU, SEND_SIG_PRIV, tsk);
+ 	}
+ }
++#endif
+ 
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -835,8 +837,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	u64 samples[CPUCLOCK_MAX];
+ 	unsigned long soft;
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk))
+ 		check_dl_overrun(tsk);
++#endif
+ 
+ 	if (expiry_cache_is_inactive(pct))
+ 		return;
+@@ -850,7 +854,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ 	soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ 	if (soft != RLIM_INFINITY) {
+ 		/* Task RT timeout is accounted in jiffies. RTTIME is usec */
+-		unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++		unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ 		unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+ 
+ 		/* At the hard limit, send SIGKILL. No further action. */
+@@ -1086,8 +1090,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ 			return true;
+ 	}
+ 
++#ifndef CONFIG_SCHED_ALT
+ 	if (dl_task(tsk) && tsk->dl.dl_overrun)
+ 		return true;
++#endif
+ 
+ 	return false;
+ }
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index adf7ef194005..11c8f36e281b 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1052,10 +1052,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ 	/* Make this a -deadline thread */
+ 	static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++		/* No deadline on BMQ/PDS, use RR */
++		.sched_policy = SCHED_RR,
++#else
+ 		.sched_policy = SCHED_DEADLINE,
+ 		.sched_runtime = 100000ULL,
+ 		.sched_deadline = 10000000ULL,
+ 		.sched_period = 10000000ULL
++#endif
+ 	};
+ 	struct wakeup_test_data *x = data;
+ 

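Because BMQ/PDS provide no SCHED_DEADLINE class, the trace selftest hunk
above substitutes SCHED_RR. For reference, a sketch of requesting
SCHED_RR from userspace through the same raw sched_setattr() syscall
(there is no glibc wrapper; this needs CAP_SYS_NICE, and it assumes uapi
headers recent enough to ship struct sched_attr):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/sched.h>	/* SCHED_RR */
#include <linux/sched/types.h>	/* struct sched_attr */

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy = SCHED_RR;	/* what the selftest falls back to */
	attr.sched_priority = 1;

	/* pid 0 means the calling thread; flags must be 0. */
	if (syscall(SYS_sched_setattr, 0, &attr, 0))
		perror("sched_setattr");
	return 0;
}
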
diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 0000000..d449eec
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,13 @@
+--- a/init/Kconfig	2021-04-27 07:38:30.556467045 -0400
++++ b/init/Kconfig	2021-04-27 07:39:32.956412800 -0400
+@@ -780,8 +780,9 @@ config GENERIC_SCHED_CLOCK
+ menu "Scheduler features"
+ 
+ menuconfig SCHED_ALT
++	depends on X86_64
+ 	bool "Alternative CPU Schedulers"
+-	default y
++	default n
+ 	help
+ 	  This feature enables the alternative CPU schedulers
+ 



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-13 12:35 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-07-13 12:35 UTC (permalink / raw
  To: gentoo-commits

commit:     fa20133b8b272e1cdfd6c7657053ebe81450f9ff
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 13 12:35:15 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jul 13 12:35:15 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fa20133b

Update Homepage for CPU Optimization patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/0000_README b/0000_README
index f92bd44..57ab4e6 100644
--- a/0000_README
+++ b/0000_README
@@ -72,7 +72,7 @@ From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
 
 Patch:  5010_enable-cpu-optimizations-universal.patch
-From:   https://github.com/graysky2/kernel_gcc_patch/
+From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.8 patch enables gcc = v9+ optimizations for additional CPUs.
 
 Patch:  5020_BMQ-and-PDS-io-scheduler-v5.13-r1.patch



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-14 16:16 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-07-14 16:16 UTC (permalink / raw
  To: gentoo-commits

commit:     01cc722d694c5ac80294fa112d6cf98ebb5aefd1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 14 16:15:37 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 14 16:15:37 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=01cc722d

Linux 5.13.2 and rename 5.13.1 patch properly

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |     4 +
 1001_linux-5.13.1.patch => 1000_linux-5.13.1.patch |     0
 1001_linux-5.13.2.patch                            | 29932 +++++++++++++++++++
 3 files changed, 29936 insertions(+)

diff --git a/0000_README b/0000_README
index 57ab4e6..e1ba986 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-5.13.1.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.1
 
+Patch:  1001_linux-5.13.2.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.2
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-5.13.1.patch b/1000_linux-5.13.1.patch
similarity index 100%
rename from 1001_linux-5.13.1.patch
rename to 1000_linux-5.13.1.patch

diff --git a/1001_linux-5.13.2.patch b/1001_linux-5.13.2.patch
new file mode 100644
index 0000000..c6dd58d
--- /dev/null
+++ b/1001_linux-5.13.2.patch
@@ -0,0 +1,29932 @@
+diff --git a/Documentation/ABI/testing/evm b/Documentation/ABI/testing/evm
+index 3c477ba48a312..2243b72e41107 100644
+--- a/Documentation/ABI/testing/evm
++++ b/Documentation/ABI/testing/evm
+@@ -49,8 +49,30 @@ Description:
+ 		modification of EVM-protected metadata and
+ 		disable all further modification of policy
+ 
+-		Note that once a key has been loaded, it will no longer be
+-		possible to enable metadata modification.
++		Echoing a value is additive, the new value is added to the
++		existing initialization flags.
++
++		For example, after::
++
++		  echo 2 ><securityfs>/evm
++
++		another echo can be performed::
++
++		  echo 1 ><securityfs>/evm
++
++		and the resulting value will be 3.
++
++		Note that once an HMAC key has been loaded, it will no longer
++		be possible to enable metadata modification. Signaling that an
++		HMAC key has been loaded will clear the corresponding flag.
++		For example, if the current value is 6 (2 and 4 set)::
++
++		  echo 1 ><securityfs>/evm
++
++		will set the new value to 3 (4 cleared).
++
++		Loading an HMAC key is the only way to disable metadata
++		modification.
+ 
+ 		Until key loading has been signaled EVM can not create
+ 		or validate the 'security.evm' xattr, but returns
+diff --git a/Documentation/ABI/testing/sysfs-bus-papr-pmem b/Documentation/ABI/testing/sysfs-bus-papr-pmem
+index 92e2db0e2d3de..95254cec92bfb 100644
+--- a/Documentation/ABI/testing/sysfs-bus-papr-pmem
++++ b/Documentation/ABI/testing/sysfs-bus-papr-pmem
+@@ -39,9 +39,11 @@ KernelVersion:	v5.9
+ Contact:	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, nvdimm@lists.linux.dev,
+ Description:
+ 		(RO) Report various performance stats related to papr-scm NVDIMM
+-		device.  Each stat is reported on a new line with each line
+-		composed of a stat-identifier followed by it value. Below are
+-		currently known dimm performance stats which are reported:
++		device. This attribute is only available for NVDIMM devices
++		that support reporting NVDIMM performance stats. Each stat is
++		reported on a new line with each line composed of a
++		stat-identifier followed by its value. Below are currently known
++		dimm performance stats which are reported:
+ 
+ 		* "CtlResCt" : Controller Reset Count
+ 		* "CtlResTm" : Controller Reset Elapsed Time
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index cb89dbdedc463..995deccc28bcd 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -581,6 +581,12 @@
+ 			loops can be debugged more effectively on production
+ 			systems.
+ 
++	clocksource.max_cswd_read_retries= [KNL]
++			Number of clocksource_watchdog() retries due to
++			external delays before the clock will be marked
++			unstable.  Defaults to three retries, that is,
++			four attempts to read the clock under test.
++
+ 	clearcpuid=BITNUM[,BITNUM...] [X86]
+ 			Disable CPUID feature X for the kernel. See
+ 			arch/x86/include/asm/cpufeatures.h for the valid bit
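
For reference, the new knob travels on the kernel command line like any other
parameter. A hypothetical boot entry raising the retry count might look like
this (the image name and root device are placeholders; only the clocksource
token itself comes from the documentation above):

  linux /vmlinuz-5.13.2 root=/dev/sda2 ro clocksource.max_cswd_read_retries=5
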
+diff --git a/Documentation/hwmon/max31790.rst b/Documentation/hwmon/max31790.rst
+index f301385d8cef3..7b097c3b9b908 100644
+--- a/Documentation/hwmon/max31790.rst
++++ b/Documentation/hwmon/max31790.rst
+@@ -38,6 +38,7 @@ Sysfs entries
+ fan[1-12]_input    RO  fan tachometer speed in RPM
+ fan[1-12]_fault    RO  fan experienced fault
+ fan[1-6]_target    RW  desired fan speed in RPM
+-pwm[1-6]_enable    RW  regulator mode, 0=disabled, 1=manual mode, 2=rpm mode
+-pwm[1-6]           RW  fan target duty cycle (0-255)
++pwm[1-6]_enable    RW  regulator mode, 0=disabled (duty cycle=0%), 1=manual mode, 2=rpm mode
++pwm[1-6]           RW  read: current pwm duty cycle,
++                       write: target pwm duty cycle (0-255)
+ ================== === =======================================================
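
As a quick illustration of driving these entries from userspace: writing 1 to
pwm1_enable and then 128 to pwm1 (under a hypothetical
/sys/class/hwmon/hwmonX/ directory) puts fan 1 into manual mode at roughly
50% duty cycle, and with the clarified semantics above a subsequent read of
pwm1 reports the current duty cycle rather than the last written target.
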
+diff --git a/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst b/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst
+index b0de4e6e7ebd1..514b334470eab 100644
+--- a/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst
++++ b/Documentation/userspace-api/media/v4l/ext-ctrls-codec.rst
+@@ -3053,7 +3053,7 @@ enum v4l2_mpeg_video_hevc_size_of_length_field -
+     :stub-columns: 0
+     :widths:       1 1 2
+ 
+-    * - ``V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT``
++    * - ``V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT_ENABLED``
+       - 0x00000001
+       -
+     * - ``V4L2_HEVC_PPS_FLAG_OUTPUT_FLAG_PRESENT``
+@@ -3277,6 +3277,9 @@ enum v4l2_mpeg_video_hevc_size_of_length_field -
+     * - ``V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_LOOP_FILTER_ACROSS_SLICES_ENABLED``
+       - 0x00000100
+       -
++    * - ``V4L2_HEVC_SLICE_PARAMS_FLAG_DEPENDENT_SLICE_SEGMENT``
++      - 0x00000200
++      -
+ 
+ .. raw:: latex
+ 
+diff --git a/Documentation/userspace-api/seccomp_filter.rst b/Documentation/userspace-api/seccomp_filter.rst
+index 6efb41cc80725..d61219889e494 100644
+--- a/Documentation/userspace-api/seccomp_filter.rst
++++ b/Documentation/userspace-api/seccomp_filter.rst
+@@ -259,6 +259,18 @@ and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a response, indicating what should be
+ returned to userspace. The ``id`` member of ``struct seccomp_notif_resp`` should
+ be the same ``id`` as in ``struct seccomp_notif``.
+ 
++Userspace can also add file descriptors to the notifying process via
++``ioctl(SECCOMP_IOCTL_NOTIF_ADDFD)``. The ``id`` member of
++``struct seccomp_notif_addfd`` should be the same ``id`` as in
++``struct seccomp_notif``. The ``newfd_flags`` flag may be used to set flags
++like O_EXEC on the file descriptor in the notifying process. If the supervisor
++wants to inject the file descriptor with a specific number, the
++``SECCOMP_ADDFD_FLAG_SETFD`` flag can be used, and set the ``newfd`` member to
++the specific number to use. If that file descriptor is already open in the
++notifying process it will be replaced. The supervisor can also add an FD, and
++respond atomically by using the ``SECCOMP_ADDFD_FLAG_SEND`` flag and the return
++value will be the injected file descriptor number.
++
+ It is worth noting that ``struct seccomp_data`` contains the values of register
+ arguments to the syscall, but does not contain pointers to memory. The task's
+ memory is accessible to suitably privileged traces via ``ptrace()`` or
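
A minimal supervisor-side sketch of the addfd flow described above, assuming
the notification in req was already fetched with SECCOMP_IOCTL_NOTIF_RECV;
notify_fd and srcfd are placeholders supplied by the caller:

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/seccomp.h>

  /* Inject srcfd into the notifying process and answer the notification in
   * one step via SECCOMP_ADDFD_FLAG_SEND; on success the return value is
   * the descriptor number as seen by the target process. */
  static int inject_and_send(int notify_fd, const struct seccomp_notif *req,
                             int srcfd)
  {
          struct seccomp_notif_addfd addfd = {
                  .id          = req->id,   /* must match the notification */
                  .flags       = SECCOMP_ADDFD_FLAG_SEND,
                  .srcfd       = srcfd,     /* fd open in the supervisor */
                  .newfd       = 0,         /* unused without ..._SETFD */
                  .newfd_flags = O_CLOEXEC, /* flags set on the target's fd */
          };

          return ioctl(notify_fd, SECCOMP_IOCTL_NOTIF_ADDFD, &addfd);
  }
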
+diff --git a/Makefile b/Makefile
+index 069607cfe2836..31bbcc5255357 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+@@ -1039,7 +1039,7 @@ LDFLAGS_vmlinux	+= $(call ld-option, -X,)
+ endif
+ 
+ ifeq ($(CONFIG_RELR),y)
+-LDFLAGS_vmlinux	+= --pack-dyn-relocs=relr
++LDFLAGS_vmlinux	+= --pack-dyn-relocs=relr --use-android-relr-tags
+ endif
+ 
+ # We never want expected sections to be placed heuristically by the
+diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
+index f4dd9f3f30010..4b2575f936d46 100644
+--- a/arch/alpha/kernel/smp.c
++++ b/arch/alpha/kernel/smp.c
+@@ -166,7 +166,6 @@ smp_callin(void)
+ 	DBGS(("smp_callin: commencing CPU %d current %p active_mm %p\n",
+ 	      cpuid, current, current->active_mm));
+ 
+-	preempt_disable();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+ 
+diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c
+index 52906d3145371..db0e104d68355 100644
+--- a/arch/arc/kernel/smp.c
++++ b/arch/arc/kernel/smp.c
+@@ -189,7 +189,6 @@ void start_kernel_secondary(void)
+ 	pr_info("## CPU%u LIVE ##: Executing Code...\n", cpu);
+ 
+ 	local_irq_enable();
+-	preempt_disable();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+ 
+diff --git a/arch/arm/boot/dts/sama5d4.dtsi b/arch/arm/boot/dts/sama5d4.dtsi
+index 05c55875835d5..f70a8528b9598 100644
+--- a/arch/arm/boot/dts/sama5d4.dtsi
++++ b/arch/arm/boot/dts/sama5d4.dtsi
+@@ -787,7 +787,7 @@
+ 					0xffffffff 0x3ffcfe7c 0x1c010101	/* pioA */
+ 					0x7fffffff 0xfffccc3a 0x3f00cc3a	/* pioB */
+ 					0xffffffff 0x3ff83fff 0xff00ffff	/* pioC */
+-					0x0003ff00 0x8002a800 0x00000000	/* pioD */
++					0xb003ff00 0x8002a800 0x00000000	/* pioD */
+ 					0xffffffff 0x7fffffff 0x76fff1bf	/* pioE */
+ 					>;
+ 
+diff --git a/arch/arm/boot/dts/ste-href.dtsi b/arch/arm/boot/dts/ste-href.dtsi
+index 83b179692dff7..13d2161929042 100644
+--- a/arch/arm/boot/dts/ste-href.dtsi
++++ b/arch/arm/boot/dts/ste-href.dtsi
+@@ -4,6 +4,7 @@
+  */
+ 
+ #include <dt-bindings/interrupt-controller/irq.h>
++#include <dt-bindings/leds/common.h>
+ #include "ste-href-family-pinctrl.dtsi"
+ 
+ / {
+@@ -64,17 +65,20 @@
+ 					reg = <0>;
+ 					led-cur = /bits/ 8 <0x2f>;
+ 					max-cur = /bits/ 8 <0x5f>;
++					color = <LED_COLOR_ID_BLUE>;
+ 					linux,default-trigger = "heartbeat";
+ 				};
+ 				chan@1 {
+ 					reg = <1>;
+ 					led-cur = /bits/ 8 <0x2f>;
+ 					max-cur = /bits/ 8 <0x5f>;
++					color = <LED_COLOR_ID_BLUE>;
+ 				};
+ 				chan@2 {
+ 					reg = <2>;
+ 					led-cur = /bits/ 8 <0x2f>;
+ 					max-cur = /bits/ 8 <0x5f>;
++					color = <LED_COLOR_ID_BLUE>;
+ 				};
+ 			};
+ 			lp5521@34 {
+@@ -88,16 +92,19 @@
+ 					reg = <0>;
+ 					led-cur = /bits/ 8 <0x2f>;
+ 					max-cur = /bits/ 8 <0x5f>;
++					color = <LED_COLOR_ID_BLUE>;
+ 				};
+ 				chan@1 {
+ 					reg = <1>;
+ 					led-cur = /bits/ 8 <0x2f>;
+ 					max-cur = /bits/ 8 <0x5f>;
++					color = <LED_COLOR_ID_BLUE>;
+ 				};
+ 				chan@2 {
+ 					reg = <2>;
+ 					led-cur = /bits/ 8 <0x2f>;
+ 					max-cur = /bits/ 8 <0x5f>;
++					color = <LED_COLOR_ID_BLUE>;
+ 				};
+ 			};
+ 			bh1780@29 {
+diff --git a/arch/arm/kernel/perf_event_v7.c b/arch/arm/kernel/perf_event_v7.c
+index 2924d7910b106..eb2190477da10 100644
+--- a/arch/arm/kernel/perf_event_v7.c
++++ b/arch/arm/kernel/perf_event_v7.c
+@@ -773,10 +773,10 @@ static inline void armv7pmu_write_counter(struct perf_event *event, u64 value)
+ 		pr_err("CPU%u writing wrong counter %d\n",
+ 			smp_processor_id(), idx);
+ 	} else if (idx == ARMV7_IDX_CYCLE_COUNTER) {
+-		asm volatile("mcr p15, 0, %0, c9, c13, 0" : : "r" (value));
++		asm volatile("mcr p15, 0, %0, c9, c13, 0" : : "r" ((u32)value));
+ 	} else {
+ 		armv7_pmnc_select_counter(idx);
+-		asm volatile("mcr p15, 0, %0, c9, c13, 2" : : "r" (value));
++		asm volatile("mcr p15, 0, %0, c9, c13, 2" : : "r" ((u32)value));
+ 	}
+ }
+ 
+diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
+index 74679240a9d8e..c7bb168b0d97c 100644
+--- a/arch/arm/kernel/smp.c
++++ b/arch/arm/kernel/smp.c
+@@ -432,7 +432,6 @@ asmlinkage void secondary_start_kernel(void)
+ #endif
+ 	pr_debug("CPU%u: Booted secondary processor\n", cpu);
+ 
+-	preempt_disable();
+ 	trace_hardirqs_off();
+ 
+ 	/*
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index 456dcd4a7793f..6ffbb099fcac7 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -134,7 +134,7 @@
+ 
+ 			uart0: serial@12000 {
+ 				compatible = "marvell,armada-3700-uart";
+-				reg = <0x12000 0x200>;
++				reg = <0x12000 0x18>;
+ 				clocks = <&xtalclk>;
+ 				interrupts =
+ 				<GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 7cd7d5c8c4bc2..6336b4309114b 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -46,6 +46,7 @@
+ #define KVM_REQ_VCPU_RESET	KVM_ARCH_REQ(2)
+ #define KVM_REQ_RECORD_STEAL	KVM_ARCH_REQ(3)
+ #define KVM_REQ_RELOAD_GICv4	KVM_ARCH_REQ(4)
++#define KVM_REQ_RELOAD_PMU	KVM_ARCH_REQ(5)
+ 
+ #define KVM_DIRTY_LOG_MANUAL_CAPS   (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
+ 				     KVM_DIRTY_LOG_INITIALLY_SET)
+diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
+index d3cef91335396..eeb210997149a 100644
+--- a/arch/arm64/include/asm/mmu_context.h
++++ b/arch/arm64/include/asm/mmu_context.h
+@@ -177,9 +177,9 @@ static inline void update_saved_ttbr0(struct task_struct *tsk,
+ 		return;
+ 
+ 	if (mm == &init_mm)
+-		ttbr = __pa_symbol(reserved_pg_dir);
++		ttbr = phys_to_ttbr(__pa_symbol(reserved_pg_dir));
+ 	else
+-		ttbr = virt_to_phys(mm->pgd) | ASID(mm) << 48;
++		ttbr = phys_to_ttbr(virt_to_phys(mm->pgd)) | ASID(mm) << 48;
+ 
+ 	WRITE_ONCE(task_thread_info(tsk)->ttbr0, ttbr);
+ }
+diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
+index 80e946b2abee2..e83f0982b99c1 100644
+--- a/arch/arm64/include/asm/preempt.h
++++ b/arch/arm64/include/asm/preempt.h
+@@ -23,7 +23,7 @@ static inline void preempt_count_set(u64 pc)
+ } while (0)
+ 
+ #define init_idle_preempt_count(p, cpu) do { \
+-	task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
++	task_thread_info(p)->preempt_count = PREEMPT_DISABLED; \
+ } while (0)
+ 
+ static inline void set_preempt_need_resched(void)
+diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
+index 6cc97730790e7..787c3c83edd7a 100644
+--- a/arch/arm64/kernel/Makefile
++++ b/arch/arm64/kernel/Makefile
+@@ -14,6 +14,11 @@ CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_syscall.o	 = -fstack-protector -fstack-protector-strong
+ CFLAGS_syscall.o	+= -fno-stack-protector
+ 
++# It's not safe to invoke KCOV when portions of the kernel environment aren't
++# available or are out-of-sync with HW state. Since `noinstr` doesn't always
++# inhibit KCOV instrumentation, disable it for the entire compilation unit.
++KCOV_INSTRUMENT_entry.o := n
++
+ # Object file lists.
+ obj-y			:= debug-monitors.o entry.o irq.o fpsimd.o		\
+ 			   entry-common.o entry-fpsimd.o process.o ptrace.o	\
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index f594957e29bd1..44b6eda69a81a 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -312,7 +312,7 @@ static ssize_t slots_show(struct device *dev, struct device_attribute *attr,
+ 	struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);
+ 	u32 slots = cpu_pmu->reg_pmmir & ARMV8_PMU_SLOTS_MASK;
+ 
+-	return snprintf(page, PAGE_SIZE, "0x%08x\n", slots);
++	return sysfs_emit(page, "0x%08x\n", slots);
+ }
+ 
+ static DEVICE_ATTR_RO(slots);
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index 61845c0821d9d..68b30e8c22dbf 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -381,7 +381,7 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
+ 	 * faults in case uaccess_enable() is inadvertently called by the init
+ 	 * thread.
+ 	 */
+-	init_task.thread_info.ttbr0 = __pa_symbol(reserved_pg_dir);
++	init_task.thread_info.ttbr0 = phys_to_ttbr(__pa_symbol(reserved_pg_dir));
+ #endif
+ 
+ 	if (boot_args[1] || boot_args[2] || boot_args[3]) {
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index dcd7041b2b077..6671000a8b7d7 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -224,7 +224,6 @@ asmlinkage notrace void secondary_start_kernel(void)
+ 		init_gic_priority_masking();
+ 
+ 	rcu_cpu_starting(cpu);
+-	preempt_disable();
+ 	trace_hardirqs_off();
+ 
+ 	/*
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index e720148232a06..facf4d41d32a2 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -689,6 +689,10 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
+ 			vgic_v4_load(vcpu);
+ 			preempt_enable();
+ 		}
++
++		if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
++			kvm_pmu_handle_pmcr(vcpu,
++					    __vcpu_sys_reg(vcpu, PMCR_EL0));
+ 	}
+ }
+ 
+diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
+index fd167d4f42157..f33825c995cbb 100644
+--- a/arch/arm64/kvm/pmu-emul.c
++++ b/arch/arm64/kvm/pmu-emul.c
+@@ -578,6 +578,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
+ 		kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
+ 
+ 	if (val & ARMV8_PMU_PMCR_P) {
++		mask &= ~BIT(ARMV8_PMU_CYCLE_IDX);
+ 		for_each_set_bit(i, &mask, 32)
+ 			kvm_pmu_set_counter_value(vcpu, i, 0);
+ 	}
+@@ -850,6 +851,9 @@ int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
+ 		   return -EINVAL;
+ 	}
+ 
++	/* One-off reload of the PMU on first run */
++	kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/csky/kernel/smp.c b/arch/csky/kernel/smp.c
+index 0f9f5eef93386..e2993539af8ef 100644
+--- a/arch/csky/kernel/smp.c
++++ b/arch/csky/kernel/smp.c
+@@ -281,7 +281,6 @@ void csky_start_secondary(void)
+ 	pr_info("CPU%u Online: %s...\n", cpu, __func__);
+ 
+ 	local_irq_enable();
+-	preempt_disable();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+ 
+diff --git a/arch/csky/mm/syscache.c b/arch/csky/mm/syscache.c
+index 4e51d63850c46..cd847ad62c7ee 100644
+--- a/arch/csky/mm/syscache.c
++++ b/arch/csky/mm/syscache.c
+@@ -12,15 +12,17 @@ SYSCALL_DEFINE3(cacheflush,
+ 		int, cache)
+ {
+ 	switch (cache) {
+-	case ICACHE:
+ 	case BCACHE:
+-		flush_icache_mm_range(current->mm,
+-				(unsigned long)addr,
+-				(unsigned long)addr + bytes);
+-		fallthrough;
+ 	case DCACHE:
+ 		dcache_wb_range((unsigned long)addr,
+ 				(unsigned long)addr + bytes);
++		if (cache != BCACHE)
++			break;
++		fallthrough;
++	case ICACHE:
++		flush_icache_mm_range(current->mm,
++				(unsigned long)addr,
++				(unsigned long)addr + bytes);
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/arch/ia64/kernel/mca_drv.c b/arch/ia64/kernel/mca_drv.c
+index 36a69b4e61690..5bfc79be4cefe 100644
+--- a/arch/ia64/kernel/mca_drv.c
++++ b/arch/ia64/kernel/mca_drv.c
+@@ -343,7 +343,7 @@ init_record_index_pools(void)
+ 
+ 	/* - 2 - */
+ 	sect_min_size = sal_log_sect_min_sizes[0];
+-	for (i = 1; i < sizeof sal_log_sect_min_sizes/sizeof(size_t); i++)
++	for (i = 1; i < ARRAY_SIZE(sal_log_sect_min_sizes); i++)
+ 		if (sect_min_size > sal_log_sect_min_sizes[i])
+ 			sect_min_size = sal_log_sect_min_sizes[i];
+ 
+diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
+index 49b4885809399..d10f780c13b9e 100644
+--- a/arch/ia64/kernel/smpboot.c
++++ b/arch/ia64/kernel/smpboot.c
+@@ -441,7 +441,6 @@ start_secondary (void *unused)
+ #endif
+ 	efi_map_pal_code();
+ 	cpu_init();
+-	preempt_disable();
+ 	smp_callin();
+ 
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+diff --git a/arch/m68k/Kconfig.machine b/arch/m68k/Kconfig.machine
+index 4d59ec2f5b8d6..d964c1f273995 100644
+--- a/arch/m68k/Kconfig.machine
++++ b/arch/m68k/Kconfig.machine
+@@ -25,6 +25,9 @@ config ATARI
+ 	  this kernel on an Atari, say Y here and browse the material
+ 	  available in <file:Documentation/m68k>; otherwise say N.
+ 
++config ATARI_KBD_CORE
++	bool
++
+ config MAC
+ 	bool "Macintosh support"
+ 	depends on MMU
+diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h
+index 292d0425717f3..92a3802100178 100644
+--- a/arch/mips/include/asm/highmem.h
++++ b/arch/mips/include/asm/highmem.h
+@@ -36,7 +36,7 @@ extern pte_t *pkmap_page_table;
+  * easily, subsequent pte tables have to be allocated in one physical
+  * chunk of RAM.
+  */
+-#ifdef CONFIG_PHYS_ADDR_T_64BIT
++#if defined(CONFIG_PHYS_ADDR_T_64BIT) || defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
+ #define LAST_PKMAP 512
+ #else
+ #define LAST_PKMAP 1024
+diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
+index ef86fbad85460..d542fb7af3ba2 100644
+--- a/arch/mips/kernel/smp.c
++++ b/arch/mips/kernel/smp.c
+@@ -348,7 +348,6 @@ asmlinkage void start_secondary(void)
+ 	 */
+ 
+ 	calibrate_delay();
+-	preempt_disable();
+ 	cpu = smp_processor_id();
+ 	cpu_data[cpu].udelay_val = loops_per_jiffy;
+ 
+diff --git a/arch/openrisc/kernel/smp.c b/arch/openrisc/kernel/smp.c
+index 48e1092a64de3..415e209732a3d 100644
+--- a/arch/openrisc/kernel/smp.c
++++ b/arch/openrisc/kernel/smp.c
+@@ -145,8 +145,6 @@ asmlinkage __init void secondary_start_kernel(void)
+ 	set_cpu_online(cpu, true);
+ 
+ 	local_irq_enable();
+-
+-	preempt_disable();
+ 	/*
+ 	 * OK, it's off to the idle thread for us
+ 	 */
+diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
+index 10227f667c8a6..1405b603b91b6 100644
+--- a/arch/parisc/kernel/smp.c
++++ b/arch/parisc/kernel/smp.c
+@@ -302,7 +302,6 @@ void __init smp_callin(unsigned long pdce_proc)
+ #endif
+ 
+ 	smp_cpu_init(slave_id);
+-	preempt_disable();
+ 
+ 	flush_cache_all_local(); /* start with known state */
+ 	flush_tlb_all_local(NULL);
+diff --git a/arch/powerpc/include/asm/cputhreads.h b/arch/powerpc/include/asm/cputhreads.h
+index 98c8bd155bf9d..b167186aaee4a 100644
+--- a/arch/powerpc/include/asm/cputhreads.h
++++ b/arch/powerpc/include/asm/cputhreads.h
+@@ -98,6 +98,36 @@ static inline int cpu_last_thread_sibling(int cpu)
+ 	return cpu | (threads_per_core - 1);
+ }
+ 
++/*
++ * tlb_thread_siblings are siblings which share a TLB. This is not
++ * architected, is not something a hypervisor could emulate and a future
++ * CPU may change behaviour even in compat mode, so this should only be
++ * used on PowerNV, and only with care.
++ */
++static inline int cpu_first_tlb_thread_sibling(int cpu)
++{
++	if (cpu_has_feature(CPU_FTR_ARCH_300) && (threads_per_core == 8))
++		return cpu & ~0x6;	/* Big Core */
++	else
++		return cpu_first_thread_sibling(cpu);
++}
++
++static inline int cpu_last_tlb_thread_sibling(int cpu)
++{
++	if (cpu_has_feature(CPU_FTR_ARCH_300) && (threads_per_core == 8))
++		return cpu | 0x6;	/* Big Core */
++	else
++		return cpu_last_thread_sibling(cpu);
++}
++
++static inline int cpu_tlb_thread_sibling_step(void)
++{
++	if (cpu_has_feature(CPU_FTR_ARCH_300) && (threads_per_core == 8))
++		return 2;		/* Big Core */
++	else
++		return 1;
++}
++
+ static inline u32 get_tensr(void)
+ {
+ #ifdef	CONFIG_BOOKE
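
To make the bit arithmetic above concrete: on a POWER9 big core
(CPU_FTR_ARCH_300 with threads_per_core == 8), cpu 5 yields 5 & ~0x6 = 1 as
the first TLB sibling and 5 | 0x6 = 7 as the last, and stepping by 2 visits
threads 1, 3, 5 and 7, that is, every second hardware thread of the big core.
On any other configuration the three helpers fall back to the ordinary
thread-sibling range with a step of 1.
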
+diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
+index 59f704408d65d..a26aad41ef3e7 100644
+--- a/arch/powerpc/include/asm/interrupt.h
++++ b/arch/powerpc/include/asm/interrupt.h
+@@ -186,6 +186,7 @@ struct interrupt_nmi_state {
+ 	u8 irq_soft_mask;
+ 	u8 irq_happened;
+ 	u8 ftrace_enabled;
++	u64 softe;
+ #endif
+ };
+ 
+@@ -211,6 +212,7 @@ static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct inte
+ #ifdef CONFIG_PPC64
+ 	state->irq_soft_mask = local_paca->irq_soft_mask;
+ 	state->irq_happened = local_paca->irq_happened;
++	state->softe = regs->softe;
+ 
+ 	/*
+ 	 * Set IRQS_ALL_DISABLED unconditionally so irqs_disabled() does
+@@ -263,6 +265,7 @@ static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct inter
+ 
+ 	/* Check we didn't change the pending interrupt mask. */
+ 	WARN_ON_ONCE((state->irq_happened | PACA_IRQ_HARD_DIS) != local_paca->irq_happened);
++	regs->softe = state->softe;
+ 	local_paca->irq_happened = state->irq_happened;
+ 	local_paca->irq_soft_mask = state->irq_soft_mask;
+ #endif
+diff --git a/arch/powerpc/include/asm/kvm_guest.h b/arch/powerpc/include/asm/kvm_guest.h
+index 2fca299f7e192..c63105d2c9e7c 100644
+--- a/arch/powerpc/include/asm/kvm_guest.h
++++ b/arch/powerpc/include/asm/kvm_guest.h
+@@ -16,10 +16,10 @@ static inline bool is_kvm_guest(void)
+ 	return static_branch_unlikely(&kvm_guest);
+ }
+ 
+-bool check_kvm_guest(void);
++int check_kvm_guest(void);
+ #else
+ static inline bool is_kvm_guest(void) { return false; }
+-static inline bool check_kvm_guest(void) { return false; }
++static inline int check_kvm_guest(void) { return 0; }
+ #endif
+ 
+ #endif /* _ASM_POWERPC_KVM_GUEST_H_ */
+diff --git a/arch/powerpc/kernel/firmware.c b/arch/powerpc/kernel/firmware.c
+index c9e2819b095ab..c7022c41cc314 100644
+--- a/arch/powerpc/kernel/firmware.c
++++ b/arch/powerpc/kernel/firmware.c
+@@ -23,18 +23,20 @@ EXPORT_SYMBOL_GPL(powerpc_firmware_features);
+ 
+ #if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
+ DEFINE_STATIC_KEY_FALSE(kvm_guest);
+-bool check_kvm_guest(void)
++int __init check_kvm_guest(void)
+ {
+ 	struct device_node *hyper_node;
+ 
+ 	hyper_node = of_find_node_by_path("/hypervisor");
+ 	if (!hyper_node)
+-		return false;
++		return 0;
+ 
+ 	if (!of_device_is_compatible(hyper_node, "linux,kvm"))
+-		return false;
++		return 0;
+ 
+ 	static_branch_enable(&kvm_guest);
+-	return true;
++
++	return 0;
+ }
++core_initcall(check_kvm_guest); // before kvm_guest_init()
+ #endif
+diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
+index 667104d4c4550..2fff886c549d0 100644
+--- a/arch/powerpc/kernel/mce_power.c
++++ b/arch/powerpc/kernel/mce_power.c
+@@ -481,12 +481,11 @@ static int mce_find_instr_ea_and_phys(struct pt_regs *regs, uint64_t *addr,
+ 	return -1;
+ }
+ 
+-static int mce_handle_ierror(struct pt_regs *regs,
++static int mce_handle_ierror(struct pt_regs *regs, unsigned long srr1,
+ 		const struct mce_ierror_table table[],
+ 		struct mce_error_info *mce_err, uint64_t *addr,
+ 		uint64_t *phys_addr)
+ {
+-	uint64_t srr1 = regs->msr;
+ 	int handled = 0;
+ 	int i;
+ 
+@@ -695,19 +694,19 @@ static long mce_handle_ue_error(struct pt_regs *regs,
+ }
+ 
+ static long mce_handle_error(struct pt_regs *regs,
++		unsigned long srr1,
+ 		const struct mce_derror_table dtable[],
+ 		const struct mce_ierror_table itable[])
+ {
+ 	struct mce_error_info mce_err = { 0 };
+ 	uint64_t addr, phys_addr = ULONG_MAX;
+-	uint64_t srr1 = regs->msr;
+ 	long handled;
+ 
+ 	if (SRR1_MC_LOADSTORE(srr1))
+ 		handled = mce_handle_derror(regs, dtable, &mce_err, &addr,
+ 				&phys_addr);
+ 	else
+-		handled = mce_handle_ierror(regs, itable, &mce_err, &addr,
++		handled = mce_handle_ierror(regs, srr1, itable, &mce_err, &addr,
+ 				&phys_addr);
+ 
+ 	if (!handled && mce_err.error_type == MCE_ERROR_TYPE_UE)
+@@ -723,16 +722,20 @@ long __machine_check_early_realmode_p7(struct pt_regs *regs)
+ 	/* P7 DD1 leaves top bits of DSISR undefined */
+ 	regs->dsisr &= 0x0000ffff;
+ 
+-	return mce_handle_error(regs, mce_p7_derror_table, mce_p7_ierror_table);
++	return mce_handle_error(regs, regs->msr,
++			mce_p7_derror_table, mce_p7_ierror_table);
+ }
+ 
+ long __machine_check_early_realmode_p8(struct pt_regs *regs)
+ {
+-	return mce_handle_error(regs, mce_p8_derror_table, mce_p8_ierror_table);
++	return mce_handle_error(regs, regs->msr,
++			mce_p8_derror_table, mce_p8_ierror_table);
+ }
+ 
+ long __machine_check_early_realmode_p9(struct pt_regs *regs)
+ {
++	unsigned long srr1 = regs->msr;
++
+ 	/*
+ 	 * On POWER9 DD2.1 and below, it's possible to get a machine check
+ 	 * caused by a paste instruction where only DSISR bit 25 is set. This
+@@ -746,10 +749,39 @@ long __machine_check_early_realmode_p9(struct pt_regs *regs)
+ 	if (SRR1_MC_LOADSTORE(regs->msr) && regs->dsisr == 0x02000000)
+ 		return 1;
+ 
+-	return mce_handle_error(regs, mce_p9_derror_table, mce_p9_ierror_table);
++	/*
++	 * Async machine check due to bad real address from store or foreign
++	 * link time out comes with the load/store bit (PPC bit 42) set in
++	 * SRR1, but the cause comes in SRR1 not DSISR. Clear bit 42 so we're
++	 * directed to the ierror table so it will find the cause (which
++	 * describes it correctly as a store error).
++	 */
++	if (SRR1_MC_LOADSTORE(srr1) &&
++			((srr1 & 0x081c0000) == 0x08140000 ||
++			 (srr1 & 0x081c0000) == 0x08180000)) {
++		srr1 &= ~PPC_BIT(42);
++	}
++
++	return mce_handle_error(regs, srr1,
++			mce_p9_derror_table, mce_p9_ierror_table);
+ }
+ 
+ long __machine_check_early_realmode_p10(struct pt_regs *regs)
+ {
+-	return mce_handle_error(regs, mce_p10_derror_table, mce_p10_ierror_table);
++	unsigned long srr1 = regs->msr;
++
++	/*
++	 * Async machine check due to bad real address from store comes with
++	 * the load/store bit (PPC bit 42) set in SRR1, but the cause comes in
++	 * SRR1 not DSISR. Clear bit 42 so we're directed to the ierror table
++	 * so it will find the cause (which describes it correctly as a store
++	 * error).
++	 */
++	if (SRR1_MC_LOADSTORE(srr1) &&
++			(srr1 & 0x081c0000) == 0x08140000) {
++		srr1 &= ~PPC_BIT(42);
++	}
++
++	return mce_handle_error(regs, srr1,
++			mce_p10_derror_table, mce_p10_ierror_table);
+ }
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 89e34aa273e21..1138f035ce747 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -1213,6 +1213,19 @@ struct task_struct *__switch_to(struct task_struct *prev,
+ 			__flush_tlb_pending(batch);
+ 		batch->active = 0;
+ 	}
++
++	/*
++	 * On POWER9 the copy-paste buffer can only paste into
++	 * foreign real addresses, so unprivileged processes can not
++	 * see the data or use it in any way unless they have
++	 * foreign real mappings. If the new process has the foreign
++	 * real address mappings, we must issue a cp_abort to clear
++	 * any state and prevent snooping, corruption or a covert
++	 * channel. ISA v3.1 supports paste into local memory.
++	 */
++	if (new->mm && (cpu_has_feature(CPU_FTR_ARCH_31) ||
++			atomic_read(&new->mm->context.vas_windows)))
++		asm volatile(PPC_CP_ABORT);
+ #endif /* CONFIG_PPC_BOOK3S_64 */
+ 
+ #ifdef CONFIG_PPC_ADV_DEBUG_REGS
+@@ -1261,30 +1274,33 @@ struct task_struct *__switch_to(struct task_struct *prev,
+ #endif
+ 	last = _switch(old_thread, new_thread);
+ 
++	/*
++	 * Nothing after _switch will be run for newly created tasks,
++	 * because they switch directly to ret_from_fork/ret_from_kernel_thread
++	 * etc. Code added here should have a comment explaining why that is
++	 * okay.
++	 */
++
+ #ifdef CONFIG_PPC_BOOK3S_64
++	/*
++	 * This applies to a process that was context switched while inside
++	 * arch_enter_lazy_mmu_mode(), to re-activate the batch that was
++	 * deactivated above, before _switch(). This will never be the case
++	 * for new tasks.
++	 */
+ 	if (current_thread_info()->local_flags & _TLF_LAZY_MMU) {
+ 		current_thread_info()->local_flags &= ~_TLF_LAZY_MMU;
+ 		batch = this_cpu_ptr(&ppc64_tlb_batch);
+ 		batch->active = 1;
+ 	}
+ 
+-	if (current->thread.regs) {
++	/*
++	 * Math facilities are masked out of the child MSR in copy_thread.
++	 * A new task does not need to restore_math because it will
++	 * demand fault them.
++	 */
++	if (current->thread.regs)
+ 		restore_math(current->thread.regs);
+-
+-		/*
+-		 * On POWER9 the copy-paste buffer can only paste into
+-		 * foreign real addresses, so unprivileged processes can not
+-		 * see the data or use it in any way unless they have
+-		 * foreign real mappings. If the new process has the foreign
+-		 * real address mappings, we must issue a cp_abort to clear
+-		 * any state and prevent snooping, corruption or a covert
+-		 * channel. ISA v3.1 supports paste into local memory.
+-		 */
+-		if (current->mm &&
+-			(cpu_has_feature(CPU_FTR_ARCH_31) ||
+-			atomic_read(&current->mm->context.vas_windows)))
+-			asm volatile(PPC_CP_ABORT);
+-	}
+ #endif /* CONFIG_PPC_BOOK3S_64 */
+ 
+ 	return last;
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 2e05c783440a3..df6b468976d53 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -619,6 +619,8 @@ static void nmi_stop_this_cpu(struct pt_regs *regs)
+ 	/*
+ 	 * IRQs are already hard disabled by the smp_handle_nmi_ipi.
+ 	 */
++	set_cpu_online(smp_processor_id(), false);
++
+ 	spin_begin();
+ 	while (1)
+ 		spin_cpu_relax();
+@@ -634,6 +636,15 @@ void smp_send_stop(void)
+ static void stop_this_cpu(void *dummy)
+ {
+ 	hard_irq_disable();
++
++	/*
++	 * Offlining CPUs in stop_this_cpu can result in scheduler warnings,
++	 * (see commit de6e5d38417e), but printk_safe_flush_on_panic() wants
++	 * to know other CPUs are offline before it breaks locks to flush
++	 * printk buffers, in case we panic()ed while holding the lock.
++	 */
++	set_cpu_online(smp_processor_id(), false);
++
+ 	spin_begin();
+ 	while (1)
+ 		spin_cpu_relax();
+@@ -1547,7 +1558,6 @@ void start_secondary(void *unused)
+ 	smp_store_cpu_info(cpu);
+ 	set_dec(tb_ticks_per_jiffy);
+ 	rcu_cpu_starting(cpu);
+-	preempt_disable();
+ 	cpu_callin_map[cpu] = 1;
+ 
+ 	if (smp_ops->setup_cpu)
+diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
+index 1deb1bf331ddb..ea0d9c36e177c 100644
+--- a/arch/powerpc/kernel/stacktrace.c
++++ b/arch/powerpc/kernel/stacktrace.c
+@@ -172,17 +172,31 @@ static void handle_backtrace_ipi(struct pt_regs *regs)
+ 
+ static void raise_backtrace_ipi(cpumask_t *mask)
+ {
++	struct paca_struct *p;
+ 	unsigned int cpu;
++	u64 delay_us;
+ 
+ 	for_each_cpu(cpu, mask) {
+-		if (cpu == smp_processor_id())
++		if (cpu == smp_processor_id()) {
+ 			handle_backtrace_ipi(NULL);
+-		else
+-			smp_send_safe_nmi_ipi(cpu, handle_backtrace_ipi, 5 * USEC_PER_SEC);
+-	}
++			continue;
++		}
+ 
+-	for_each_cpu(cpu, mask) {
+-		struct paca_struct *p = paca_ptrs[cpu];
++		delay_us = 5 * USEC_PER_SEC;
++
++		if (smp_send_safe_nmi_ipi(cpu, handle_backtrace_ipi, delay_us)) {
++			// Now wait up to 5s for the other CPU to do its backtrace
++			while (cpumask_test_cpu(cpu, mask) && delay_us) {
++				udelay(1);
++				delay_us--;
++			}
++
++			// Other CPU cleared itself from the mask
++			if (delay_us)
++				continue;
++		}
++
++		p = paca_ptrs[cpu];
+ 
+ 		cpumask_clear_cpu(cpu, mask);
+ 
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index bc08136446660..67cc164c4ac1a 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -2657,7 +2657,7 @@ static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu)
+ 	cpumask_t *cpu_in_guest;
+ 	int i;
+ 
+-	cpu = cpu_first_thread_sibling(cpu);
++	cpu = cpu_first_tlb_thread_sibling(cpu);
+ 	if (nested) {
+ 		cpumask_set_cpu(cpu, &nested->need_tlb_flush);
+ 		cpu_in_guest = &nested->cpu_in_guest;
+@@ -2671,9 +2671,10 @@ static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu)
+ 	 * the other side is the first smp_mb() in kvmppc_run_core().
+ 	 */
+ 	smp_mb();
+-	for (i = 0; i < threads_per_core; ++i)
+-		if (cpumask_test_cpu(cpu + i, cpu_in_guest))
+-			smp_call_function_single(cpu + i, do_nothing, NULL, 1);
++	for (i = cpu; i <= cpu_last_tlb_thread_sibling(cpu);
++					i += cpu_tlb_thread_sibling_step())
++		if (cpumask_test_cpu(i, cpu_in_guest))
++			smp_call_function_single(i, do_nothing, NULL, 1);
+ }
+ 
+ static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu)
+@@ -2704,8 +2705,8 @@ static void kvmppc_prepare_radix_vcpu(struct kvm_vcpu *vcpu, int pcpu)
+ 	 */
+ 	if (prev_cpu != pcpu) {
+ 		if (prev_cpu >= 0 &&
+-		    cpu_first_thread_sibling(prev_cpu) !=
+-		    cpu_first_thread_sibling(pcpu))
++		    cpu_first_tlb_thread_sibling(prev_cpu) !=
++		    cpu_first_tlb_thread_sibling(pcpu))
+ 			radix_flush_cpu(kvm, prev_cpu, vcpu);
+ 		if (nested)
+ 			nested->prev_cpu[vcpu->arch.nested_vcpu_id] = pcpu;
+diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
+index 7a0e33a9c980d..3edc25c890923 100644
+--- a/arch/powerpc/kvm/book3s_hv_builtin.c
++++ b/arch/powerpc/kvm/book3s_hv_builtin.c
+@@ -800,7 +800,7 @@ void kvmppc_check_need_tlb_flush(struct kvm *kvm, int pcpu,
+ 	 * Thus we make all 4 threads use the same bit.
+ 	 */
+ 	if (cpu_has_feature(CPU_FTR_ARCH_300))
+-		pcpu = cpu_first_thread_sibling(pcpu);
++		pcpu = cpu_first_tlb_thread_sibling(pcpu);
+ 
+ 	if (nested)
+ 		need_tlb_flush = &nested->need_tlb_flush;
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index 60724f6744219..1b3ff0af12648 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -53,7 +53,8 @@ void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
+ 	hr->dawrx1 = vcpu->arch.dawrx1;
+ }
+ 
+-static void byteswap_pt_regs(struct pt_regs *regs)
++/* Use noinline_for_stack due to https://bugs.llvm.org/show_bug.cgi?id=49610 */
++static noinline_for_stack void byteswap_pt_regs(struct pt_regs *regs)
+ {
+ 	unsigned long *addr = (unsigned long *) regs;
+ 
+diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+index 7a0f12404e0ee..502d9ebe3ae47 100644
+--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
++++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+@@ -56,7 +56,7 @@ static int global_invalidates(struct kvm *kvm)
+ 		 * so use the bit for the first thread to represent the core.
+ 		 */
+ 		if (cpu_has_feature(CPU_FTR_ARCH_300))
+-			cpu = cpu_first_thread_sibling(cpu);
++			cpu = cpu_first_tlb_thread_sibling(cpu);
+ 		cpumask_clear_cpu(cpu, &kvm->arch.need_tlb_flush);
+ 	}
+ 
+diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
+index 96d9aa1640073..ac5720371c0d9 100644
+--- a/arch/powerpc/mm/book3s64/hash_utils.c
++++ b/arch/powerpc/mm/book3s64/hash_utils.c
+@@ -1522,8 +1522,8 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
+ }
+ EXPORT_SYMBOL_GPL(hash_page);
+ 
+-DECLARE_INTERRUPT_HANDLER_RET(__do_hash_fault);
+-DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
++DECLARE_INTERRUPT_HANDLER(__do_hash_fault);
++DEFINE_INTERRUPT_HANDLER(__do_hash_fault)
+ {
+ 	unsigned long ea = regs->dar;
+ 	unsigned long dsisr = regs->dsisr;
+@@ -1533,6 +1533,11 @@ DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
+ 	unsigned int region_id;
+ 	long err;
+ 
++	if (unlikely(dsisr & (DSISR_BAD_FAULT_64S | DSISR_KEYFAULT))) {
++		hash__do_page_fault(regs);
++		return;
++	}
++
+ 	region_id = get_region_id(ea);
+ 	if ((region_id == VMALLOC_REGION_ID) || (region_id == IO_REGION_ID))
+ 		mm = &init_mm;
+@@ -1571,9 +1576,10 @@ DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
+ 			bad_page_fault(regs, SIGBUS);
+ 		}
+ 		err = 0;
+-	}
+ 
+-	return err;
++	} else if (err) {
++		hash__do_page_fault(regs);
++	}
+ }
+ 
+ /*
+@@ -1582,13 +1588,6 @@ DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
+  */
+ DEFINE_INTERRUPT_HANDLER_RAW(do_hash_fault)
+ {
+-	unsigned long dsisr = regs->dsisr;
+-
+-	if (unlikely(dsisr & (DSISR_BAD_FAULT_64S | DSISR_KEYFAULT))) {
+-		hash__do_page_fault(regs);
+-		return 0;
+-	}
+-
+ 	/*
+ 	 * If we are in an "NMI" (e.g., an interrupt when soft-disabled), then
+ 	 * don't call hash_page, just fail the fault. This is required to
+@@ -1607,8 +1606,7 @@ DEFINE_INTERRUPT_HANDLER_RAW(do_hash_fault)
+ 		return 0;
+ 	}
+ 
+-	if (__do_hash_fault(regs))
+-		hash__do_page_fault(regs);
++	__do_hash_fault(regs);
+ 
+ 	return 0;
+ }
+diff --git a/arch/powerpc/platforms/cell/smp.c b/arch/powerpc/platforms/cell/smp.c
+index c855a0aeb49cc..d7ab868aab54a 100644
+--- a/arch/powerpc/platforms/cell/smp.c
++++ b/arch/powerpc/platforms/cell/smp.c
+@@ -78,9 +78,6 @@ static inline int smp_startup_cpu(unsigned int lcpu)
+ 
+ 	pcpu = get_hard_smp_processor_id(lcpu);
+ 
+-	/* Fixup atomic count: it exited inside IRQ handler. */
+-	task_thread_info(paca_ptrs[lcpu]->__current)->preempt_count	= 0;
+-
+ 	/*
+ 	 * If the RTAS start-cpu token does not exist then presume the
+ 	 * cpu is already spinning.
+diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
+index ef26fe40efb03..d34e6eb4be0d5 100644
+--- a/arch/powerpc/platforms/pseries/papr_scm.c
++++ b/arch/powerpc/platforms/pseries/papr_scm.c
+@@ -18,6 +18,7 @@
+ #include <asm/plpar_wrappers.h>
+ #include <asm/papr_pdsm.h>
+ #include <asm/mce.h>
++#include <asm/unaligned.h>
+ 
+ #define BIND_ANY_ADDR (~0ul)
+ 
+@@ -900,6 +901,20 @@ static ssize_t flags_show(struct device *dev,
+ }
+ DEVICE_ATTR_RO(flags);
+ 
++static umode_t papr_nd_attribute_visible(struct kobject *kobj,
++					 struct attribute *attr, int n)
++{
++	struct device *dev = kobj_to_dev(kobj);
++	struct nvdimm *nvdimm = to_nvdimm(dev);
++	struct papr_scm_priv *p = nvdimm_provider_data(nvdimm);
++
++	/* If perf-stats are not available, hide the perf_stats sysfs attribute */
++	if (attr == &dev_attr_perf_stats.attr && p->stat_buffer_len == 0)
++		return 0;
++
++	return attr->mode;
++}
++
+ /* papr_scm specific dimm attributes */
+ static struct attribute *papr_nd_attributes[] = {
+ 	&dev_attr_flags.attr,
+@@ -909,6 +924,7 @@ static struct attribute *papr_nd_attributes[] = {
+ 
+ static struct attribute_group papr_nd_attribute_group = {
+ 	.name = "papr",
++	.is_visible = papr_nd_attribute_visible,
+ 	.attrs = papr_nd_attributes,
+ };
+ 
+@@ -924,7 +940,6 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
+ 	struct nd_region_desc ndr_desc;
+ 	unsigned long dimm_flags;
+ 	int target_nid, online_nid;
+-	ssize_t stat_size;
+ 
+ 	p->bus_desc.ndctl = papr_scm_ndctl;
+ 	p->bus_desc.module = THIS_MODULE;
+@@ -1009,16 +1024,6 @@ static int papr_scm_nvdimm_init(struct papr_scm_priv *p)
+ 	list_add_tail(&p->region_list, &papr_nd_regions);
+ 	mutex_unlock(&papr_ndr_lock);
+ 
+-	/* Try retriving the stat buffer and see if its supported */
+-	stat_size = drc_pmem_query_stats(p, NULL, 0);
+-	if (stat_size > 0) {
+-		p->stat_buffer_len = stat_size;
+-		dev_dbg(&p->pdev->dev, "Max perf-stat size %lu-bytes\n",
+-			p->stat_buffer_len);
+-	} else {
+-		dev_info(&p->pdev->dev, "Dimm performance stats unavailable\n");
+-	}
+-
+ 	return 0;
+ 
+ err:	nvdimm_bus_unregister(p->bus);
+@@ -1094,8 +1099,10 @@ static int papr_scm_probe(struct platform_device *pdev)
+ 	u32 drc_index, metadata_size;
+ 	u64 blocks, block_size;
+ 	struct papr_scm_priv *p;
++	u8 uuid_raw[UUID_SIZE];
+ 	const char *uuid_str;
+-	u64 uuid[2];
++	ssize_t stat_size;
++	uuid_t uuid;
+ 	int rc;
+ 
+ 	/* check we have all the required DT properties */
+@@ -1138,16 +1145,23 @@ static int papr_scm_probe(struct platform_device *pdev)
+ 	p->hcall_flush_required = of_property_read_bool(dn, "ibm,hcall-flush-required");
+ 
+ 	/* We just need to ensure that set cookies are unique across */
+-	uuid_parse(uuid_str, (uuid_t *) uuid);
++	uuid_parse(uuid_str, &uuid);
++
+ 	/*
+-	 * cookie1 and cookie2 are not really little endian
+-	 * we store a little endian representation of the
+-	 * uuid str so that we can compare this with the label
+-	 * area cookie irrespective of the endian config with which
+-	 * the kernel is built.
++	 * The cookie1 and cookie2 are not really little endian.
++	 * We store a raw buffer representation of the
++	 * uuid string so that we can compare this with the label
++	 * area cookie irrespective of the endian configuration
++	 * with which the kernel is built.
++	 *
++	 * Historically we stored the cookie in the below format.
++	 * for a uuid string 72511b67-0b3b-42fd-8d1d-5be3cae8bcaa
++	 *	cookie1 was 0xfd423b0b671b5172
++	 *	cookie2 was 0xaabce8cae35b1d8d
+ 	 */
+-	p->nd_set.cookie1 = cpu_to_le64(uuid[0]);
+-	p->nd_set.cookie2 = cpu_to_le64(uuid[1]);
++	export_uuid(uuid_raw, &uuid);
++	p->nd_set.cookie1 = get_unaligned_le64(&uuid_raw[0]);
++	p->nd_set.cookie2 = get_unaligned_le64(&uuid_raw[8]);
+ 
+ 	/* might be zero */
+ 	p->metadata_size = metadata_size;
+@@ -1172,6 +1186,14 @@ static int papr_scm_probe(struct platform_device *pdev)
+ 	p->res.name  = pdev->name;
+ 	p->res.flags = IORESOURCE_MEM;
+ 
++	/* Try retrieving the stat buffer and see if it's supported */
++	stat_size = drc_pmem_query_stats(p, NULL, 0);
++	if (stat_size > 0) {
++		p->stat_buffer_len = stat_size;
++		dev_dbg(&p->pdev->dev, "Max perf-stat size %lu-bytes\n",
++			p->stat_buffer_len);
++	}
++
+ 	rc = papr_scm_nvdimm_init(p);
+ 	if (rc)
+ 		goto err2;
+diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
+index c70b4be9f0a54..f47429323eee9 100644
+--- a/arch/powerpc/platforms/pseries/smp.c
++++ b/arch/powerpc/platforms/pseries/smp.c
+@@ -105,9 +105,6 @@ static inline int smp_startup_cpu(unsigned int lcpu)
+ 		return 1;
+ 	}
+ 
+-	/* Fixup atomic count: it exited inside IRQ handler. */
+-	task_thread_info(paca_ptrs[lcpu]->__current)->preempt_count	= 0;
+-
+ 	/* 
+ 	 * If the RTAS start-cpu token does not exist then presume the
+ 	 * cpu is already spinning.
+@@ -211,7 +208,9 @@ static __init void pSeries_smp_probe(void)
+ 	if (!cpu_has_feature(CPU_FTR_SMT))
+ 		return;
+ 
+-	if (check_kvm_guest()) {
++	check_kvm_guest();
++
++	if (is_kvm_guest()) {
+ 		/*
+ 		 * KVM emulates doorbells by disabling FSCR[MSGP] so msgsndp
+ 		 * faults to the hypervisor which then reads the instruction
+diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
+index 9a408e2942acf..bd82375db51a6 100644
+--- a/arch/riscv/kernel/smpboot.c
++++ b/arch/riscv/kernel/smpboot.c
+@@ -180,7 +180,6 @@ asmlinkage __visible void smp_callin(void)
+ 	 * Disable preemption before enabling interrupts, so we don't try to
+ 	 * schedule a CPU that hasn't actually started yet.
+ 	 */
+-	preempt_disable();
+ 	local_irq_enable();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index b4c7c34069f81..93488bbf491b9 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -164,6 +164,7 @@ config S390
+ 	select HAVE_FUTEX_CMPXCHG if FUTEX
+ 	select HAVE_GCC_PLUGINS
+ 	select HAVE_GENERIC_VDSO
++	select HAVE_IOREMAP_PROT if PCI
+ 	select HAVE_IRQ_EXIT_ON_IRQ_STACK
+ 	select HAVE_KERNEL_BZIP2
+ 	select HAVE_KERNEL_GZIP
+@@ -853,7 +854,7 @@ config CMM_IUCV
+ config APPLDATA_BASE
+ 	def_bool n
+ 	prompt "Linux - VM Monitor Stream, base infrastructure"
+-	depends on PROC_FS
++	depends on PROC_SYSCTL
+ 	help
+ 	  This provides a kernel interface for creating and updating z/VM APPLDATA
+ 	  monitor records. The monitor records are updated at certain time
+diff --git a/arch/s390/boot/uv.c b/arch/s390/boot/uv.c
+index 87641dd65ccf9..b3501ea5039e4 100644
+--- a/arch/s390/boot/uv.c
++++ b/arch/s390/boot/uv.c
+@@ -36,6 +36,7 @@ void uv_query_info(void)
+ 		uv_info.max_sec_stor_addr = ALIGN(uvcb.max_guest_stor_addr, PAGE_SIZE);
+ 		uv_info.max_num_sec_conf = uvcb.max_num_sec_conf;
+ 		uv_info.max_guest_cpu_id = uvcb.max_guest_cpu_id;
++		uv_info.uv_feature_indications = uvcb.uv_feature_indications;
+ 	}
+ 
+ #ifdef CONFIG_PROTECTED_VIRTUALIZATION_GUEST
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index 29c7ecd5ad1d5..adea53f69bfd3 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -344,8 +344,6 @@ static inline int is_module_addr(void *addr)
+ #define PTRS_PER_P4D	_CRST_ENTRIES
+ #define PTRS_PER_PGD	_CRST_ENTRIES
+ 
+-#define MAX_PTRS_PER_P4D	PTRS_PER_P4D
+-
+ /*
+  * Segment table and region3 table entry encoding
+  * (R = read-only, I = invalid, y = young bit):
+@@ -865,6 +863,25 @@ static inline int pte_unused(pte_t pte)
+ 	return pte_val(pte) & _PAGE_UNUSED;
+ }
+ 
++/*
++ * Extract the pgprot value from the given pte while at the same time making it
++ * usable for kernel address space mappings where fault driven dirty and
++ * young/old accounting is not supported, i.e _PAGE_PROTECT and _PAGE_INVALID
++ * must not be set.
++ */
++static inline pgprot_t pte_pgprot(pte_t pte)
++{
++	unsigned long pte_flags = pte_val(pte) & _PAGE_CHG_MASK;
++
++	if (pte_write(pte))
++		pte_flags |= pgprot_val(PAGE_KERNEL);
++	else
++		pte_flags |= pgprot_val(PAGE_KERNEL_RO);
++	pte_flags |= pte_val(pte) & mio_wb_bit_mask;
++
++	return __pgprot(pte_flags);
++}
++
+ /*
+  * pgd/pmd/pte modification functions
+  */
+diff --git a/arch/s390/include/asm/preempt.h b/arch/s390/include/asm/preempt.h
+index b49e0492842cc..d9d5350cc3ec3 100644
+--- a/arch/s390/include/asm/preempt.h
++++ b/arch/s390/include/asm/preempt.h
+@@ -29,12 +29,6 @@ static inline void preempt_count_set(int pc)
+ 				  old, new) != old);
+ }
+ 
+-#define init_task_preempt_count(p)	do { } while (0)
+-
+-#define init_idle_preempt_count(p, cpu)	do { \
+-	S390_lowcore.preempt_count = PREEMPT_ENABLED; \
+-} while (0)
+-
+ static inline void set_preempt_need_resched(void)
+ {
+ 	__atomic_and(~PREEMPT_NEED_RESCHED, &S390_lowcore.preempt_count);
+@@ -88,12 +82,6 @@ static inline void preempt_count_set(int pc)
+ 	S390_lowcore.preempt_count = pc;
+ }
+ 
+-#define init_task_preempt_count(p)	do { } while (0)
+-
+-#define init_idle_preempt_count(p, cpu)	do { \
+-	S390_lowcore.preempt_count = PREEMPT_ENABLED; \
+-} while (0)
+-
+ static inline void set_preempt_need_resched(void)
+ {
+ }
+@@ -130,6 +118,10 @@ static inline bool should_resched(int preempt_offset)
+ 
+ #endif /* CONFIG_HAVE_MARCH_Z196_FEATURES */
+ 
++#define init_task_preempt_count(p)	do { } while (0)
++/* Deferred to CPU bringup time */
++#define init_idle_preempt_count(p, cpu)	do { } while (0)
++
+ #ifdef CONFIG_PREEMPTION
+ extern void preempt_schedule(void);
+ #define __preempt_schedule() preempt_schedule()
+diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
+index 7b98d4caee779..12c5f006c1364 100644
+--- a/arch/s390/include/asm/uv.h
++++ b/arch/s390/include/asm/uv.h
+@@ -73,6 +73,10 @@ enum uv_cmds_inst {
+ 	BIT_UVC_CMD_UNPIN_PAGE_SHARED = 22,
+ };
+ 
++enum uv_feat_ind {
++	BIT_UV_FEAT_MISC = 0,
++};
++
+ struct uv_cb_header {
+ 	u16 len;
+ 	u16 cmd;	/* Command Code */
+@@ -97,7 +101,8 @@ struct uv_cb_qui {
+ 	u64 max_guest_stor_addr;
+ 	u8  reserved88[158 - 136];
+ 	u16 max_guest_cpu_id;
+-	u8  reserveda0[200 - 160];
++	u64 uv_feature_indications;
++	u8  reserveda0[200 - 168];
+ } __packed __aligned(8);
+ 
+ /* Initialize Ultravisor */
+@@ -274,6 +279,7 @@ struct uv_info {
+ 	unsigned long max_sec_stor_addr;
+ 	unsigned int max_num_sec_conf;
+ 	unsigned short max_guest_cpu_id;
++	unsigned long uv_feature_indications;
+ };
+ 
+ extern struct uv_info uv_info;
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 5aab59ad56881..382d73da134cf 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -466,6 +466,7 @@ static void __init setup_lowcore_dat_off(void)
+ 	lc->br_r1_trampoline = 0x07f1;	/* br %r1 */
+ 	lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
+ 	lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
++	lc->preempt_count = PREEMPT_DISABLED;
+ 
+ 	set_prefix((u32)(unsigned long) lc);
+ 	lowcore_ptr[0] = lc;
+diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
+index 2fec2b80d35d2..1fb483e06a647 100644
+--- a/arch/s390/kernel/smp.c
++++ b/arch/s390/kernel/smp.c
+@@ -219,6 +219,7 @@ static int pcpu_alloc_lowcore(struct pcpu *pcpu, int cpu)
+ 	lc->br_r1_trampoline = 0x07f1;	/* br %r1 */
+ 	lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
+ 	lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
++	lc->preempt_count = PREEMPT_DISABLED;
+ 	if (nmi_alloc_per_cpu(lc))
+ 		goto out_stack;
+ 	lowcore_ptr[cpu] = lc;
+@@ -878,7 +879,6 @@ static void smp_init_secondary(void)
+ 	restore_access_regs(S390_lowcore.access_regs_save_area);
+ 	cpu_init();
+ 	rcu_cpu_starting(cpu);
+-	preempt_disable();
+ 	init_cpu_timer();
+ 	vtime_init();
+ 	vdso_getcpu_init();
+diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
+index 370f664580af5..650b4b7b1e6b0 100644
+--- a/arch/s390/kernel/uv.c
++++ b/arch/s390/kernel/uv.c
+@@ -364,6 +364,15 @@ static ssize_t uv_query_facilities(struct kobject *kobj,
+ static struct kobj_attribute uv_query_facilities_attr =
+ 	__ATTR(facilities, 0444, uv_query_facilities, NULL);
+ 
++static ssize_t uv_query_feature_indications(struct kobject *kobj,
++					    struct kobj_attribute *attr, char *buf)
++{
++	return sysfs_emit(buf, "%lx\n", uv_info.uv_feature_indications);
++}
++
++static struct kobj_attribute uv_query_feature_indications_attr =
++	__ATTR(feature_indications, 0444, uv_query_feature_indications, NULL);
++
+ static ssize_t uv_query_max_guest_cpus(struct kobject *kobj,
+ 				       struct kobj_attribute *attr, char *page)
+ {
+@@ -396,6 +405,7 @@ static struct kobj_attribute uv_query_max_guest_addr_attr =
+ 
+ static struct attribute *uv_query_attrs[] = {
+ 	&uv_query_facilities_attr.attr,
++	&uv_query_feature_indications_attr.attr,
+ 	&uv_query_max_guest_cpus_attr.attr,
+ 	&uv_query_max_guest_vms_attr.attr,
+ 	&uv_query_max_guest_addr_attr.attr,
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 1296fc10f80c8..876fc1f7282a0 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -329,31 +329,31 @@ static void allow_cpu_feat(unsigned long nr)
+ 
+ static inline int plo_test_bit(unsigned char nr)
+ {
+-	register unsigned long r0 asm("0") = (unsigned long) nr | 0x100;
++	unsigned long function = (unsigned long)nr | 0x100;
+ 	int cc;
+ 
+ 	asm volatile(
++		"	lgr	0,%[function]\n"
+ 		/* Parameter registers are ignored for "test bit" */
+ 		"	plo	0,0,0,0(0)\n"
+ 		"	ipm	%0\n"
+ 		"	srl	%0,28\n"
+ 		: "=d" (cc)
+-		: "d" (r0)
+-		: "cc");
++		: [function] "d" (function)
++		: "cc", "0");
+ 	return cc == 0;
+ }
+ 
+ static __always_inline void __insn32_query(unsigned int opcode, u8 *query)
+ {
+-	register unsigned long r0 asm("0") = 0;	/* query function */
+-	register unsigned long r1 asm("1") = (unsigned long) query;
+-
+ 	asm volatile(
+-		/* Parameter regs are ignored */
++		"	lghi	0,0\n"
++		"	lgr	1,%[query]\n"
++		/* Parameter registers are ignored */
+ 		"	.insn	rrf,%[opc] << 16,2,4,6,0\n"
+ 		:
+-		: "d" (r0), "a" (r1), [opc] "i" (opcode)
+-		: "cc", "memory");
++		: [query] "d" ((unsigned long)query), [opc] "i" (opcode)
++		: "cc", "memory", "0", "1");
+ }
+ 
+ #define INSN_SORTL 0xb938
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index 826d017773616..f54f6dcd87489 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -792,6 +792,32 @@ void do_secure_storage_access(struct pt_regs *regs)
+ 	struct page *page;
+ 	int rc;
+ 
++	/*
++	 * bit 61 tells us if the address is valid, if it's not we
++	 * bit 61 tells us if the address is valid; if it's not, we
++	 * SIGSEGV to the process. Unfortunately bit 61 is not
++	 * reliable without the misc UV feature so we need to check
++	 * for that as well.
++	 */
++	if (test_bit_inv(BIT_UV_FEAT_MISC, &uv_info.uv_feature_indications) &&
++	    !test_bit_inv(61, &regs->int_parm_long)) {
++		/*
++		 * When this happens, userspace did something that it
++		 * was not supposed to do, e.g. branching into secure
++		 * memory. Trigger a segmentation fault.
++		 */
++		if (user_mode(regs)) {
++			send_sig(SIGSEGV, current, 0);
++			return;
++		}
++
++		/*
++		 * The kernel should never run into this case and we
++		 * have no way out of this situation.
++		 */
++		panic("Unexpected PGM 0x3d with TEID bit 61=0");
++	}
++
+ 	switch (get_fault_type(regs)) {
+ 	case USER_FAULT:
+ 		mm = current->mm;
+diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
+index 372acdc9033eb..65924d9ec2459 100644
+--- a/arch/sh/kernel/smp.c
++++ b/arch/sh/kernel/smp.c
+@@ -186,8 +186,6 @@ asmlinkage void start_secondary(void)
+ 
+ 	per_cpu_trap_init();
+ 
+-	preempt_disable();
+-
+ 	notify_cpu_starting(cpu);
+ 
+ 	local_irq_enable();
+diff --git a/arch/sparc/kernel/smp_32.c b/arch/sparc/kernel/smp_32.c
+index 50c127ab46d5b..22b148e5a5f88 100644
+--- a/arch/sparc/kernel/smp_32.c
++++ b/arch/sparc/kernel/smp_32.c
+@@ -348,7 +348,6 @@ static void sparc_start_secondary(void *arg)
+ 	 */
+ 	arch_cpu_pre_starting(arg);
+ 
+-	preempt_disable();
+ 	cpu = smp_processor_id();
+ 
+ 	notify_cpu_starting(cpu);
+diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
+index e38d8bf454e86..ae5faa1d989d2 100644
+--- a/arch/sparc/kernel/smp_64.c
++++ b/arch/sparc/kernel/smp_64.c
+@@ -138,9 +138,6 @@ void smp_callin(void)
+ 
+ 	set_cpu_online(cpuid, true);
+ 
+-	/* idle thread is expected to have preempt disabled */
+-	preempt_disable();
+-
+ 	local_irq_enable();
+ 
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+diff --git a/arch/x86/crypto/curve25519-x86_64.c b/arch/x86/crypto/curve25519-x86_64.c
+index 6706b6cb1d0fc..38caf61cd5b7d 100644
+--- a/arch/x86/crypto/curve25519-x86_64.c
++++ b/arch/x86/crypto/curve25519-x86_64.c
+@@ -1500,7 +1500,7 @@ static int __init curve25519_mod_init(void)
+ static void __exit curve25519_mod_exit(void)
+ {
+ 	if (IS_REACHABLE(CONFIG_CRYPTO_KPP) &&
+-	    (boot_cpu_has(X86_FEATURE_BMI2) || boot_cpu_has(X86_FEATURE_ADX)))
++	    static_branch_likely(&curve25519_use_bmi2_adx))
+ 		crypto_unregister_kpp(&curve25519_alg);
+ }
+ 
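This one-liner makes the exit path test the same static branch the init path set: registration happens only when BMI2 and ADX are both present, while the old exit test used OR and could try to unregister an algorithm that was never registered. A minimal sketch of the init/exit symmetry rule (userspace, hypothetical names):

/*
 * Exit tests the predicate init recorded, instead of re-deriving it
 * from raw CPU features.
 */
#include <stdbool.h>
#include <stdio.h>

static bool use_fast_path;	/* static_branch analogue */

static void mod_init(bool bmi2, bool adx)
{
	use_fast_path = bmi2 && adx;	/* decided once, recorded once */
	if (use_fast_path)
		puts("register kpp");
}

static void mod_exit(void)
{
	if (use_fast_path)	/* mirror init; do not recompute */
		puts("unregister kpp");
	else
		puts("nothing to unregister");
}

int main(void)
{
	mod_init(true, false);	/* BMI2 without ADX: fast path stays off */
	mod_exit();		/* correctly unregisters nothing */
	return 0;
}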
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index a16a5294d55f6..1886aaf199143 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -506,7 +506,7 @@ SYM_CODE_START(\asmsym)
+ 
+ 	movq	%rsp, %rdi		/* pt_regs pointer */
+ 
+-	call	\cfunc
++	call	kernel_\cfunc
+ 
+ 	/*
+ 	 * No need to switch back to the IST stack. The current stack is either
+@@ -517,7 +517,7 @@ SYM_CODE_START(\asmsym)
+ 
+ 	/* Switch to the regular task stack */
+ .Lfrom_usermode_switch_stack_\@:
+-	idtentry_body safe_stack_\cfunc, has_error_code=1
++	idtentry_body user_\cfunc, has_error_code=1
+ 
+ _ASM_NOKPROBE(\asmsym)
+ SYM_CODE_END(\asmsym)
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 8f71dd72ef95f..1eb45139fcc6e 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -1626,6 +1626,8 @@ static void x86_pmu_del(struct perf_event *event, int flags)
+ 	if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
+ 		goto do_del;
+ 
++	__set_bit(event->hw.idx, cpuc->dirty);
++
+ 	/*
+ 	 * Not a TXN, therefore cleanup properly.
+ 	 */
+@@ -2474,6 +2476,31 @@ static int x86_pmu_event_init(struct perf_event *event)
+ 	return err;
+ }
+ 
++void perf_clear_dirty_counters(void)
++{
++	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++	int i;
++
++	/* No need to clear counters that are still assigned. */
++	for (i = 0; i < cpuc->n_events; i++)
++		__clear_bit(cpuc->assign[i], cpuc->dirty);
++
++	if (bitmap_empty(cpuc->dirty, X86_PMC_IDX_MAX))
++		return;
++
++	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
++		/* Metrics and fake events don't have corresponding HW counters. */
++		if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
++			continue;
++		else if (i >= INTEL_PMC_IDX_FIXED)
++			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
++		else
++			wrmsrl(x86_pmu_event_addr(i), 0);
++	}
++
++	bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
++}
++
+ static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
+ {
+ 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
+@@ -2497,7 +2524,6 @@ static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
+ 
+ static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
+ {
+-
+ 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
+ 		return;
+ 
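Together these hunks add a `dirty` bitmap to the per-CPU PMU state: x86_pmu_del() marks the freed counter dirty, and the new perf_clear_dirty_counters() drops bits that are still assigned and zeroes the remaining hardware counters (it is invoked from the arch/x86/mm/tlb.c hunk further down, when an RDPMC-capable task is switched in). A toy userspace model of the bookkeeping (hypothetical, not kernel code):

#include <stdint.h>
#include <stdio.h>

#define NCTR 8

static uint64_t dirty;		/* counters freed since the last wipe */
static int assign[NCTR];	/* currently assigned counter indices */
static int n_events;

static void del_event(int idx)	/* mirrors the x86_pmu_del() marking */
{
	dirty |= 1ull << idx;
}

static void clear_dirty(void)	/* mirrors perf_clear_dirty_counters() */
{
	for (int i = 0; i < n_events; i++)
		dirty &= ~(1ull << assign[i]);	/* still live: keep it */

	for (int i = 0; i < NCTR; i++)
		if (dirty & (1ull << i))
			printf("wipe counter %d\n", i);	/* wrmsrl(..., 0) */

	dirty = 0;
}

int main(void)
{
	del_event(2);
	del_event(5);
	assign[0] = 5;		/* counter 5 was re-used by a new event */
	n_events = 1;
	clear_dirty();		/* wipes only counter 2 */
	return 0;
}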
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index e28892270c580..d76be3bba11e4 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -280,6 +280,8 @@ static struct extra_reg intel_spr_extra_regs[] __read_mostly = {
+ 	INTEL_UEVENT_EXTRA_REG(0x012b, MSR_OFFCORE_RSP_1, 0x3fffffffffull, RSP_1),
+ 	INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd),
+ 	INTEL_UEVENT_EXTRA_REG(0x01c6, MSR_PEBS_FRONTEND, 0x7fff17, FE),
++	INTEL_UEVENT_EXTRA_REG(0x40ad, MSR_PEBS_FRONTEND, 0x7, FE),
++	INTEL_UEVENT_EXTRA_REG(0x04c2, MSR_PEBS_FRONTEND, 0x8, FE),
+ 	EVENT_EXTRA_END
+ };
+ 
+@@ -4030,8 +4032,10 @@ spr_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ 	 * The :ppp indicates the Precise Distribution (PDist) facility, which
+ 	 * is only supported on the GP counter 0. If a :ppp event is not
+ 	 * available on the GP counter 0, error out.
++	 * Exception: Instruction PDIR is only available on the fixed counter 0.
+ 	 */
+-	if (event->attr.precise_ip == 3) {
++	if ((event->attr.precise_ip == 3) &&
++	    !constraint_match(&fixed0_constraint, event->hw.config)) {
+ 		if (c->idxmsk64 & BIT_ULL(0))
+ 			return &counter0_constraint;
+ 
+@@ -6157,8 +6161,13 @@ __init int intel_pmu_init(void)
+ 		pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
+ 		pmu->name = "cpu_core";
+ 		pmu->cpu_type = hybrid_big;
+-		pmu->num_counters = x86_pmu.num_counters + 2;
+-		pmu->num_counters_fixed = x86_pmu.num_counters_fixed + 1;
++		if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) {
++			pmu->num_counters = x86_pmu.num_counters + 2;
++			pmu->num_counters_fixed = x86_pmu.num_counters_fixed + 1;
++		} else {
++			pmu->num_counters = x86_pmu.num_counters;
++			pmu->num_counters_fixed = x86_pmu.num_counters_fixed;
++		}
+ 		pmu->max_pebs_events = min_t(unsigned, MAX_PEBS_EVENTS, pmu->num_counters);
+ 		pmu->unconstrained = (struct event_constraint)
+ 					__EVENT_CONSTRAINT(0, (1ULL << pmu->num_counters) - 1,
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index ad87cb36f7c81..2bf1c7ea2758d 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -229,6 +229,7 @@ struct cpu_hw_events {
+ 	 */
+ 	struct perf_event	*events[X86_PMC_IDX_MAX]; /* in counter order */
+ 	unsigned long		active_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
++	unsigned long		dirty[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
+ 	int			enabled;
+ 
+ 	int			n_events; /* the # of events in the below arrays */
+diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
+index 73d45b0dfff2d..cd9f3e3049449 100644
+--- a/arch/x86/include/asm/idtentry.h
++++ b/arch/x86/include/asm/idtentry.h
+@@ -312,8 +312,8 @@ static __always_inline void __##func(struct pt_regs *regs)
+  */
+ #define DECLARE_IDTENTRY_VC(vector, func)				\
+ 	DECLARE_IDTENTRY_RAW_ERRORCODE(vector, func);			\
+-	__visible noinstr void ist_##func(struct pt_regs *regs, unsigned long error_code);	\
+-	__visible noinstr void safe_stack_##func(struct pt_regs *regs, unsigned long error_code)
++	__visible noinstr void kernel_##func(struct pt_regs *regs, unsigned long error_code);	\
++	__visible noinstr void   user_##func(struct pt_regs *regs, unsigned long error_code)
+ 
+ /**
+  * DEFINE_IDTENTRY_IST - Emit code for IST entry points
+@@ -355,33 +355,24 @@ static __always_inline void __##func(struct pt_regs *regs)
+ 	DEFINE_IDTENTRY_RAW_ERRORCODE(func)
+ 
+ /**
+- * DEFINE_IDTENTRY_VC_SAFE_STACK - Emit code for VMM communication handler
+-				   which runs on a safe stack.
++ * DEFINE_IDTENTRY_VC_KERNEL - Emit code for VMM communication handler
++			       when raised from kernel mode
+  * @func:	Function name of the entry point
+  *
+  * Maps to DEFINE_IDTENTRY_RAW_ERRORCODE
+  */
+-#define DEFINE_IDTENTRY_VC_SAFE_STACK(func)				\
+-	DEFINE_IDTENTRY_RAW_ERRORCODE(safe_stack_##func)
++#define DEFINE_IDTENTRY_VC_KERNEL(func)				\
++	DEFINE_IDTENTRY_RAW_ERRORCODE(kernel_##func)
+ 
+ /**
+- * DEFINE_IDTENTRY_VC_IST - Emit code for VMM communication handler
+-			    which runs on the VC fall-back stack
++ * DEFINE_IDTENTRY_VC_USER - Emit code for VMM communication handler
++			     when raised from user mode
+  * @func:	Function name of the entry point
+  *
+  * Maps to DEFINE_IDTENTRY_RAW_ERRORCODE
+  */
+-#define DEFINE_IDTENTRY_VC_IST(func)				\
+-	DEFINE_IDTENTRY_RAW_ERRORCODE(ist_##func)
+-
+-/**
+- * DEFINE_IDTENTRY_VC - Emit code for VMM communication handler
+- * @func:	Function name of the entry point
+- *
+- * Maps to DEFINE_IDTENTRY_RAW_ERRORCODE
+- */
+-#define DEFINE_IDTENTRY_VC(func)					\
+-	DEFINE_IDTENTRY_RAW_ERRORCODE(func)
++#define DEFINE_IDTENTRY_VC_USER(func)				\
++	DEFINE_IDTENTRY_RAW_ERRORCODE(user_##func)
+ 
+ #else	/* CONFIG_X86_64 */
+ 
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 682e82956ea5a..fbd55c682d5e7 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -85,7 +85,7 @@
+ #define KVM_REQ_APICV_UPDATE \
+ 	KVM_ARCH_REQ_FLAGS(25, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+ #define KVM_REQ_TLB_FLUSH_CURRENT	KVM_ARCH_REQ(26)
+-#define KVM_REQ_HV_TLB_FLUSH \
++#define KVM_REQ_TLB_FLUSH_GUEST \
+ 	KVM_ARCH_REQ_FLAGS(27, KVM_REQUEST_NO_WAKEUP)
+ #define KVM_REQ_APF_READY		KVM_ARCH_REQ(28)
+ #define KVM_REQ_MSR_FILTER_CHANGED	KVM_ARCH_REQ(29)
+@@ -1464,6 +1464,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu);
+ void kvm_mmu_init_vm(struct kvm *kvm);
+ void kvm_mmu_uninit_vm(struct kvm *kvm);
+ 
++void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu);
+ void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
+ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
+ 				      struct kvm_memory_slot *memslot,
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index 544f41a179fb6..8fc1b5003713f 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -478,6 +478,7 @@ struct x86_pmu_lbr {
+ 
+ extern void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap);
+ extern void perf_check_microcode(void);
++extern void perf_clear_dirty_counters(void);
+ extern int x86_perf_rdpmc_index(struct perf_event *event);
+ #else
+ static inline void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
+diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
+index f8cb8af4de5ce..fe5efbcba8240 100644
+--- a/arch/x86/include/asm/preempt.h
++++ b/arch/x86/include/asm/preempt.h
+@@ -44,7 +44,7 @@ static __always_inline void preempt_count_set(int pc)
+ #define init_task_preempt_count(p) do { } while (0)
+ 
+ #define init_idle_preempt_count(p, cpu) do { \
+-	per_cpu(__preempt_count, (cpu)) = PREEMPT_ENABLED; \
++	per_cpu(__preempt_count, (cpu)) = PREEMPT_DISABLED; \
+ } while (0)
+ 
+ /*
+diff --git a/arch/x86/include/uapi/asm/hwcap2.h b/arch/x86/include/uapi/asm/hwcap2.h
+index 5fdfcb47000f9..054604aba9f00 100644
+--- a/arch/x86/include/uapi/asm/hwcap2.h
++++ b/arch/x86/include/uapi/asm/hwcap2.h
+@@ -2,10 +2,12 @@
+ #ifndef _ASM_X86_HWCAP2_H
+ #define _ASM_X86_HWCAP2_H
+ 
++#include <linux/const.h>
++
+ /* MONITOR/MWAIT enabled in Ring 3 */
+-#define HWCAP2_RING3MWAIT		(1 << 0)
++#define HWCAP2_RING3MWAIT		_BITUL(0)
+ 
+ /* Kernel allows FSGSBASE instructions available in Ring 3 */
+-#define HWCAP2_FSGSBASE			BIT(1)
++#define HWCAP2_FSGSBASE			_BITUL(1)
+ 
+ #endif
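The switch to `_BITUL()` matters because UAPI headers are consumed by userspace, which never sees the kernel-internal `BIT()` macro, while `<linux/const.h>` is exported; `_BITUL()` also yields an unsigned long rather than a plain int. A small sketch using a stand-in macro (the real definition lives in the exported header; this mimic is for illustration):

#include <stdio.h>

#define MY_UL(x)	x##UL
#define MY_BITUL(n)	(MY_UL(1) << (n))

int main(void)
{
	/* (1 << 0) is a plain int; MY_BITUL(0) is an unsigned long. */
	printf("int bit: %zu bytes\n", sizeof(1 << 0));
	printf("_BITUL:  %zu bytes\n", sizeof(MY_BITUL(0)));
	return 0;
}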
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index 22f13343b5da8..4fa0a42808951 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -236,7 +236,7 @@ static void __init hv_smp_prepare_cpus(unsigned int max_cpus)
+ 	for_each_present_cpu(i) {
+ 		if (i == 0)
+ 			continue;
+-		ret = hv_call_add_logical_proc(numa_cpu_node(i), i, cpu_physical_id(i));
++		ret = hv_call_add_logical_proc(numa_cpu_node(i), i, i);
+ 		BUG_ON(ret);
+ 	}
+ 
+diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
+index 6edd1e2ee8afa..058aacb423371 100644
+--- a/arch/x86/kernel/early-quirks.c
++++ b/arch/x86/kernel/early-quirks.c
+@@ -549,6 +549,7 @@ static const struct pci_device_id intel_early_ids[] __initconst = {
+ 	INTEL_CNL_IDS(&gen9_early_ops),
+ 	INTEL_ICL_11_IDS(&gen11_early_ops),
+ 	INTEL_EHL_IDS(&gen11_early_ops),
++	INTEL_JSL_IDS(&gen11_early_ops),
+ 	INTEL_TGL_12_IDS(&gen11_early_ops),
+ 	INTEL_RKL_IDS(&gen11_early_ops),
+ 	INTEL_ADLS_IDS(&gen11_early_ops),
+diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
+index 651b81cd648e5..d66a33d24f4f9 100644
+--- a/arch/x86/kernel/sev.c
++++ b/arch/x86/kernel/sev.c
+@@ -7,12 +7,11 @@
+  * Author: Joerg Roedel <jroedel@suse.de>
+  */
+ 
+-#define pr_fmt(fmt)	"SEV-ES: " fmt
++#define pr_fmt(fmt)	"SEV: " fmt
+ 
+ #include <linux/sched/debug.h>	/* For show_regs() */
+ #include <linux/percpu-defs.h>
+ #include <linux/mem_encrypt.h>
+-#include <linux/lockdep.h>
+ #include <linux/printk.h>
+ #include <linux/mm_types.h>
+ #include <linux/set_memory.h>
+@@ -192,11 +191,19 @@ void noinstr __sev_es_ist_exit(void)
+ 	this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], *(unsigned long *)ist);
+ }
+ 
+-static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
++/*
++ * Nothing shall interrupt this code path while holding the per-CPU
++ * GHCB. The backup GHCB is only for NMIs interrupting this path.
++ *
++ * Callers must disable local interrupts around it.
++ */
++static noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state)
+ {
+ 	struct sev_es_runtime_data *data;
+ 	struct ghcb *ghcb;
+ 
++	WARN_ON(!irqs_disabled());
++
+ 	data = this_cpu_read(runtime_data);
+ 	ghcb = &data->ghcb_page;
+ 
+@@ -213,7 +220,9 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state)
+ 			data->ghcb_active        = false;
+ 			data->backup_ghcb_active = false;
+ 
++			instrumentation_begin();
+ 			panic("Unable to handle #VC exception! GHCB and Backup GHCB are already in use");
++			instrumentation_end();
+ 		}
+ 
+ 		/* Mark backup_ghcb active before writing to it */
+@@ -479,11 +488,13 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt
+ /* Include code shared with pre-decompression boot stage */
+ #include "sev-shared.c"
+ 
+-static __always_inline void sev_es_put_ghcb(struct ghcb_state *state)
++static noinstr void __sev_put_ghcb(struct ghcb_state *state)
+ {
+ 	struct sev_es_runtime_data *data;
+ 	struct ghcb *ghcb;
+ 
++	WARN_ON(!irqs_disabled());
++
+ 	data = this_cpu_read(runtime_data);
+ 	ghcb = &data->ghcb_page;
+ 
+@@ -507,7 +518,7 @@ void noinstr __sev_es_nmi_complete(void)
+ 	struct ghcb_state state;
+ 	struct ghcb *ghcb;
+ 
+-	ghcb = sev_es_get_ghcb(&state);
++	ghcb = __sev_get_ghcb(&state);
+ 
+ 	vc_ghcb_invalidate(ghcb);
+ 	ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_NMI_COMPLETE);
+@@ -517,7 +528,7 @@ void noinstr __sev_es_nmi_complete(void)
+ 	sev_es_wr_ghcb_msr(__pa_nodebug(ghcb));
+ 	VMGEXIT();
+ 
+-	sev_es_put_ghcb(&state);
++	__sev_put_ghcb(&state);
+ }
+ 
+ static u64 get_jump_table_addr(void)
+@@ -529,7 +540,7 @@ static u64 get_jump_table_addr(void)
+ 
+ 	local_irq_save(flags);
+ 
+-	ghcb = sev_es_get_ghcb(&state);
++	ghcb = __sev_get_ghcb(&state);
+ 
+ 	vc_ghcb_invalidate(ghcb);
+ 	ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_AP_JUMP_TABLE);
+@@ -543,7 +554,7 @@ static u64 get_jump_table_addr(void)
+ 	    ghcb_sw_exit_info_2_is_valid(ghcb))
+ 		ret = ghcb->save.sw_exit_info_2;
+ 
+-	sev_es_put_ghcb(&state);
++	__sev_put_ghcb(&state);
+ 
+ 	local_irq_restore(flags);
+ 
+@@ -668,7 +679,7 @@ static void sev_es_ap_hlt_loop(void)
+ 	struct ghcb_state state;
+ 	struct ghcb *ghcb;
+ 
+-	ghcb = sev_es_get_ghcb(&state);
++	ghcb = __sev_get_ghcb(&state);
+ 
+ 	while (true) {
+ 		vc_ghcb_invalidate(ghcb);
+@@ -685,7 +696,7 @@ static void sev_es_ap_hlt_loop(void)
+ 			break;
+ 	}
+ 
+-	sev_es_put_ghcb(&state);
++	__sev_put_ghcb(&state);
+ }
+ 
+ /*
+@@ -775,7 +786,7 @@ void __init sev_es_init_vc_handling(void)
+ 	sev_es_setup_play_dead();
+ 
+ 	/* Secondary CPUs use the runtime #VC handler */
+-	initial_vc_handler = (unsigned long)safe_stack_exc_vmm_communication;
++	initial_vc_handler = (unsigned long)kernel_exc_vmm_communication;
+ }
+ 
+ static void __init vc_early_forward_exception(struct es_em_ctxt *ctxt)
+@@ -1213,14 +1224,6 @@ static enum es_result vc_handle_trap_ac(struct ghcb *ghcb,
+ 	return ES_EXCEPTION;
+ }
+ 
+-static __always_inline void vc_handle_trap_db(struct pt_regs *regs)
+-{
+-	if (user_mode(regs))
+-		noist_exc_debug(regs);
+-	else
+-		exc_debug(regs);
+-}
+-
+ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
+ 					 struct ghcb *ghcb,
+ 					 unsigned long exit_code)
+@@ -1316,44 +1319,15 @@ static __always_inline bool on_vc_fallback_stack(struct pt_regs *regs)
+ 	return (sp >= __this_cpu_ist_bottom_va(VC2) && sp < __this_cpu_ist_top_va(VC2));
+ }
+ 
+-/*
+- * Main #VC exception handler. It is called when the entry code was able to
+- * switch off the IST to a safe kernel stack.
+- *
+- * With the current implementation it is always possible to switch to a safe
+- * stack because #VC exceptions only happen at known places, like intercepted
+- * instructions or accesses to MMIO areas/IO ports. They can also happen with
+- * code instrumentation when the hypervisor intercepts #DB, but the critical
+- * paths are forbidden to be instrumented, so #DB exceptions currently also
+- * only happen in safe places.
+- */
+-DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
++static bool vc_raw_handle_exception(struct pt_regs *regs, unsigned long error_code)
+ {
+-	irqentry_state_t irq_state;
+ 	struct ghcb_state state;
+ 	struct es_em_ctxt ctxt;
+ 	enum es_result result;
+ 	struct ghcb *ghcb;
++	bool ret = true;
+ 
+-	/*
+-	 * Handle #DB before calling into !noinstr code to avoid recursive #DB.
+-	 */
+-	if (error_code == SVM_EXIT_EXCP_BASE + X86_TRAP_DB) {
+-		vc_handle_trap_db(regs);
+-		return;
+-	}
+-
+-	irq_state = irqentry_nmi_enter(regs);
+-	lockdep_assert_irqs_disabled();
+-	instrumentation_begin();
+-
+-	/*
+-	 * This is invoked through an interrupt gate, so IRQs are disabled. The
+-	 * code below might walk page-tables for user or kernel addresses, so
+-	 * keep the IRQs disabled to protect us against concurrent TLB flushes.
+-	 */
+-
+-	ghcb = sev_es_get_ghcb(&state);
++	ghcb = __sev_get_ghcb(&state);
+ 
+ 	vc_ghcb_invalidate(ghcb);
+ 	result = vc_init_em_ctxt(&ctxt, regs, error_code);
+@@ -1361,7 +1335,7 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
+ 	if (result == ES_OK)
+ 		result = vc_handle_exitcode(&ctxt, ghcb, error_code);
+ 
+-	sev_es_put_ghcb(&state);
++	__sev_put_ghcb(&state);
+ 
+ 	/* Done - now check the result */
+ 	switch (result) {
+@@ -1371,15 +1345,18 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
+ 	case ES_UNSUPPORTED:
+ 		pr_err_ratelimited("Unsupported exit-code 0x%02lx in early #VC exception (IP: 0x%lx)\n",
+ 				   error_code, regs->ip);
+-		goto fail;
++		ret = false;
++		break;
+ 	case ES_VMM_ERROR:
+ 		pr_err_ratelimited("Failure in communication with VMM (exit-code 0x%02lx IP: 0x%lx)\n",
+ 				   error_code, regs->ip);
+-		goto fail;
++		ret = false;
++		break;
+ 	case ES_DECODE_FAILED:
+ 		pr_err_ratelimited("Failed to decode instruction (exit-code 0x%02lx IP: 0x%lx)\n",
+ 				   error_code, regs->ip);
+-		goto fail;
++		ret = false;
++		break;
+ 	case ES_EXCEPTION:
+ 		vc_forward_exception(&ctxt);
+ 		break;
+@@ -1395,24 +1372,52 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
+ 		BUG();
+ 	}
+ 
+-out:
+-	instrumentation_end();
+-	irqentry_nmi_exit(regs, irq_state);
++	return ret;
++}
+ 
+-	return;
++static __always_inline bool vc_is_db(unsigned long error_code)
++{
++	return error_code == SVM_EXIT_EXCP_BASE + X86_TRAP_DB;
++}
+ 
+-fail:
+-	if (user_mode(regs)) {
+-		/*
+-		 * Do not kill the machine if user-space triggered the
+-		 * exception. Send SIGBUS instead and let user-space deal with
+-		 * it.
+-		 */
+-		force_sig_fault(SIGBUS, BUS_OBJERR, (void __user *)0);
+-	} else {
+-		pr_emerg("PANIC: Unhandled #VC exception in kernel space (result=%d)\n",
+-			 result);
++/*
++ * Runtime #VC exception handler when raised from kernel mode. Runs in NMI mode
++ * and will panic when an error happens.
++ */
++DEFINE_IDTENTRY_VC_KERNEL(exc_vmm_communication)
++{
++	irqentry_state_t irq_state;
+ 
++	/*
++	 * With the current implementation it is always possible to switch to a
++	 * safe stack because #VC exceptions only happen at known places, like
++	 * intercepted instructions or accesses to MMIO areas/IO ports. They can
++	 * also happen with code instrumentation when the hypervisor intercepts
++	 * #DB, but the critical paths are forbidden to be instrumented, so #DB
++	 * exceptions currently also only happen in safe places.
++	 *
++	 * But keep this here in case the noinstr annotations are violated due
++	 * to a bug elsewhere.
++	 */
++	if (unlikely(on_vc_fallback_stack(regs))) {
++		instrumentation_begin();
++		panic("Can't handle #VC exception from unsupported context\n");
++		instrumentation_end();
++	}
++
++	/*
++	 * Handle #DB before calling into !noinstr code to avoid recursive #DB.
++	 */
++	if (vc_is_db(error_code)) {
++		exc_debug(regs);
++		return;
++	}
++
++	irq_state = irqentry_nmi_enter(regs);
++
++	instrumentation_begin();
++
++	if (!vc_raw_handle_exception(regs, error_code)) {
+ 		/* Show some debug info */
+ 		show_regs(regs);
+ 
+@@ -1423,23 +1428,38 @@ fail:
+ 		panic("Returned from Terminate-Request to Hypervisor\n");
+ 	}
+ 
+-	goto out;
++	instrumentation_end();
++	irqentry_nmi_exit(regs, irq_state);
+ }
+ 
+-/* This handler runs on the #VC fall-back stack. It can cause further #VC exceptions */
+-DEFINE_IDTENTRY_VC_IST(exc_vmm_communication)
++/*
++ * Runtime #VC exception handler when raised from user mode. Runs in IRQ mode
++ * and will kill the current task with SIGBUS when an error happens.
++ */
++DEFINE_IDTENTRY_VC_USER(exc_vmm_communication)
+ {
++	/*
++	 * Handle #DB before calling into !noinstr code to avoid recursive #DB.
++	 */
++	if (vc_is_db(error_code)) {
++		noist_exc_debug(regs);
++		return;
++	}
++
++	irqentry_enter_from_user_mode(regs);
+ 	instrumentation_begin();
+-	panic("Can't handle #VC exception from unsupported context\n");
+-	instrumentation_end();
+-}
+ 
+-DEFINE_IDTENTRY_VC(exc_vmm_communication)
+-{
+-	if (likely(!on_vc_fallback_stack(regs)))
+-		safe_stack_exc_vmm_communication(regs, error_code);
+-	else
+-		ist_exc_vmm_communication(regs, error_code);
++	if (!vc_raw_handle_exception(regs, error_code)) {
++		/*
++		 * Do not kill the machine if user-space triggered the
++		 * exception. Send SIGBUS instead and let user-space deal with
++		 * it.
++		 */
++		force_sig_fault(SIGBUS, BUS_OBJERR, (void __user *)0);
++	}
++
++	instrumentation_end();
++	irqentry_exit_to_user_mode(regs);
+ }
+ 
+ bool __init handle_vc_boot_ghcb(struct pt_regs *regs)
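The renamed `__sev_get_ghcb()`/`__sev_put_ghcb()` pair implements a strictly two-deep resource: the per-CPU GHCB page, plus one backup that only an NMI interrupting the #VC path may claim; a third level of nesting is unrecoverable and panics. A toy model of that discipline (userspace C, invented names):

#include <stdio.h>
#include <stdlib.h>

struct ghcb_state { int used_backup; };

static int primary_active, backup_active;

static void get_ghcb(struct ghcb_state *st)
{
	if (!primary_active) {
		primary_active = 1;
		st->used_backup = 0;
	} else if (!backup_active) {
		backup_active = 1;
		st->used_backup = 1;
	} else {
		fprintf(stderr, "GHCB and backup GHCB already in use\n");
		abort();	/* panic() analogue */
	}
}

static void put_ghcb(struct ghcb_state *st)
{
	if (st->used_backup)
		backup_active = 0;
	else
		primary_active = 0;
}

int main(void)
{
	struct ghcb_state outer, nmi;

	get_ghcb(&outer);	/* #VC handler takes the primary */
	get_ghcb(&nmi);		/* NMI interrupts it: takes the backup */
	put_ghcb(&nmi);
	put_ghcb(&outer);
	return 0;
}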
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 7770245cc7fa7..ec2d64aa21631 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -236,7 +236,6 @@ static void notrace start_secondary(void *unused)
+ 	cpu_init();
+ 	rcu_cpu_starting(raw_smp_processor_id());
+ 	x86_cpuinit.early_percpu_clock_init();
+-	preempt_disable();
+ 	smp_callin();
+ 
+ 	enable_start_cpu0 = 0;
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index 57ec011921805..6eb1b097e97eb 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -1152,7 +1152,8 @@ static struct clocksource clocksource_tsc = {
+ 	.mask			= CLOCKSOURCE_MASK(64),
+ 	.flags			= CLOCK_SOURCE_IS_CONTINUOUS |
+ 				  CLOCK_SOURCE_VALID_FOR_HRES |
+-				  CLOCK_SOURCE_MUST_VERIFY,
++				  CLOCK_SOURCE_MUST_VERIFY |
++				  CLOCK_SOURCE_VERIFY_PERCPU,
+ 	.vdso_clock_mode	= VDSO_CLOCKMODE_TSC,
+ 	.enable			= tsc_cs_enable,
+ 	.resume			= tsc_resume,
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index b4da665bb8923..c42613cfb5ba6 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -202,10 +202,10 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
+ 	static_call(kvm_x86_vcpu_after_set_cpuid)(vcpu);
+ 
+ 	/*
+-	 * Except for the MMU, which needs to be reset after any vendor
+-	 * specific adjustments to the reserved GPA bits.
++	 * Except for the MMU, which needs to do its thing after any vendor-
++	 * specific adjustments to the reserved GPA bits.
+ 	 */
+-	kvm_mmu_reset_context(vcpu);
++	kvm_mmu_after_set_cpuid(vcpu);
+ }
+ 
+ static int is_efer_nx(void)
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index f00830e5202fe..fdd1eca717fd6 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -1704,7 +1704,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, u64 ingpa, u16 rep_cnt, bool
+ 	 * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
+ 	 * analyze it here, flush TLB regardless of the specified address space.
+ 	 */
+-	kvm_make_vcpus_request_mask(kvm, KVM_REQ_HV_TLB_FLUSH,
++	kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST,
+ 				    NULL, vcpu_mask, &hv_vcpu->tlb_flush);
+ 
+ ret_success:
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index a54f72c31be90..99afc6f1eed02 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -4168,7 +4168,15 @@ static inline u64 reserved_hpa_bits(void)
+ void
+ reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
+ {
+-	bool uses_nx = context->nx ||
++	/*
++	 * KVM uses NX when TDP is disabled to handle a variety of scenarios,
++	 * notably for huge SPTEs if iTLB multi-hit mitigation is enabled and
++	 * to generate correct permissions for CR0.WP=0/CR4.SMEP=1/EFER.NX=0.
++	 * The iTLB multi-hit workaround can be toggled at any time, so assume
++	 * NX can be used by any non-nested shadow MMU to avoid having to reset
++	 * MMU contexts.  Note, KVM forces EFER.NX=1 when TDP is disabled.
++	 */
++	bool uses_nx = context->nx || !tdp_enabled ||
+ 		context->mmu_role.base.smep_andnot_wp;
+ 	struct rsvd_bits_validate *shadow_zero_check;
+ 	int i;
+@@ -4851,6 +4859,18 @@ kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu)
+ 	return role.base;
+ }
+ 
++void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu)
++{
++	/*
++	 * Invalidate all MMU roles to force them to reinitialize as CPUID
++	 * information is factored into reserved bit calculations.
++	 */
++	vcpu->arch.root_mmu.mmu_role.ext.valid = 0;
++	vcpu->arch.guest_mmu.mmu_role.ext.valid = 0;
++	vcpu->arch.nested_mmu.mmu_role.ext.valid = 0;
++	kvm_mmu_reset_context(vcpu);
++}
++
+ void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)
+ {
+ 	kvm_mmu_unload(vcpu);
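kvm_mmu_after_set_cpuid() clears the `valid` bit in all three cached MMU roles so the subsequent reset cannot be short-circuited even when the recomputed role happens to compare equal to the stale one. A minimal sketch of this cache-invalidation idea (userspace, hypothetical structure):

#include <stdbool.h>
#include <stdio.h>

struct role { unsigned int cfg; bool valid; };

static struct role cached;

static void reset_context(unsigned int new_cfg)
{
	if (cached.valid && cached.cfg == new_cfg) {
		printf("role unchanged, skip rebuild\n");
		return;
	}
	cached.cfg = new_cfg;
	cached.valid = true;
	printf("rebuild for cfg %u\n", new_cfg);
}

static void after_set_cpuid(unsigned int new_cfg)
{
	cached.valid = false;	/* force a rebuild regardless of equality */
	reset_context(new_cfg);
}

int main(void)
{
	reset_context(1);
	reset_context(1);	/* skipped */
	after_set_cpuid(1);	/* rebuilt anyway */
	return 0;
}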
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index 823a5919f9fa0..52fffd68b5229 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -471,8 +471,7 @@ retry_walk:
+ 
+ error:
+ 	errcode |= write_fault | user_fault;
+-	if (fetch_fault && (mmu->nx ||
+-			    kvm_read_cr4_bits(vcpu, X86_CR4_SMEP)))
++	if (fetch_fault && (mmu->nx || mmu->mmu_role.ext.cr4_smep))
+ 		errcode |= PFERR_FETCH_MASK;
+ 
+ 	walker->fault.vector = PF_VECTOR;
+diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
+index 66d43cec0c31a..8e8e8da740a07 100644
+--- a/arch/x86/kvm/mmu/spte.c
++++ b/arch/x86/kvm/mmu/spte.c
+@@ -102,13 +102,6 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
+ 	else if (kvm_vcpu_ad_need_write_protect(vcpu))
+ 		spte |= SPTE_TDP_AD_WRPROT_ONLY_MASK;
+ 
+-	/*
+-	 * Bits 62:52 of PAE SPTEs are reserved.  WARN if said bits are set
+-	 * if PAE paging may be employed (shadow paging or any 32-bit KVM).
+-	 */
+-	WARN_ON_ONCE((!tdp_enabled || !IS_ENABLED(CONFIG_X86_64)) &&
+-		     (spte & SPTE_TDP_AD_MASK));
+-
+ 	/*
+ 	 * For the EPT case, shadow_present_mask is 0 if hardware
+ 	 * supports exec-only page table entries.  In that case,
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index 237317b1eddda..8773bd5287da8 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -912,7 +912,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, int write,
+ 					  kvm_pfn_t pfn, bool prefault)
+ {
+ 	u64 new_spte;
+-	int ret = 0;
++	int ret = RET_PF_FIXED;
+ 	int make_spte_ret = 0;
+ 
+ 	if (unlikely(is_noslot_pfn(pfn)))
+@@ -949,7 +949,11 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, int write,
+ 				       rcu_dereference(iter->sptep));
+ 	}
+ 
+-	if (!prefault)
++	/*
++	 * Increase pf_fixed for both RET_PF_EMULATE and RET_PF_FIXED to be
++	 * consistent with legacy MMU behavior.
++	 */
++	if (ret != RET_PF_SPURIOUS)
+ 		vcpu->stat.pf_fixed++;
+ 
+ 	return ret;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 6058a65a6ede6..2e63171864a74 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -1127,12 +1127,19 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3, bool ne
+ 
+ 	/*
+ 	 * Unconditionally skip the TLB flush on fast CR3 switch, all TLB
+-	 * flushes are handled by nested_vmx_transition_tlb_flush().  See
+-	 * nested_vmx_transition_mmu_sync for details on skipping the MMU sync.
++	 * flushes are handled by nested_vmx_transition_tlb_flush().
+ 	 */
+-	if (!nested_ept)
+-		kvm_mmu_new_pgd(vcpu, cr3, true,
+-				!nested_vmx_transition_mmu_sync(vcpu));
++	if (!nested_ept) {
++		kvm_mmu_new_pgd(vcpu, cr3, true, true);
++
++		/*
++		 * A TLB flush on VM-Enter/VM-Exit flushes all linear mappings
++		 * across all PCIDs, i.e. all PGDs need to be synchronized.
++		 * See nested_vmx_transition_mmu_sync() for more details.
++		 */
++		if (nested_vmx_transition_mmu_sync(vcpu))
++			kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
++	}
+ 
+ 	vcpu->arch.cr3 = cr3;
+ 	kvm_register_mark_available(vcpu, VCPU_EXREG_CR3);
+@@ -3682,7 +3689,7 @@ void nested_mark_vmcs12_pages_dirty(struct kvm_vcpu *vcpu)
+ 	}
+ }
+ 
+-static void vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
++static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 	int max_irr;
+@@ -3690,17 +3697,17 @@ static void vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
+ 	u16 status;
+ 
+ 	if (!vmx->nested.pi_desc || !vmx->nested.pi_pending)
+-		return;
++		return 0;
+ 
+ 	vmx->nested.pi_pending = false;
+ 	if (!pi_test_and_clear_on(vmx->nested.pi_desc))
+-		return;
++		return 0;
+ 
+ 	max_irr = find_last_bit((unsigned long *)vmx->nested.pi_desc->pir, 256);
+ 	if (max_irr != 256) {
+ 		vapic_page = vmx->nested.virtual_apic_map.hva;
+ 		if (!vapic_page)
+-			return;
++			return 0;
+ 
+ 		__kvm_apic_update_irr(vmx->nested.pi_desc->pir,
+ 			vapic_page, &max_irr);
+@@ -3713,6 +3720,7 @@ static void vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
+ 	}
+ 
+ 	nested_mark_vmcs12_pages_dirty(vcpu);
++	return 0;
+ }
+ 
+ static void nested_vmx_inject_exception_vmexit(struct kvm_vcpu *vcpu,
+@@ -3887,8 +3895,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
+ 	}
+ 
+ no_vmexit:
+-	vmx_complete_nested_posted_interrupt(vcpu);
+-	return 0;
++	return vmx_complete_nested_posted_interrupt(vcpu);
+ }
+ 
+ static u32 vmx_get_preemption_timer_value(struct kvm_vcpu *vcpu)
+@@ -5481,8 +5488,6 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
+ {
+ 	u32 index = kvm_rcx_read(vcpu);
+ 	u64 new_eptp;
+-	bool accessed_dirty;
+-	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
+ 
+ 	if (!nested_cpu_has_eptp_switching(vmcs12) ||
+ 	    !nested_cpu_has_ept(vmcs12))
+@@ -5491,13 +5496,10 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
+ 	if (index >= VMFUNC_EPTP_ENTRIES)
+ 		return 1;
+ 
+-
+ 	if (kvm_vcpu_read_guest_page(vcpu, vmcs12->eptp_list_address >> PAGE_SHIFT,
+ 				     &new_eptp, index * 8, 8))
+ 		return 1;
+ 
+-	accessed_dirty = !!(new_eptp & VMX_EPTP_AD_ENABLE_BIT);
+-
+ 	/*
+ 	 * If the (L2) guest does a vmfunc to the currently
+ 	 * active ept pointer, we don't have to do anything else
+@@ -5506,8 +5508,6 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
+ 		if (!nested_vmx_check_eptp(vcpu, new_eptp))
+ 			return 1;
+ 
+-		mmu->ept_ad = accessed_dirty;
+-		mmu->mmu_role.base.ad_disabled = !accessed_dirty;
+ 		vmcs12->ept_pointer = new_eptp;
+ 
+ 		kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
+@@ -5533,7 +5533,7 @@ static int handle_vmfunc(struct kvm_vcpu *vcpu)
+ 	}
+ 
+ 	vmcs12 = get_vmcs12(vcpu);
+-	if ((vmcs12->vm_function_control & (1 << function)) == 0)
++	if (!(vmcs12->vm_function_control & BIT_ULL(function)))
+ 		goto fail;
+ 
+ 	switch (function) {
+@@ -5806,6 +5806,9 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
+ 		else if (is_breakpoint(intr_info) &&
+ 			 vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP)
+ 			return true;
++		else if (is_alignment_check(intr_info) &&
++			 !vmx_guest_inject_ac(vcpu))
++			return true;
+ 		return false;
+ 	case EXIT_REASON_EXTERNAL_INTERRUPT:
+ 		return true;
+diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h
+index 1472c6c376f74..571d9ad80a59e 100644
+--- a/arch/x86/kvm/vmx/vmcs.h
++++ b/arch/x86/kvm/vmx/vmcs.h
+@@ -117,6 +117,11 @@ static inline bool is_gp_fault(u32 intr_info)
+ 	return is_exception_n(intr_info, GP_VECTOR);
+ }
+ 
++static inline bool is_alignment_check(u32 intr_info)
++{
++	return is_exception_n(intr_info, AC_VECTOR);
++}
++
+ static inline bool is_machine_check(u32 intr_info)
+ {
+ 	return is_exception_n(intr_info, MC_VECTOR);
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index c2a779b688e64..dcd4f43c23de5 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -4829,7 +4829,7 @@ static int handle_machine_check(struct kvm_vcpu *vcpu)
+  *  - Guest has #AC detection enabled in CR0
+  *  - Guest EFLAGS has AC bit set
+  */
+-static inline bool guest_inject_ac(struct kvm_vcpu *vcpu)
++bool vmx_guest_inject_ac(struct kvm_vcpu *vcpu)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
+ 		return true;
+@@ -4937,7 +4937,7 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
+ 		kvm_run->debug.arch.exception = ex_no;
+ 		break;
+ 	case AC_VECTOR:
+-		if (guest_inject_ac(vcpu)) {
++		if (vmx_guest_inject_ac(vcpu)) {
+ 			kvm_queue_exception_e(vcpu, AC_VECTOR, error_code);
+ 			return 1;
+ 		}
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 16e4e457ba23c..d91869c8c1fc2 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -387,6 +387,7 @@ void vmx_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
+ void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
+ u64 construct_eptp(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level);
+ 
++bool vmx_guest_inject_ac(struct kvm_vcpu *vcpu);
+ void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu);
+ void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
+ bool vmx_nmi_blocked(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index e0f4a46649d75..dad282fe0dac2 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -9171,7 +9171,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		}
+ 		if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
+ 			kvm_vcpu_flush_tlb_current(vcpu);
+-		if (kvm_check_request(KVM_REQ_HV_TLB_FLUSH, vcpu))
++		if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
+ 			kvm_vcpu_flush_tlb_guest(vcpu);
+ 
+ 		if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
+@@ -10454,6 +10454,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
+ 
+ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+ {
++	unsigned long old_cr0 = kvm_read_cr0(vcpu);
++
+ 	kvm_lapic_reset(vcpu, init_event);
+ 
+ 	vcpu->arch.hflags = 0;
+@@ -10522,6 +10524,17 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+ 	vcpu->arch.ia32_xss = 0;
+ 
+ 	static_call(kvm_x86_vcpu_reset)(vcpu, init_event);
++
++	/*
++	 * Reset the MMU context if paging was enabled prior to INIT (which is
++	 * implied if CR0.PG=1 as CR0 will be '0' prior to RESET).  Unlike the
++	 * standard CR0/CR4/EFER modification paths, only CR0.PG needs to be
++	 * checked because it is unconditionally cleared on INIT and all other
++	 * paging related bits are ignored if paging is disabled, i.e. CR0.WP,
++	 * CR4, and EFER changes are all irrelevant if CR0.PG was '0'.
++	 */
++	if (old_cr0 & X86_CR0_PG)
++		kvm_mmu_reset_context(vcpu);
+ }
+ 
+ void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 78804680e9231..cfe6b1e85fa61 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -14,6 +14,7 @@
+ #include <asm/nospec-branch.h>
+ #include <asm/cache.h>
+ #include <asm/apic.h>
++#include <asm/perf_event.h>
+ 
+ #include "mm_internal.h"
+ 
+@@ -404,9 +405,14 @@ static inline void cr4_update_pce_mm(struct mm_struct *mm)
+ {
+ 	if (static_branch_unlikely(&rdpmc_always_available_key) ||
+ 	    (!static_branch_unlikely(&rdpmc_never_available_key) &&
+-	     atomic_read(&mm->context.perf_rdpmc_allowed)))
++	     atomic_read(&mm->context.perf_rdpmc_allowed))) {
++		/*
++		 * Clear the existing dirty counters so that stale
++		 * values cannot leak to an RDPMC-capable task.
++		 */
++		perf_clear_dirty_counters();
+ 		cr4_set_bits_irqsoff(X86_CR4_PCE);
+-	else
++	} else
+ 		cr4_clear_bits_irqsoff(X86_CR4_PCE);
+ }
+ 
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 2a2e290fa5d83..a3d867f221531 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -1297,7 +1297,7 @@ st:			if (is_imm8(insn->off))
+ 			emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
+ 			if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
+ 				struct exception_table_entry *ex;
+-				u8 *_insn = image + proglen;
++				u8 *_insn = image + proglen + (start_of_ldx - temp);
+ 				s64 delta;
+ 
+ 				/* populate jmp_offset for JMP above */
+diff --git a/arch/xtensa/kernel/smp.c b/arch/xtensa/kernel/smp.c
+index cd85a7a2722ba..1254da07ead1f 100644
+--- a/arch/xtensa/kernel/smp.c
++++ b/arch/xtensa/kernel/smp.c
+@@ -145,7 +145,6 @@ void secondary_start_kernel(void)
+ 	cpumask_set_cpu(cpu, mm_cpumask(mm));
+ 	enter_lazy_tlb(mm, current);
+ 
+-	preempt_disable();
+ 	trace_hardirqs_off();
+ 
+ 	calibrate_delay();
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index acd1f881273e0..eccbe2aed7c3f 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2695,9 +2695,15 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	 * costly and complicated.
+ 	 */
+ 	if (unlikely(!bfqd->nonrot_with_queueing)) {
+-		if (bic->stable_merge_bfqq &&
++		/*
++		 * Also make sure that bfqq is sync, because
++		 * bic->stable_merge_bfqq may point to some queue (for
++		 * stable merging) even when bic is associated with a
++		 * sync queue while this bfqq is async.
++		 */
++		if (bfq_bfqq_sync(bfqq) && bic->stable_merge_bfqq &&
+ 		    !bfq_bfqq_just_created(bfqq) &&
+-		    time_is_after_jiffies(bfqq->split_time +
++		    time_is_before_jiffies(bfqq->split_time +
+ 					  msecs_to_jiffies(200))) {
+ 			struct bfq_queue *stable_merge_bfqq =
+ 				bic->stable_merge_bfqq;
+@@ -6129,11 +6135,13 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
+ 	 * of other queues. But a false waker will unjustly steal
+ 	 * bandwidth to its supposedly woken queue. So considering
+ 	 * also shared queues in the waking mechanism may cause more
+-	 * control troubles than throughput benefits. Then do not set
+-	 * last_completed_rq_bfqq to bfqq if bfqq is a shared queue.
++	 * control troubles than throughput benefits. Then reset
++	 * last_completed_rq_bfqq if bfqq is a shared queue.
+ 	 */
+ 	if (!bfq_bfqq_coop(bfqq))
+ 		bfqd->last_completed_rq_bfqq = bfqq;
++	else
++		bfqd->last_completed_rq_bfqq = NULL;
+ 
+ 	/*
+ 	 * If we are waiting to discover whether the request pattern
+diff --git a/block/bio.c b/block/bio.c
+index 44205dfb6b60a..1fab762e079be 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1375,8 +1375,7 @@ static inline bool bio_remaining_done(struct bio *bio)
+  *
+  *   bio_endio() can be called several times on a bio that has been chained
+  *   using bio_chain().  The ->bi_end_io() function will only be called the
+- *   last time.  At this point the BLK_TA_COMPLETE tracing event will be
+- *   generated if BIO_TRACE_COMPLETION is set.
++ *   last time.
+  **/
+ void bio_endio(struct bio *bio)
+ {
+@@ -1389,6 +1388,11 @@ again:
+ 	if (bio->bi_bdev)
+ 		rq_qos_done_bio(bio->bi_bdev->bd_disk->queue, bio);
+ 
++	if (bio->bi_bdev && bio_flagged(bio, BIO_TRACE_COMPLETION)) {
++		trace_block_bio_complete(bio->bi_bdev->bd_disk->queue, bio);
++		bio_clear_flag(bio, BIO_TRACE_COMPLETION);
++	}
++
+ 	/*
+ 	 * Need to have a real endio function for chained bios, otherwise
+ 	 * various corner cases will break (like stacking block devices that
+@@ -1402,11 +1406,6 @@ again:
+ 		goto again;
+ 	}
+ 
+-	if (bio->bi_bdev && bio_flagged(bio, BIO_TRACE_COMPLETION)) {
+-		trace_block_bio_complete(bio->bi_bdev->bd_disk->queue, bio);
+-		bio_clear_flag(bio, BIO_TRACE_COMPLETION);
+-	}
+-
+ 	blk_throtl_bio_endio(bio);
+ 	/* release cgroup info */
+ 	bio_uninit(bio);
+diff --git a/block/blk-flush.c b/block/blk-flush.c
+index 7942ca6ed3211..1002f6c581816 100644
+--- a/block/blk-flush.c
++++ b/block/blk-flush.c
+@@ -219,8 +219,6 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
+ 	unsigned long flags = 0;
+ 	struct blk_flush_queue *fq = blk_get_flush_queue(q, flush_rq->mq_ctx);
+ 
+-	blk_account_io_flush(flush_rq);
+-
+ 	/* release the tag's ownership to the req cloned from */
+ 	spin_lock_irqsave(&fq->mq_flush_lock, flags);
+ 
+@@ -230,6 +228,7 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
+ 		return;
+ 	}
+ 
++	blk_account_io_flush(flush_rq);
+ 	/*
+ 	 * Flush request has to be marked as IDLE when it is really ended
+ 	 * because its .end_io() is called from timeout code path too for
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index 4d97fb6dd2267..bcdff1879c346 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -559,10 +559,14 @@ static inline unsigned int blk_rq_get_max_segments(struct request *rq)
+ static inline int ll_new_hw_segment(struct request *req, struct bio *bio,
+ 		unsigned int nr_phys_segs)
+ {
+-	if (req->nr_phys_segments + nr_phys_segs > blk_rq_get_max_segments(req))
++	if (blk_integrity_merge_bio(req->q, req, bio) == false)
+ 		goto no_merge;
+ 
+-	if (blk_integrity_merge_bio(req->q, req, bio) == false)
++	/* discard request merge won't add new segment */
++	if (req_op(req) == REQ_OP_DISCARD)
++		return 1;
++
++	if (req->nr_phys_segments + nr_phys_segs > blk_rq_get_max_segments(req))
+ 		goto no_merge;
+ 
+ 	/*
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index 2a37731e8244b..1671dae43030b 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -199,6 +199,20 @@ struct bt_iter_data {
+ 	bool reserved;
+ };
+ 
++static struct request *blk_mq_find_and_get_req(struct blk_mq_tags *tags,
++		unsigned int bitnr)
++{
++	struct request *rq;
++	unsigned long flags;
++
++	spin_lock_irqsave(&tags->lock, flags);
++	rq = tags->rqs[bitnr];
++	if (!rq || !refcount_inc_not_zero(&rq->ref))
++		rq = NULL;
++	spin_unlock_irqrestore(&tags->lock, flags);
++	return rq;
++}
++
+ static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
+ {
+ 	struct bt_iter_data *iter_data = data;
+@@ -206,18 +220,22 @@ static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
+ 	struct blk_mq_tags *tags = hctx->tags;
+ 	bool reserved = iter_data->reserved;
+ 	struct request *rq;
++	bool ret = true;
+ 
+ 	if (!reserved)
+ 		bitnr += tags->nr_reserved_tags;
+-	rq = tags->rqs[bitnr];
+-
+ 	/*
+ 	 * We can hit rq == NULL here, because the tagging functions
+ 	 * test and set the bit before assigning ->rqs[].
+ 	 */
+-	if (rq && rq->q == hctx->queue && rq->mq_hctx == hctx)
+-		return iter_data->fn(hctx, rq, iter_data->data, reserved);
+-	return true;
++	rq = blk_mq_find_and_get_req(tags, bitnr);
++	if (!rq)
++		return true;
++
++	if (rq->q == hctx->queue && rq->mq_hctx == hctx)
++		ret = iter_data->fn(hctx, rq, iter_data->data, reserved);
++	blk_mq_put_rq_ref(rq);
++	return ret;
+ }
+ 
+ /**
+@@ -264,6 +282,8 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
+ 	struct blk_mq_tags *tags = iter_data->tags;
+ 	bool reserved = iter_data->flags & BT_TAG_ITER_RESERVED;
+ 	struct request *rq;
++	bool ret = true;
++	bool iter_static_rqs = !!(iter_data->flags & BT_TAG_ITER_STATIC_RQS);
+ 
+ 	if (!reserved)
+ 		bitnr += tags->nr_reserved_tags;
+@@ -272,16 +292,19 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
+ 	 * We can hit rq == NULL here, because the tagging functions
+ 	 * test and set the bit before assigning ->rqs[].
+ 	 */
+-	if (iter_data->flags & BT_TAG_ITER_STATIC_RQS)
++	if (iter_static_rqs)
+ 		rq = tags->static_rqs[bitnr];
+ 	else
+-		rq = tags->rqs[bitnr];
++		rq = blk_mq_find_and_get_req(tags, bitnr);
+ 	if (!rq)
+ 		return true;
+-	if ((iter_data->flags & BT_TAG_ITER_STARTED) &&
+-	    !blk_mq_request_started(rq))
+-		return true;
+-	return iter_data->fn(rq, iter_data->data, reserved);
++
++	if (!(iter_data->flags & BT_TAG_ITER_STARTED) ||
++	    blk_mq_request_started(rq))
++		ret = iter_data->fn(rq, iter_data->data, reserved);
++	if (!iter_static_rqs)
++		blk_mq_put_rq_ref(rq);
++	return ret;
+ }
+ 
+ /**
+@@ -348,6 +371,9 @@ void blk_mq_all_tag_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
+  *		indicates whether or not @rq is a reserved request. Return
+  *		true to continue iterating tags, false to stop.
+  * @priv:	Will be passed as second argument to @fn.
++ *
++ * We grab one request reference before calling @fn and release it after
++ * @fn returns.
+  */
+ void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
+ 		busy_tag_iter_fn *fn, void *priv)
+@@ -516,6 +542,7 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
+ 
+ 	tags->nr_tags = total_tags;
+ 	tags->nr_reserved_tags = reserved_tags;
++	spin_lock_init(&tags->lock);
+ 
+ 	if (blk_mq_is_sbitmap_shared(flags))
+ 		return tags;
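blk_mq_find_and_get_req() is a "lookup plus conditional refcount bump under a lock" pattern: the new tags->lock orders the lookup against blk_mq_clear_rq_mapping() (added in the blk-mq.c hunk below), and refcount_inc_not_zero() refuses to revive a request whose last reference is already gone. A userspace analogue using pthreads and stdatomic (illustrative only):

#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct req { atomic_int ref; };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct req *slot;	/* tags->rqs[bitnr] analogue */

static struct req *find_and_get(void)
{
	struct req *rq;

	pthread_mutex_lock(&lock);
	rq = slot;
	if (rq) {
		int old = atomic_load(&rq->ref);

		/* refcount_inc_not_zero(): bump unless already zero */
		while (old != 0 &&
		       !atomic_compare_exchange_weak(&rq->ref, &old, old + 1))
			;
		if (old == 0)
			rq = NULL;
	}
	pthread_mutex_unlock(&lock);
	return rq;
}

static void put(struct req *rq)
{
	if (atomic_fetch_sub(&rq->ref, 1) == 1)
		free(rq);	/* last reference dropped */
}

int main(void)
{
	slot = calloc(1, sizeof(*slot));
	atomic_store(&slot->ref, 1);

	struct req *rq = find_and_get();
	if (rq)
		put(rq);	/* drop the iterator's reference */
	put(slot);		/* drop the original reference */
	return 0;
}

The freeing side clears the slot under the same lock before releasing the memory, which is what makes the try-get safe.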
+diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
+index 7d3e6b333a4a9..f887988e5ef60 100644
+--- a/block/blk-mq-tag.h
++++ b/block/blk-mq-tag.h
+@@ -20,6 +20,12 @@ struct blk_mq_tags {
+ 	struct request **rqs;
+ 	struct request **static_rqs;
+ 	struct list_head page_list;
++
++	/*
++	 * used to clear request references in rqs[] before freeing the
++	 * request pool
++	 */
++	spinlock_t lock;
+ };
+ 
+ extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index c86c01bfecdbe..c732aa581124f 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -909,6 +909,14 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
+ 	return false;
+ }
+ 
++void blk_mq_put_rq_ref(struct request *rq)
++{
++	if (is_flush_rq(rq, rq->mq_hctx))
++		rq->end_io(rq, 0);
++	else if (refcount_dec_and_test(&rq->ref))
++		__blk_mq_free_request(rq);
++}
++
+ static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
+ 		struct request *rq, void *priv, bool reserved)
+ {
+@@ -942,11 +950,7 @@ static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
+ 	if (blk_mq_req_expired(rq, next))
+ 		blk_mq_rq_timed_out(rq, reserved);
+ 
+-	if (is_flush_rq(rq, hctx))
+-		rq->end_io(rq, 0);
+-	else if (refcount_dec_and_test(&rq->ref))
+-		__blk_mq_free_request(rq);
+-
++	blk_mq_put_rq_ref(rq);
+ 	return true;
+ }
+ 
+@@ -1220,9 +1224,6 @@ static void blk_mq_update_dispatch_busy(struct blk_mq_hw_ctx *hctx, bool busy)
+ {
+ 	unsigned int ewma;
+ 
+-	if (hctx->queue->elevator)
+-		return;
+-
+ 	ewma = hctx->dispatch_busy;
+ 
+ 	if (!ewma && !busy)
+@@ -2303,6 +2304,45 @@ queue_exit:
+ 	return BLK_QC_T_NONE;
+ }
+ 
++static size_t order_to_size(unsigned int order)
++{
++	return (size_t)PAGE_SIZE << order;
++}
++
++/* called before freeing the request pool in @tags */
++static void blk_mq_clear_rq_mapping(struct blk_mq_tag_set *set,
++		struct blk_mq_tags *tags, unsigned int hctx_idx)
++{
++	struct blk_mq_tags *drv_tags = set->tags[hctx_idx];
++	struct page *page;
++	unsigned long flags;
++
++	list_for_each_entry(page, &tags->page_list, lru) {
++		unsigned long start = (unsigned long)page_address(page);
++		unsigned long end = start + order_to_size(page->private);
++		int i;
++
++		for (i = 0; i < set->queue_depth; i++) {
++			struct request *rq = drv_tags->rqs[i];
++			unsigned long rq_addr = (unsigned long)rq;
++
++			if (rq_addr >= start && rq_addr < end) {
++				WARN_ON_ONCE(refcount_read(&rq->ref) != 0);
++				cmpxchg(&drv_tags->rqs[i], rq, NULL);
++			}
++		}
++	}
++
++	/*
++	 * Wait until all pending iterations are done.
++	 *
++	 * The request references are cleared and are guaranteed to be
++	 * observed after the ->lock is released.
++	 */
++	spin_lock_irqsave(&drv_tags->lock, flags);
++	spin_unlock_irqrestore(&drv_tags->lock, flags);
++}
++
+ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
+ 		     unsigned int hctx_idx)
+ {
+@@ -2321,6 +2361,8 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
+ 		}
+ 	}
+ 
++	blk_mq_clear_rq_mapping(set, tags, hctx_idx);
++
+ 	while (!list_empty(&tags->page_list)) {
+ 		page = list_first_entry(&tags->page_list, struct page, lru);
+ 		list_del_init(&page->lru);
+@@ -2380,11 +2422,6 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
+ 	return tags;
+ }
+ 
+-static size_t order_to_size(unsigned int order)
+-{
+-	return (size_t)PAGE_SIZE << order;
+-}
+-
+ static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
+ 			       unsigned int hctx_idx, int node)
+ {
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index 9ce64bc4a6c8f..556368d2c5b69 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -47,6 +47,7 @@ void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
+ void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
+ struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx,
+ 					struct blk_mq_ctx *start);
++void blk_mq_put_rq_ref(struct request *rq);
+ 
+ /*
+  * Internal helpers for allocating/freeing the request map
+diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
+index 2bc43e94f4c40..2bcb3495e376b 100644
+--- a/block/blk-rq-qos.h
++++ b/block/blk-rq-qos.h
+@@ -7,6 +7,7 @@
+ #include <linux/blk_types.h>
+ #include <linux/atomic.h>
+ #include <linux/wait.h>
++#include <linux/blk-mq.h>
+ 
+ #include "blk-mq-debugfs.h"
+ 
+@@ -99,8 +100,21 @@ static inline void rq_wait_init(struct rq_wait *rq_wait)
+ 
+ static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
+ {
++	/*
++	 * No IO can be in-flight when adding rqos, so freeze the queue;
++	 * this is fine since we only support rq_qos for blk-mq queues.
++	 *
++	 * Reuse ->queue_lock to protect against other concurrent
++	 * rq_qos additions and deletions.
++	 */
++	blk_mq_freeze_queue(q);
++
++	spin_lock_irq(&q->queue_lock);
+ 	rqos->next = q->rq_qos;
+ 	q->rq_qos = rqos;
++	spin_unlock_irq(&q->queue_lock);
++
++	blk_mq_unfreeze_queue(q);
+ 
+ 	if (rqos->ops->debugfs_attrs)
+ 		blk_mq_debugfs_register_rqos(rqos);
+@@ -110,12 +124,22 @@ static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
+ {
+ 	struct rq_qos **cur;
+ 
++	/*
++	 * See the comment in rq_qos_add() about freezing the queue and using
++	 * ->queue_lock.
++	 */
++	blk_mq_freeze_queue(q);
++
++	spin_lock_irq(&q->queue_lock);
+ 	for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
+ 		if (*cur == rqos) {
+ 			*cur = rqos->next;
+ 			break;
+ 		}
+ 	}
++	spin_unlock_irq(&q->queue_lock);
++
++	blk_mq_unfreeze_queue(q);
+ 
+ 	blk_mq_debugfs_unregister_rqos(rqos);
+ }
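rq_qos_add() and rq_qos_del() now follow a freeze, lock, mutate, unlock, unfreeze sequence: freezing drains in-flight I/O so no request walks the rq_qos list mid-edit, and ->queue_lock serializes concurrent add/del callers against each other. A rough userspace analogue that stands in an rwlock for the freeze mechanism (illustrative only; blk-mq really freezes via a percpu usage counter):

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t q_usage = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t qos_lock = PTHREAD_MUTEX_INITIALIZER;
static const char *qos_list[4];
static int qos_n;

static void qos_add(const char *name)
{
	pthread_rwlock_wrlock(&q_usage);	/* blk_mq_freeze_queue() */
	pthread_mutex_lock(&qos_lock);		/* ->queue_lock */
	qos_list[qos_n++] = name;
	pthread_mutex_unlock(&qos_lock);
	pthread_rwlock_unlock(&q_usage);	/* blk_mq_unfreeze_queue() */
}

static void submit_io(void)
{
	pthread_rwlock_rdlock(&q_usage);	/* queue usage reference */
	for (int i = 0; i < qos_n; i++)		/* safe: adds are excluded */
		printf("qos hook: %s\n", qos_list[i]);
	pthread_rwlock_unlock(&q_usage);
}

int main(void)
{
	qos_add("wbt");
	submit_io();
	return 0;
}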
+diff --git a/block/blk-wbt.c b/block/blk-wbt.c
+index 42aed0160f86a..f5e5ac915bf7c 100644
+--- a/block/blk-wbt.c
++++ b/block/blk-wbt.c
+@@ -77,7 +77,8 @@ enum {
+ 
+ static inline bool rwb_enabled(struct rq_wb *rwb)
+ {
+-	return rwb && rwb->wb_normal != 0;
++	return rwb && rwb->enable_state != WBT_STATE_OFF_DEFAULT &&
++		      rwb->wb_normal != 0;
+ }
+ 
+ static void wb_timestamp(struct rq_wb *rwb, unsigned long *var)
+@@ -636,9 +637,13 @@ void wbt_set_write_cache(struct request_queue *q, bool write_cache_on)
+ void wbt_enable_default(struct request_queue *q)
+ {
+ 	struct rq_qos *rqos = wbt_rq_qos(q);
++
+ 	/* Throttling already enabled? */
+-	if (rqos)
++	if (rqos) {
++		if (RQWB(rqos)->enable_state == WBT_STATE_OFF_DEFAULT)
++			RQWB(rqos)->enable_state = WBT_STATE_ON_DEFAULT;
+ 		return;
++	}
+ 
+ 	/* Queue not registered? Maybe shutting down... */
+ 	if (!blk_queue_registered(q))
+@@ -702,7 +707,7 @@ void wbt_disable_default(struct request_queue *q)
+ 	rwb = RQWB(rqos);
+ 	if (rwb->enable_state == WBT_STATE_ON_DEFAULT) {
+ 		blk_stat_deactivate(rwb->cb);
+-		rwb->wb_normal = 0;
++		rwb->enable_state = WBT_STATE_OFF_DEFAULT;
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(wbt_disable_default);
+diff --git a/block/blk-wbt.h b/block/blk-wbt.h
+index 16bdc85b8df92..2eb01becde8c4 100644
+--- a/block/blk-wbt.h
++++ b/block/blk-wbt.h
+@@ -34,6 +34,7 @@ enum {
+ enum {
+ 	WBT_STATE_ON_DEFAULT	= 1,
+ 	WBT_STATE_ON_MANUAL	= 2,
++	WBT_STATE_OFF_DEFAULT
+ };
+ 
+ struct rq_wb {
+diff --git a/crypto/ecdh.c b/crypto/ecdh.c
+index 04a427b8c9564..e2c4808590244 100644
+--- a/crypto/ecdh.c
++++ b/crypto/ecdh.c
+@@ -179,10 +179,20 @@ static int ecdh_init(void)
+ {
+ 	int ret;
+ 
++	/* NIST p192 will fail to register in FIPS mode */
+ 	ret = crypto_register_kpp(&ecdh_nist_p192);
+ 	ecdh_nist_p192_registered = ret == 0;
+ 
+-	return crypto_register_kpp(&ecdh_nist_p256);
++	ret = crypto_register_kpp(&ecdh_nist_p256);
++	if (ret)
++		goto nist_p256_error;
++
++	return 0;
++
++nist_p256_error:
++	if (ecdh_nist_p192_registered)
++		crypto_unregister_kpp(&ecdh_nist_p192);
++	return ret;
+ }
+ 
+ static void ecdh_exit(void)
+diff --git a/crypto/shash.c b/crypto/shash.c
+index 2e3433ad97629..0a0a50cb694f0 100644
+--- a/crypto/shash.c
++++ b/crypto/shash.c
+@@ -20,12 +20,24 @@
+ 
+ static const struct crypto_type crypto_shash_type;
+ 
+-int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
+-		    unsigned int keylen)
++static int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
++			   unsigned int keylen)
+ {
+ 	return -ENOSYS;
+ }
+-EXPORT_SYMBOL_GPL(shash_no_setkey);
++
++/*
++ * Check whether an shash algorithm has a setkey function.
++ *
++ * For CFI compatibility, this must not be an inline function.  This is because
++ * when CFI is enabled, modules won't get the same address for shash_no_setkey
++ * (if it were exported, which inlining would require) as the core kernel will.
++ */
++bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
++{
++	return alg->setkey != shash_no_setkey;
++}
++EXPORT_SYMBOL_GPL(crypto_shash_alg_has_setkey);
+ 
+ static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
+ 				  unsigned int keylen)
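The shash change is driven by CFI: an exported function that modules compare by address can resolve to different jump-table addresses in a module and in the core kernel, so `alg->setkey != shash_no_setkey` is unreliable across that boundary; the fix keeps the comparison in one translation unit where the canonical address is visible. A sketch of the pattern (userspace C, invented names):

#include <stdbool.h>
#include <stdio.h>

typedef int (*setkey_fn)(const unsigned char *key, unsigned int len);

static int no_setkey(const unsigned char *key, unsigned int len)
{
	(void)key;
	(void)len;
	return -38;	/* -ENOSYS */
}

struct alg { setkey_fn setkey; };

/* The one place allowed to know no_setkey's address. */
static bool alg_has_setkey(const struct alg *a)
{
	return a->setkey != no_setkey;
}

int main(void)
{
	struct alg a = { .setkey = no_setkey };

	printf("has setkey: %d\n", alg_has_setkey(&a));
	return 0;
}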
+diff --git a/crypto/sm2.c b/crypto/sm2.c
+index b21addc3ac06a..db8a4a265669d 100644
+--- a/crypto/sm2.c
++++ b/crypto/sm2.c
+@@ -79,10 +79,17 @@ static int sm2_ec_ctx_init(struct mpi_ec_ctx *ec)
+ 		goto free;
+ 
+ 	rc = -ENOMEM;
++
++	ec->Q = mpi_point_new(0);
++	if (!ec->Q)
++		goto free;
++
+ 	/* mpi_ec_setup_elliptic_curve */
+ 	ec->G = mpi_point_new(0);
+-	if (!ec->G)
++	if (!ec->G) {
++		mpi_point_release(ec->Q);
+ 		goto free;
++	}
+ 
+ 	mpi_set(ec->G->x, x);
+ 	mpi_set(ec->G->y, y);
+@@ -91,6 +98,7 @@ static int sm2_ec_ctx_init(struct mpi_ec_ctx *ec)
+ 	rc = -EINVAL;
+ 	ec->n = mpi_scanval(ecp->n);
+ 	if (!ec->n) {
++		mpi_point_release(ec->Q);
+ 		mpi_point_release(ec->G);
+ 		goto free;
+ 	}
+@@ -386,27 +394,15 @@ static int sm2_set_pub_key(struct crypto_akcipher *tfm,
+ 	MPI a;
+ 	int rc;
+ 
+-	ec->Q = mpi_point_new(0);
+-	if (!ec->Q)
+-		return -ENOMEM;
+-
+ 	/* include the uncompressed flag '0x04' */
+-	rc = -ENOMEM;
+ 	a = mpi_read_raw_data(key, keylen);
+ 	if (!a)
+-		goto error;
++		return -ENOMEM;
+ 
+ 	mpi_normalize(a);
+ 	rc = sm2_ecc_os2ec(ec->Q, a);
+ 	mpi_free(a);
+-	if (rc)
+-		goto error;
+-
+-	return 0;
+ 
+-error:
+-	mpi_point_release(ec->Q);
+-	ec->Q = NULL;
+ 	return rc;
+ }
+ 
+diff --git a/crypto/testmgr.c b/crypto/testmgr.c
+index 10c5b3b01ec47..26e40dba9ad29 100644
+--- a/crypto/testmgr.c
++++ b/crypto/testmgr.c
+@@ -4899,15 +4899,12 @@ static const struct alg_test_desc alg_test_descs[] = {
+ 		}
+ 	}, {
+ #endif
+-#ifndef CONFIG_CRYPTO_FIPS
+ 		.alg = "ecdh-nist-p192",
+ 		.test = alg_test_kpp,
+-		.fips_allowed = 1,
+ 		.suite = {
+ 			.kpp = __VECS(ecdh_p192_tv_template)
+ 		}
+ 	}, {
+-#endif
+ 		.alg = "ecdh-nist-p256",
+ 		.test = alg_test_kpp,
+ 		.fips_allowed = 1,
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index 34e4a3db39917..b9cf5b815532a 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -2685,7 +2685,6 @@ static const struct kpp_testvec curve25519_tv_template[] = {
+ }
+ };
+ 
+-#ifndef CONFIG_CRYPTO_FIPS
+ static const struct kpp_testvec ecdh_p192_tv_template[] = {
+ 	{
+ 	.secret =
+@@ -2719,13 +2718,12 @@ static const struct kpp_testvec ecdh_p192_tv_template[] = {
+ 	"\xf4\x57\xcc\x4f\x1f\x4e\x31\xcc"
+ 	"\xe3\x40\x60\xc8\x06\x93\xc6\x2e"
+ 	"\x99\x80\x81\x28\xaf\xc5\x51\x74",
+-	.secret_size = 32,
++	.secret_size = 30,
+ 	.b_public_size = 48,
+ 	.expected_a_public_size = 48,
+ 	.expected_ss_size = 24
+ 	}
+ };
+-#endif
+ 
+ static const struct kpp_testvec ecdh_p256_tv_template[] = {
+ 	{
+@@ -2766,7 +2764,7 @@ static const struct kpp_testvec ecdh_p256_tv_template[] = {
+ 	"\x9f\x4a\x38\xcc\xc0\x2c\x49\x2f"
+ 	"\xb1\x32\xbb\xaf\x22\x61\xda\xcb"
+ 	"\x6f\xdb\xa9\xaa\xfc\x77\x81\xf3",
+-	.secret_size = 40,
++	.secret_size = 38,
+ 	.b_public_size = 64,
+ 	.expected_a_public_size = 64,
+ 	.expected_ss_size = 32
+@@ -2804,8 +2802,8 @@ static const struct kpp_testvec ecdh_p256_tv_template[] = {
+ 	"\x37\x08\xcc\x40\x5e\x7a\xfd\x6a"
+ 	"\x6a\x02\x6e\x41\x87\x68\x38\x77"
+ 	"\xfa\xa9\x44\x43\x2d\xef\x09\xdf",
+-	.secret_size = 8,
+-	.b_secret_size = 40,
++	.secret_size = 6,
++	.b_secret_size = 38,
+ 	.b_public_size = 64,
+ 	.expected_a_public_size = 64,
+ 	.expected_ss_size = 32,
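
Every .secret_size above shrinks by exactly 2 bytes, consistent with the packed ECDH secret dropping a 16-bit curve identifier: what remains is the kpp_secret header, a 16-bit key length, and the key bytes (the empty-key case going from 8 to 6 confirms a 6-byte fixed overhead). A quick check of that arithmetic; the struct layout here is an assumption inferred from the numbers in the hunks:

    #include <stdio.h>

    /* Assumed to mirror include/crypto/kpp.h: two unsigned shorts. */
    struct kpp_secret {
        unsigned short type;
        unsigned short len;
    };

    /* New layout: [kpp_secret hdr][u16 key_size][key bytes] = 6 + key_len. */
    static unsigned int ecdh_secret_size(unsigned int key_len)
    {
        return sizeof(struct kpp_secret) + sizeof(unsigned short) + key_len;
    }

    int main(void)
    {
        printf("p192:  %u\n", ecdh_secret_size(24)); /* 30, as patched */
        printf("p256:  %u\n", ecdh_secret_size(32)); /* 38, as patched */
        printf("empty: %u\n", ecdh_secret_size(0));  /* 6,  as patched */
        return 0;
    }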
+diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
+index 700b41adf2db6..9aa82d5272720 100644
+--- a/drivers/acpi/Makefile
++++ b/drivers/acpi/Makefile
+@@ -8,6 +8,11 @@ ccflags-$(CONFIG_ACPI_DEBUG)	+= -DACPI_DEBUG_OUTPUT
+ #
+ # ACPI Boot-Time Table Parsing
+ #
++ifeq ($(CONFIG_ACPI_CUSTOM_DSDT),y)
++tables.o: $(src)/../../include/$(subst $\",,$(CONFIG_ACPI_CUSTOM_DSDT_FILE)) ;
++
++endif
++
+ obj-$(CONFIG_ACPI)		+= tables.o
+ obj-$(CONFIG_X86)		+= blacklist.o
+ 
+diff --git a/drivers/acpi/acpi_fpdt.c b/drivers/acpi/acpi_fpdt.c
+index a89a806a7a2a9..4ee2ad234e3d6 100644
+--- a/drivers/acpi/acpi_fpdt.c
++++ b/drivers/acpi/acpi_fpdt.c
+@@ -240,8 +240,10 @@ static int __init acpi_init_fpdt(void)
+ 		return 0;
+ 
+ 	fpdt_kobj = kobject_create_and_add("fpdt", acpi_kobj);
+-	if (!fpdt_kobj)
++	if (!fpdt_kobj) {
++		acpi_put_table(header);
+ 		return -ENOMEM;
++	}
+ 
+ 	while (offset < header->length) {
+ 		subtable = (void *)header + offset;
+diff --git a/drivers/acpi/acpica/nsrepair2.c b/drivers/acpi/acpica/nsrepair2.c
+index 14b71b41e8453..38e10ab976e67 100644
+--- a/drivers/acpi/acpica/nsrepair2.c
++++ b/drivers/acpi/acpica/nsrepair2.c
+@@ -379,6 +379,13 @@ acpi_ns_repair_CID(struct acpi_evaluate_info *info,
+ 
+ 			(*element_ptr)->common.reference_count =
+ 			    original_ref_count;
++
++			/*
++			 * The original_element holds a reference from the package object
++			 * that represents _HID. Since a new element was created by _HID,
++			 * remove the reference from the _CID package.
++			 */
++			acpi_ut_remove_reference(original_element);
+ 		}
+ 
+ 		element_ptr++;
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index fce7ade2aba92..0c8330ed1ffd5 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -441,28 +441,35 @@ static void ghes_kick_task_work(struct callback_head *head)
+ 	gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, node_len);
+ }
+ 
+-static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
+-				       int sev)
++static bool ghes_do_memory_failure(u64 physical_addr, int flags)
+ {
+ 	unsigned long pfn;
+-	int flags = -1;
+-	int sec_sev = ghes_severity(gdata->error_severity);
+-	struct cper_sec_mem_err *mem_err = acpi_hest_get_payload(gdata);
+ 
+ 	if (!IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
+ 		return false;
+ 
+-	if (!(mem_err->validation_bits & CPER_MEM_VALID_PA))
+-		return false;
+-
+-	pfn = mem_err->physical_addr >> PAGE_SHIFT;
++	pfn = PHYS_PFN(physical_addr);
+ 	if (!pfn_valid(pfn)) {
+ 		pr_warn_ratelimited(FW_WARN GHES_PFX
+ 		"Invalid address in generic error data: %#llx\n",
+-		mem_err->physical_addr);
++		physical_addr);
+ 		return false;
+ 	}
+ 
++	memory_failure_queue(pfn, flags);
++	return true;
++}
++
++static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
++				       int sev)
++{
++	int flags = -1;
++	int sec_sev = ghes_severity(gdata->error_severity);
++	struct cper_sec_mem_err *mem_err = acpi_hest_get_payload(gdata);
++
++	if (!(mem_err->validation_bits & CPER_MEM_VALID_PA))
++		return false;
++
+ 	/* iff following two events can be handled properly by now */
+ 	if (sec_sev == GHES_SEV_CORRECTED &&
+ 	    (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
+@@ -470,14 +477,56 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
+ 	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
+ 		flags = 0;
+ 
+-	if (flags != -1) {
+-		memory_failure_queue(pfn, flags);
+-		return true;
+-	}
++	if (flags != -1)
++		return ghes_do_memory_failure(mem_err->physical_addr, flags);
+ 
+ 	return false;
+ }
+ 
++static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev)
++{
++	struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
++	bool queued = false;
++	int sec_sev, i;
++	char *p;
++
++	log_arm_hw_error(err);
++
++	sec_sev = ghes_severity(gdata->error_severity);
++	if (sev != GHES_SEV_RECOVERABLE || sec_sev != GHES_SEV_RECOVERABLE)
++		return false;
++
++	p = (char *)(err + 1);
++	for (i = 0; i < err->err_info_num; i++) {
++		struct cper_arm_err_info *err_info = (struct cper_arm_err_info *)p;
++		bool is_cache = (err_info->type == CPER_ARM_CACHE_ERROR);
++		bool has_pa = (err_info->validation_bits & CPER_ARM_INFO_VALID_PHYSICAL_ADDR);
++		const char *error_type = "unknown error";
++
++		/*
++		 * The field (err_info->error_info & BIT(26)) is fixed to set to
++		 * 1 in some old firmware of HiSilicon Kunpeng920. We assume that
++		 * firmware won't mix corrected errors in an uncorrected section,
++		 * and don't filter out 'corrected' error here.
++		 */
++		if (is_cache && has_pa) {
++			queued = ghes_do_memory_failure(err_info->physical_fault_addr, 0);
++			p += err_info->length;
++			continue;
++		}
++
++		if (err_info->type < ARRAY_SIZE(cper_proc_error_type_strs))
++			error_type = cper_proc_error_type_strs[err_info->type];
++
++		pr_warn_ratelimited(FW_WARN GHES_PFX
++				    "Unhandled processor error type: %s\n",
++				    error_type);
++		p += err_info->length;
++	}
++
++	return queued;
++}
++
+ /*
+  * PCIe AER errors need to be sent to the AER driver for reporting and
+  * recovery. The GHES severities map to the following AER severities and
+@@ -605,9 +654,7 @@ static bool ghes_do_proc(struct ghes *ghes,
+ 			ghes_handle_aer(gdata);
+ 		}
+ 		else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
+-			struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
+-
+-			log_arm_hw_error(err);
++			queued = ghes_handle_arm_hw_error(gdata, sev);
+ 		} else {
+ 			void *err = acpi_hest_get_payload(gdata);
+ 
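
The new ghes_handle_arm_hw_error() walks a packed run of variable-length cper_arm_err_info records, advancing by each record's own length field and queueing memory-failure handling for cache errors that carry a valid physical address. A self-contained sketch of that record walk, with a simplified record layout and hypothetical names:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-in for struct cper_arm_err_info. */
    struct err_info {
        uint8_t  type;        /* 0 == cache error in this sketch */
        uint8_t  length;      /* total size of this record */
        uint64_t phys_addr;   /* 0 == no valid physical address */
    };

    static bool walk_records(const char *p, int count)
    {
        bool queued = false;
        int i;

        for (i = 0; i < count; i++) {
            struct err_info info;

            memcpy(&info, p, sizeof(info));   /* unaligned-safe read */
            if (info.type == 0 && info.phys_addr) {
                printf("queue memory_failure for %#llx\n",
                       (unsigned long long)info.phys_addr);
                queued = true;
            }
            p += info.length;   /* records advance by their own size */
        }
        return queued;
    }

    int main(void)
    {
        struct err_info a = { 0, sizeof(a), 0x1000 };  /* cache, has PA */
        struct err_info b = { 1, sizeof(b), 0 };       /* something else */
        char buf[sizeof(a) + sizeof(b)];

        memcpy(buf, &a, sizeof(a));
        memcpy(buf + sizeof(a), &b, sizeof(b));
        return walk_records(buf, 2) ? 0 : 1;
    }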
+diff --git a/drivers/acpi/bgrt.c b/drivers/acpi/bgrt.c
+index 19bb7f870204c..e0d14017706ea 100644
+--- a/drivers/acpi/bgrt.c
++++ b/drivers/acpi/bgrt.c
+@@ -15,40 +15,19 @@
+ static void *bgrt_image;
+ static struct kobject *bgrt_kobj;
+ 
+-static ssize_t version_show(struct device *dev,
+-			    struct device_attribute *attr, char *buf)
+-{
+-	return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.version);
+-}
+-static DEVICE_ATTR_RO(version);
+-
+-static ssize_t status_show(struct device *dev,
+-			   struct device_attribute *attr, char *buf)
+-{
+-	return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.status);
+-}
+-static DEVICE_ATTR_RO(status);
+-
+-static ssize_t type_show(struct device *dev,
+-			 struct device_attribute *attr, char *buf)
+-{
+-	return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.image_type);
+-}
+-static DEVICE_ATTR_RO(type);
+-
+-static ssize_t xoffset_show(struct device *dev,
+-			    struct device_attribute *attr, char *buf)
+-{
+-	return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.image_offset_x);
+-}
+-static DEVICE_ATTR_RO(xoffset);
+-
+-static ssize_t yoffset_show(struct device *dev,
+-			    struct device_attribute *attr, char *buf)
+-{
+-	return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab.image_offset_y);
+-}
+-static DEVICE_ATTR_RO(yoffset);
++#define BGRT_SHOW(_name, _member) \
++	static ssize_t _name##_show(struct kobject *kobj,			\
++				    struct kobj_attribute *attr, char *buf)	\
++	{									\
++		return snprintf(buf, PAGE_SIZE, "%d\n", bgrt_tab._member);	\
++	}									\
++	struct kobj_attribute bgrt_attr_##_name = __ATTR_RO(_name)
++
++BGRT_SHOW(version, version);
++BGRT_SHOW(status, status);
++BGRT_SHOW(type, image_type);
++BGRT_SHOW(xoffset, image_offset_x);
++BGRT_SHOW(yoffset, image_offset_y);
+ 
+ static ssize_t image_read(struct file *file, struct kobject *kobj,
+ 	       struct bin_attribute *attr, char *buf, loff_t off, size_t count)
+@@ -60,11 +39,11 @@ static ssize_t image_read(struct file *file, struct kobject *kobj,
+ static BIN_ATTR_RO(image, 0);	/* size gets filled in later */
+ 
+ static struct attribute *bgrt_attributes[] = {
+-	&dev_attr_version.attr,
+-	&dev_attr_status.attr,
+-	&dev_attr_type.attr,
+-	&dev_attr_xoffset.attr,
+-	&dev_attr_yoffset.attr,
++	&bgrt_attr_version.attr,
++	&bgrt_attr_status.attr,
++	&bgrt_attr_type.attr,
++	&bgrt_attr_xoffset.attr,
++	&bgrt_attr_yoffset.attr,
+ 	NULL,
+ };
+ 
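
The BGRT_SHOW() macro above replaces five hand-written show functions with one token-pasting generator, and switches the callbacks from device_attribute to kobj_attribute so they match what sysfs actually invokes for a bare kobject (the old callbacks were reached through a mismatched function-pointer type, which Control Flow Integrity builds reject). A userspace sketch of the macro technique:

    #include <stdio.h>

    struct bgrt_table {
        int version;
        int status;
        int image_type;
    };

    static struct bgrt_table bgrt_tab = { 1, 0, 0 };

    /* One macro stamps out a show function per field via ## pasting. */
    #define SHOW(_name, _member)                                        \
        static int _name##_show(char *buf, size_t len)                  \
        {                                                               \
            return snprintf(buf, len, "%d\n", bgrt_tab._member);        \
        }

    SHOW(version, version)
    SHOW(status, status)
    SHOW(type, image_type)

    int main(void)
    {
        char buf[16];

        version_show(buf, sizeof(buf));
        fputs(buf, stdout);
        status_show(buf, sizeof(buf));
        fputs(buf, stdout);
        type_show(buf, sizeof(buf));
        fputs(buf, stdout);
        return 0;
    }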
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index a4bd673934c0a..44b4f02e2c6d7 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -1321,6 +1321,7 @@ static int __init acpi_init(void)
+ 
+ 	result = acpi_bus_init();
+ 	if (result) {
++		kobject_put(acpi_kobj);
+ 		disable_acpi();
+ 		return result;
+ 	}
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index d260bc1f3e6e7..9d2d3b9bb8b59 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -20,6 +20,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/suspend.h>
+ 
++#include "fan.h"
+ #include "internal.h"
+ 
+ /**
+@@ -1310,10 +1311,7 @@ int acpi_dev_pm_attach(struct device *dev, bool power_on)
+ 	 * with the generic ACPI PM domain.
+ 	 */
+ 	static const struct acpi_device_id special_pm_ids[] = {
+-		{"PNP0C0B", }, /* Generic ACPI fan */
+-		{"INT3404", }, /* Fan */
+-		{"INTC1044", }, /* Fan for Tiger Lake generation */
+-		{"INTC1048", }, /* Fan for Alder Lake generation */
++		ACPI_FAN_DEVICE_IDS,
+ 		{}
+ 	};
+ 	struct acpi_device *adev = ACPI_COMPANION(dev);
+diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c
+index fa2c1c93072cf..a393e0e09381d 100644
+--- a/drivers/acpi/device_sysfs.c
++++ b/drivers/acpi/device_sysfs.c
+@@ -448,7 +448,7 @@ static ssize_t description_show(struct device *dev,
+ 		(wchar_t *)acpi_dev->pnp.str_obj->buffer.pointer,
+ 		acpi_dev->pnp.str_obj->buffer.length,
+ 		UTF16_LITTLE_ENDIAN, buf,
+-		PAGE_SIZE);
++		PAGE_SIZE - 1);
+ 
+ 	buf[result++] = '\n';
+ 
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 13565629ce0a8..87c3b4a099b94 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -183,6 +183,7 @@ static struct workqueue_struct *ec_query_wq;
+ 
+ static int EC_FLAGS_CORRECT_ECDT; /* Needs ECDT port address correction */
+ static int EC_FLAGS_IGNORE_DSDT_GPE; /* Needs ECDT GPE as correction setting */
++static int EC_FLAGS_TRUST_DSDT_GPE; /* Needs DSDT GPE as correction setting */
+ static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
+ 
+ /* --------------------------------------------------------------------------
+@@ -1593,7 +1594,8 @@ static int acpi_ec_add(struct acpi_device *device)
+ 		}
+ 
+ 		if (boot_ec && ec->command_addr == boot_ec->command_addr &&
+-		    ec->data_addr == boot_ec->data_addr) {
++		    ec->data_addr == boot_ec->data_addr &&
++		    !EC_FLAGS_TRUST_DSDT_GPE) {
+ 			/*
+ 			 * Trust PNP0C09 namespace location rather than
+ 			 * ECDT ID. But trust ECDT GPE rather than _GPE
+@@ -1816,6 +1818,18 @@ static int ec_correct_ecdt(const struct dmi_system_id *id)
+ 	return 0;
+ }
+ 
++/*
++ * Some ECDTs contain a wrong GPE setting, but share the same port addresses
++ * as the DSDT EC; don't duplicate the DSDT EC with the ECDT EC in that case.
++ * https://bugzilla.kernel.org/show_bug.cgi?id=209989
++ */
++static int ec_honor_dsdt_gpe(const struct dmi_system_id *id)
++{
++	pr_debug("Detected system needing DSDT GPE setting.\n");
++	EC_FLAGS_TRUST_DSDT_GPE = 1;
++	return 0;
++}
++
+ /*
+  * Some DSDTs contain wrong GPE setting.
+  * Asus FX502VD/VE, GL702VMK, X550VXK, X580VD
+@@ -1846,6 +1860,22 @@ static const struct dmi_system_id ec_dmi_table[] __initconst = {
+ 	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 	DMI_MATCH(DMI_PRODUCT_NAME, "GL702VMK"),}, NULL},
+ 	{
++	ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X505BA", {
++	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++	DMI_MATCH(DMI_PRODUCT_NAME, "X505BA"),}, NULL},
++	{
++	ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X505BP", {
++	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++	DMI_MATCH(DMI_PRODUCT_NAME, "X505BP"),}, NULL},
++	{
++	ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X542BA", {
++	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++	DMI_MATCH(DMI_PRODUCT_NAME, "X542BA"),}, NULL},
++	{
++	ec_honor_ecdt_gpe, "ASUSTeK COMPUTER INC. X542BP", {
++	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++	DMI_MATCH(DMI_PRODUCT_NAME, "X542BP"),}, NULL},
++	{
+ 	ec_honor_ecdt_gpe, "ASUS X550VXK", {
+ 	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 	DMI_MATCH(DMI_PRODUCT_NAME, "X550VXK"),}, NULL},
+@@ -1854,6 +1884,11 @@ static const struct dmi_system_id ec_dmi_table[] __initconst = {
+ 	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 	DMI_MATCH(DMI_PRODUCT_NAME, "X580VD"),}, NULL},
+ 	{
++	/* https://bugzilla.kernel.org/show_bug.cgi?id=209989 */
++	ec_honor_dsdt_gpe, "HP Pavilion Gaming Laptop 15-cx0xxx", {
++	DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++	DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion Gaming Laptop 15-cx0xxx"),}, NULL},
++	{
+ 	ec_clear_on_resume, "Samsung hardware", {
+ 	DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD.")}, NULL},
+ 	{},
+diff --git a/drivers/acpi/fan.c b/drivers/acpi/fan.c
+index 66c3983f0ccca..5cd0ceb50bc8a 100644
+--- a/drivers/acpi/fan.c
++++ b/drivers/acpi/fan.c
+@@ -16,6 +16,8 @@
+ #include <linux/platform_device.h>
+ #include <linux/sort.h>
+ 
++#include "fan.h"
++
+ MODULE_AUTHOR("Paul Diefenbaugh");
+ MODULE_DESCRIPTION("ACPI Fan Driver");
+ MODULE_LICENSE("GPL");
+@@ -24,10 +26,7 @@ static int acpi_fan_probe(struct platform_device *pdev);
+ static int acpi_fan_remove(struct platform_device *pdev);
+ 
+ static const struct acpi_device_id fan_device_ids[] = {
+-	{"PNP0C0B", 0},
+-	{"INT3404", 0},
+-	{"INTC1044", 0},
+-	{"INTC1048", 0},
++	ACPI_FAN_DEVICE_IDS,
+ 	{"", 0},
+ };
+ MODULE_DEVICE_TABLE(acpi, fan_device_ids);
+diff --git a/drivers/acpi/fan.h b/drivers/acpi/fan.h
+new file mode 100644
+index 0000000000000..dc9a6efa514b0
+--- /dev/null
++++ b/drivers/acpi/fan.h
+@@ -0,0 +1,13 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++/*
++ * ACPI fan device IDs are shared between the fan driver and the device power
++ * management code.
++ *
++ * Add new device IDs before the generic ACPI fan one.
++ */
++#define ACPI_FAN_DEVICE_IDS	\
++	{"INT3404", }, /* Fan */ \
++	{"INTC1044", }, /* Fan for Tiger Lake generation */ \
++	{"INTC1048", }, /* Fan for Alder Lake generation */ \
++	{"PNP0C0B", } /* Generic ACPI fan */
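
fan.h now defines the ID list once as ACPI_FAN_DEVICE_IDS and expands it into both the fan driver's match table and the device-PM special-IDs table touched earlier in this patch, so the two copies can no longer drift apart when a new fan ID is added. A compact sketch of the shared-macro technique, with hypothetical names:

    #include <stdio.h>
    #include <string.h>

    struct acpi_id { const char *id; };

    /* Single source of truth, expanded wherever an ID table is needed. */
    #define FAN_IDS \
        { "INT3404" }, { "INTC1044" }, { "INTC1048" }, { "PNP0C0B" }

    static const struct acpi_id fan_driver_ids[] = { FAN_IDS, { NULL } };
    static const struct acpi_id special_pm_ids[] = { FAN_IDS, { NULL } };

    int main(void)
    {
        const struct acpi_id *p;

        for (p = fan_driver_ids; p->id; p++)
            printf("fan driver matches %s\n", p->id);
        /* Both tables stay in lockstep by construction. */
        return strcmp(fan_driver_ids[0].id, special_pm_ids[0].id);
    }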
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 45a019619e4a5..095c8aca141eb 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -16,6 +16,7 @@
+ #include <linux/acpi.h>
+ #include <linux/dmi.h>
+ #include <linux/sched.h>       /* need_resched() */
++#include <linux/sort.h>
+ #include <linux/tick.h>
+ #include <linux/cpuidle.h>
+ #include <linux/cpu.h>
+@@ -384,10 +385,37 @@ static void acpi_processor_power_verify_c3(struct acpi_processor *pr,
+ 	return;
+ }
+ 
++static int acpi_cst_latency_cmp(const void *a, const void *b)
++{
++	const struct acpi_processor_cx *x = a, *y = b;
++
++	if (!(x->valid && y->valid))
++		return 0;
++	if (x->latency > y->latency)
++		return 1;
++	if (x->latency < y->latency)
++		return -1;
++	return 0;
++}
++static void acpi_cst_latency_swap(void *a, void *b, int n)
++{
++	struct acpi_processor_cx *x = a, *y = b;
++	u32 tmp;
++
++	if (!(x->valid && y->valid))
++		return;
++	tmp = x->latency;
++	x->latency = y->latency;
++	y->latency = tmp;
++}
++
+ static int acpi_processor_power_verify(struct acpi_processor *pr)
+ {
+ 	unsigned int i;
+ 	unsigned int working = 0;
++	unsigned int last_latency = 0;
++	unsigned int last_type = 0;
++	bool buggy_latency = false;
+ 
+ 	pr->power.timer_broadcast_on_state = INT_MAX;
+ 
+@@ -411,12 +439,24 @@ static int acpi_processor_power_verify(struct acpi_processor *pr)
+ 		}
+ 		if (!cx->valid)
+ 			continue;
++		if (cx->type >= last_type && cx->latency < last_latency)
++			buggy_latency = true;
++		last_latency = cx->latency;
++		last_type = cx->type;
+ 
+ 		lapic_timer_check_state(i, pr, cx);
+ 		tsc_check_state(cx->type);
+ 		working++;
+ 	}
+ 
++	if (buggy_latency) {
++		pr_notice("FW issue: working around C-state latencies out of order\n");
++		sort(&pr->power.states[1], max_cstate,
++		     sizeof(struct acpi_processor_cx),
++		     acpi_cst_latency_cmp,
++		     acpi_cst_latency_swap);
++	}
++
+ 	lapic_timer_propagate_broadcast(pr);
+ 
+ 	return (working);
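
The processor_idle hunk detects firmware that reports C-state latencies out of order and sorts the state table with a comparator and swap callback that both skip invalid entries; note the swap deliberately exchanges only the latency field, since that is all the workaround needs to make monotonic. A userspace stand-in where a simple bubble sort takes the place of the kernel's sort(), which accepts the same cmp/swap pair:

    #include <stdbool.h>
    #include <stdio.h>

    struct cx { bool valid; unsigned int latency; unsigned int type; };

    static int latency_cmp(const struct cx *x, const struct cx *y)
    {
        if (!(x->valid && y->valid))
            return 0;                         /* never reorder invalid */
        return (x->latency > y->latency) - (x->latency < y->latency);
    }

    static void latency_swap(struct cx *x, struct cx *y)
    {
        unsigned int tmp;

        if (!(x->valid && y->valid))
            return;
        tmp = x->latency;                     /* only latency moves,  */
        x->latency = y->latency;              /* matching the hunk's  */
        y->latency = tmp;                     /* swap callback        */
    }

    /* Minimal stand-in for sort(base, num, size, cmp_func, swap_func). */
    static void sort_cx(struct cx *a, int n)
    {
        int i, j;

        for (i = 0; i < n - 1; i++)
            for (j = 0; j < n - 1 - i; j++)
                if (latency_cmp(&a[j], &a[j + 1]) > 0)
                    latency_swap(&a[j], &a[j + 1]);
    }

    int main(void)
    {
        struct cx states[] = {
            { true, 300, 3 }, { true, 10, 1 }, { true, 80, 2 },
        };
        int i;

        sort_cx(states, 3);
        for (i = 0; i < 3; i++)               /* latencies now 10, 80, 300 */
            printf("latency=%u (type %u)\n", states[i].latency, states[i].type);
        return 0;
    }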
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index ee78a210c6068..dc01fb550b28d 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -423,6 +423,13 @@ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
+ 	}
+ }
+ 
++static bool irq_is_legacy(struct acpi_resource_irq *irq)
++{
++	return irq->triggering == ACPI_EDGE_SENSITIVE &&
++		irq->polarity == ACPI_ACTIVE_HIGH &&
++		irq->shareable == ACPI_EXCLUSIVE;
++}
++
+ /**
+  * acpi_dev_resource_interrupt - Extract ACPI interrupt resource information.
+  * @ares: Input ACPI resource object.
+@@ -461,7 +468,7 @@ bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
+ 		}
+ 		acpi_dev_get_irqresource(res, irq->interrupts[index],
+ 					 irq->triggering, irq->polarity,
+-					 irq->shareable, true);
++					 irq->shareable, irq_is_legacy(irq));
+ 		break;
+ 	case ACPI_RESOURCE_TYPE_EXTENDED_IRQ:
+ 		ext_irq = &ares->data.extended_irq;
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index e10d38ac7cf28..438df8da6d122 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -1671,8 +1671,20 @@ void acpi_init_device_object(struct acpi_device *device, acpi_handle handle,
+ 	device_initialize(&device->dev);
+ 	dev_set_uevent_suppress(&device->dev, true);
+ 	acpi_init_coherency(device);
+-	/* Assume there are unmet deps to start with. */
+-	device->dep_unmet = 1;
++}
++
++static void acpi_scan_dep_init(struct acpi_device *adev)
++{
++	struct acpi_dep_data *dep;
++
++	mutex_lock(&acpi_dep_list_lock);
++
++	list_for_each_entry(dep, &acpi_dep_list, node) {
++		if (dep->consumer == adev->handle)
++			adev->dep_unmet++;
++	}
++
++	mutex_unlock(&acpi_dep_list_lock);
+ }
+ 
+ void acpi_device_add_finalize(struct acpi_device *device)
+@@ -1688,7 +1700,7 @@ static void acpi_scan_init_status(struct acpi_device *adev)
+ }
+ 
+ static int acpi_add_single_object(struct acpi_device **child,
+-				  acpi_handle handle, int type)
++				  acpi_handle handle, int type, bool dep_init)
+ {
+ 	struct acpi_device *device;
+ 	int result;
+@@ -1703,8 +1715,12 @@ static int acpi_add_single_object(struct acpi_device **child,
+ 	 * acpi_bus_get_status() and use its quirk handling.  Note that
+ 	 * this must be done before the get power-/wakeup_dev-flags calls.
+ 	 */
+-	if (type == ACPI_BUS_TYPE_DEVICE || type == ACPI_BUS_TYPE_PROCESSOR)
++	if (type == ACPI_BUS_TYPE_DEVICE || type == ACPI_BUS_TYPE_PROCESSOR) {
++		if (dep_init)
++			acpi_scan_dep_init(device);
++
+ 		acpi_scan_init_status(device);
++	}
+ 
+ 	acpi_bus_get_power_flags(device);
+ 	acpi_bus_get_wakeup_device_flags(device);
+@@ -1886,22 +1902,6 @@ static u32 acpi_scan_check_dep(acpi_handle handle, bool check_dep)
+ 	return count;
+ }
+ 
+-static void acpi_scan_dep_init(struct acpi_device *adev)
+-{
+-	struct acpi_dep_data *dep;
+-
+-	adev->dep_unmet = 0;
+-
+-	mutex_lock(&acpi_dep_list_lock);
+-
+-	list_for_each_entry(dep, &acpi_dep_list, node) {
+-		if (dep->consumer == adev->handle)
+-			adev->dep_unmet++;
+-	}
+-
+-	mutex_unlock(&acpi_dep_list_lock);
+-}
+-
+ static bool acpi_bus_scan_second_pass;
+ 
+ static acpi_status acpi_bus_check_add(acpi_handle handle, bool check_dep,
+@@ -1949,19 +1949,15 @@ static acpi_status acpi_bus_check_add(acpi_handle handle, bool check_dep,
+ 		return AE_OK;
+ 	}
+ 
+-	acpi_add_single_object(&device, handle, type);
+-	if (!device)
+-		return AE_CTRL_DEPTH;
+-
+-	acpi_scan_init_hotplug(device);
+ 	/*
+ 	 * If check_dep is true at this point, the device has no dependencies,
+ 	 * or the creation of the device object would have been postponed above.
+ 	 */
+-	if (check_dep)
+-		device->dep_unmet = 0;
+-	else
+-		acpi_scan_dep_init(device);
++	acpi_add_single_object(&device, handle, type, !check_dep);
++	if (!device)
++		return AE_CTRL_DEPTH;
++
++	acpi_scan_init_hotplug(device);
+ 
+ out:
+ 	if (!*adev_p)
+@@ -2223,7 +2219,7 @@ int acpi_bus_register_early_device(int type)
+ 	struct acpi_device *device = NULL;
+ 	int result;
+ 
+-	result = acpi_add_single_object(&device, NULL, type);
++	result = acpi_add_single_object(&device, NULL, type, false);
+ 	if (result)
+ 		return result;
+ 
+@@ -2243,7 +2239,7 @@ static int acpi_bus_scan_fixed(void)
+ 		struct acpi_device *device = NULL;
+ 
+ 		result = acpi_add_single_object(&device, NULL,
+-						ACPI_BUS_TYPE_POWER_BUTTON);
++						ACPI_BUS_TYPE_POWER_BUTTON, false);
+ 		if (result)
+ 			return result;
+ 
+@@ -2259,7 +2255,7 @@ static int acpi_bus_scan_fixed(void)
+ 		struct acpi_device *device = NULL;
+ 
+ 		result = acpi_add_single_object(&device, NULL,
+-						ACPI_BUS_TYPE_SLEEP_BUTTON);
++						ACPI_BUS_TYPE_SLEEP_BUTTON, false);
+ 		if (result)
+ 			return result;
+ 
+diff --git a/drivers/acpi/x86/s2idle.c b/drivers/acpi/x86/s2idle.c
+index 2b69536cdccba..2d7ddb8a8cb65 100644
+--- a/drivers/acpi/x86/s2idle.c
++++ b/drivers/acpi/x86/s2idle.c
+@@ -42,6 +42,8 @@ static const struct acpi_device_id lps0_device_ids[] = {
+ 
+ /* AMD */
+ #define ACPI_LPS0_DSM_UUID_AMD      "e3f32452-febc-43ce-9039-932122d37721"
++#define ACPI_LPS0_ENTRY_AMD         2
++#define ACPI_LPS0_EXIT_AMD          3
+ #define ACPI_LPS0_SCREEN_OFF_AMD    4
+ #define ACPI_LPS0_SCREEN_ON_AMD     5
+ 
+@@ -408,6 +410,7 @@ int acpi_s2idle_prepare_late(void)
+ 
+ 	if (acpi_s2idle_vendor_amd()) {
+ 		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF_AMD);
++		acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY_AMD);
+ 	} else {
+ 		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF);
+ 		acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY);
+@@ -422,6 +425,7 @@ void acpi_s2idle_restore_early(void)
+ 		return;
+ 
+ 	if (acpi_s2idle_vendor_amd()) {
++		acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT_AMD);
+ 		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON_AMD);
+ 	} else {
+ 		acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT);
+diff --git a/drivers/ata/pata_ep93xx.c b/drivers/ata/pata_ep93xx.c
+index badab67088935..46208ececbb6a 100644
+--- a/drivers/ata/pata_ep93xx.c
++++ b/drivers/ata/pata_ep93xx.c
+@@ -928,7 +928,7 @@ static int ep93xx_pata_probe(struct platform_device *pdev)
+ 	/* INT[3] (IRQ_EP93XX_EXT3) line connected as pull down */
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0) {
+-		err = -ENXIO;
++		err = irq;
+ 		goto err_rel_gpio;
+ 	}
+ 
+diff --git a/drivers/ata/pata_octeon_cf.c b/drivers/ata/pata_octeon_cf.c
+index bd87476ab4813..b5a3f710d76de 100644
+--- a/drivers/ata/pata_octeon_cf.c
++++ b/drivers/ata/pata_octeon_cf.c
+@@ -898,10 +898,11 @@ static int octeon_cf_probe(struct platform_device *pdev)
+ 					return -EINVAL;
+ 				}
+ 
+-				irq_handler = octeon_cf_interrupt;
+ 				i = platform_get_irq(dma_dev, 0);
+-				if (i > 0)
++				if (i > 0) {
+ 					irq = i;
++					irq_handler = octeon_cf_interrupt;
++				}
+ 			}
+ 			of_node_put(dma_node);
+ 		}
+diff --git a/drivers/ata/pata_rb532_cf.c b/drivers/ata/pata_rb532_cf.c
+index 479c4b29b8562..303f8c375b3af 100644
+--- a/drivers/ata/pata_rb532_cf.c
++++ b/drivers/ata/pata_rb532_cf.c
+@@ -115,10 +115,12 @@ static int rb532_pata_driver_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0) {
++	if (irq < 0) {
+ 		dev_err(&pdev->dev, "no IRQ resource found\n");
+-		return -ENOENT;
++		return irq;
+ 	}
++	if (!irq)
++		return -EINVAL;
+ 
+ 	gpiod = devm_gpiod_get(&pdev->dev, NULL, GPIOD_IN);
+ 	if (IS_ERR(gpiod)) {
+diff --git a/drivers/ata/sata_highbank.c b/drivers/ata/sata_highbank.c
+index 64b2ef15ec191..8440203e835ed 100644
+--- a/drivers/ata/sata_highbank.c
++++ b/drivers/ata/sata_highbank.c
+@@ -469,10 +469,12 @@ static int ahci_highbank_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0) {
++	if (irq < 0) {
+ 		dev_err(dev, "no irq\n");
+-		return -EINVAL;
++		return irq;
+ 	}
++	if (!irq)
++		return -EINVAL;
+ 
+ 	hpriv = devm_kzalloc(dev, sizeof(*hpriv), GFP_KERNEL);
+ 	if (!hpriv) {
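
The four pata/sata hunks above converge on one platform_get_irq() idiom: propagate the provider's own error code (so -EPROBE_DEFER and friends survive) instead of replacing it, and reject IRQ 0 explicitly, since 0 is not a valid Linux IRQ number. A sketch of the pattern with a stubbed platform_get_irq():

    #include <errno.h>
    #include <stdio.h>

    /* Stub: the real platform_get_irq() returns <0 or a positive IRQ. */
    static int platform_get_irq_stub(int fake)
    {
        return fake;
    }

    static int probe(int fake_irq)
    {
        int irq = platform_get_irq_stub(fake_irq);

        if (irq < 0)
            return irq;        /* pass the provider's error through */
        if (!irq)
            return -EINVAL;    /* 0 is not a valid IRQ */

        printf("using irq %d\n", irq);
        return 0;
    }

    int main(void)
    {
        probe(17);                                  /* ok */
        printf("probe(0)  -> %d\n", probe(0));      /* -EINVAL */
        printf("probe(-6) -> %d\n", probe(-ENXIO)); /* error preserved */
        return 0;
    }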
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 76e12f3482a91..8271df1251535 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1154,6 +1154,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
+ 	blk_queue_physical_block_size(lo->lo_queue, bsize);
+ 	blk_queue_io_min(lo->lo_queue, bsize);
+ 
++	loop_config_discard(lo);
+ 	loop_update_rotational(lo);
+ 	loop_update_dio(lo);
+ 	loop_sysfs_init(lo);
+diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
+index 25114f0d13199..bd71dfc9c9748 100644
+--- a/drivers/bluetooth/btqca.c
++++ b/drivers/bluetooth/btqca.c
+@@ -183,7 +183,7 @@ int qca_send_pre_shutdown_cmd(struct hci_dev *hdev)
+ EXPORT_SYMBOL_GPL(qca_send_pre_shutdown_cmd);
+ 
+ static void qca_tlv_check_data(struct qca_fw_config *config,
+-		const struct firmware *fw, enum qca_btsoc_type soc_type)
++		u8 *fw_data, enum qca_btsoc_type soc_type)
+ {
+ 	const u8 *data;
+ 	u32 type_len;
+@@ -194,7 +194,7 @@ static void qca_tlv_check_data(struct qca_fw_config *config,
+ 	struct tlv_type_nvm *tlv_nvm;
+ 	uint8_t nvm_baud_rate = config->user_baud_rate;
+ 
+-	tlv = (struct tlv_type_hdr *)fw->data;
++	tlv = (struct tlv_type_hdr *)fw_data;
+ 
+ 	type_len = le32_to_cpu(tlv->type_len);
+ 	length = (type_len >> 8) & 0x00ffffff;
+@@ -390,8 +390,9 @@ static int qca_download_firmware(struct hci_dev *hdev,
+ 				 enum qca_btsoc_type soc_type)
+ {
+ 	const struct firmware *fw;
++	u8 *data;
+ 	const u8 *segment;
+-	int ret, remain, i = 0;
++	int ret, size, remain, i = 0;
+ 
+ 	bt_dev_info(hdev, "QCA Downloading %s", config->fwname);
+ 
+@@ -402,10 +403,22 @@ static int qca_download_firmware(struct hci_dev *hdev,
+ 		return ret;
+ 	}
+ 
+-	qca_tlv_check_data(config, fw, soc_type);
++	size = fw->size;
++	data = vmalloc(fw->size);
++	if (!data) {
++		bt_dev_err(hdev, "QCA Failed to allocate memory for file: %s",
++			   config->fwname);
++		release_firmware(fw);
++		return -ENOMEM;
++	}
++
++	memcpy(data, fw->data, size);
++	release_firmware(fw);
++
++	qca_tlv_check_data(config, data, soc_type);
+ 
+-	segment = fw->data;
+-	remain = fw->size;
++	segment = data;
++	remain = size;
+ 	while (remain > 0) {
+ 		int segsize = min(MAX_SIZE_PER_TLV_SEGMENT, remain);
+ 
+@@ -435,7 +448,7 @@ static int qca_download_firmware(struct hci_dev *hdev,
+ 		ret = qca_inject_cmd_complete_event(hdev);
+ 
+ out:
+-	release_firmware(fw);
++	vfree(data);
+ 
+ 	return ret;
+ }
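
The btqca change is needed because qca_tlv_check_data() patches the NVM baud rate in place, and the firmware loader hands out a const, read-only buffer; the fix copies the blob into a private writable buffer and releases the firmware early. A userspace sketch of the copy-then-modify pattern, with malloc()/free() standing in for vmalloc()/vfree():

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for struct firmware: the loader's data is const. */
    struct firmware {
        const unsigned char *data;
        size_t size;
    };

    static int download(const struct firmware *fw)
    {
        unsigned char *data = malloc(fw->size);

        if (!data)
            return -ENOMEM;
        memcpy(data, fw->data, fw->size);
        /* release_firmware(fw) would go here: the copy is now ours */

        data[0] = 0x42;        /* in-place tweak is safe on the copy */
        printf("patched %zu bytes\n", fw->size);

        free(data);
        return 0;
    }

    int main(void)
    {
        static const unsigned char blob[] = { 1, 2, 3, 4 };
        struct firmware fw = { blob, sizeof(blob) };

        return download(&fw);
    }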
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 0a0056912d51e..dc6551d65912f 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -1835,8 +1835,6 @@ static void qca_power_shutdown(struct hci_uart *hu)
+ 	unsigned long flags;
+ 	enum qca_btsoc_type soc_type = qca_soc_type(hu);
+ 
+-	qcadev = serdev_device_get_drvdata(hu->serdev);
+-
+ 	/* From this point we go into power off state. But serial port is
+ 	 * still open, stop queueing the IBS data and flush all the buffered
+ 	 * data in skb's.
+@@ -1852,6 +1850,8 @@ static void qca_power_shutdown(struct hci_uart *hu)
+ 	if (!hu->serdev)
+ 		return;
+ 
++	qcadev = serdev_device_get_drvdata(hu->serdev);
++
+ 	if (qca_is_wcn399x(soc_type)) {
+ 		host_set_baudrate(hu, 2400);
+ 		qca_send_power_pulse(hu, false);
+diff --git a/drivers/bluetooth/virtio_bt.c b/drivers/bluetooth/virtio_bt.c
+index c804db7e90f8f..57908ce4fae85 100644
+--- a/drivers/bluetooth/virtio_bt.c
++++ b/drivers/bluetooth/virtio_bt.c
+@@ -34,6 +34,9 @@ static int virtbt_add_inbuf(struct virtio_bluetooth *vbt)
+ 	int err;
+ 
+ 	skb = alloc_skb(1000, GFP_KERNEL);
++	if (!skb)
++		return -ENOMEM;
++
+ 	sg_init_one(sg, skb->data, 1000);
+ 
+ 	err = virtqueue_add_inbuf(vq, sg, 1, skb, GFP_KERNEL);
+diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
+index e2e59a341fef6..bbf6cd04861eb 100644
+--- a/drivers/bus/mhi/core/pm.c
++++ b/drivers/bus/mhi/core/pm.c
+@@ -465,23 +465,15 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
+ 
+ 	/* Trigger MHI RESET so that the device will not access host memory */
+ 	if (!MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state)) {
+-		u32 in_reset = -1;
+-		unsigned long timeout = msecs_to_jiffies(mhi_cntrl->timeout_ms);
+-
+ 		dev_dbg(dev, "Triggering MHI Reset in device\n");
+ 		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
+ 
+ 		/* Wait for the reset bit to be cleared by the device */
+-		ret = wait_event_timeout(mhi_cntrl->state_event,
+-					 mhi_read_reg_field(mhi_cntrl,
+-							    mhi_cntrl->regs,
+-							    MHICTRL,
+-							    MHICTRL_RESET_MASK,
+-							    MHICTRL_RESET_SHIFT,
+-							    &in_reset) ||
+-					!in_reset, timeout);
+-		if (!ret || in_reset)
+-			dev_err(dev, "Device failed to exit MHI Reset state\n");
++		ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
++				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
++				 25000);
++		if (ret)
++			dev_err(dev, "Device failed to clear MHI Reset\n");
+ 
+ 		/*
+ 		 * Device will clear BHI_INTVEC as a part of RESET processing,
+@@ -934,6 +926,7 @@ int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
+ 
+ 	ret = wait_event_timeout(mhi_cntrl->state_event,
+ 				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
++				 mhi_cntrl->dev_state == MHI_STATE_M2 ||
+ 				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+ 				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+ 
+diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/pci_generic.c
+index b3357a8a2fdbc..ca3bc40427f85 100644
+--- a/drivers/bus/mhi/pci_generic.c
++++ b/drivers/bus/mhi/pci_generic.c
+@@ -665,7 +665,7 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	err = mhi_register_controller(mhi_cntrl, mhi_cntrl_config);
+ 	if (err)
+-		return err;
++		goto err_disable_reporting;
+ 
+ 	/* MHI bus does not power up the controller by default */
+ 	err = mhi_prepare_for_power_up(mhi_cntrl);
+@@ -699,6 +699,8 @@ err_unprepare:
+ 	mhi_unprepare_after_power_down(mhi_cntrl);
+ err_unregister:
+ 	mhi_unregister_controller(mhi_cntrl);
++err_disable_reporting:
++	pci_disable_pcie_error_reporting(pdev);
+ 
+ 	return err;
+ }
+@@ -721,6 +723,7 @@ static void mhi_pci_remove(struct pci_dev *pdev)
+ 		pm_runtime_get_noresume(&pdev->dev);
+ 
+ 	mhi_unregister_controller(mhi_cntrl);
++	pci_disable_pcie_error_reporting(pdev);
+ }
+ 
+ static void mhi_pci_shutdown(struct pci_dev *pdev)
+diff --git a/drivers/char/hw_random/exynos-trng.c b/drivers/char/hw_random/exynos-trng.c
+index 8e1fe3f8dd2df..c8db62bc5ff72 100644
+--- a/drivers/char/hw_random/exynos-trng.c
++++ b/drivers/char/hw_random/exynos-trng.c
+@@ -132,7 +132,7 @@ static int exynos_trng_probe(struct platform_device *pdev)
+ 		return PTR_ERR(trng->mem);
+ 
+ 	pm_runtime_enable(&pdev->dev);
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "Could not get runtime PM.\n");
+ 		goto err_pm_get;
+@@ -165,7 +165,7 @@ err_register:
+ 	clk_disable_unprepare(trng->clk);
+ 
+ err_clock:
+-	pm_runtime_put_sync(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
+ 
+ err_pm_get:
+ 	pm_runtime_disable(&pdev->dev);
+diff --git a/drivers/char/pcmcia/cm4000_cs.c b/drivers/char/pcmcia/cm4000_cs.c
+index 89681f07bc787..9468e9520cee0 100644
+--- a/drivers/char/pcmcia/cm4000_cs.c
++++ b/drivers/char/pcmcia/cm4000_cs.c
+@@ -544,6 +544,10 @@ static int set_protocol(struct cm4000_dev *dev, struct ptsreq *ptsreq)
+ 		io_read_num_rec_bytes(iobase, &num_bytes_read);
+ 		if (num_bytes_read >= 4) {
+ 			DEBUGP(2, dev, "NumRecBytes = %i\n", num_bytes_read);
++			if (num_bytes_read > 4) {
++				rc = -EIO;
++				goto exit_setprotocol;
++			}
+ 			break;
+ 		}
+ 		usleep_range(10000, 11000);
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 55b9d3965ae1b..69579efb247b3 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -196,13 +196,24 @@ static u8 tpm_tis_status(struct tpm_chip *chip)
+ 		return 0;
+ 
+ 	if (unlikely((status & TPM_STS_READ_ZERO) != 0)) {
+-		/*
+-		 * If this trips, the chances are the read is
+-		 * returning 0xff because the locality hasn't been
+-		 * acquired.  Usually because tpm_try_get_ops() hasn't
+-		 * been called before doing a TPM operation.
+-		 */
+-		WARN_ONCE(1, "TPM returned invalid status\n");
++		if  (!test_and_set_bit(TPM_TIS_INVALID_STATUS, &priv->flags)) {
++			/*
++			 * If this trips, the chances are the read is
++			 * returning 0xff because the locality hasn't been
++			 * acquired.  Usually because tpm_try_get_ops() hasn't
++			 * been called before doing a TPM operation.
++			 */
++			dev_err(&chip->dev, "invalid TPM_STS.x 0x%02x, dumping stack for forensics\n",
++				status);
++
++			/*
++			 * Dump stack for forensics, as invalid TPM_STS.x could be
++			 * potentially triggered by impaired tpm_try_get_ops() or
++			 * tpm_find_get_ops().
++			 */
++			dump_stack();
++		}
++
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index 9b2d32a59f670..b2a3c6c72882d 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -83,6 +83,7 @@ enum tis_defaults {
+ 
+ enum tpm_tis_flags {
+ 	TPM_TIS_ITPM_WORKAROUND		= BIT(0),
++	TPM_TIS_INVALID_STATUS		= BIT(1),
+ };
+ 
+ struct tpm_tis_data {
+@@ -90,7 +91,7 @@ struct tpm_tis_data {
+ 	int locality;
+ 	int irq;
+ 	bool irq_tested;
+-	unsigned int flags;
++	unsigned long flags;
+ 	void __iomem *ilb_base_addr;
+ 	u16 clkrun_enabled;
+ 	wait_queue_head_t int_queue;
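
The tpm_tis hunks replace the old WARN_ONCE() with a once-only dev_err() plus stack dump, gated by test_and_set_bit() on the new TPM_TIS_INVALID_STATUS flag; flags widens from unsigned int to unsigned long because the kernel's atomic bitops operate on unsigned long. A userspace sketch of the report-once gate using C11 atomics:

    #include <stdatomic.h>
    #include <stdio.h>

    #define INVALID_STATUS_BIT 1      /* mirrors TPM_TIS_INVALID_STATUS */

    static atomic_ulong flags;        /* mirrors priv->flags */

    static void report_invalid_status(unsigned int status)
    {
        unsigned long bit = 1UL << INVALID_STATUS_BIT;

        /* atomic fetch-or == test_and_set_bit(): only one caller wins */
        if (atomic_fetch_or(&flags, bit) & bit)
            return;                   /* already reported; stay quiet */

        fprintf(stderr, "invalid TPM_STS.x 0x%02x (reported once)\n", status);
    }

    int main(void)
    {
        report_invalid_status(0xff);  /* prints */
        report_invalid_status(0xff);  /* silent */
        report_invalid_status(0xff);  /* silent */
        return 0;
    }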
+diff --git a/drivers/char/tpm/tpm_tis_spi_main.c b/drivers/char/tpm/tpm_tis_spi_main.c
+index 3856f6ebcb34f..de4209003a448 100644
+--- a/drivers/char/tpm/tpm_tis_spi_main.c
++++ b/drivers/char/tpm/tpm_tis_spi_main.c
+@@ -260,6 +260,8 @@ static int tpm_tis_spi_remove(struct spi_device *dev)
+ }
+ 
+ static const struct spi_device_id tpm_tis_spi_id[] = {
++	{ "st33htpm-spi", (unsigned long)tpm_tis_spi_probe },
++	{ "slb9670", (unsigned long)tpm_tis_spi_probe },
+ 	{ "tpm_tis_spi", (unsigned long)tpm_tis_spi_probe },
+ 	{ "cr50", (unsigned long)cr50_spi_probe },
+ 	{}
+diff --git a/drivers/clk/actions/owl-s500.c b/drivers/clk/actions/owl-s500.c
+index 61bb224f63309..cbeb51c804eb5 100644
+--- a/drivers/clk/actions/owl-s500.c
++++ b/drivers/clk/actions/owl-s500.c
+@@ -127,8 +127,7 @@ static struct clk_factor_table sd_factor_table[] = {
+ 	{ 12, 1, 13 }, { 13, 1, 14 }, { 14, 1, 15 }, { 15, 1, 16 },
+ 	{ 16, 1, 17 }, { 17, 1, 18 }, { 18, 1, 19 }, { 19, 1, 20 },
+ 	{ 20, 1, 21 }, { 21, 1, 22 }, { 22, 1, 23 }, { 23, 1, 24 },
+-	{ 24, 1, 25 }, { 25, 1, 26 }, { 26, 1, 27 }, { 27, 1, 28 },
+-	{ 28, 1, 29 }, { 29, 1, 30 }, { 30, 1, 31 }, { 31, 1, 32 },
++	{ 24, 1, 25 },
+ 
+ 	/* bit8: /128 */
+ 	{ 256, 1, 1 * 128 }, { 257, 1, 2 * 128 }, { 258, 1, 3 * 128 }, { 259, 1, 4 * 128 },
+@@ -137,19 +136,20 @@ static struct clk_factor_table sd_factor_table[] = {
+ 	{ 268, 1, 13 * 128 }, { 269, 1, 14 * 128 }, { 270, 1, 15 * 128 }, { 271, 1, 16 * 128 },
+ 	{ 272, 1, 17 * 128 }, { 273, 1, 18 * 128 }, { 274, 1, 19 * 128 }, { 275, 1, 20 * 128 },
+ 	{ 276, 1, 21 * 128 }, { 277, 1, 22 * 128 }, { 278, 1, 23 * 128 }, { 279, 1, 24 * 128 },
+-	{ 280, 1, 25 * 128 }, { 281, 1, 26 * 128 }, { 282, 1, 27 * 128 }, { 283, 1, 28 * 128 },
+-	{ 284, 1, 29 * 128 }, { 285, 1, 30 * 128 }, { 286, 1, 31 * 128 }, { 287, 1, 32 * 128 },
++	{ 280, 1, 25 * 128 },
+ 	{ 0, 0, 0 },
+ };
+ 
+-static struct clk_factor_table bisp_factor_table[] = {
+-	{ 0, 1, 1 }, { 1, 1, 2 }, { 2, 1, 3 }, { 3, 1, 4 },
+-	{ 4, 1, 5 }, { 5, 1, 6 }, { 6, 1, 7 }, { 7, 1, 8 },
++static struct clk_factor_table de_factor_table[] = {
++	{ 0, 1, 1 }, { 1, 2, 3 }, { 2, 1, 2 }, { 3, 2, 5 },
++	{ 4, 1, 3 }, { 5, 1, 4 }, { 6, 1, 6 }, { 7, 1, 8 },
++	{ 8, 1, 12 },
+ 	{ 0, 0, 0 },
+ };
+ 
+-static struct clk_factor_table ahb_factor_table[] = {
+-	{ 1, 1, 2 }, { 2, 1, 3 },
++static struct clk_factor_table hde_factor_table[] = {
++	{ 0, 1, 1 }, { 1, 2, 3 }, { 2, 1, 2 }, { 3, 2, 5 },
++	{ 4, 1, 3 }, { 5, 1, 4 }, { 6, 1, 6 }, { 7, 1, 8 },
+ 	{ 0, 0, 0 },
+ };
+ 
+@@ -158,6 +158,13 @@ static struct clk_div_table rmii_ref_div_table[] = {
+ 	{ 0, 0 },
+ };
+ 
++static struct clk_div_table std12rate_div_table[] = {
++	{ 0, 1 }, { 1, 2 }, { 2, 3 }, { 3, 4 },
++	{ 4, 5 }, { 5, 6 }, { 6, 7 }, { 7, 8 },
++	{ 8, 9 }, { 9, 10 }, { 10, 11 }, { 11, 12 },
++	{ 0, 0 },
++};
++
+ static struct clk_div_table i2s_div_table[] = {
+ 	{ 0, 1 }, { 1, 2 }, { 2, 3 }, { 3, 4 },
+ 	{ 4, 6 }, { 5, 8 }, { 6, 12 }, { 7, 16 },
+@@ -174,7 +181,6 @@ static struct clk_div_table nand_div_table[] = {
+ 
+ /* mux clock */
+ static OWL_MUX(dev_clk, "dev_clk", dev_clk_mux_p, CMU_DEVPLL, 12, 1, CLK_SET_RATE_PARENT);
+-static OWL_MUX(ahbprediv_clk, "ahbprediv_clk", ahbprediv_clk_mux_p, CMU_BUSCLK1, 8, 3, CLK_SET_RATE_PARENT);
+ 
+ /* gate clocks */
+ static OWL_GATE(gpio_clk, "gpio_clk", "apb_clk", CMU_DEVCLKEN0, 18, 0, 0);
+@@ -187,45 +193,54 @@ static OWL_GATE(timer_clk, "timer_clk", "hosc", CMU_DEVCLKEN1, 27, 0, 0);
+ static OWL_GATE(hdmi_clk, "hdmi_clk", "hosc", CMU_DEVCLKEN1, 3, 0, 0);
+ 
+ /* divider clocks */
+-static OWL_DIVIDER(h_clk, "h_clk", "ahbprediv_clk", CMU_BUSCLK1, 12, 2, NULL, 0, 0);
++static OWL_DIVIDER(h_clk, "h_clk", "ahbprediv_clk", CMU_BUSCLK1, 2, 2, NULL, 0, 0);
+ static OWL_DIVIDER(apb_clk, "apb_clk", "ahb_clk", CMU_BUSCLK1, 14, 2, NULL, 0, 0);
+ static OWL_DIVIDER(rmii_ref_clk, "rmii_ref_clk", "ethernet_pll_clk", CMU_ETHERNETPLL, 1, 1, rmii_ref_div_table, 0, 0);
+ 
+ /* factor clocks */
+-static OWL_FACTOR(ahb_clk, "ahb_clk", "h_clk", CMU_BUSCLK1, 2, 2, ahb_factor_table, 0, 0);
+-static OWL_FACTOR(de1_clk, "de_clk1", "de_clk", CMU_DECLK, 0, 3, bisp_factor_table, 0, 0);
+-static OWL_FACTOR(de2_clk, "de_clk2", "de_clk", CMU_DECLK, 4, 3, bisp_factor_table, 0, 0);
++static OWL_FACTOR(de1_clk, "de_clk1", "de_clk", CMU_DECLK, 0, 4, de_factor_table, 0, 0);
++static OWL_FACTOR(de2_clk, "de_clk2", "de_clk", CMU_DECLK, 4, 4, de_factor_table, 0, 0);
+ 
+ /* composite clocks */
++static OWL_COMP_DIV(ahbprediv_clk, "ahbprediv_clk", ahbprediv_clk_mux_p,
++			OWL_MUX_HW(CMU_BUSCLK1, 8, 3),
++			{ 0 },
++			OWL_DIVIDER_HW(CMU_BUSCLK1, 12, 2, 0, NULL),
++			CLK_SET_RATE_PARENT);
++
++static OWL_COMP_FIXED_FACTOR(ahb_clk, "ahb_clk", "h_clk",
++			{ 0 },
++			1, 1, 0);
++
+ static OWL_COMP_FACTOR(vce_clk, "vce_clk", hde_clk_mux_p,
+ 			OWL_MUX_HW(CMU_VCECLK, 4, 2),
+ 			OWL_GATE_HW(CMU_DEVCLKEN0, 26, 0),
+-			OWL_FACTOR_HW(CMU_VCECLK, 0, 3, 0, bisp_factor_table),
++			OWL_FACTOR_HW(CMU_VCECLK, 0, 3, 0, hde_factor_table),
+ 			0);
+ 
+ static OWL_COMP_FACTOR(vde_clk, "vde_clk", hde_clk_mux_p,
+ 			OWL_MUX_HW(CMU_VDECLK, 4, 2),
+ 			OWL_GATE_HW(CMU_DEVCLKEN0, 25, 0),
+-			OWL_FACTOR_HW(CMU_VDECLK, 0, 3, 0, bisp_factor_table),
++			OWL_FACTOR_HW(CMU_VDECLK, 0, 3, 0, hde_factor_table),
+ 			0);
+ 
+-static OWL_COMP_FACTOR(bisp_clk, "bisp_clk", bisp_clk_mux_p,
++static OWL_COMP_DIV(bisp_clk, "bisp_clk", bisp_clk_mux_p,
+ 			OWL_MUX_HW(CMU_BISPCLK, 4, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN0, 14, 0),
+-			OWL_FACTOR_HW(CMU_BISPCLK, 0, 3, 0, bisp_factor_table),
++			OWL_DIVIDER_HW(CMU_BISPCLK, 0, 4, 0, std12rate_div_table),
+ 			0);
+ 
+-static OWL_COMP_FACTOR(sensor0_clk, "sensor0_clk", sensor_clk_mux_p,
++static OWL_COMP_DIV(sensor0_clk, "sensor0_clk", sensor_clk_mux_p,
+ 			OWL_MUX_HW(CMU_SENSORCLK, 4, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN0, 14, 0),
+-			OWL_FACTOR_HW(CMU_SENSORCLK, 0, 3, 0, bisp_factor_table),
+-			CLK_IGNORE_UNUSED);
++			OWL_DIVIDER_HW(CMU_SENSORCLK, 0, 4, 0, std12rate_div_table),
++			0);
+ 
+-static OWL_COMP_FACTOR(sensor1_clk, "sensor1_clk", sensor_clk_mux_p,
++static OWL_COMP_DIV(sensor1_clk, "sensor1_clk", sensor_clk_mux_p,
+ 			OWL_MUX_HW(CMU_SENSORCLK, 4, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN0, 14, 0),
+-			OWL_FACTOR_HW(CMU_SENSORCLK, 8, 3, 0, bisp_factor_table),
+-			CLK_IGNORE_UNUSED);
++			OWL_DIVIDER_HW(CMU_SENSORCLK, 8, 4, 0, std12rate_div_table),
++			0);
+ 
+ static OWL_COMP_FACTOR(sd0_clk, "sd0_clk", sd_clk_mux_p,
+ 			OWL_MUX_HW(CMU_SD0CLK, 9, 1),
+@@ -305,7 +320,7 @@ static OWL_COMP_FIXED_FACTOR(i2c3_clk, "i2c3_clk", "ethernet_pll_clk",
+ static OWL_COMP_DIV(uart0_clk, "uart0_clk", uart_clk_mux_p,
+ 			OWL_MUX_HW(CMU_UART0CLK, 16, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN1, 6, 0),
+-			OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
++			OWL_DIVIDER_HW(CMU_UART0CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
+ 			CLK_IGNORE_UNUSED);
+ 
+ static OWL_COMP_DIV(uart1_clk, "uart1_clk", uart_clk_mux_p,
+@@ -317,31 +332,31 @@ static OWL_COMP_DIV(uart1_clk, "uart1_clk", uart_clk_mux_p,
+ static OWL_COMP_DIV(uart2_clk, "uart2_clk", uart_clk_mux_p,
+ 			OWL_MUX_HW(CMU_UART2CLK, 16, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN1, 8, 0),
+-			OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
++			OWL_DIVIDER_HW(CMU_UART2CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
+ 			CLK_IGNORE_UNUSED);
+ 
+ static OWL_COMP_DIV(uart3_clk, "uart3_clk", uart_clk_mux_p,
+ 			OWL_MUX_HW(CMU_UART3CLK, 16, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN1, 19, 0),
+-			OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
++			OWL_DIVIDER_HW(CMU_UART3CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
+ 			CLK_IGNORE_UNUSED);
+ 
+ static OWL_COMP_DIV(uart4_clk, "uart4_clk", uart_clk_mux_p,
+ 			OWL_MUX_HW(CMU_UART4CLK, 16, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN1, 20, 0),
+-			OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
++			OWL_DIVIDER_HW(CMU_UART4CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
+ 			CLK_IGNORE_UNUSED);
+ 
+ static OWL_COMP_DIV(uart5_clk, "uart5_clk", uart_clk_mux_p,
+ 			OWL_MUX_HW(CMU_UART5CLK, 16, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN1, 21, 0),
+-			OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
++			OWL_DIVIDER_HW(CMU_UART5CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
+ 			CLK_IGNORE_UNUSED);
+ 
+ static OWL_COMP_DIV(uart6_clk, "uart6_clk", uart_clk_mux_p,
+ 			OWL_MUX_HW(CMU_UART6CLK, 16, 1),
+ 			OWL_GATE_HW(CMU_DEVCLKEN1, 18, 0),
+-			OWL_DIVIDER_HW(CMU_UART1CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
++			OWL_DIVIDER_HW(CMU_UART6CLK, 0, 8, CLK_DIVIDER_ROUND_CLOSEST, NULL),
+ 			CLK_IGNORE_UNUSED);
+ 
+ static OWL_COMP_DIV(i2srx_clk, "i2srx_clk", i2s_clk_mux_p,
+diff --git a/drivers/clk/clk-k210.c b/drivers/clk/clk-k210.c
+index 6c84abf5b2e36..67a7cb3503c36 100644
+--- a/drivers/clk/clk-k210.c
++++ b/drivers/clk/clk-k210.c
+@@ -722,6 +722,7 @@ static int k210_clk_set_parent(struct clk_hw *hw, u8 index)
+ 		reg |= BIT(cfg->mux_bit);
+ 	else
+ 		reg &= ~BIT(cfg->mux_bit);
++	writel(reg, ksc->regs + cfg->mux_reg);
+ 	spin_unlock_irqrestore(&ksc->clk_lock, flags);
+ 
+ 	return 0;
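
The clk-k210 fix is a classic read-modify-write bug: the mux register was read and the bit flipped, but the write-back was missing, so set_parent silently changed nothing in hardware. A tiny sketch of the complete RMW sequence, with a plain variable standing in for the MMIO register:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t fake_reg;                     /* stands in for MMIO */

    static uint32_t readl_stub(void)    { return fake_reg; }
    static void writel_stub(uint32_t v) { fake_reg = v; }

    static void set_mux(int bit, int on)
    {
        uint32_t reg = readl_stub();              /* read   */

        if (on)                                   /* modify */
            reg |= (uint32_t)1 << bit;
        else
            reg &= ~((uint32_t)1 << bit);
        writel_stub(reg);                         /* write back: the
                                                     line the hunk adds */
    }

    int main(void)
    {
        set_mux(3, 1);
        printf("reg = %#x\n", (unsigned)fake_reg);    /* 0x8 */
        return 0;
    }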
+diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
+index e0446e66fa645..eb22f4fdbc6b4 100644
+--- a/drivers/clk/clk-si5341.c
++++ b/drivers/clk/clk-si5341.c
+@@ -92,12 +92,22 @@ struct clk_si5341_output_config {
+ #define SI5341_PN_BASE		0x0002
+ #define SI5341_DEVICE_REV	0x0005
+ #define SI5341_STATUS		0x000C
++#define SI5341_LOS		0x000D
++#define SI5341_STATUS_STICKY	0x0011
++#define SI5341_LOS_STICKY	0x0012
+ #define SI5341_SOFT_RST		0x001C
+ #define SI5341_IN_SEL		0x0021
++#define SI5341_DEVICE_READY	0x00FE
+ #define SI5341_XAXB_CFG		0x090E
+ #define SI5341_IN_EN		0x0949
+ #define SI5341_INX_TO_PFD_EN	0x094A
+ 
++/* Status bits */
++#define SI5341_STATUS_SYSINCAL	BIT(0)
++#define SI5341_STATUS_LOSXAXB	BIT(1)
++#define SI5341_STATUS_LOSREF	BIT(2)
++#define SI5341_STATUS_LOL	BIT(3)
++
+ /* Input selection */
+ #define SI5341_IN_SEL_MASK	0x06
+ #define SI5341_IN_SEL_SHIFT	1
+@@ -340,6 +350,8 @@ static const struct si5341_reg_default si5341_reg_defaults[] = {
+ 	{ 0x094A, 0x00 }, /* INx_TO_PFD_EN (disabled) */
+ 	{ 0x0A02, 0x00 }, /* Not in datasheet */
+ 	{ 0x0B44, 0x0F }, /* PDIV_ENB (datasheet does not mention what it is) */
++	{ 0x0B57, 0x10 }, /* VCO_RESET_CALCODE (not described in datasheet) */
++	{ 0x0B58, 0x05 }, /* VCO_RESET_CALCODE (not described in datasheet) */
+ };
+ 
+ /* Read and interpret a 44-bit followed by a 32-bit value in the regmap */
+@@ -623,6 +635,9 @@ static unsigned long si5341_synth_clk_recalc_rate(struct clk_hw *hw,
+ 			SI5341_SYNTH_N_NUM(synth->index), &n_num, &n_den);
+ 	if (err < 0)
+ 		return err;
++	/* Check for bogus/uninitialized settings */
++	if (!n_num || !n_den)
++		return 0;
+ 
+ 	/*
+ 	 * n_num and n_den are shifted left as much as possible, so to prevent
+@@ -806,6 +821,9 @@ static long si5341_output_clk_round_rate(struct clk_hw *hw, unsigned long rate,
+ {
+ 	unsigned long r;
+ 
++	if (!rate)
++		return 0;
++
+ 	r = *parent_rate >> 1;
+ 
+ 	/* If rate is an even divisor, no changes to parent required */
+@@ -834,11 +852,16 @@ static int si5341_output_clk_set_rate(struct clk_hw *hw, unsigned long rate,
+ 		unsigned long parent_rate)
+ {
+ 	struct clk_si5341_output *output = to_clk_si5341_output(hw);
+-	/* Frequency divider is (r_div + 1) * 2 */
+-	u32 r_div = (parent_rate / rate) >> 1;
++	u32 r_div;
+ 	int err;
+ 	u8 r[3];
+ 
++	if (!rate)
++		return -EINVAL;
++
++	/* Frequency divider is (r_div + 1) * 2 */
++	r_div = (parent_rate / rate) >> 1;
++
+ 	if (r_div <= 1)
+ 		r_div = 0;
+ 	else if (r_div >= BIT(24))
+@@ -1083,7 +1106,7 @@ static const struct si5341_reg_default si5341_preamble[] = {
+ 	{ 0x0B25, 0x00 },
+ 	{ 0x0502, 0x01 },
+ 	{ 0x0505, 0x03 },
+-	{ 0x0957, 0x1F },
++	{ 0x0957, 0x17 },
+ 	{ 0x0B4E, 0x1A },
+ };
+ 
+@@ -1189,6 +1212,32 @@ static const struct regmap_range_cfg si5341_regmap_ranges[] = {
+ 	},
+ };
+ 
++static int si5341_wait_device_ready(struct i2c_client *client)
++{
++	int count;
++
++	/* Datasheet warns: Any attempt to read or write any register other
++	 * than DEVICE_READY before DEVICE_READY reads as 0x0F may corrupt the
++	 * NVM programming and may corrupt the register contents, as they are
++	 * read from NVM. Note that this includes accesses to the PAGE register.
++	 * Also: DEVICE_READY is available on every register page, so no page
++	 * change is needed to read it.
++	 * Do this outside regmap to avoid automatic PAGE register access.
++	 * May take up to 300ms to complete.
++	 */
++	for (count = 0; count < 15; ++count) {
++		s32 result = i2c_smbus_read_byte_data(client,
++						      SI5341_DEVICE_READY);
++		if (result < 0)
++			return result;
++		if (result == 0x0F)
++			return 0;
++		msleep(20);
++	}
++	dev_err(&client->dev, "timeout waiting for DEVICE_READY\n");
++	return -EIO;
++}
++
+ static const struct regmap_config si5341_regmap_config = {
+ 	.reg_bits = 8,
+ 	.val_bits = 8,
+@@ -1378,6 +1427,7 @@ static int si5341_probe(struct i2c_client *client,
+ 	unsigned int i;
+ 	struct clk_si5341_output_config config[SI5341_MAX_NUM_OUTPUTS];
+ 	bool initialization_required;
++	u32 status;
+ 
+ 	data = devm_kzalloc(&client->dev, sizeof(*data), GFP_KERNEL);
+ 	if (!data)
+@@ -1385,6 +1435,11 @@ static int si5341_probe(struct i2c_client *client,
+ 
+ 	data->i2c_client = client;
+ 
++	/* Must be done before otherwise touching hardware */
++	err = si5341_wait_device_ready(client);
++	if (err)
++		return err;
++
+ 	for (i = 0; i < SI5341_NUM_INPUTS; ++i) {
+ 		input = devm_clk_get(&client->dev, si5341_input_clock_names[i]);
+ 		if (IS_ERR(input)) {
+@@ -1540,6 +1595,22 @@ static int si5341_probe(struct i2c_client *client,
+ 			return err;
+ 	}
+ 
++	/* wait for device to report input clock present and PLL lock */
++	err = regmap_read_poll_timeout(data->regmap, SI5341_STATUS, status,
++		!(status & (SI5341_STATUS_LOSREF | SI5341_STATUS_LOL)),
++	       10000, 250000);
++	if (err) {
++		dev_err(&client->dev, "Error waiting for input clock or PLL lock\n");
++		return err;
++	}
++
++	/* clear sticky alarm bits from initialization */
++	err = regmap_write(data->regmap, SI5341_STATUS_STICKY, 0);
++	if (err) {
++		dev_err(&client->dev, "unable to clear sticky status\n");
++		return err;
++	}
++
+ 	/* Free the names, clk framework makes copies */
+ 	for (i = 0; i < data->num_synth; ++i)
+ 		 devm_kfree(&client->dev, (void *)synth_clock_names[i]);
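
si5341_wait_device_ready() polls DEVICE_READY with raw i2c_smbus_read_byte_data() calls, deliberately bypassing regmap so nothing touches the PAGE register before the chip reports 0x0F, and bounds the wait at 15 tries of ~20 ms (the ~300 ms budget the comment cites). A sketch of the bounded-poll shape with the I2C read stubbed out:

    #include <errno.h>
    #include <stdio.h>

    #define DEVICE_READY_OK 0x0F

    /* Stub for i2c_smbus_read_byte_data(client, SI5341_DEVICE_READY). */
    static int read_device_ready(int *polls_until_ready)
    {
        return --(*polls_until_ready) > 0 ? 0x00 : DEVICE_READY_OK;
    }

    static int wait_device_ready(int polls_until_ready)
    {
        int count;

        for (count = 0; count < 15; count++) {
            int result = read_device_ready(&polls_until_ready);

            if (result < 0)
                return result;           /* bus error: give up now */
            if (result == DEVICE_READY_OK)
                return 0;
            /* msleep(20) in the driver; omitted in this sketch */
        }
        return -EIO;                     /* timed out after ~300 ms */
    }

    int main(void)
    {
        printf("ready after a few polls: %d\n", wait_device_ready(5));
        printf("never ready:             %d\n", wait_device_ready(100));
        return 0;
    }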
+diff --git a/drivers/clk/clk-versaclock5.c b/drivers/clk/clk-versaclock5.c
+index 344cd6c611887..3c737742c2a92 100644
+--- a/drivers/clk/clk-versaclock5.c
++++ b/drivers/clk/clk-versaclock5.c
+@@ -69,7 +69,10 @@
+ #define VC5_FEEDBACK_FRAC_DIV(n)		(0x19 + (n))
+ #define VC5_RC_CONTROL0				0x1e
+ #define VC5_RC_CONTROL1				0x1f
+-/* Register 0x20 is factory reserved */
++
++/* These registers are named "Unused Factory Reserved Registers" */
++#define VC5_RESERVED_X0(idx)		(0x20 + ((idx) * 0x10))
++#define VC5_RESERVED_X0_BYPASS_SYNC	BIT(7) /* bypass_sync<idx> bit */
+ 
+ /* Output divider control for divider 1,2,3,4 */
+ #define VC5_OUT_DIV_CONTROL(idx)	(0x21 + ((idx) * 0x10))
+@@ -87,7 +90,6 @@
+ #define VC5_OUT_DIV_SKEW_INT(idx, n)	(0x2b + ((idx) * 0x10) + (n))
+ #define VC5_OUT_DIV_INT(idx, n)		(0x2d + ((idx) * 0x10) + (n))
+ #define VC5_OUT_DIV_SKEW_FRAC(idx)	(0x2f + ((idx) * 0x10))
+-/* Registers 0x30, 0x40, 0x50 are factory reserved */
+ 
+ /* Clock control register for clock 1,2 */
+ #define VC5_CLK_OUTPUT_CFG(idx, n)	(0x60 + ((idx) * 0x2) + (n))
+@@ -140,6 +142,8 @@
+ #define VC5_HAS_INTERNAL_XTAL	BIT(0)
+ /* chip has PFD frequency doubler */
+ #define VC5_HAS_PFD_FREQ_DBL	BIT(1)
++/* chip has bits to disable FOD sync */
++#define VC5_HAS_BYPASS_SYNC_BIT	BIT(2)
+ 
+ /* Supported IDT VC5 models. */
+ enum vc5_model {
+@@ -581,6 +585,23 @@ static int vc5_clk_out_prepare(struct clk_hw *hw)
+ 	unsigned int src;
+ 	int ret;
+ 
++	/*
++	 * When enabling a FOD, all currently enabled FODs are briefly
++	 * stopped in order to synchronize all of them. This causes a clock
++	 * disruption to any unrelated chips that might be already using
++	 * other clock outputs. Bypass the sync feature to avoid the issue,
++	 * which is possible on the VersaClock 6E family via reserved
++	 * registers.
++	 */
++	if (vc5->chip_info->flags & VC5_HAS_BYPASS_SYNC_BIT) {
++		ret = regmap_update_bits(vc5->regmap,
++					 VC5_RESERVED_X0(hwdata->num),
++					 VC5_RESERVED_X0_BYPASS_SYNC,
++					 VC5_RESERVED_X0_BYPASS_SYNC);
++		if (ret)
++			return ret;
++	}
++
+ 	/*
+ 	 * If the input mux is disabled, enable it first and
+ 	 * select source from matching FOD.
+@@ -1166,7 +1187,7 @@ static const struct vc5_chip_info idt_5p49v6965_info = {
+ 	.model = IDT_VC6_5P49V6965,
+ 	.clk_fod_cnt = 4,
+ 	.clk_out_cnt = 5,
+-	.flags = 0,
++	.flags = VC5_HAS_BYPASS_SYNC_BIT,
+ };
+ 
+ static const struct i2c_device_id vc5_id[] = {
+diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c
+index b08019e1faf99..c491bc9c61ce7 100644
+--- a/drivers/clk/imx/clk-imx8mq.c
++++ b/drivers/clk/imx/clk-imx8mq.c
+@@ -358,46 +358,26 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MQ_VIDEO2_PLL_OUT] = imx_clk_hw_sscg_pll("video2_pll_out", video2_pll_out_sels, ARRAY_SIZE(video2_pll_out_sels), 0, 0, 0, base + 0x54, 0);
+ 
+ 	/* SYS PLL1 fixed output */
+-	hws[IMX8MQ_SYS1_PLL_40M_CG] = imx_clk_hw_gate("sys1_pll_40m_cg", "sys1_pll_out", base + 0x30, 9);
+-	hws[IMX8MQ_SYS1_PLL_80M_CG] = imx_clk_hw_gate("sys1_pll_80m_cg", "sys1_pll_out", base + 0x30, 11);
+-	hws[IMX8MQ_SYS1_PLL_100M_CG] = imx_clk_hw_gate("sys1_pll_100m_cg", "sys1_pll_out", base + 0x30, 13);
+-	hws[IMX8MQ_SYS1_PLL_133M_CG] = imx_clk_hw_gate("sys1_pll_133m_cg", "sys1_pll_out", base + 0x30, 15);
+-	hws[IMX8MQ_SYS1_PLL_160M_CG] = imx_clk_hw_gate("sys1_pll_160m_cg", "sys1_pll_out", base + 0x30, 17);
+-	hws[IMX8MQ_SYS1_PLL_200M_CG] = imx_clk_hw_gate("sys1_pll_200m_cg", "sys1_pll_out", base + 0x30, 19);
+-	hws[IMX8MQ_SYS1_PLL_266M_CG] = imx_clk_hw_gate("sys1_pll_266m_cg", "sys1_pll_out", base + 0x30, 21);
+-	hws[IMX8MQ_SYS1_PLL_400M_CG] = imx_clk_hw_gate("sys1_pll_400m_cg", "sys1_pll_out", base + 0x30, 23);
+-	hws[IMX8MQ_SYS1_PLL_800M_CG] = imx_clk_hw_gate("sys1_pll_800m_cg", "sys1_pll_out", base + 0x30, 25);
+-
+-	hws[IMX8MQ_SYS1_PLL_40M] = imx_clk_hw_fixed_factor("sys1_pll_40m", "sys1_pll_40m_cg", 1, 20);
+-	hws[IMX8MQ_SYS1_PLL_80M] = imx_clk_hw_fixed_factor("sys1_pll_80m", "sys1_pll_80m_cg", 1, 10);
+-	hws[IMX8MQ_SYS1_PLL_100M] = imx_clk_hw_fixed_factor("sys1_pll_100m", "sys1_pll_100m_cg", 1, 8);
+-	hws[IMX8MQ_SYS1_PLL_133M] = imx_clk_hw_fixed_factor("sys1_pll_133m", "sys1_pll_133m_cg", 1, 6);
+-	hws[IMX8MQ_SYS1_PLL_160M] = imx_clk_hw_fixed_factor("sys1_pll_160m", "sys1_pll_160m_cg", 1, 5);
+-	hws[IMX8MQ_SYS1_PLL_200M] = imx_clk_hw_fixed_factor("sys1_pll_200m", "sys1_pll_200m_cg", 1, 4);
+-	hws[IMX8MQ_SYS1_PLL_266M] = imx_clk_hw_fixed_factor("sys1_pll_266m", "sys1_pll_266m_cg", 1, 3);
+-	hws[IMX8MQ_SYS1_PLL_400M] = imx_clk_hw_fixed_factor("sys1_pll_400m", "sys1_pll_400m_cg", 1, 2);
+-	hws[IMX8MQ_SYS1_PLL_800M] = imx_clk_hw_fixed_factor("sys1_pll_800m", "sys1_pll_800m_cg", 1, 1);
++	hws[IMX8MQ_SYS1_PLL_40M] = imx_clk_hw_fixed_factor("sys1_pll_40m", "sys1_pll_out", 1, 20);
++	hws[IMX8MQ_SYS1_PLL_80M] = imx_clk_hw_fixed_factor("sys1_pll_80m", "sys1_pll_out", 1, 10);
++	hws[IMX8MQ_SYS1_PLL_100M] = imx_clk_hw_fixed_factor("sys1_pll_100m", "sys1_pll_out", 1, 8);
++	hws[IMX8MQ_SYS1_PLL_133M] = imx_clk_hw_fixed_factor("sys1_pll_133m", "sys1_pll_out", 1, 6);
++	hws[IMX8MQ_SYS1_PLL_160M] = imx_clk_hw_fixed_factor("sys1_pll_160m", "sys1_pll_out", 1, 5);
++	hws[IMX8MQ_SYS1_PLL_200M] = imx_clk_hw_fixed_factor("sys1_pll_200m", "sys1_pll_out", 1, 4);
++	hws[IMX8MQ_SYS1_PLL_266M] = imx_clk_hw_fixed_factor("sys1_pll_266m", "sys1_pll_out", 1, 3);
++	hws[IMX8MQ_SYS1_PLL_400M] = imx_clk_hw_fixed_factor("sys1_pll_400m", "sys1_pll_out", 1, 2);
++	hws[IMX8MQ_SYS1_PLL_800M] = imx_clk_hw_fixed_factor("sys1_pll_800m", "sys1_pll_out", 1, 1);
+ 
+ 	/* SYS PLL2 fixed output */
+-	hws[IMX8MQ_SYS2_PLL_50M_CG] = imx_clk_hw_gate("sys2_pll_50m_cg", "sys2_pll_out", base + 0x3c, 9);
+-	hws[IMX8MQ_SYS2_PLL_100M_CG] = imx_clk_hw_gate("sys2_pll_100m_cg", "sys2_pll_out", base + 0x3c, 11);
+-	hws[IMX8MQ_SYS2_PLL_125M_CG] = imx_clk_hw_gate("sys2_pll_125m_cg", "sys2_pll_out", base + 0x3c, 13);
+-	hws[IMX8MQ_SYS2_PLL_166M_CG] = imx_clk_hw_gate("sys2_pll_166m_cg", "sys2_pll_out", base + 0x3c, 15);
+-	hws[IMX8MQ_SYS2_PLL_200M_CG] = imx_clk_hw_gate("sys2_pll_200m_cg", "sys2_pll_out", base + 0x3c, 17);
+-	hws[IMX8MQ_SYS2_PLL_250M_CG] = imx_clk_hw_gate("sys2_pll_250m_cg", "sys2_pll_out", base + 0x3c, 19);
+-	hws[IMX8MQ_SYS2_PLL_333M_CG] = imx_clk_hw_gate("sys2_pll_333m_cg", "sys2_pll_out", base + 0x3c, 21);
+-	hws[IMX8MQ_SYS2_PLL_500M_CG] = imx_clk_hw_gate("sys2_pll_500m_cg", "sys2_pll_out", base + 0x3c, 23);
+-	hws[IMX8MQ_SYS2_PLL_1000M_CG] = imx_clk_hw_gate("sys2_pll_1000m_cg", "sys2_pll_out", base + 0x3c, 25);
+-
+-	hws[IMX8MQ_SYS2_PLL_50M] = imx_clk_hw_fixed_factor("sys2_pll_50m", "sys2_pll_50m_cg", 1, 20);
+-	hws[IMX8MQ_SYS2_PLL_100M] = imx_clk_hw_fixed_factor("sys2_pll_100m", "sys2_pll_100m_cg", 1, 10);
+-	hws[IMX8MQ_SYS2_PLL_125M] = imx_clk_hw_fixed_factor("sys2_pll_125m", "sys2_pll_125m_cg", 1, 8);
+-	hws[IMX8MQ_SYS2_PLL_166M] = imx_clk_hw_fixed_factor("sys2_pll_166m", "sys2_pll_166m_cg", 1, 6);
+-	hws[IMX8MQ_SYS2_PLL_200M] = imx_clk_hw_fixed_factor("sys2_pll_200m", "sys2_pll_200m_cg", 1, 5);
+-	hws[IMX8MQ_SYS2_PLL_250M] = imx_clk_hw_fixed_factor("sys2_pll_250m", "sys2_pll_250m_cg", 1, 4);
+-	hws[IMX8MQ_SYS2_PLL_333M] = imx_clk_hw_fixed_factor("sys2_pll_333m", "sys2_pll_333m_cg", 1, 3);
+-	hws[IMX8MQ_SYS2_PLL_500M] = imx_clk_hw_fixed_factor("sys2_pll_500m", "sys2_pll_500m_cg", 1, 2);
+-	hws[IMX8MQ_SYS2_PLL_1000M] = imx_clk_hw_fixed_factor("sys2_pll_1000m", "sys2_pll_1000m_cg", 1, 1);
++	hws[IMX8MQ_SYS2_PLL_50M] = imx_clk_hw_fixed_factor("sys2_pll_50m", "sys2_pll_out", 1, 20);
++	hws[IMX8MQ_SYS2_PLL_100M] = imx_clk_hw_fixed_factor("sys2_pll_100m", "sys2_pll_out", 1, 10);
++	hws[IMX8MQ_SYS2_PLL_125M] = imx_clk_hw_fixed_factor("sys2_pll_125m", "sys2_pll_out", 1, 8);
++	hws[IMX8MQ_SYS2_PLL_166M] = imx_clk_hw_fixed_factor("sys2_pll_166m", "sys2_pll_out", 1, 6);
++	hws[IMX8MQ_SYS2_PLL_200M] = imx_clk_hw_fixed_factor("sys2_pll_200m", "sys2_pll_out", 1, 5);
++	hws[IMX8MQ_SYS2_PLL_250M] = imx_clk_hw_fixed_factor("sys2_pll_250m", "sys2_pll_out", 1, 4);
++	hws[IMX8MQ_SYS2_PLL_333M] = imx_clk_hw_fixed_factor("sys2_pll_333m", "sys2_pll_out", 1, 3);
++	hws[IMX8MQ_SYS2_PLL_500M] = imx_clk_hw_fixed_factor("sys2_pll_500m", "sys2_pll_out", 1, 2);
++	hws[IMX8MQ_SYS2_PLL_1000M] = imx_clk_hw_fixed_factor("sys2_pll_1000m", "sys2_pll_out", 1, 1);
+ 
+ 	hws[IMX8MQ_CLK_MON_AUDIO_PLL1_DIV] = imx_clk_hw_divider("audio_pll1_out_monitor", "audio_pll1_bypass", base + 0x78, 0, 3);
+ 	hws[IMX8MQ_CLK_MON_AUDIO_PLL2_DIV] = imx_clk_hw_divider("audio_pll2_out_monitor", "audio_pll2_bypass", base + 0x78, 4, 3);
+diff --git a/drivers/clk/meson/g12a.c b/drivers/clk/meson/g12a.c
+index b080359b4645e..a805bac93c113 100644
+--- a/drivers/clk/meson/g12a.c
++++ b/drivers/clk/meson/g12a.c
+@@ -1603,7 +1603,7 @@ static struct clk_regmap g12b_cpub_clk_trace = {
+ };
+ 
+ static const struct pll_mult_range g12a_gp0_pll_mult_range = {
+-	.min = 55,
++	.min = 125,
+ 	.max = 255,
+ };
+ 
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index c6eb99169ddc7..6f8f0bbc5ab5b 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -1234,7 +1234,7 @@ static int alpha_pll_fabia_prepare(struct clk_hw *hw)
+ 		return ret;
+ 
+ 	/* Setup PLL for calibration frequency */
+-	regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL(pll), cal_l);
++	regmap_write(pll->clkr.regmap, PLL_CAL_L_VAL(pll), cal_l);
+ 
+ 	/* Bringup the PLL at calibration frequency */
+ 	ret = clk_alpha_pll_enable(hw);
+diff --git a/drivers/clk/qcom/gcc-sc7280.c b/drivers/clk/qcom/gcc-sc7280.c
+index ef734db316df8..6cefcdc869905 100644
+--- a/drivers/clk/qcom/gcc-sc7280.c
++++ b/drivers/clk/qcom/gcc-sc7280.c
+@@ -716,6 +716,7 @@ static const struct freq_tbl ftbl_gcc_qupv3_wrap0_s2_clk_src[] = {
+ 	F(29491200, P_GCC_GPLL0_OUT_EVEN, 1, 1536, 15625),
+ 	F(32000000, P_GCC_GPLL0_OUT_EVEN, 1, 8, 75),
+ 	F(48000000, P_GCC_GPLL0_OUT_EVEN, 1, 4, 25),
++	F(52174000, P_GCC_GPLL0_OUT_MAIN, 1, 2, 23),
+ 	F(64000000, P_GCC_GPLL0_OUT_EVEN, 1, 16, 75),
+ 	F(75000000, P_GCC_GPLL0_OUT_EVEN, 4, 0, 0),
+ 	F(80000000, P_GCC_GPLL0_OUT_EVEN, 1, 4, 15),
+diff --git a/drivers/clk/rockchip/clk-rk3568.c b/drivers/clk/rockchip/clk-rk3568.c
+index 946ea2f45bf3b..75ca855e720df 100644
+--- a/drivers/clk/rockchip/clk-rk3568.c
++++ b/drivers/clk/rockchip/clk-rk3568.c
+@@ -454,17 +454,17 @@ static struct rockchip_clk_branch rk3568_clk_branches[] __initdata = {
+ 	COMPOSITE_NOMUX(CPLL_125M, "cpll_125m", "cpll", CLK_IGNORE_UNUSED,
+ 			RK3568_CLKSEL_CON(80), 0, 5, DFLAGS,
+ 			RK3568_CLKGATE_CON(35), 10, GFLAGS),
++	COMPOSITE_NOMUX(CPLL_100M, "cpll_100m", "cpll", CLK_IGNORE_UNUSED,
++			RK3568_CLKSEL_CON(82), 0, 5, DFLAGS,
++			RK3568_CLKGATE_CON(35), 11, GFLAGS),
+ 	COMPOSITE_NOMUX(CPLL_62P5M, "cpll_62p5", "cpll", CLK_IGNORE_UNUSED,
+ 			RK3568_CLKSEL_CON(80), 8, 5, DFLAGS,
+-			RK3568_CLKGATE_CON(35), 11, GFLAGS),
++			RK3568_CLKGATE_CON(35), 12, GFLAGS),
+ 	COMPOSITE_NOMUX(CPLL_50M, "cpll_50m", "cpll", CLK_IGNORE_UNUSED,
+ 			RK3568_CLKSEL_CON(81), 0, 5, DFLAGS,
+-			RK3568_CLKGATE_CON(35), 12, GFLAGS),
++			RK3568_CLKGATE_CON(35), 13, GFLAGS),
+ 	COMPOSITE_NOMUX(CPLL_25M, "cpll_25m", "cpll", CLK_IGNORE_UNUSED,
+ 			RK3568_CLKSEL_CON(81), 8, 6, DFLAGS,
+-			RK3568_CLKGATE_CON(35), 13, GFLAGS),
+-	COMPOSITE_NOMUX(CPLL_100M, "cpll_100m", "cpll", CLK_IGNORE_UNUSED,
+-			RK3568_CLKSEL_CON(82), 0, 5, DFLAGS,
+ 			RK3568_CLKGATE_CON(35), 14, GFLAGS),
+ 	COMPOSITE_NOMUX(0, "clk_osc0_div_750k", "xin24m", CLK_IGNORE_UNUSED,
+ 			RK3568_CLKSEL_CON(82), 8, 6, DFLAGS,
+diff --git a/drivers/clk/socfpga/clk-agilex.c b/drivers/clk/socfpga/clk-agilex.c
+index 92a6d740a799d..1cb21ea79c640 100644
+--- a/drivers/clk/socfpga/clk-agilex.c
++++ b/drivers/clk/socfpga/clk-agilex.c
+@@ -177,6 +177,8 @@ static const struct clk_parent_data emac_mux[] = {
+ 	  .name = "emaca_free_clk", },
+ 	{ .fw_name = "emacb_free_clk",
+ 	  .name = "emacb_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
+ };
+ 
+ static const struct clk_parent_data noc_mux[] = {
+@@ -186,6 +188,41 @@ static const struct clk_parent_data noc_mux[] = {
+ 	  .name = "boot_clk", },
+ };
+ 
++static const struct clk_parent_data sdmmc_mux[] = {
++	{ .fw_name = "sdmmc_free_clk",
++	  .name = "sdmmc_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data s2f_user1_mux[] = {
++	{ .fw_name = "s2f_user1_free_clk",
++	  .name = "s2f_user1_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data psi_mux[] = {
++	{ .fw_name = "psi_ref_free_clk",
++	  .name = "psi_ref_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data gpio_db_mux[] = {
++	{ .fw_name = "gpio_db_free_clk",
++	  .name = "gpio_db_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data emac_ptp_mux[] = {
++	{ .fw_name = "emac_ptp_free_clk",
++	  .name = "emac_ptp_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
+ /* clocks in AO (always on) controller */
+ static const struct stratix10_pll_clock agilex_pll_clks[] = {
+ 	{ AGILEX_BOOT_CLK, "boot_clk", boot_mux, ARRAY_SIZE(boot_mux), 0,
+@@ -222,11 +259,9 @@ static const struct stratix10_perip_cnt_clock agilex_main_perip_cnt_clks[] = {
+ 	{ AGILEX_MPU_FREE_CLK, "mpu_free_clk", NULL, mpu_free_mux, ARRAY_SIZE(mpu_free_mux),
+ 	   0, 0x3C, 0, 0, 0},
+ 	{ AGILEX_NOC_FREE_CLK, "noc_free_clk", NULL, noc_free_mux, ARRAY_SIZE(noc_free_mux),
+-	  0, 0x40, 0, 0, 1},
+-	{ AGILEX_L4_SYS_FREE_CLK, "l4_sys_free_clk", "noc_free_clk", NULL, 1, 0,
+-	  0, 4, 0, 0},
+-	{ AGILEX_NOC_CLK, "noc_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux),
+-	  0, 0, 0, 0x30, 1},
++	  0, 0x40, 0, 0, 0},
++	{ AGILEX_L4_SYS_FREE_CLK, "l4_sys_free_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0,
++	  0, 4, 0x30, 1},
+ 	{ AGILEX_EMAC_A_FREE_CLK, "emaca_free_clk", NULL, emaca_free_mux, ARRAY_SIZE(emaca_free_mux),
+ 	  0, 0xD4, 0, 0x88, 0},
+ 	{ AGILEX_EMAC_B_FREE_CLK, "emacb_free_clk", NULL, emacb_free_mux, ARRAY_SIZE(emacb_free_mux),
+@@ -236,7 +271,7 @@ static const struct stratix10_perip_cnt_clock agilex_main_perip_cnt_clks[] = {
+ 	{ AGILEX_GPIO_DB_FREE_CLK, "gpio_db_free_clk", NULL, gpio_db_free_mux,
+ 	  ARRAY_SIZE(gpio_db_free_mux), 0, 0xE0, 0, 0x88, 3},
+ 	{ AGILEX_SDMMC_FREE_CLK, "sdmmc_free_clk", NULL, sdmmc_free_mux,
+-	  ARRAY_SIZE(sdmmc_free_mux), 0, 0xE4, 0, 0x88, 4},
++	  ARRAY_SIZE(sdmmc_free_mux), 0, 0xE4, 0, 0, 0},
+ 	{ AGILEX_S2F_USER0_FREE_CLK, "s2f_user0_free_clk", NULL, s2f_usr0_free_mux,
+ 	  ARRAY_SIZE(s2f_usr0_free_mux), 0, 0xE8, 0, 0, 0},
+ 	{ AGILEX_S2F_USER1_FREE_CLK, "s2f_user1_free_clk", NULL, s2f_usr1_free_mux,
+@@ -252,24 +287,24 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
+ 	  0, 0, 0, 0, 0, 0, 4},
+ 	{ AGILEX_MPU_CCU_CLK, "mpu_ccu_clk", "mpu_clk", NULL, 1, 0, 0x24,
+ 	  0, 0, 0, 0, 0, 0, 2},
+-	{ AGILEX_L4_MAIN_CLK, "l4_main_clk", "noc_clk", NULL, 1, 0, 0x24,
+-	  1, 0x44, 0, 2, 0, 0, 0},
+-	{ AGILEX_L4_MP_CLK, "l4_mp_clk", "noc_clk", NULL, 1, 0, 0x24,
+-	  2, 0x44, 8, 2, 0, 0, 0},
++	{ AGILEX_L4_MAIN_CLK, "l4_main_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
++	  1, 0x44, 0, 2, 0x30, 1, 0},
++	{ AGILEX_L4_MP_CLK, "l4_mp_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
++	  2, 0x44, 8, 2, 0x30, 1, 0},
+ 	/*
+ 	 * The l4_sp_clk feeds a 100 MHz clock to various peripherals, one of them
+ 	 * being the SP timers, thus cannot get gated.
+ 	 */
+-	{ AGILEX_L4_SP_CLK, "l4_sp_clk", "noc_clk", NULL, 1, CLK_IS_CRITICAL, 0x24,
+-	  3, 0x44, 16, 2, 0, 0, 0},
+-	{ AGILEX_CS_AT_CLK, "cs_at_clk", "noc_clk", NULL, 1, 0, 0x24,
+-	  4, 0x44, 24, 2, 0, 0, 0},
+-	{ AGILEX_CS_TRACE_CLK, "cs_trace_clk", "noc_clk", NULL, 1, 0, 0x24,
+-	  4, 0x44, 26, 2, 0, 0, 0},
++	{ AGILEX_L4_SP_CLK, "l4_sp_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), CLK_IS_CRITICAL, 0x24,
++	  3, 0x44, 16, 2, 0x30, 1, 0},
++	{ AGILEX_CS_AT_CLK, "cs_at_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
++	  4, 0x44, 24, 2, 0x30, 1, 0},
++	{ AGILEX_CS_TRACE_CLK, "cs_trace_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
++	  4, 0x44, 26, 2, 0x30, 1, 0},
+ 	{ AGILEX_CS_PDBG_CLK, "cs_pdbg_clk", "cs_at_clk", NULL, 1, 0, 0x24,
+ 	  4, 0x44, 28, 1, 0, 0, 0},
+-	{ AGILEX_CS_TIMER_CLK, "cs_timer_clk", "noc_clk", NULL, 1, 0, 0x24,
+-	  5, 0, 0, 0, 0, 0, 0},
++	{ AGILEX_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24,
++	  5, 0, 0, 0, 0x30, 1, 0},
+ 	{ AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x24,
+ 	  6, 0, 0, 0, 0, 0, 0},
+ 	{ AGILEX_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
+@@ -278,16 +313,16 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
+ 	  1, 0, 0, 0, 0x94, 27, 0},
+ 	{ AGILEX_EMAC2_CLK, "emac2_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
+ 	  2, 0, 0, 0, 0x94, 28, 0},
+-	{ AGILEX_EMAC_PTP_CLK, "emac_ptp_clk", "emac_ptp_free_clk", NULL, 1, 0, 0x7C,
+-	  3, 0, 0, 0, 0, 0, 0},
+-	{ AGILEX_GPIO_DB_CLK, "gpio_db_clk", "gpio_db_free_clk", NULL, 1, 0, 0x7C,
+-	  4, 0x98, 0, 16, 0, 0, 0},
+-	{ AGILEX_SDMMC_CLK, "sdmmc_clk", "sdmmc_free_clk", NULL, 1, 0, 0x7C,
+-	  5, 0, 0, 0, 0, 0, 4},
+-	{ AGILEX_S2F_USER1_CLK, "s2f_user1_clk", "s2f_user1_free_clk", NULL, 1, 0, 0x7C,
+-	  6, 0, 0, 0, 0, 0, 0},
+-	{ AGILEX_PSI_REF_CLK, "psi_ref_clk", "psi_ref_free_clk", NULL, 1, 0, 0x7C,
+-	  7, 0, 0, 0, 0, 0, 0},
++	{ AGILEX_EMAC_PTP_CLK, "emac_ptp_clk", NULL, emac_ptp_mux, ARRAY_SIZE(emac_ptp_mux), 0, 0x7C,
++	  3, 0, 0, 0, 0x88, 2, 0},
++	{ AGILEX_GPIO_DB_CLK, "gpio_db_clk", NULL, gpio_db_mux, ARRAY_SIZE(gpio_db_mux), 0, 0x7C,
++	  4, 0x98, 0, 16, 0x88, 3, 0},
++	{ AGILEX_SDMMC_CLK, "sdmmc_clk", NULL, sdmmc_mux, ARRAY_SIZE(sdmmc_mux), 0, 0x7C,
++	  5, 0, 0, 0, 0x88, 4, 4},
++	{ AGILEX_S2F_USER1_CLK, "s2f_user1_clk", NULL, s2f_user1_mux, ARRAY_SIZE(s2f_user1_mux), 0, 0x7C,
++	  6, 0, 0, 0, 0x88, 5, 0},
++	{ AGILEX_PSI_REF_CLK, "psi_ref_clk", NULL, psi_mux, ARRAY_SIZE(psi_mux), 0, 0x7C,
++	  7, 0, 0, 0, 0x88, 6, 0},
+ 	{ AGILEX_USB_CLK, "usb_clk", "l4_mp_clk", NULL, 1, 0, 0x7C,
+ 	  8, 0, 0, 0, 0, 0, 0},
+ 	{ AGILEX_SPI_M_CLK, "spi_m_clk", "l4_mp_clk", NULL, 1, 0, 0x7C,
+@@ -366,7 +401,7 @@ static int agilex_clk_register_gate(const struct stratix10_gate_clock *clks,
+ 	int i;
+ 
+ 	for (i = 0; i < nums; i++) {
+-		hw_clk = s10_register_gate(&clks[i], base);
++		hw_clk = agilex_register_gate(&clks[i], base);
+ 		if (IS_ERR(hw_clk)) {
+ 			pr_err("%s: failed to register clock %s\n",
+ 			       __func__, clks[i].name);
+diff --git a/drivers/clk/socfpga/clk-gate-s10.c b/drivers/clk/socfpga/clk-gate-s10.c
+index b84f2627551e1..32567795765fb 100644
+--- a/drivers/clk/socfpga/clk-gate-s10.c
++++ b/drivers/clk/socfpga/clk-gate-s10.c
+@@ -11,6 +11,13 @@
+ #define SOCFPGA_CS_PDBG_CLK	"cs_pdbg_clk"
+ #define to_socfpga_gate_clk(p) container_of(p, struct socfpga_gate_clk, hw.hw)
+ 
++#define SOCFPGA_EMAC0_CLK		"emac0_clk"
++#define SOCFPGA_EMAC1_CLK		"emac1_clk"
++#define SOCFPGA_EMAC2_CLK		"emac2_clk"
++#define AGILEX_BYPASS_OFFSET		0xC
++#define STRATIX10_BYPASS_OFFSET		0x2C
++#define BOOTCLK_BYPASS			2
++
+ static unsigned long socfpga_gate_clk_recalc_rate(struct clk_hw *hwclk,
+ 						  unsigned long parent_rate)
+ {
+@@ -44,14 +51,61 @@ static unsigned long socfpga_dbg_clk_recalc_rate(struct clk_hw *hwclk,
+ static u8 socfpga_gate_get_parent(struct clk_hw *hwclk)
+ {
+ 	struct socfpga_gate_clk *socfpgaclk = to_socfpga_gate_clk(hwclk);
+-	u32 mask;
++	u32 mask, second_bypass;
++	u8 parent = 0;
++	const char *name = clk_hw_get_name(hwclk);
++
++	if (socfpgaclk->bypass_reg) {
++		mask = (0x1 << socfpgaclk->bypass_shift);
++		parent = ((readl(socfpgaclk->bypass_reg) & mask) >>
++			  socfpgaclk->bypass_shift);
++	}
++
++	if (streq(name, SOCFPGA_EMAC0_CLK) ||
++	    streq(name, SOCFPGA_EMAC1_CLK) ||
++	    streq(name, SOCFPGA_EMAC2_CLK)) {
++		second_bypass = readl(socfpgaclk->bypass_reg -
++				      STRATIX10_BYPASS_OFFSET);
++		/* EMACA bypass to bootclk @0xB0 offset */
++		if (second_bypass & 0x1)
++			if (parent == 0) /* only applicable if parent is maca */
++				parent = BOOTCLK_BYPASS;
++
++		if (second_bypass & 0x2)
++			if (parent == 1) /* only applicable if parent is macb */
++				parent = BOOTCLK_BYPASS;
++	}
++	return parent;
++}
++
++static u8 socfpga_agilex_gate_get_parent(struct clk_hw *hwclk)
++{
++	struct socfpga_gate_clk *socfpgaclk = to_socfpga_gate_clk(hwclk);
++	u32 mask, second_bypass;
+ 	u8 parent = 0;
++	const char *name = clk_hw_get_name(hwclk);
+ 
+ 	if (socfpgaclk->bypass_reg) {
+ 		mask = (0x1 << socfpgaclk->bypass_shift);
+ 		parent = ((readl(socfpgaclk->bypass_reg) & mask) >>
+ 			  socfpgaclk->bypass_shift);
+ 	}
++
++	if (streq(name, SOCFPGA_EMAC0_CLK) ||
++	    streq(name, SOCFPGA_EMAC1_CLK) ||
++	    streq(name, SOCFPGA_EMAC2_CLK)) {
++		second_bypass = readl(socfpgaclk->bypass_reg -
++				      AGILEX_BYPASS_OFFSET);
++		/* EMACA bypass to bootclk @0x88 offset */
++		if (second_bypass & 0x1)
++			if (parent == 0) /* only applicable if parent is maca */
++				parent = BOOTCLK_BYPASS;
++
++		if (second_bypass & 0x2)
++			if (parent == 1) /* only applicable if parent is macb */
++				parent = BOOTCLK_BYPASS;
++	}
++
+ 	return parent;
+ }
+ 
+@@ -60,6 +114,11 @@ static struct clk_ops gateclk_ops = {
+ 	.get_parent = socfpga_gate_get_parent,
+ };
+ 
++static const struct clk_ops agilex_gateclk_ops = {
++	.recalc_rate = socfpga_gate_clk_recalc_rate,
++	.get_parent = socfpga_agilex_gate_get_parent,
++};
++
+ static const struct clk_ops dbgclk_ops = {
+ 	.recalc_rate = socfpga_dbg_clk_recalc_rate,
+ 	.get_parent = socfpga_gate_get_parent,
+@@ -122,3 +181,61 @@ struct clk_hw *s10_register_gate(const struct stratix10_gate_clock *clks, void _
+ 	}
+ 	return hw_clk;
+ }
++
++struct clk_hw *agilex_register_gate(const struct stratix10_gate_clock *clks, void __iomem *regbase)
++{
++	struct clk_hw *hw_clk;
++	struct socfpga_gate_clk *socfpga_clk;
++	struct clk_init_data init;
++	const char *parent_name = clks->parent_name;
++	int ret;
++
++	socfpga_clk = kzalloc(sizeof(*socfpga_clk), GFP_KERNEL);
++	if (!socfpga_clk)
++		return NULL;
++
++	socfpga_clk->hw.reg = regbase + clks->gate_reg;
++	socfpga_clk->hw.bit_idx = clks->gate_idx;
++
++	gateclk_ops.enable = clk_gate_ops.enable;
++	gateclk_ops.disable = clk_gate_ops.disable;
++
++	socfpga_clk->fixed_div = clks->fixed_div;
++
++	if (clks->div_reg)
++		socfpga_clk->div_reg = regbase + clks->div_reg;
++	else
++		socfpga_clk->div_reg = NULL;
++
++	socfpga_clk->width = clks->div_width;
++	socfpga_clk->shift = clks->div_offset;
++
++	if (clks->bypass_reg)
++		socfpga_clk->bypass_reg = regbase + clks->bypass_reg;
++	else
++		socfpga_clk->bypass_reg = NULL;
++	socfpga_clk->bypass_shift = clks->bypass_shift;
++
++	if (streq(clks->name, "cs_pdbg_clk"))
++		init.ops = &dbgclk_ops;
++	else
++		init.ops = &agilex_gateclk_ops;
++
++	init.name = clks->name;
++	init.flags = clks->flags;
++
++	init.num_parents = clks->num_parents;
++	init.parent_names = parent_name ? &parent_name : NULL;
++	if (init.parent_names == NULL)
++		init.parent_data = clks->parent_data;
++	socfpga_clk->hw.hw.init = &init;
++
++	hw_clk = &socfpga_clk->hw.hw;
++
++	ret = clk_hw_register(NULL, &socfpga_clk->hw.hw);
++	if (ret) {
++		kfree(socfpga_clk);
++		return ERR_PTR(ret);
++	}
++	return hw_clk;
++}
+diff --git a/drivers/clk/socfpga/clk-periph-s10.c b/drivers/clk/socfpga/clk-periph-s10.c
+index e5a5fef76df70..cbabde2b476bf 100644
+--- a/drivers/clk/socfpga/clk-periph-s10.c
++++ b/drivers/clk/socfpga/clk-periph-s10.c
+@@ -64,16 +64,21 @@ static u8 clk_periclk_get_parent(struct clk_hw *hwclk)
+ {
+ 	struct socfpga_periph_clk *socfpgaclk = to_periph_clk(hwclk);
+ 	u32 clk_src, mask;
+-	u8 parent;
++	u8 parent = 0;
+ 
++	/* handle the bypass first */
+ 	if (socfpgaclk->bypass_reg) {
+ 		mask = (0x1 << socfpgaclk->bypass_shift);
+ 		parent = ((readl(socfpgaclk->bypass_reg) & mask) >>
+ 			   socfpgaclk->bypass_shift);
+-	} else {
++		if (parent)
++			return parent;
++	}
++
++	if (socfpgaclk->hw.reg) {
+ 		clk_src = readl(socfpgaclk->hw.reg);
+ 		parent = (clk_src >> CLK_MGR_FREE_SHIFT) &
+-			CLK_MGR_FREE_MASK;
++			  CLK_MGR_FREE_MASK;
+ 	}
+ 	return parent;
+ }
+diff --git a/drivers/clk/socfpga/clk-s10.c b/drivers/clk/socfpga/clk-s10.c
+index f0bd77138ecb4..b532d51faaee5 100644
+--- a/drivers/clk/socfpga/clk-s10.c
++++ b/drivers/clk/socfpga/clk-s10.c
+@@ -144,6 +144,41 @@ static const struct clk_parent_data mpu_free_mux[] = {
+ 	  .name = "f2s-free-clk", },
+ };
+ 
++static const struct clk_parent_data sdmmc_mux[] = {
++	{ .fw_name = "sdmmc_free_clk",
++	  .name = "sdmmc_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data s2f_user1_mux[] = {
++	{ .fw_name = "s2f_user1_free_clk",
++	  .name = "s2f_user1_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data psi_mux[] = {
++	{ .fw_name = "psi_ref_free_clk",
++	  .name = "psi_ref_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data gpio_db_mux[] = {
++	{ .fw_name = "gpio_db_free_clk",
++	  .name = "gpio_db_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
++static const struct clk_parent_data emac_ptp_mux[] = {
++	{ .fw_name = "emac_ptp_free_clk",
++	  .name = "emac_ptp_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
+ /* clocks in AO (always on) controller */
+ static const struct stratix10_pll_clock s10_pll_clks[] = {
+ 	{ STRATIX10_BOOT_CLK, "boot_clk", boot_mux, ARRAY_SIZE(boot_mux), 0,
+@@ -167,7 +202,7 @@ static const struct stratix10_perip_cnt_clock s10_main_perip_cnt_clks[] = {
+ 	{ STRATIX10_MPU_FREE_CLK, "mpu_free_clk", NULL, mpu_free_mux, ARRAY_SIZE(mpu_free_mux),
+ 	   0, 0x48, 0, 0, 0},
+ 	{ STRATIX10_NOC_FREE_CLK, "noc_free_clk", NULL, noc_free_mux, ARRAY_SIZE(noc_free_mux),
+-	  0, 0x4C, 0, 0, 0},
++	  0, 0x4C, 0, 0x3C, 1},
+ 	{ STRATIX10_MAIN_EMACA_CLK, "main_emaca_clk", "main_noc_base_clk", NULL, 1, 0,
+ 	  0x50, 0, 0, 0},
+ 	{ STRATIX10_MAIN_EMACB_CLK, "main_emacb_clk", "main_noc_base_clk", NULL, 1, 0,
+@@ -200,10 +235,8 @@ static const struct stratix10_perip_cnt_clock s10_main_perip_cnt_clks[] = {
+ 	  0, 0xD4, 0, 0, 0},
+ 	{ STRATIX10_PERI_PSI_REF_CLK, "peri_psi_ref_clk", "peri_noc_base_clk", NULL, 1, 0,
+ 	  0xD8, 0, 0, 0},
+-	{ STRATIX10_L4_SYS_FREE_CLK, "l4_sys_free_clk", "noc_free_clk", NULL, 1, 0,
+-	  0, 4, 0, 0},
+-	{ STRATIX10_NOC_CLK, "noc_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux),
+-	  0, 0, 0, 0x3C, 1},
++	{ STRATIX10_L4_SYS_FREE_CLK, "l4_sys_free_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0,
++	  0, 4, 0x3C, 1},
+ 	{ STRATIX10_EMAC_A_FREE_CLK, "emaca_free_clk", NULL, emaca_free_mux, ARRAY_SIZE(emaca_free_mux),
+ 	  0, 0, 2, 0xB0, 0},
+ 	{ STRATIX10_EMAC_B_FREE_CLK, "emacb_free_clk", NULL, emacb_free_mux, ARRAY_SIZE(emacb_free_mux),
+@@ -227,20 +260,20 @@ static const struct stratix10_gate_clock s10_gate_clks[] = {
+ 	  0, 0, 0, 0, 0, 0, 4},
+ 	{ STRATIX10_MPU_L2RAM_CLK, "mpu_l2ram_clk", "mpu_clk", NULL, 1, 0, 0x30,
+ 	  0, 0, 0, 0, 0, 0, 2},
+-	{ STRATIX10_L4_MAIN_CLK, "l4_main_clk", "noc_clk", NULL, 1, 0, 0x30,
+-	  1, 0x70, 0, 2, 0, 0, 0},
+-	{ STRATIX10_L4_MP_CLK, "l4_mp_clk", "noc_clk", NULL, 1, 0, 0x30,
+-	  2, 0x70, 8, 2, 0, 0, 0},
+-	{ STRATIX10_L4_SP_CLK, "l4_sp_clk", "noc_clk", NULL, 1, CLK_IS_CRITICAL, 0x30,
+-	  3, 0x70, 16, 2, 0, 0, 0},
+-	{ STRATIX10_CS_AT_CLK, "cs_at_clk", "noc_clk", NULL, 1, 0, 0x30,
+-	  4, 0x70, 24, 2, 0, 0, 0},
+-	{ STRATIX10_CS_TRACE_CLK, "cs_trace_clk", "noc_clk", NULL, 1, 0, 0x30,
+-	  4, 0x70, 26, 2, 0, 0, 0},
++	{ STRATIX10_L4_MAIN_CLK, "l4_main_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
++	  1, 0x70, 0, 2, 0x3C, 1, 0},
++	{ STRATIX10_L4_MP_CLK, "l4_mp_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
++	  2, 0x70, 8, 2, 0x3C, 1, 0},
++	{ STRATIX10_L4_SP_CLK, "l4_sp_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), CLK_IS_CRITICAL, 0x30,
++	  3, 0x70, 16, 2, 0x3C, 1, 0},
++	{ STRATIX10_CS_AT_CLK, "cs_at_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
++	  4, 0x70, 24, 2, 0x3C, 1, 0},
++	{ STRATIX10_CS_TRACE_CLK, "cs_trace_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
++	  4, 0x70, 26, 2, 0x3C, 1, 0},
+ 	{ STRATIX10_CS_PDBG_CLK, "cs_pdbg_clk", "cs_at_clk", NULL, 1, 0, 0x30,
+ 	  4, 0x70, 28, 1, 0, 0, 0},
+-	{ STRATIX10_CS_TIMER_CLK, "cs_timer_clk", "noc_clk", NULL, 1, 0, 0x30,
+-	  5, 0, 0, 0, 0, 0, 0},
++	{ STRATIX10_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x30,
++	  5, 0, 0, 0, 0x3C, 1, 0},
+ 	{ STRATIX10_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x30,
+ 	  6, 0, 0, 0, 0, 0, 0},
+ 	{ STRATIX10_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0xA4,
+@@ -249,16 +282,16 @@ static const struct stratix10_gate_clock s10_gate_clks[] = {
+ 	  1, 0, 0, 0, 0xDC, 27, 0},
+ 	{ STRATIX10_EMAC2_CLK, "emac2_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0xA4,
+ 	  2, 0, 0, 0, 0xDC, 28, 0},
+-	{ STRATIX10_EMAC_PTP_CLK, "emac_ptp_clk", "emac_ptp_free_clk", NULL, 1, 0, 0xA4,
+-	  3, 0, 0, 0, 0, 0, 0},
+-	{ STRATIX10_GPIO_DB_CLK, "gpio_db_clk", "gpio_db_free_clk", NULL, 1, 0, 0xA4,
+-	  4, 0xE0, 0, 16, 0, 0, 0},
+-	{ STRATIX10_SDMMC_CLK, "sdmmc_clk", "sdmmc_free_clk", NULL, 1, 0, 0xA4,
+-	  5, 0, 0, 0, 0, 0, 4},
+-	{ STRATIX10_S2F_USER1_CLK, "s2f_user1_clk", "s2f_user1_free_clk", NULL, 1, 0, 0xA4,
+-	  6, 0, 0, 0, 0, 0, 0},
+-	{ STRATIX10_PSI_REF_CLK, "psi_ref_clk", "psi_ref_free_clk", NULL, 1, 0, 0xA4,
+-	  7, 0, 0, 0, 0, 0, 0},
++	{ STRATIX10_EMAC_PTP_CLK, "emac_ptp_clk", NULL, emac_ptp_mux, ARRAY_SIZE(emac_ptp_mux), 0, 0xA4,
++	  3, 0, 0, 0, 0xB0, 2, 0},
++	{ STRATIX10_GPIO_DB_CLK, "gpio_db_clk", NULL, gpio_db_mux, ARRAY_SIZE(gpio_db_mux), 0, 0xA4,
++	  4, 0xE0, 0, 16, 0xB0, 3, 0},
++	{ STRATIX10_SDMMC_CLK, "sdmmc_clk", NULL, sdmmc_mux, ARRAY_SIZE(sdmmc_mux), 0, 0xA4,
++	  5, 0, 0, 0, 0xB0, 4, 4},
++	{ STRATIX10_S2F_USER1_CLK, "s2f_user1_clk", NULL, s2f_user1_mux, ARRAY_SIZE(s2f_user1_mux), 0, 0xA4,
++	  6, 0, 0, 0, 0xB0, 5, 0},
++	{ STRATIX10_PSI_REF_CLK, "psi_ref_clk", NULL, psi_mux, ARRAY_SIZE(psi_mux), 0, 0xA4,
++	  7, 0, 0, 0, 0xB0, 6, 0},
+ 	{ STRATIX10_USB_CLK, "usb_clk", "l4_mp_clk", NULL, 1, 0, 0xA4,
+ 	  8, 0, 0, 0, 0, 0, 0},
+ 	{ STRATIX10_SPI_M_CLK, "spi_m_clk", "l4_mp_clk", NULL, 1, 0, 0xA4,
+diff --git a/drivers/clk/socfpga/stratix10-clk.h b/drivers/clk/socfpga/stratix10-clk.h
+index 61eaf3a41fbb8..75234e0783e1c 100644
+--- a/drivers/clk/socfpga/stratix10-clk.h
++++ b/drivers/clk/socfpga/stratix10-clk.h
+@@ -85,4 +85,6 @@ struct clk_hw *s10_register_cnt_periph(const struct stratix10_perip_cnt_clock *c
+ 				    void __iomem *reg);
+ struct clk_hw *s10_register_gate(const struct stratix10_gate_clock *clks,
+ 			      void __iomem *reg);
++struct clk_hw *agilex_register_gate(const struct stratix10_gate_clock *clks,
++			      void __iomem *reg);
+ #endif	/* __STRATIX10_CLK_H */
+diff --git a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
+index a774942cb153a..f49724a22540e 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
++++ b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
+@@ -817,10 +817,10 @@ static void __init sun8i_v3_v3s_ccu_init(struct device_node *node,
+ 		return;
+ 	}
+ 
+-	/* Force the PLL-Audio-1x divider to 4 */
++	/* Force the PLL-Audio-1x divider to 1 */
+ 	val = readl(reg + SUN8I_V3S_PLL_AUDIO_REG);
+ 	val &= ~GENMASK(19, 16);
+-	writel(val | (3 << 16), reg + SUN8I_V3S_PLL_AUDIO_REG);
++	writel(val, reg + SUN8I_V3S_PLL_AUDIO_REG);
+ 
+ 	sunxi_ccu_probe(node, reg, ccu_desc);
+ }
+diff --git a/drivers/clk/tegra/clk-tegra30.c b/drivers/clk/tegra/clk-tegra30.c
+index 16dbf83d2f62a..a33688b2359e5 100644
+--- a/drivers/clk/tegra/clk-tegra30.c
++++ b/drivers/clk/tegra/clk-tegra30.c
+@@ -1245,7 +1245,7 @@ static struct tegra_clk_init_table init_table[] __initdata = {
+ 	{ TEGRA30_CLK_GR3D, TEGRA30_CLK_PLL_C, 300000000, 0 },
+ 	{ TEGRA30_CLK_GR3D2, TEGRA30_CLK_PLL_C, 300000000, 0 },
+ 	{ TEGRA30_CLK_PLL_U, TEGRA30_CLK_CLK_MAX, 480000000, 0 },
+-	{ TEGRA30_CLK_VDE, TEGRA30_CLK_PLL_C, 600000000, 0 },
++	{ TEGRA30_CLK_VDE, TEGRA30_CLK_PLL_C, 300000000, 0 },
+ 	{ TEGRA30_CLK_SPDIF_IN_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
+ 	{ TEGRA30_CLK_I2S0_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
+ 	{ TEGRA30_CLK_I2S1_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 },
+diff --git a/drivers/clk/zynqmp/clk-mux-zynqmp.c b/drivers/clk/zynqmp/clk-mux-zynqmp.c
+index 06194149be831..d576c900dee06 100644
+--- a/drivers/clk/zynqmp/clk-mux-zynqmp.c
++++ b/drivers/clk/zynqmp/clk-mux-zynqmp.c
+@@ -38,7 +38,7 @@ struct zynqmp_clk_mux {
+  * zynqmp_clk_mux_get_parent() - Get parent of clock
+  * @hw:		handle between common and hardware-specific interfaces
+  *
+- * Return: Parent index
++ * Return: Parent index on success or number of parents in case of error
+  */
+ static u8 zynqmp_clk_mux_get_parent(struct clk_hw *hw)
+ {
+@@ -50,9 +50,15 @@ static u8 zynqmp_clk_mux_get_parent(struct clk_hw *hw)
+ 
+ 	ret = zynqmp_pm_clock_getparent(clk_id, &val);
+ 
+-	if (ret)
++	if (ret) {
+ 		pr_warn_once("%s() getparent failed for clock: %s, ret = %d\n",
+ 			     __func__, clk_name, ret);
++		/*
++		 * clk_core_get_parent_by_index() treats num_parents as an
++		 * invalid index, which is exactly what we want to return here
++		 */
++		return clk_hw_get_num_parents(hw);
++	}
+ 
+ 	return val;
+ }
+diff --git a/drivers/clk/zynqmp/pll.c b/drivers/clk/zynqmp/pll.c
+index abe6afbf3407b..e025581f0d54a 100644
+--- a/drivers/clk/zynqmp/pll.c
++++ b/drivers/clk/zynqmp/pll.c
+@@ -31,8 +31,9 @@ struct zynqmp_pll {
+ #define PS_PLL_VCO_MAX 3000000000UL
+ 
+ enum pll_mode {
+-	PLL_MODE_INT,
+-	PLL_MODE_FRAC,
++	PLL_MODE_INT = 0,
++	PLL_MODE_FRAC = 1,
++	PLL_MODE_ERROR = 2,
+ };
+ 
+ #define FRAC_OFFSET 0x8
+@@ -54,9 +55,11 @@ static inline enum pll_mode zynqmp_pll_get_mode(struct clk_hw *hw)
+ 	int ret;
+ 
+ 	ret = zynqmp_pm_get_pll_frac_mode(clk_id, ret_payload);
+-	if (ret)
++	if (ret) {
+ 		pr_warn_once("%s() PLL get frac mode failed for %s, ret = %d\n",
+ 			     __func__, clk_name, ret);
++		return PLL_MODE_ERROR;
++	}
+ 
+ 	return ret_payload[1];
+ }
+@@ -126,7 +129,7 @@ static long zynqmp_pll_round_rate(struct clk_hw *hw, unsigned long rate,
+  * @hw:			Handle between common and hardware-specific interfaces
+  * @parent_rate:	Clock frequency of parent clock
+  *
+- * Return: Current clock frequency
++ * Return: Current clock frequency or 0 in case of error
+  */
+ static unsigned long zynqmp_pll_recalc_rate(struct clk_hw *hw,
+ 					    unsigned long parent_rate)
+@@ -138,14 +141,21 @@ static unsigned long zynqmp_pll_recalc_rate(struct clk_hw *hw,
+ 	unsigned long rate, frac;
+ 	u32 ret_payload[PAYLOAD_ARG_CNT];
+ 	int ret;
++	enum pll_mode mode;
+ 
+ 	ret = zynqmp_pm_clock_getdivider(clk_id, &fbdiv);
+-	if (ret)
++	if (ret) {
+ 		pr_warn_once("%s() get divider failed for %s, ret = %d\n",
+ 			     __func__, clk_name, ret);
++		return 0ul;
++	}
++
++	mode = zynqmp_pll_get_mode(hw);
++	if (mode == PLL_MODE_ERROR)
++		return 0ul;
+ 
+ 	rate =  parent_rate * fbdiv;
+-	if (zynqmp_pll_get_mode(hw) == PLL_MODE_FRAC) {
++	if (mode == PLL_MODE_FRAC) {
+ 		zynqmp_pm_get_pll_frac_data(clk_id, ret_payload);
+ 		data = ret_payload[1];
+ 		frac = (parent_rate * data) / FRAC_DIV;
+diff --git a/drivers/clocksource/timer-ti-dm.c b/drivers/clocksource/timer-ti-dm.c
+index 33eeabf9c3d12..e5c631f1b5cbe 100644
+--- a/drivers/clocksource/timer-ti-dm.c
++++ b/drivers/clocksource/timer-ti-dm.c
+@@ -78,6 +78,9 @@ static void omap_dm_timer_write_reg(struct omap_dm_timer *timer, u32 reg,
+ 
+ static void omap_timer_restore_context(struct omap_dm_timer *timer)
+ {
++	__omap_dm_timer_write(timer, OMAP_TIMER_OCP_CFG_OFFSET,
++			      timer->context.ocp_cfg, 0);
++
+ 	omap_dm_timer_write_reg(timer, OMAP_TIMER_WAKEUP_EN_REG,
+ 				timer->context.twer);
+ 	omap_dm_timer_write_reg(timer, OMAP_TIMER_COUNTER_REG,
+@@ -95,6 +98,9 @@ static void omap_timer_restore_context(struct omap_dm_timer *timer)
+ 
+ static void omap_timer_save_context(struct omap_dm_timer *timer)
+ {
++	timer->context.ocp_cfg =
++		__omap_dm_timer_read(timer, OMAP_TIMER_OCP_CFG_OFFSET, 0);
++
+ 	timer->context.tclr =
+ 			omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
+ 	timer->context.twer =
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 802abc925b2ae..cbab834c37a03 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1367,9 +1367,14 @@ static int cpufreq_online(unsigned int cpu)
+ 			goto out_free_policy;
+ 		}
+ 
++		/*
++		 * The initialization has succeeded and the policy is online.
++		 * If there is a problem with its frequency table, take it
++		 * offline and drop it.
++		 */
+ 		ret = cpufreq_table_validate_and_sort(policy);
+ 		if (ret)
+-			goto out_exit_policy;
++			goto out_offline_policy;
+ 
+ 		/* related_cpus should at least include policy->cpus. */
+ 		cpumask_copy(policy->related_cpus, policy->cpus);
+@@ -1515,6 +1520,10 @@ out_destroy_policy:
+ 
+ 	up_write(&policy->rwsem);
+ 
++out_offline_policy:
++	if (cpufreq_driver->offline)
++		cpufreq_driver->offline(policy);
++
+ out_exit_policy:
+ 	if (cpufreq_driver->exit)
+ 		cpufreq_driver->exit(policy);
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_isr.c b/drivers/crypto/cavium/nitrox/nitrox_isr.c
+index c288c4b51783d..f19e520da6d0c 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_isr.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_isr.c
+@@ -307,6 +307,10 @@ int nitrox_register_interrupts(struct nitrox_device *ndev)
+ 	 * Entry 192: NPS_CORE_INT_ACTIVE
+ 	 */
+ 	nr_vecs = pci_msix_vec_count(pdev);
++	if (nr_vecs < 0) {
++		dev_err(DEV(ndev), "Error in getting vec count %d\n", nr_vecs);
++		return nr_vecs;
++	}
+ 
+ 	/* Enable MSI-X */
+ 	ret = pci_alloc_irq_vectors(pdev, nr_vecs, nr_vecs, PCI_IRQ_MSIX);
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 3506b2050fb86..91808402e0bf2 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -43,6 +43,10 @@ static int psp_probe_timeout = 5;
+ module_param(psp_probe_timeout, int, 0644);
+ MODULE_PARM_DESC(psp_probe_timeout, " default timeout value, in seconds, during PSP device probe");
+ 
++MODULE_FIRMWARE("amd/amd_sev_fam17h_model0xh.sbin"); /* 1st gen EPYC */
++MODULE_FIRMWARE("amd/amd_sev_fam17h_model3xh.sbin"); /* 2nd gen EPYC */
++MODULE_FIRMWARE("amd/amd_sev_fam19h_model0xh.sbin"); /* 3rd gen EPYC */
++
+ static bool psp_dead;
+ static int psp_timeout;
+ 
+diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
+index f468594ef8afa..6fb6ba35f89d4 100644
+--- a/drivers/crypto/ccp/sp-pci.c
++++ b/drivers/crypto/ccp/sp-pci.c
+@@ -222,7 +222,7 @@ static int sp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		if (ret) {
+ 			dev_err(dev, "dma_set_mask_and_coherent failed (%d)\n",
+ 				ret);
+-			goto e_err;
++			goto free_irqs;
+ 		}
+ 	}
+ 
+@@ -230,10 +230,12 @@ static int sp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	ret = sp_init(sp);
+ 	if (ret)
+-		goto e_err;
++		goto free_irqs;
+ 
+ 	return 0;
+ 
++free_irqs:
++	sp_free_irqs(sp);
+ e_err:
+ 	dev_notice(dev, "initialization failed\n");
+ 	return ret;
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+index a380087c83f77..782ddffa5d904 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
+@@ -298,6 +298,8 @@ static void hpre_hw_data_clr_all(struct hpre_ctx *ctx,
+ 	dma_addr_t tmp;
+ 
+ 	tmp = le64_to_cpu(sqe->in);
++	if (unlikely(dma_mapping_error(dev, tmp)))
++		return;
+ 
+ 	if (src) {
+ 		if (req->src)
+@@ -307,6 +309,8 @@ static void hpre_hw_data_clr_all(struct hpre_ctx *ctx,
+ 	}
+ 
+ 	tmp = le64_to_cpu(sqe->out);
++	if (unlikely(dma_mapping_error(dev, tmp)))
++		return;
+ 
+ 	if (req->dst) {
+ 		if (dst)
+@@ -524,6 +528,8 @@ static int hpre_msg_request_set(struct hpre_ctx *ctx, void *req, bool is_rsa)
+ 		msg->key = cpu_to_le64(ctx->dh.dma_xa_p);
+ 	}
+ 
++	msg->in = cpu_to_le64(DMA_MAPPING_ERROR);
++	msg->out = cpu_to_le64(DMA_MAPPING_ERROR);
+ 	msg->dw0 |= cpu_to_le32(0x1 << HPRE_SQE_DONE_SHIFT);
+ 	msg->task_len1 = (ctx->key_sz >> HPRE_BITS_2_BYTES_SHIFT) - 1;
+ 	h_req->ctx = ctx;
+@@ -1372,11 +1378,15 @@ static void hpre_ecdh_hw_data_clr_all(struct hpre_ctx *ctx,
+ 	dma_addr_t dma;
+ 
+ 	dma = le64_to_cpu(sqe->in);
++	if (unlikely(dma_mapping_error(dev, dma)))
++		return;
+ 
+ 	if (src && req->src)
+ 		dma_free_coherent(dev, ctx->key_sz << 2, req->src, dma);
+ 
+ 	dma = le64_to_cpu(sqe->out);
++	if (unlikely(dma_mapping_error(dev, dma)))
++		return;
+ 
+ 	if (req->dst)
+ 		dma_free_coherent(dev, ctx->key_sz << 1, req->dst, dma);
+@@ -1431,6 +1441,8 @@ static int hpre_ecdh_msg_request_set(struct hpre_ctx *ctx,
+ 	h_req->areq.ecdh = req;
+ 	msg = &h_req->req;
+ 	memset(msg, 0, sizeof(*msg));
++	msg->in = cpu_to_le64(DMA_MAPPING_ERROR);
++	msg->out = cpu_to_le64(DMA_MAPPING_ERROR);
+ 	msg->key = cpu_to_le64(ctx->ecdh.dma_p);
+ 
+ 	msg->dw0 |= cpu_to_le32(0x1U << HPRE_SQE_DONE_SHIFT);
+@@ -1667,11 +1679,15 @@ static void hpre_curve25519_hw_data_clr_all(struct hpre_ctx *ctx,
+ 	dma_addr_t dma;
+ 
+ 	dma = le64_to_cpu(sqe->in);
++	if (unlikely(dma_mapping_error(dev, dma)))
++		return;
+ 
+ 	if (src && req->src)
+ 		dma_free_coherent(dev, ctx->key_sz, req->src, dma);
+ 
+ 	dma = le64_to_cpu(sqe->out);
++	if (unlikely(dma_mapping_error(dev, dma)))
++		return;
+ 
+ 	if (req->dst)
+ 		dma_free_coherent(dev, ctx->key_sz, req->dst, dma);
+@@ -1722,6 +1738,8 @@ static int hpre_curve25519_msg_request_set(struct hpre_ctx *ctx,
+ 	h_req->areq.curve25519 = req;
+ 	msg = &h_req->req;
+ 	memset(msg, 0, sizeof(*msg));
++	msg->in = cpu_to_le64(DMA_MAPPING_ERROR);
++	msg->out = cpu_to_le64(DMA_MAPPING_ERROR);
+ 	msg->key = cpu_to_le64(ctx->curve25519.dma_p);
+ 
+ 	msg->dw0 |= cpu_to_le32(0x1U << HPRE_SQE_DONE_SHIFT);
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index 133aede8bf078..b43fad8b9e8d4 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -1541,11 +1541,11 @@ static struct skcipher_alg sec_skciphers[] = {
+ 			 AES_BLOCK_SIZE, AES_BLOCK_SIZE)
+ 
+ 	SEC_SKCIPHER_ALG("ecb(des3_ede)", sec_setkey_3des_ecb,
+-			 SEC_DES3_2KEY_SIZE, SEC_DES3_3KEY_SIZE,
++			 SEC_DES3_3KEY_SIZE, SEC_DES3_3KEY_SIZE,
+ 			 DES3_EDE_BLOCK_SIZE, 0)
+ 
+ 	SEC_SKCIPHER_ALG("cbc(des3_ede)", sec_setkey_3des_cbc,
+-			 SEC_DES3_2KEY_SIZE, SEC_DES3_3KEY_SIZE,
++			 SEC_DES3_3KEY_SIZE, SEC_DES3_3KEY_SIZE,
+ 			 DES3_EDE_BLOCK_SIZE, DES3_EDE_BLOCK_SIZE)
+ 
+ 	SEC_SKCIPHER_ALG("xts(sm4)", sec_setkey_sm4_xts,
+diff --git a/drivers/crypto/ixp4xx_crypto.c b/drivers/crypto/ixp4xx_crypto.c
+index 0616e369522e9..f577ee4afd06f 100644
+--- a/drivers/crypto/ixp4xx_crypto.c
++++ b/drivers/crypto/ixp4xx_crypto.c
+@@ -149,6 +149,8 @@ struct crypt_ctl {
+ struct ablk_ctx {
+ 	struct buffer_desc *src;
+ 	struct buffer_desc *dst;
++	u8 iv[MAX_IVLEN];
++	bool encrypt;
+ };
+ 
+ struct aead_ctx {
+@@ -330,7 +332,7 @@ static void free_buf_chain(struct device *dev, struct buffer_desc *buf,
+ 
+ 		buf1 = buf->next;
+ 		phys1 = buf->phys_next;
+-		dma_unmap_single(dev, buf->phys_next, buf->buf_len, buf->dir);
++		dma_unmap_single(dev, buf->phys_addr, buf->buf_len, buf->dir);
+ 		dma_pool_free(buffer_pool, buf, phys);
+ 		buf = buf1;
+ 		phys = phys1;
+@@ -381,6 +383,20 @@ static void one_packet(dma_addr_t phys)
+ 	case CTL_FLAG_PERFORM_ABLK: {
+ 		struct skcipher_request *req = crypt->data.ablk_req;
+ 		struct ablk_ctx *req_ctx = skcipher_request_ctx(req);
++		struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
++		unsigned int ivsize = crypto_skcipher_ivsize(tfm);
++		unsigned int offset;
++
++		if (ivsize > 0) {
++			offset = req->cryptlen - ivsize;
++			if (req_ctx->encrypt) {
++				scatterwalk_map_and_copy(req->iv, req->dst,
++							 offset, ivsize, 0);
++			} else {
++				memcpy(req->iv, req_ctx->iv, ivsize);
++				memzero_explicit(req_ctx->iv, ivsize);
++			}
++		}
+ 
+ 		if (req_ctx->dst) {
+ 			free_buf_chain(dev, req_ctx->dst, crypt->dst_buf);
+@@ -876,6 +892,7 @@ static int ablk_perform(struct skcipher_request *req, int encrypt)
+ 	struct ablk_ctx *req_ctx = skcipher_request_ctx(req);
+ 	struct buffer_desc src_hook;
+ 	struct device *dev = &pdev->dev;
++	unsigned int offset;
+ 	gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
+ 				GFP_KERNEL : GFP_ATOMIC;
+ 
+@@ -885,6 +902,7 @@ static int ablk_perform(struct skcipher_request *req, int encrypt)
+ 		return -EAGAIN;
+ 
+ 	dir = encrypt ? &ctx->encrypt : &ctx->decrypt;
++	req_ctx->encrypt = encrypt;
+ 
+ 	crypt = get_crypt_desc();
+ 	if (!crypt)
+@@ -900,6 +918,10 @@ static int ablk_perform(struct skcipher_request *req, int encrypt)
+ 
+ 	BUG_ON(ivsize && !req->iv);
+ 	memcpy(crypt->iv, req->iv, ivsize);
++	if (ivsize > 0 && !encrypt) {
++		offset = req->cryptlen - ivsize;
++		scatterwalk_map_and_copy(req_ctx->iv, req->src, offset, ivsize, 0);
++	}
+ 	if (req->src != req->dst) {
+ 		struct buffer_desc dst_hook;
+ 		crypt->mode |= NPE_OP_NOT_IN_PLACE;
+diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c
+index cc8dd3072b8b7..9b2417ebc95a0 100644
+--- a/drivers/crypto/nx/nx-842-pseries.c
++++ b/drivers/crypto/nx/nx-842-pseries.c
+@@ -538,13 +538,15 @@ static int nx842_OF_set_defaults(struct nx842_devdata *devdata)
+  * The status field indicates if the device is enabled when the status
+  * is 'okay'.  Otherwise the device driver will be disabled.
+  *
+- * @prop - struct property point containing the maxsyncop for the update
++ * @devdata: struct nx842_devdata to use for dev_info
++ * @prop: struct property pointer containing the maxsyncop for the update
+  *
+  * Returns:
+  *  0 - Device is available
+  *  -ENODEV - Device is not available
+  */
+-static int nx842_OF_upd_status(struct property *prop)
++static int nx842_OF_upd_status(struct nx842_devdata *devdata,
++			       struct property *prop)
+ {
+ 	const char *status = (const char *)prop->value;
+ 
+@@ -758,7 +760,7 @@ static int nx842_OF_upd(struct property *new_prop)
+ 		goto out;
+ 
+ 	/* Perform property updates */
+-	ret = nx842_OF_upd_status(status);
++	ret = nx842_OF_upd_status(new_devdata, status);
+ 	if (ret)
+ 		goto error_out;
+ 
+@@ -1069,6 +1071,7 @@ static const struct vio_device_id nx842_vio_driver_ids[] = {
+ 	{"ibm,compression-v1", "ibm,compression"},
+ 	{"", ""},
+ };
++MODULE_DEVICE_TABLE(vio, nx842_vio_driver_ids);
+ 
+ static struct vio_driver nx842_vio_driver = {
+ 	.name = KBUILD_MODNAME,
+diff --git a/drivers/crypto/nx/nx-aes-ctr.c b/drivers/crypto/nx/nx-aes-ctr.c
+index 13f518802343d..6120e350ff71d 100644
+--- a/drivers/crypto/nx/nx-aes-ctr.c
++++ b/drivers/crypto/nx/nx-aes-ctr.c
+@@ -118,7 +118,7 @@ static int ctr3686_aes_nx_crypt(struct skcipher_request *req)
+ 	struct nx_crypto_ctx *nx_ctx = crypto_skcipher_ctx(tfm);
+ 	u8 iv[16];
+ 
+-	memcpy(iv, nx_ctx->priv.ctr.nonce, CTR_RFC3686_IV_SIZE);
++	memcpy(iv, nx_ctx->priv.ctr.nonce, CTR_RFC3686_NONCE_SIZE);
+ 	memcpy(iv + CTR_RFC3686_NONCE_SIZE, req->iv, CTR_RFC3686_IV_SIZE);
+ 	iv[12] = iv[13] = iv[14] = 0;
+ 	iv[15] = 1;
+diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
+index ae0d320d3c60d..dd53ad9987b0d 100644
+--- a/drivers/crypto/omap-sham.c
++++ b/drivers/crypto/omap-sham.c
+@@ -372,7 +372,7 @@ static int omap_sham_hw_init(struct omap_sham_dev *dd)
+ {
+ 	int err;
+ 
+-	err = pm_runtime_get_sync(dd->dev);
++	err = pm_runtime_resume_and_get(dd->dev);
+ 	if (err < 0) {
+ 		dev_err(dd->dev, "failed to get sync: %d\n", err);
+ 		return err;
+@@ -2244,7 +2244,7 @@ static int omap_sham_suspend(struct device *dev)
+ 
+ static int omap_sham_resume(struct device *dev)
+ {
+-	int err = pm_runtime_get_sync(dev);
++	int err = pm_runtime_resume_and_get(dev);
+ 	if (err < 0) {
+ 		dev_err(dev, "failed to get sync: %d\n", err);
+ 		return err;
+diff --git a/drivers/crypto/qat/qat_common/qat_hal.c b/drivers/crypto/qat/qat_common/qat_hal.c
+index bd3028126cbe6..069f51621f0e8 100644
+--- a/drivers/crypto/qat/qat_common/qat_hal.c
++++ b/drivers/crypto/qat/qat_common/qat_hal.c
+@@ -1417,7 +1417,11 @@ static int qat_hal_put_rel_wr_xfer(struct icp_qat_fw_loader_handle *handle,
+ 		pr_err("QAT: bad xfrAddr=0x%x\n", xfr_addr);
+ 		return -EINVAL;
+ 	}
+-	qat_hal_rd_rel_reg(handle, ae, ctx, ICP_GPB_REL, gprnum, &gprval);
++	status = qat_hal_rd_rel_reg(handle, ae, ctx, ICP_GPB_REL, gprnum, &gprval);
++	if (status) {
++		pr_err("QAT: failed to read register\n");
++		return status;
++	}
+ 	gpr_addr = qat_hal_get_reg_addr(ICP_GPB_REL, gprnum);
+ 	data16low = 0xffff & data;
+ 	data16hi = 0xffff & (data >> 0x10);
+diff --git a/drivers/crypto/qat/qat_common/qat_uclo.c b/drivers/crypto/qat/qat_common/qat_uclo.c
+index 1fb5fc852f6b8..6d95160e451e5 100644
+--- a/drivers/crypto/qat/qat_common/qat_uclo.c
++++ b/drivers/crypto/qat/qat_common/qat_uclo.c
+@@ -342,7 +342,6 @@ static int qat_uclo_init_umem_seg(struct icp_qat_fw_loader_handle *handle,
+ 	return 0;
+ }
+ 
+-#define ICP_DH895XCC_PESRAM_BAR_SIZE 0x80000
+ static int qat_uclo_init_ae_memory(struct icp_qat_fw_loader_handle *handle,
+ 				   struct icp_qat_uof_initmem *init_mem)
+ {
+diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
+index c0a0d8c4fce19..8ff10928f581d 100644
+--- a/drivers/crypto/qce/skcipher.c
++++ b/drivers/crypto/qce/skcipher.c
+@@ -72,7 +72,7 @@ qce_skcipher_async_req_handle(struct crypto_async_request *async_req)
+ 	struct scatterlist *sg;
+ 	bool diff_dst;
+ 	gfp_t gfp;
+-	int ret;
++	int dst_nents, src_nents, ret;
+ 
+ 	rctx->iv = req->iv;
+ 	rctx->ivsize = crypto_skcipher_ivsize(skcipher);
+@@ -123,21 +123,26 @@ qce_skcipher_async_req_handle(struct crypto_async_request *async_req)
+ 	sg_mark_end(sg);
+ 	rctx->dst_sg = rctx->dst_tbl.sgl;
+ 
+-	ret = dma_map_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst);
+-	if (ret < 0)
++	dst_nents = dma_map_sg(qce->dev, rctx->dst_sg, rctx->dst_nents, dir_dst);
++	if (dst_nents < 0) {
++		ret = dst_nents;
+ 		goto error_free;
++	}
+ 
+ 	if (diff_dst) {
+-		ret = dma_map_sg(qce->dev, req->src, rctx->src_nents, dir_src);
+-		if (ret < 0)
++		src_nents = dma_map_sg(qce->dev, req->src, rctx->src_nents, dir_src);
++		if (src_nents < 0) {
++			ret = src_nents;
+ 			goto error_unmap_dst;
++		}
+ 		rctx->src_sg = req->src;
+ 	} else {
+ 		rctx->src_sg = rctx->dst_sg;
++		src_nents = dst_nents - 1;
+ 	}
+ 
+-	ret = qce_dma_prep_sgs(&qce->dma, rctx->src_sg, rctx->src_nents,
+-			       rctx->dst_sg, rctx->dst_nents,
++	ret = qce_dma_prep_sgs(&qce->dma, rctx->src_sg, src_nents,
++			       rctx->dst_sg, dst_nents,
+ 			       qce_skcipher_done, async_req);
+ 	if (ret)
+ 		goto error_unmap_src;
+diff --git a/drivers/crypto/sa2ul.c b/drivers/crypto/sa2ul.c
+index 1c6929fb3a131..9f077ec9dbb7f 100644
+--- a/drivers/crypto/sa2ul.c
++++ b/drivers/crypto/sa2ul.c
+@@ -2300,9 +2300,9 @@ static int sa_dma_init(struct sa_crypto_data *dd)
+ 
+ 	dd->dma_rx2 = dma_request_chan(dd->dev, "rx2");
+ 	if (IS_ERR(dd->dma_rx2)) {
+-		dma_release_channel(dd->dma_rx1);
+-		return dev_err_probe(dd->dev, PTR_ERR(dd->dma_rx2),
+-				     "Unable to request rx2 DMA channel\n");
++		ret = dev_err_probe(dd->dev, PTR_ERR(dd->dma_rx2),
++				    "Unable to request rx2 DMA channel\n");
++		goto err_dma_rx2;
+ 	}
+ 
+ 	dd->dma_tx = dma_request_chan(dd->dev, "tx");
+@@ -2323,28 +2323,31 @@ static int sa_dma_init(struct sa_crypto_data *dd)
+ 	if (ret) {
+ 		dev_err(dd->dev, "can't configure IN dmaengine slave: %d\n",
+ 			ret);
+-		return ret;
++		goto err_dma_config;
+ 	}
+ 
+ 	ret = dmaengine_slave_config(dd->dma_rx2, &cfg);
+ 	if (ret) {
+ 		dev_err(dd->dev, "can't configure IN dmaengine slave: %d\n",
+ 			ret);
+-		return ret;
++		goto err_dma_config;
+ 	}
+ 
+ 	ret = dmaengine_slave_config(dd->dma_tx, &cfg);
+ 	if (ret) {
+ 		dev_err(dd->dev, "can't configure OUT dmaengine slave: %d\n",
+ 			ret);
+-		return ret;
++		goto err_dma_config;
+ 	}
+ 
+ 	return 0;
+ 
++err_dma_config:
++	dma_release_channel(dd->dma_tx);
+ err_dma_tx:
+-	dma_release_channel(dd->dma_rx1);
+ 	dma_release_channel(dd->dma_rx2);
++err_dma_rx2:
++	dma_release_channel(dd->dma_rx1);
+ 
+ 	return ret;
+ }
+@@ -2385,7 +2388,6 @@ MODULE_DEVICE_TABLE(of, of_match);
+ 
+ static int sa_ul_probe(struct platform_device *pdev)
+ {
+-	const struct of_device_id *match;
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *node = dev->of_node;
+ 	struct resource *res;
+@@ -2397,6 +2399,10 @@ static int sa_ul_probe(struct platform_device *pdev)
+ 	if (!dev_data)
+ 		return -ENOMEM;
+ 
++	dev_data->match_data = of_device_get_match_data(dev);
++	if (!dev_data->match_data)
++		return -ENODEV;
++
+ 	sa_k3_dev = dev;
+ 	dev_data->dev = dev;
+ 	dev_data->pdev = pdev;
+@@ -2408,20 +2414,14 @@ static int sa_ul_probe(struct platform_device *pdev)
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "%s: failed to get sync: %d\n", __func__,
+ 			ret);
++		pm_runtime_disable(dev);
+ 		return ret;
+ 	}
+ 
+ 	sa_init_mem(dev_data);
+ 	ret = sa_dma_init(dev_data);
+ 	if (ret)
+-		goto disable_pm_runtime;
+-
+-	match = of_match_node(of_match, dev->of_node);
+-	if (!match) {
+-		dev_err(dev, "No compatible match found\n");
+-		return -ENODEV;
+-	}
+-	dev_data->match_data = match->data;
++		goto destroy_dma_pool;
+ 
+ 	spin_lock_init(&dev_data->scid_lock);
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+@@ -2454,9 +2454,9 @@ release_dma:
+ 	dma_release_channel(dev_data->dma_rx1);
+ 	dma_release_channel(dev_data->dma_tx);
+ 
++destroy_dma_pool:
+ 	dma_pool_destroy(dev_data->sc_pool);
+ 
+-disable_pm_runtime:
+ 	pm_runtime_put_sync(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 
+diff --git a/drivers/crypto/ux500/hash/hash_core.c b/drivers/crypto/ux500/hash/hash_core.c
+index ecb7412e84e3e..51a6e1a424349 100644
+--- a/drivers/crypto/ux500/hash/hash_core.c
++++ b/drivers/crypto/ux500/hash/hash_core.c
+@@ -1011,6 +1011,7 @@ static int hash_hw_final(struct ahash_request *req)
+ 			goto out;
+ 		}
+ 	} else if (req->nbytes == 0 && ctx->keylen > 0) {
++		ret = -EPERM;
+ 		dev_err(device_data->dev, "%s: Empty message with keylength > 0, NOT supported\n",
+ 			__func__);
+ 		goto out;
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index fe08c46642f7c..28f3e0ba6cdd9 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -823,6 +823,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
+ 	if (devfreq->profile->timer < 0
+ 		|| devfreq->profile->timer >= DEVFREQ_TIMER_NUM) {
+ 		mutex_unlock(&devfreq->lock);
++		err = -EINVAL;
+ 		goto err_dev;
+ 	}
+ 
+diff --git a/drivers/devfreq/governor_passive.c b/drivers/devfreq/governor_passive.c
+index b094132bd20b3..fc09324a03e03 100644
+--- a/drivers/devfreq/governor_passive.c
++++ b/drivers/devfreq/governor_passive.c
+@@ -65,7 +65,7 @@ static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
+ 		dev_pm_opp_put(p_opp);
+ 
+ 		if (IS_ERR(opp))
+-			return PTR_ERR(opp);
++			goto no_required_opp;
+ 
+ 		*freq = dev_pm_opp_get_freq(opp);
+ 		dev_pm_opp_put(opp);
+@@ -73,6 +73,7 @@ static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
+ 		return 0;
+ 	}
+ 
++no_required_opp:
+ 	/*
+ 	 * Get the OPP table's index of decided frequency by governor
+ 	 * of parent device.
+diff --git a/drivers/edac/Kconfig b/drivers/edac/Kconfig
+index 1e836e320edd9..91164c5f0757d 100644
+--- a/drivers/edac/Kconfig
++++ b/drivers/edac/Kconfig
+@@ -270,7 +270,8 @@ config EDAC_PND2
+ 
+ config EDAC_IGEN6
+ 	tristate "Intel client SoC Integrated MC"
+-	depends on PCI && X86_64 && PCI_MMCONFIG && ARCH_HAVE_NMI_SAFE_CMPXCHG
++	depends on PCI && PCI_MMCONFIG && ARCH_HAVE_NMI_SAFE_CMPXCHG
++	depends on X86_64 && X86_MCE_INTEL
+ 	help
+ 	  Support for error detection and correction on the Intel
+ 	  client SoC Integrated Memory Controller using In-Band ECC IP.
+diff --git a/drivers/edac/aspeed_edac.c b/drivers/edac/aspeed_edac.c
+index a46da56d6d544..6bd5f88159193 100644
+--- a/drivers/edac/aspeed_edac.c
++++ b/drivers/edac/aspeed_edac.c
+@@ -254,8 +254,8 @@ static int init_csrows(struct mem_ctl_info *mci)
+ 		return rc;
+ 	}
+ 
+-	dev_dbg(mci->pdev, "dt: /memory node resources: first page r.start=0x%x, resource_size=0x%x, PAGE_SHIFT macro=0x%x\n",
+-		r.start, resource_size(&r), PAGE_SHIFT);
++	dev_dbg(mci->pdev, "dt: /memory node resources: first page %pR, PAGE_SHIFT macro=0x%x\n",
++		&r, PAGE_SHIFT);
+ 
+ 	csrow->first_page = r.start >> PAGE_SHIFT;
+ 	nr_pages = resource_size(&r) >> PAGE_SHIFT;
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index 238a4ad1e526e..37b4e875420e4 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -278,6 +278,9 @@ static int __init i10nm_init(void)
+ 	if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR)))
+ 		return -EBUSY;
+ 
++	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return -ENODEV;
++
+ 	id = x86_match_cpu(i10nm_cpuids);
+ 	if (!id)
+ 		return -ENODEV;
+diff --git a/drivers/edac/pnd2_edac.c b/drivers/edac/pnd2_edac.c
+index 928f63a374c78..c94ca1f790c43 100644
+--- a/drivers/edac/pnd2_edac.c
++++ b/drivers/edac/pnd2_edac.c
+@@ -1554,6 +1554,9 @@ static int __init pnd2_init(void)
+ 	if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR)))
+ 		return -EBUSY;
+ 
++	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return -ENODEV;
++
+ 	id = x86_match_cpu(pnd2_cpuids);
+ 	if (!id)
+ 		return -ENODEV;
+diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
+index 93daa4297f2e0..4c626fcd4dcbb 100644
+--- a/drivers/edac/sb_edac.c
++++ b/drivers/edac/sb_edac.c
+@@ -3510,6 +3510,9 @@ static int __init sbridge_init(void)
+ 	if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR)))
+ 		return -EBUSY;
+ 
++	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return -ENODEV;
++
+ 	id = x86_match_cpu(sbridge_cpuids);
+ 	if (!id)
+ 		return -ENODEV;
+diff --git a/drivers/edac/skx_base.c b/drivers/edac/skx_base.c
+index 6a4f0b27c6545..4dbd46575bfb4 100644
+--- a/drivers/edac/skx_base.c
++++ b/drivers/edac/skx_base.c
+@@ -656,6 +656,9 @@ static int __init skx_init(void)
+ 	if (owner && strncmp(owner, EDAC_MOD_STR, sizeof(EDAC_MOD_STR)))
+ 		return -EBUSY;
+ 
++	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return -ENODEV;
++
+ 	id = x86_match_cpu(skx_cpuids);
+ 	if (!id)
+ 		return -ENODEV;
+diff --git a/drivers/edac/ti_edac.c b/drivers/edac/ti_edac.c
+index e7eae20f83d1d..169f96e51c293 100644
+--- a/drivers/edac/ti_edac.c
++++ b/drivers/edac/ti_edac.c
+@@ -197,6 +197,7 @@ static const struct of_device_id ti_edac_of_match[] = {
+ 	{ .compatible = "ti,emif-dra7xx", .data = (void *)EMIF_TYPE_DRA7 },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, ti_edac_of_match);
+ 
+ static int _emif_get_id(struct device_node *node)
+ {
+diff --git a/drivers/extcon/extcon-max8997.c b/drivers/extcon/extcon-max8997.c
+index e1408075ef7d6..5c3cdb725514d 100644
+--- a/drivers/extcon/extcon-max8997.c
++++ b/drivers/extcon/extcon-max8997.c
+@@ -733,7 +733,7 @@ static int max8997_muic_probe(struct platform_device *pdev)
+ 				2, info->status);
+ 	if (ret) {
+ 		dev_err(info->dev, "failed to read MUIC register\n");
+-		return ret;
++		goto err_irq;
+ 	}
+ 	cable_type = max8997_muic_get_cable_type(info,
+ 					   MAX8997_CABLE_GROUP_ADC, &attached);
+@@ -788,3 +788,4 @@ module_platform_driver(max8997_muic_driver);
+ MODULE_DESCRIPTION("Maxim MAX8997 Extcon driver");
+ MODULE_AUTHOR("Donggeun Kim <dg77.kim@samsung.com>");
+ MODULE_LICENSE("GPL");
++MODULE_ALIAS("platform:max8997-muic");
+diff --git a/drivers/extcon/extcon-sm5502.c b/drivers/extcon/extcon-sm5502.c
+index db41d1c58efd5..c3e4b220e66fa 100644
+--- a/drivers/extcon/extcon-sm5502.c
++++ b/drivers/extcon/extcon-sm5502.c
+@@ -88,7 +88,6 @@ static struct reg_data sm5502_reg_data[] = {
+ 			| SM5502_REG_INTM2_MHL_MASK,
+ 		.invert = true,
+ 	},
+-	{ }
+ };
+ 
+ /* List of detectable cables */
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index 3aa489dba30a7..2a7687911c097 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -1034,24 +1034,32 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
+ 
+ 	/* add svc client device(s) */
+ 	svc = devm_kzalloc(dev, sizeof(*svc), GFP_KERNEL);
+-	if (!svc)
+-		return -ENOMEM;
++	if (!svc) {
++		ret = -ENOMEM;
++		goto err_free_kfifo;
++	}
+ 
+ 	svc->stratix10_svc_rsu = platform_device_alloc(STRATIX10_RSU, 0);
+ 	if (!svc->stratix10_svc_rsu) {
+ 		dev_err(dev, "failed to allocate %s device\n", STRATIX10_RSU);
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto err_free_kfifo;
+ 	}
+ 
+ 	ret = platform_device_add(svc->stratix10_svc_rsu);
+-	if (ret) {
+-		platform_device_put(svc->stratix10_svc_rsu);
+-		return ret;
+-	}
++	if (ret)
++		goto err_put_device;
++
+ 	dev_set_drvdata(dev, svc);
+ 
+ 	pr_info("Intel Service Layer Driver Initialized\n");
+ 
++	return 0;
++
++err_put_device:
++	platform_device_put(svc->stratix10_svc_rsu);
++err_free_kfifo:
++	kfifo_free(&controller->svc_fifo);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/fsi/fsi-core.c b/drivers/fsi/fsi-core.c
+index 4e60e84cd17a5..59ddc9fd5bca4 100644
+--- a/drivers/fsi/fsi-core.c
++++ b/drivers/fsi/fsi-core.c
+@@ -724,7 +724,7 @@ static ssize_t cfam_read(struct file *filep, char __user *buf, size_t count,
+ 	rc = count;
+  fail:
+ 	*offset = off;
+-	return count;
++	return rc;
+ }
+ 
+ static ssize_t cfam_write(struct file *filep, const char __user *buf,
+@@ -761,7 +761,7 @@ static ssize_t cfam_write(struct file *filep, const char __user *buf,
+ 	rc = count;
+  fail:
+ 	*offset = off;
+-	return count;
++	return rc;
+ }
+ 
+ static loff_t cfam_llseek(struct file *file, loff_t offset, int whence)
+diff --git a/drivers/fsi/fsi-occ.c b/drivers/fsi/fsi-occ.c
+index 10ca2e290655b..cb05b6dacc9d5 100644
+--- a/drivers/fsi/fsi-occ.c
++++ b/drivers/fsi/fsi-occ.c
+@@ -495,6 +495,7 @@ int fsi_occ_submit(struct device *dev, const void *request, size_t req_len,
+ 			goto done;
+ 
+ 		if (resp->return_status == OCC_RESP_CMD_IN_PRG ||
++		    resp->return_status == OCC_RESP_CRIT_INIT ||
+ 		    resp->seq_no != seq_no) {
+ 			rc = -ETIMEDOUT;
+ 
+diff --git a/drivers/fsi/fsi-sbefifo.c b/drivers/fsi/fsi-sbefifo.c
+index bfd5e5da80209..84cb965bfed5c 100644
+--- a/drivers/fsi/fsi-sbefifo.c
++++ b/drivers/fsi/fsi-sbefifo.c
+@@ -325,7 +325,8 @@ static int sbefifo_up_write(struct sbefifo *sbefifo, __be32 word)
+ static int sbefifo_request_reset(struct sbefifo *sbefifo)
+ {
+ 	struct device *dev = &sbefifo->fsi_dev->dev;
+-	u32 status, timeout;
++	unsigned long end_time;
++	u32 status;
+ 	int rc;
+ 
+ 	dev_dbg(dev, "Requesting FIFO reset\n");
+@@ -341,7 +342,8 @@ static int sbefifo_request_reset(struct sbefifo *sbefifo)
+ 	}
+ 
+ 	/* Wait for it to complete */
+-	for (timeout = 0; timeout < SBEFIFO_RESET_TIMEOUT; timeout++) {
++	end_time = jiffies + msecs_to_jiffies(SBEFIFO_RESET_TIMEOUT);
++	while (!time_after(jiffies, end_time)) {
+ 		rc = sbefifo_regr(sbefifo, SBEFIFO_UP | SBEFIFO_STS, &status);
+ 		if (rc) {
+ 			dev_err(dev, "Failed to read UP fifo status during reset"
+@@ -355,7 +357,7 @@ static int sbefifo_request_reset(struct sbefifo *sbefifo)
+ 			return 0;
+ 		}
+ 
+-		msleep(1);
++		cond_resched();
+ 	}
+ 	dev_err(dev, "FIFO reset timed out\n");
+ 
+@@ -400,7 +402,7 @@ static int sbefifo_cleanup_hw(struct sbefifo *sbefifo)
+ 	/* The FIFO already contains a reset request from the SBE ? */
+ 	if (down_status & SBEFIFO_STS_RESET_REQ) {
+ 		dev_info(dev, "Cleanup: FIFO reset request set, resetting\n");
+-		rc = sbefifo_regw(sbefifo, SBEFIFO_UP, SBEFIFO_PERFORM_RESET);
++		rc = sbefifo_regw(sbefifo, SBEFIFO_DOWN, SBEFIFO_PERFORM_RESET);
+ 		if (rc) {
+ 			sbefifo->broken = true;
+ 			dev_err(dev, "Cleanup: Reset reg write failed, rc=%d\n", rc);
+diff --git a/drivers/fsi/fsi-scom.c b/drivers/fsi/fsi-scom.c
+index b45bfab7b7f55..75d1389e2626d 100644
+--- a/drivers/fsi/fsi-scom.c
++++ b/drivers/fsi/fsi-scom.c
+@@ -38,9 +38,10 @@
+ #define SCOM_STATUS_PIB_RESP_MASK	0x00007000
+ #define SCOM_STATUS_PIB_RESP_SHIFT	12
+ 
+-#define SCOM_STATUS_ANY_ERR		(SCOM_STATUS_PROTECTION | \
+-					 SCOM_STATUS_PARITY |	  \
+-					 SCOM_STATUS_PIB_ABORT | \
++#define SCOM_STATUS_FSI2PIB_ERROR	(SCOM_STATUS_PROTECTION |	\
++					 SCOM_STATUS_PARITY |		\
++					 SCOM_STATUS_PIB_ABORT)
++#define SCOM_STATUS_ANY_ERR		(SCOM_STATUS_FSI2PIB_ERROR |	\
+ 					 SCOM_STATUS_PIB_RESP_MASK)
+ /* SCOM address encodings */
+ #define XSCOM_ADDR_IND_FLAG		BIT_ULL(63)
+@@ -240,13 +241,14 @@ static int handle_fsi2pib_status(struct scom_device *scom, uint32_t status)
+ {
+ 	uint32_t dummy = -1;
+ 
+-	if (status & SCOM_STATUS_PROTECTION)
+-		return -EPERM;
+-	if (status & SCOM_STATUS_PARITY) {
++	if (status & SCOM_STATUS_FSI2PIB_ERROR)
+ 		fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy,
+ 				 sizeof(uint32_t));
++
++	if (status & SCOM_STATUS_PROTECTION)
++		return -EPERM;
++	if (status & SCOM_STATUS_PARITY)
+ 		return -EIO;
+-	}
+ 	/* Return -EBUSY on PIB abort to force a retry */
+ 	if (status & SCOM_STATUS_PIB_ABORT)
+ 		return -EBUSY;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 652cc1a0e450f..2b2d7b9f26f16 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -28,6 +28,7 @@
+ 
+ #include "dm_services_types.h"
+ #include "dc.h"
++#include "dc_link_dp.h"
+ #include "dc/inc/core_types.h"
+ #include "dal_asic_id.h"
+ #include "dmub/dmub_srv.h"
+@@ -2696,6 +2697,7 @@ static void handle_hpd_rx_irq(void *param)
+ 	enum dc_connection_type new_connection_type = dc_connection_none;
+ 	struct amdgpu_device *adev = drm_to_adev(dev);
+ 	union hpd_irq_data hpd_irq_data;
++	bool lock_flag = 0;
+ 
+ 	memset(&hpd_irq_data, 0, sizeof(hpd_irq_data));
+ 
+@@ -2726,13 +2728,28 @@ static void handle_hpd_rx_irq(void *param)
+ 		}
+ 	}
+ 
+-	mutex_lock(&adev->dm.dc_lock);
++	/*
++	 * TODO: We need the lock to avoid touching DC state while it's being
++	 * modified during automated compliance testing, or when link loss
++	 * happens. While this should be split into subhandlers and proper
++	 * interfaces to avoid having to conditionally lock like this in the
++	 * outer layer, we need this workaround temporarily to allow MST
++	 * lightup in some scenarios to avoid timeout.
++	 */
++	if (!amdgpu_in_reset(adev) &&
++	    (hpd_rx_irq_check_link_loss_status(dc_link, &hpd_irq_data) ||
++	     hpd_irq_data.bytes.device_service_irq.bits.AUTOMATED_TEST)) {
++		mutex_lock(&adev->dm.dc_lock);
++		lock_flag = 1;
++	}
++
+ #ifdef CONFIG_DRM_AMD_DC_HDCP
+ 	result = dc_link_handle_hpd_rx_irq(dc_link, &hpd_irq_data, NULL);
+ #else
+ 	result = dc_link_handle_hpd_rx_irq(dc_link, NULL, NULL);
+ #endif
+-	mutex_unlock(&adev->dm.dc_lock);
++	if (!amdgpu_in_reset(adev) && lock_flag)
++		mutex_unlock(&adev->dm.dc_lock);
+ 
+ out:
+ 	if (result && !is_mst_root_connector) {
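+
+The amdgpu_dm hunk above takes dc_lock only on the paths that actually touch
+DC state (link loss or automated compliance testing) and records whether it
+did, so the unlock stays symmetric; the in-tree TODO notes this is a
+temporary workaround. A reduced sketch of that conditional-lock shape, where
+needs_lock stands in for the real condition (not driver code):
+
+#include <linux/mutex.h>
+#include <linux/types.h>
+
+static void handle_irq_locked(struct mutex *lock, bool needs_lock)
+{
+	bool locked = false;
+
+	if (needs_lock) {
+		mutex_lock(lock);
+		locked = true;
+	}
+
+	/* ... process the interrupt; may or may not need the lock ... */
+
+	if (locked)
+		mutex_unlock(lock);
+}
+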
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 9b221db526dc9..d62460b69d954 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -278,6 +278,9 @@ dm_dp_mst_detect(struct drm_connector *connector,
+ 	struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
+ 	struct amdgpu_dm_connector *master = aconnector->mst_port;
+ 
++	if (drm_connector_is_unregistered(connector))
++		return connector_status_disconnected;
++
+ 	return drm_dp_mst_detect_port(connector, ctx, &master->mst_mgr,
+ 				      aconnector->port);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 3ff3d9e909837..72bd7bc681a81 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -1976,7 +1976,7 @@ enum dc_status read_hpd_rx_irq_data(
+ 	return retval;
+ }
+ 
+-static bool hpd_rx_irq_check_link_loss_status(
++bool hpd_rx_irq_check_link_loss_status(
+ 	struct dc_link *link,
+ 	union hpd_irq_data *hpd_irq_dpcd_data)
+ {
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h b/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
+index 3ae05c96d5572..a9c0c7f7a55dc 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
+@@ -67,6 +67,10 @@ bool perform_link_training_with_retries(
+ 	struct pipe_ctx *pipe_ctx,
+ 	enum signal_type signal);
+ 
++bool hpd_rx_irq_check_link_loss_status(
++	struct dc_link *link,
++	union hpd_irq_data *hpd_irq_dpcd_data);
++
+ bool is_mst_supported(struct dc_link *link);
+ 
+ bool detect_dp_sink_caps(struct dc_link *link);
+diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
+index 0ac3c2039c4b1..c29cc7f19863a 100644
+--- a/drivers/gpu/drm/ast/ast_main.c
++++ b/drivers/gpu/drm/ast/ast_main.c
+@@ -413,7 +413,7 @@ struct ast_private *ast_device_create(const struct drm_driver *drv,
+ 
+ 	pci_set_drvdata(pdev, dev);
+ 
+-	ast->regs = pci_iomap(pdev, 1, 0);
++	ast->regs = pcim_iomap(pdev, 1, 0);
+ 	if (!ast->regs)
+ 		return ERR_PTR(-EIO);
+ 
+@@ -429,7 +429,7 @@ struct ast_private *ast_device_create(const struct drm_driver *drv,
+ 
+ 	/* "map" IO regs if the above hasn't done so already */
+ 	if (!ast->ioregs) {
+-		ast->ioregs = pci_iomap(pdev, 2, 0);
++		ast->ioregs = pcim_iomap(pdev, 2, 0);
+ 		if (!ast->ioregs)
+ 			return ERR_PTR(-EIO);
+ 	}
+diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig
+index 400193e38d298..9ce8438fb58b9 100644
+--- a/drivers/gpu/drm/bridge/Kconfig
++++ b/drivers/gpu/drm/bridge/Kconfig
+@@ -68,6 +68,7 @@ config DRM_LONTIUM_LT8912B
+ 	select DRM_KMS_HELPER
+ 	select DRM_MIPI_DSI
+ 	select REGMAP_I2C
++	select VIDEOMODE_HELPERS
+ 	help
+ 	  Driver for Lontium LT8912B DSI to HDMI bridge
+ 	  chip driver.
+@@ -172,7 +173,7 @@ config DRM_SIL_SII8620
+ 	tristate "Silicon Image SII8620 HDMI/MHL bridge"
+ 	depends on OF
+ 	select DRM_KMS_HELPER
+-	imply EXTCON
++	select EXTCON
+ 	depends on RC_CORE || !RC_CORE
+ 	help
+ 	  Silicon Image SII8620 HDMI/MHL bridge chip driver.
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 23283ba0c4f93..b4e349ca38fe3 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -893,7 +893,7 @@ static void anx7625_power_on(struct anx7625_data *ctx)
+ 		usleep_range(2000, 2100);
+ 	}
+ 
+-	usleep_range(4000, 4100);
++	usleep_range(11000, 12000);
+ 
+ 	/* Power on pin enable */
+ 	gpiod_set_value(ctx->pdata.gpio_p_on, 1);
+diff --git a/drivers/gpu/drm/drm_bridge.c b/drivers/gpu/drm/drm_bridge.c
+index 64f0effb52ac1..044acd07c1538 100644
+--- a/drivers/gpu/drm/drm_bridge.c
++++ b/drivers/gpu/drm/drm_bridge.c
+@@ -522,6 +522,9 @@ void drm_bridge_chain_pre_enable(struct drm_bridge *bridge)
+ 	list_for_each_entry_reverse(iter, &encoder->bridge_chain, chain_node) {
+ 		if (iter->funcs->pre_enable)
+ 			iter->funcs->pre_enable(iter);
++
++		if (iter == bridge)
++			break;
+ 	}
+ }
+ EXPORT_SYMBOL(drm_bridge_chain_pre_enable);
+diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+index 7ffd7b570b54d..538682f882b16 100644
+--- a/drivers/gpu/drm/i915/display/skl_universal_plane.c
++++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+@@ -1082,7 +1082,6 @@ static int skl_plane_check_fb(const struct intel_crtc_state *crtc_state,
+ 	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+ 	const struct drm_framebuffer *fb = plane_state->hw.fb;
+ 	unsigned int rotation = plane_state->hw.rotation;
+-	struct drm_format_name_buf format_name;
+ 
+ 	if (!fb)
+ 		return 0;
+@@ -1130,9 +1129,8 @@ static int skl_plane_check_fb(const struct intel_crtc_state *crtc_state,
+ 		case DRM_FORMAT_XVYU12_16161616:
+ 		case DRM_FORMAT_XVYU16161616:
+ 			drm_dbg_kms(&dev_priv->drm,
+-				    "Unsupported pixel format %s for 90/270!\n",
+-				    drm_get_format_name(fb->format->format,
+-							&format_name));
++				    "Unsupported pixel format %p4cc for 90/270!\n",
++				    &fb->format->format);
+ 			return -EINVAL;
+ 		default:
+ 			break;
+diff --git a/drivers/gpu/drm/i915/gt/selftest_execlists.c b/drivers/gpu/drm/i915/gt/selftest_execlists.c
+index 1081cd36a2bd3..1e5d59a776b81 100644
+--- a/drivers/gpu/drm/i915/gt/selftest_execlists.c
++++ b/drivers/gpu/drm/i915/gt/selftest_execlists.c
+@@ -551,6 +551,32 @@ static int live_pin_rewind(void *arg)
+ 	return err;
+ }
+ 
++static int engine_lock_reset_tasklet(struct intel_engine_cs *engine)
++{
++	tasklet_disable(&engine->execlists.tasklet);
++	local_bh_disable();
++
++	if (test_and_set_bit(I915_RESET_ENGINE + engine->id,
++			     &engine->gt->reset.flags)) {
++		local_bh_enable();
++		tasklet_enable(&engine->execlists.tasklet);
++
++		intel_gt_set_wedged(engine->gt);
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
++static void engine_unlock_reset_tasklet(struct intel_engine_cs *engine)
++{
++	clear_and_wake_up_bit(I915_RESET_ENGINE + engine->id,
++			      &engine->gt->reset.flags);
++
++	local_bh_enable();
++	tasklet_enable(&engine->execlists.tasklet);
++}
++
+ static int live_hold_reset(void *arg)
+ {
+ 	struct intel_gt *gt = arg;
+@@ -598,15 +624,9 @@ static int live_hold_reset(void *arg)
+ 
+ 		/* We have our request executing, now remove it and reset */
+ 
+-		local_bh_disable();
+-		if (test_and_set_bit(I915_RESET_ENGINE + id,
+-				     &gt->reset.flags)) {
+-			local_bh_enable();
+-			intel_gt_set_wedged(gt);
+-			err = -EBUSY;
++		err = engine_lock_reset_tasklet(engine);
++		if (err)
+ 			goto out;
+-		}
+-		tasklet_disable(&engine->execlists.tasklet);
+ 
+ 		engine->execlists.tasklet.callback(&engine->execlists.tasklet);
+ 		GEM_BUG_ON(execlists_active(&engine->execlists) != rq);
+@@ -618,10 +638,7 @@ static int live_hold_reset(void *arg)
+ 		__intel_engine_reset_bh(engine, NULL);
+ 		GEM_BUG_ON(rq->fence.error != -EIO);
+ 
+-		tasklet_enable(&engine->execlists.tasklet);
+-		clear_and_wake_up_bit(I915_RESET_ENGINE + id,
+-				      &gt->reset.flags);
+-		local_bh_enable();
++		engine_unlock_reset_tasklet(engine);
+ 
+ 		/* Check that we do not resubmit the held request */
+ 		if (!i915_request_wait(rq, 0, HZ / 5)) {
+@@ -4585,15 +4602,9 @@ static int reset_virtual_engine(struct intel_gt *gt,
+ 	GEM_BUG_ON(engine == ve->engine);
+ 
+ 	/* Take ownership of the reset and tasklet */
+-	local_bh_disable();
+-	if (test_and_set_bit(I915_RESET_ENGINE + engine->id,
+-			     &gt->reset.flags)) {
+-		local_bh_enable();
+-		intel_gt_set_wedged(gt);
+-		err = -EBUSY;
++	err = engine_lock_reset_tasklet(engine);
++	if (err)
+ 		goto out_heartbeat;
+-	}
+-	tasklet_disable(&engine->execlists.tasklet);
+ 
+ 	engine->execlists.tasklet.callback(&engine->execlists.tasklet);
+ 	GEM_BUG_ON(execlists_active(&engine->execlists) != rq);
+@@ -4612,9 +4623,7 @@ static int reset_virtual_engine(struct intel_gt *gt,
+ 	GEM_BUG_ON(rq->fence.error != -EIO);
+ 
+ 	/* Release our grasp on the engine, letting CS flow again */
+-	tasklet_enable(&engine->execlists.tasklet);
+-	clear_and_wake_up_bit(I915_RESET_ENGINE + engine->id, &gt->reset.flags);
+-	local_bh_enable();
++	engine_unlock_reset_tasklet(engine);
+ 
+ 	/* Check that we do not resubmit the held request */
+ 	i915_request_get(rq);
+diff --git a/drivers/gpu/drm/imx/ipuv3-plane.c b/drivers/gpu/drm/imx/ipuv3-plane.c
+index fa5009705365e..233310712deb0 100644
+--- a/drivers/gpu/drm/imx/ipuv3-plane.c
++++ b/drivers/gpu/drm/imx/ipuv3-plane.c
+@@ -35,7 +35,7 @@ static inline struct ipu_plane *to_ipu_plane(struct drm_plane *p)
+ 	return container_of(p, struct ipu_plane, base);
+ }
+ 
+-static const uint32_t ipu_plane_formats[] = {
++static const uint32_t ipu_plane_all_formats[] = {
+ 	DRM_FORMAT_ARGB1555,
+ 	DRM_FORMAT_XRGB1555,
+ 	DRM_FORMAT_ABGR1555,
+@@ -72,6 +72,31 @@ static const uint32_t ipu_plane_formats[] = {
+ 	DRM_FORMAT_BGRX8888_A8,
+ };
+ 
++static const uint32_t ipu_plane_rgb_formats[] = {
++	DRM_FORMAT_ARGB1555,
++	DRM_FORMAT_XRGB1555,
++	DRM_FORMAT_ABGR1555,
++	DRM_FORMAT_XBGR1555,
++	DRM_FORMAT_RGBA5551,
++	DRM_FORMAT_BGRA5551,
++	DRM_FORMAT_ARGB4444,
++	DRM_FORMAT_ARGB8888,
++	DRM_FORMAT_XRGB8888,
++	DRM_FORMAT_ABGR8888,
++	DRM_FORMAT_XBGR8888,
++	DRM_FORMAT_RGBA8888,
++	DRM_FORMAT_RGBX8888,
++	DRM_FORMAT_BGRA8888,
++	DRM_FORMAT_BGRX8888,
++	DRM_FORMAT_RGB565,
++	DRM_FORMAT_RGB565_A8,
++	DRM_FORMAT_BGR565_A8,
++	DRM_FORMAT_RGB888_A8,
++	DRM_FORMAT_BGR888_A8,
++	DRM_FORMAT_RGBX8888_A8,
++	DRM_FORMAT_BGRX8888_A8,
++};
++
+ static const uint64_t ipu_format_modifiers[] = {
+ 	DRM_FORMAT_MOD_LINEAR,
+ 	DRM_FORMAT_MOD_INVALID
+@@ -320,10 +345,11 @@ static bool ipu_plane_format_mod_supported(struct drm_plane *plane,
+ 	if (modifier == DRM_FORMAT_MOD_LINEAR)
+ 		return true;
+ 
+-	/* without a PRG there are no supported modifiers */
+-	if (!ipu_prg_present(ipu))
+-		return false;
+-
++	/*
++	 * Without a PRG the possible modifiers list only includes the linear
++	 * modifier, so we always take the early return from this function and
++	 * only end up here if the PRG is present.
++	 */
+ 	return ipu_prg_format_supported(ipu, format, modifier);
+ }
+ 
+@@ -830,16 +856,28 @@ struct ipu_plane *ipu_plane_init(struct drm_device *dev, struct ipu_soc *ipu,
+ 	struct ipu_plane *ipu_plane;
+ 	const uint64_t *modifiers = ipu_format_modifiers;
+ 	unsigned int zpos = (type == DRM_PLANE_TYPE_PRIMARY) ? 0 : 1;
++	unsigned int format_count;
++	const uint32_t *formats;
+ 	int ret;
+ 
+ 	DRM_DEBUG_KMS("channel %d, dp flow %d, possible_crtcs=0x%x\n",
+ 		      dma, dp, possible_crtcs);
+ 
++	if (dp == IPU_DP_FLOW_SYNC_BG || dp == IPU_DP_FLOW_SYNC_FG) {
++		formats = ipu_plane_all_formats;
++		format_count = ARRAY_SIZE(ipu_plane_all_formats);
++	} else {
++		formats = ipu_plane_rgb_formats;
++		format_count = ARRAY_SIZE(ipu_plane_rgb_formats);
++	}
++
++	if (ipu_prg_present(ipu))
++		modifiers = pre_format_modifiers;
++
+ 	ipu_plane = drmm_universal_plane_alloc(dev, struct ipu_plane, base,
+ 					       possible_crtcs, &ipu_plane_funcs,
+-					       ipu_plane_formats,
+-					       ARRAY_SIZE(ipu_plane_formats),
+-					       modifiers, type, NULL);
++					       formats, format_count, modifiers,
++					       type, NULL);
+ 	if (IS_ERR(ipu_plane)) {
+ 		DRM_ERROR("failed to allocate and initialize %s plane\n",
+ 			  zpos ? "overlay" : "primary");
+@@ -850,9 +888,6 @@ struct ipu_plane *ipu_plane_init(struct drm_device *dev, struct ipu_soc *ipu,
+ 	ipu_plane->dma = dma;
+ 	ipu_plane->dp_flow = dp;
+ 
+-	if (ipu_prg_present(ipu))
+-		modifiers = pre_format_modifiers;
+-
+ 	drm_plane_helper_add(&ipu_plane->base, &ipu_plane_helper_funcs);
+ 
+ 	if (dp == IPU_DP_FLOW_SYNC_BG || dp == IPU_DP_FLOW_SYNC_FG)
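+
+The ipuv3-plane change above gives each plane only the format table its
+channel can scan out: the DP flows keep the full RGB+YUV list while every
+other channel gets the RGB-only table, and the modifier table is now chosen
+before the plane is allocated so the correct set is advertised from the
+start. A condensed sketch of the selection, with illustrative table
+contents:
+
+#include <drm/drm_fourcc.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+
+static const u32 rgb_only[] = { DRM_FORMAT_XRGB8888, DRM_FORMAT_RGB565 };
+static const u32 rgb_and_yuv[] = { DRM_FORMAT_XRGB8888, DRM_FORMAT_YUYV };
+
+/* Pick the table before the plane is created, so userspace never sees
+ * formats the channel cannot handle. */
+static void pick_formats(bool is_dp_flow, const u32 **formats,
+			 unsigned int *count)
+{
+	if (is_dp_flow) {
+		*formats = rgb_and_yuv;
+		*count = ARRAY_SIZE(rgb_and_yuv);
+	} else {
+		*formats = rgb_only;
+		*count = ARRAY_SIZE(rgb_only);
+	}
+}
+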
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index 18bc76b7f1a33..4523d6ba891b5 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -407,9 +407,6 @@ static void dpu_crtc_frame_event_work(struct kthread_work *work)
+ 								fevent->event);
+ 		}
+ 
+-		if (fevent->event & DPU_ENCODER_FRAME_EVENT_DONE)
+-			dpu_core_perf_crtc_update(crtc, 0, false);
+-
+ 		if (fevent->event & (DPU_ENCODER_FRAME_EVENT_DONE
+ 					| DPU_ENCODER_FRAME_EVENT_ERROR))
+ 			frame_done = true;
+@@ -477,6 +474,7 @@ static void dpu_crtc_frame_event_cb(void *data, u32 event)
+ void dpu_crtc_complete_commit(struct drm_crtc *crtc)
+ {
+ 	trace_dpu_crtc_complete_commit(DRMID(crtc));
++	dpu_core_perf_crtc_update(crtc, 0, false);
+ 	_dpu_crtc_complete_flip(crtc);
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+index 06b56fec04e04..6b0a7bc87eb75 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+@@ -225,7 +225,7 @@ int dpu_mdss_init(struct drm_device *dev)
+ 	struct msm_drm_private *priv = dev->dev_private;
+ 	struct dpu_mdss *dpu_mdss;
+ 	struct dss_module_power *mp;
+-	int ret = 0;
++	int ret;
+ 	int irq;
+ 
+ 	dpu_mdss = devm_kzalloc(dev->dev, sizeof(*dpu_mdss), GFP_KERNEL);
+@@ -253,8 +253,10 @@ int dpu_mdss_init(struct drm_device *dev)
+ 		goto irq_domain_error;
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq < 0)
++	if (irq < 0) {
++		ret = irq;
+ 		goto irq_error;
++	}
+ 
+ 	irq_set_chained_handler_and_data(irq, dpu_mdss_irq,
+ 					 dpu_mdss);
+@@ -263,7 +265,7 @@ int dpu_mdss_init(struct drm_device *dev)
+ 
+ 	pm_runtime_enable(dev->dev);
+ 
+-	return ret;
++	return 0;
+ 
+ irq_error:
+ 	_dpu_mdss_irq_domain_fini(dpu_mdss);
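+
+The dpu_mdss fix above closes two holes in the same function: a failed
+platform_get_irq() used to jump to the unwind label with ret still 0, and
+the success path returned a variable where a literal 0 is clearer. The
+general shape, sketched with the setup step elided:
+
+#include <linux/platform_device.h>
+
+static int init_mdss_irq(struct platform_device *pdev)
+{
+	int irq = platform_get_irq(pdev, 0);
+
+	/* platform_get_irq() returns a negative errno on failure;
+	 * propagate it so the caller does not see a false success. */
+	if (irq < 0)
+		return irq;
+
+	/* ... install the chained handler, enable runtime PM ... */
+	return 0;
+}
+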
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c
+index b1a9b1b98f5f6..f4f53f23e331e 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
+@@ -582,10 +582,9 @@ void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog)
+ 
+ 	u32 reftimer = dp_read_aux(catalog, REG_DP_DP_HPD_REFTIMER);
+ 
+-	/* enable HPD interrupts */
++	/* enable HPD plug and unplug interrupts */
+ 	dp_catalog_hpd_config_intr(dp_catalog,
+-		DP_DP_HPD_PLUG_INT_MASK | DP_DP_IRQ_HPD_INT_MASK
+-		| DP_DP_HPD_UNPLUG_INT_MASK | DP_DP_HPD_REPLUG_INT_MASK, true);
++		DP_DP_HPD_PLUG_INT_MASK | DP_DP_HPD_UNPLUG_INT_MASK, true);
+ 
+ 	/* Configure REFTIMER and enable it */
+ 	reftimer |= DP_DP_HPD_REFTIMER_ENABLE;
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 1390f3547fde4..2a8955ca70d1a 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1809,6 +1809,61 @@ end:
+ 	return ret;
+ }
+ 
++int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl)
++{
++	struct dp_ctrl_private *ctrl;
++	struct dp_io *dp_io;
++	struct phy *phy;
++	int ret;
++
++	ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
++	dp_io = &ctrl->parser->io;
++	phy = dp_io->phy;
++
++	/* set dongle to D3 (power off) mode */
++	dp_link_psm_config(ctrl->link, &ctrl->panel->link_info, true);
++
++	dp_catalog_ctrl_mainlink_ctrl(ctrl->catalog, false);
++
++	ret = dp_power_clk_enable(ctrl->power, DP_STREAM_PM, false);
++	if (ret) {
++		DRM_ERROR("Failed to disable pixel clocks. ret=%d\n", ret);
++		return ret;
++	}
++
++	ret = dp_power_clk_enable(ctrl->power, DP_CTRL_PM, false);
++	if (ret) {
++		DRM_ERROR("Failed to disable link clocks. ret=%d\n", ret);
++		return ret;
++	}
++
++	phy_power_off(phy);
++
++	/* aux channel down, reinit phy */
++	phy_exit(phy);
++	phy_init(phy);
++
++	DRM_DEBUG_DP("DP off link/stream done\n");
++	return ret;
++}
++
++void dp_ctrl_off_phy(struct dp_ctrl *dp_ctrl)
++{
++	struct dp_ctrl_private *ctrl;
++	struct dp_io *dp_io;
++	struct phy *phy;
++
++	ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
++	dp_io = &ctrl->parser->io;
++	phy = dp_io->phy;
++
++	dp_catalog_ctrl_reset(ctrl->catalog);
++
++	phy_exit(phy);
++
++	DRM_DEBUG_DP("DP off phy done\n");
++}
++
+ int dp_ctrl_off(struct dp_ctrl *dp_ctrl)
+ {
+ 	struct dp_ctrl_private *ctrl;
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.h b/drivers/gpu/drm/msm/dp/dp_ctrl.h
+index a836bd358447c..25e4f75122522 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.h
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.h
+@@ -23,6 +23,8 @@ int dp_ctrl_host_init(struct dp_ctrl *dp_ctrl, bool flip, bool reset);
+ void dp_ctrl_host_deinit(struct dp_ctrl *dp_ctrl);
+ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl);
+ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl);
++int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl);
++void dp_ctrl_off_phy(struct dp_ctrl *dp_ctrl);
+ int dp_ctrl_off(struct dp_ctrl *dp_ctrl);
+ void dp_ctrl_push_idle(struct dp_ctrl *dp_ctrl);
+ void dp_ctrl_isr(struct dp_ctrl *dp_ctrl);
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 1784e119269b7..cdec0a367a2cb 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -346,6 +346,12 @@ static int dp_display_process_hpd_high(struct dp_display_private *dp)
+ 	dp->dp_display.max_pclk_khz = DP_MAX_PIXEL_CLK_KHZ;
+ 	dp->dp_display.max_dp_lanes = dp->parser->max_dp_lanes;
+ 
++	/*
++	 * set sink to normal operation mode -- D0
++	 * before dpcd read
++	 */
++	dp_link_psm_config(dp->link, &dp->panel->link_info, false);
++
+ 	dp_link_reset_phy_params_vx_px(dp->link);
+ 	rc = dp_ctrl_on_link(dp->ctrl);
+ 	if (rc) {
+@@ -414,11 +420,6 @@ static int dp_display_usbpd_configure_cb(struct device *dev)
+ 
+ 	dp_display_host_init(dp, false);
+ 
+-	/*
+-	 * set sink to normal operation mode -- D0
+-	 * before dpcd read
+-	 */
+-	dp_link_psm_config(dp->link, &dp->panel->link_info, false);
+ 	rc = dp_display_process_hpd_high(dp);
+ end:
+ 	return rc;
+@@ -579,6 +580,10 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
+ 		dp_add_event(dp, EV_CONNECT_PENDING_TIMEOUT, 0, tout);
+ 	}
+ 
++	/* enable HPD irq_hpd/replug interrupt */
++	dp_catalog_hpd_config_intr(dp->catalog,
++		DP_DP_IRQ_HPD_INT_MASK | DP_DP_HPD_REPLUG_INT_MASK, true);
++
+ 	mutex_unlock(&dp->event_mutex);
+ 
+ 	/* uevent will complete connection part */
+@@ -628,7 +633,26 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ 	mutex_lock(&dp->event_mutex);
+ 
+ 	state = dp->hpd_state;
+-	if (state == ST_DISCONNECT_PENDING || state == ST_DISCONNECTED) {
++
++	/* disable irq_hpd/replug interrupts */
++	dp_catalog_hpd_config_intr(dp->catalog,
++		DP_DP_IRQ_HPD_INT_MASK | DP_DP_HPD_REPLUG_INT_MASK, false);
++
++	/* unplugged, no more irq_hpd handling */
++	dp_del_event(dp, EV_IRQ_HPD_INT);
++
++	if (state == ST_DISCONNECTED) {
++		/* triggered by irq_hpd with sink_count = 0 */
++		if (dp->link->sink_count == 0) {
++			dp_ctrl_off_phy(dp->ctrl);
++			hpd->hpd_high = 0;
++			dp->core_initialized = false;
++		}
++		mutex_unlock(&dp->event_mutex);
++		return 0;
++	}
++
++	if (state == ST_DISCONNECT_PENDING) {
+ 		mutex_unlock(&dp->event_mutex);
+ 		return 0;
+ 	}
+@@ -642,9 +666,8 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ 
+ 	dp->hpd_state = ST_DISCONNECT_PENDING;
+ 
+-	/* disable HPD plug interrupt until disconnect is done */
+-	dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK
+-				| DP_DP_IRQ_HPD_INT_MASK, false);
++	/* disable HPD plug interrupts */
++	dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK, false);
+ 
+ 	hpd->hpd_high = 0;
+ 
+@@ -660,8 +683,8 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ 	/* signal the disconnect event early to ensure proper teardown */
+ 	dp_display_handle_plugged_change(g_dp_display, false);
+ 
+-	dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK |
+-					DP_DP_IRQ_HPD_INT_MASK, true);
++	/* enable HPD plug interrupt to prepare for the next plug-in */
++	dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK, true);
+ 
+ 	/* uevent will complete disconnection part */
+ 	mutex_unlock(&dp->event_mutex);
+@@ -692,7 +715,7 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
+ 
+ 	/* irq_hpd can happen at either connected or disconnected state */
+ 	state =  dp->hpd_state;
+-	if (state == ST_DISPLAY_OFF) {
++	if (state == ST_DISPLAY_OFF || state == ST_SUSPENDED) {
+ 		mutex_unlock(&dp->event_mutex);
+ 		return 0;
+ 	}
+@@ -910,9 +933,13 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data)
+ 
+ 	dp_display->audio_enabled = false;
+ 
+-	dp_ctrl_off(dp->ctrl);
+-
+-	dp->core_initialized = false;
++	/* triggered by irq_hpd with sink_count = 0 */
++	if (dp->link->sink_count == 0) {
++		dp_ctrl_off_link_stream(dp->ctrl);
++	} else {
++		dp_ctrl_off(dp->ctrl);
++		dp->core_initialized = false;
++	}
+ 
+ 	dp_display->power_on = false;
+ 
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index fe7d17cd35ecd..afd555b0c105e 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -523,6 +523,7 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ 		priv->event_thread[i].worker = kthread_create_worker(0,
+ 			"crtc_event:%d", priv->event_thread[i].crtc_id);
+ 		if (IS_ERR(priv->event_thread[i].worker)) {
++			ret = PTR_ERR(priv->event_thread[i].worker);
+ 			DRM_DEV_ERROR(dev, "failed to create crtc_event kthread\n");
+ 			goto err_msm_uninit;
+ 		}
+diff --git a/drivers/gpu/drm/pl111/Kconfig b/drivers/gpu/drm/pl111/Kconfig
+index 80f6748055e36..3aae387a96af2 100644
+--- a/drivers/gpu/drm/pl111/Kconfig
++++ b/drivers/gpu/drm/pl111/Kconfig
+@@ -3,6 +3,7 @@ config DRM_PL111
+ 	tristate "DRM Support for PL111 CLCD Controller"
+ 	depends on DRM
+ 	depends on ARM || ARM64 || COMPILE_TEST
++	depends on VEXPRESS_CONFIG || VEXPRESS_CONFIG=n
+ 	depends on COMMON_CLK
+ 	select DRM_KMS_HELPER
+ 	select DRM_KMS_CMA_HELPER
+diff --git a/drivers/gpu/drm/qxl/qxl_dumb.c b/drivers/gpu/drm/qxl/qxl_dumb.c
+index 48a58ba1db965..686485b19d0f2 100644
+--- a/drivers/gpu/drm/qxl/qxl_dumb.c
++++ b/drivers/gpu/drm/qxl/qxl_dumb.c
+@@ -58,6 +58,8 @@ int qxl_mode_dumb_create(struct drm_file *file_priv,
+ 	surf.height = args->height;
+ 	surf.stride = pitch;
+ 	surf.format = format;
++	surf.data = 0;
++
+ 	r = qxl_gem_object_create_with_handle(qdev, file_priv,
+ 					      QXL_GEM_DOMAIN_CPU,
+ 					      args->size, &surf, &qobj,
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+index a4a45daf93f2b..6802d9b65f828 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+@@ -73,6 +73,7 @@ static int cdn_dp_grf_write(struct cdn_dp_device *dp,
+ 	ret = regmap_write(dp->grf, reg, val);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dp->dev, "Could not write to GRF: %d\n", ret);
++		clk_disable_unprepare(dp->grf_clk);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-reg.c b/drivers/gpu/drm/rockchip/cdn-dp-reg.c
+index 9d2163ef4d6e2..33fb4d05c5065 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-reg.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-reg.c
+@@ -658,7 +658,7 @@ int cdn_dp_config_video(struct cdn_dp_device *dp)
+ 	 */
+ 	do {
+ 		tu_size_reg += 2;
+-		symbol = tu_size_reg * mode->clock * bit_per_pix;
++		symbol = (u64)tu_size_reg * mode->clock * bit_per_pix;
+ 		do_div(symbol, dp->max_lanes * link_rate * 8);
+ 		rem = do_div(symbol, 1000);
+ 		if (tu_size_reg > 64) {
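+
+The cdn-dp fix above widens the multiplication to 64 bits before it can
+overflow: tu_size_reg * mode->clock * bit_per_pix exceeds 2^32 at high
+pixel clocks, and do_div() then performs the 64-bit division. The idiom in
+isolation, with illustrative parameter names:
+
+#include <linux/math64.h>
+#include <linux/types.h>
+
+static u32 scaled_symbols(u32 tu_size, u32 clock_khz, u32 bits_per_pixel,
+			  u32 lanes, u32 link_rate)
+{
+	/* Cast one operand to u64 so the whole product is computed in
+	 * 64 bits; without the cast the arithmetic stays 32-bit. */
+	u64 symbol = (u64)tu_size * clock_khz * bits_per_pixel;
+
+	do_div(symbol, lanes * link_rate * 8);
+	return (u32)symbol;
+}
+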
+diff --git a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+index 24a71091759cc..d8c47ee3cad37 100644
+--- a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+@@ -692,13 +692,8 @@ static const struct dw_mipi_dsi_phy_ops dw_mipi_dsi_rockchip_phy_ops = {
+ 	.get_timing = dw_mipi_dsi_phy_get_timing,
+ };
+ 
+-static void dw_mipi_dsi_rockchip_config(struct dw_mipi_dsi_rockchip *dsi,
+-					int mux)
++static void dw_mipi_dsi_rockchip_config(struct dw_mipi_dsi_rockchip *dsi)
+ {
+-	if (dsi->cdata->lcdsel_grf_reg)
+-		regmap_write(dsi->grf_regmap, dsi->cdata->lcdsel_grf_reg,
+-			mux ? dsi->cdata->lcdsel_lit : dsi->cdata->lcdsel_big);
+-
+ 	if (dsi->cdata->lanecfg1_grf_reg)
+ 		regmap_write(dsi->grf_regmap, dsi->cdata->lanecfg1_grf_reg,
+ 					      dsi->cdata->lanecfg1);
+@@ -712,6 +707,13 @@ static void dw_mipi_dsi_rockchip_config(struct dw_mipi_dsi_rockchip *dsi,
+ 					      dsi->cdata->enable);
+ }
+ 
++static void dw_mipi_dsi_rockchip_set_lcdsel(struct dw_mipi_dsi_rockchip *dsi,
++					    int mux)
++{
++	regmap_write(dsi->grf_regmap, dsi->cdata->lcdsel_grf_reg,
++		mux ? dsi->cdata->lcdsel_lit : dsi->cdata->lcdsel_big);
++}
++
+ static int
+ dw_mipi_dsi_encoder_atomic_check(struct drm_encoder *encoder,
+ 				 struct drm_crtc_state *crtc_state,
+@@ -767,9 +769,9 @@ static void dw_mipi_dsi_encoder_enable(struct drm_encoder *encoder)
+ 		return;
+ 	}
+ 
+-	dw_mipi_dsi_rockchip_config(dsi, mux);
++	dw_mipi_dsi_rockchip_set_lcdsel(dsi, mux);
+ 	if (dsi->slave)
+-		dw_mipi_dsi_rockchip_config(dsi->slave, mux);
++		dw_mipi_dsi_rockchip_set_lcdsel(dsi->slave, mux);
+ 
+ 	clk_disable_unprepare(dsi->grf_clk);
+ }
+@@ -923,6 +925,24 @@ static int dw_mipi_dsi_rockchip_bind(struct device *dev,
+ 		return ret;
+ 	}
+ 
++	/*
++	 * With the GRF clock running, write lane and dual-mode configurations
++	 * that won't change immediately. If we waited until enable() to do
++	 * this, things like panel preparation would not be able to send
++	 * commands over DSI.
++	 */
++	ret = clk_prepare_enable(dsi->grf_clk);
++	if (ret) {
++		DRM_DEV_ERROR(dsi->dev, "Failed to enable grf_clk: %d\n", ret);
++		return ret;
++	}
++
++	dw_mipi_dsi_rockchip_config(dsi);
++	if (dsi->slave)
++		dw_mipi_dsi_rockchip_config(dsi->slave);
++
++	clk_disable_unprepare(dsi->grf_clk);
++
+ 	ret = rockchip_dsi_drm_create_encoder(dsi, drm_dev);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "Failed to create drm encoder\n");
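+
+The Rockchip DSI change above splits the old config helper: the LCDC mux
+write stays in enable(), while the lane and dual-mode GRF writes move to
+bind time, bracketed by the GRF clock so they take effect before any
+panel-preparation commands go out over DSI. That clock-bracketed write
+window, reduced to its shape (configure() is a placeholder):
+
+#include <linux/clk.h>
+
+static int write_grf_config(struct clk *grf_clk,
+			    void (*configure)(void *ctx), void *ctx)
+{
+	/* GRF registers are only writable while their clock runs. */
+	int ret = clk_prepare_enable(grf_clk);
+
+	if (ret)
+		return ret;
+
+	configure(ctx);		/* one-time lane/mode setup */
+
+	clk_disable_unprepare(grf_clk);
+	return 0;
+}
+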
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 64469439ddf2f..f5b9028a16a38 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -1022,6 +1022,7 @@ static void vop_plane_atomic_update(struct drm_plane *plane,
+ 		VOP_WIN_SET(vop, win, alpha_en, 1);
+ 	} else {
+ 		VOP_WIN_SET(vop, win, src_alpha_ctl, SRC_ALPHA_EN(0));
++		VOP_WIN_SET(vop, win, alpha_en, 0);
+ 	}
+ 
+ 	VOP_WIN_SET(vop, win, enable, 1);
+diff --git a/drivers/gpu/drm/rockchip/rockchip_lvds.c b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+index bd5ba10822c24..489d63c05c0d9 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_lvds.c
++++ b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+@@ -499,11 +499,11 @@ static int px30_lvds_probe(struct platform_device *pdev,
+ 	if (IS_ERR(lvds->dphy))
+ 		return PTR_ERR(lvds->dphy);
+ 
+-	phy_init(lvds->dphy);
++	ret = phy_init(lvds->dphy);
+ 	if (ret)
+ 		return ret;
+ 
+-	phy_set_mode(lvds->dphy, PHY_MODE_LVDS);
++	ret = phy_set_mode(lvds->dphy, PHY_MODE_LVDS);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 76657dcdf9b00..1f36b67cd6ce9 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -279,14 +279,22 @@ static u32 vc4_crtc_get_fifo_full_level_bits(struct vc4_crtc *vc4_crtc,
+  * allows drivers to push pixels to more than one encoder from the
+  * same CRTC.
+  */
+-static struct drm_encoder *vc4_get_crtc_encoder(struct drm_crtc *crtc)
++static struct drm_encoder *vc4_get_crtc_encoder(struct drm_crtc *crtc,
++						struct drm_atomic_state *state,
++						struct drm_connector_state *(*get_state)(struct drm_atomic_state *state,
++											 struct drm_connector *connector))
+ {
+ 	struct drm_connector *connector;
+ 	struct drm_connector_list_iter conn_iter;
+ 
+ 	drm_connector_list_iter_begin(crtc->dev, &conn_iter);
+ 	drm_for_each_connector_iter(connector, &conn_iter) {
+-		if (connector->state->crtc == crtc) {
++		struct drm_connector_state *conn_state = get_state(state, connector);
++
++		if (!conn_state)
++			continue;
++
++		if (conn_state->crtc == crtc) {
+ 			drm_connector_list_iter_end(&conn_iter);
+ 			return connector->encoder;
+ 		}
+@@ -305,16 +313,17 @@ static void vc4_crtc_pixelvalve_reset(struct drm_crtc *crtc)
+ 	CRTC_WRITE(PV_CONTROL, CRTC_READ(PV_CONTROL) | PV_CONTROL_FIFO_CLR);
+ }
+ 
+-static void vc4_crtc_config_pv(struct drm_crtc *crtc)
++static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_atomic_state *state)
+ {
+ 	struct drm_device *dev = crtc->dev;
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+-	struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc);
++	struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc, state,
++							   drm_atomic_get_new_connector_state);
+ 	struct vc4_encoder *vc4_encoder = to_vc4_encoder(encoder);
+ 	struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
+ 	const struct vc4_pv_data *pv_data = vc4_crtc_to_vc4_pv_data(vc4_crtc);
+-	struct drm_crtc_state *state = crtc->state;
+-	struct drm_display_mode *mode = &state->adjusted_mode;
++	struct drm_crtc_state *crtc_state = crtc->state;
++	struct drm_display_mode *mode = &crtc_state->adjusted_mode;
+ 	bool interlace = mode->flags & DRM_MODE_FLAG_INTERLACE;
+ 	u32 pixel_rep = (mode->flags & DRM_MODE_FLAG_DBLCLK) ? 2 : 1;
+ 	bool is_dsi = (vc4_encoder->type == VC4_ENCODER_TYPE_DSI0 ||
+@@ -421,10 +430,10 @@ static void require_hvs_enabled(struct drm_device *dev)
+ }
+ 
+ static int vc4_crtc_disable(struct drm_crtc *crtc,
++			    struct drm_encoder *encoder,
+ 			    struct drm_atomic_state *state,
+ 			    unsigned int channel)
+ {
+-	struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc);
+ 	struct vc4_encoder *vc4_encoder = to_vc4_encoder(encoder);
+ 	struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
+ 	struct drm_device *dev = crtc->dev;
+@@ -465,10 +474,29 @@ static int vc4_crtc_disable(struct drm_crtc *crtc,
+ 	return 0;
+ }
+ 
++static struct drm_encoder *vc4_crtc_get_encoder_by_type(struct drm_crtc *crtc,
++							enum vc4_encoder_type type)
++{
++	struct drm_encoder *encoder;
++
++	drm_for_each_encoder(encoder, crtc->dev) {
++		struct vc4_encoder *vc4_encoder = to_vc4_encoder(encoder);
++
++		if (vc4_encoder->type == type)
++			return encoder;
++	}
++
++	return NULL;
++}
++
+ int vc4_crtc_disable_at_boot(struct drm_crtc *crtc)
+ {
+ 	struct drm_device *drm = crtc->dev;
+ 	struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
++	enum vc4_encoder_type encoder_type;
++	const struct vc4_pv_data *pv_data;
++	struct drm_encoder *encoder;
++	unsigned encoder_sel;
+ 	int channel;
+ 
+ 	if (!(of_device_is_compatible(vc4_crtc->pdev->dev.of_node,
+@@ -487,7 +515,17 @@ int vc4_crtc_disable_at_boot(struct drm_crtc *crtc)
+ 	if (channel < 0)
+ 		return 0;
+ 
+-	return vc4_crtc_disable(crtc, NULL, channel);
++	encoder_sel = VC4_GET_FIELD(CRTC_READ(PV_CONTROL), PV_CONTROL_CLK_SELECT);
++	if (WARN_ON(encoder_sel != 0))
++		return 0;
++
++	pv_data = vc4_crtc_to_vc4_pv_data(vc4_crtc);
++	encoder_type = pv_data->encoder_types[encoder_sel];
++	encoder = vc4_crtc_get_encoder_by_type(crtc, encoder_type);
++	if (WARN_ON(!encoder))
++		return 0;
++
++	return vc4_crtc_disable(crtc, encoder, NULL, channel);
+ }
+ 
+ static void vc4_crtc_atomic_disable(struct drm_crtc *crtc,
+@@ -496,6 +534,8 @@ static void vc4_crtc_atomic_disable(struct drm_crtc *crtc,
+ 	struct drm_crtc_state *old_state = drm_atomic_get_old_crtc_state(state,
+ 									 crtc);
+ 	struct vc4_crtc_state *old_vc4_state = to_vc4_crtc_state(old_state);
++	struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc, state,
++							   drm_atomic_get_old_connector_state);
+ 	struct drm_device *dev = crtc->dev;
+ 
+ 	require_hvs_enabled(dev);
+@@ -503,7 +543,7 @@ static void vc4_crtc_atomic_disable(struct drm_crtc *crtc,
+ 	/* Disable vblank irq handling before crtc is disabled. */
+ 	drm_crtc_vblank_off(crtc);
+ 
+-	vc4_crtc_disable(crtc, state, old_vc4_state->assigned_channel);
++	vc4_crtc_disable(crtc, encoder, state, old_vc4_state->assigned_channel);
+ 
+ 	/*
+ 	 * Make sure we issue a vblank event after disabling the CRTC if
+@@ -524,7 +564,8 @@ static void vc4_crtc_atomic_enable(struct drm_crtc *crtc,
+ {
+ 	struct drm_device *dev = crtc->dev;
+ 	struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
+-	struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc);
++	struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc, state,
++							   drm_atomic_get_new_connector_state);
+ 	struct vc4_encoder *vc4_encoder = to_vc4_encoder(encoder);
+ 
+ 	require_hvs_enabled(dev);
+@@ -539,7 +580,7 @@ static void vc4_crtc_atomic_enable(struct drm_crtc *crtc,
+ 	if (vc4_encoder->pre_crtc_configure)
+ 		vc4_encoder->pre_crtc_configure(encoder, state);
+ 
+-	vc4_crtc_config_pv(crtc);
++	vc4_crtc_config_pv(crtc, state);
+ 
+ 	CRTC_WRITE(PV_CONTROL, CRTC_READ(PV_CONTROL) | PV_CONTROL_EN);
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 8106b5634fe10..e94730beb15b7 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -2000,7 +2000,7 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ 							     &hpd_gpio_flags);
+ 		if (vc4_hdmi->hpd_gpio < 0) {
+ 			ret = vc4_hdmi->hpd_gpio;
+-			goto err_unprepare_hsm;
++			goto err_put_ddc;
+ 		}
+ 
+ 		vc4_hdmi->hpd_active_low = hpd_gpio_flags & OF_GPIO_ACTIVE_LOW;
+@@ -2041,8 +2041,8 @@ err_destroy_conn:
+ 	vc4_hdmi_connector_destroy(&vc4_hdmi->connector);
+ err_destroy_encoder:
+ 	drm_encoder_cleanup(encoder);
+-err_unprepare_hsm:
+ 	pm_runtime_disable(dev);
++err_put_ddc:
+ 	put_device(&vc4_hdmi->ddc->dev);
+ 
+ 	return ret;
+diff --git a/drivers/gpu/drm/vmwgfx/device_include/svga3d_surfacedefs.h b/drivers/gpu/drm/vmwgfx/device_include/svga3d_surfacedefs.h
+index 4db25bd9fa22d..127eaf0a0a580 100644
+--- a/drivers/gpu/drm/vmwgfx/device_include/svga3d_surfacedefs.h
++++ b/drivers/gpu/drm/vmwgfx/device_include/svga3d_surfacedefs.h
+@@ -1467,6 +1467,7 @@ struct svga3dsurface_cache {
+ 
+ /**
+  * struct svga3dsurface_loc - Surface location
++ * @sheet: The multisample sheet.
+  * @sub_resource: Surface subresource. Defined as layer * num_mip_levels +
+  * mip_level.
+  * @x: X coordinate.
+@@ -1474,6 +1475,7 @@ struct svga3dsurface_cache {
+  * @z: Z coordinate.
+  */
+ struct svga3dsurface_loc {
++	u32 sheet;
+ 	u32 sub_resource;
+ 	u32 x, y, z;
+ };
+@@ -1566,8 +1568,8 @@ svga3dsurface_get_loc(const struct svga3dsurface_cache *cache,
+ 	u32 layer;
+ 	int i;
+ 
+-	if (offset >= cache->sheet_bytes)
+-		offset %= cache->sheet_bytes;
++	loc->sheet = offset / cache->sheet_bytes;
++	offset -= loc->sheet * cache->sheet_bytes;
+ 
+ 	layer = offset / cache->mip_chain_bytes;
+ 	offset -= layer * cache->mip_chain_bytes;
+@@ -1631,6 +1633,7 @@ svga3dsurface_min_loc(const struct svga3dsurface_cache *cache,
+ 		      u32 sub_resource,
+ 		      struct svga3dsurface_loc *loc)
+ {
++	loc->sheet = 0;
+ 	loc->sub_resource = sub_resource;
+ 	loc->x = loc->y = loc->z = 0;
+ }
+@@ -1652,6 +1655,7 @@ svga3dsurface_max_loc(const struct svga3dsurface_cache *cache,
+ 	const struct drm_vmw_size *size;
+ 	u32 mip;
+ 
++	loc->sheet = 0;
+ 	loc->sub_resource = sub_resource + 1;
+ 	mip = sub_resource % cache->num_mip_levels;
+ 	size = &cache->mip[mip].size;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index 7a24196f92c38..d6a6d8a3387a9 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -2763,12 +2763,24 @@ static int vmw_cmd_dx_genmips(struct vmw_private *dev_priv,
+ {
+ 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdDXGenMips) =
+ 		container_of(header, typeof(*cmd), header);
+-	struct vmw_resource *ret;
++	struct vmw_resource *view;
++	struct vmw_res_cache_entry *rcache;
+ 
+-	ret = vmw_view_id_val_add(sw_context, vmw_view_sr,
+-				  cmd->body.shaderResourceViewId);
++	view = vmw_view_id_val_add(sw_context, vmw_view_sr,
++				   cmd->body.shaderResourceViewId);
++	if (IS_ERR(view))
++		return PTR_ERR(view);
+ 
+-	return PTR_ERR_OR_ZERO(ret);
++	/*
++	 * Normally the shader-resource view is not gpu-dirtying, but for
++	 * this particular command it is...
++	 * So mark the last looked-up surface, which is the surface
++	 * the view points to, gpu-dirty.
++	 */
++	rcache = &sw_context->res_cache[vmw_res_surface];
++	vmw_validation_res_set_dirty(sw_context->ctx, rcache->private,
++				     VMW_RES_DIRTY_SET);
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+index c3e55c1376eb8..beab3e19d8e21 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+@@ -1804,6 +1804,19 @@ static void vmw_surface_tex_dirty_range_add(struct vmw_resource *res,
+ 	svga3dsurface_get_loc(cache, &loc2, end - 1);
+ 	svga3dsurface_inc_loc(cache, &loc2);
+ 
++	if (loc1.sheet != loc2.sheet) {
++		u32 sub_res;
++
++		/*
++		 * The dirty range spans more than one multisample sheet.
++		 * Computing this optimally would mean taking the union of
++		 * per-sheet dirty regions, but since the case is rare, just
++		 * dirty the whole surface.
++		 */
++		for (sub_res = 0; sub_res < dirty->num_subres; ++sub_res)
++			vmw_subres_dirty_full(dirty, sub_res);
++		return;
++	}
+ 	if (loc1.sub_resource + 1 == loc2.sub_resource) {
+ 		/* Dirty range covers a single sub-resource */
+ 		vmw_subres_dirty_add(dirty, &loc1, &loc2);
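+
+The vmwgfx fix above stops folding the byte offset back into a single
+multisample sheet; it records the sheet index instead, so a dirty range
+whose endpoints land on different sheets can be detected and handled (here
+by dirtying the whole surface). The decode step in isolation, with
+simplified field names:
+
+#include <linux/types.h>
+
+struct surf_loc {
+	u32 sheet;
+	u32 layer;
+	u32 layer_offset;
+};
+
+static void decode_loc(struct surf_loc *loc, u32 offset,
+		       u32 sheet_bytes, u32 mip_chain_bytes)
+{
+	/* Derive the sheet first instead of wrapping the offset, so
+	 * callers can compare the sheets of a range's two endpoints. */
+	loc->sheet = offset / sheet_bytes;
+	offset -= loc->sheet * sheet_bytes;
+
+	loc->layer = offset / mip_chain_bytes;
+	loc->layer_offset = offset - loc->layer * mip_chain_bytes;
+}
+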
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 0de2788b9814c..7db332139f7d5 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -2306,12 +2306,8 @@ static int hid_device_remove(struct device *dev)
+ {
+ 	struct hid_device *hdev = to_hid_device(dev);
+ 	struct hid_driver *hdrv;
+-	int ret = 0;
+ 
+-	if (down_interruptible(&hdev->driver_input_lock)) {
+-		ret = -EINTR;
+-		goto end;
+-	}
++	down(&hdev->driver_input_lock);
+ 	hdev->io_started = false;
+ 
+ 	hdrv = hdev->driver;
+@@ -2326,8 +2322,8 @@ static int hid_device_remove(struct device *dev)
+ 
+ 	if (!hdev->io_started)
+ 		up(&hdev->driver_input_lock);
+-end:
+-	return ret;
++
++	return 0;
+ }
+ 
+ static ssize_t modalias_show(struct device *dev, struct device_attribute *a,
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index b84a0a11e05bf..63ca5959dc679 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -396,6 +396,7 @@
+ #define USB_DEVICE_ID_HP_X2_10_COVER	0x0755
+ #define I2C_DEVICE_ID_HP_SPECTRE_X360_15	0x2817
+ #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN	0x2706
++#define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN	0x261A
+ 
+ #define USB_VENDOR_ID_ELECOM		0x056e
+ #define USB_DEVICE_ID_ELECOM_BM084	0x0061
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index abbfa91e73e43..68c8644234a4a 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -326,6 +326,8 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ 	  HID_BATTERY_QUIRK_IGNORE },
+ 	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_15),
+ 	  HID_BATTERY_QUIRK_IGNORE },
++	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN),
++	  HID_BATTERY_QUIRK_IGNORE },
+ 	{}
+ };
+ 
+diff --git a/drivers/hid/hid-sony.c b/drivers/hid/hid-sony.c
+index 8319b0ce385a5..b3722c51ec78a 100644
+--- a/drivers/hid/hid-sony.c
++++ b/drivers/hid/hid-sony.c
+@@ -597,9 +597,8 @@ struct sony_sc {
+ 	/* DS4 calibration data */
+ 	struct ds4_calibration_data ds4_calib_data[6];
+ 	/* GH Live */
++	struct urb *ghl_urb;
+ 	struct timer_list ghl_poke_timer;
+-	struct usb_ctrlrequest *ghl_cr;
+-	u8 *ghl_databuf;
+ };
+ 
+ static void sony_set_leds(struct sony_sc *sc);
+@@ -625,66 +624,54 @@ static inline void sony_schedule_work(struct sony_sc *sc,
+ 
+ static void ghl_magic_poke_cb(struct urb *urb)
+ {
+-	if (urb) {
+-		/* Free sc->ghl_cr and sc->ghl_databuf allocated in
+-		 * ghl_magic_poke()
+-		 */
+-		kfree(urb->setup_packet);
+-		kfree(urb->transfer_buffer);
+-	}
++	struct sony_sc *sc = urb->context;
++
++	if (urb->status < 0)
++		hid_err(sc->hdev, "URB transfer failed: %d", urb->status);
++
++	mod_timer(&sc->ghl_poke_timer, jiffies + GHL_GUITAR_POKE_INTERVAL*HZ);
+ }
+ 
+ static void ghl_magic_poke(struct timer_list *t)
+ {
++	int ret;
+ 	struct sony_sc *sc = from_timer(sc, t, ghl_poke_timer);
+ 
+-	int ret;
++	ret = usb_submit_urb(sc->ghl_urb, GFP_ATOMIC);
++	if (ret < 0)
++		hid_err(sc->hdev, "usb_submit_urb failed: %d", ret);
++}
++
++static int ghl_init_urb(struct sony_sc *sc, struct usb_device *usbdev)
++{
++	struct usb_ctrlrequest *cr;
++	u16 poke_size;
++	u8 *databuf;
+ 	unsigned int pipe;
+-	struct urb *urb;
+-	struct usb_device *usbdev = to_usb_device(sc->hdev->dev.parent->parent);
+-	const u16 poke_size =
+-		ARRAY_SIZE(ghl_ps3wiiu_magic_data);
+ 
++	poke_size = ARRAY_SIZE(ghl_ps3wiiu_magic_data);
+ 	pipe = usb_sndctrlpipe(usbdev, 0);
+ 
+-	if (!sc->ghl_cr) {
+-		sc->ghl_cr = kzalloc(sizeof(*sc->ghl_cr), GFP_ATOMIC);
+-		if (!sc->ghl_cr)
+-			goto resched;
+-	}
+-
+-	if (!sc->ghl_databuf) {
+-		sc->ghl_databuf = kzalloc(poke_size, GFP_ATOMIC);
+-		if (!sc->ghl_databuf)
+-			goto resched;
+-	}
++	cr = devm_kzalloc(&sc->hdev->dev, sizeof(*cr), GFP_ATOMIC);
++	if (cr == NULL)
++		return -ENOMEM;
+ 
+-	urb = usb_alloc_urb(0, GFP_ATOMIC);
+-	if (!urb)
+-		goto resched;
++	databuf = devm_kzalloc(&sc->hdev->dev, poke_size, GFP_ATOMIC);
++	if (databuf == NULL)
++		return -ENOMEM;
+ 
+-	sc->ghl_cr->bRequestType =
++	cr->bRequestType =
+ 		USB_RECIP_INTERFACE | USB_TYPE_CLASS | USB_DIR_OUT;
+-	sc->ghl_cr->bRequest = USB_REQ_SET_CONFIGURATION;
+-	sc->ghl_cr->wValue = cpu_to_le16(ghl_ps3wiiu_magic_value);
+-	sc->ghl_cr->wIndex = 0;
+-	sc->ghl_cr->wLength = cpu_to_le16(poke_size);
+-	memcpy(sc->ghl_databuf, ghl_ps3wiiu_magic_data, poke_size);
+-
++	cr->bRequest = USB_REQ_SET_CONFIGURATION;
++	cr->wValue = cpu_to_le16(ghl_ps3wiiu_magic_value);
++	cr->wIndex = 0;
++	cr->wLength = cpu_to_le16(poke_size);
++	memcpy(databuf, ghl_ps3wiiu_magic_data, poke_size);
+ 	usb_fill_control_urb(
+-		urb, usbdev, pipe,
+-		(unsigned char *) sc->ghl_cr, sc->ghl_databuf,
+-		poke_size, ghl_magic_poke_cb, NULL);
+-	ret = usb_submit_urb(urb, GFP_ATOMIC);
+-	if (ret < 0) {
+-		kfree(sc->ghl_databuf);
+-		kfree(sc->ghl_cr);
+-	}
+-	usb_free_urb(urb);
+-
+-resched:
+-	/* Reschedule for next time */
+-	mod_timer(&sc->ghl_poke_timer, jiffies + GHL_GUITAR_POKE_INTERVAL*HZ);
++		sc->ghl_urb, usbdev, pipe,
++		(unsigned char *) cr, databuf, poke_size,
++		ghl_magic_poke_cb, sc);
++	return 0;
+ }
+ 
+ static int guitar_mapping(struct hid_device *hdev, struct hid_input *hi,
+@@ -2981,6 +2968,7 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	int ret;
+ 	unsigned long quirks = id->driver_data;
+ 	struct sony_sc *sc;
++	struct usb_device *usbdev;
+ 	unsigned int connect_mask = HID_CONNECT_DEFAULT;
+ 
+ 	if (!strcmp(hdev->name, "FutureMax Dance Mat"))
+@@ -3000,6 +2988,7 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	sc->quirks = quirks;
+ 	hid_set_drvdata(hdev, sc);
+ 	sc->hdev = hdev;
++	usbdev = to_usb_device(sc->hdev->dev.parent->parent);
+ 
+ 	ret = hid_parse(hdev);
+ 	if (ret) {
+@@ -3042,6 +3031,15 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	}
+ 
+ 	if (sc->quirks & GHL_GUITAR_PS3WIIU) {
++		sc->ghl_urb = usb_alloc_urb(0, GFP_ATOMIC);
++		if (!sc->ghl_urb)
++			return -ENOMEM;
++		ret = ghl_init_urb(sc, usbdev);
++		if (ret) {
++			hid_err(hdev, "error preparing URB\n");
++			return ret;
++		}
++
+ 		timer_setup(&sc->ghl_poke_timer, ghl_magic_poke, 0);
+ 		mod_timer(&sc->ghl_poke_timer,
+ 			  jiffies + GHL_GUITAR_POKE_INTERVAL*HZ);
+@@ -3054,8 +3052,10 @@ static void sony_remove(struct hid_device *hdev)
+ {
+ 	struct sony_sc *sc = hid_get_drvdata(hdev);
+ 
+-	if (sc->quirks & GHL_GUITAR_PS3WIIU)
++	if (sc->quirks & GHL_GUITAR_PS3WIIU) {
+ 		del_timer_sync(&sc->ghl_poke_timer);
++		usb_free_urb(sc->ghl_urb);
++	}
+ 
+ 	hid_hw_close(hdev);
+ 
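+
+The hid-sony rework above turns a per-tick allocate/submit/free cycle into
+a single URB built at probe time: the setup packet and data buffer come
+from devm (released automatically with the device), the timer callback only
+resubmits, the completion handler re-arms the timer, and remove stops the
+timer before freeing the URB. The resubmit half of that lifecycle, sketched
+with hypothetical names:
+
+#include <linux/jiffies.h>
+#include <linux/timer.h>
+#include <linux/usb.h>
+
+struct poke_ctx {
+	struct urb *urb;
+	struct timer_list poke_timer;
+};
+
+/* Completion handler: re-arm the timer for the next poke. */
+static void poke_done(struct urb *urb)
+{
+	struct poke_ctx *ctx = urb->context;
+
+	mod_timer(&ctx->poke_timer, jiffies + 8 * HZ);
+}
+
+/* Timer callback: runs in atomic context, hence GFP_ATOMIC. */
+static void poke_fire(struct timer_list *t)
+{
+	struct poke_ctx *ctx = from_timer(ctx, t, poke_timer);
+
+	usb_submit_urb(ctx->urb, GFP_ATOMIC);
+}
+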
+diff --git a/drivers/hid/surface-hid/surface_hid.c b/drivers/hid/surface-hid/surface_hid.c
+index 3477b31611ae1..a3a70e4f3f6c9 100644
+--- a/drivers/hid/surface-hid/surface_hid.c
++++ b/drivers/hid/surface-hid/surface_hid.c
+@@ -143,7 +143,7 @@ static int ssam_hid_get_raw_report(struct surface_hid_device *shid, u8 rprt_id,
+ 	rqst.target_id = shid->uid.target;
+ 	rqst.instance_id = shid->uid.instance;
+ 	rqst.command_id = SURFACE_HID_CID_GET_FEATURE_REPORT;
+-	rqst.flags = 0;
++	rqst.flags = SSAM_REQUEST_HAS_RESPONSE;
+ 	rqst.length = sizeof(rprt_id);
+ 	rqst.payload = &rprt_id;
+ 
+diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
+index 71c886245dbf2..8f16654eca098 100644
+--- a/drivers/hid/wacom_wac.h
++++ b/drivers/hid/wacom_wac.h
+@@ -122,7 +122,7 @@
+ #define WACOM_HID_WD_TOUCHONOFF         (WACOM_HID_UP_WACOMDIGITIZER | 0x0454)
+ #define WACOM_HID_WD_BATTERY_LEVEL      (WACOM_HID_UP_WACOMDIGITIZER | 0x043b)
+ #define WACOM_HID_WD_EXPRESSKEY00       (WACOM_HID_UP_WACOMDIGITIZER | 0x0910)
+-#define WACOM_HID_WD_EXPRESSKEYCAP00    (WACOM_HID_UP_WACOMDIGITIZER | 0x0950)
++#define WACOM_HID_WD_EXPRESSKEYCAP00    (WACOM_HID_UP_WACOMDIGITIZER | 0x0940)
+ #define WACOM_HID_WD_MODE_CHANGE        (WACOM_HID_UP_WACOMDIGITIZER | 0x0980)
+ #define WACOM_HID_WD_MUTE_DEVICE        (WACOM_HID_UP_WACOMDIGITIZER | 0x0981)
+ #define WACOM_HID_WD_CONTROLPANEL       (WACOM_HID_UP_WACOMDIGITIZER | 0x0982)
+diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
+index 311cd005b3be6..5e479d54918cf 100644
+--- a/drivers/hv/connection.c
++++ b/drivers/hv/connection.c
+@@ -232,8 +232,10 @@ int vmbus_connect(void)
+ 	 */
+ 
+ 	for (i = 0; ; i++) {
+-		if (i == ARRAY_SIZE(vmbus_versions))
++		if (i == ARRAY_SIZE(vmbus_versions)) {
++			ret = -EDOM;
+ 			goto cleanup;
++		}
+ 
+ 		version = vmbus_versions[i];
+ 		if (version > max_version)
+diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
+index e4aefeb330daf..136576cba26f5 100644
+--- a/drivers/hv/hv_util.c
++++ b/drivers/hv/hv_util.c
+@@ -750,8 +750,8 @@ static int hv_timesync_init(struct hv_util_service *srv)
+ 	 */
+ 	hv_ptp_clock = ptp_clock_register(&ptp_hyperv_info, NULL);
+ 	if (IS_ERR_OR_NULL(hv_ptp_clock)) {
+-		pr_err("cannot register PTP clock: %ld\n",
+-		       PTR_ERR(hv_ptp_clock));
++		pr_err("cannot register PTP clock: %d\n",
++		       PTR_ERR_OR_ZERO(hv_ptp_clock));
+ 		hv_ptp_clock = NULL;
+ 	}
+ 
+diff --git a/drivers/hwmon/lm70.c b/drivers/hwmon/lm70.c
+index 40eab3349904b..6b884ea009877 100644
+--- a/drivers/hwmon/lm70.c
++++ b/drivers/hwmon/lm70.c
+@@ -22,10 +22,10 @@
+ #include <linux/hwmon.h>
+ #include <linux/mutex.h>
+ #include <linux/mod_devicetable.h>
++#include <linux/of.h>
+ #include <linux/property.h>
+ #include <linux/spi/spi.h>
+ #include <linux/slab.h>
+-#include <linux/acpi.h>
+ 
+ #define DRVNAME		"lm70"
+ 
+@@ -148,29 +148,6 @@ static const struct of_device_id lm70_of_ids[] = {
+ MODULE_DEVICE_TABLE(of, lm70_of_ids);
+ #endif
+ 
+-#ifdef CONFIG_ACPI
+-static const struct acpi_device_id lm70_acpi_ids[] = {
+-	{
+-		.id = "LM000070",
+-		.driver_data = LM70_CHIP_LM70,
+-	},
+-	{
+-		.id = "TMP00121",
+-		.driver_data = LM70_CHIP_TMP121,
+-	},
+-	{
+-		.id = "LM000071",
+-		.driver_data = LM70_CHIP_LM71,
+-	},
+-	{
+-		.id = "LM000074",
+-		.driver_data = LM70_CHIP_LM74,
+-	},
+-	{},
+-};
+-MODULE_DEVICE_TABLE(acpi, lm70_acpi_ids);
+-#endif
+-
+ static int lm70_probe(struct spi_device *spi)
+ {
+ 	struct device *hwmon_dev;
+@@ -217,7 +194,6 @@ static struct spi_driver lm70_driver = {
+ 	.driver = {
+ 		.name	= "lm70",
+ 		.of_match_table	= of_match_ptr(lm70_of_ids),
+-		.acpi_match_table = ACPI_PTR(lm70_acpi_ids),
+ 	},
+ 	.id_table = lm70_ids,
+ 	.probe	= lm70_probe,
+diff --git a/drivers/hwmon/max31722.c b/drivers/hwmon/max31722.c
+index 062eceb7be0db..613338cbcb170 100644
+--- a/drivers/hwmon/max31722.c
++++ b/drivers/hwmon/max31722.c
+@@ -6,7 +6,6 @@
+  * Copyright (c) 2016, Intel Corporation.
+  */
+ 
+-#include <linux/acpi.h>
+ #include <linux/hwmon.h>
+ #include <linux/hwmon-sysfs.h>
+ #include <linux/kernel.h>
+@@ -133,20 +132,12 @@ static const struct spi_device_id max31722_spi_id[] = {
+ 	{"max31723", 0},
+ 	{}
+ };
+-
+-static const struct acpi_device_id __maybe_unused max31722_acpi_id[] = {
+-	{"MAX31722", 0},
+-	{"MAX31723", 0},
+-	{}
+-};
+-
+ MODULE_DEVICE_TABLE(spi, max31722_spi_id);
+ 
+ static struct spi_driver max31722_driver = {
+ 	.driver = {
+ 		.name = "max31722",
+ 		.pm = &max31722_pm_ops,
+-		.acpi_match_table = ACPI_PTR(max31722_acpi_id),
+ 	},
+ 	.probe =            max31722_probe,
+ 	.remove =           max31722_remove,
+diff --git a/drivers/hwmon/max31790.c b/drivers/hwmon/max31790.c
+index 86e6c71db685c..67677c4377687 100644
+--- a/drivers/hwmon/max31790.c
++++ b/drivers/hwmon/max31790.c
+@@ -27,6 +27,7 @@
+ 
+ /* Fan Config register bits */
+ #define MAX31790_FAN_CFG_RPM_MODE	0x80
++#define MAX31790_FAN_CFG_CTRL_MON	0x10
+ #define MAX31790_FAN_CFG_TACH_INPUT_EN	0x08
+ #define MAX31790_FAN_CFG_TACH_INPUT	0x01
+ 
+@@ -104,7 +105,7 @@ static struct max31790_data *max31790_update_device(struct device *dev)
+ 				data->tach[NR_CHANNEL + i] = rv;
+ 			} else {
+ 				rv = i2c_smbus_read_word_swapped(client,
+-						MAX31790_REG_PWMOUT(i));
++						MAX31790_REG_PWM_DUTY_CYCLE(i));
+ 				if (rv < 0)
+ 					goto abort;
+ 				data->pwm[i] = rv;
+@@ -170,7 +171,7 @@ static int max31790_read_fan(struct device *dev, u32 attr, int channel,
+ 
+ 	switch (attr) {
+ 	case hwmon_fan_input:
+-		sr = get_tach_period(data->fan_dynamics[channel]);
++		sr = get_tach_period(data->fan_dynamics[channel % NR_CHANNEL]);
+ 		rpm = RPM_FROM_REG(data->tach[channel], sr);
+ 		*val = rpm;
+ 		return 0;
+@@ -271,12 +272,12 @@ static int max31790_read_pwm(struct device *dev, u32 attr, int channel,
+ 		*val = data->pwm[channel] >> 8;
+ 		return 0;
+ 	case hwmon_pwm_enable:
+-		if (fan_config & MAX31790_FAN_CFG_RPM_MODE)
++		if (fan_config & MAX31790_FAN_CFG_CTRL_MON)
++			*val = 0;
++		else if (fan_config & MAX31790_FAN_CFG_RPM_MODE)
+ 			*val = 2;
+-		else if (fan_config & MAX31790_FAN_CFG_TACH_INPUT_EN)
+-			*val = 1;
+ 		else
+-			*val = 0;
++			*val = 1;
+ 		return 0;
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -299,31 +300,41 @@ static int max31790_write_pwm(struct device *dev, u32 attr, int channel,
+ 			err = -EINVAL;
+ 			break;
+ 		}
+-		data->pwm[channel] = val << 8;
++		data->valid = false;
+ 		err = i2c_smbus_write_word_swapped(client,
+ 						   MAX31790_REG_PWMOUT(channel),
+-						   data->pwm[channel]);
++						   val << 8);
+ 		break;
+ 	case hwmon_pwm_enable:
+ 		fan_config = data->fan_config[channel];
+ 		if (val == 0) {
+-			fan_config &= ~(MAX31790_FAN_CFG_TACH_INPUT_EN |
+-					MAX31790_FAN_CFG_RPM_MODE);
++			fan_config |= MAX31790_FAN_CFG_CTRL_MON;
++			/*
++			 * Disable RPM mode; otherwise disabling fan speed
++			 * monitoring is not possible.
++			 */
++			fan_config &= ~MAX31790_FAN_CFG_RPM_MODE;
+ 		} else if (val == 1) {
+-			fan_config = (fan_config |
+-				      MAX31790_FAN_CFG_TACH_INPUT_EN) &
+-				     ~MAX31790_FAN_CFG_RPM_MODE;
++			fan_config &= ~(MAX31790_FAN_CFG_CTRL_MON | MAX31790_FAN_CFG_RPM_MODE);
+ 		} else if (val == 2) {
+-			fan_config |= MAX31790_FAN_CFG_TACH_INPUT_EN |
+-				      MAX31790_FAN_CFG_RPM_MODE;
++			fan_config &= ~MAX31790_FAN_CFG_CTRL_MON;
++			/*
++			 * The chip sets MAX31790_FAN_CFG_TACH_INPUT_EN on its
++			 * own if MAX31790_FAN_CFG_RPM_MODE is set.
++			 * Do it here as well to reflect the actual register
++			 * value in the cache.
++			 */
++			fan_config |= (MAX31790_FAN_CFG_RPM_MODE | MAX31790_FAN_CFG_TACH_INPUT_EN);
+ 		} else {
+ 			err = -EINVAL;
+ 			break;
+ 		}
+-		data->fan_config[channel] = fan_config;
+-		err = i2c_smbus_write_byte_data(client,
+-					MAX31790_REG_FAN_CONFIG(channel),
+-					fan_config);
++		if (fan_config != data->fan_config[channel]) {
++			err = i2c_smbus_write_byte_data(client, MAX31790_REG_FAN_CONFIG(channel),
++							fan_config);
++			if (!err)
++				data->fan_config[channel] = fan_config;
++		}
+ 		break;
+ 	default:
+ 		err = -EOPNOTSUPP;
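+
+Besides remapping pwm_enable onto the chip's monitor-only bit, the max31790
+hunk above changes the write discipline: the register is written only when
+the value actually changes, and the cached copy is refreshed only after the
+write succeeds, so the cache never claims a configuration the chip
+rejected. That discipline reduced to one helper (reg and cache are
+illustrative):
+
+#include <linux/i2c.h>
+#include <linux/types.h>
+
+static int write_cached(struct i2c_client *client, u8 reg, u8 *cache, u8 val)
+{
+	int err = 0;
+
+	if (val != *cache) {
+		err = i2c_smbus_write_byte_data(client, reg, val);
+		if (!err)
+			*cache = val;	/* cache follows the hardware */
+	}
+	return err;
+}
+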
+diff --git a/drivers/hwmon/pmbus/bpa-rs600.c b/drivers/hwmon/pmbus/bpa-rs600.c
+index f6558ee9dec36..2be69fedfa361 100644
+--- a/drivers/hwmon/pmbus/bpa-rs600.c
++++ b/drivers/hwmon/pmbus/bpa-rs600.c
+@@ -46,6 +46,32 @@ static int bpa_rs600_read_byte_data(struct i2c_client *client, int page, int reg
+ 	return ret;
+ }
+ 
++/*
++ * The BPA-RS600 violates the PMBus spec. Specifically it treats the
++ * mantissa as unsigned. Deal with this here to allow the PMBus core
++ * to work with correctly encoded data.
++ */
++static int bpa_rs600_read_vin(struct i2c_client *client)
++{
++	int ret, exponent, mantissa;
++
++	ret = pmbus_read_word_data(client, 0, 0xff, PMBUS_READ_VIN);
++	if (ret < 0)
++		return ret;
++
++	if (ret & BIT(10)) {
++		exponent = ret >> 11;
++		mantissa = ret & 0x7ff;
++
++		exponent++;
++		mantissa >>= 1;
++
++		ret = (exponent << 11) | mantissa;
++	}
++
++	return ret;
++}
++
+ static int bpa_rs600_read_word_data(struct i2c_client *client, int page, int phase, int reg)
+ {
+ 	int ret;
+@@ -85,6 +111,9 @@ static int bpa_rs600_read_word_data(struct i2c_client *client, int page, int pha
+ 		/* These commands return data but it is invalid/un-documented */
+ 		ret = -ENXIO;
+ 		break;
++	case PMBUS_READ_VIN:
++		ret = bpa_rs600_read_vin(client);
++		break;
+ 	default:
+ 		if (reg >= PMBUS_VIRT_BASE)
+ 			ret = -ENXIO;
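+
+In PMBus LINEAR11 a word carries an 11-bit two's-complement mantissa scaled
+by two to a 5-bit signed exponent; because the BPA-RS600 emits an unsigned
+mantissa, any reading with bit 10 set would decode as negative. The
+workaround above halves the mantissa and bumps the exponent, which
+preserves the value (mantissa 1280 at exponent 0 becomes mantissa 640 at
+exponent 1; both decode to 1280) while clearing the sign bit. A hedged
+sketch of a generic LINEAR11 decoder, separate from the driver:
+
+#include <linux/types.h>
+
+static long linear11_to_milli(u16 word)
+{
+	int exponent = ((s16)word) >> 11;	/* top 5 bits, signed */
+	int mantissa = ((s16)(word << 5)) >> 5;	/* low 11 bits, signed */
+	long val = 1000L * mantissa;
+
+	return exponent >= 0 ? val << exponent : val >> -exponent;
+}
+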
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index 6c68d34d956e8..4ddf3d2338443 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -608,7 +608,7 @@ static struct coresight_device *
+ coresight_find_enabled_sink(struct coresight_device *csdev)
+ {
+ 	int i;
+-	struct coresight_device *sink;
++	struct coresight_device *sink = NULL;
+ 
+ 	if ((csdev->type == CORESIGHT_DEV_TYPE_SINK ||
+ 	     csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) &&
+diff --git a/drivers/i2c/busses/i2c-mpc.c b/drivers/i2c/busses/i2c-mpc.c
+index dcca9c2396db1..6d5014ebaab5e 100644
+--- a/drivers/i2c/busses/i2c-mpc.c
++++ b/drivers/i2c/busses/i2c-mpc.c
+@@ -635,6 +635,8 @@ static irqreturn_t mpc_i2c_isr(int irq, void *dev_id)
+ 
+ 	status = readb(i2c->base + MPC_I2C_SR);
+ 	if (status & CSR_MIF) {
++		/* Read again to allow register to stabilise */
++		status = readb(i2c->base + MPC_I2C_SR);
+ 		writeb(0, i2c->base + MPC_I2C_SR);
+ 		mpc_i2c_do_intr(i2c, status);
+ 		return IRQ_HANDLED;
+diff --git a/drivers/iio/accel/bma180.c b/drivers/iio/accel/bma180.c
+index b8a7469cdae41..b8cea42fca1a1 100644
+--- a/drivers/iio/accel/bma180.c
++++ b/drivers/iio/accel/bma180.c
+@@ -55,7 +55,7 @@ struct bma180_part_info {
+ 
+ 	u8 int_reset_reg, int_reset_mask;
+ 	u8 sleep_reg, sleep_mask;
+-	u8 bw_reg, bw_mask;
++	u8 bw_reg, bw_mask, bw_offset;
+ 	u8 scale_reg, scale_mask;
+ 	u8 power_reg, power_mask, lowpower_val;
+ 	u8 int_enable_reg, int_enable_mask;
+@@ -127,6 +127,7 @@ struct bma180_part_info {
+ 
+ #define BMA250_RANGE_MASK	GENMASK(3, 0) /* Range of accel values */
+ #define BMA250_BW_MASK		GENMASK(4, 0) /* Accel bandwidth */
++#define BMA250_BW_OFFSET	8
+ #define BMA250_SUSPEND_MASK	BIT(7) /* chip will sleep */
+ #define BMA250_LOWPOWER_MASK	BIT(6)
+ #define BMA250_DATA_INTEN_MASK	BIT(4)
+@@ -143,6 +144,7 @@ struct bma180_part_info {
+ 
+ #define BMA254_RANGE_MASK	GENMASK(3, 0) /* Range of accel values */
+ #define BMA254_BW_MASK		GENMASK(4, 0) /* Accel bandwidth */
++#define BMA254_BW_OFFSET	8
+ #define BMA254_SUSPEND_MASK	BIT(7) /* chip will sleep */
+ #define BMA254_LOWPOWER_MASK	BIT(6)
+ #define BMA254_DATA_INTEN_MASK	BIT(4)
+@@ -162,7 +164,11 @@ struct bma180_data {
+ 	int scale;
+ 	int bw;
+ 	bool pmode;
+-	u8 buff[16]; /* 3x 16-bit + 8-bit + padding + timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		s16 chan[4];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ enum bma180_chan {
+@@ -283,7 +289,8 @@ static int bma180_set_bw(struct bma180_data *data, int val)
+ 	for (i = 0; i < data->part_info->num_bw; ++i) {
+ 		if (data->part_info->bw_table[i] == val) {
+ 			ret = bma180_set_bits(data, data->part_info->bw_reg,
+-				data->part_info->bw_mask, i);
++				data->part_info->bw_mask,
++				i + data->part_info->bw_offset);
+ 			if (ret) {
+ 				dev_err(&data->client->dev,
+ 					"failed to set bandwidth\n");
+@@ -876,6 +883,7 @@ static const struct bma180_part_info bma180_part_info[] = {
+ 		.sleep_mask = BMA250_SUSPEND_MASK,
+ 		.bw_reg = BMA250_BW_REG,
+ 		.bw_mask = BMA250_BW_MASK,
++		.bw_offset = BMA250_BW_OFFSET,
+ 		.scale_reg = BMA250_RANGE_REG,
+ 		.scale_mask = BMA250_RANGE_MASK,
+ 		.power_reg = BMA250_POWER_REG,
+@@ -905,6 +913,7 @@ static const struct bma180_part_info bma180_part_info[] = {
+ 		.sleep_mask = BMA254_SUSPEND_MASK,
+ 		.bw_reg = BMA254_BW_REG,
+ 		.bw_mask = BMA254_BW_MASK,
++		.bw_offset = BMA254_BW_OFFSET,
+ 		.scale_reg = BMA254_RANGE_REG,
+ 		.scale_mask = BMA254_RANGE_MASK,
+ 		.power_reg = BMA254_POWER_REG,
+@@ -938,12 +947,12 @@ static irqreturn_t bma180_trigger_handler(int irq, void *p)
+ 			mutex_unlock(&data->mutex);
+ 			goto err;
+ 		}
+-		((s16 *)data->buff)[i++] = ret;
++		data->scan.chan[i++] = ret;
+ 	}
+ 
+ 	mutex_unlock(&data->mutex);
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buff, time_ns);
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan, time_ns);
+ err:
+ 	iio_trigger_notify_done(indio_dev->trig);
+ 
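
The bma180 change above is the first of many identical fixes in this patch: iio_push_to_buffers_with_timestamp() stores an s64 timestamp at an 8-byte-aligned offset past the channel data, so the source buffer must itself be 8-byte aligned and sized for that slot. A bare s16/u8 array only guarantees the alignment of its element type; wrapping the channels and an `s64 timestamp __aligned(8)` member in a struct makes the compiler enforce both properties. A generic sketch of the resulting layout (field sizes illustrative):

	struct example_scan {
		s16 chans[3];			/* channel data: bytes 0..5        */
						/* compiler pads bytes 6..7        */
		s64 timestamp __aligned(8);	/* bytes 8..15, naturally aligned  */
	};
	/*
	 * sizeof(struct example_scan) == 16 and the struct itself is 8-byte
	 * aligned, which is what iio_push_to_buffers_with_timestamp() expects.
	 */
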
+diff --git a/drivers/iio/accel/bma220_spi.c b/drivers/iio/accel/bma220_spi.c
+index 36fc9876dbcaf..0622c79364994 100644
+--- a/drivers/iio/accel/bma220_spi.c
++++ b/drivers/iio/accel/bma220_spi.c
+@@ -63,7 +63,11 @@ static const int bma220_scale_table[][2] = {
+ struct bma220_data {
+ 	struct spi_device *spi_device;
+ 	struct mutex lock;
+-	s8 buffer[16]; /* 3x8-bit channels + 5x8 padding + 8x8 timestamp */
++	struct {
++		s8 chans[3];
++		/* Ensure timestamp is naturally aligned. */
++		s64 timestamp __aligned(8);
++	} scan;
+ 	u8 tx_buf[2] ____cacheline_aligned;
+ };
+ 
+@@ -94,12 +98,12 @@ static irqreturn_t bma220_trigger_handler(int irq, void *p)
+ 
+ 	mutex_lock(&data->lock);
+ 	data->tx_buf[0] = BMA220_REG_ACCEL_X | BMA220_READ_MASK;
+-	ret = spi_write_then_read(spi, data->tx_buf, 1, data->buffer,
++	ret = spi_write_then_read(spi, data->tx_buf, 1, &data->scan.chans,
+ 				  ARRAY_SIZE(bma220_channels) - 1);
+ 	if (ret < 0)
+ 		goto err;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ err:
+ 	mutex_unlock(&data->lock);
+diff --git a/drivers/iio/accel/bmc150-accel-core.c b/drivers/iio/accel/bmc150-accel-core.c
+index 04d85ce34e9f5..5d58b5533cb82 100644
+--- a/drivers/iio/accel/bmc150-accel-core.c
++++ b/drivers/iio/accel/bmc150-accel-core.c
+@@ -1177,11 +1177,12 @@ static const struct bmc150_accel_chip_info bmc150_accel_chip_info_tbl[] = {
+ 		/*
+ 		 * The datasheet page 17 says:
+ 		 * 15.6, 31.3, 62.5 and 125 mg per LSB.
++		 * IIO unit is m/s^2 so multiply by g = 9.80665 m/s^2.
+ 		 */
+-		.scale_table = { {156000, BMC150_ACCEL_DEF_RANGE_2G},
+-				 {313000, BMC150_ACCEL_DEF_RANGE_4G},
+-				 {625000, BMC150_ACCEL_DEF_RANGE_8G},
+-				 {1250000, BMC150_ACCEL_DEF_RANGE_16G} },
++		.scale_table = { {152984, BMC150_ACCEL_DEF_RANGE_2G},
++				 {306948, BMC150_ACCEL_DEF_RANGE_4G},
++				 {612916, BMC150_ACCEL_DEF_RANGE_8G},
++				 {1225831, BMC150_ACCEL_DEF_RANGE_16G} },
+ 	},
+ 	[bma222e] = {
+ 		.name = "BMA222E",
+@@ -1809,21 +1810,17 @@ EXPORT_SYMBOL_GPL(bmc150_accel_core_probe);
+ 
+ struct i2c_client *bmc150_get_second_device(struct i2c_client *client)
+ {
+-	struct bmc150_accel_data *data = i2c_get_clientdata(client);
+-
+-	if (!data)
+-		return NULL;
++	struct bmc150_accel_data *data = iio_priv(i2c_get_clientdata(client));
+ 
+ 	return data->second_device;
+ }
+ EXPORT_SYMBOL_GPL(bmc150_get_second_device);
+ 
+-void bmc150_set_second_device(struct i2c_client *client)
++void bmc150_set_second_device(struct i2c_client *client, struct i2c_client *second_dev)
+ {
+-	struct bmc150_accel_data *data = i2c_get_clientdata(client);
++	struct bmc150_accel_data *data = iio_priv(i2c_get_clientdata(client));
+ 
+-	if (data)
+-		data->second_device = client;
++	data->second_device = second_dev;
+ }
+ EXPORT_SYMBOL_GPL(bmc150_set_second_device);
+ 
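
The replacement scale values follow directly from the comment added above. The datasheet gives mg per LSB, IIO reports acceleration in m/s^2, and the table stores the fractional scale in micro-m/s^2 (the old entry 156000 decoded as 0.156), so each entry is the datasheet figure times g:

	15.6 mg * 9.80665 = 0.152984 m/s^2  ->   152984
	31.3 mg * 9.80665 = 0.306948 m/s^2  ->   306948
	62.5 mg * 9.80665 = 0.612916 m/s^2  ->   612916
	125  mg * 9.80665 = 1.225831 m/s^2  ->  1225831

The old entries had simply copied the mg figures as if g were exactly 10 m/s^2.
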
+diff --git a/drivers/iio/accel/bmc150-accel-i2c.c b/drivers/iio/accel/bmc150-accel-i2c.c
+index 69f709319484f..2afaae0294eef 100644
+--- a/drivers/iio/accel/bmc150-accel-i2c.c
++++ b/drivers/iio/accel/bmc150-accel-i2c.c
+@@ -70,7 +70,7 @@ static int bmc150_accel_probe(struct i2c_client *client,
+ 
+ 		second_dev = i2c_acpi_new_device(&client->dev, 1, &board_info);
+ 		if (!IS_ERR(second_dev))
+-			bmc150_set_second_device(second_dev);
++			bmc150_set_second_device(client, second_dev);
+ 	}
+ #endif
+ 
+diff --git a/drivers/iio/accel/bmc150-accel.h b/drivers/iio/accel/bmc150-accel.h
+index 6024f15b97004..e30c1698f6fbd 100644
+--- a/drivers/iio/accel/bmc150-accel.h
++++ b/drivers/iio/accel/bmc150-accel.h
+@@ -18,7 +18,7 @@ int bmc150_accel_core_probe(struct device *dev, struct regmap *regmap, int irq,
+ 			    const char *name, bool block_supported);
+ int bmc150_accel_core_remove(struct device *dev);
+ struct i2c_client *bmc150_get_second_device(struct i2c_client *second_device);
+-void bmc150_set_second_device(struct i2c_client *second_device);
++void bmc150_set_second_device(struct i2c_client *client, struct i2c_client *second_dev);
+ extern const struct dev_pm_ops bmc150_accel_pm_ops;
+ extern const struct regmap_config bmc150_regmap_conf;
+ 
+diff --git a/drivers/iio/accel/hid-sensor-accel-3d.c b/drivers/iio/accel/hid-sensor-accel-3d.c
+index 2f9465cb382ff..27f47e1c251e9 100644
+--- a/drivers/iio/accel/hid-sensor-accel-3d.c
++++ b/drivers/iio/accel/hid-sensor-accel-3d.c
+@@ -28,8 +28,11 @@ struct accel_3d_state {
+ 	struct hid_sensor_hub_callbacks callbacks;
+ 	struct hid_sensor_common common_attributes;
+ 	struct hid_sensor_hub_attribute_info accel[ACCEL_3D_CHANNEL_MAX];
+-	/* Reserve for 3 channels + padding + timestamp */
+-	u32 accel_val[ACCEL_3D_CHANNEL_MAX + 3];
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u32 accel_val[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ 	int scale_pre_decml;
+ 	int scale_post_decml;
+ 	int scale_precision;
+@@ -245,8 +248,8 @@ static int accel_3d_proc_event(struct hid_sensor_hub_device *hsdev,
+ 			accel_state->timestamp = iio_get_time_ns(indio_dev);
+ 
+ 		hid_sensor_push_data(indio_dev,
+-				     accel_state->accel_val,
+-				     sizeof(accel_state->accel_val),
++				     &accel_state->scan,
++				     sizeof(accel_state->scan),
+ 				     accel_state->timestamp);
+ 
+ 		accel_state->timestamp = 0;
+@@ -271,7 +274,7 @@ static int accel_3d_capture_sample(struct hid_sensor_hub_device *hsdev,
+ 	case HID_USAGE_SENSOR_ACCEL_Y_AXIS:
+ 	case HID_USAGE_SENSOR_ACCEL_Z_AXIS:
+ 		offset = usage_id - HID_USAGE_SENSOR_ACCEL_X_AXIS;
+-		accel_state->accel_val[CHANNEL_SCAN_INDEX_X + offset] =
++		accel_state->scan.accel_val[CHANNEL_SCAN_INDEX_X + offset] =
+ 						*(u32 *)raw_data;
+ 		ret = 0;
+ 	break;
+diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c
+index ff724bc17a458..f6720dbba0aa3 100644
+--- a/drivers/iio/accel/kxcjk-1013.c
++++ b/drivers/iio/accel/kxcjk-1013.c
+@@ -133,6 +133,13 @@ enum kx_acpi_type {
+ 	ACPI_KIOX010A,
+ };
+ 
++enum kxcjk1013_axis {
++	AXIS_X,
++	AXIS_Y,
++	AXIS_Z,
++	AXIS_MAX
++};
++
+ struct kxcjk1013_data {
+ 	struct regulator_bulk_data regulators[2];
+ 	struct i2c_client *client;
+@@ -140,7 +147,11 @@ struct kxcjk1013_data {
+ 	struct iio_trigger *motion_trig;
+ 	struct iio_mount_matrix orientation;
+ 	struct mutex mutex;
+-	s16 buffer[8];
++	/* Ensure timestamp naturally aligned */
++	struct {
++		s16 chans[AXIS_MAX];
++		s64 timestamp __aligned(8);
++	} scan;
+ 	u8 odr_bits;
+ 	u8 range;
+ 	int wake_thres;
+@@ -154,13 +165,6 @@ struct kxcjk1013_data {
+ 	enum kx_acpi_type acpi_type;
+ };
+ 
+-enum kxcjk1013_axis {
+-	AXIS_X,
+-	AXIS_Y,
+-	AXIS_Z,
+-	AXIS_MAX,
+-};
+-
+ enum kxcjk1013_mode {
+ 	STANDBY,
+ 	OPERATION,
+@@ -1094,12 +1098,12 @@ static irqreturn_t kxcjk1013_trigger_handler(int irq, void *p)
+ 	ret = i2c_smbus_read_i2c_block_data_or_emulated(data->client,
+ 							KXCJK1013_REG_XOUT_L,
+ 							AXIS_MAX * 2,
+-							(u8 *)data->buffer);
++							(u8 *)data->scan.chans);
+ 	mutex_unlock(&data->mutex);
+ 	if (ret < 0)
+ 		goto err;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   data->timestamp);
+ err:
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/accel/mxc4005.c b/drivers/iio/accel/mxc4005.c
+index fb3cbaa62bd87..0f90e6ec01e17 100644
+--- a/drivers/iio/accel/mxc4005.c
++++ b/drivers/iio/accel/mxc4005.c
+@@ -56,7 +56,11 @@ struct mxc4005_data {
+ 	struct mutex mutex;
+ 	struct regmap *regmap;
+ 	struct iio_trigger *dready_trig;
+-	__be16 buffer[8];
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		__be16 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ 	bool trigger_enabled;
+ };
+ 
+@@ -135,7 +139,7 @@ static int mxc4005_read_xyz(struct mxc4005_data *data)
+ 	int ret;
+ 
+ 	ret = regmap_bulk_read(data->regmap, MXC4005_REG_XOUT_UPPER,
+-			       data->buffer, sizeof(data->buffer));
++			       data->scan.chans, sizeof(data->scan.chans));
+ 	if (ret < 0) {
+ 		dev_err(data->dev, "failed to read axes\n");
+ 		return ret;
+@@ -301,7 +305,7 @@ static irqreturn_t mxc4005_trigger_handler(int irq, void *private)
+ 	if (ret < 0)
+ 		goto err;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ 
+ err:
+diff --git a/drivers/iio/accel/stk8312.c b/drivers/iio/accel/stk8312.c
+index 157d8faefb9e4..ba571f0f5c985 100644
+--- a/drivers/iio/accel/stk8312.c
++++ b/drivers/iio/accel/stk8312.c
+@@ -103,7 +103,11 @@ struct stk8312_data {
+ 	u8 mode;
+ 	struct iio_trigger *dready_trig;
+ 	bool dready_trigger_on;
+-	s8 buffer[16]; /* 3x8-bit channels + 5x8 padding + 64-bit timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		s8 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ static IIO_CONST_ATTR(in_accel_scale_available, STK8312_SCALE_AVAIL);
+@@ -438,7 +442,7 @@ static irqreturn_t stk8312_trigger_handler(int irq, void *p)
+ 		ret = i2c_smbus_read_i2c_block_data(data->client,
+ 						    STK8312_REG_XOUT,
+ 						    STK8312_ALL_CHANNEL_SIZE,
+-						    data->buffer);
++						    data->scan.chans);
+ 		if (ret < STK8312_ALL_CHANNEL_SIZE) {
+ 			dev_err(&data->client->dev, "register read failed\n");
+ 			mutex_unlock(&data->lock);
+@@ -452,12 +456,12 @@ static irqreturn_t stk8312_trigger_handler(int irq, void *p)
+ 				mutex_unlock(&data->lock);
+ 				goto err;
+ 			}
+-			data->buffer[i++] = ret;
++			data->scan.chans[i++] = ret;
+ 		}
+ 	}
+ 	mutex_unlock(&data->lock);
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ err:
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/accel/stk8ba50.c b/drivers/iio/accel/stk8ba50.c
+index 7cf9cb7e86667..eb9daa4e623a8 100644
+--- a/drivers/iio/accel/stk8ba50.c
++++ b/drivers/iio/accel/stk8ba50.c
+@@ -91,12 +91,11 @@ struct stk8ba50_data {
+ 	u8 sample_rate_idx;
+ 	struct iio_trigger *dready_trig;
+ 	bool dready_trigger_on;
+-	/*
+-	 * 3 x 16-bit channels (10-bit data, 6-bit padding) +
+-	 * 1 x 16 padding +
+-	 * 4 x 16 64-bit timestamp
+-	 */
+-	s16 buffer[8];
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		s16 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ #define STK8BA50_ACCEL_CHANNEL(index, reg, axis) {			\
+@@ -324,7 +323,7 @@ static irqreturn_t stk8ba50_trigger_handler(int irq, void *p)
+ 		ret = i2c_smbus_read_i2c_block_data(data->client,
+ 						    STK8BA50_REG_XOUT,
+ 						    STK8BA50_ALL_CHANNEL_SIZE,
+-						    (u8 *)data->buffer);
++						    (u8 *)data->scan.chans);
+ 		if (ret < STK8BA50_ALL_CHANNEL_SIZE) {
+ 			dev_err(&data->client->dev, "register read failed\n");
+ 			goto err;
+@@ -337,10 +336,10 @@ static irqreturn_t stk8ba50_trigger_handler(int irq, void *p)
+ 			if (ret < 0)
+ 				goto err;
+ 
+-			data->buffer[i++] = ret;
++			data->scan.chans[i++] = ret;
+ 		}
+ 	}
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ err:
+ 	mutex_unlock(&data->lock);
+diff --git a/drivers/iio/adc/at91-sama5d2_adc.c b/drivers/iio/adc/at91-sama5d2_adc.c
+index a7826f097b95c..d356b515df090 100644
+--- a/drivers/iio/adc/at91-sama5d2_adc.c
++++ b/drivers/iio/adc/at91-sama5d2_adc.c
+@@ -403,7 +403,8 @@ struct at91_adc_state {
+ 	struct at91_adc_dma		dma_st;
+ 	struct at91_adc_touch		touch_st;
+ 	struct iio_dev			*indio_dev;
+-	u16				buffer[AT91_BUFFER_MAX_HWORDS];
++	/* Ensure naturally aligned timestamp */
++	u16				buffer[AT91_BUFFER_MAX_HWORDS] __aligned(8);
+ 	/*
+ 	 * lock to prevent concurrent 'single conversion' requests through
+ 	 * sysfs.
+diff --git a/drivers/iio/adc/hx711.c b/drivers/iio/adc/hx711.c
+index 6a173531d355b..f7ee856a6b8b6 100644
+--- a/drivers/iio/adc/hx711.c
++++ b/drivers/iio/adc/hx711.c
+@@ -86,9 +86,9 @@ struct hx711_data {
+ 	struct mutex		lock;
+ 	/*
+ 	 * triggered buffer
+-	 * 2x32-bit channel + 64-bit timestamp
++	 * 2x32-bit channel + 64-bit naturally aligned timestamp
+ 	 */
+-	u32			buffer[4];
++	u32			buffer[4] __aligned(8);
+ 	/*
+ 	 * delay after a rising edge on SCK until the data is ready DOUT
+ 	 * this is dependent on the hx711 where the datasheet tells a
+diff --git a/drivers/iio/adc/mxs-lradc-adc.c b/drivers/iio/adc/mxs-lradc-adc.c
+index 30e29f44ebd2e..c480cb489c1a3 100644
+--- a/drivers/iio/adc/mxs-lradc-adc.c
++++ b/drivers/iio/adc/mxs-lradc-adc.c
+@@ -115,7 +115,8 @@ struct mxs_lradc_adc {
+ 	struct device		*dev;
+ 
+ 	void __iomem		*base;
+-	u32			buffer[10];
++	/* Maximum of 8 channels + 8 byte ts */
++	u32			buffer[10] __aligned(8);
+ 	struct iio_trigger	*trig;
+ 	struct completion	completion;
+ 	spinlock_t		lock;
+diff --git a/drivers/iio/adc/ti-ads1015.c b/drivers/iio/adc/ti-ads1015.c
+index 9fef39bcf997b..5b828428be77c 100644
+--- a/drivers/iio/adc/ti-ads1015.c
++++ b/drivers/iio/adc/ti-ads1015.c
+@@ -395,10 +395,14 @@ static irqreturn_t ads1015_trigger_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct ads1015_data *data = iio_priv(indio_dev);
+-	s16 buf[8]; /* 1x s16 ADC val + 3x s16 padding +  4x s16 timestamp */
++	/* Ensure natural alignment of timestamp */
++	struct {
++		s16 chan;
++		s64 timestamp __aligned(8);
++	} scan;
+ 	int chan, ret, res;
+ 
+-	memset(buf, 0, sizeof(buf));
++	memset(&scan, 0, sizeof(scan));
+ 
+ 	mutex_lock(&data->lock);
+ 	chan = find_first_bit(indio_dev->active_scan_mask,
+@@ -409,10 +413,10 @@ static irqreturn_t ads1015_trigger_handler(int irq, void *p)
+ 		goto err;
+ 	}
+ 
+-	buf[0] = res;
++	scan.chan = res;
+ 	mutex_unlock(&data->lock);
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, buf,
++	iio_push_to_buffers_with_timestamp(indio_dev, &scan,
+ 					   iio_get_time_ns(indio_dev));
+ 
+ err:
+diff --git a/drivers/iio/adc/ti-ads8688.c b/drivers/iio/adc/ti-ads8688.c
+index 16bcb37eebb72..79c803537dc42 100644
+--- a/drivers/iio/adc/ti-ads8688.c
++++ b/drivers/iio/adc/ti-ads8688.c
+@@ -383,7 +383,8 @@ static irqreturn_t ads8688_trigger_handler(int irq, void *p)
+ {
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+-	u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)];
++	/* Ensure naturally aligned timestamp */
++	u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8);
+ 	int i, j = 0;
+ 
+ 	for (i = 0; i < indio_dev->masklength; i++) {
+diff --git a/drivers/iio/adc/vf610_adc.c b/drivers/iio/adc/vf610_adc.c
+index 1d794cf3e3f13..fd57fc43e8e5c 100644
+--- a/drivers/iio/adc/vf610_adc.c
++++ b/drivers/iio/adc/vf610_adc.c
+@@ -167,7 +167,11 @@ struct vf610_adc {
+ 	u32 sample_freq_avail[5];
+ 
+ 	struct completion completion;
+-	u16 buffer[8];
++	/* Ensure the timestamp is naturally aligned */
++	struct {
++		u16 chan;
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ static const u32 vf610_hw_avgs[] = { 1, 4, 8, 16, 32 };
+@@ -579,9 +583,9 @@ static irqreturn_t vf610_adc_isr(int irq, void *dev_id)
+ 	if (coco & VF610_ADC_HS_COCO0) {
+ 		info->value = vf610_adc_read_data(info);
+ 		if (iio_buffer_enabled(indio_dev)) {
+-			info->buffer[0] = info->value;
++			info->scan.chan = info->value;
+ 			iio_push_to_buffers_with_timestamp(indio_dev,
+-					info->buffer,
++					&info->scan,
+ 					iio_get_time_ns(indio_dev));
+ 			iio_trigger_notify_done(indio_dev->trig);
+ 		} else
+diff --git a/drivers/iio/chemical/atlas-sensor.c b/drivers/iio/chemical/atlas-sensor.c
+index 56ba6c82b501f..6795722c68b25 100644
+--- a/drivers/iio/chemical/atlas-sensor.c
++++ b/drivers/iio/chemical/atlas-sensor.c
+@@ -91,8 +91,8 @@ struct atlas_data {
+ 	struct regmap *regmap;
+ 	struct irq_work work;
+ 	unsigned int interrupt_enabled;
+-
+-	__be32 buffer[6]; /* 96-bit data + 32-bit pad + 64-bit timestamp */
++	/* 96-bit data + 32-bit pad + 64-bit timestamp */
++	__be32 buffer[6] __aligned(8);
+ };
+ 
+ static const struct regmap_config atlas_regmap_config = {
+diff --git a/drivers/iio/dummy/Kconfig b/drivers/iio/dummy/Kconfig
+index 5c5c2f8c55f36..1f46cb9e51b74 100644
+--- a/drivers/iio/dummy/Kconfig
++++ b/drivers/iio/dummy/Kconfig
+@@ -34,6 +34,7 @@ config IIO_SIMPLE_DUMMY_BUFFER
+ 	select IIO_BUFFER
+ 	select IIO_TRIGGER
+ 	select IIO_KFIFO_BUF
++	select IIO_TRIGGERED_BUFFER
+ 	help
+ 	  Add buffered data capture to the simple dummy driver.
+ 
+diff --git a/drivers/iio/frequency/adf4350.c b/drivers/iio/frequency/adf4350.c
+index 1462a6a5bc6da..3d9eba716b691 100644
+--- a/drivers/iio/frequency/adf4350.c
++++ b/drivers/iio/frequency/adf4350.c
+@@ -563,8 +563,10 @@ static int adf4350_probe(struct spi_device *spi)
+ 
+ 	st->lock_detect_gpiod = devm_gpiod_get_optional(&spi->dev, NULL,
+ 							GPIOD_IN);
+-	if (IS_ERR(st->lock_detect_gpiod))
+-		return PTR_ERR(st->lock_detect_gpiod);
++	if (IS_ERR(st->lock_detect_gpiod)) {
++		ret = PTR_ERR(st->lock_detect_gpiod);
++		goto error_disable_reg;
++	}
+ 
+ 	if (pdata->power_up_frequency) {
+ 		ret = adf4350_set_freq(st, pdata->power_up_frequency);
+diff --git a/drivers/iio/gyro/bmg160_core.c b/drivers/iio/gyro/bmg160_core.c
+index b11ebd9bb7a41..7bc13ff2c3ac0 100644
+--- a/drivers/iio/gyro/bmg160_core.c
++++ b/drivers/iio/gyro/bmg160_core.c
+@@ -98,7 +98,11 @@ struct bmg160_data {
+ 	struct iio_trigger *motion_trig;
+ 	struct iio_mount_matrix orientation;
+ 	struct mutex mutex;
+-	s16 buffer[8];
++	/* Ensure naturally aligned timestamp */
++	struct {
++		s16 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ 	u32 dps_range;
+ 	int ev_enable_state;
+ 	int slope_thres;
+@@ -882,12 +886,12 @@ static irqreturn_t bmg160_trigger_handler(int irq, void *p)
+ 
+ 	mutex_lock(&data->mutex);
+ 	ret = regmap_bulk_read(data->regmap, BMG160_REG_XOUT_L,
+-			       data->buffer, AXIS_MAX * 2);
++			       data->scan.chans, AXIS_MAX * 2);
+ 	mutex_unlock(&data->mutex);
+ 	if (ret < 0)
+ 		goto err;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ err:
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/humidity/am2315.c b/drivers/iio/humidity/am2315.c
+index 23bc9c784ef4b..248d0f262d601 100644
+--- a/drivers/iio/humidity/am2315.c
++++ b/drivers/iio/humidity/am2315.c
+@@ -33,7 +33,11 @@
+ struct am2315_data {
+ 	struct i2c_client *client;
+ 	struct mutex lock;
+-	s16 buffer[8]; /* 2x16-bit channels + 2x16 padding + 4x16 timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		s16 chans[2];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ struct am2315_sensor_data {
+@@ -167,20 +171,20 @@ static irqreturn_t am2315_trigger_handler(int irq, void *p)
+ 
+ 	mutex_lock(&data->lock);
+ 	if (*(indio_dev->active_scan_mask) == AM2315_ALL_CHANNEL_MASK) {
+-		data->buffer[0] = sensor_data.hum_data;
+-		data->buffer[1] = sensor_data.temp_data;
++		data->scan.chans[0] = sensor_data.hum_data;
++		data->scan.chans[1] = sensor_data.temp_data;
+ 	} else {
+ 		i = 0;
+ 		for_each_set_bit(bit, indio_dev->active_scan_mask,
+ 				 indio_dev->masklength) {
+-			data->buffer[i] = (bit ? sensor_data.temp_data :
+-						 sensor_data.hum_data);
++			data->scan.chans[i] = (bit ? sensor_data.temp_data :
++					       sensor_data.hum_data);
+ 			i++;
+ 		}
+ 	}
+ 	mutex_unlock(&data->lock);
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ err:
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/imu/adis16400.c b/drivers/iio/imu/adis16400.c
+index 768aa493a1a60..b2f92b55b910c 100644
+--- a/drivers/iio/imu/adis16400.c
++++ b/drivers/iio/imu/adis16400.c
+@@ -645,9 +645,6 @@ static irqreturn_t adis16400_trigger_handler(int irq, void *p)
+ 	void *buffer;
+ 	int ret;
+ 
+-	if (!adis->buffer)
+-		return -ENOMEM;
+-
+ 	if (!(st->variant->flags & ADIS16400_NO_BURST) &&
+ 		st->adis.spi->max_speed_hz > ADIS16400_SPI_BURST) {
+ 		st->adis.spi->max_speed_hz = ADIS16400_SPI_BURST;
+diff --git a/drivers/iio/imu/adis16475.c b/drivers/iio/imu/adis16475.c
+index 1de62fc79e0fc..51b76444db0b9 100644
+--- a/drivers/iio/imu/adis16475.c
++++ b/drivers/iio/imu/adis16475.c
+@@ -1068,7 +1068,7 @@ static irqreturn_t adis16475_trigger_handler(int irq, void *p)
+ 
+ 	ret = spi_sync(adis->spi, &adis->msg);
+ 	if (ret)
+-		return ret;
++		goto check_burst32;
+ 
+ 	adis->spi->max_speed_hz = cached_spi_speed_hz;
+ 	buffer = adis->buffer;
+diff --git a/drivers/iio/imu/adis_buffer.c b/drivers/iio/imu/adis_buffer.c
+index ac354321f63a3..175af154e4437 100644
+--- a/drivers/iio/imu/adis_buffer.c
++++ b/drivers/iio/imu/adis_buffer.c
+@@ -129,9 +129,6 @@ static irqreturn_t adis_trigger_handler(int irq, void *p)
+ 	struct adis *adis = iio_device_get_drvdata(indio_dev);
+ 	int ret;
+ 
+-	if (!adis->buffer)
+-		return -ENOMEM;
+-
+ 	if (adis->data->has_paging) {
+ 		mutex_lock(&adis->state_lock);
+ 		if (adis->current_page != 0) {
+diff --git a/drivers/iio/light/isl29125.c b/drivers/iio/light/isl29125.c
+index b93b85dbc3a6a..ba53b50d711a1 100644
+--- a/drivers/iio/light/isl29125.c
++++ b/drivers/iio/light/isl29125.c
+@@ -51,7 +51,11 @@
+ struct isl29125_data {
+ 	struct i2c_client *client;
+ 	u8 conf1;
+-	u16 buffer[8]; /* 3x 16-bit, padding, 8 bytes timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u16 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ #define ISL29125_CHANNEL(_color, _si) { \
+@@ -184,10 +188,10 @@ static irqreturn_t isl29125_trigger_handler(int irq, void *p)
+ 		if (ret < 0)
+ 			goto done;
+ 
+-		data->buffer[j++] = ret;
++		data->scan.chans[j++] = ret;
+ 	}
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 		iio_get_time_ns(indio_dev));
+ 
+ done:
+diff --git a/drivers/iio/light/ltr501.c b/drivers/iio/light/ltr501.c
+index b4323d2db0b19..74ed2d88a3ed3 100644
+--- a/drivers/iio/light/ltr501.c
++++ b/drivers/iio/light/ltr501.c
+@@ -32,9 +32,12 @@
+ #define LTR501_PART_ID 0x86
+ #define LTR501_MANUFAC_ID 0x87
+ #define LTR501_ALS_DATA1 0x88 /* 16-bit, little endian */
++#define LTR501_ALS_DATA1_UPPER 0x89 /* upper 8 bits of LTR501_ALS_DATA1 */
+ #define LTR501_ALS_DATA0 0x8a /* 16-bit, little endian */
++#define LTR501_ALS_DATA0_UPPER 0x8b /* upper 8 bits of LTR501_ALS_DATA0 */
+ #define LTR501_ALS_PS_STATUS 0x8c
+ #define LTR501_PS_DATA 0x8d /* 16-bit, little endian */
++#define LTR501_PS_DATA_UPPER 0x8e /* upper 8 bits of LTR501_PS_DATA */
+ #define LTR501_INTR 0x8f /* output mode, polarity, mode */
+ #define LTR501_PS_THRESH_UP 0x90 /* 11 bit, ps upper threshold */
+ #define LTR501_PS_THRESH_LOW 0x92 /* 11 bit, ps lower threshold */
+@@ -406,18 +409,19 @@ static int ltr501_read_als(const struct ltr501_data *data, __le16 buf[2])
+ 
+ static int ltr501_read_ps(const struct ltr501_data *data)
+ {
+-	int ret, status;
++	__le16 status;
++	int ret;
+ 
+ 	ret = ltr501_drdy(data, LTR501_STATUS_PS_RDY);
+ 	if (ret < 0)
+ 		return ret;
+ 
+ 	ret = regmap_bulk_read(data->regmap, LTR501_PS_DATA,
+-			       &status, 2);
++			       &status, sizeof(status));
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return status;
++	return le16_to_cpu(status);
+ }
+ 
+ static int ltr501_read_intr_prst(const struct ltr501_data *data,
+@@ -1205,7 +1209,7 @@ static struct ltr501_chip_info ltr501_chip_info_tbl[] = {
+ 		.als_gain_tbl_size = ARRAY_SIZE(ltr559_als_gain_tbl),
+ 		.ps_gain = ltr559_ps_gain_tbl,
+ 		.ps_gain_tbl_size = ARRAY_SIZE(ltr559_ps_gain_tbl),
+-		.als_mode_active = BIT(1),
++		.als_mode_active = BIT(0),
+ 		.als_gain_mask = BIT(2) | BIT(3) | BIT(4),
+ 		.als_gain_shift = 2,
+ 		.info = &ltr501_info,
+@@ -1354,9 +1358,12 @@ static bool ltr501_is_volatile_reg(struct device *dev, unsigned int reg)
+ {
+ 	switch (reg) {
+ 	case LTR501_ALS_DATA1:
++	case LTR501_ALS_DATA1_UPPER:
+ 	case LTR501_ALS_DATA0:
++	case LTR501_ALS_DATA0_UPPER:
+ 	case LTR501_ALS_PS_STATUS:
+ 	case LTR501_PS_DATA:
++	case LTR501_PS_DATA_UPPER:
+ 		return true;
+ 	default:
+ 		return false;
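
Two of the ltr501 changes above deserve a note. Reading the 16-bit little-endian PS_DATA register into a host-endian int left two of the int's four bytes unwritten and skipped the endianness conversion; reading into a __le16 sized with sizeof() and converting via le16_to_cpu() fixes both on any host. And since regmap_bulk_read() also touches the upper data byte at reg + 1, the new *_UPPER addresses must be marked volatile too, or a regmap cache could hand back stale upper bytes. A minimal sketch of the corrected read:

	__le16 raw;
	int ret;

	ret = regmap_bulk_read(data->regmap, LTR501_PS_DATA, &raw, sizeof(raw));
	if (ret < 0)
		return ret;

	/* Wire format is little endian; convert exactly once. */
	return le16_to_cpu(raw);
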
+diff --git a/drivers/iio/light/tcs3414.c b/drivers/iio/light/tcs3414.c
+index 6fe5d46f80d40..0593abd600ec2 100644
+--- a/drivers/iio/light/tcs3414.c
++++ b/drivers/iio/light/tcs3414.c
+@@ -53,7 +53,11 @@ struct tcs3414_data {
+ 	u8 control;
+ 	u8 gain;
+ 	u8 timing;
+-	u16 buffer[8]; /* 4x 16-bit + 8 bytes timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u16 chans[4];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ #define TCS3414_CHANNEL(_color, _si, _addr) { \
+@@ -209,10 +213,10 @@ static irqreturn_t tcs3414_trigger_handler(int irq, void *p)
+ 		if (ret < 0)
+ 			goto done;
+ 
+-		data->buffer[j++] = ret;
++		data->scan.chans[j++] = ret;
+ 	}
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 		iio_get_time_ns(indio_dev));
+ 
+ done:
+diff --git a/drivers/iio/light/tcs3472.c b/drivers/iio/light/tcs3472.c
+index a0dc447aeb68b..371c6a39a1654 100644
+--- a/drivers/iio/light/tcs3472.c
++++ b/drivers/iio/light/tcs3472.c
+@@ -64,7 +64,11 @@ struct tcs3472_data {
+ 	u8 control;
+ 	u8 atime;
+ 	u8 apers;
+-	u16 buffer[8]; /* 4 16-bit channels + 64-bit timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u16 chans[4];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ static const struct iio_event_spec tcs3472_events[] = {
+@@ -386,10 +390,10 @@ static irqreturn_t tcs3472_trigger_handler(int irq, void *p)
+ 		if (ret < 0)
+ 			goto done;
+ 
+-		data->buffer[j++] = ret;
++		data->scan.chans[j++] = ret;
+ 	}
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 		iio_get_time_ns(indio_dev));
+ 
+ done:
+@@ -531,7 +535,8 @@ static int tcs3472_probe(struct i2c_client *client,
+ 	return 0;
+ 
+ free_irq:
+-	free_irq(client->irq, indio_dev);
++	if (client->irq)
++		free_irq(client->irq, indio_dev);
+ buffer_cleanup:
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ 	return ret;
+@@ -559,7 +564,8 @@ static int tcs3472_remove(struct i2c_client *client)
+ 	struct iio_dev *indio_dev = i2c_get_clientdata(client);
+ 
+ 	iio_device_unregister(indio_dev);
+-	free_irq(client->irq, indio_dev);
++	if (client->irq)
++		free_irq(client->irq, indio_dev);
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ 	tcs3472_powerdown(iio_priv(indio_dev));
+ 
+diff --git a/drivers/iio/light/vcnl4000.c b/drivers/iio/light/vcnl4000.c
+index 2f7916f95689e..3b5e27053ef29 100644
+--- a/drivers/iio/light/vcnl4000.c
++++ b/drivers/iio/light/vcnl4000.c
+@@ -910,7 +910,7 @@ static irqreturn_t vcnl4010_trigger_handler(int irq, void *p)
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct vcnl4000_data *data = iio_priv(indio_dev);
+ 	const unsigned long *active_scan_mask = indio_dev->active_scan_mask;
+-	u16 buffer[8] = {0}; /* 1x16-bit + ts */
++	u16 buffer[8] __aligned(8) = {0}; /* 1x16-bit + naturally aligned ts */
+ 	bool data_read = false;
+ 	unsigned long isr;
+ 	int val = 0;
+diff --git a/drivers/iio/light/vcnl4035.c b/drivers/iio/light/vcnl4035.c
+index ae87740d9cef2..bc07774117124 100644
+--- a/drivers/iio/light/vcnl4035.c
++++ b/drivers/iio/light/vcnl4035.c
+@@ -102,7 +102,8 @@ static irqreturn_t vcnl4035_trigger_consumer_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct vcnl4035_data *data = iio_priv(indio_dev);
+-	u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)];
++	/* Ensure naturally aligned timestamp */
++	u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)] __aligned(8);
+ 	int ret;
+ 
+ 	ret = regmap_read(data->regmap, VCNL4035_ALS_DATA, (int *)buffer);
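
The vcnl4035 buffer sizing is worth unpacking, since it reads as opaque arithmetic:

	ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)
	  = ALIGN(2, 8) + 8
	  = 8 + 8
	  = 16 bytes	/* padded 16-bit channel, then the s64 timestamp */

ALIGN() rounds the 2-byte channel up to the next 8-byte boundary so the timestamp lands at offset 8; the added __aligned(8) then gives the stack array the 8-byte alignment a plain u8 array would not otherwise have.
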
+diff --git a/drivers/iio/magnetometer/bmc150_magn.c b/drivers/iio/magnetometer/bmc150_magn.c
+index 00f9766bad5c5..d534f4f3909eb 100644
+--- a/drivers/iio/magnetometer/bmc150_magn.c
++++ b/drivers/iio/magnetometer/bmc150_magn.c
+@@ -138,8 +138,11 @@ struct bmc150_magn_data {
+ 	struct regmap *regmap;
+ 	struct regulator_bulk_data regulators[2];
+ 	struct iio_mount_matrix orientation;
+-	/* 4 x 32 bits for x, y z, 4 bytes align, 64 bits timestamp */
+-	s32 buffer[6];
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		s32 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ 	struct iio_trigger *dready_trig;
+ 	bool dready_trigger_on;
+ 	int max_odr;
+@@ -675,11 +678,11 @@ static irqreturn_t bmc150_magn_trigger_handler(int irq, void *p)
+ 	int ret;
+ 
+ 	mutex_lock(&data->mutex);
+-	ret = bmc150_magn_read_xyz(data, data->buffer);
++	ret = bmc150_magn_read_xyz(data, data->scan.chans);
+ 	if (ret < 0)
+ 		goto err;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   pf->timestamp);
+ 
+ err:
+diff --git a/drivers/iio/magnetometer/hmc5843.h b/drivers/iio/magnetometer/hmc5843.h
+index 3f6c0b6629415..242f742f2643a 100644
+--- a/drivers/iio/magnetometer/hmc5843.h
++++ b/drivers/iio/magnetometer/hmc5843.h
+@@ -33,7 +33,8 @@ enum hmc5843_ids {
+  * @lock:		update and read regmap data
+  * @regmap:		hardware access register maps
+  * @variant:		describe chip variants
+- * @buffer:		3x 16-bit channels + padding + 64-bit timestamp
++ * @scan:		buffer to pack data for passing to
++ *			iio_push_to_buffers_with_timestamp()
+  */
+ struct hmc5843_data {
+ 	struct device *dev;
+@@ -41,7 +42,10 @@ struct hmc5843_data {
+ 	struct regmap *regmap;
+ 	const struct hmc5843_chip_info *variant;
+ 	struct iio_mount_matrix orientation;
+-	__be16 buffer[8];
++	struct {
++		__be16 chans[3];
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ int hmc5843_common_probe(struct device *dev, struct regmap *regmap,
+diff --git a/drivers/iio/magnetometer/hmc5843_core.c b/drivers/iio/magnetometer/hmc5843_core.c
+index 780faea61d82e..221563e0c18fd 100644
+--- a/drivers/iio/magnetometer/hmc5843_core.c
++++ b/drivers/iio/magnetometer/hmc5843_core.c
+@@ -446,13 +446,13 @@ static irqreturn_t hmc5843_trigger_handler(int irq, void *p)
+ 	}
+ 
+ 	ret = regmap_bulk_read(data->regmap, HMC5843_DATA_OUT_MSB_REGS,
+-			       data->buffer, 3 * sizeof(__be16));
++			       data->scan.chans, sizeof(data->scan.chans));
+ 
+ 	mutex_unlock(&data->lock);
+ 	if (ret < 0)
+ 		goto done;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++	iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 					   iio_get_time_ns(indio_dev));
+ 
+ done:
+diff --git a/drivers/iio/magnetometer/rm3100-core.c b/drivers/iio/magnetometer/rm3100-core.c
+index dd811da9cb6db..934da20781bba 100644
+--- a/drivers/iio/magnetometer/rm3100-core.c
++++ b/drivers/iio/magnetometer/rm3100-core.c
+@@ -78,7 +78,8 @@ struct rm3100_data {
+ 	bool use_interrupt;
+ 	int conversion_time;
+ 	int scale;
+-	u8 buffer[RM3100_SCAN_BYTES];
++	/* Ensure naturally aligned timestamp */
++	u8 buffer[RM3100_SCAN_BYTES] __aligned(8);
+ 	struct iio_trigger *drdy_trig;
+ 
+ 	/*
+diff --git a/drivers/iio/potentiostat/lmp91000.c b/drivers/iio/potentiostat/lmp91000.c
+index 8a9c576616ee5..ff39ba975da70 100644
+--- a/drivers/iio/potentiostat/lmp91000.c
++++ b/drivers/iio/potentiostat/lmp91000.c
+@@ -71,8 +71,8 @@ struct lmp91000_data {
+ 
+ 	struct completion completion;
+ 	u8 chan_select;
+-
+-	u32 buffer[4]; /* 64-bit data + 64-bit timestamp */
++	/* 64-bit data + 64-bit naturally aligned timestamp */
++	u32 buffer[4] __aligned(8);
+ };
+ 
+ static const struct iio_chan_spec lmp91000_channels[] = {
+diff --git a/drivers/iio/proximity/as3935.c b/drivers/iio/proximity/as3935.c
+index edc4a35ae66d1..1d5ace2bde44d 100644
+--- a/drivers/iio/proximity/as3935.c
++++ b/drivers/iio/proximity/as3935.c
+@@ -59,7 +59,11 @@ struct as3935_state {
+ 	unsigned long noise_tripped;
+ 	u32 tune_cap;
+ 	u32 nflwdth_reg;
+-	u8 buffer[16]; /* 8-bit data + 56-bit padding + 64-bit timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u8 chan;
++		s64 timestamp __aligned(8);
++	} scan;
+ 	u8 buf[2] ____cacheline_aligned;
+ };
+ 
+@@ -225,8 +229,8 @@ static irqreturn_t as3935_trigger_handler(int irq, void *private)
+ 	if (ret)
+ 		goto err_read;
+ 
+-	st->buffer[0] = val & AS3935_DATA_MASK;
+-	iio_push_to_buffers_with_timestamp(indio_dev, &st->buffer,
++	st->scan.chan = val & AS3935_DATA_MASK;
++	iio_push_to_buffers_with_timestamp(indio_dev, &st->scan,
+ 					   iio_get_time_ns(indio_dev));
+ err_read:
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/proximity/isl29501.c b/drivers/iio/proximity/isl29501.c
+index 90e76451c972a..5b6ea783795d9 100644
+--- a/drivers/iio/proximity/isl29501.c
++++ b/drivers/iio/proximity/isl29501.c
+@@ -938,7 +938,7 @@ static irqreturn_t isl29501_trigger_handler(int irq, void *p)
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct isl29501_private *isl29501 = iio_priv(indio_dev);
+ 	const unsigned long *active_mask = indio_dev->active_scan_mask;
+-	u32 buffer[4] = {}; /* 1x16-bit + ts */
++	u32 buffer[4] __aligned(8) = {}; /* 1x16-bit + naturally aligned ts */
+ 
+ 	if (test_bit(ISL29501_DISTANCE_SCAN_INDEX, active_mask))
+ 		isl29501_register_read(isl29501, REG_DISTANCE, buffer);
+diff --git a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
+index cc206bfa09c78..d854b8d5fbbaf 100644
+--- a/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
++++ b/drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
+@@ -44,7 +44,11 @@ struct lidar_data {
+ 	int (*xfer)(struct lidar_data *data, u8 reg, u8 *val, int len);
+ 	int i2c_enabled;
+ 
+-	u16 buffer[8]; /* 2 byte distance + 8 byte timestamp */
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		u16 chan;
++		s64 timestamp __aligned(8);
++	} scan;
+ };
+ 
+ static const struct iio_chan_spec lidar_channels[] = {
+@@ -230,9 +234,9 @@ static irqreturn_t lidar_trigger_handler(int irq, void *private)
+ 	struct lidar_data *data = iio_priv(indio_dev);
+ 	int ret;
+ 
+-	ret = lidar_get_measurement(data, data->buffer);
++	ret = lidar_get_measurement(data, &data->scan.chan);
+ 	if (!ret) {
+-		iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
++		iio_push_to_buffers_with_timestamp(indio_dev, &data->scan,
+ 						   iio_get_time_ns(indio_dev));
+ 	} else if (ret != -EINVAL) {
+ 		dev_err(&data->client->dev, "cannot read LIDAR measurement");
+diff --git a/drivers/iio/proximity/srf08.c b/drivers/iio/proximity/srf08.c
+index 70beac5c9c1df..9b0886760f76d 100644
+--- a/drivers/iio/proximity/srf08.c
++++ b/drivers/iio/proximity/srf08.c
+@@ -63,11 +63,11 @@ struct srf08_data {
+ 	int			range_mm;
+ 	struct mutex		lock;
+ 
+-	/*
+-	 * triggered buffer
+-	 * 1x16-bit channel + 3x16 padding + 4x16 timestamp
+-	 */
+-	s16			buffer[8];
++	/* Ensure timestamp is naturally aligned */
++	struct {
++		s16 chan;
++		s64 timestamp __aligned(8);
++	} scan;
+ 
+ 	/* Sensor-Type */
+ 	enum srf08_sensor_type	sensor_type;
+@@ -190,9 +190,9 @@ static irqreturn_t srf08_trigger_handler(int irq, void *p)
+ 
+ 	mutex_lock(&data->lock);
+ 
+-	data->buffer[0] = sensor_data;
++	data->scan.chan = sensor_data;
+ 	iio_push_to_buffers_with_timestamp(indio_dev,
+-						data->buffer, pf->timestamp);
++					   &data->scan, pf->timestamp);
+ 
+ 	mutex_unlock(&data->lock);
+ err:
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 0ead0d2231540..81d832646d27a 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -121,8 +121,6 @@ static struct ib_cm {
+ 	__be32 random_id_operand;
+ 	struct list_head timewait_list;
+ 	struct workqueue_struct *wq;
+-	/* Sync on cm change port state */
+-	spinlock_t state_lock;
+ } cm;
+ 
+ /* Counter indexes ordered by attribute ID */
+@@ -203,8 +201,6 @@ struct cm_port {
+ 	struct cm_device *cm_dev;
+ 	struct ib_mad_agent *mad_agent;
+ 	u32 port_num;
+-	struct list_head cm_priv_prim_list;
+-	struct list_head cm_priv_altr_list;
+ 	struct cm_counter_group counter_group[CM_COUNTER_GROUPS];
+ };
+ 
+@@ -285,12 +281,6 @@ struct cm_id_private {
+ 	u8 service_timeout;
+ 	u8 target_ack_delay;
+ 
+-	struct list_head prim_list;
+-	struct list_head altr_list;
+-	/* Indicates that the send port mad is registered and av is set */
+-	int prim_send_port_not_ready;
+-	int altr_send_port_not_ready;
+-
+ 	struct list_head work_list;
+ 	atomic_t work_count;
+ 
+@@ -305,53 +295,25 @@ static inline void cm_deref_id(struct cm_id_private *cm_id_priv)
+ 		complete(&cm_id_priv->comp);
+ }
+ 
+-static int cm_alloc_msg(struct cm_id_private *cm_id_priv,
+-			struct ib_mad_send_buf **msg)
++static struct ib_mad_send_buf *cm_alloc_msg(struct cm_id_private *cm_id_priv)
+ {
+ 	struct ib_mad_agent *mad_agent;
+ 	struct ib_mad_send_buf *m;
+ 	struct ib_ah *ah;
+-	struct cm_av *av;
+-	unsigned long flags, flags2;
+-	int ret = 0;
+ 
+-	/* don't let the port to be released till the agent is down */
+-	spin_lock_irqsave(&cm.state_lock, flags2);
+-	spin_lock_irqsave(&cm.lock, flags);
+-	if (!cm_id_priv->prim_send_port_not_ready)
+-		av = &cm_id_priv->av;
+-	else if (!cm_id_priv->altr_send_port_not_ready &&
+-		 (cm_id_priv->alt_av.port))
+-		av = &cm_id_priv->alt_av;
+-	else {
+-		pr_info("%s: not valid CM id\n", __func__);
+-		ret = -ENODEV;
+-		spin_unlock_irqrestore(&cm.lock, flags);
+-		goto out;
+-	}
+-	spin_unlock_irqrestore(&cm.lock, flags);
+-	/* Make sure the port haven't released the mad yet */
+ 	mad_agent = cm_id_priv->av.port->mad_agent;
+-	if (!mad_agent) {
+-		pr_info("%s: not a valid MAD agent\n", __func__);
+-		ret = -ENODEV;
+-		goto out;
+-	}
+-	ah = rdma_create_ah(mad_agent->qp->pd, &av->ah_attr, 0);
+-	if (IS_ERR(ah)) {
+-		ret = PTR_ERR(ah);
+-		goto out;
+-	}
++	ah = rdma_create_ah(mad_agent->qp->pd, &cm_id_priv->av.ah_attr, 0);
++	if (IS_ERR(ah))
++		return (void *)ah;
+ 
+ 	m = ib_create_send_mad(mad_agent, cm_id_priv->id.remote_cm_qpn,
+-			       av->pkey_index,
++			       cm_id_priv->av.pkey_index,
+ 			       0, IB_MGMT_MAD_HDR, IB_MGMT_MAD_DATA,
+ 			       GFP_ATOMIC,
+ 			       IB_MGMT_BASE_VERSION);
+ 	if (IS_ERR(m)) {
+ 		rdma_destroy_ah(ah, 0);
+-		ret = PTR_ERR(m);
+-		goto out;
++		return m;
+ 	}
+ 
+ 	/* Timeout set by caller if response is expected. */
+@@ -360,11 +322,36 @@ static int cm_alloc_msg(struct cm_id_private *cm_id_priv,
+ 
+ 	refcount_inc(&cm_id_priv->refcount);
+ 	m->context[0] = cm_id_priv;
+-	*msg = m;
++	return m;
++}
+ 
+-out:
+-	spin_unlock_irqrestore(&cm.state_lock, flags2);
+-	return ret;
++static struct ib_mad_send_buf *
++cm_alloc_priv_msg(struct cm_id_private *cm_id_priv)
++{
++	struct ib_mad_send_buf *msg;
++
++	lockdep_assert_held(&cm_id_priv->lock);
++
++	msg = cm_alloc_msg(cm_id_priv);
++	if (IS_ERR(msg))
++		return msg;
++	cm_id_priv->msg = msg;
++	return msg;
++}
++
++static void cm_free_priv_msg(struct ib_mad_send_buf *msg)
++{
++	struct cm_id_private *cm_id_priv = msg->context[0];
++
++	lockdep_assert_held(&cm_id_priv->lock);
++
++	if (!WARN_ON(cm_id_priv->msg != msg))
++		cm_id_priv->msg = NULL;
++
++	if (msg->ah)
++		rdma_destroy_ah(msg->ah, 0);
++	cm_deref_id(cm_id_priv);
++	ib_free_send_mad(msg);
+ }
+ 
+ static struct ib_mad_send_buf *cm_alloc_response_msg_no_ah(struct cm_port *port,
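
The cm_alloc_msg() rewrite above also changes its calling convention: instead of returning an int and handing the buffer back through a pointer-to-pointer argument, it returns the ib_mad_send_buf directly and encodes failures with the kernel's ERR_PTR scheme (note the `return (void *)ah;`, which forwards the errno already encoded in the ah pointer). Every caller updated later in this patch follows the same shape; a minimal sketch:

	struct ib_mad_send_buf *msg;

	msg = cm_alloc_msg(cm_id_priv);
	if (IS_ERR(msg))
		return PTR_ERR(msg);	/* negative errno recovered from the pointer */

	/* ... format and post msg ... */
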
+@@ -413,7 +400,7 @@ static int cm_alloc_response_msg(struct cm_port *port,
+ 
+ 	ret = cm_create_response_msg_ah(port, mad_recv_wc, m);
+ 	if (ret) {
+-		cm_free_msg(m);
++		ib_free_send_mad(m);
+ 		return ret;
+ 	}
+ 
+@@ -421,6 +408,13 @@ static int cm_alloc_response_msg(struct cm_port *port,
+ 	return 0;
+ }
+ 
++static void cm_free_response_msg(struct ib_mad_send_buf *msg)
++{
++	if (msg->ah)
++		rdma_destroy_ah(msg->ah, 0);
++	ib_free_send_mad(msg);
++}
++
+ static void *cm_copy_private_data(const void *private_data, u8 private_data_len)
+ {
+ 	void *data;
+@@ -445,30 +439,12 @@ static void cm_set_private_data(struct cm_id_private *cm_id_priv,
+ 	cm_id_priv->private_data_len = private_data_len;
+ }
+ 
+-static int cm_init_av_for_lap(struct cm_port *port, struct ib_wc *wc,
+-			      struct ib_grh *grh, struct cm_av *av)
++static void cm_init_av_for_lap(struct cm_port *port, struct ib_wc *wc,
++			       struct rdma_ah_attr *ah_attr, struct cm_av *av)
+ {
+-	struct rdma_ah_attr new_ah_attr;
+-	int ret;
+-
+ 	av->port = port;
+ 	av->pkey_index = wc->pkey_index;
+-
+-	/*
+-	 * av->ah_attr might be initialized based on past wc during incoming
+-	 * connect request or while sending out connect request. So initialize
+-	 * a new ah_attr on stack. If initialization fails, old ah_attr is
+-	 * used for sending any responses. If initialization is successful,
+-	 * than new ah_attr is used by overwriting old one.
+-	 */
+-	ret = ib_init_ah_attr_from_wc(port->cm_dev->ib_device,
+-				      port->port_num, wc,
+-				      grh, &new_ah_attr);
+-	if (ret)
+-		return ret;
+-
+-	rdma_move_ah_attr(&av->ah_attr, &new_ah_attr);
+-	return 0;
++	rdma_move_ah_attr(&av->ah_attr, ah_attr);
+ }
+ 
+ static int cm_init_av_for_response(struct cm_port *port, struct ib_wc *wc,
+@@ -481,21 +457,6 @@ static int cm_init_av_for_response(struct cm_port *port, struct ib_wc *wc,
+ 				       grh, &av->ah_attr);
+ }
+ 
+-static void add_cm_id_to_port_list(struct cm_id_private *cm_id_priv,
+-				   struct cm_av *av, struct cm_port *port)
+-{
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&cm.lock, flags);
+-	if (&cm_id_priv->av == av)
+-		list_add_tail(&cm_id_priv->prim_list, &port->cm_priv_prim_list);
+-	else if (&cm_id_priv->alt_av == av)
+-		list_add_tail(&cm_id_priv->altr_list, &port->cm_priv_altr_list);
+-	else
+-		WARN_ON(true);
+-	spin_unlock_irqrestore(&cm.lock, flags);
+-}
+-
+ static struct cm_port *
+ get_cm_port_from_path(struct sa_path_rec *path, const struct ib_gid_attr *attr)
+ {
+@@ -539,8 +500,7 @@ get_cm_port_from_path(struct sa_path_rec *path, const struct ib_gid_attr *attr)
+ 
+ static int cm_init_av_by_path(struct sa_path_rec *path,
+ 			      const struct ib_gid_attr *sgid_attr,
+-			      struct cm_av *av,
+-			      struct cm_id_private *cm_id_priv)
++			      struct cm_av *av)
+ {
+ 	struct rdma_ah_attr new_ah_attr;
+ 	struct cm_device *cm_dev;
+@@ -574,11 +534,24 @@ static int cm_init_av_by_path(struct sa_path_rec *path,
+ 		return ret;
+ 
+ 	av->timeout = path->packet_life_time + 1;
+-	add_cm_id_to_port_list(cm_id_priv, av, port);
+ 	rdma_move_ah_attr(&av->ah_attr, &new_ah_attr);
+ 	return 0;
+ }
+ 
++/* Move av created by cm_init_av_by_path(), so av.dgid is not moved */
++static void cm_move_av_from_path(struct cm_av *dest, struct cm_av *src)
++{
++	dest->port = src->port;
++	dest->pkey_index = src->pkey_index;
++	rdma_move_ah_attr(&dest->ah_attr, &src->ah_attr);
++	dest->timeout = src->timeout;
++}
++
++static void cm_destroy_av(struct cm_av *av)
++{
++	rdma_destroy_ah_attr(&av->ah_attr);
++}
++
+ static u32 cm_local_id(__be32 local_id)
+ {
+ 	return (__force u32) (local_id ^ cm.random_id_operand);
+@@ -854,8 +827,6 @@ static struct cm_id_private *cm_alloc_id_priv(struct ib_device *device,
+ 	spin_lock_init(&cm_id_priv->lock);
+ 	init_completion(&cm_id_priv->comp);
+ 	INIT_LIST_HEAD(&cm_id_priv->work_list);
+-	INIT_LIST_HEAD(&cm_id_priv->prim_list);
+-	INIT_LIST_HEAD(&cm_id_priv->altr_list);
+ 	atomic_set(&cm_id_priv->work_count, -1);
+ 	refcount_set(&cm_id_priv->refcount, 1);
+ 
+@@ -1156,12 +1127,7 @@ retest:
+ 		kfree(cm_id_priv->timewait_info);
+ 		cm_id_priv->timewait_info = NULL;
+ 	}
+-	if (!list_empty(&cm_id_priv->altr_list) &&
+-	    (!cm_id_priv->altr_send_port_not_ready))
+-		list_del(&cm_id_priv->altr_list);
+-	if (!list_empty(&cm_id_priv->prim_list) &&
+-	    (!cm_id_priv->prim_send_port_not_ready))
+-		list_del(&cm_id_priv->prim_list);
++
+ 	WARN_ON(cm_id_priv->listen_sharecount);
+ 	WARN_ON(!RB_EMPTY_NODE(&cm_id_priv->service_node));
+ 	if (!RB_EMPTY_NODE(&cm_id_priv->sidr_id_node))
+@@ -1175,8 +1141,8 @@ retest:
+ 	while ((work = cm_dequeue_work(cm_id_priv)) != NULL)
+ 		cm_free_work(work);
+ 
+-	rdma_destroy_ah_attr(&cm_id_priv->av.ah_attr);
+-	rdma_destroy_ah_attr(&cm_id_priv->alt_av.ah_attr);
++	cm_destroy_av(&cm_id_priv->av);
++	cm_destroy_av(&cm_id_priv->alt_av);
+ 	kfree(cm_id_priv->private_data);
+ 	kfree_rcu(cm_id_priv, rcu);
+ }
+@@ -1500,7 +1466,9 @@ static int cm_validate_req_param(struct ib_cm_req_param *param)
+ int ib_send_cm_req(struct ib_cm_id *cm_id,
+ 		   struct ib_cm_req_param *param)
+ {
++	struct cm_av av = {}, alt_av = {};
+ 	struct cm_id_private *cm_id_priv;
++	struct ib_mad_send_buf *msg;
+ 	struct cm_req_msg *req_msg;
+ 	unsigned long flags;
+ 	int ret;
+@@ -1514,8 +1482,7 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
+ 	spin_lock_irqsave(&cm_id_priv->lock, flags);
+ 	if (cm_id->state != IB_CM_IDLE || WARN_ON(cm_id_priv->timewait_info)) {
+ 		spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+-		ret = -EINVAL;
+-		goto out;
++		return -EINVAL;
+ 	}
+ 	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+ 
+@@ -1524,19 +1491,20 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
+ 	if (IS_ERR(cm_id_priv->timewait_info)) {
+ 		ret = PTR_ERR(cm_id_priv->timewait_info);
+ 		cm_id_priv->timewait_info = NULL;
+-		goto out;
++		return ret;
+ 	}
+ 
+ 	ret = cm_init_av_by_path(param->primary_path,
+-				 param->ppath_sgid_attr, &cm_id_priv->av,
+-				 cm_id_priv);
++				 param->ppath_sgid_attr, &av);
+ 	if (ret)
+-		goto out;
++		return ret;
+ 	if (param->alternate_path) {
+ 		ret = cm_init_av_by_path(param->alternate_path, NULL,
+-					 &cm_id_priv->alt_av, cm_id_priv);
+-		if (ret)
+-			goto out;
++					 &alt_av);
++		if (ret) {
++			cm_destroy_av(&av);
++			return ret;
++		}
+ 	}
+ 	cm_id->service_id = param->service_id;
+ 	cm_id->service_mask = ~cpu_to_be64(0);
+@@ -1552,33 +1520,40 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
+ 	cm_id_priv->pkey = param->primary_path->pkey;
+ 	cm_id_priv->qp_type = param->qp_type;
+ 
+-	ret = cm_alloc_msg(cm_id_priv, &cm_id_priv->msg);
+-	if (ret)
+-		goto out;
++	spin_lock_irqsave(&cm_id_priv->lock, flags);
++
++	cm_move_av_from_path(&cm_id_priv->av, &av);
++	if (param->alternate_path)
++		cm_move_av_from_path(&cm_id_priv->alt_av, &alt_av);
++
++	msg = cm_alloc_priv_msg(cm_id_priv);
++	if (IS_ERR(msg)) {
++		ret = PTR_ERR(msg);
++		goto out_unlock;
++	}
+ 
+-	req_msg = (struct cm_req_msg *) cm_id_priv->msg->mad;
++	req_msg = (struct cm_req_msg *)msg->mad;
+ 	cm_format_req(req_msg, cm_id_priv, param);
+ 	cm_id_priv->tid = req_msg->hdr.tid;
+-	cm_id_priv->msg->timeout_ms = cm_id_priv->timeout_ms;
+-	cm_id_priv->msg->context[1] = (void *) (unsigned long) IB_CM_REQ_SENT;
++	msg->timeout_ms = cm_id_priv->timeout_ms;
++	msg->context[1] = (void *)(unsigned long)IB_CM_REQ_SENT;
+ 
+ 	cm_id_priv->local_qpn = cpu_to_be32(IBA_GET(CM_REQ_LOCAL_QPN, req_msg));
+ 	cm_id_priv->rq_psn = cpu_to_be32(IBA_GET(CM_REQ_STARTING_PSN, req_msg));
+ 
+ 	trace_icm_send_req(&cm_id_priv->id);
+-	spin_lock_irqsave(&cm_id_priv->lock, flags);
+-	ret = ib_post_send_mad(cm_id_priv->msg, NULL);
+-	if (ret) {
+-		spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+-		goto error2;
+-	}
++	ret = ib_post_send_mad(msg, NULL);
++	if (ret)
++		goto out_free;
+ 	BUG_ON(cm_id->state != IB_CM_IDLE);
+ 	cm_id->state = IB_CM_REQ_SENT;
+ 	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+ 	return 0;
+-
+-error2:	cm_free_msg(cm_id_priv->msg);
+-out:	return ret;
++out_free:
++	cm_free_priv_msg(msg);
++out_unlock:
++	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
++	return ret;
+ }
+ EXPORT_SYMBOL(ib_send_cm_req);
+ 
+@@ -1618,7 +1593,7 @@ static int cm_issue_rej(struct cm_port *port,
+ 		IBA_GET(CM_REJ_REMOTE_COMM_ID, rcv_msg));
+ 	ret = ib_post_send_mad(msg, NULL);
+ 	if (ret)
+-		cm_free_msg(msg);
++		cm_free_response_msg(msg);
+ 
+ 	return ret;
+ }
+@@ -1974,7 +1949,7 @@ static void cm_dup_req_handler(struct cm_work *work,
+ 	return;
+ 
+ unlock:	spin_unlock_irq(&cm_id_priv->lock);
+-free:	cm_free_msg(msg);
++free:	cm_free_response_msg(msg);
+ }
+ 
+ static struct cm_id_private *cm_match_req(struct cm_work *work,
+@@ -2163,8 +2138,7 @@ static int cm_req_handler(struct cm_work *work)
+ 		sa_path_set_dmac(&work->path[0],
+ 				 cm_id_priv->av.ah_attr.roce.dmac);
+ 	work->path[0].hop_limit = grh->hop_limit;
+-	ret = cm_init_av_by_path(&work->path[0], gid_attr, &cm_id_priv->av,
+-				 cm_id_priv);
++	ret = cm_init_av_by_path(&work->path[0], gid_attr, &cm_id_priv->av);
+ 	if (ret) {
+ 		int err;
+ 
+@@ -2183,7 +2157,7 @@ static int cm_req_handler(struct cm_work *work)
+ 	}
+ 	if (cm_req_has_alt_path(req_msg)) {
+ 		ret = cm_init_av_by_path(&work->path[1], NULL,
+-					 &cm_id_priv->alt_av, cm_id_priv);
++					 &cm_id_priv->alt_av);
+ 		if (ret) {
+ 			ib_send_cm_rej(&cm_id_priv->id,
+ 				       IB_CM_REJ_INVALID_ALT_GID,
+@@ -2283,9 +2257,11 @@ int ib_send_cm_rep(struct ib_cm_id *cm_id,
+ 		goto out;
+ 	}
+ 
+-	ret = cm_alloc_msg(cm_id_priv, &msg);
+-	if (ret)
++	msg = cm_alloc_priv_msg(cm_id_priv);
++	if (IS_ERR(msg)) {
++		ret = PTR_ERR(msg);
+ 		goto out;
++	}
+ 
+ 	rep_msg = (struct cm_rep_msg *) msg->mad;
+ 	cm_format_rep(rep_msg, cm_id_priv, param);
+@@ -2294,14 +2270,10 @@ int ib_send_cm_rep(struct ib_cm_id *cm_id,
+ 
+ 	trace_icm_send_rep(cm_id);
+ 	ret = ib_post_send_mad(msg, NULL);
+-	if (ret) {
+-		spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+-		cm_free_msg(msg);
+-		return ret;
+-	}
++	if (ret)
++		goto out_free;
+ 
+ 	cm_id->state = IB_CM_REP_SENT;
+-	cm_id_priv->msg = msg;
+ 	cm_id_priv->initiator_depth = param->initiator_depth;
+ 	cm_id_priv->responder_resources = param->responder_resources;
+ 	cm_id_priv->rq_psn = cpu_to_be32(IBA_GET(CM_REP_STARTING_PSN, rep_msg));
+@@ -2309,8 +2281,13 @@ int ib_send_cm_rep(struct ib_cm_id *cm_id,
+ 		  "IBTA declares QPN to be 24 bits, but it is 0x%X\n",
+ 		  param->qp_num);
+ 	cm_id_priv->local_qpn = cpu_to_be32(param->qp_num & 0xFFFFFF);
++	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
++	return 0;
+ 
+-out:	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
++out_free:
++	cm_free_priv_msg(msg);
++out:
++	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(ib_send_cm_rep);
+@@ -2357,9 +2334,11 @@ int ib_send_cm_rtu(struct ib_cm_id *cm_id,
+ 		goto error;
+ 	}
+ 
+-	ret = cm_alloc_msg(cm_id_priv, &msg);
+-	if (ret)
++	msg = cm_alloc_msg(cm_id_priv);
++	if (IS_ERR(msg)) {
++		ret = PTR_ERR(msg);
+ 		goto error;
++	}
+ 
+ 	cm_format_rtu((struct cm_rtu_msg *) msg->mad, cm_id_priv,
+ 		      private_data, private_data_len);
+@@ -2453,7 +2432,7 @@ static void cm_dup_rep_handler(struct cm_work *work)
+ 	goto deref;
+ 
+ unlock:	spin_unlock_irq(&cm_id_priv->lock);
+-free:	cm_free_msg(msg);
++free:	cm_free_response_msg(msg);
+ deref:	cm_deref_id(cm_id_priv);
+ }
+ 
+@@ -2657,10 +2636,10 @@ static int cm_send_dreq_locked(struct cm_id_private *cm_id_priv,
+ 	    cm_id_priv->id.lap_state == IB_CM_MRA_LAP_RCVD)
+ 		ib_cancel_mad(cm_id_priv->av.port->mad_agent, cm_id_priv->msg);
+ 
+-	ret = cm_alloc_msg(cm_id_priv, &msg);
+-	if (ret) {
++	msg = cm_alloc_priv_msg(cm_id_priv);
++	if (IS_ERR(msg)) {
+ 		cm_enter_timewait(cm_id_priv);
+-		return ret;
++		return PTR_ERR(msg);
+ 	}
+ 
+ 	cm_format_dreq((struct cm_dreq_msg *) msg->mad, cm_id_priv,
+@@ -2672,12 +2651,11 @@ static int cm_send_dreq_locked(struct cm_id_private *cm_id_priv,
+ 	ret = ib_post_send_mad(msg, NULL);
+ 	if (ret) {
+ 		cm_enter_timewait(cm_id_priv);
+-		cm_free_msg(msg);
++		cm_free_priv_msg(msg);
+ 		return ret;
+ 	}
+ 
+ 	cm_id_priv->id.state = IB_CM_DREQ_SENT;
+-	cm_id_priv->msg = msg;
+ 	return 0;
+ }
+ 
+@@ -2732,9 +2710,9 @@ static int cm_send_drep_locked(struct cm_id_private *cm_id_priv,
+ 	cm_set_private_data(cm_id_priv, private_data, private_data_len);
+ 	cm_enter_timewait(cm_id_priv);
+ 
+-	ret = cm_alloc_msg(cm_id_priv, &msg);
+-	if (ret)
+-		return ret;
++	msg = cm_alloc_msg(cm_id_priv);
++	if (IS_ERR(msg))
++		return PTR_ERR(msg);
+ 
+ 	cm_format_drep((struct cm_drep_msg *) msg->mad, cm_id_priv,
+ 		       private_data, private_data_len);
+@@ -2794,7 +2772,7 @@ static int cm_issue_drep(struct cm_port *port,
+ 		IBA_GET(CM_DREQ_REMOTE_COMM_ID, dreq_msg));
+ 	ret = ib_post_send_mad(msg, NULL);
+ 	if (ret)
+-		cm_free_msg(msg);
++		cm_free_response_msg(msg);
+ 
+ 	return ret;
+ }
+@@ -2853,7 +2831,7 @@ static int cm_dreq_handler(struct cm_work *work)
+ 
+ 		if (cm_create_response_msg_ah(work->port, work->mad_recv_wc, msg) ||
+ 		    ib_post_send_mad(msg, NULL))
+-			cm_free_msg(msg);
++			cm_free_response_msg(msg);
+ 		goto deref;
+ 	case IB_CM_DREQ_RCVD:
+ 		atomic_long_inc(&work->port->counter_group[CM_RECV_DUPLICATES].
+@@ -2927,9 +2905,9 @@ static int cm_send_rej_locked(struct cm_id_private *cm_id_priv,
+ 	case IB_CM_REP_RCVD:
+ 	case IB_CM_MRA_REP_SENT:
+ 		cm_reset_to_idle(cm_id_priv);
+-		ret = cm_alloc_msg(cm_id_priv, &msg);
+-		if (ret)
+-			return ret;
++		msg = cm_alloc_msg(cm_id_priv);
++		if (IS_ERR(msg))
++			return PTR_ERR(msg);
+ 		cm_format_rej((struct cm_rej_msg *)msg->mad, cm_id_priv, reason,
+ 			      ari, ari_length, private_data, private_data_len,
+ 			      state);
+@@ -2937,9 +2915,9 @@ static int cm_send_rej_locked(struct cm_id_private *cm_id_priv,
+ 	case IB_CM_REP_SENT:
+ 	case IB_CM_MRA_REP_RCVD:
+ 		cm_enter_timewait(cm_id_priv);
+-		ret = cm_alloc_msg(cm_id_priv, &msg);
+-		if (ret)
+-			return ret;
++		msg = cm_alloc_msg(cm_id_priv);
++		if (IS_ERR(msg))
++			return PTR_ERR(msg);
+ 		cm_format_rej((struct cm_rej_msg *)msg->mad, cm_id_priv, reason,
+ 			      ari, ari_length, private_data, private_data_len,
+ 			      state);
+@@ -3117,13 +3095,15 @@ int ib_send_cm_mra(struct ib_cm_id *cm_id,
+ 	default:
+ 		trace_icm_send_mra_unknown_err(&cm_id_priv->id);
+ 		ret = -EINVAL;
+-		goto error1;
++		goto error_unlock;
+ 	}
+ 
+ 	if (!(service_timeout & IB_CM_MRA_FLAG_DELAY)) {
+-		ret = cm_alloc_msg(cm_id_priv, &msg);
+-		if (ret)
+-			goto error1;
++		msg = cm_alloc_msg(cm_id_priv);
++		if (IS_ERR(msg)) {
++			ret = PTR_ERR(msg);
++			goto error_unlock;
++		}
+ 
+ 		cm_format_mra((struct cm_mra_msg *) msg->mad, cm_id_priv,
+ 			      msg_response, service_timeout,
+@@ -3131,7 +3111,7 @@ int ib_send_cm_mra(struct ib_cm_id *cm_id,
+ 		trace_icm_send_mra(cm_id);
+ 		ret = ib_post_send_mad(msg, NULL);
+ 		if (ret)
+-			goto error2;
++			goto error_free_msg;
+ 	}
+ 
+ 	cm_id->state = cm_state;
+@@ -3141,13 +3121,11 @@ int ib_send_cm_mra(struct ib_cm_id *cm_id,
+ 	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+ 	return 0;
+ 
+-error1:	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+-	kfree(data);
+-	return ret;
+-
+-error2:	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+-	kfree(data);
++error_free_msg:
+ 	cm_free_msg(msg);
++error_unlock:
++	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
++	kfree(data);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(ib_send_cm_mra);
+@@ -3291,6 +3269,8 @@ static int cm_lap_handler(struct cm_work *work)
+ 	struct cm_lap_msg *lap_msg;
+ 	struct ib_cm_lap_event_param *param;
+ 	struct ib_mad_send_buf *msg = NULL;
++	struct rdma_ah_attr ah_attr;
++	struct cm_av alt_av = {};
+ 	int ret;
+ 
+ 	/* Currently Alternate path messages are not supported for
+@@ -3319,7 +3299,25 @@ static int cm_lap_handler(struct cm_work *work)
+ 	work->cm_event.private_data =
+ 		IBA_GET_MEM_PTR(CM_LAP_PRIVATE_DATA, lap_msg);
+ 
++	ret = ib_init_ah_attr_from_wc(work->port->cm_dev->ib_device,
++				      work->port->port_num,
++				      work->mad_recv_wc->wc,
++				      work->mad_recv_wc->recv_buf.grh,
++				      &ah_attr);
++	if (ret)
++		goto deref;
++
++	ret = cm_init_av_by_path(param->alternate_path, NULL, &alt_av);
++	if (ret) {
++		rdma_destroy_ah_attr(&ah_attr);
++		return -EINVAL;
++	}
++
+ 	spin_lock_irq(&cm_id_priv->lock);
++	cm_init_av_for_lap(work->port, work->mad_recv_wc->wc,
++			   &ah_attr, &cm_id_priv->av);
++	cm_move_av_from_path(&cm_id_priv->alt_av, &alt_av);
++
+ 	if (cm_id_priv->id.state != IB_CM_ESTABLISHED)
+ 		goto unlock;
+ 
+@@ -3343,7 +3341,7 @@ static int cm_lap_handler(struct cm_work *work)
+ 
+ 		if (cm_create_response_msg_ah(work->port, work->mad_recv_wc, msg) ||
+ 		    ib_post_send_mad(msg, NULL))
+-			cm_free_msg(msg);
++			cm_free_response_msg(msg);
+ 		goto deref;
+ 	case IB_CM_LAP_RCVD:
+ 		atomic_long_inc(&work->port->counter_group[CM_RECV_DUPLICATES].
+@@ -3353,17 +3351,6 @@ static int cm_lap_handler(struct cm_work *work)
+ 		goto unlock;
+ 	}
+ 
+-	ret = cm_init_av_for_lap(work->port, work->mad_recv_wc->wc,
+-				 work->mad_recv_wc->recv_buf.grh,
+-				 &cm_id_priv->av);
+-	if (ret)
+-		goto unlock;
+-
+-	ret = cm_init_av_by_path(param->alternate_path, NULL,
+-				 &cm_id_priv->alt_av, cm_id_priv);
+-	if (ret)
+-		goto unlock;
+-
+ 	cm_id_priv->id.lap_state = IB_CM_LAP_RCVD;
+ 	cm_id_priv->tid = lap_msg->hdr.tid;
+ 	cm_queue_work_unlock(cm_id_priv, work);
+@@ -3471,6 +3458,7 @@ int ib_send_cm_sidr_req(struct ib_cm_id *cm_id,
+ {
+ 	struct cm_id_private *cm_id_priv;
+ 	struct ib_mad_send_buf *msg;
++	struct cm_av av = {};
+ 	unsigned long flags;
+ 	int ret;
+ 
+@@ -3479,42 +3467,43 @@ int ib_send_cm_sidr_req(struct ib_cm_id *cm_id,
+ 		return -EINVAL;
+ 
+ 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
+-	ret = cm_init_av_by_path(param->path, param->sgid_attr,
+-				 &cm_id_priv->av,
+-				 cm_id_priv);
++	ret = cm_init_av_by_path(param->path, param->sgid_attr, &av);
+ 	if (ret)
+-		goto out;
++		return ret;
+ 
++	spin_lock_irqsave(&cm_id_priv->lock, flags);
++	cm_move_av_from_path(&cm_id_priv->av, &av);
+ 	cm_id->service_id = param->service_id;
+ 	cm_id->service_mask = ~cpu_to_be64(0);
+ 	cm_id_priv->timeout_ms = param->timeout_ms;
+ 	cm_id_priv->max_cm_retries = param->max_cm_retries;
+-	ret = cm_alloc_msg(cm_id_priv, &msg);
+-	if (ret)
+-		goto out;
+-
+-	cm_format_sidr_req((struct cm_sidr_req_msg *) msg->mad, cm_id_priv,
+-			   param);
+-	msg->timeout_ms = cm_id_priv->timeout_ms;
+-	msg->context[1] = (void *) (unsigned long) IB_CM_SIDR_REQ_SENT;
+-
+-	spin_lock_irqsave(&cm_id_priv->lock, flags);
+-	if (cm_id->state == IB_CM_IDLE) {
+-		trace_icm_send_sidr_req(&cm_id_priv->id);
+-		ret = ib_post_send_mad(msg, NULL);
+-	} else {
++	if (cm_id->state != IB_CM_IDLE) {
+ 		ret = -EINVAL;
++		goto out_unlock;
+ 	}
+ 
+-	if (ret) {
+-		spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+-		cm_free_msg(msg);
+-		goto out;
++	msg = cm_alloc_priv_msg(cm_id_priv);
++	if (IS_ERR(msg)) {
++		ret = PTR_ERR(msg);
++		goto out_unlock;
+ 	}
++
++	cm_format_sidr_req((struct cm_sidr_req_msg *)msg->mad, cm_id_priv,
++			   param);
++	msg->timeout_ms = cm_id_priv->timeout_ms;
++	msg->context[1] = (void *)(unsigned long)IB_CM_SIDR_REQ_SENT;
++
++	trace_icm_send_sidr_req(&cm_id_priv->id);
++	ret = ib_post_send_mad(msg, NULL);
++	if (ret)
++		goto out_free;
+ 	cm_id->state = IB_CM_SIDR_REQ_SENT;
+-	cm_id_priv->msg = msg;
+ 	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+-out:
++	return 0;
++out_free:
++	cm_free_priv_msg(msg);
++out_unlock:
++	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(ib_send_cm_sidr_req);
+@@ -3661,9 +3650,9 @@ static int cm_send_sidr_rep_locked(struct cm_id_private *cm_id_priv,
+ 	if (cm_id_priv->id.state != IB_CM_SIDR_REQ_RCVD)
+ 		return -EINVAL;
+ 
+-	ret = cm_alloc_msg(cm_id_priv, &msg);
+-	if (ret)
+-		return ret;
++	msg = cm_alloc_msg(cm_id_priv);
++	if (IS_ERR(msg))
++		return PTR_ERR(msg);
+ 
+ 	cm_format_sidr_rep((struct cm_sidr_rep_msg *) msg->mad, cm_id_priv,
+ 			   param);
+@@ -3963,9 +3952,7 @@ out:
+ static int cm_migrate(struct ib_cm_id *cm_id)
+ {
+ 	struct cm_id_private *cm_id_priv;
+-	struct cm_av tmp_av;
+ 	unsigned long flags;
+-	int tmp_send_port_not_ready;
+ 	int ret = 0;
+ 
+ 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
+@@ -3974,14 +3961,7 @@ static int cm_migrate(struct ib_cm_id *cm_id)
+ 	    (cm_id->lap_state == IB_CM_LAP_UNINIT ||
+ 	     cm_id->lap_state == IB_CM_LAP_IDLE)) {
+ 		cm_id->lap_state = IB_CM_LAP_IDLE;
+-		/* Swap address vector */
+-		tmp_av = cm_id_priv->av;
+ 		cm_id_priv->av = cm_id_priv->alt_av;
+-		cm_id_priv->alt_av = tmp_av;
+-		/* Swap port send ready state */
+-		tmp_send_port_not_ready = cm_id_priv->prim_send_port_not_ready;
+-		cm_id_priv->prim_send_port_not_ready = cm_id_priv->altr_send_port_not_ready;
+-		cm_id_priv->altr_send_port_not_ready = tmp_send_port_not_ready;
+ 	} else
+ 		ret = -EINVAL;
+ 	spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+@@ -4356,9 +4336,6 @@ static int cm_add_one(struct ib_device *ib_device)
+ 		port->cm_dev = cm_dev;
+ 		port->port_num = i;
+ 
+-		INIT_LIST_HEAD(&port->cm_priv_prim_list);
+-		INIT_LIST_HEAD(&port->cm_priv_altr_list);
+-
+ 		ret = cm_create_port_fs(port);
+ 		if (ret)
+ 			goto error1;
+@@ -4422,8 +4399,6 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data)
+ {
+ 	struct cm_device *cm_dev = client_data;
+ 	struct cm_port *port;
+-	struct cm_id_private *cm_id_priv;
+-	struct ib_mad_agent *cur_mad_agent;
+ 	struct ib_port_modify port_modify = {
+ 		.clr_port_cap_mask = IB_PORT_CM_SUP
+ 	};
+@@ -4444,24 +4419,13 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data)
+ 
+ 		port = cm_dev->port[i-1];
+ 		ib_modify_port(ib_device, port->port_num, 0, &port_modify);
+-		/* Mark all the cm_id's as not valid */
+-		spin_lock_irq(&cm.lock);
+-		list_for_each_entry(cm_id_priv, &port->cm_priv_altr_list, altr_list)
+-			cm_id_priv->altr_send_port_not_ready = 1;
+-		list_for_each_entry(cm_id_priv, &port->cm_priv_prim_list, prim_list)
+-			cm_id_priv->prim_send_port_not_ready = 1;
+-		spin_unlock_irq(&cm.lock);
+ 		/*
+ 		 * We flush the queue here after the going_down set, this
+ 		 * verify that no new works will be queued in the recv handler,
+ 		 * after that we can call the unregister_mad_agent
+ 		 */
+ 		flush_workqueue(cm.wq);
+-		spin_lock_irq(&cm.state_lock);
+-		cur_mad_agent = port->mad_agent;
+-		port->mad_agent = NULL;
+-		spin_unlock_irq(&cm.state_lock);
+-		ib_unregister_mad_agent(cur_mad_agent);
++		ib_unregister_mad_agent(port->mad_agent);
+ 		cm_remove_port_fs(port);
+ 		kfree(port);
+ 	}
+@@ -4476,7 +4440,6 @@ static int __init ib_cm_init(void)
+ 	INIT_LIST_HEAD(&cm.device_list);
+ 	rwlock_init(&cm.device_lock);
+ 	spin_lock_init(&cm.lock);
+-	spin_lock_init(&cm.state_lock);
+ 	cm.listen_service_table = RB_ROOT;
+ 	cm.listen_service_id = be64_to_cpu(IB_CM_ASSIGN_SERVICE_ID);
+ 	cm.remote_id_table = RB_ROOT;
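
The hunks above convert cm_alloc_msg() from an int-returning function with an out-parameter to one that returns the buffer directly, encoding failure in the pointer itself. A minimal sketch of that ERR_PTR() convention, using hypothetical example_* names rather than the real CM structures:

#include <linux/err.h>
#include <linux/slab.h>

struct example_msg { char payload[64]; };

/* Failure is encoded in the returned pointer via ERR_PTR(), so no
 * out-parameter or separate int return is needed. */
static struct example_msg *example_alloc_msg(void)
{
	struct example_msg *msg = kzalloc(sizeof(*msg), GFP_KERNEL);

	if (!msg)
		return ERR_PTR(-ENOMEM);
	return msg;
}

static int example_caller(void)
{
	struct example_msg *msg = example_alloc_msg();

	if (IS_ERR(msg))
		return PTR_ERR(msg);	/* decode the errno and propagate */
	kfree(msg);
	return 0;
}
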
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index ab148a696c0ce..ad9a9ba5f00d1 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1852,6 +1852,7 @@ static void _destroy_id(struct rdma_id_private *id_priv,
+ {
+ 	cma_cancel_operation(id_priv, state);
+ 
++	rdma_restrack_del(&id_priv->res);
+ 	if (id_priv->cma_dev) {
+ 		if (rdma_cap_ib_cm(id_priv->id.device, 1)) {
+ 			if (id_priv->cm_id.ib)
+@@ -1861,7 +1862,6 @@ static void _destroy_id(struct rdma_id_private *id_priv,
+ 				iw_destroy_cm_id(id_priv->cm_id.iw);
+ 		}
+ 		cma_leave_mc_groups(id_priv);
+-		rdma_restrack_del(&id_priv->res);
+ 		cma_release_dev(id_priv);
+ 	}
+ 
+@@ -2472,8 +2472,10 @@ static int cma_iw_listen(struct rdma_id_private *id_priv, int backlog)
+ 	if (IS_ERR(id))
+ 		return PTR_ERR(id);
+ 
++	mutex_lock(&id_priv->qp_mutex);
+ 	id->tos = id_priv->tos;
+ 	id->tos_set = id_priv->tos_set;
++	mutex_unlock(&id_priv->qp_mutex);
+ 	id->afonly = id_priv->afonly;
+ 	id_priv->cm_id.iw = id;
+ 
+@@ -2534,8 +2536,10 @@ static int cma_listen_on_dev(struct rdma_id_private *id_priv,
+ 	cma_id_get(id_priv);
+ 	dev_id_priv->internal_id = 1;
+ 	dev_id_priv->afonly = id_priv->afonly;
++	mutex_lock(&id_priv->qp_mutex);
+ 	dev_id_priv->tos_set = id_priv->tos_set;
+ 	dev_id_priv->tos = id_priv->tos;
++	mutex_unlock(&id_priv->qp_mutex);
+ 
+ 	ret = rdma_listen(&dev_id_priv->id, id_priv->backlog);
+ 	if (ret)
+@@ -2582,8 +2586,10 @@ void rdma_set_service_type(struct rdma_cm_id *id, int tos)
+ 	struct rdma_id_private *id_priv;
+ 
+ 	id_priv = container_of(id, struct rdma_id_private, id);
++	mutex_lock(&id_priv->qp_mutex);
+ 	id_priv->tos = (u8) tos;
+ 	id_priv->tos_set = true;
++	mutex_unlock(&id_priv->qp_mutex);
+ }
+ EXPORT_SYMBOL(rdma_set_service_type);
+ 
+@@ -2610,8 +2616,10 @@ int rdma_set_ack_timeout(struct rdma_cm_id *id, u8 timeout)
+ 		return -EINVAL;
+ 
+ 	id_priv = container_of(id, struct rdma_id_private, id);
++	mutex_lock(&id_priv->qp_mutex);
+ 	id_priv->timeout = timeout;
+ 	id_priv->timeout_set = true;
++	mutex_unlock(&id_priv->qp_mutex);
+ 
+ 	return 0;
+ }
+@@ -2647,8 +2655,10 @@ int rdma_set_min_rnr_timer(struct rdma_cm_id *id, u8 min_rnr_timer)
+ 		return -EINVAL;
+ 
+ 	id_priv = container_of(id, struct rdma_id_private, id);
++	mutex_lock(&id_priv->qp_mutex);
+ 	id_priv->min_rnr_timer = min_rnr_timer;
+ 	id_priv->min_rnr_timer_set = true;
++	mutex_unlock(&id_priv->qp_mutex);
+ 
+ 	return 0;
+ }
+@@ -3034,8 +3044,11 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
+ 
+ 	u8 default_roce_tos = id_priv->cma_dev->default_roce_tos[id_priv->id.port_num -
+ 					rdma_start_port(id_priv->cma_dev->device)];
+-	u8 tos = id_priv->tos_set ? id_priv->tos : default_roce_tos;
++	u8 tos;
+ 
++	mutex_lock(&id_priv->qp_mutex);
++	tos = id_priv->tos_set ? id_priv->tos : default_roce_tos;
++	mutex_unlock(&id_priv->qp_mutex);
+ 
+ 	work = kzalloc(sizeof *work, GFP_KERNEL);
+ 	if (!work)
+@@ -3082,8 +3095,12 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
+ 	 * PacketLifeTime = local ACK timeout/2
+ 	 * as a reasonable approximation for RoCE networks.
+ 	 */
+-	route->path_rec->packet_life_time = id_priv->timeout_set ?
+-		id_priv->timeout - 1 : CMA_IBOE_PACKET_LIFETIME;
++	mutex_lock(&id_priv->qp_mutex);
++	if (id_priv->timeout_set && id_priv->timeout)
++		route->path_rec->packet_life_time = id_priv->timeout - 1;
++	else
++		route->path_rec->packet_life_time = CMA_IBOE_PACKET_LIFETIME;
++	mutex_unlock(&id_priv->qp_mutex);
+ 
+ 	if (!route->path_rec->mtu) {
+ 		ret = -EINVAL;
+@@ -4107,8 +4124,11 @@ static int cma_connect_iw(struct rdma_id_private *id_priv,
+ 	if (IS_ERR(cm_id))
+ 		return PTR_ERR(cm_id);
+ 
++	mutex_lock(&id_priv->qp_mutex);
+ 	cm_id->tos = id_priv->tos;
+ 	cm_id->tos_set = id_priv->tos_set;
++	mutex_unlock(&id_priv->qp_mutex);
++
+ 	id_priv->cm_id.iw = cm_id;
+ 
+ 	memcpy(&cm_id->local_addr, cma_src_addr(id_priv),
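
The cma.c hunks above take id_priv->qp_mutex around every read and write of the tos/tos_set and timeout/timeout_set pairs, so the value and its "set" flag are always observed together. A hedged sketch of the pattern with placeholder names:

#include <linux/mutex.h>
#include <linux/types.h>

struct example_id {
	struct mutex qp_mutex;
	u8 tos;
	bool tos_set;
};

static void example_set_tos(struct example_id *id, u8 tos)
{
	mutex_lock(&id->qp_mutex);
	id->tos = tos;
	id->tos_set = true;	/* value and flag change atomically */
	mutex_unlock(&id->qp_mutex);
}

static u8 example_effective_tos(struct example_id *id, u8 default_tos)
{
	u8 tos;

	mutex_lock(&id->qp_mutex);
	tos = id->tos_set ? id->tos : default_tos;
	mutex_unlock(&id->qp_mutex);
	return tos;
}
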
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 64e4be1cbec7c..a1d1deca7c063 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -3034,12 +3034,29 @@ static int ib_uverbs_ex_modify_wq(struct uverbs_attr_bundle *attrs)
+ 	if (!wq)
+ 		return -EINVAL;
+ 
+-	wq_attr.curr_wq_state = cmd.curr_wq_state;
+-	wq_attr.wq_state = cmd.wq_state;
+ 	if (cmd.attr_mask & IB_WQ_FLAGS) {
+ 		wq_attr.flags = cmd.flags;
+ 		wq_attr.flags_mask = cmd.flags_mask;
+ 	}
++
++	if (cmd.attr_mask & IB_WQ_CUR_STATE) {
++		if (cmd.curr_wq_state > IB_WQS_ERR)
++			return -EINVAL;
++
++		wq_attr.curr_wq_state = cmd.curr_wq_state;
++	} else {
++		wq_attr.curr_wq_state = wq->state;
++	}
++
++	if (cmd.attr_mask & IB_WQ_STATE) {
++		if (cmd.wq_state > IB_WQS_ERR)
++			return -EINVAL;
++
++		wq_attr.wq_state = cmd.wq_state;
++	} else {
++		wq_attr.wq_state = wq_attr.curr_wq_state;
++	}
++
+ 	ret = wq->device->ops.modify_wq(wq, &wq_attr, cmd.attr_mask,
+ 					&attrs->driver_udata);
+ 	rdma_lookup_put_uobject(&wq->uobject->uevent.uobject,
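
The ib_uverbs_ex_modify_wq() hunk above stops copying user-supplied WQ states unconditionally: each value is range-checked only when its attr_mask bit says it is meaningful, and a trusted default is used otherwise. An illustrative sketch (constants and names are placeholders, not the real IB ones):

#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/types.h>

#define EX_FLAG_CUR_STATE	BIT(0)
#define EX_FLAG_NEW_STATE	BIT(1)
#define EX_STATE_MAX		2	/* stand-in for IB_WQS_ERR */

static int example_fill_states(u32 mask, u32 user_cur, u32 user_new,
			       u32 hw_cur, u32 *cur, u32 *next)
{
	if (mask & EX_FLAG_CUR_STATE) {
		if (user_cur > EX_STATE_MAX)
			return -EINVAL;
		*cur = user_cur;
	} else {
		*cur = hw_cur;		/* fall back to the driver's view */
	}

	if (mask & EX_FLAG_NEW_STATE) {
		if (user_new > EX_STATE_MAX)
			return -EINVAL;
		*next = user_new;
	} else {
		*next = *cur;		/* no transition requested */
	}
	return 0;
}
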
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 7652dafe32eca..dcbe5e28a4f7a 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -274,8 +274,6 @@ static int set_rc_inl(struct hns_roce_qp *qp, const struct ib_send_wr *wr,
+ 
+ 	dseg += sizeof(struct hns_roce_v2_rc_send_wqe);
+ 
+-	roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_INLINE_S, 1);
+-
+ 	if (msg_len <= HNS_ROCE_V2_MAX_RC_INL_INN_SZ) {
+ 		roce_set_bit(rc_sq_wqe->byte_20,
+ 			     V2_RC_SEND_WQE_BYTE_20_INL_TYPE_S, 0);
+@@ -320,6 +318,8 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
+ 		       V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
+ 		       (*sge_ind) & (qp->sge.sge_cnt - 1));
+ 
++	roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_INLINE_S,
++		     !!(wr->send_flags & IB_SEND_INLINE));
+ 	if (wr->send_flags & IB_SEND_INLINE)
+ 		return set_rc_inl(qp, wr, rc_sq_wqe, sge_ind);
+ 
+@@ -791,8 +791,7 @@ out:
+ 		qp->sq.head += nreq;
+ 		qp->next_sge = sge_idx;
+ 
+-		if (nreq == 1 && qp->sq.head == qp->sq.tail + 1 &&
+-		    (qp->en_flags & HNS_ROCE_QP_CAP_DIRECT_WQE))
++		if (nreq == 1 && (qp->en_flags & HNS_ROCE_QP_CAP_DIRECT_WQE))
+ 			write_dwqe(hr_dev, qp, wqe);
+ 		else
+ 			update_sq_db(hr_dev, qp);
+@@ -1620,6 +1619,22 @@ static void hns_roce_function_clear(struct hns_roce_dev *hr_dev)
+ 	}
+ }
+ 
++static int hns_roce_clear_extdb_list_info(struct hns_roce_dev *hr_dev)
++{
++	struct hns_roce_cmq_desc desc;
++	int ret;
++
++	hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_CLEAR_EXTDB_LIST_INFO,
++				      false);
++	ret = hns_roce_cmq_send(hr_dev, &desc, 1);
++	if (ret)
++		ibdev_err(&hr_dev->ib_dev,
++			  "failed to clear extended doorbell info, ret = %d.\n",
++			  ret);
++
++	return ret;
++}
++
+ static int hns_roce_query_fw_ver(struct hns_roce_dev *hr_dev)
+ {
+ 	struct hns_roce_query_fw_info *resp;
+@@ -2093,12 +2108,6 @@ static void set_hem_page_size(struct hns_roce_dev *hr_dev)
+ 	calc_pg_sz(caps->max_cqes, caps->cqe_sz, caps->cqe_hop_num,
+ 		   1, &caps->cqe_buf_pg_sz, &caps->cqe_ba_pg_sz, HEM_TYPE_CQE);
+ 
+-	if (caps->cqc_timer_entry_sz)
+-		calc_pg_sz(caps->num_cqc_timer, caps->cqc_timer_entry_sz,
+-			   caps->cqc_timer_hop_num, caps->cqc_timer_bt_num,
+-			   &caps->cqc_timer_buf_pg_sz,
+-			   &caps->cqc_timer_ba_pg_sz, HEM_TYPE_CQC_TIMER);
+-
+ 	/* SRQ */
+ 	if (caps->flags & HNS_ROCE_CAP_FLAG_SRQ) {
+ 		calc_pg_sz(caps->num_srqs, caps->srqc_entry_sz,
+@@ -2739,6 +2748,11 @@ static int hns_roce_v2_init(struct hns_roce_dev *hr_dev)
+ 	struct hns_roce_v2_priv *priv = hr_dev->priv;
+ 	int ret;
+ 
++	/* The hns ROCEE requires the extdb info to be cleared before use */
++	ret = hns_roce_clear_extdb_list_info(hr_dev);
++	if (ret)
++		return ret;
++
+ 	ret = get_hem_table(hr_dev);
+ 	if (ret)
+ 		return ret;
+@@ -4485,12 +4499,13 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ 	struct ib_device *ibdev = &hr_dev->ib_dev;
+ 	dma_addr_t trrl_ba;
+ 	dma_addr_t irrl_ba;
+-	enum ib_mtu mtu;
++	enum ib_mtu ib_mtu;
+ 	u8 lp_pktn_ini;
+ 	u64 *mtts;
+ 	u8 *dmac;
+ 	u8 *smac;
+ 	u32 port;
++	int mtu;
+ 	int ret;
+ 
+ 	ret = config_qp_rq_buf(hr_dev, hr_qp, context, qpc_mask);
+@@ -4574,19 +4589,23 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ 	roce_set_field(qpc_mask->byte_52_udpspn_dmac, V2_QPC_BYTE_52_DMAC_M,
+ 		       V2_QPC_BYTE_52_DMAC_S, 0);
+ 
+-	mtu = get_mtu(ibqp, attr);
+-	hr_qp->path_mtu = mtu;
++	ib_mtu = get_mtu(ibqp, attr);
++	hr_qp->path_mtu = ib_mtu;
++
++	mtu = ib_mtu_enum_to_int(ib_mtu);
++	if (WARN_ON(mtu < 0))
++		return -EINVAL;
+ 
+ 	if (attr_mask & IB_QP_PATH_MTU) {
+ 		roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
+-			       V2_QPC_BYTE_24_MTU_S, mtu);
++			       V2_QPC_BYTE_24_MTU_S, ib_mtu);
+ 		roce_set_field(qpc_mask->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
+ 			       V2_QPC_BYTE_24_MTU_S, 0);
+ 	}
+ 
+ #define MAX_LP_MSG_LEN 65536
+ 	/* MTU * (2 ^ LP_PKTN_INI) shouldn't be bigger than 64KB */
+-	lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / ib_mtu_enum_to_int(mtu));
++	lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / mtu);
+ 
+ 	roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_LP_PKTN_INI_M,
+ 		       V2_QPC_BYTE_56_LP_PKTN_INI_S, lp_pktn_ini);
+@@ -4758,6 +4777,11 @@ enum {
+ 	DIP_VALID,
+ };
+ 
++enum {
++	WND_LIMIT,
++	WND_UNLIMIT,
++};
++
+ static int check_cong_type(struct ib_qp *ibqp,
+ 			   struct hns_roce_congestion_algorithm *cong_alg)
+ {
+@@ -4769,21 +4793,25 @@ static int check_cong_type(struct ib_qp *ibqp,
+ 		cong_alg->alg_sel = CONG_DCQCN;
+ 		cong_alg->alg_sub_sel = UNSUPPORT_CONG_LEVEL;
+ 		cong_alg->dip_vld = DIP_INVALID;
++		cong_alg->wnd_mode_sel = WND_LIMIT;
+ 		break;
+ 	case CONG_TYPE_LDCP:
+ 		cong_alg->alg_sel = CONG_WINDOW;
+ 		cong_alg->alg_sub_sel = CONG_LDCP;
+ 		cong_alg->dip_vld = DIP_INVALID;
++		cong_alg->wnd_mode_sel = WND_UNLIMIT;
+ 		break;
+ 	case CONG_TYPE_HC3:
+ 		cong_alg->alg_sel = CONG_WINDOW;
+ 		cong_alg->alg_sub_sel = CONG_HC3;
+ 		cong_alg->dip_vld = DIP_INVALID;
++		cong_alg->wnd_mode_sel = WND_LIMIT;
+ 		break;
+ 	case CONG_TYPE_DIP:
+ 		cong_alg->alg_sel = CONG_DCQCN;
+ 		cong_alg->alg_sub_sel = UNSUPPORT_CONG_LEVEL;
+ 		cong_alg->dip_vld = DIP_VALID;
++		cong_alg->wnd_mode_sel = WND_LIMIT;
+ 		break;
+ 	default:
+ 		ibdev_err(&hr_dev->ib_dev,
+@@ -4824,6 +4852,9 @@ static int fill_cong_field(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ 	hr_reg_write(&qpc_mask->ext, QPCEX_CONG_ALG_SUB_SEL, 0);
+ 	hr_reg_write(&context->ext, QPCEX_DIP_CTX_IDX_VLD, cong_field.dip_vld);
+ 	hr_reg_write(&qpc_mask->ext, QPCEX_DIP_CTX_IDX_VLD, 0);
++	hr_reg_write(&context->ext, QPCEX_SQ_RQ_NOT_FORBID_EN,
++		     cong_field.wnd_mode_sel);
++	hr_reg_clear(&qpc_mask->ext, QPCEX_SQ_RQ_NOT_FORBID_EN);
+ 
+ 	/* if dip is disabled, there is no need to set dip idx */
+ 	if (cong_field.dip_vld == 0)
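
The modify_qp_init_to_rtr() hunk earlier in this file converts the MTU enum to bytes once, rejects an invalid value, and derives LP_PKTN_INI so that mtu * 2^LP_PKTN_INI stays within 64KB. A small sketch of that calculation, names hypothetical:

#include <linux/errno.h>
#include <linux/log2.h>

#define EXAMPLE_MAX_LP_MSG_LEN 65536	/* 64KB, matching the hunk */

static int example_lp_pktn_ini(int mtu_bytes)
{
	if (mtu_bytes <= 0 || mtu_bytes > EXAMPLE_MAX_LP_MSG_LEN)
		return -EINVAL;
	/* largest n such that mtu_bytes * 2^n still fits in 64KB */
	return ilog2(EXAMPLE_MAX_LP_MSG_LEN / mtu_bytes);
}
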
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index a2100a629859a..23cf2f6bc7a54 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -248,6 +248,7 @@ enum hns_roce_opcode_type {
+ 	HNS_ROCE_OPC_CLR_SCCC				= 0x8509,
+ 	HNS_ROCE_OPC_QUERY_SCCC				= 0x850a,
+ 	HNS_ROCE_OPC_RESET_SCCC				= 0x850b,
++	HNS_ROCE_OPC_CLEAR_EXTDB_LIST_INFO		= 0x850d,
+ 	HNS_ROCE_OPC_QUERY_VF_RES			= 0x850e,
+ 	HNS_ROCE_OPC_CFG_GMV_TBL			= 0x850f,
+ 	HNS_ROCE_OPC_CFG_GMV_BT				= 0x8510,
+@@ -963,6 +964,7 @@ struct hns_roce_v2_qp_context {
+ #define QPCEX_CONG_ALG_SUB_SEL QPCEX_FIELD_LOC(1, 1)
+ #define QPCEX_DIP_CTX_IDX_VLD QPCEX_FIELD_LOC(2, 2)
+ #define QPCEX_DIP_CTX_IDX QPCEX_FIELD_LOC(22, 3)
++#define QPCEX_SQ_RQ_NOT_FORBID_EN QPCEX_FIELD_LOC(23, 23)
+ #define QPCEX_STASH QPCEX_FIELD_LOC(82, 82)
+ 
+ #define	V2_QP_RWE_S 1 /* rdma write enable */
+@@ -1642,6 +1644,7 @@ struct hns_roce_congestion_algorithm {
+ 	u8 alg_sel;
+ 	u8 alg_sub_sel;
+ 	u8 dip_vld;
++	u8 wnd_mode_sel;
+ };
+ 
+ #define V2_QUERY_PF_CAPS_D_CEQ_DEPTH_S 0
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index 79b3c3023fe7a..b8454dcb03183 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -776,7 +776,7 @@ int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ 	struct ib_device *ibdev = &hr_dev->ib_dev;
+ 	struct hns_roce_buf_region *r;
+ 	unsigned int i, mapped_cnt;
+-	int ret;
++	int ret = 0;
+ 
+ 	/*
+ 	 * Only use the first page address as root ba when hopnum is 0, this
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index 92ddbcc00eb2a..2ae22bf50016a 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -4251,13 +4251,8 @@ int mlx4_ib_modify_wq(struct ib_wq *ibwq, struct ib_wq_attr *wq_attr,
+ 	if (wq_attr_mask & IB_WQ_FLAGS)
+ 		return -EOPNOTSUPP;
+ 
+-	cur_state = wq_attr_mask & IB_WQ_CUR_STATE ? wq_attr->curr_wq_state :
+-						     ibwq->state;
+-	new_state = wq_attr_mask & IB_WQ_STATE ? wq_attr->wq_state : cur_state;
+-
+-	if (cur_state  < IB_WQS_RESET || cur_state  > IB_WQS_ERR ||
+-	    new_state < IB_WQS_RESET || new_state > IB_WQS_ERR)
+-		return -EINVAL;
++	cur_state = wq_attr->curr_wq_state;
++	new_state = wq_attr->wq_state;
+ 
+ 	if ((new_state == IB_WQS_RDY) && (cur_state == IB_WQS_ERR))
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 644d5d0ac5442..cca7296b12d01 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -3178,8 +3178,6 @@ static void mlx5_ib_unbind_slave_port(struct mlx5_ib_dev *ibdev,
+ 
+ 	port->mp.mpi = NULL;
+ 
+-	list_add_tail(&mpi->list, &mlx5_ib_unaffiliated_port_list);
+-
+ 	spin_unlock(&port->mp.mpi_lock);
+ 
+ 	err = mlx5_nic_vport_unaffiliate_multiport(mpi->mdev);
+@@ -3327,7 +3325,10 @@ static void mlx5_ib_cleanup_multiport_master(struct mlx5_ib_dev *dev)
+ 			} else {
+ 				mlx5_ib_dbg(dev, "unbinding port_num: %u\n",
+ 					    i + 1);
+-				mlx5_ib_unbind_slave_port(dev, dev->port[i].mp.mpi);
++				list_add_tail(&dev->port[i].mp.mpi->list,
++					      &mlx5_ib_unaffiliated_port_list);
++				mlx5_ib_unbind_slave_port(dev,
++							  dev->port[i].mp.mpi);
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 9282eb10bfaed..5851486c0d930 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -5309,10 +5309,8 @@ int mlx5_ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr,
+ 
+ 	rqc = MLX5_ADDR_OF(modify_rq_in, in, ctx);
+ 
+-	curr_wq_state = (wq_attr_mask & IB_WQ_CUR_STATE) ?
+-		wq_attr->curr_wq_state : wq->state;
+-	wq_state = (wq_attr_mask & IB_WQ_STATE) ?
+-		wq_attr->wq_state : curr_wq_state;
++	curr_wq_state = wq_attr->curr_wq_state;
++	wq_state = wq_attr->wq_state;
+ 	if (curr_wq_state == IB_WQS_ERR)
+ 		curr_wq_state = MLX5_RQC_STATE_ERR;
+ 	if (wq_state == IB_WQS_ERR)
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index 01662727dca08..fc1ba49042792 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -207,10 +207,8 @@ static struct socket *rxe_setup_udp_tunnel(struct net *net, __be16 port,
+ 
+ 	/* Create UDP socket */
+ 	err = udp_sock_create(net, &udp_cfg, &sock);
+-	if (err < 0) {
+-		pr_err("failed to create udp socket. err = %d\n", err);
++	if (err < 0)
+ 		return ERR_PTR(err);
+-	}
+ 
+ 	tnl_cfg.encap_type = 1;
+ 	tnl_cfg.encap_rcv = rxe_udp_encap_recv;
+@@ -619,6 +617,12 @@ static int rxe_net_ipv6_init(void)
+ 
+ 	recv_sockets.sk6 = rxe_setup_udp_tunnel(&init_net,
+ 						htons(ROCE_V2_UDP_DPORT), true);
++	if (PTR_ERR(recv_sockets.sk6) == -EAFNOSUPPORT) {
++		recv_sockets.sk6 = NULL;
++		pr_warn("IPv6 is not supported, can not create a UDPv6 socket\n");
++		return 0;
++	}
++
+ 	if (IS_ERR(recv_sockets.sk6)) {
+ 		recv_sockets.sk6 = NULL;
+ 		pr_err("Failed to create IPv6 UDP tunnel\n");
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index b0f350d674fdb..93a41ebda1a85 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -136,7 +136,6 @@ static void free_rd_atomic_resources(struct rxe_qp *qp)
+ void free_rd_atomic_resource(struct rxe_qp *qp, struct resp_res *res)
+ {
+ 	if (res->type == RXE_ATOMIC_MASK) {
+-		rxe_drop_ref(qp);
+ 		kfree_skb(res->atomic.skb);
+ 	} else if (res->type == RXE_READ_MASK) {
+ 		if (res->read.mr)
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index 2b220659bddbf..39dc39be586ec 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -966,8 +966,6 @@ static int send_atomic_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
+ 		goto out;
+ 	}
+ 
+-	rxe_add_ref(qp);
+-
+ 	res = &qp->resp.resources[qp->resp.res_head];
+ 	free_rd_atomic_resource(qp, res);
+ 	rxe_advance_resp_resource(qp);
+diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c
+index 8fcaa1136f2cd..776e46ee95dad 100644
+--- a/drivers/infiniband/ulp/iser/iscsi_iser.c
++++ b/drivers/infiniband/ulp/iser/iscsi_iser.c
+@@ -506,6 +506,7 @@ iscsi_iser_conn_bind(struct iscsi_cls_session *cls_session,
+ 	iser_conn->iscsi_conn = conn;
+ 
+ out:
++	iscsi_put_endpoint(ep);
+ 	mutex_unlock(&iser_conn->state_mutex);
+ 	return error;
+ }
+@@ -1002,6 +1003,7 @@ static struct iscsi_transport iscsi_iser_transport = {
+ 	/* connection management */
+ 	.create_conn            = iscsi_iser_conn_create,
+ 	.bind_conn              = iscsi_iser_conn_bind,
++	.unbind_conn		= iscsi_conn_unbind,
+ 	.destroy_conn           = iscsi_conn_teardown,
+ 	.attr_is_visible	= iser_attr_is_visible,
+ 	.set_param              = iscsi_iser_set_param,
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 0a794d748a7a6..ed7cf25a65c27 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -814,6 +814,9 @@ static struct rtrs_clt_sess *get_next_path_min_inflight(struct path_it *it)
+ 	int inflight;
+ 
+ 	list_for_each_entry_rcu(sess, &clt->paths_list, s.entry) {
++		if (unlikely(READ_ONCE(sess->state) != RTRS_CLT_CONNECTED))
++			continue;
++
+ 		if (unlikely(!list_empty(raw_cpu_ptr(sess->mp_skip_entry))))
+ 			continue;
+ 
+@@ -1788,7 +1791,19 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con,
+ 				  queue_depth);
+ 			return -ECONNRESET;
+ 		}
+-		if (!sess->rbufs || sess->queue_depth < queue_depth) {
++		if (sess->queue_depth > 0 && queue_depth != sess->queue_depth) {
++			rtrs_err(clt, "Error: queue depth changed\n");
++
++			/*
++			 * Stop any more reconnection attempts
++			 */
++			sess->reconnect_attempts = -1;
++			rtrs_err(clt,
++				"Disabling auto-reconnect. Trigger a manual reconnect after issue is resolved\n");
++			return -ECONNRESET;
++		}
++
++		if (!sess->rbufs) {
+ 			kfree(sess->rbufs);
+ 			sess->rbufs = kcalloc(queue_depth, sizeof(*sess->rbufs),
+ 					      GFP_KERNEL);
+@@ -1802,7 +1817,7 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con,
+ 		sess->chunk_size = sess->max_io_size + sess->max_hdr_size;
+ 
+ 		/*
+-		 * Global queue depth and IO size is always a minimum.
++		 * Global IO size is always a minimum.
+ 		 * If while a reconnection server sends us a value a bit
+ 		 * higher - client does not care and uses cached minimum.
+ 		 *
+@@ -1810,8 +1825,7 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con,
+ 		 * connections in parallel, use lock.
+ 		 */
+ 		mutex_lock(&clt->paths_mutex);
+-		clt->queue_depth = min_not_zero(sess->queue_depth,
+-						clt->queue_depth);
++		clt->queue_depth = sess->queue_depth;
+ 		clt->max_io_size = min_not_zero(sess->max_io_size,
+ 						clt->max_io_size);
+ 		mutex_unlock(&clt->paths_mutex);
+@@ -2762,6 +2776,8 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
+ 		if (err) {
+ 			list_del_rcu(&sess->s.entry);
+ 			rtrs_clt_close_conns(sess, true);
++			free_percpu(sess->stats->pcpu_stats);
++			kfree(sess->stats);
+ 			free_sess(sess);
+ 			goto close_all_sess;
+ 		}
+@@ -2770,6 +2786,8 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
+ 		if (err) {
+ 			list_del_rcu(&sess->s.entry);
+ 			rtrs_clt_close_conns(sess, true);
++			free_percpu(sess->stats->pcpu_stats);
++			kfree(sess->stats);
+ 			free_sess(sess);
+ 			goto close_all_sess;
+ 		}
+@@ -3052,6 +3070,8 @@ int rtrs_clt_create_path_from_sysfs(struct rtrs_clt *clt,
+ close_sess:
+ 	rtrs_clt_remove_path_from_arr(sess);
+ 	rtrs_clt_close_conns(sess, true);
++	free_percpu(sess->stats->pcpu_stats);
++	kfree(sess->stats);
+ 	free_sess(sess);
+ 
+ 	return err;
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c b/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
+index a9288175fbb54..20efd44297fbb 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c
+@@ -208,6 +208,7 @@ rtrs_srv_destroy_once_sysfs_root_folders(struct rtrs_srv_sess *sess)
+ 		device_del(&srv->dev);
+ 		put_device(&srv->dev);
+ 	} else {
++		put_device(&srv->dev);
+ 		mutex_unlock(&srv->paths_mutex);
+ 	}
+ }
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index 0fa116cabc445..8a9099684b8e3 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -1481,6 +1481,7 @@ static void free_sess(struct rtrs_srv_sess *sess)
+ 		kobject_del(&sess->kobj);
+ 		kobject_put(&sess->kobj);
+ 	} else {
++		kfree(sess->stats);
+ 		kfree(sess);
+ 	}
+ }
+@@ -1604,7 +1605,7 @@ static int create_con(struct rtrs_srv_sess *sess,
+ 	struct rtrs_sess *s = &sess->s;
+ 	struct rtrs_srv_con *con;
+ 
+-	u32 cq_size, wr_queue_size;
++	u32 cq_size, max_send_wr, max_recv_wr, wr_limit;
+ 	int err, cq_vector;
+ 
+ 	con = kzalloc(sizeof(*con), GFP_KERNEL);
+@@ -1625,30 +1626,42 @@ static int create_con(struct rtrs_srv_sess *sess,
+ 		 * All receive and all send (each requiring invalidate)
+ 		 * + 2 for drain and heartbeat
+ 		 */
+-		wr_queue_size = SERVICE_CON_QUEUE_DEPTH * 3 + 2;
+-		cq_size = wr_queue_size;
++		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
++		max_recv_wr = SERVICE_CON_QUEUE_DEPTH + 2;
++		cq_size = max_send_wr + max_recv_wr;
+ 	} else {
+-		/*
+-		 * If we have all receive requests posted and
+-		 * all write requests posted and each read request
+-		 * requires an invalidate request + drain
+-		 * and qp gets into error state.
+-		 */
+-		cq_size = srv->queue_depth * 3 + 1;
+ 		/*
+ 		 * In theory we might have queue_depth * 32
+ 		 * outstanding requests if an unsafe global key is used
+ 		 * and we have queue_depth read requests each consisting
+ 		 * of 32 different addresses. div 3 for mlx5.
+ 		 */
+-		wr_queue_size = sess->s.dev->ib_dev->attrs.max_qp_wr / 3;
++		wr_limit = sess->s.dev->ib_dev->attrs.max_qp_wr / 3;
++		/* when always_invalidate is enabled, we need linv+rinv+mr+imm */
++		if (always_invalidate)
++			max_send_wr =
++				min_t(int, wr_limit,
++				      srv->queue_depth * (1 + 4) + 1);
++		else
++			max_send_wr =
++				min_t(int, wr_limit,
++				      srv->queue_depth * (1 + 2) + 1);
++
++		max_recv_wr = srv->queue_depth + 1;
++		/*
++		 * If we have all receive requests posted and
++		 * all write requests posted and each read request
++		 * requires an invalidate request + drain
++		 * and qp gets into error state.
++		 */
++		cq_size = max_send_wr + max_recv_wr;
+ 	}
+-	atomic_set(&con->sq_wr_avail, wr_queue_size);
++	atomic_set(&con->sq_wr_avail, max_send_wr);
+ 	cq_vector = rtrs_srv_get_next_cq_vector(sess);
+ 
+ 	/* TODO: SOFTIRQ can be faster, but be careful with softirq context */
+ 	err = rtrs_cq_qp_create(&sess->s, &con->c, 1, cq_vector, cq_size,
+-				 wr_queue_size, wr_queue_size,
++				 max_send_wr, max_recv_wr,
+ 				 IB_POLL_WORKQUEUE);
+ 	if (err) {
+ 		rtrs_err(s, "rtrs_cq_qp_create(), err: %d\n", err);
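
The create_con() hunk above replaces the single wr_queue_size with separate send and receive budgets, capping the send side by the device's max_qp_wr. A sketch of that sizing arithmetic, under the same assumptions the in-tree comments state:

#include <linux/minmax.h>
#include <linux/types.h>

static void example_size_queues(int max_qp_wr, int queue_depth,
				bool always_invalidate,
				u32 *max_send_wr, u32 *max_recv_wr)
{
	int wr_limit = max_qp_wr / 3;	/* div 3 for mlx5, per the comment */

	if (always_invalidate)		/* linv+rinv+mr+imm per request */
		*max_send_wr = min_t(int, wr_limit, queue_depth * (1 + 4) + 1);
	else
		*max_send_wr = min_t(int, wr_limit, queue_depth * (1 + 2) + 1);
	*max_recv_wr = queue_depth + 1;	/* +1 for drain */
}
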
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
+index a7847282a2ebf..4e602e40f623b 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs.c
+@@ -376,7 +376,6 @@ void rtrs_stop_hb(struct rtrs_sess *sess)
+ {
+ 	cancel_delayed_work_sync(&sess->hb_dwork);
+ 	sess->hb_missed_cnt = 0;
+-	sess->hb_missed_max = 0;
+ }
+ EXPORT_SYMBOL_GPL(rtrs_stop_hb);
+ 
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 31f8aa2c40ed8..168705c88e2fa 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -998,7 +998,6 @@ static int srp_alloc_req_data(struct srp_rdma_ch *ch)
+ 	struct srp_device *srp_dev = target->srp_host->srp_dev;
+ 	struct ib_device *ibdev = srp_dev->dev;
+ 	struct srp_request *req;
+-	void *mr_list;
+ 	dma_addr_t dma_addr;
+ 	int i, ret = -ENOMEM;
+ 
+@@ -1009,12 +1008,12 @@ static int srp_alloc_req_data(struct srp_rdma_ch *ch)
+ 
+ 	for (i = 0; i < target->req_ring_size; ++i) {
+ 		req = &ch->req_ring[i];
+-		mr_list = kmalloc_array(target->mr_per_cmd, sizeof(void *),
+-					GFP_KERNEL);
+-		if (!mr_list)
+-			goto out;
+-		if (srp_dev->use_fast_reg)
+-			req->fr_list = mr_list;
++		if (srp_dev->use_fast_reg) {
++			req->fr_list = kmalloc_array(target->mr_per_cmd,
++						sizeof(void *), GFP_KERNEL);
++			if (!req->fr_list)
++				goto out;
++		}
+ 		req->indirect_desc = kmalloc(target->indirect_size, GFP_KERNEL);
+ 		if (!req->indirect_desc)
+ 			goto out;
+diff --git a/drivers/input/joydev.c b/drivers/input/joydev.c
+index da8963a9f044c..947d440a3be63 100644
+--- a/drivers/input/joydev.c
++++ b/drivers/input/joydev.c
+@@ -499,7 +499,7 @@ static int joydev_handle_JSIOCSBTNMAP(struct joydev *joydev,
+ 	memcpy(joydev->keypam, keypam, len);
+ 
+ 	for (i = 0; i < joydev->nkey; i++)
+-		joydev->keymap[keypam[i] - BTN_MISC] = i;
++		joydev->keymap[joydev->keypam[i] - BTN_MISC] = i;
+ 
+  out:
+ 	kfree(keypam);
+diff --git a/drivers/input/keyboard/Kconfig b/drivers/input/keyboard/Kconfig
+index 32d15809ae586..40a070a2e7f5b 100644
+--- a/drivers/input/keyboard/Kconfig
++++ b/drivers/input/keyboard/Kconfig
+@@ -67,9 +67,6 @@ config KEYBOARD_AMIGA
+ 	  To compile this driver as a module, choose M here: the
+ 	  module will be called amikbd.
+ 
+-config ATARI_KBD_CORE
+-	bool
+-
+ config KEYBOARD_APPLESPI
+ 	tristate "Apple SPI keyboard and trackpad"
+ 	depends on ACPI && EFI
+diff --git a/drivers/input/keyboard/hil_kbd.c b/drivers/input/keyboard/hil_kbd.c
+index bb29a7c9a1c0c..54afb38601b9f 100644
+--- a/drivers/input/keyboard/hil_kbd.c
++++ b/drivers/input/keyboard/hil_kbd.c
+@@ -512,6 +512,7 @@ static int hil_dev_connect(struct serio *serio, struct serio_driver *drv)
+ 		    HIL_IDD_NUM_AXES_PER_SET(*idd)) {
+ 			printk(KERN_INFO PREFIX
+ 				"combo devices are not supported.\n");
++			error = -EINVAL;
+ 			goto bail1;
+ 		}
+ 
+diff --git a/drivers/input/touchscreen/elants_i2c.c b/drivers/input/touchscreen/elants_i2c.c
+index 17540bdb1eaf7..0f9e3ec99aae1 100644
+--- a/drivers/input/touchscreen/elants_i2c.c
++++ b/drivers/input/touchscreen/elants_i2c.c
+@@ -1396,7 +1396,7 @@ static int elants_i2c_probe(struct i2c_client *client,
+ 	init_completion(&ts->cmd_done);
+ 
+ 	ts->client = client;
+-	ts->chip_id = (enum elants_chip_id)id->driver_data;
++	ts->chip_id = (enum elants_chip_id)(uintptr_t)device_get_match_data(&client->dev);
+ 	i2c_set_clientdata(client, ts);
+ 
+ 	ts->vcc33 = devm_regulator_get(&client->dev, "vcc33");
+@@ -1636,8 +1636,8 @@ MODULE_DEVICE_TABLE(acpi, elants_acpi_id);
+ 
+ #ifdef CONFIG_OF
+ static const struct of_device_id elants_of_match[] = {
+-	{ .compatible = "elan,ekth3500" },
+-	{ .compatible = "elan,ektf3624" },
++	{ .compatible = "elan,ekth3500", .data = (void *)EKTH3500 },
++	{ .compatible = "elan,ektf3624", .data = (void *)EKTF3624 },
+ 	{ /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, elants_of_match);
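
The elants_i2c hunks above carry the chip ID in the .data cookie of the OF match table and read it back with device_get_match_data(), which also covers ACPI matches, instead of relying on the legacy i2c_device_id table. A hedged sketch of the pattern with made-up compatibles:

#include <linux/i2c.h>
#include <linux/mod_devicetable.h>
#include <linux/property.h>

enum example_chip_id { EXAMPLE_CHIP_A, EXAMPLE_CHIP_B };

static const struct of_device_id example_of_match[] = {
	{ .compatible = "vendor,chip-a", .data = (void *)EXAMPLE_CHIP_A },
	{ .compatible = "vendor,chip-b", .data = (void *)EXAMPLE_CHIP_B },
	{ /* sentinel */ }
};

static int example_probe(struct i2c_client *client)
{
	enum example_chip_id id =
		(enum example_chip_id)(uintptr_t)device_get_match_data(&client->dev);

	/* ... program the controller according to id ... */
	return 0;
}
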
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index c682b028f0a29..4f53d3c57e698 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -178,51 +178,6 @@ static const unsigned long goodix_irq_flags[] = {
+ 	IRQ_TYPE_LEVEL_HIGH,
+ };
+ 
+-/*
+- * Those tablets have their coordinates origin at the bottom right
+- * of the tablet, as if rotated 180 degrees
+- */
+-static const struct dmi_system_id rotated_screen[] = {
+-#if defined(CONFIG_DMI) && defined(CONFIG_X86)
+-	{
+-		.ident = "Teclast X89",
+-		.matches = {
+-			/* tPAD is too generic, also match on bios date */
+-			DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
+-			DMI_MATCH(DMI_BOARD_NAME, "tPAD"),
+-			DMI_MATCH(DMI_BIOS_DATE, "12/19/2014"),
+-		},
+-	},
+-	{
+-		.ident = "Teclast X98 Pro",
+-		.matches = {
+-			/*
+-			 * Only match BIOS date, because the manufacturers
+-			 * BIOS does not report the board name at all
+-			 * (sometimes)...
+-			 */
+-			DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
+-			DMI_MATCH(DMI_BIOS_DATE, "10/28/2015"),
+-		},
+-	},
+-	{
+-		.ident = "WinBook TW100",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "WinBook"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TW100")
+-		}
+-	},
+-	{
+-		.ident = "WinBook TW700",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "WinBook"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TW700")
+-		},
+-	},
+-#endif
+-	{}
+-};
+-
+ static const struct dmi_system_id nine_bytes_report[] = {
+ #if defined(CONFIG_DMI) && defined(CONFIG_X86)
+ 	{
+@@ -1123,13 +1078,6 @@ static int goodix_configure_dev(struct goodix_ts_data *ts)
+ 				  ABS_MT_POSITION_Y, ts->prop.max_y);
+ 	}
+ 
+-	if (dmi_check_system(rotated_screen)) {
+-		ts->prop.invert_x = true;
+-		ts->prop.invert_y = true;
+-		dev_dbg(&ts->client->dev,
+-			"Applying '180 degrees rotated screen' quirk\n");
+-	}
+-
+ 	if (dmi_check_system(nine_bytes_report)) {
+ 		ts->contact_size = 9;
+ 
+diff --git a/drivers/input/touchscreen/usbtouchscreen.c b/drivers/input/touchscreen/usbtouchscreen.c
+index c847453a03c26..43c521f50c851 100644
+--- a/drivers/input/touchscreen/usbtouchscreen.c
++++ b/drivers/input/touchscreen/usbtouchscreen.c
+@@ -251,7 +251,7 @@ static int e2i_init(struct usbtouch_usb *usbtouch)
+ 	int ret;
+ 	struct usb_device *udev = interface_to_usbdev(usbtouch->interface);
+ 
+-	ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
++	ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 	                      0x01, 0x02, 0x0000, 0x0081,
+ 	                      NULL, 0, USB_CTRL_SET_TIMEOUT);
+ 
+@@ -531,7 +531,7 @@ static int mtouch_init(struct usbtouch_usb *usbtouch)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
++	ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 	                      MTOUCHUSB_RESET,
+ 	                      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 	                      1, 0, NULL, 0, USB_CTRL_SET_TIMEOUT);
+@@ -543,7 +543,7 @@ static int mtouch_init(struct usbtouch_usb *usbtouch)
+ 	msleep(150);
+ 
+ 	for (i = 0; i < 3; i++) {
+-		ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
++		ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 				      MTOUCHUSB_ASYNC_REPORT,
+ 				      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				      1, 1, NULL, 0, USB_CTRL_SET_TIMEOUT);
+@@ -722,7 +722,7 @@ static int dmc_tsc10_init(struct usbtouch_usb *usbtouch)
+ 	}
+ 
+ 	/* start sending data */
+-	ret = usb_control_msg(dev, usb_rcvctrlpipe (dev, 0),
++	ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
+ 	                      TSC10_CMD_DATA1,
+ 	                      USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 	                      0, 0, NULL, 0, USB_CTRL_SET_TIMEOUT);
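
All four usbtouchscreen hunks above pair USB_DIR_OUT control requests with usb_sndctrlpipe(); using a receive pipe for an OUT transfer is the direction mismatch being fixed. An illustrative sketch (the request number is a placeholder):

#include <linux/usb.h>

static int example_vendor_out(struct usb_device *udev)
{
	/* An OUT (host-to-device) request must use a send pipe; pairing
	 * USB_DIR_OUT with usb_rcvctrlpipe() is the bug being fixed. */
	return usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
			       0x01,	/* bRequest, placeholder */
			       USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
			       1, 0, NULL, 0, USB_CTRL_SET_TIMEOUT);
}
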
+diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
+index 55dd38d814d92..416815a525d67 100644
+--- a/drivers/iommu/amd/amd_iommu.h
++++ b/drivers/iommu/amd/amd_iommu.h
+@@ -11,8 +11,6 @@
+ 
+ #include "amd_iommu_types.h"
+ 
+-extern int amd_iommu_init_dma_ops(void);
+-extern int amd_iommu_init_passthrough(void);
+ extern irqreturn_t amd_iommu_int_thread(int irq, void *data);
+ extern irqreturn_t amd_iommu_int_handler(int irq, void *data);
+ extern void amd_iommu_apply_erratum_63(u16 devid);
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index d006724f4dc21..5ff7e5364ef44 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -231,7 +231,6 @@ enum iommu_init_state {
+ 	IOMMU_ENABLED,
+ 	IOMMU_PCI_INIT,
+ 	IOMMU_INTERRUPTS_EN,
+-	IOMMU_DMA_OPS,
+ 	IOMMU_INITIALIZED,
+ 	IOMMU_NOT_FOUND,
+ 	IOMMU_INIT_ERROR,
+@@ -1908,8 +1907,8 @@ static void print_iommu_info(void)
+ 		pci_info(pdev, "Found IOMMU cap 0x%x\n", iommu->cap_ptr);
+ 
+ 		if (iommu->cap & (1 << IOMMU_CAP_EFR)) {
+-			pci_info(pdev, "Extended features (%#llx):",
+-				 iommu->features);
++			pr_info("Extended features (%#llx):", iommu->features);
++
+ 			for (i = 0; i < ARRAY_SIZE(feat_str); ++i) {
+ 				if (iommu_feature(iommu, (1ULL << i)))
+ 					pr_cont(" %s", feat_str[i]);
+@@ -2895,10 +2894,6 @@ static int __init state_next(void)
+ 		init_state = ret ? IOMMU_INIT_ERROR : IOMMU_INTERRUPTS_EN;
+ 		break;
+ 	case IOMMU_INTERRUPTS_EN:
+-		ret = amd_iommu_init_dma_ops();
+-		init_state = ret ? IOMMU_INIT_ERROR : IOMMU_DMA_OPS;
+-		break;
+-	case IOMMU_DMA_OPS:
+ 		init_state = IOMMU_INITIALIZED;
+ 		break;
+ 	case IOMMU_INITIALIZED:
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 3ac42bbdefc63..c46dde88a132b 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -30,7 +30,6 @@
+ #include <linux/msi.h>
+ #include <linux/irqdomain.h>
+ #include <linux/percpu.h>
+-#include <linux/iova.h>
+ #include <linux/io-pgtable.h>
+ #include <asm/irq_remapping.h>
+ #include <asm/io_apic.h>
+@@ -1773,13 +1772,22 @@ void amd_iommu_domain_update(struct protection_domain *domain)
+ 	amd_iommu_domain_flush_complete(domain);
+ }
+ 
++static void __init amd_iommu_init_dma_ops(void)
++{
++	swiotlb = (iommu_default_passthrough() || sme_me_mask) ? 1 : 0;
++
++	if (amd_iommu_unmap_flush)
++		pr_info("IO/TLB flush on unmap enabled\n");
++	else
++		pr_info("Lazy IO/TLB flushing enabled\n");
++	iommu_set_dma_strict(amd_iommu_unmap_flush);
++}
++
+ int __init amd_iommu_init_api(void)
+ {
+-	int ret, err = 0;
++	int err = 0;
+ 
+-	ret = iova_cache_get();
+-	if (ret)
+-		return ret;
++	amd_iommu_init_dma_ops();
+ 
+ 	err = bus_set_iommu(&pci_bus_type, &amd_iommu_ops);
+ 	if (err)
+@@ -1796,19 +1804,6 @@ int __init amd_iommu_init_api(void)
+ 	return 0;
+ }
+ 
+-int __init amd_iommu_init_dma_ops(void)
+-{
+-	swiotlb        = (iommu_default_passthrough() || sme_me_mask) ? 1 : 0;
+-
+-	if (amd_iommu_unmap_flush)
+-		pr_info("IO/TLB flush on unmap enabled\n");
+-	else
+-		pr_info("Lazy IO/TLB flushing enabled\n");
+-	iommu_set_dma_strict(amd_iommu_unmap_flush);
+-	return 0;
+-
+-}
+-
+ /*****************************************************************************
+  *
+  * The following functions belong to the exported interface of AMD IOMMU
+diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
+index 7bcdd12055358..5d96fcc45feca 100644
+--- a/drivers/iommu/dma-iommu.c
++++ b/drivers/iommu/dma-iommu.c
+@@ -243,9 +243,11 @@ resv_iova:
+ 			lo = iova_pfn(iovad, start);
+ 			hi = iova_pfn(iovad, end);
+ 			reserve_iova(iovad, lo, hi);
+-		} else {
++		} else if (end < start) {
+ 			/* dma_ranges list should be sorted */
+-			dev_err(&dev->dev, "Failed to reserve IOVA\n");
++			dev_err(&dev->dev,
++				"Failed to reserve IOVA [%pa-%pa]\n",
++				&start, &end);
+ 			return -EINVAL;
+ 		}
+ 
+diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig
+index 49d99cb084dbd..c81b1e60953c1 100644
+--- a/drivers/leds/Kconfig
++++ b/drivers/leds/Kconfig
+@@ -199,6 +199,7 @@ config LEDS_LM3530
+ 
+ config LEDS_LM3532
+ 	tristate "LCD Backlight driver for LM3532"
++	select REGMAP_I2C
+ 	depends on LEDS_CLASS
+ 	depends on I2C
+ 	help
+diff --git a/drivers/leds/blink/leds-lgm-sso.c b/drivers/leds/blink/leds-lgm-sso.c
+index 6a63846d10b5e..7d5f0bf2817ad 100644
+--- a/drivers/leds/blink/leds-lgm-sso.c
++++ b/drivers/leds/blink/leds-lgm-sso.c
+@@ -132,8 +132,7 @@ struct sso_led_priv {
+ 	struct regmap *mmap;
+ 	struct device *dev;
+ 	struct platform_device *pdev;
+-	struct clk *gclk;
+-	struct clk *fpid_clk;
++	struct clk_bulk_data clocks[2];
+ 	u32 fpid_clkrate;
+ 	u32 gptc_clkrate;
+ 	u32 freq[MAX_FREQ_RANK];
+@@ -763,12 +762,11 @@ static int sso_probe_gpios(struct sso_led_priv *priv)
+ 	return sso_gpio_gc_init(dev, priv);
+ }
+ 
+-static void sso_clk_disable(void *data)
++static void sso_clock_disable_unprepare(void *data)
+ {
+ 	struct sso_led_priv *priv = data;
+ 
+-	clk_disable_unprepare(priv->fpid_clk);
+-	clk_disable_unprepare(priv->gclk);
++	clk_bulk_disable_unprepare(ARRAY_SIZE(priv->clocks), priv->clocks);
+ }
+ 
+ static int intel_sso_led_probe(struct platform_device *pdev)
+@@ -785,36 +783,30 @@ static int intel_sso_led_probe(struct platform_device *pdev)
+ 	priv->dev = dev;
+ 
+ 	/* gate clock */
+-	priv->gclk = devm_clk_get(dev, "sso");
+-	if (IS_ERR(priv->gclk)) {
+-		dev_err(dev, "get sso gate clock failed!\n");
+-		return PTR_ERR(priv->gclk);
+-	}
++	priv->clocks[0].id = "sso";
++
++	/* fpid clock */
++	priv->clocks[1].id = "fpid";
+ 
+-	ret = clk_prepare_enable(priv->gclk);
++	ret = devm_clk_bulk_get(dev, ARRAY_SIZE(priv->clocks), priv->clocks);
+ 	if (ret) {
+-		dev_err(dev, "Failed to prepare/enable sso gate clock!\n");
++		dev_err(dev, "Getting clocks failed!\n");
+ 		return ret;
+ 	}
+ 
+-	priv->fpid_clk = devm_clk_get(dev, "fpid");
+-	if (IS_ERR(priv->fpid_clk)) {
+-		dev_err(dev, "Failed to get fpid clock!\n");
+-		return PTR_ERR(priv->fpid_clk);
+-	}
+-
+-	ret = clk_prepare_enable(priv->fpid_clk);
++	ret = clk_bulk_prepare_enable(ARRAY_SIZE(priv->clocks), priv->clocks);
+ 	if (ret) {
+-		dev_err(dev, "Failed to prepare/enable fpid clock!\n");
++		dev_err(dev, "Failed to prepare and enable clocks!\n");
+ 		return ret;
+ 	}
+-	priv->fpid_clkrate = clk_get_rate(priv->fpid_clk);
+ 
+-	ret = devm_add_action_or_reset(dev, sso_clk_disable, priv);
+-	if (ret) {
+-		dev_err(dev, "Failed to devm_add_action_or_reset, %d\n", ret);
++	ret = devm_add_action_or_reset(dev, sso_clock_disable_unprepare, priv);
++	if (ret)
+ 		return ret;
+-	}
++
++	priv->fpid_clkrate = clk_get_rate(priv->clocks[1].clk);
+ 
+ 	priv->mmap = syscon_node_to_regmap(dev->of_node);
+ 	if (IS_ERR(priv->mmap)) {
+@@ -859,8 +851,6 @@ static int intel_sso_led_remove(struct platform_device *pdev)
+ 		sso_led_shutdown(led);
+ 	}
+ 
+-	clk_disable_unprepare(priv->fpid_clk);
+-	clk_disable_unprepare(priv->gclk);
+ 	regmap_exit(priv->mmap);
+ 
+ 	return 0;
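
The leds-lgm-sso hunks above convert two individually managed clocks to the clk_bulk API with one devm-managed cleanup action covering every exit path. A minimal sketch of the conversion:

#include <linux/clk.h>
#include <linux/device.h>

static void example_clk_disable(void *data)
{
	struct clk_bulk_data *clocks = data;

	clk_bulk_disable_unprepare(2, clocks);
}

static int example_get_clocks(struct device *dev, struct clk_bulk_data *clocks)
{
	int ret;

	clocks[0].id = "sso";
	clocks[1].id = "fpid";

	ret = devm_clk_bulk_get(dev, 2, clocks);
	if (ret)
		return ret;

	ret = clk_bulk_prepare_enable(2, clocks);
	if (ret)
		return ret;

	/* tie the disable to device teardown so every error path is covered */
	return devm_add_action_or_reset(dev, example_clk_disable, clocks);
}
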
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index 2e495ff678562..fa3f5f504ff7d 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -285,10 +285,6 @@ struct led_classdev *__must_check devm_of_led_get(struct device *dev,
+ 	if (!dev)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	/* Not using device tree? */
+-	if (!IS_ENABLED(CONFIG_OF) || !dev->of_node)
+-		return ERR_PTR(-ENOTSUPP);
+-
+ 	led = of_led_get(dev->of_node, index);
+ 	if (IS_ERR(led))
+ 		return led;
+diff --git a/drivers/leds/leds-as3645a.c b/drivers/leds/leds-as3645a.c
+index e8922fa033796..80411d41e802d 100644
+--- a/drivers/leds/leds-as3645a.c
++++ b/drivers/leds/leds-as3645a.c
+@@ -545,6 +545,7 @@ static int as3645a_parse_node(struct as3645a *flash,
+ 	if (!flash->indicator_node) {
+ 		dev_warn(&flash->client->dev,
+ 			 "can't find indicator node\n");
++		rval = -ENODEV;
+ 		goto out_err;
+ 	}
+ 
+diff --git a/drivers/leds/leds-ktd2692.c b/drivers/leds/leds-ktd2692.c
+index 632f10db4b3ff..f341da1503a49 100644
+--- a/drivers/leds/leds-ktd2692.c
++++ b/drivers/leds/leds-ktd2692.c
+@@ -256,6 +256,17 @@ static void ktd2692_setup(struct ktd2692_context *led)
+ 				 | KTD2692_REG_FLASH_CURRENT_BASE);
+ }
+ 
++static void regulator_disable_action(void *_data)
++{
++	struct device *dev = _data;
++	struct ktd2692_context *led = dev_get_drvdata(dev);
++	int ret;
++
++	ret = regulator_disable(led->regulator);
++	if (ret)
++		dev_err(dev, "Failed to disable supply: %d\n", ret);
++}
++
+ static int ktd2692_parse_dt(struct ktd2692_context *led, struct device *dev,
+ 			    struct ktd2692_led_config_data *cfg)
+ {
+@@ -286,8 +297,14 @@ static int ktd2692_parse_dt(struct ktd2692_context *led, struct device *dev,
+ 
+ 	if (led->regulator) {
+ 		ret = regulator_enable(led->regulator);
+-		if (ret)
++		if (ret) {
+ 			dev_err(dev, "Failed to enable supply: %d\n", ret);
++		} else {
++			ret = devm_add_action_or_reset(dev,
++						regulator_disable_action, dev);
++			if (ret)
++				return ret;
++		}
+ 	}
+ 
+ 	child_node = of_get_next_available_child(np, NULL);
+@@ -377,17 +394,9 @@ static int ktd2692_probe(struct platform_device *pdev)
+ static int ktd2692_remove(struct platform_device *pdev)
+ {
+ 	struct ktd2692_context *led = platform_get_drvdata(pdev);
+-	int ret;
+ 
+ 	led_classdev_flash_unregister(&led->fled_cdev);
+ 
+-	if (led->regulator) {
+-		ret = regulator_disable(led->regulator);
+-		if (ret)
+-			dev_err(&pdev->dev,
+-				"Failed to disable supply: %d\n", ret);
+-	}
+-
+ 	mutex_destroy(&led->lock);
+ 
+ 	return 0;
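
The ktd2692 hunks above apply the same devm idea to the regulator: a disable action is registered right after regulator_enable() succeeds, so the explicit cleanup in .remove can be dropped. A sketch with placeholder names:

#include <linux/device.h>
#include <linux/printk.h>
#include <linux/regulator/consumer.h>

static void example_regulator_disable(void *data)
{
	struct regulator *reg = data;

	if (regulator_disable(reg))
		pr_err("failed to disable supply\n");
}

static int example_enable_supply(struct device *dev, struct regulator *reg)
{
	int ret = regulator_enable(reg);

	if (ret)
		return ret;
	/* runs on probe failure and on unbind, replacing manual cleanup */
	return devm_add_action_or_reset(dev, example_regulator_disable, reg);
}
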
+diff --git a/drivers/leds/leds-lm36274.c b/drivers/leds/leds-lm36274.c
+index aadb03468a40a..a23a9424c2f38 100644
+--- a/drivers/leds/leds-lm36274.c
++++ b/drivers/leds/leds-lm36274.c
+@@ -127,6 +127,7 @@ static int lm36274_probe(struct platform_device *pdev)
+ 
+ 	ret = lm36274_init(chip);
+ 	if (ret) {
++		fwnode_handle_put(init_data.fwnode);
+ 		dev_err(chip->dev, "Failed to init the device\n");
+ 		return ret;
+ 	}
+diff --git a/drivers/leds/leds-lm3692x.c b/drivers/leds/leds-lm3692x.c
+index e945de45388ca..55e6443997ec9 100644
+--- a/drivers/leds/leds-lm3692x.c
++++ b/drivers/leds/leds-lm3692x.c
+@@ -435,6 +435,7 @@ static int lm3692x_probe_dt(struct lm3692x_led *led)
+ 
+ 	ret = fwnode_property_read_u32(child, "reg", &led->led_enable);
+ 	if (ret) {
++		fwnode_handle_put(child);
+ 		dev_err(&led->client->dev, "reg DT property missing\n");
+ 		return ret;
+ 	}
+@@ -449,12 +450,11 @@ static int lm3692x_probe_dt(struct lm3692x_led *led)
+ 
+ 	ret = devm_led_classdev_register_ext(&led->client->dev, &led->led_dev,
+ 					     &init_data);
+-	if (ret) {
++	if (ret)
+ 		dev_err(&led->client->dev, "led register err: %d\n", ret);
+-		return ret;
+-	}
+ 
+-	return 0;
++	fwnode_handle_put(init_data.fwnode);
++	return ret;
+ }
+ 
+ static int lm3692x_probe(struct i2c_client *client,
+diff --git a/drivers/leds/leds-lm3697.c b/drivers/leds/leds-lm3697.c
+index 7d216cdb91a8a..912e8bb22a995 100644
+--- a/drivers/leds/leds-lm3697.c
++++ b/drivers/leds/leds-lm3697.c
+@@ -203,11 +203,9 @@ static int lm3697_probe_dt(struct lm3697 *priv)
+ 
+ 	priv->enable_gpio = devm_gpiod_get_optional(dev, "enable",
+ 						    GPIOD_OUT_LOW);
+-	if (IS_ERR(priv->enable_gpio)) {
+-		ret = PTR_ERR(priv->enable_gpio);
+-		dev_err(dev, "Failed to get enable gpio: %d\n", ret);
+-		return ret;
+-	}
++	if (IS_ERR(priv->enable_gpio))
++		return dev_err_probe(dev, PTR_ERR(priv->enable_gpio),
++					  "Failed to get enable GPIO\n");
+ 
+ 	priv->regulator = devm_regulator_get(dev, "vled");
+ 	if (IS_ERR(priv->regulator))
+diff --git a/drivers/leds/leds-lp50xx.c b/drivers/leds/leds-lp50xx.c
+index 06230614fdc56..401df1e2e05d0 100644
+--- a/drivers/leds/leds-lp50xx.c
++++ b/drivers/leds/leds-lp50xx.c
+@@ -490,6 +490,7 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
+ 			ret = fwnode_property_read_u32(led_node, "color",
+ 						       &color_id);
+ 			if (ret) {
++				fwnode_handle_put(led_node);
+ 				dev_err(priv->dev, "Cannot read color\n");
+ 				goto child_out;
+ 			}
+@@ -512,7 +513,6 @@ static int lp50xx_probe_dt(struct lp50xx *priv)
+ 			goto child_out;
+ 		}
+ 		i++;
+-		fwnode_handle_put(child);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/mailbox/qcom-apcs-ipc-mailbox.c b/drivers/mailbox/qcom-apcs-ipc-mailbox.c
+index f25324d03842e..15236d7296258 100644
+--- a/drivers/mailbox/qcom-apcs-ipc-mailbox.c
++++ b/drivers/mailbox/qcom-apcs-ipc-mailbox.c
+@@ -132,7 +132,7 @@ static int qcom_apcs_ipc_probe(struct platform_device *pdev)
+ 	if (apcs_data->clk_name) {
+ 		apcs->clk = platform_device_register_data(&pdev->dev,
+ 							  apcs_data->clk_name,
+-							  PLATFORM_DEVID_NONE,
++							  PLATFORM_DEVID_AUTO,
+ 							  NULL, 0);
+ 		if (IS_ERR(apcs->clk))
+ 			dev_err(&pdev->dev, "failed to register APCS clk\n");
+diff --git a/drivers/mailbox/qcom-ipcc.c b/drivers/mailbox/qcom-ipcc.c
+index 2d13c72944c6f..584700cd15855 100644
+--- a/drivers/mailbox/qcom-ipcc.c
++++ b/drivers/mailbox/qcom-ipcc.c
+@@ -155,6 +155,11 @@ static int qcom_ipcc_mbox_send_data(struct mbox_chan *chan, void *data)
+ 	return 0;
+ }
+ 
++static void qcom_ipcc_mbox_shutdown(struct mbox_chan *chan)
++{
++	chan->con_priv = NULL;
++}
++
+ static struct mbox_chan *qcom_ipcc_mbox_xlate(struct mbox_controller *mbox,
+ 					const struct of_phandle_args *ph)
+ {
+@@ -184,6 +189,7 @@ static struct mbox_chan *qcom_ipcc_mbox_xlate(struct mbox_controller *mbox,
+ 
+ static const struct mbox_chan_ops ipcc_mbox_chan_ops = {
+ 	.send_data = qcom_ipcc_mbox_send_data,
++	.shutdown = qcom_ipcc_mbox_shutdown,
+ };
+ 
+ static int qcom_ipcc_setup_mbox(struct qcom_ipcc *ipcc)
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 49f897fbb89ba..7ba00e4c862d7 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -441,30 +441,6 @@ check_suspended:
+ }
+ EXPORT_SYMBOL(md_handle_request);
+ 
+-struct md_io {
+-	struct mddev *mddev;
+-	bio_end_io_t *orig_bi_end_io;
+-	void *orig_bi_private;
+-	struct block_device *orig_bi_bdev;
+-	unsigned long start_time;
+-};
+-
+-static void md_end_io(struct bio *bio)
+-{
+-	struct md_io *md_io = bio->bi_private;
+-	struct mddev *mddev = md_io->mddev;
+-
+-	bio_end_io_acct_remapped(bio, md_io->start_time, md_io->orig_bi_bdev);
+-
+-	bio->bi_end_io = md_io->orig_bi_end_io;
+-	bio->bi_private = md_io->orig_bi_private;
+-
+-	mempool_free(md_io, &mddev->md_io_pool);
+-
+-	if (bio->bi_end_io)
+-		bio->bi_end_io(bio);
+-}
+-
+ static blk_qc_t md_submit_bio(struct bio *bio)
+ {
+ 	const int rw = bio_data_dir(bio);
+@@ -489,21 +465,6 @@ static blk_qc_t md_submit_bio(struct bio *bio)
+ 		return BLK_QC_T_NONE;
+ 	}
+ 
+-	if (bio->bi_end_io != md_end_io) {
+-		struct md_io *md_io;
+-
+-		md_io = mempool_alloc(&mddev->md_io_pool, GFP_NOIO);
+-		md_io->mddev = mddev;
+-		md_io->orig_bi_end_io = bio->bi_end_io;
+-		md_io->orig_bi_private = bio->bi_private;
+-		md_io->orig_bi_bdev = bio->bi_bdev;
+-
+-		bio->bi_end_io = md_end_io;
+-		bio->bi_private = md_io;
+-
+-		md_io->start_time = bio_start_io_acct(bio);
+-	}
+-
+ 	/* bio could be mergeable after passing to underlayer */
+ 	bio->bi_opf &= ~REQ_NOMERGE;
+ 
+@@ -5608,7 +5569,6 @@ static void md_free(struct kobject *ko)
+ 
+ 	bioset_exit(&mddev->bio_set);
+ 	bioset_exit(&mddev->sync_set);
+-	mempool_exit(&mddev->md_io_pool);
+ 	kfree(mddev);
+ }
+ 
+@@ -5705,11 +5665,6 @@ static int md_alloc(dev_t dev, char *name)
+ 		 */
+ 		mddev->hold_active = UNTIL_STOP;
+ 
+-	error = mempool_init_kmalloc_pool(&mddev->md_io_pool, BIO_POOL_SIZE,
+-					  sizeof(struct md_io));
+-	if (error)
+-		goto abort;
+-
+ 	error = -ENOMEM;
+ 	mddev->queue = blk_alloc_queue(NUMA_NO_NODE);
+ 	if (!mddev->queue)
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index fb7eab58cfd51..4da240ffe2c5e 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -487,7 +487,6 @@ struct mddev {
+ 	struct bio_set			sync_set; /* for sync operations like
+ 						   * metadata and bitmap writes
+ 						   */
+-	mempool_t			md_io_pool;
+ 
+ 	/* Generic flush handling.
+ 	 * The last to finish preflush schedules a worker to submit
+diff --git a/drivers/media/cec/platform/s5p/s5p_cec.c b/drivers/media/cec/platform/s5p/s5p_cec.c
+index 2a3e7ffefe0a2..028a09a7531ef 100644
+--- a/drivers/media/cec/platform/s5p/s5p_cec.c
++++ b/drivers/media/cec/platform/s5p/s5p_cec.c
+@@ -35,10 +35,13 @@ MODULE_PARM_DESC(debug, "debug level (0-2)");
+ 
+ static int s5p_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ {
++	int ret;
+ 	struct s5p_cec_dev *cec = cec_get_drvdata(adap);
+ 
+ 	if (enable) {
+-		pm_runtime_get_sync(cec->dev);
++		ret = pm_runtime_resume_and_get(cec->dev);
++		if (ret < 0)
++			return ret;
+ 
+ 		s5p_cec_reset(cec);
+ 
+@@ -51,7 +54,7 @@ static int s5p_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 	} else {
+ 		s5p_cec_mask_tx_interrupts(cec);
+ 		s5p_cec_mask_rx_interrupts(cec);
+-		pm_runtime_disable(cec->dev);
++		pm_runtime_put(cec->dev);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/media/common/siano/smscoreapi.c b/drivers/media/common/siano/smscoreapi.c
+index 410cc3ac6f948..bceaf91faa15f 100644
+--- a/drivers/media/common/siano/smscoreapi.c
++++ b/drivers/media/common/siano/smscoreapi.c
+@@ -908,7 +908,7 @@ static int smscore_load_firmware_family2(struct smscore_device_t *coredev,
+ 					 void *buffer, size_t size)
+ {
+ 	struct sms_firmware *firmware = (struct sms_firmware *) buffer;
+-	struct sms_msg_data4 *msg;
++	struct sms_msg_data5 *msg;
+ 	u32 mem_address,  calc_checksum = 0;
+ 	u32 i, *ptr;
+ 	u8 *payload = firmware->payload;
+@@ -989,24 +989,20 @@ static int smscore_load_firmware_family2(struct smscore_device_t *coredev,
+ 		goto exit_fw_download;
+ 
+ 	if (coredev->mode == DEVICE_MODE_NONE) {
+-		struct sms_msg_data *trigger_msg =
+-			(struct sms_msg_data *) msg;
+-
+ 		pr_debug("sending MSG_SMS_SWDOWNLOAD_TRIGGER_REQ\n");
+ 		SMS_INIT_MSG(&msg->x_msg_header,
+ 				MSG_SMS_SWDOWNLOAD_TRIGGER_REQ,
+-				sizeof(struct sms_msg_hdr) +
+-				sizeof(u32) * 5);
++				sizeof(*msg));
+ 
+-		trigger_msg->msg_data[0] = firmware->start_address;
++		msg->msg_data[0] = firmware->start_address;
+ 					/* Entry point */
+-		trigger_msg->msg_data[1] = 6; /* Priority */
+-		trigger_msg->msg_data[2] = 0x200; /* Stack size */
+-		trigger_msg->msg_data[3] = 0; /* Parameter */
+-		trigger_msg->msg_data[4] = 4; /* Task ID */
++		msg->msg_data[1] = 6; /* Priority */
++		msg->msg_data[2] = 0x200; /* Stack size */
++		msg->msg_data[3] = 0; /* Parameter */
++		msg->msg_data[4] = 4; /* Task ID */
+ 
+-		rc = smscore_sendrequest_and_wait(coredev, trigger_msg,
+-					trigger_msg->x_msg_header.msg_length,
++		rc = smscore_sendrequest_and_wait(coredev, msg,
++					msg->x_msg_header.msg_length,
+ 					&coredev->trigger_done);
+ 	} else {
+ 		SMS_INIT_MSG(&msg->x_msg_header, MSG_SW_RELOAD_EXEC_REQ,
+diff --git a/drivers/media/common/siano/smscoreapi.h b/drivers/media/common/siano/smscoreapi.h
+index 4a6b9f4c44ace..f8789ee0d554e 100644
+--- a/drivers/media/common/siano/smscoreapi.h
++++ b/drivers/media/common/siano/smscoreapi.h
+@@ -624,9 +624,9 @@ struct sms_msg_data2 {
+ 	u32 msg_data[2];
+ };
+ 
+-struct sms_msg_data4 {
++struct sms_msg_data5 {
+ 	struct sms_msg_hdr x_msg_header;
+-	u32 msg_data[4];
++	u32 msg_data[5];
+ };
+ 
+ struct sms_data_download {
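
The smscoreapi change above is an out-of-bounds fix: the SWDOWNLOAD trigger request writes five 32-bit words (entry point, priority, stack size, parameter, task ID), but the old sms_msg_data4 only had room for four, and the message length was hand-computed as sizeof(struct sms_msg_hdr) + sizeof(u32) * 5 to paper over it. Sizing the array to the payload lets sizeof(*msg) express the length directly. A self-contained illustration of that invariant; the header struct here is an opaque stand-in, not the real smscoreapi.h layout:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in only; the real header lives in smscoreapi.h. */
struct hdr { uint8_t raw[8]; };

struct msg_data5 {
	struct hdr x_msg_header;
	uint32_t msg_data[5];	/* was msg_data[4]: one word short */
};

int main(void)
{
	struct msg_data5 msg;

	/* With the array sized to the payload, sizeof() is the full
	 * request length, replacing sizeof(hdr) + sizeof(uint32_t) * 5. */
	static_assert(sizeof(struct msg_data5) ==
		      sizeof(struct hdr) + 5 * sizeof(uint32_t),
		      "assumes no padding between header and payload");

	msg.msg_data[0] = 0x1000; /* entry point (example value) */
	msg.msg_data[1] = 6;      /* priority */
	msg.msg_data[2] = 0x200;  /* stack size */
	msg.msg_data[3] = 0;      /* parameter */
	msg.msg_data[4] = 4;      /* task ID */

	printf("request length: %zu bytes\n", sizeof(msg));
	return 0;
}
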
+diff --git a/drivers/media/common/siano/smsdvb-main.c b/drivers/media/common/siano/smsdvb-main.c
+index cd5bafe9a3aca..7e4100263381c 100644
+--- a/drivers/media/common/siano/smsdvb-main.c
++++ b/drivers/media/common/siano/smsdvb-main.c
+@@ -1212,6 +1212,10 @@ static int smsdvb_hotplug(struct smscore_device_t *coredev,
+ 	return 0;
+ 
+ media_graph_error:
++	mutex_lock(&g_smsdvb_clientslock);
++	list_del(&client->entry);
++	mutex_unlock(&g_smsdvb_clientslock);
++
+ 	smsdvb_debugfs_release(client);
+ 
+ client_error:
+diff --git a/drivers/media/dvb-core/dvb_net.c b/drivers/media/dvb-core/dvb_net.c
+index 89620da983bab..dddebea644bb8 100644
+--- a/drivers/media/dvb-core/dvb_net.c
++++ b/drivers/media/dvb-core/dvb_net.c
+@@ -45,6 +45,7 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/netdevice.h>
++#include <linux/nospec.h>
+ #include <linux/etherdevice.h>
+ #include <linux/dvb/net.h>
+ #include <linux/uio.h>
+@@ -1462,14 +1463,20 @@ static int dvb_net_do_ioctl(struct file *file,
+ 		struct net_device *netdev;
+ 		struct dvb_net_priv *priv_data;
+ 		struct dvb_net_if *dvbnetif = parg;
++		int if_num = dvbnetif->if_num;
+ 
+-		if (dvbnetif->if_num >= DVB_NET_DEVICES_MAX ||
+-		    !dvbnet->state[dvbnetif->if_num]) {
++		if (if_num >= DVB_NET_DEVICES_MAX) {
+ 			ret = -EINVAL;
+ 			goto ioctl_error;
+ 		}
++		if_num = array_index_nospec(if_num, DVB_NET_DEVICES_MAX);
+ 
+-		netdev = dvbnet->device[dvbnetif->if_num];
++		if (!dvbnet->state[if_num]) {
++			ret = -EINVAL;
++			goto ioctl_error;
++		}
++
++		netdev = dvbnet->device[if_num];
+ 
+ 		priv_data = netdev_priv(netdev);
+ 		dvbnetif->pid=priv_data->pid;
+@@ -1522,14 +1529,20 @@ static int dvb_net_do_ioctl(struct file *file,
+ 		struct net_device *netdev;
+ 		struct dvb_net_priv *priv_data;
+ 		struct __dvb_net_if_old *dvbnetif = parg;
++		int if_num = dvbnetif->if_num;
++
++		if (if_num >= DVB_NET_DEVICES_MAX) {
++			ret = -EINVAL;
++			goto ioctl_error;
++		}
++		if_num = array_index_nospec(if_num, DVB_NET_DEVICES_MAX);
+ 
+-		if (dvbnetif->if_num >= DVB_NET_DEVICES_MAX ||
+-		    !dvbnet->state[dvbnetif->if_num]) {
++		if (!dvbnet->state[if_num]) {
+ 			ret = -EINVAL;
+ 			goto ioctl_error;
+ 		}
+ 
+-		netdev = dvbnet->device[dvbnetif->if_num];
++		netdev = dvbnet->device[if_num];
+ 
+ 		priv_data = netdev_priv(netdev);
+ 		dvbnetif->pid=priv_data->pid;
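
The dvb_net hunks apply the standard Spectre-v1 hardening recipe (hence the new <linux/nospec.h> include): bounds-check the user-supplied index, clamp it with array_index_nospec(), and only then touch any array, so a mispredicted branch cannot speculatively load out-of-bounds state. Splitting the original combined check matters because the state[] lookup itself must already use the clamped value. A kernel-style sketch with hypothetical names:

#include <linux/err.h>
#include <linux/netdevice.h>
#include <linux/nospec.h>

#define EXAMPLE_MAX	10	/* analogous to DVB_NET_DEVICES_MAX */

struct example_net {
	int state[EXAMPLE_MAX];
	struct net_device *device[EXAMPLE_MAX];
};

static struct net_device *example_get(struct example_net *net, int if_num)
{
	if (if_num >= EXAMPLE_MAX)
		return ERR_PTR(-EINVAL);

	/*
	 * Clamps if_num to [0, EXAMPLE_MAX) without a branch the CPU
	 * could mispredict; all later indexing uses the clamped value.
	 */
	if_num = array_index_nospec(if_num, EXAMPLE_MAX);

	if (!net->state[if_num])
		return ERR_PTR(-EINVAL);

	return net->device[if_num];
}
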
+diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
+index 3862ddc86ec48..795d9bfaba5cf 100644
+--- a/drivers/media/dvb-core/dvbdev.c
++++ b/drivers/media/dvb-core/dvbdev.c
+@@ -506,6 +506,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 			break;
+ 
+ 	if (minor == MAX_DVB_MINORS) {
++		list_del (&dvbdev->list_head);
+ 		kfree(dvbdevfops);
+ 		kfree(dvbdev);
+ 		up_write(&minor_rwsem);
+@@ -526,6 +527,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 		      __func__);
+ 
+ 		dvb_media_device_free(dvbdev);
++		list_del (&dvbdev->list_head);
+ 		kfree(dvbdevfops);
+ 		kfree(dvbdev);
+ 		mutex_unlock(&dvbdev_register_lock);
+@@ -541,6 +543,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
+ 		pr_err("%s: failed to create device dvb%d.%s%d (%ld)\n",
+ 		       __func__, adap->num, dnames[type], id, PTR_ERR(clsdev));
+ 		dvb_media_device_free(dvbdev);
++		list_del (&dvbdev->list_head);
+ 		kfree(dvbdevfops);
+ 		kfree(dvbdev);
+ 		return PTR_ERR(clsdev);
+diff --git a/drivers/media/i2c/ccs/ccs-core.c b/drivers/media/i2c/ccs/ccs-core.c
+index 9dc3f45da3dcd..b05f409014b2f 100644
+--- a/drivers/media/i2c/ccs/ccs-core.c
++++ b/drivers/media/i2c/ccs/ccs-core.c
+@@ -3093,7 +3093,7 @@ static int __maybe_unused ccs_suspend(struct device *dev)
+ 	if (rval < 0) {
+ 		pm_runtime_put_noidle(dev);
+ 
+-		return -EAGAIN;
++		return rval;
+ 	}
+ 
+ 	if (sensor->streaming)
+diff --git a/drivers/media/i2c/imx334.c b/drivers/media/i2c/imx334.c
+index 047aa7658d217..23f28606e570f 100644
+--- a/drivers/media/i2c/imx334.c
++++ b/drivers/media/i2c/imx334.c
+@@ -717,9 +717,9 @@ static int imx334_set_stream(struct v4l2_subdev *sd, int enable)
+ 	}
+ 
+ 	if (enable) {
+-		ret = pm_runtime_get_sync(imx334->dev);
+-		if (ret)
+-			goto error_power_off;
++		ret = pm_runtime_resume_and_get(imx334->dev);
++		if (ret < 0)
++			goto error_unlock;
+ 
+ 		ret = imx334_start_streaming(imx334);
+ 		if (ret)
+@@ -737,6 +737,7 @@ static int imx334_set_stream(struct v4l2_subdev *sd, int enable)
+ 
+ error_power_off:
+ 	pm_runtime_put(imx334->dev);
++error_unlock:
+ 	mutex_unlock(&imx334->mutex);
+ 
+ 	return ret;
+diff --git a/drivers/media/i2c/ir-kbd-i2c.c b/drivers/media/i2c/ir-kbd-i2c.c
+index e8119ad0bc71d..92376592455ee 100644
+--- a/drivers/media/i2c/ir-kbd-i2c.c
++++ b/drivers/media/i2c/ir-kbd-i2c.c
+@@ -678,8 +678,8 @@ static int zilog_tx(struct rc_dev *rcdev, unsigned int *txbuf,
+ 		goto out_unlock;
+ 	}
+ 
+-	i = i2c_master_recv(ir->tx_c, buf, 1);
+-	if (i != 1) {
++	ret = i2c_master_recv(ir->tx_c, buf, 1);
++	if (ret != 1) {
+ 		dev_err(&ir->rc->dev, "i2c_master_recv failed with %d\n", ret);
+ 		ret = -EIO;
+ 		goto out_unlock;
+diff --git a/drivers/media/i2c/ov2659.c b/drivers/media/i2c/ov2659.c
+index 42f64175a6dff..fb78a1cedc03b 100644
+--- a/drivers/media/i2c/ov2659.c
++++ b/drivers/media/i2c/ov2659.c
+@@ -204,6 +204,7 @@ struct ov2659 {
+ 	struct i2c_client *client;
+ 	struct v4l2_ctrl_handler ctrls;
+ 	struct v4l2_ctrl *link_frequency;
++	struct clk *clk;
+ 	const struct ov2659_framesize *frame_size;
+ 	struct sensor_register *format_ctrl_regs;
+ 	struct ov2659_pll_ctrl pll;
+@@ -1270,6 +1271,8 @@ static int ov2659_power_off(struct device *dev)
+ 
+ 	gpiod_set_value(ov2659->pwdn_gpio, 1);
+ 
++	clk_disable_unprepare(ov2659->clk);
++
+ 	return 0;
+ }
+ 
+@@ -1278,9 +1281,17 @@ static int ov2659_power_on(struct device *dev)
+ 	struct i2c_client *client = to_i2c_client(dev);
+ 	struct v4l2_subdev *sd = i2c_get_clientdata(client);
+ 	struct ov2659 *ov2659 = to_ov2659(sd);
++	int ret;
+ 
+ 	dev_dbg(&client->dev, "%s:\n", __func__);
+ 
++	ret = clk_prepare_enable(ov2659->clk);
++	if (ret) {
++		dev_err(&client->dev, "%s: failed to enable clock\n",
++			__func__);
++		return ret;
++	}
++
+ 	gpiod_set_value(ov2659->pwdn_gpio, 0);
+ 
+ 	if (ov2659->resetb_gpio) {
+@@ -1425,7 +1436,6 @@ static int ov2659_probe(struct i2c_client *client)
+ 	const struct ov2659_platform_data *pdata = ov2659_get_pdata(client);
+ 	struct v4l2_subdev *sd;
+ 	struct ov2659 *ov2659;
+-	struct clk *clk;
+ 	int ret;
+ 
+ 	if (!pdata) {
+@@ -1440,11 +1450,11 @@ static int ov2659_probe(struct i2c_client *client)
+ 	ov2659->pdata = pdata;
+ 	ov2659->client = client;
+ 
+-	clk = devm_clk_get(&client->dev, "xvclk");
+-	if (IS_ERR(clk))
+-		return PTR_ERR(clk);
++	ov2659->clk = devm_clk_get(&client->dev, "xvclk");
++	if (IS_ERR(ov2659->clk))
++		return PTR_ERR(ov2659->clk);
+ 
+-	ov2659->xvclk_frequency = clk_get_rate(clk);
++	ov2659->xvclk_frequency = clk_get_rate(ov2659->clk);
+ 	if (ov2659->xvclk_frequency < 6000000 ||
+ 	    ov2659->xvclk_frequency > 27000000)
+ 		return -EINVAL;
+@@ -1506,7 +1516,9 @@ static int ov2659_probe(struct i2c_client *client)
+ 	ov2659->frame_size = &ov2659_framesizes[2];
+ 	ov2659->format_ctrl_regs = ov2659_formats[0].format_ctrl_regs;
+ 
+-	ov2659_power_on(&client->dev);
++	ret = ov2659_power_on(&client->dev);
++	if (ret < 0)
++		goto error;
+ 
+ 	ret = ov2659_detect(sd);
+ 	if (ret < 0)
+diff --git a/drivers/media/i2c/rdacm21.c b/drivers/media/i2c/rdacm21.c
+index 179d107f494ca..50e2af5227603 100644
+--- a/drivers/media/i2c/rdacm21.c
++++ b/drivers/media/i2c/rdacm21.c
+@@ -69,6 +69,7 @@
+ #define OV490_ISP_VSIZE_LOW		0x80820062
+ #define OV490_ISP_VSIZE_HIGH		0x80820063
+ 
++#define OV10640_PID_TIMEOUT		20
+ #define OV10640_ID_HIGH			0xa6
+ #define OV10640_CHIP_ID			0x300a
+ #define OV10640_PIXEL_RATE		55000000
+@@ -329,30 +330,51 @@ static const struct v4l2_subdev_ops rdacm21_subdev_ops = {
+ 	.pad		= &rdacm21_subdev_pad_ops,
+ };
+ 
+-static int ov10640_initialize(struct rdacm21_device *dev)
++static void ov10640_power_up(struct rdacm21_device *dev)
+ {
+-	u8 val;
+-
+-	/* Power-up OV10640 by setting RESETB and PWDNB pins high. */
++	/* Enable GPIO0#0 (reset) and GPIO1#0 (pwdn) as output lines. */
+ 	ov490_write_reg(dev, OV490_GPIO_SEL0, OV490_GPIO0);
+ 	ov490_write_reg(dev, OV490_GPIO_SEL1, OV490_SPWDN0);
+ 	ov490_write_reg(dev, OV490_GPIO_DIRECTION0, OV490_GPIO0);
+ 	ov490_write_reg(dev, OV490_GPIO_DIRECTION1, OV490_SPWDN0);
++
++	/* Power up OV10640 and then reset it. */
++	ov490_write_reg(dev, OV490_GPIO_OUTPUT_VALUE1, OV490_SPWDN0);
++	usleep_range(1500, 3000);
++
++	ov490_write_reg(dev, OV490_GPIO_OUTPUT_VALUE0, 0x00);
++	usleep_range(1500, 3000);
+ 	ov490_write_reg(dev, OV490_GPIO_OUTPUT_VALUE0, OV490_GPIO0);
+-	ov490_write_reg(dev, OV490_GPIO_OUTPUT_VALUE0, OV490_SPWDN0);
+ 	usleep_range(3000, 5000);
++}
+ 
+-	/* Read OV10640 ID to test communications. */
+-	ov490_write_reg(dev, OV490_SCCB_SLAVE0_DIR, OV490_SCCB_SLAVE_READ);
+-	ov490_write_reg(dev, OV490_SCCB_SLAVE0_ADDR_HIGH, OV10640_CHIP_ID >> 8);
+-	ov490_write_reg(dev, OV490_SCCB_SLAVE0_ADDR_LOW, OV10640_CHIP_ID & 0xff);
+-
+-	/* Trigger SCCB slave transaction and give it some time to complete. */
+-	ov490_write_reg(dev, OV490_HOST_CMD, OV490_HOST_CMD_TRIGGER);
+-	usleep_range(1000, 1500);
++static int ov10640_check_id(struct rdacm21_device *dev)
++{
++	unsigned int i;
++	u8 val;
+ 
+-	ov490_read_reg(dev, OV490_SCCB_SLAVE0_DIR, &val);
+-	if (val != OV10640_ID_HIGH) {
++	/* Read OV10640 ID to test communications. */
++	for (i = 0; i < OV10640_PID_TIMEOUT; ++i) {
++		ov490_write_reg(dev, OV490_SCCB_SLAVE0_DIR,
++				OV490_SCCB_SLAVE_READ);
++		ov490_write_reg(dev, OV490_SCCB_SLAVE0_ADDR_HIGH,
++				OV10640_CHIP_ID >> 8);
++		ov490_write_reg(dev, OV490_SCCB_SLAVE0_ADDR_LOW,
++				OV10640_CHIP_ID & 0xff);
++
++		/*
++		 * Trigger SCCB slave transaction and give it some time
++		 * to complete.
++		 */
++		ov490_write_reg(dev, OV490_HOST_CMD, OV490_HOST_CMD_TRIGGER);
++		usleep_range(1000, 1500);
++
++		ov490_read_reg(dev, OV490_SCCB_SLAVE0_DIR, &val);
++		if (val == OV10640_ID_HIGH)
++			break;
++		usleep_range(1000, 1500);
++	}
++	if (i == OV10640_PID_TIMEOUT) {
+ 		dev_err(dev->dev, "OV10640 ID mismatch: (0x%02x)\n", val);
+ 		return -ENODEV;
+ 	}
+@@ -368,6 +390,8 @@ static int ov490_initialize(struct rdacm21_device *dev)
+ 	unsigned int i;
+ 	int ret;
+ 
++	ov10640_power_up(dev);
++
+ 	/*
+ 	 * Read OV490 Id to test communications. Give it up to 40msec to
+ 	 * exit from reset.
+@@ -405,7 +429,7 @@ static int ov490_initialize(struct rdacm21_device *dev)
+ 		return -ENODEV;
+ 	}
+ 
+-	ret = ov10640_initialize(dev);
++	ret = ov10640_check_id(dev);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/media/i2c/s5c73m3/s5c73m3-core.c b/drivers/media/i2c/s5c73m3/s5c73m3-core.c
+index 5b4c4a3547c93..71804a70bc6d7 100644
+--- a/drivers/media/i2c/s5c73m3/s5c73m3-core.c
++++ b/drivers/media/i2c/s5c73m3/s5c73m3-core.c
+@@ -1386,7 +1386,7 @@ static int __s5c73m3_power_on(struct s5c73m3 *state)
+ 	s5c73m3_gpio_deassert(state, STBY);
+ 	usleep_range(100, 200);
+ 
+-	s5c73m3_gpio_deassert(state, RST);
++	s5c73m3_gpio_deassert(state, RSET);
+ 	usleep_range(50, 100);
+ 
+ 	return 0;
+@@ -1401,7 +1401,7 @@ static int __s5c73m3_power_off(struct s5c73m3 *state)
+ {
+ 	int i, ret;
+ 
+-	if (s5c73m3_gpio_assert(state, RST))
++	if (s5c73m3_gpio_assert(state, RSET))
+ 		usleep_range(10, 50);
+ 
+ 	if (s5c73m3_gpio_assert(state, STBY))
+@@ -1606,7 +1606,7 @@ static int s5c73m3_get_platform_data(struct s5c73m3 *state)
+ 
+ 		state->mclk_frequency = pdata->mclk_frequency;
+ 		state->gpio[STBY] = pdata->gpio_stby;
+-		state->gpio[RST] = pdata->gpio_reset;
++		state->gpio[RSET] = pdata->gpio_reset;
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/media/i2c/s5c73m3/s5c73m3.h b/drivers/media/i2c/s5c73m3/s5c73m3.h
+index ef7e85b34263b..c3fcfdd3ea66d 100644
+--- a/drivers/media/i2c/s5c73m3/s5c73m3.h
++++ b/drivers/media/i2c/s5c73m3/s5c73m3.h
+@@ -353,7 +353,7 @@ struct s5c73m3_ctrls {
+ 
+ enum s5c73m3_gpio_id {
+ 	STBY,
+-	RST,
++	RSET,
+ 	GPIO_NUM,
+ };
+ 
+diff --git a/drivers/media/i2c/s5k4ecgx.c b/drivers/media/i2c/s5k4ecgx.c
+index b2d53417badf6..4e97309a67f41 100644
+--- a/drivers/media/i2c/s5k4ecgx.c
++++ b/drivers/media/i2c/s5k4ecgx.c
+@@ -173,7 +173,7 @@ static const char * const s5k4ecgx_supply_names[] = {
+ 
+ enum s5k4ecgx_gpio_id {
+ 	STBY,
+-	RST,
++	RSET,
+ 	GPIO_NUM,
+ };
+ 
+@@ -476,7 +476,7 @@ static int __s5k4ecgx_power_on(struct s5k4ecgx *priv)
+ 	if (s5k4ecgx_gpio_set_value(priv, STBY, priv->gpio[STBY].level))
+ 		usleep_range(30, 50);
+ 
+-	if (s5k4ecgx_gpio_set_value(priv, RST, priv->gpio[RST].level))
++	if (s5k4ecgx_gpio_set_value(priv, RSET, priv->gpio[RSET].level))
+ 		usleep_range(30, 50);
+ 
+ 	return 0;
+@@ -484,7 +484,7 @@ static int __s5k4ecgx_power_on(struct s5k4ecgx *priv)
+ 
+ static int __s5k4ecgx_power_off(struct s5k4ecgx *priv)
+ {
+-	if (s5k4ecgx_gpio_set_value(priv, RST, !priv->gpio[RST].level))
++	if (s5k4ecgx_gpio_set_value(priv, RSET, !priv->gpio[RSET].level))
+ 		usleep_range(30, 50);
+ 
+ 	if (s5k4ecgx_gpio_set_value(priv, STBY, !priv->gpio[STBY].level))
+@@ -872,7 +872,7 @@ static int s5k4ecgx_config_gpios(struct s5k4ecgx *priv,
+ 	int ret;
+ 
+ 	priv->gpio[STBY].gpio = -EINVAL;
+-	priv->gpio[RST].gpio  = -EINVAL;
++	priv->gpio[RSET].gpio  = -EINVAL;
+ 
+ 	ret = s5k4ecgx_config_gpio(gpio->gpio, gpio->level, "S5K4ECGX_STBY");
+ 
+@@ -891,7 +891,7 @@ static int s5k4ecgx_config_gpios(struct s5k4ecgx *priv,
+ 		s5k4ecgx_free_gpios(priv);
+ 		return ret;
+ 	}
+-	priv->gpio[RST] = *gpio;
++	priv->gpio[RSET] = *gpio;
+ 	if (gpio_is_valid(gpio->gpio))
+ 		gpio_set_value(gpio->gpio, 0);
+ 
+diff --git a/drivers/media/i2c/s5k5baf.c b/drivers/media/i2c/s5k5baf.c
+index 6e702b57c37da..bc560817e5046 100644
+--- a/drivers/media/i2c/s5k5baf.c
++++ b/drivers/media/i2c/s5k5baf.c
+@@ -235,7 +235,7 @@ struct s5k5baf_gpio {
+ 
+ enum s5k5baf_gpio_id {
+ 	STBY,
+-	RST,
++	RSET,
+ 	NUM_GPIOS,
+ };
+ 
+@@ -969,7 +969,7 @@ static int s5k5baf_power_on(struct s5k5baf *state)
+ 
+ 	s5k5baf_gpio_deassert(state, STBY);
+ 	usleep_range(50, 100);
+-	s5k5baf_gpio_deassert(state, RST);
++	s5k5baf_gpio_deassert(state, RSET);
+ 	return 0;
+ 
+ err_reg_dis:
+@@ -987,7 +987,7 @@ static int s5k5baf_power_off(struct s5k5baf *state)
+ 	state->apply_cfg = 0;
+ 	state->apply_crop = 0;
+ 
+-	s5k5baf_gpio_assert(state, RST);
++	s5k5baf_gpio_assert(state, RSET);
+ 	s5k5baf_gpio_assert(state, STBY);
+ 
+ 	if (!IS_ERR(state->clock))
+diff --git a/drivers/media/i2c/s5k6aa.c b/drivers/media/i2c/s5k6aa.c
+index 038e385007601..e9be7323a22e9 100644
+--- a/drivers/media/i2c/s5k6aa.c
++++ b/drivers/media/i2c/s5k6aa.c
+@@ -177,7 +177,7 @@ static const char * const s5k6aa_supply_names[] = {
+ 
+ enum s5k6aa_gpio_id {
+ 	STBY,
+-	RST,
++	RSET,
+ 	GPIO_NUM,
+ };
+ 
+@@ -841,7 +841,7 @@ static int __s5k6aa_power_on(struct s5k6aa *s5k6aa)
+ 		ret = s5k6aa->s_power(1);
+ 	usleep_range(4000, 5000);
+ 
+-	if (s5k6aa_gpio_deassert(s5k6aa, RST))
++	if (s5k6aa_gpio_deassert(s5k6aa, RSET))
+ 		msleep(20);
+ 
+ 	return ret;
+@@ -851,7 +851,7 @@ static int __s5k6aa_power_off(struct s5k6aa *s5k6aa)
+ {
+ 	int ret;
+ 
+-	if (s5k6aa_gpio_assert(s5k6aa, RST))
++	if (s5k6aa_gpio_assert(s5k6aa, RSET))
+ 		usleep_range(100, 150);
+ 
+ 	if (s5k6aa->s_power) {
+@@ -1510,7 +1510,7 @@ static int s5k6aa_configure_gpios(struct s5k6aa *s5k6aa,
+ 	int ret;
+ 
+ 	s5k6aa->gpio[STBY].gpio = -EINVAL;
+-	s5k6aa->gpio[RST].gpio  = -EINVAL;
++	s5k6aa->gpio[RSET].gpio  = -EINVAL;
+ 
+ 	gpio = &pdata->gpio_stby;
+ 	if (gpio_is_valid(gpio->gpio)) {
+@@ -1533,7 +1533,7 @@ static int s5k6aa_configure_gpios(struct s5k6aa *s5k6aa,
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		s5k6aa->gpio[RST] = *gpio;
++		s5k6aa->gpio[RSET] = *gpio;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 1b309bb743c7b..f21da11caf224 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -1974,6 +1974,7 @@ static int tc358743_probe_of(struct tc358743_state *state)
+ 	bps_pr_lane = 2 * endpoint.link_frequencies[0];
+ 	if (bps_pr_lane < 62500000U || bps_pr_lane > 1000000000U) {
+ 		dev_err(dev, "unsupported bps per lane: %u bps\n", bps_pr_lane);
++		ret = -EINVAL;
+ 		goto disable_clk;
+ 	}
+ 
+diff --git a/drivers/media/mc/Makefile b/drivers/media/mc/Makefile
+index 119037f0e686d..2b7af42ba59c1 100644
+--- a/drivers/media/mc/Makefile
++++ b/drivers/media/mc/Makefile
+@@ -3,7 +3,7 @@
+ mc-objs	:= mc-device.o mc-devnode.o mc-entity.o \
+ 	   mc-request.o
+ 
+-ifeq ($(CONFIG_USB),y)
++ifneq ($(CONFIG_USB),)
+ 	mc-objs += mc-dev-allocator.o
+ endif
+ 
+diff --git a/drivers/media/pci/bt8xx/bt878.c b/drivers/media/pci/bt8xx/bt878.c
+index 78dd35c9b65d7..90972d6952f1c 100644
+--- a/drivers/media/pci/bt8xx/bt878.c
++++ b/drivers/media/pci/bt8xx/bt878.c
+@@ -300,7 +300,8 @@ static irqreturn_t bt878_irq(int irq, void *dev_id)
+ 		}
+ 		if (astat & BT878_ARISCI) {
+ 			bt->finished_block = (stat & BT878_ARISCS) >> 28;
+-			tasklet_schedule(&bt->tasklet);
++			if (bt->tasklet.callback)
++				tasklet_schedule(&bt->tasklet);
+ 			break;
+ 		}
+ 		count++;
+@@ -477,6 +478,9 @@ static int bt878_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
+ 	btwrite(0, BT878_AINT_MASK);
+ 	bt878_num++;
+ 
++	if (!bt->tasklet.func)
++		tasklet_disable(&bt->tasklet);
++
+ 	return 0;
+ 
+       fail2:
+diff --git a/drivers/media/pci/cobalt/cobalt-driver.c b/drivers/media/pci/cobalt/cobalt-driver.c
+index 839503e654f46..16af58f2f93cc 100644
+--- a/drivers/media/pci/cobalt/cobalt-driver.c
++++ b/drivers/media/pci/cobalt/cobalt-driver.c
+@@ -667,6 +667,7 @@ static int cobalt_probe(struct pci_dev *pci_dev,
+ 		return -ENOMEM;
+ 	cobalt->pci_dev = pci_dev;
+ 	cobalt->instance = i;
++	mutex_init(&cobalt->pci_lock);
+ 
+ 	retval = v4l2_device_register(&pci_dev->dev, &cobalt->v4l2_dev);
+ 	if (retval) {
+diff --git a/drivers/media/pci/cobalt/cobalt-driver.h b/drivers/media/pci/cobalt/cobalt-driver.h
+index bca68572b3242..12c33e035904c 100644
+--- a/drivers/media/pci/cobalt/cobalt-driver.h
++++ b/drivers/media/pci/cobalt/cobalt-driver.h
+@@ -251,6 +251,8 @@ struct cobalt {
+ 	int instance;
+ 	struct pci_dev *pci_dev;
+ 	struct v4l2_device v4l2_dev;
++	/* serialize PCI access in cobalt_s_bit_sysctrl() */
++	struct mutex pci_lock;
+ 
+ 	void __iomem *bar0, *bar1;
+ 
+@@ -320,10 +322,13 @@ static inline u32 cobalt_g_sysctrl(struct cobalt *cobalt)
+ static inline void cobalt_s_bit_sysctrl(struct cobalt *cobalt,
+ 					int bit, int val)
+ {
+-	u32 ctrl = cobalt_read_bar1(cobalt, COBALT_SYS_CTRL_BASE);
++	u32 ctrl;
+ 
++	mutex_lock(&cobalt->pci_lock);
++	ctrl = cobalt_read_bar1(cobalt, COBALT_SYS_CTRL_BASE);
+ 	cobalt_write_bar1(cobalt, COBALT_SYS_CTRL_BASE,
+ 			(ctrl & ~(1UL << bit)) | (val << bit));
++	mutex_unlock(&cobalt->pci_lock);
+ }
+ 
+ static inline u32 cobalt_g_sysstat(struct cobalt *cobalt)
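
The cobalt change closes a lost-update race: cobalt_s_bit_sysctrl() does a read-modify-write of one shared register, and two concurrent callers could each read the same old value, with the second write silently discarding the first caller's bit. The new pci_lock (initialized in cobalt_probe() above) serializes the sequence. A kernel-style sketch of the idiom, with hypothetical names and a shadow variable standing in for the register:

#include <linux/mutex.h>
#include <linux/types.h>

struct example_hw {
	struct mutex reg_lock;	/* serializes the read-modify-write below */
	u32 shadow_reg;
};

static void example_set_bit(struct example_hw *hw, int bit, int val)
{
	u32 ctrl;

	mutex_lock(&hw->reg_lock);
	ctrl = hw->shadow_reg;					/* read */
	hw->shadow_reg = (ctrl & ~(1UL << bit)) |
			 ((u32)val << bit);			/* modify + write */
	mutex_unlock(&hw->reg_lock);
}
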
+diff --git a/drivers/media/pci/intel/ipu3/cio2-bridge.c b/drivers/media/pci/intel/ipu3/cio2-bridge.c
+index e8511787c1e43..4657e99df0339 100644
+--- a/drivers/media/pci/intel/ipu3/cio2-bridge.c
++++ b/drivers/media/pci/intel/ipu3/cio2-bridge.c
+@@ -173,14 +173,15 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
+ 	int ret;
+ 
+ 	for_each_acpi_dev_match(adev, cfg->hid, NULL, -1) {
+-		if (!adev->status.enabled)
++		if (!adev->status.enabled) {
++			acpi_dev_put(adev);
+ 			continue;
++		}
+ 
+ 		if (bridge->n_sensors >= CIO2_NUM_PORTS) {
++			acpi_dev_put(adev);
+ 			dev_err(&cio2->dev, "Exceeded available CIO2 ports\n");
+-			cio2_bridge_unregister_sensors(bridge);
+-			ret = -EINVAL;
+-			goto err_out;
++			return -EINVAL;
+ 		}
+ 
+ 		sensor = &bridge->sensors[bridge->n_sensors];
+@@ -228,7 +229,6 @@ err_free_swnodes:
+ 	software_node_unregister_nodes(sensor->swnodes);
+ err_put_adev:
+ 	acpi_dev_put(sensor->adev);
+-err_out:
+ 	return ret;
+ }
+ 
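
The cio2-bridge fix is about reference discipline inside for_each_acpi_dev_match(): the body holds a reference on adev for each iteration, so in this code any early exit from the body, the skip (continue) as well as the error return, drops that reference explicitly to avoid leaking the acpi_device. A hedged kernel-style sketch of the shape, assuming a hypothetical example_register() that takes ownership of the reference on success:

#include <linux/acpi.h>

static int example_scan(const char *hid)
{
	struct acpi_device *adev;

	for_each_acpi_dev_match(adev, hid, NULL, -1) {
		if (!adev->status.enabled) {
			acpi_dev_put(adev);	/* drop before skipping */
			continue;
		}

		/* example_register() is assumed to keep the reference
		 * (as the bridge keeps sensor->adev) on success. */
		if (example_register(adev) < 0) {
			acpi_dev_put(adev);	/* drop before bailing out */
			return -EINVAL;
		}
	}

	return 0;
}
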
+diff --git a/drivers/media/platform/am437x/am437x-vpfe.c b/drivers/media/platform/am437x/am437x-vpfe.c
+index 6cdc77dda0e49..1c9cb9e05fdf6 100644
+--- a/drivers/media/platform/am437x/am437x-vpfe.c
++++ b/drivers/media/platform/am437x/am437x-vpfe.c
+@@ -1021,7 +1021,9 @@ static int vpfe_initialize_device(struct vpfe_device *vpfe)
+ 	if (ret)
+ 		return ret;
+ 
+-	pm_runtime_get_sync(vpfe->pdev);
++	ret = pm_runtime_resume_and_get(vpfe->pdev);
++	if (ret < 0)
++		return ret;
+ 
+ 	vpfe_config_enable(&vpfe->ccdc, 1);
+ 
+@@ -2443,7 +2445,11 @@ static int vpfe_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(&pdev->dev);
+ 
+ 	/* for now just enable it here instead of waiting for the open */
+-	pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
++	if (ret < 0) {
++		vpfe_err(vpfe, "Unable to resume device.\n");
++		goto probe_out_v4l2_unregister;
++	}
+ 
+ 	vpfe_ccdc_config_defaults(ccdc);
+ 
+@@ -2530,6 +2536,11 @@ static int vpfe_suspend(struct device *dev)
+ 
+ 	/* only do full suspend if streaming has started */
+ 	if (vb2_start_streaming_called(&vpfe->buffer_queue)) {
++		/*
++		 * Ignore RPM resume errors here, as it is already too late.
++		 * A check like that should happen earlier, either at
++		 * open() or just before starting streaming.
++		 */
+ 		pm_runtime_get_sync(dev);
+ 		vpfe_config_enable(ccdc, 1);
+ 
+diff --git a/drivers/media/platform/exynos-gsc/gsc-m2m.c b/drivers/media/platform/exynos-gsc/gsc-m2m.c
+index 27a3c92c73bce..f1cf847d1cc2d 100644
+--- a/drivers/media/platform/exynos-gsc/gsc-m2m.c
++++ b/drivers/media/platform/exynos-gsc/gsc-m2m.c
+@@ -56,10 +56,8 @@ static void __gsc_m2m_job_abort(struct gsc_ctx *ctx)
+ static int gsc_m2m_start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+ 	struct gsc_ctx *ctx = q->drv_priv;
+-	int ret;
+ 
+-	ret = pm_runtime_get_sync(&ctx->gsc_dev->pdev->dev);
+-	return ret > 0 ? 0 : ret;
++	return pm_runtime_resume_and_get(&ctx->gsc_dev->pdev->dev);
+ }
+ 
+ static void __gsc_m2m_cleanup_queue(struct gsc_ctx *ctx)
+diff --git a/drivers/media/platform/exynos4-is/fimc-capture.c b/drivers/media/platform/exynos4-is/fimc-capture.c
+index 13c838d3f9473..0da36443173c1 100644
+--- a/drivers/media/platform/exynos4-is/fimc-capture.c
++++ b/drivers/media/platform/exynos4-is/fimc-capture.c
+@@ -478,11 +478,9 @@ static int fimc_capture_open(struct file *file)
+ 		goto unlock;
+ 
+ 	set_bit(ST_CAPT_BUSY, &fimc->state);
+-	ret = pm_runtime_get_sync(&fimc->pdev->dev);
+-	if (ret < 0) {
+-		pm_runtime_put_sync(&fimc->pdev->dev);
++	ret = pm_runtime_resume_and_get(&fimc->pdev->dev);
++	if (ret < 0)
+ 		goto unlock;
+-	}
+ 
+ 	ret = v4l2_fh_open(file);
+ 	if (ret) {
+diff --git a/drivers/media/platform/exynos4-is/fimc-is.c b/drivers/media/platform/exynos4-is/fimc-is.c
+index 972d9601d2360..1b24f5bfc4af4 100644
+--- a/drivers/media/platform/exynos4-is/fimc-is.c
++++ b/drivers/media/platform/exynos4-is/fimc-is.c
+@@ -828,9 +828,9 @@ static int fimc_is_probe(struct platform_device *pdev)
+ 			goto err_irq;
+ 	}
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+-		goto err_pm;
++		goto err_irq;
+ 
+ 	vb2_dma_contig_set_max_seg_size(dev, DMA_BIT_MASK(32));
+ 
+diff --git a/drivers/media/platform/exynos4-is/fimc-isp-video.c b/drivers/media/platform/exynos4-is/fimc-isp-video.c
+index 612b9872afc87..83688a7982f70 100644
+--- a/drivers/media/platform/exynos4-is/fimc-isp-video.c
++++ b/drivers/media/platform/exynos4-is/fimc-isp-video.c
+@@ -275,7 +275,7 @@ static int isp_video_open(struct file *file)
+ 	if (ret < 0)
+ 		goto unlock;
+ 
+-	ret = pm_runtime_get_sync(&isp->pdev->dev);
++	ret = pm_runtime_resume_and_get(&isp->pdev->dev);
+ 	if (ret < 0)
+ 		goto rel_fh;
+ 
+@@ -293,7 +293,6 @@ static int isp_video_open(struct file *file)
+ 	if (!ret)
+ 		goto unlock;
+ rel_fh:
+-	pm_runtime_put_noidle(&isp->pdev->dev);
+ 	v4l2_fh_release(file);
+ unlock:
+ 	mutex_unlock(&isp->video_lock);
+@@ -306,17 +305,20 @@ static int isp_video_release(struct file *file)
+ 	struct fimc_is_video *ivc = &isp->video_capture;
+ 	struct media_entity *entity = &ivc->ve.vdev.entity;
+ 	struct media_device *mdev = entity->graph_obj.mdev;
++	bool is_singular_file;
+ 
+ 	mutex_lock(&isp->video_lock);
+ 
+-	if (v4l2_fh_is_singular_file(file) && ivc->streaming) {
++	is_singular_file = v4l2_fh_is_singular_file(file);
++
++	if (is_singular_file && ivc->streaming) {
+ 		media_pipeline_stop(entity);
+ 		ivc->streaming = 0;
+ 	}
+ 
+ 	_vb2_fop_release(file, NULL);
+ 
+-	if (v4l2_fh_is_singular_file(file)) {
++	if (is_singular_file) {
+ 		fimc_pipeline_call(&ivc->ve, close);
+ 
+ 		mutex_lock(&mdev->graph_mutex);
+diff --git a/drivers/media/platform/exynos4-is/fimc-isp.c b/drivers/media/platform/exynos4-is/fimc-isp.c
+index a77c49b185115..74b49d30901ed 100644
+--- a/drivers/media/platform/exynos4-is/fimc-isp.c
++++ b/drivers/media/platform/exynos4-is/fimc-isp.c
+@@ -304,11 +304,10 @@ static int fimc_isp_subdev_s_power(struct v4l2_subdev *sd, int on)
+ 	pr_debug("on: %d\n", on);
+ 
+ 	if (on) {
+-		ret = pm_runtime_get_sync(&is->pdev->dev);
+-		if (ret < 0) {
+-			pm_runtime_put(&is->pdev->dev);
++		ret = pm_runtime_resume_and_get(&is->pdev->dev);
++		if (ret < 0)
+ 			return ret;
+-		}
++
+ 		set_bit(IS_ST_PWR_ON, &is->state);
+ 
+ 		ret = fimc_is_start_firmware(is);
+diff --git a/drivers/media/platform/exynos4-is/fimc-lite.c b/drivers/media/platform/exynos4-is/fimc-lite.c
+index fe20af3a7178a..4d8b18078ff37 100644
+--- a/drivers/media/platform/exynos4-is/fimc-lite.c
++++ b/drivers/media/platform/exynos4-is/fimc-lite.c
+@@ -469,9 +469,9 @@ static int fimc_lite_open(struct file *file)
+ 	}
+ 
+ 	set_bit(ST_FLITE_IN_USE, &fimc->state);
+-	ret = pm_runtime_get_sync(&fimc->pdev->dev);
++	ret = pm_runtime_resume_and_get(&fimc->pdev->dev);
+ 	if (ret < 0)
+-		goto err_pm;
++		goto err_in_use;
+ 
+ 	ret = v4l2_fh_open(file);
+ 	if (ret < 0)
+@@ -499,6 +499,7 @@ static int fimc_lite_open(struct file *file)
+ 	v4l2_fh_release(file);
+ err_pm:
+ 	pm_runtime_put_sync(&fimc->pdev->dev);
++err_in_use:
+ 	clear_bit(ST_FLITE_IN_USE, &fimc->state);
+ unlock:
+ 	mutex_unlock(&fimc->lock);
+diff --git a/drivers/media/platform/exynos4-is/fimc-m2m.c b/drivers/media/platform/exynos4-is/fimc-m2m.c
+index c9704a147e5cf..df8e2aa454d8f 100644
+--- a/drivers/media/platform/exynos4-is/fimc-m2m.c
++++ b/drivers/media/platform/exynos4-is/fimc-m2m.c
+@@ -73,17 +73,14 @@ static void fimc_m2m_shutdown(struct fimc_ctx *ctx)
+ static int start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+ 	struct fimc_ctx *ctx = q->drv_priv;
+-	int ret;
+ 
+-	ret = pm_runtime_get_sync(&ctx->fimc_dev->pdev->dev);
+-	return ret > 0 ? 0 : ret;
++	return pm_runtime_resume_and_get(&ctx->fimc_dev->pdev->dev);
+ }
+ 
+ static void stop_streaming(struct vb2_queue *q)
+ {
+ 	struct fimc_ctx *ctx = q->drv_priv;
+ 
+-
+ 	fimc_m2m_shutdown(ctx);
+ 	fimc_m2m_job_finish(ctx, VB2_BUF_STATE_ERROR);
+ 	pm_runtime_put(&ctx->fimc_dev->pdev->dev);
+diff --git a/drivers/media/platform/exynos4-is/media-dev.c b/drivers/media/platform/exynos4-is/media-dev.c
+index 13d192ba4aa6e..3b8a24bb724c8 100644
+--- a/drivers/media/platform/exynos4-is/media-dev.c
++++ b/drivers/media/platform/exynos4-is/media-dev.c
+@@ -512,11 +512,9 @@ static int fimc_md_register_sensor_entities(struct fimc_md *fmd)
+ 	if (!fmd->pmf)
+ 		return -ENXIO;
+ 
+-	ret = pm_runtime_get_sync(fmd->pmf);
+-	if (ret < 0) {
+-		pm_runtime_put(fmd->pmf);
++	ret = pm_runtime_resume_and_get(fmd->pmf);
++	if (ret < 0)
+ 		return ret;
+-	}
+ 
+ 	fmd->num_sensors = 0;
+ 
+@@ -1286,13 +1284,11 @@ static DEVICE_ATTR(subdev_conf_mode, S_IWUSR | S_IRUGO,
+ static int cam_clk_prepare(struct clk_hw *hw)
+ {
+ 	struct cam_clk *camclk = to_cam_clk(hw);
+-	int ret;
+ 
+ 	if (camclk->fmd->pmf == NULL)
+ 		return -ENODEV;
+ 
+-	ret = pm_runtime_get_sync(camclk->fmd->pmf);
+-	return ret < 0 ? ret : 0;
++	return pm_runtime_resume_and_get(camclk->fmd->pmf);
+ }
+ 
+ static void cam_clk_unprepare(struct clk_hw *hw)
+diff --git a/drivers/media/platform/exynos4-is/mipi-csis.c b/drivers/media/platform/exynos4-is/mipi-csis.c
+index 1aac167abb175..ebf39c8568943 100644
+--- a/drivers/media/platform/exynos4-is/mipi-csis.c
++++ b/drivers/media/platform/exynos4-is/mipi-csis.c
+@@ -494,7 +494,7 @@ static int s5pcsis_s_power(struct v4l2_subdev *sd, int on)
+ 	struct device *dev = &state->pdev->dev;
+ 
+ 	if (on)
+-		return pm_runtime_get_sync(dev);
++		return pm_runtime_resume_and_get(dev);
+ 
+ 	return pm_runtime_put_sync(dev);
+ }
+@@ -509,11 +509,9 @@ static int s5pcsis_s_stream(struct v4l2_subdev *sd, int enable)
+ 
+ 	if (enable) {
+ 		s5pcsis_clear_counters(state);
+-		ret = pm_runtime_get_sync(&state->pdev->dev);
+-		if (ret && ret != 1) {
+-			pm_runtime_put_noidle(&state->pdev->dev);
++		ret = pm_runtime_resume_and_get(&state->pdev->dev);
++		if (ret < 0)
+ 			return ret;
+-		}
+ 	}
+ 
+ 	mutex_lock(&state->lock);
+@@ -535,7 +533,7 @@ unlock:
+ 	if (!enable)
+ 		pm_runtime_put(&state->pdev->dev);
+ 
+-	return ret == 1 ? 0 : ret;
++	return ret;
+ }
+ 
+ static int s5pcsis_enum_mbus_code(struct v4l2_subdev *sd,
+diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c
+index 141bf5d97a044..ea87110d90738 100644
+--- a/drivers/media/platform/marvell-ccic/mcam-core.c
++++ b/drivers/media/platform/marvell-ccic/mcam-core.c
+@@ -918,6 +918,7 @@ static int mclk_enable(struct clk_hw *hw)
+ 	struct mcam_camera *cam = container_of(hw, struct mcam_camera, mclk_hw);
+ 	int mclk_src;
+ 	int mclk_div;
++	int ret;
+ 
+ 	/*
+ 	 * Clock the sensor appropriately.  Controller clock should
+@@ -931,7 +932,9 @@ static int mclk_enable(struct clk_hw *hw)
+ 		mclk_div = 2;
+ 	}
+ 
+-	pm_runtime_get_sync(cam->dev);
++	ret = pm_runtime_resume_and_get(cam->dev);
++	if (ret < 0)
++		return ret;
+ 	clk_enable(cam->clk[0]);
+ 	mcam_reg_write(cam, REG_CLKCTRL, (mclk_src << 29) | mclk_div);
+ 	mcam_ctlr_power_up(cam);
+@@ -1611,7 +1614,9 @@ static int mcam_v4l_open(struct file *filp)
+ 		ret = sensor_call(cam, core, s_power, 1);
+ 		if (ret)
+ 			goto out;
+-		pm_runtime_get_sync(cam->dev);
++		ret = pm_runtime_resume_and_get(cam->dev);
++		if (ret < 0)
++			goto out;
+ 		__mcam_cam_reset(cam);
+ 		mcam_set_config_needed(cam, 1);
+ 	}
+diff --git a/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c b/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c
+index ace4528cdc5ef..f14779e7596e5 100644
+--- a/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c
++++ b/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c
+@@ -391,12 +391,12 @@ static int mtk_mdp_m2m_start_streaming(struct vb2_queue *q, unsigned int count)
+ 	struct mtk_mdp_ctx *ctx = q->drv_priv;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(&ctx->mdp_dev->pdev->dev);
++	ret = pm_runtime_resume_and_get(&ctx->mdp_dev->pdev->dev);
+ 	if (ret < 0)
+-		mtk_mdp_dbg(1, "[%d] pm_runtime_get_sync failed:%d",
++		mtk_mdp_dbg(1, "[%d] pm_runtime_resume_and_get failed:%d",
+ 			    ctx->id, ret);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static void *mtk_mdp_m2m_buf_remove(struct mtk_mdp_ctx *ctx,
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
+index 147dfef1638d2..f87dc47d9e638 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
+@@ -126,7 +126,9 @@ static int fops_vcodec_open(struct file *file)
+ 	mtk_vcodec_dec_set_default_params(ctx);
+ 
+ 	if (v4l2_fh_is_singular(&ctx->fh)) {
+-		mtk_vcodec_dec_pw_on(&dev->pm);
++		ret = mtk_vcodec_dec_pw_on(&dev->pm);
++		if (ret < 0)
++			goto err_load_fw;
+ 		/*
+ 		 * Does nothing if firmware was already loaded.
+ 		 */
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c
+index ddee7046ce422..6038db96f71c3 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.c
+@@ -88,13 +88,15 @@ void mtk_vcodec_release_dec_pm(struct mtk_vcodec_dev *dev)
+ 	put_device(dev->pm.larbvdec);
+ }
+ 
+-void mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm)
++int mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm)
+ {
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(pm->dev);
++	ret = pm_runtime_resume_and_get(pm->dev);
+ 	if (ret)
+-		mtk_v4l2_err("pm_runtime_get_sync fail %d", ret);
++		mtk_v4l2_err("pm_runtime_resume_and_get fail %d", ret);
++
++	return ret;
+ }
+ 
+ void mtk_vcodec_dec_pw_off(struct mtk_vcodec_pm *pm)
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.h b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.h
+index 872d8bf8cfaf3..280aeaefdb651 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.h
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_pm.h
+@@ -12,7 +12,7 @@
+ int mtk_vcodec_init_dec_pm(struct mtk_vcodec_dev *dev);
+ void mtk_vcodec_release_dec_pm(struct mtk_vcodec_dev *dev);
+ 
+-void mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm);
++int mtk_vcodec_dec_pw_on(struct mtk_vcodec_pm *pm);
+ void mtk_vcodec_dec_pw_off(struct mtk_vcodec_pm *pm);
+ void mtk_vcodec_dec_clock_on(struct mtk_vcodec_pm *pm);
+ void mtk_vcodec_dec_clock_off(struct mtk_vcodec_pm *pm);
+diff --git a/drivers/media/platform/mtk-vpu/mtk_vpu.c b/drivers/media/platform/mtk-vpu/mtk_vpu.c
+index c8a56271b259e..7c4428cf14e6d 100644
+--- a/drivers/media/platform/mtk-vpu/mtk_vpu.c
++++ b/drivers/media/platform/mtk-vpu/mtk_vpu.c
+@@ -987,6 +987,12 @@ static int mtk_vpu_suspend(struct device *dev)
+ 		return ret;
+ 	}
+ 
++	if (!vpu_running(vpu)) {
++		vpu_clock_disable(vpu);
++		clk_unprepare(vpu->clk);
++		return 0;
++	}
++
+ 	mutex_lock(&vpu->vpu_mutex);
+ 	/* disable vpu timer interrupt */
+ 	vpu_cfg_writel(vpu, vpu_cfg_readl(vpu, VPU_INT_STATUS) | VPU_IDLE_STATE,
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index 54bac7ec14c50..91b15842c5558 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -78,22 +78,32 @@ static const struct hfi_core_ops venus_core_ops = {
+ 	.event_notify = venus_event_notify,
+ };
+ 
++#define RPM_WAIT_FOR_IDLE_MAX_ATTEMPTS 10
++
+ static void venus_sys_error_handler(struct work_struct *work)
+ {
+ 	struct venus_core *core =
+ 			container_of(work, struct venus_core, work.work);
+-	int ret = 0;
+-
+-	pm_runtime_get_sync(core->dev);
++	int ret, i, max_attempts = RPM_WAIT_FOR_IDLE_MAX_ATTEMPTS;
++	const char *err_msg = "";
++	bool failed = false;
++
++	ret = pm_runtime_get_sync(core->dev);
++	if (ret < 0) {
++		err_msg = "resume runtime PM";
++		max_attempts = 0;
++		failed = true;
++	}
+ 
+ 	hfi_core_deinit(core, true);
+ 
+-	dev_warn(core->dev, "system error has occurred, starting recovery!\n");
+-
+ 	mutex_lock(&core->lock);
+ 
+-	while (pm_runtime_active(core->dev_dec) || pm_runtime_active(core->dev_enc))
++	for (i = 0; i < max_attempts; i++) {
++		if (!pm_runtime_active(core->dev_dec) && !pm_runtime_active(core->dev_enc))
++			break;
+ 		msleep(10);
++	}
+ 
+ 	venus_shutdown(core);
+ 
+@@ -101,31 +111,55 @@ static void venus_sys_error_handler(struct work_struct *work)
+ 
+ 	pm_runtime_put_sync(core->dev);
+ 
+-	while (core->pmdomains[0] && pm_runtime_active(core->pmdomains[0]))
++	for (i = 0; i < max_attempts; i++) {
++		if (!core->pmdomains[0] || !pm_runtime_active(core->pmdomains[0]))
++			break;
+ 		usleep_range(1000, 1500);
++	}
+ 
+ 	hfi_reinit(core);
+ 
+-	pm_runtime_get_sync(core->dev);
++	ret = pm_runtime_get_sync(core->dev);
++	if (ret < 0) {
++		err_msg = "resume runtime PM";
++		failed = true;
++	}
+ 
+-	ret |= venus_boot(core);
+-	ret |= hfi_core_resume(core, true);
++	ret = venus_boot(core);
++	if (ret && !failed) {
++		err_msg = "boot Venus";
++		failed = true;
++	}
++
++	ret = hfi_core_resume(core, true);
++	if (ret && !failed) {
++		err_msg = "resume HFI";
++		failed = true;
++	}
+ 
+ 	enable_irq(core->irq);
+ 
+ 	mutex_unlock(&core->lock);
+ 
+-	ret |= hfi_core_init(core);
++	ret = hfi_core_init(core);
++	if (ret && !failed) {
++		err_msg = "init HFI";
++		failed = true;
++	}
+ 
+ 	pm_runtime_put_sync(core->dev);
+ 
+-	if (ret) {
++	if (failed) {
+ 		disable_irq_nosync(core->irq);
+-		dev_warn(core->dev, "recovery failed (%d)\n", ret);
++		dev_warn_ratelimited(core->dev,
++				     "System error has occurred, recovery failed to %s\n",
++				     err_msg);
+ 		schedule_delayed_work(&core->work, msecs_to_jiffies(10));
+ 		return;
+ 	}
+ 
++	dev_warn(core->dev, "system error has occurred (recovered)\n");
++
+ 	mutex_lock(&core->lock);
+ 	core->sys_error = false;
+ 	mutex_unlock(&core->lock);
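
The venus recovery rework replaces two unbounded `while (pm_runtime_active(...))` spins with loops bounded by RPM_WAIT_FOR_IDLE_MAX_ATTEMPTS, and replaces the lossy `ret |= ...` accumulation with a first-failure err_msg, so a wedged power domain can no longer hang the recovery worker and the eventual warning names the step that failed. A kernel-style sketch of the bounded-wait idiom, names hypothetical:

#include <linux/delay.h>
#include <linux/pm_runtime.h>

#define EXAMPLE_MAX_ATTEMPTS	10

/*
 * Wait for @dev to go runtime-idle, but give up after a bounded
 * number of attempts instead of spinning forever.
 */
static bool example_wait_idle(struct device *dev)
{
	int i;

	for (i = 0; i < EXAMPLE_MAX_ATTEMPTS; i++) {
		if (!pm_runtime_active(dev))
			return true;
		usleep_range(1000, 1500);
	}

	return false;	/* still active: report instead of hanging */
}
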
+diff --git a/drivers/media/platform/qcom/venus/hfi_cmds.c b/drivers/media/platform/qcom/venus/hfi_cmds.c
+index 11a8347e5f5c8..4b9dea7f6940e 100644
+--- a/drivers/media/platform/qcom/venus/hfi_cmds.c
++++ b/drivers/media/platform/qcom/venus/hfi_cmds.c
+@@ -1226,6 +1226,17 @@ pkt_session_set_property_4xx(struct hfi_session_set_property_pkt *pkt,
+ 		pkt->shdr.hdr.size += sizeof(u32) + sizeof(*hdr10);
+ 		break;
+ 	}
++	case HFI_PROPERTY_PARAM_VDEC_CONCEAL_COLOR: {
++		struct hfi_conceal_color_v4 *color = prop_data;
++		u32 *in = pdata;
++
++		color->conceal_color_8bit = *in & 0xff;
++		color->conceal_color_8bit |= ((*in >> 10) & 0xff) << 8;
++		color->conceal_color_8bit |= ((*in >> 20) & 0xff) << 16;
++		color->conceal_color_10bit = *in;
++		pkt->shdr.hdr.size += sizeof(u32) + sizeof(*color);
++		break;
++	}
+ 
+ 	case HFI_PROPERTY_CONFIG_VENC_MAX_BITRATE:
+ 	case HFI_PROPERTY_CONFIG_VDEC_POST_LOOP_DEBLOCKER:
+@@ -1279,17 +1290,6 @@ pkt_session_set_property_6xx(struct hfi_session_set_property_pkt *pkt,
+ 		pkt->shdr.hdr.size += sizeof(u32) + sizeof(*cq);
+ 		break;
+ 	}
+-	case HFI_PROPERTY_PARAM_VDEC_CONCEAL_COLOR: {
+-		struct hfi_conceal_color_v4 *color = prop_data;
+-		u32 *in = pdata;
+-
+-		color->conceal_color_8bit = *in & 0xff;
+-		color->conceal_color_8bit |= ((*in >> 10) & 0xff) << 8;
+-		color->conceal_color_8bit |= ((*in >> 20) & 0xff) << 16;
+-		color->conceal_color_10bit = *in;
+-		pkt->shdr.hdr.size += sizeof(u32) + sizeof(*color);
+-		break;
+-	}
+ 	default:
+ 		return pkt_session_set_property_4xx(pkt, cookie, ptype, pdata);
+ 	}
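
The hfi_cmds change moves HFI_PROPERTY_PARAM_VDEC_CONCEAL_COLOR handling from the 6xx packetizer into the 4xx one; 6xx sessions still reach it through the `default:` fallthrough to pkt_session_set_property_4xx(). The packing itself is easy to misread in diff form: the input word carries three 10-bit components at bit offsets 0, 10 and 20, the 8-bit conceal color keeps the low eight bits of each at offsets 0, 8 and 16, and the 10-bit color is the input passed through. A self-contained sketch of that transform:

#include <stdint.h>
#include <stdio.h>

/* Mirrors the packing in pkt_session_set_property_4xx() above. */
static void pack_conceal_color(uint32_t in, uint32_t *c8, uint32_t *c10)
{
	uint32_t v = in & 0xff;

	v |= ((in >> 10) & 0xff) << 8;
	v |= ((in >> 20) & 0xff) << 16;
	*c8 = v;
	*c10 = in;	/* 10-bit color is the raw input */
}

int main(void)
{
	uint32_t c8, c10;

	/* Three example 10-bit components: 0x3ff, 0x200, 0x001. */
	pack_conceal_color(0x3ffu | (0x200u << 10) | (0x001u << 20),
			   &c8, &c10);
	printf("8-bit: 0x%06x  10-bit: 0x%08x\n", c8, c10);
	return 0;
}
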
+diff --git a/drivers/media/platform/s5p-g2d/g2d.c b/drivers/media/platform/s5p-g2d/g2d.c
+index 15bcb7f6e113c..1cb5eaabf340b 100644
+--- a/drivers/media/platform/s5p-g2d/g2d.c
++++ b/drivers/media/platform/s5p-g2d/g2d.c
+@@ -276,6 +276,9 @@ static int g2d_release(struct file *file)
+ 	struct g2d_dev *dev = video_drvdata(file);
+ 	struct g2d_ctx *ctx = fh2ctx(file->private_data);
+ 
++	mutex_lock(&dev->mutex);
++	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
++	mutex_unlock(&dev->mutex);
+ 	v4l2_ctrl_handler_free(&ctx->ctrl_handler);
+ 	v4l2_fh_del(&ctx->fh);
+ 	v4l2_fh_exit(&ctx->fh);
+diff --git a/drivers/media/platform/s5p-jpeg/jpeg-core.c b/drivers/media/platform/s5p-jpeg/jpeg-core.c
+index 026111505f5a5..d402e456f27df 100644
+--- a/drivers/media/platform/s5p-jpeg/jpeg-core.c
++++ b/drivers/media/platform/s5p-jpeg/jpeg-core.c
+@@ -2566,11 +2566,8 @@ static void s5p_jpeg_buf_queue(struct vb2_buffer *vb)
+ static int s5p_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+ 	struct s5p_jpeg_ctx *ctx = vb2_get_drv_priv(q);
+-	int ret;
+-
+-	ret = pm_runtime_get_sync(ctx->jpeg->dev);
+ 
+-	return ret > 0 ? 0 : ret;
++	return pm_runtime_resume_and_get(ctx->jpeg->dev);
+ }
+ 
+ static void s5p_jpeg_stop_streaming(struct vb2_queue *q)
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc_dec.c b/drivers/media/platform/s5p-mfc/s5p_mfc_dec.c
+index a92a9ca6e87eb..c1d3bda8385b1 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc_dec.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc_dec.c
+@@ -172,6 +172,7 @@ static struct mfc_control controls[] = {
+ 		.type = V4L2_CTRL_TYPE_INTEGER,
+ 		.minimum = 0,
+ 		.maximum = 16383,
++		.step = 1,
+ 		.default_value = 0,
+ 	},
+ 	{
+diff --git a/drivers/media/platform/sh_vou.c b/drivers/media/platform/sh_vou.c
+index 4ac48441f22c4..ca4310e26c49e 100644
+--- a/drivers/media/platform/sh_vou.c
++++ b/drivers/media/platform/sh_vou.c
+@@ -1133,7 +1133,11 @@ static int sh_vou_open(struct file *file)
+ 	if (v4l2_fh_is_singular_file(file) &&
+ 	    vou_dev->status == SH_VOU_INITIALISING) {
+ 		/* First open */
+-		pm_runtime_get_sync(vou_dev->v4l2_dev.dev);
++		err = pm_runtime_resume_and_get(vou_dev->v4l2_dev.dev);
++		if (err < 0) {
++			v4l2_fh_release(file);
++			goto done_open;
++		}
+ 		err = sh_vou_hw_init(vou_dev);
+ 		if (err < 0) {
+ 			pm_runtime_put(vou_dev->v4l2_dev.dev);
+diff --git a/drivers/media/platform/sti/bdisp/Makefile b/drivers/media/platform/sti/bdisp/Makefile
+index caf7ccd193eaa..39ade0a347236 100644
+--- a/drivers/media/platform/sti/bdisp/Makefile
++++ b/drivers/media/platform/sti/bdisp/Makefile
+@@ -1,4 +1,4 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_VIDEO_STI_BDISP) := bdisp.o
++obj-$(CONFIG_VIDEO_STI_BDISP) += bdisp.o
+ 
+ bdisp-objs := bdisp-v4l2.o bdisp-hw.o bdisp-debug.o
+diff --git a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
+index 060ca85f64d5d..85288da9d2ae6 100644
+--- a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
++++ b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c
+@@ -499,7 +499,7 @@ static int bdisp_start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+ 	struct bdisp_ctx *ctx = q->drv_priv;
+ 	struct vb2_v4l2_buffer *buf;
+-	int ret = pm_runtime_get_sync(ctx->bdisp_dev->dev);
++	int ret = pm_runtime_resume_and_get(ctx->bdisp_dev->dev);
+ 
+ 	if (ret < 0) {
+ 		dev_err(ctx->bdisp_dev->dev, "failed to set runtime PM\n");
+@@ -1364,10 +1364,10 @@ static int bdisp_probe(struct platform_device *pdev)
+ 
+ 	/* Power management */
+ 	pm_runtime_enable(dev);
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to set PM\n");
+-		goto err_pm;
++		goto err_remove;
+ 	}
+ 
+ 	/* Filters */
+@@ -1395,6 +1395,7 @@ err_filter:
+ 	bdisp_hw_free_filters(bdisp->dev);
+ err_pm:
+ 	pm_runtime_put(dev);
++err_remove:
+ 	bdisp_debugfs_remove(bdisp);
+ 	v4l2_device_unregister(&bdisp->v4l2_dev);
+ err_clk:
+diff --git a/drivers/media/platform/sti/delta/Makefile b/drivers/media/platform/sti/delta/Makefile
+index 92b37e216f004..32412fa4c6328 100644
+--- a/drivers/media/platform/sti/delta/Makefile
++++ b/drivers/media/platform/sti/delta/Makefile
+@@ -1,5 +1,5 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_VIDEO_STI_DELTA_DRIVER) := st-delta.o
++obj-$(CONFIG_VIDEO_STI_DELTA_DRIVER) += st-delta.o
+ st-delta-y := delta-v4l2.o delta-mem.o delta-ipc.o delta-debug.o
+ 
+ # MJPEG support
+diff --git a/drivers/media/platform/sti/hva/Makefile b/drivers/media/platform/sti/hva/Makefile
+index 74b41ec52f976..b5a5478bdd016 100644
+--- a/drivers/media/platform/sti/hva/Makefile
++++ b/drivers/media/platform/sti/hva/Makefile
+@@ -1,4 +1,4 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_VIDEO_STI_HVA) := st-hva.o
++obj-$(CONFIG_VIDEO_STI_HVA) += st-hva.o
+ st-hva-y := hva-v4l2.o hva-hw.o hva-mem.o hva-h264.o
+ st-hva-$(CONFIG_VIDEO_STI_HVA_DEBUGFS) += hva-debugfs.o
+diff --git a/drivers/media/platform/sti/hva/hva-hw.c b/drivers/media/platform/sti/hva/hva-hw.c
+index f59811e27f51f..6eeee5017fac4 100644
+--- a/drivers/media/platform/sti/hva/hva-hw.c
++++ b/drivers/media/platform/sti/hva/hva-hw.c
+@@ -130,8 +130,7 @@ static irqreturn_t hva_hw_its_irq_thread(int irq, void *arg)
+ 	ctx_id = (hva->sts_reg & 0xFF00) >> 8;
+ 	if (ctx_id >= HVA_MAX_INSTANCES) {
+ 		dev_err(dev, "%s     %s: bad context identifier: %d\n",
+-			ctx->name, __func__, ctx_id);
+-		ctx->hw_err = true;
++			HVA_PREFIX, __func__, ctx_id);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/media/platform/sunxi/sun8i-rotate/sun8i_rotate.c b/drivers/media/platform/sunxi/sun8i-rotate/sun8i_rotate.c
+index 3f81dd17755cb..fbcca59a0517c 100644
+--- a/drivers/media/platform/sunxi/sun8i-rotate/sun8i_rotate.c
++++ b/drivers/media/platform/sunxi/sun8i-rotate/sun8i_rotate.c
+@@ -494,7 +494,7 @@ static int rotate_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 		struct device *dev = ctx->dev->dev;
+ 		int ret;
+ 
+-		ret = pm_runtime_get_sync(dev);
++		ret = pm_runtime_resume_and_get(dev);
+ 		if (ret < 0) {
+ 			dev_err(dev, "Failed to enable module\n");
+ 
+diff --git a/drivers/media/platform/video-mux.c b/drivers/media/platform/video-mux.c
+index 133122e385150..9bc0b4d8de095 100644
+--- a/drivers/media/platform/video-mux.c
++++ b/drivers/media/platform/video-mux.c
+@@ -362,7 +362,7 @@ static int video_mux_async_register(struct video_mux *vmux,
+ 
+ 	for (i = 0; i < num_input_pads; i++) {
+ 		struct v4l2_async_subdev *asd;
+-		struct fwnode_handle *ep;
++		struct fwnode_handle *ep, *remote_ep;
+ 
+ 		ep = fwnode_graph_get_endpoint_by_id(
+ 			dev_fwnode(vmux->subdev.dev), i, 0,
+@@ -370,6 +370,14 @@ static int video_mux_async_register(struct video_mux *vmux,
+ 		if (!ep)
+ 			continue;
+ 
++		/* Skip dangling endpoints for backwards compatibility */
++		remote_ep = fwnode_graph_get_remote_endpoint(ep);
++		if (!remote_ep) {
++			fwnode_handle_put(ep);
++			continue;
++		}
++		fwnode_handle_put(remote_ep);
++
+ 		asd = v4l2_async_notifier_add_fwnode_remote_subdev(
+ 			&vmux->notifier, ep, struct v4l2_async_subdev);
+ 
+diff --git a/drivers/media/usb/au0828/au0828-core.c b/drivers/media/usb/au0828/au0828-core.c
+index a8a72d5fbd129..caefac07af927 100644
+--- a/drivers/media/usb/au0828/au0828-core.c
++++ b/drivers/media/usb/au0828/au0828-core.c
+@@ -199,8 +199,8 @@ static int au0828_media_device_init(struct au0828_dev *dev,
+ 	struct media_device *mdev;
+ 
+ 	mdev = media_device_usb_allocate(udev, KBUILD_MODNAME, THIS_MODULE);
+-	if (!mdev)
+-		return -ENOMEM;
++	if (IS_ERR(mdev))
++		return PTR_ERR(mdev);
+ 
+ 	dev->media_dev = mdev;
+ #endif
+diff --git a/drivers/media/usb/cpia2/cpia2.h b/drivers/media/usb/cpia2/cpia2.h
+index 50835f5f7512c..57b7f1ea68da5 100644
+--- a/drivers/media/usb/cpia2/cpia2.h
++++ b/drivers/media/usb/cpia2/cpia2.h
+@@ -429,6 +429,7 @@ int cpia2_send_command(struct camera_data *cam, struct cpia2_command *cmd);
+ int cpia2_do_command(struct camera_data *cam,
+ 		     unsigned int command,
+ 		     unsigned char direction, unsigned char param);
++void cpia2_deinit_camera_struct(struct camera_data *cam, struct usb_interface *intf);
+ struct camera_data *cpia2_init_camera_struct(struct usb_interface *intf);
+ int cpia2_init_camera(struct camera_data *cam);
+ int cpia2_allocate_buffers(struct camera_data *cam);
+diff --git a/drivers/media/usb/cpia2/cpia2_core.c b/drivers/media/usb/cpia2/cpia2_core.c
+index e747548ab2869..b5a2d06fb356b 100644
+--- a/drivers/media/usb/cpia2/cpia2_core.c
++++ b/drivers/media/usb/cpia2/cpia2_core.c
+@@ -2163,6 +2163,18 @@ static void reset_camera_struct(struct camera_data *cam)
+ 	cam->height = cam->params.roi.height;
+ }
+ 
++/******************************************************************************
++ *
++ *  cpia2_deinit_camera_struct
++ *
++ *  Deinitialize camera struct
++ *****************************************************************************/
++void cpia2_deinit_camera_struct(struct camera_data *cam, struct usb_interface *intf)
++{
++	v4l2_device_unregister(&cam->v4l2_dev);
++	kfree(cam);
++}
++
+ /******************************************************************************
+  *
+  *  cpia2_init_camera_struct
+diff --git a/drivers/media/usb/cpia2/cpia2_usb.c b/drivers/media/usb/cpia2/cpia2_usb.c
+index 3ab80a7b44985..76aac06f9fb8e 100644
+--- a/drivers/media/usb/cpia2/cpia2_usb.c
++++ b/drivers/media/usb/cpia2/cpia2_usb.c
+@@ -844,15 +844,13 @@ static int cpia2_usb_probe(struct usb_interface *intf,
+ 	ret = set_alternate(cam, USBIF_CMDONLY);
+ 	if (ret < 0) {
+ 		ERR("%s: usb_set_interface error (ret = %d)\n", __func__, ret);
+-		kfree(cam);
+-		return ret;
++		goto alt_err;
+ 	}
+ 
+ 
+ 	if((ret = cpia2_init_camera(cam)) < 0) {
+ 		ERR("%s: failed to initialize cpia2 camera (ret = %d)\n", __func__, ret);
+-		kfree(cam);
+-		return ret;
++		goto alt_err;
+ 	}
+ 	LOG("  CPiA Version: %d.%02d (%d.%d)\n",
+ 	       cam->params.version.firmware_revision_hi,
+@@ -872,11 +870,14 @@ static int cpia2_usb_probe(struct usb_interface *intf,
+ 	ret = cpia2_register_camera(cam);
+ 	if (ret < 0) {
+ 		ERR("%s: Failed to register cpia2 camera (ret = %d)\n", __func__, ret);
+-		kfree(cam);
+-		return ret;
++		goto alt_err;
+ 	}
+ 
+ 	return 0;
++
++alt_err:
++	cpia2_deinit_camera_struct(cam, intf);
++	return ret;
+ }
+ 
+ /******************************************************************************
+diff --git a/drivers/media/usb/dvb-usb/cinergyT2-core.c b/drivers/media/usb/dvb-usb/cinergyT2-core.c
+index 969a7ec71dff7..4116ba5c45fcb 100644
+--- a/drivers/media/usb/dvb-usb/cinergyT2-core.c
++++ b/drivers/media/usb/dvb-usb/cinergyT2-core.c
+@@ -78,6 +78,8 @@ static int cinergyt2_frontend_attach(struct dvb_usb_adapter *adap)
+ 
+ 	ret = dvb_usb_generic_rw(d, st->data, 1, st->data, 3, 0);
+ 	if (ret < 0) {
++		if (adap->fe_adap[0].fe)
++			adap->fe_adap[0].fe->ops.release(adap->fe_adap[0].fe);
+ 		deb_rc("cinergyt2_power_ctrl() Failed to retrieve sleep state info\n");
+ 	}
+ 	mutex_unlock(&d->data_mutex);
+diff --git a/drivers/media/usb/dvb-usb/cxusb.c b/drivers/media/usb/dvb-usb/cxusb.c
+index 761992ad05e2a..7707de7bae7ca 100644
+--- a/drivers/media/usb/dvb-usb/cxusb.c
++++ b/drivers/media/usb/dvb-usb/cxusb.c
+@@ -1947,7 +1947,7 @@ static struct dvb_usb_device_properties cxusb_bluebird_lgz201_properties = {
+ 
+ 	.size_of_priv     = sizeof(struct cxusb_state),
+ 
+-	.num_adapters = 2,
++	.num_adapters = 1,
+ 	.adapter = {
+ 		{
+ 		.num_frontends = 1,
+diff --git a/drivers/media/usb/em28xx/em28xx-input.c b/drivers/media/usb/em28xx/em28xx-input.c
+index 5aa15a7a49def..59529cbf9cd0b 100644
+--- a/drivers/media/usb/em28xx/em28xx-input.c
++++ b/drivers/media/usb/em28xx/em28xx-input.c
+@@ -720,7 +720,8 @@ static int em28xx_ir_init(struct em28xx *dev)
+ 			dev->board.has_ir_i2c = 0;
+ 			dev_warn(&dev->intf->dev,
+ 				 "No i2c IR remote control device found.\n");
+-			return -ENODEV;
++			err = -ENODEV;
++			goto ref_put;
+ 		}
+ 	}
+ 
+@@ -735,7 +736,7 @@ static int em28xx_ir_init(struct em28xx *dev)
+ 
+ 	ir = kzalloc(sizeof(*ir), GFP_KERNEL);
+ 	if (!ir)
+-		return -ENOMEM;
++		goto ref_put;
+ 	rc = rc_allocate_device(RC_DRIVER_SCANCODE);
+ 	if (!rc)
+ 		goto error;
+@@ -839,6 +840,9 @@ error:
+ 	dev->ir = NULL;
+ 	rc_free_device(rc);
+ 	kfree(ir);
++ref_put:
++	em28xx_shutdown_buttons(dev);
++	kref_put(&dev->ref, em28xx_free_device);
+ 	return err;
+ }
+ 
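
The em28xx fix converts early `return`s in em28xx_ir_init() into a goto-based unwind, so the reference taken on the device earlier in the flow is always dropped via kref_put() (and the buttons shut down) on every failure path, not just the late ones. A kernel-style sketch of the single-exit unwind pattern, with hypothetical names and stubbed helpers:

#include <linux/kref.h>
#include <linux/slab.h>

struct example_ir { int proto; };
struct example_dev {
	struct kref ref;
	struct example_ir *ir;
};

static void example_free(struct kref *ref) { /* final teardown */ }
static int example_probe_remote(struct example_dev *dev) { return 0; }

static int example_init(struct example_dev *dev)
{
	struct example_ir *ir;
	int err;

	kref_get(&dev->ref);	/* held for the rest of init */

	err = example_probe_remote(dev);
	if (err)
		goto ref_put;	/* every failure funnels through one exit */

	ir = kzalloc(sizeof(*ir), GFP_KERNEL);
	if (!ir) {
		err = -ENOMEM;
		goto ref_put;
	}

	dev->ir = ir;
	return 0;

ref_put:
	kref_put(&dev->ref, example_free);
	return err;
}
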
+diff --git a/drivers/media/usb/gspca/gl860/gl860.c b/drivers/media/usb/gspca/gl860/gl860.c
+index 2c05ea2598e76..ce4ee8bc75c85 100644
+--- a/drivers/media/usb/gspca/gl860/gl860.c
++++ b/drivers/media/usb/gspca/gl860/gl860.c
+@@ -561,8 +561,8 @@ int gl860_RTx(struct gspca_dev *gspca_dev,
+ 					len, 400 + 200 * (len > 1));
+ 			memcpy(pdata, gspca_dev->usb_buf, len);
+ 		} else {
+-			r = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+-					req, pref, val, index, NULL, len, 400);
++			gspca_err(gspca_dev, "zero-length read request\n");
++			r = -EINVAL;
+ 		}
+ 	}
+ 
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+index f4a727918e352..d38dee1792e41 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+@@ -2676,9 +2676,8 @@ void pvr2_hdw_destroy(struct pvr2_hdw *hdw)
+ 		pvr2_stream_destroy(hdw->vid_stream);
+ 		hdw->vid_stream = NULL;
+ 	}
+-	pvr2_i2c_core_done(hdw);
+ 	v4l2_device_unregister(&hdw->v4l2_dev);
+-	pvr2_hdw_remove_usb_stuff(hdw);
++	pvr2_hdw_disconnect(hdw);
+ 	mutex_lock(&pvr2_unit_mtx);
+ 	do {
+ 		if ((hdw->unit_number >= 0) &&
+@@ -2705,6 +2704,7 @@ void pvr2_hdw_disconnect(struct pvr2_hdw *hdw)
+ {
+ 	pvr2_trace(PVR2_TRACE_INIT,"pvr2_hdw_disconnect(hdw=%p)",hdw);
+ 	LOCK_TAKE(hdw->big_lock);
++	pvr2_i2c_core_done(hdw);
+ 	LOCK_TAKE(hdw->ctl_lock);
+ 	pvr2_hdw_remove_usb_stuff(hdw);
+ 	LOCK_GIVE(hdw->ctl_lock);
+diff --git a/drivers/media/v4l2-core/v4l2-fh.c b/drivers/media/v4l2-core/v4l2-fh.c
+index 684574f58e82d..90eec79ee995a 100644
+--- a/drivers/media/v4l2-core/v4l2-fh.c
++++ b/drivers/media/v4l2-core/v4l2-fh.c
+@@ -96,6 +96,7 @@ int v4l2_fh_release(struct file *filp)
+ 		v4l2_fh_del(fh);
+ 		v4l2_fh_exit(fh);
+ 		kfree(fh);
++		filp->private_data = NULL;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
+index 2673f51aafa4d..07d823656ee65 100644
+--- a/drivers/media/v4l2-core/v4l2-ioctl.c
++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
+@@ -3072,8 +3072,8 @@ static int check_array_args(unsigned int cmd, void *parg, size_t *array_size,
+ 
+ static unsigned int video_translate_cmd(unsigned int cmd)
+ {
++#if !defined(CONFIG_64BIT) && defined(CONFIG_COMPAT_32BIT_TIME)
+ 	switch (cmd) {
+-#ifdef CONFIG_COMPAT_32BIT_TIME
+ 	case VIDIOC_DQEVENT_TIME32:
+ 		return VIDIOC_DQEVENT;
+ 	case VIDIOC_QUERYBUF_TIME32:
+@@ -3084,8 +3084,8 @@ static unsigned int video_translate_cmd(unsigned int cmd)
+ 		return VIDIOC_DQBUF;
+ 	case VIDIOC_PREPARE_BUF_TIME32:
+ 		return VIDIOC_PREPARE_BUF;
+-#endif
+ 	}
++#endif
+ 	if (in_compat_syscall())
+ 		return v4l2_compat_translate_cmd(cmd);
+ 
+@@ -3126,8 +3126,8 @@ static int video_get_user(void __user *arg, void *parg,
+ 	} else if (in_compat_syscall()) {
+ 		err = v4l2_compat_get_user(arg, parg, cmd);
+ 	} else {
++#if !defined(CONFIG_64BIT) && defined(CONFIG_COMPAT_32BIT_TIME)
+ 		switch (cmd) {
+-#ifdef CONFIG_COMPAT_32BIT_TIME
+ 		case VIDIOC_QUERYBUF_TIME32:
+ 		case VIDIOC_QBUF_TIME32:
+ 		case VIDIOC_DQBUF_TIME32:
+@@ -3155,8 +3155,8 @@ static int video_get_user(void __user *arg, void *parg,
+ 			};
+ 			break;
+ 		}
+-#endif
+ 		}
++#endif
+ 	}
+ 
+ 	/* zero out anything we don't copy from userspace */
+@@ -3181,8 +3181,8 @@ static int video_put_user(void __user *arg, void *parg,
+ 	if (in_compat_syscall())
+ 		return v4l2_compat_put_user(arg, parg, cmd);
+ 
++#if !defined(CONFIG_64BIT) && defined(CONFIG_COMPAT_32BIT_TIME)
+ 	switch (cmd) {
+-#ifdef CONFIG_COMPAT_32BIT_TIME
+ 	case VIDIOC_DQEVENT_TIME32: {
+ 		struct v4l2_event *ev = parg;
+ 		struct v4l2_event_time32 ev32;
+@@ -3230,8 +3230,8 @@ static int video_put_user(void __user *arg, void *parg,
+ 			return -EFAULT;
+ 		break;
+ 	}
+-#endif
+ 	}
++#endif
+ 
+ 	return 0;
+ }
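
The v4l2-ioctl hunks move the preprocessor guard from inside each switch to around it, and tighten it from `#ifdef CONFIG_COMPAT_32BIT_TIME` to `!defined(CONFIG_64BIT) && defined(CONFIG_COMPAT_32BIT_TIME)`: the *_TIME32 ioctl numbers are meant only for native 32-bit userspace, while 32-bit callers on a 64-bit kernel go through the in_compat_syscall() translation paths instead. The v4l2-subdev hunk just below drops its duplicated open-coded VIDIOC_DQEVENT_TIME32 handler for the same reason. The shape of the change, reduced to a single illustrative case:

/* The _TIME32 numbers are the kernel-internal aliases used in the
 * hunks above; only a native 32-bit kernel ever sees them. */
static unsigned int example_translate_cmd(unsigned int cmd)
{
#if !defined(CONFIG_64BIT) && defined(CONFIG_COMPAT_32BIT_TIME)
	switch (cmd) {
	case VIDIOC_DQEVENT_TIME32:
		return VIDIOC_DQEVENT;
	}
#endif
	return cmd;
}
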
+diff --git a/drivers/media/v4l2-core/v4l2-subdev.c b/drivers/media/v4l2-core/v4l2-subdev.c
+index 956dafab43d49..bf3aa92524584 100644
+--- a/drivers/media/v4l2-core/v4l2-subdev.c
++++ b/drivers/media/v4l2-core/v4l2-subdev.c
+@@ -428,30 +428,6 @@ static long subdev_do_ioctl(struct file *file, unsigned int cmd, void *arg)
+ 
+ 		return v4l2_event_dequeue(vfh, arg, file->f_flags & O_NONBLOCK);
+ 
+-	case VIDIOC_DQEVENT_TIME32: {
+-		struct v4l2_event_time32 *ev32 = arg;
+-		struct v4l2_event ev = { };
+-
+-		if (!(sd->flags & V4L2_SUBDEV_FL_HAS_EVENTS))
+-			return -ENOIOCTLCMD;
+-
+-		rval = v4l2_event_dequeue(vfh, &ev, file->f_flags & O_NONBLOCK);
+-
+-		*ev32 = (struct v4l2_event_time32) {
+-			.type		= ev.type,
+-			.pending	= ev.pending,
+-			.sequence	= ev.sequence,
+-			.timestamp.tv_sec  = ev.timestamp.tv_sec,
+-			.timestamp.tv_nsec = ev.timestamp.tv_nsec,
+-			.id		= ev.id,
+-		};
+-
+-		memcpy(&ev32->u, &ev.u, sizeof(ev.u));
+-		memcpy(&ev32->reserved, &ev.reserved, sizeof(ev.reserved));
+-
+-		return rval;
+-	}
+-
+ 	case VIDIOC_SUBSCRIBE_EVENT:
+ 		return v4l2_subdev_call(sd, core, subscribe_event, vfh, arg);
+ 
+diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c
+index 102dbb8080da5..29271ad4728a2 100644
+--- a/drivers/memstick/host/rtsx_usb_ms.c
++++ b/drivers/memstick/host/rtsx_usb_ms.c
+@@ -799,9 +799,9 @@ static int rtsx_usb_ms_drv_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ err_out:
+-	memstick_free_host(msh);
+ 	pm_runtime_disable(ms_dev(host));
+ 	pm_runtime_put_noidle(ms_dev(host));
++	memstick_free_host(msh);
+ 	return err;
+ }
+ 
+@@ -828,9 +828,6 @@ static int rtsx_usb_ms_drv_remove(struct platform_device *pdev)
+ 	}
+ 	mutex_unlock(&host->host_mutex);
+ 
+-	memstick_remove_host(msh);
+-	memstick_free_host(msh);
+-
+ 	/* Balance possible unbalanced usage count
+ 	 * e.g. unconditional module removal
+ 	 */
+@@ -838,10 +835,11 @@ static int rtsx_usb_ms_drv_remove(struct platform_device *pdev)
+ 		pm_runtime_put(ms_dev(host));
+ 
+ 	pm_runtime_disable(ms_dev(host));
+-	platform_set_drvdata(pdev, NULL);
+-
++	memstick_remove_host(msh);
+ 	dev_dbg(ms_dev(host),
+ 		": Realtek USB Memstick controller has been removed\n");
++	memstick_free_host(msh);
++	platform_set_drvdata(pdev, NULL);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index 5c7f2b1001911..5c408c1dc58ce 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -465,6 +465,7 @@ config MFD_MP2629
+ 	tristate "Monolithic Power Systems MP2629 ADC and Battery charger"
+ 	depends on I2C
+ 	select REGMAP_I2C
++	select MFD_CORE
+ 	help
+ 	  Select this option to enable support for Monolithic Power Systems
+ 	  battery charger. This provides ADC, thermal and battery charger power
+diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c
+index 6f02b8022c6d5..79f5c6a18815a 100644
+--- a/drivers/mfd/mfd-core.c
++++ b/drivers/mfd/mfd-core.c
+@@ -266,18 +266,18 @@ static int mfd_add_device(struct device *parent, int id,
+ 			if (has_acpi_companion(&pdev->dev)) {
+ 				ret = acpi_check_resource_conflict(&res[r]);
+ 				if (ret)
+-					goto fail_of_entry;
++					goto fail_res_conflict;
+ 			}
+ 		}
+ 	}
+ 
+ 	ret = platform_device_add_resources(pdev, res, cell->num_resources);
+ 	if (ret)
+-		goto fail_of_entry;
++		goto fail_res_conflict;
+ 
+ 	ret = platform_device_add(pdev);
+ 	if (ret)
+-		goto fail_of_entry;
++		goto fail_res_conflict;
+ 
+ 	if (cell->pm_runtime_no_callbacks)
+ 		pm_runtime_no_callbacks(&pdev->dev);
+@@ -286,13 +286,15 @@ static int mfd_add_device(struct device *parent, int id,
+ 
+ 	return 0;
+ 
++fail_res_conflict:
++	if (cell->swnode)
++		device_remove_software_node(&pdev->dev);
+ fail_of_entry:
+ 	list_for_each_entry_safe(of_entry, tmp, &mfd_of_node_list, list)
+ 		if (of_entry->dev == &pdev->dev) {
+ 			list_del(&of_entry->list);
+ 			kfree(of_entry);
+ 		}
+-	device_remove_software_node(&pdev->dev);
+ fail_alias:
+ 	regulator_bulk_unregister_supply_alias(&pdev->dev,
+ 					       cell->parent_supplies,
+@@ -358,11 +360,12 @@ static int mfd_remove_devices_fn(struct device *dev, void *data)
+ 	if (level && cell->level > *level)
+ 		return 0;
+ 
++	if (cell->swnode)
++		device_remove_software_node(&pdev->dev);
++
+ 	regulator_bulk_unregister_supply_alias(dev, cell->parent_supplies,
+ 					       cell->num_parent_supplies);
+ 
+-	device_remove_software_node(&pdev->dev);
+-
+ 	platform_device_unregister(pdev);
+ 	return 0;
+ }
+diff --git a/drivers/mfd/rn5t618.c b/drivers/mfd/rn5t618.c
+index 6ed04e6dbc783..384acb4594272 100644
+--- a/drivers/mfd/rn5t618.c
++++ b/drivers/mfd/rn5t618.c
+@@ -107,7 +107,7 @@ static int rn5t618_irq_init(struct rn5t618 *rn5t618)
+ 
+ 	ret = devm_regmap_add_irq_chip(rn5t618->dev, rn5t618->regmap,
+ 				       rn5t618->irq,
+-				       IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
++				       IRQF_TRIGGER_LOW | IRQF_ONESHOT,
+ 				       0, irq_chip, &rn5t618->irq_data);
+ 	if (ret)
+ 		dev_err(rn5t618->dev, "Failed to register IRQ chip\n");
+diff --git a/drivers/misc/eeprom/idt_89hpesx.c b/drivers/misc/eeprom/idt_89hpesx.c
+index 81c70e5bc168f..3e4a594c110b3 100644
+--- a/drivers/misc/eeprom/idt_89hpesx.c
++++ b/drivers/misc/eeprom/idt_89hpesx.c
+@@ -1126,11 +1126,10 @@ static void idt_get_fw_data(struct idt_89hpesx_dev *pdev)
+ 
+ 	device_for_each_child_node(dev, fwnode) {
+ 		ee_id = idt_ee_match_id(fwnode);
+-		if (!ee_id) {
+-			dev_warn(dev, "Skip unsupported EEPROM device");
+-			continue;
+-		} else
++		if (ee_id)
+ 			break;
++
++		dev_warn(dev, "Skip unsupported EEPROM device %pfw\n", fwnode);
+ 	}
+ 
+ 	/* If there is no fwnode EEPROM device, then set zero size */
+@@ -1161,6 +1160,7 @@ static void idt_get_fw_data(struct idt_89hpesx_dev *pdev)
+ 	else /* if (!fwnode_property_read_bool(node, "read-only")) */
+ 		pdev->eero = false;
+ 
++	fwnode_handle_put(fwnode);
+ 	dev_info(dev, "EEPROM of %d bytes found by 0x%x",
+ 		pdev->eesize, pdev->eeaddr);
+ }
+diff --git a/drivers/misc/habanalabs/common/habanalabs_drv.c b/drivers/misc/habanalabs/common/habanalabs_drv.c
+index 64d1530db9854..d15b912a347bd 100644
+--- a/drivers/misc/habanalabs/common/habanalabs_drv.c
++++ b/drivers/misc/habanalabs/common/habanalabs_drv.c
+@@ -464,6 +464,7 @@ static int hl_pci_probe(struct pci_dev *pdev,
+ 	return 0;
+ 
+ disable_device:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_set_drvdata(pdev, NULL);
+ 	destroy_hdev(hdev);
+ 
+diff --git a/drivers/misc/pvpanic/pvpanic-mmio.c b/drivers/misc/pvpanic/pvpanic-mmio.c
+index 4c08417760874..69b31f7adf4f1 100644
+--- a/drivers/misc/pvpanic/pvpanic-mmio.c
++++ b/drivers/misc/pvpanic/pvpanic-mmio.c
+@@ -93,7 +93,7 @@ static int pvpanic_mmio_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	pi = kmalloc(sizeof(*pi), GFP_ATOMIC);
++	pi = devm_kmalloc(dev, sizeof(*pi), GFP_ATOMIC);
+ 	if (!pi)
+ 		return -ENOMEM;
+ 
+@@ -114,7 +114,6 @@ static int pvpanic_mmio_remove(struct platform_device *pdev)
+ 	struct pvpanic_instance *pi = dev_get_drvdata(&pdev->dev);
+ 
+ 	pvpanic_remove(pi);
+-	kfree(pi);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/misc/pvpanic/pvpanic-pci.c b/drivers/misc/pvpanic/pvpanic-pci.c
+index 9ecc4e8559d5d..046ce4ecc1959 100644
+--- a/drivers/misc/pvpanic/pvpanic-pci.c
++++ b/drivers/misc/pvpanic/pvpanic-pci.c
+@@ -78,15 +78,15 @@ static int pvpanic_pci_probe(struct pci_dev *pdev,
+ 	void __iomem *base;
+ 	int ret;
+ 
+-	ret = pci_enable_device(pdev);
++	ret = pcim_enable_device(pdev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	base = pci_iomap(pdev, 0, 0);
++	base = pcim_iomap(pdev, 0, 0);
+ 	if (!base)
+ 		return -ENOMEM;
+ 
+-	pi = kmalloc(sizeof(*pi), GFP_ATOMIC);
++	pi = devm_kmalloc(&pdev->dev, sizeof(*pi), GFP_ATOMIC);
+ 	if (!pi)
+ 		return -ENOMEM;
+ 
+@@ -107,9 +107,6 @@ static void pvpanic_pci_remove(struct pci_dev *pdev)
+ 	struct pvpanic_instance *pi = dev_get_drvdata(&pdev->dev);
+ 
+ 	pvpanic_remove(pi);
+-	iounmap(pi->base);
+-	kfree(pi);
+-	pci_disable_device(pdev);
+ }
+ 
+ static struct pci_driver pvpanic_pci_driver = {
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 689eb9afeeed1..2518bc0856596 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1004,6 +1004,12 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
+ 
+ 	switch (mq_rq->drv_op) {
+ 	case MMC_DRV_OP_IOCTL:
++		if (card->ext_csd.cmdq_en) {
++			ret = mmc_cmdq_disable(card);
++			if (ret)
++				break;
++		}
++		fallthrough;
+ 	case MMC_DRV_OP_IOCTL_RPMB:
+ 		idata = mq_rq->drv_op_data;
+ 		for (i = 0, ret = 0; i < mq_rq->ioc_count; i++) {
+@@ -1014,6 +1020,8 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
+ 		/* Always switch back to main area after RPMB access */
+ 		if (rpmb_ioctl)
+ 			mmc_blk_part_switch(card, 0);
++		else if (card->reenable_cmdq && !card->ext_csd.cmdq_en)
++			mmc_cmdq_enable(card);
+ 		break;
+ 	case MMC_DRV_OP_BOOT_WP:
+ 		ret = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_BOOT_WP,
+diff --git a/drivers/mmc/host/sdhci-of-aspeed.c b/drivers/mmc/host/sdhci-of-aspeed.c
+index d001c51074a06..e4665a438ec56 100644
+--- a/drivers/mmc/host/sdhci-of-aspeed.c
++++ b/drivers/mmc/host/sdhci-of-aspeed.c
+@@ -150,7 +150,7 @@ static int aspeed_sdhci_phase_to_tap(struct device *dev, unsigned long rate_hz,
+ 
+ 	tap = div_u64(phase_period_ps, prop_delay_ps);
+ 	if (tap > ASPEED_SDHCI_NR_TAPS) {
+-		dev_warn(dev,
++		dev_dbg(dev,
+ 			 "Requested out of range phase tap %d for %d degrees of phase compensation at %luHz, clamping to tap %d\n",
+ 			 tap, phase_deg, rate_hz, ASPEED_SDHCI_NR_TAPS);
+ 		tap = ASPEED_SDHCI_NR_TAPS;
+diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c
+index 5dc36efff47ff..11e375579cfb9 100644
+--- a/drivers/mmc/host/sdhci-sprd.c
++++ b/drivers/mmc/host/sdhci-sprd.c
+@@ -393,6 +393,7 @@ static void sdhci_sprd_request_done(struct sdhci_host *host,
+ static struct sdhci_ops sdhci_sprd_ops = {
+ 	.read_l = sdhci_sprd_readl,
+ 	.write_l = sdhci_sprd_writel,
++	.write_w = sdhci_sprd_writew,
+ 	.write_b = sdhci_sprd_writeb,
+ 	.set_clock = sdhci_sprd_set_clock,
+ 	.get_max_clock = sdhci_sprd_get_max_clock,
+diff --git a/drivers/mmc/host/usdhi6rol0.c b/drivers/mmc/host/usdhi6rol0.c
+index 615f3d008af1e..b9b79b1089a00 100644
+--- a/drivers/mmc/host/usdhi6rol0.c
++++ b/drivers/mmc/host/usdhi6rol0.c
+@@ -1801,6 +1801,7 @@ static int usdhi6_probe(struct platform_device *pdev)
+ 
+ 	version = usdhi6_read(host, USDHI6_VERSION);
+ 	if ((version & 0xfff) != 0xa0d) {
++		ret = -EPERM;
+ 		dev_err(dev, "Version not recognized %x\n", version);
+ 		goto e_clk_off;
+ 	}
+diff --git a/drivers/mmc/host/via-sdmmc.c b/drivers/mmc/host/via-sdmmc.c
+index a1d0985600990..c32df5530b943 100644
+--- a/drivers/mmc/host/via-sdmmc.c
++++ b/drivers/mmc/host/via-sdmmc.c
+@@ -857,6 +857,9 @@ static void via_sdc_data_isr(struct via_crdr_mmc_host *host, u16 intmask)
+ {
+ 	BUG_ON(intmask == 0);
+ 
++	if (!host->data)
++		return;
++
+ 	if (intmask & VIA_CRDR_SDSTS_DT)
+ 		host->data->error = -ETIMEDOUT;
+ 	else if (intmask & (VIA_CRDR_SDSTS_RC | VIA_CRDR_SDSTS_WC))
+diff --git a/drivers/mmc/host/vub300.c b/drivers/mmc/host/vub300.c
+index 739cf63ef6e2f..4950d10d3a191 100644
+--- a/drivers/mmc/host/vub300.c
++++ b/drivers/mmc/host/vub300.c
+@@ -2279,7 +2279,7 @@ static int vub300_probe(struct usb_interface *interface,
+ 	if (retval < 0)
+ 		goto error5;
+ 	retval =
+-		usb_control_msg(vub300->udev, usb_rcvctrlpipe(vub300->udev, 0),
++		usb_control_msg(vub300->udev, usb_sndctrlpipe(vub300->udev, 0),
+ 				SET_ROM_WAIT_STATES,
+ 				USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 				firmware_rom_wait_states, 0x0000, NULL, 0, HZ);
+diff --git a/drivers/mtd/nand/raw/arasan-nand-controller.c b/drivers/mtd/nand/raw/arasan-nand-controller.c
+index 549aac00228eb..390f8d719c258 100644
+--- a/drivers/mtd/nand/raw/arasan-nand-controller.c
++++ b/drivers/mtd/nand/raw/arasan-nand-controller.c
+@@ -273,6 +273,37 @@ static int anfc_pkt_len_config(unsigned int len, unsigned int *steps,
+ 	return 0;
+ }
+ 
++static int anfc_select_target(struct nand_chip *chip, int target)
++{
++	struct anand *anand = to_anand(chip);
++	struct arasan_nfc *nfc = to_anfc(chip->controller);
++	int ret;
++
++	/* Update the controller timings and the potential ECC configuration */
++	writel_relaxed(anand->timings, nfc->base + DATA_INTERFACE_REG);
++
++	/* Update clock frequency */
++	if (nfc->cur_clk != anand->clk) {
++		clk_disable_unprepare(nfc->controller_clk);
++		ret = clk_set_rate(nfc->controller_clk, anand->clk);
++		if (ret) {
++			dev_err(nfc->dev, "Failed to change clock rate\n");
++			return ret;
++		}
++
++		ret = clk_prepare_enable(nfc->controller_clk);
++		if (ret) {
++			dev_err(nfc->dev,
++				"Failed to re-enable the controller clock\n");
++			return ret;
++		}
++
++		nfc->cur_clk = anand->clk;
++	}
++
++	return 0;
++}
++
+ /*
+  * When using the embedded hardware ECC engine, the controller is in charge of
+  * feeding the engine with, first, the ECC residue present in the data array.
+@@ -401,6 +432,18 @@ static int anfc_read_page_hw_ecc(struct nand_chip *chip, u8 *buf,
+ 	return 0;
+ }
+ 
++static int anfc_sel_read_page_hw_ecc(struct nand_chip *chip, u8 *buf,
++				     int oob_required, int page)
++{
++	int ret;
++
++	ret = anfc_select_target(chip, chip->cur_cs);
++	if (ret)
++		return ret;
++
++	return anfc_read_page_hw_ecc(chip, buf, oob_required, page);
++};
++
+ static int anfc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
+ 				  int oob_required, int page)
+ {
+@@ -461,6 +504,18 @@ static int anfc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
+ 	return ret;
+ }
+ 
++static int anfc_sel_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
++				      int oob_required, int page)
++{
++	int ret;
++
++	ret = anfc_select_target(chip, chip->cur_cs);
++	if (ret)
++		return ret;
++
++	return anfc_write_page_hw_ecc(chip, buf, oob_required, page);
++};
++
+ /* NAND framework ->exec_op() hooks and related helpers */
+ static int anfc_parse_instructions(struct nand_chip *chip,
+ 				   const struct nand_subop *subop,
+@@ -753,37 +808,6 @@ static const struct nand_op_parser anfc_op_parser = NAND_OP_PARSER(
+ 		NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)),
+ 	);
+ 
+-static int anfc_select_target(struct nand_chip *chip, int target)
+-{
+-	struct anand *anand = to_anand(chip);
+-	struct arasan_nfc *nfc = to_anfc(chip->controller);
+-	int ret;
+-
+-	/* Update the controller timings and the potential ECC configuration */
+-	writel_relaxed(anand->timings, nfc->base + DATA_INTERFACE_REG);
+-
+-	/* Update clock frequency */
+-	if (nfc->cur_clk != anand->clk) {
+-		clk_disable_unprepare(nfc->controller_clk);
+-		ret = clk_set_rate(nfc->controller_clk, anand->clk);
+-		if (ret) {
+-			dev_err(nfc->dev, "Failed to change clock rate\n");
+-			return ret;
+-		}
+-
+-		ret = clk_prepare_enable(nfc->controller_clk);
+-		if (ret) {
+-			dev_err(nfc->dev,
+-				"Failed to re-enable the controller clock\n");
+-			return ret;
+-		}
+-
+-		nfc->cur_clk = anand->clk;
+-	}
+-
+-	return 0;
+-}
+-
+ static int anfc_check_op(struct nand_chip *chip,
+ 			 const struct nand_operation *op)
+ {
+@@ -1007,8 +1031,8 @@ static int anfc_init_hw_ecc_controller(struct arasan_nfc *nfc,
+ 	if (!anand->bch)
+ 		return -EINVAL;
+ 
+-	ecc->read_page = anfc_read_page_hw_ecc;
+-	ecc->write_page = anfc_write_page_hw_ecc;
++	ecc->read_page = anfc_sel_read_page_hw_ecc;
++	ecc->write_page = anfc_sel_write_page_hw_ecc;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index 79da6b02e2095..f83525a1ab0e6 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -3030,8 +3030,10 @@ static int __maybe_unused marvell_nfc_resume(struct device *dev)
+ 		return ret;
+ 
+ 	ret = clk_prepare_enable(nfc->reg_clk);
+-	if (ret < 0)
++	if (ret < 0) {
++		clk_disable_unprepare(nfc->core_clk);
+ 		return ret;
++	}
+ 
+ 	/*
+ 	 * Reset nfc->selected_chip so the next command will cause the timing
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index 17f63f95f4a28..54ae540bc66b4 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -290,6 +290,8 @@ static int spinand_ondie_ecc_finish_io_req(struct nand_device *nand,
+ {
+ 	struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv;
+ 	struct spinand_device *spinand = nand_to_spinand(nand);
++	struct mtd_info *mtd = spinand_to_mtd(spinand);
++	int ret;
+ 
+ 	if (req->mode == MTD_OPS_RAW)
+ 		return 0;
+@@ -299,7 +301,13 @@ static int spinand_ondie_ecc_finish_io_req(struct nand_device *nand,
+ 		return 0;
+ 
+ 	/* Finish a page write: check the status, report errors/bitflips */
+-	return spinand_check_ecc_status(spinand, engine_conf->status);
++	ret = spinand_check_ecc_status(spinand, engine_conf->status);
++	if (ret == -EBADMSG)
++		mtd->ecc_stats.failed++;
++	else if (ret > 0)
++		mtd->ecc_stats.corrected += ret;
++
++	return ret;
+ }
+ 
+ static struct nand_ecc_engine_ops spinand_ondie_ecc_engine_ops = {
+@@ -620,13 +628,10 @@ static int spinand_mtd_read(struct mtd_info *mtd, loff_t from,
+ 		if (ret < 0 && ret != -EBADMSG)
+ 			break;
+ 
+-		if (ret == -EBADMSG) {
++		if (ret == -EBADMSG)
+ 			ecc_failed = true;
+-			mtd->ecc_stats.failed++;
+-		} else {
+-			mtd->ecc_stats.corrected += ret;
++		else
+ 			max_bitflips = max_t(unsigned int, max_bitflips, ret);
+-		}
+ 
+ 		ret = 0;
+ 		ops->retlen += iter.req.datalen;
+diff --git a/drivers/mtd/parsers/qcomsmempart.c b/drivers/mtd/parsers/qcomsmempart.c
+index d9083308f6ba6..06a818cd2433f 100644
+--- a/drivers/mtd/parsers/qcomsmempart.c
++++ b/drivers/mtd/parsers/qcomsmempart.c
+@@ -159,6 +159,15 @@ out_free_parts:
+ 	return ret;
+ }
+ 
++static void parse_qcomsmem_cleanup(const struct mtd_partition *pparts,
++				   int nr_parts)
++{
++	int i;
++
++	for (i = 0; i < nr_parts; i++)
++		kfree(pparts[i].name);
++}
++
+ static const struct of_device_id qcomsmem_of_match_table[] = {
+ 	{ .compatible = "qcom,smem-part" },
+ 	{},
+@@ -167,6 +176,7 @@ MODULE_DEVICE_TABLE(of, qcomsmem_of_match_table);
+ 
+ static struct mtd_part_parser mtd_parser_qcomsmem = {
+ 	.parse_fn = parse_qcomsmem_part,
++	.cleanup = parse_qcomsmem_cleanup,
+ 	.name = "qcomsmem",
+ 	.of_match_table = qcomsmem_of_match_table,
+ };
+diff --git a/drivers/mtd/parsers/redboot.c b/drivers/mtd/parsers/redboot.c
+index 91146bdc47132..3ccd6363ee8cb 100644
+--- a/drivers/mtd/parsers/redboot.c
++++ b/drivers/mtd/parsers/redboot.c
+@@ -45,6 +45,7 @@ static inline int redboot_checksum(struct fis_image_desc *img)
+ static void parse_redboot_of(struct mtd_info *master)
+ {
+ 	struct device_node *np;
++	struct device_node *npart;
+ 	u32 dirblock;
+ 	int ret;
+ 
+@@ -52,7 +53,11 @@ static void parse_redboot_of(struct mtd_info *master)
+ 	if (!np)
+ 		return;
+ 
+-	ret = of_property_read_u32(np, "fis-index-block", &dirblock);
++	npart = of_get_child_by_name(np, "partitions");
++	if (!npart)
++		return;
++
++	ret = of_property_read_u32(npart, "fis-index-block", &dirblock);
+ 	if (ret)
+ 		return;
+ 
+diff --git a/drivers/mtd/spi-nor/otp.c b/drivers/mtd/spi-nor/otp.c
+index fcf38d2603450..d8e68120a4b11 100644
+--- a/drivers/mtd/spi-nor/otp.c
++++ b/drivers/mtd/spi-nor/otp.c
+@@ -40,7 +40,6 @@ int spi_nor_otp_read_secr(struct spi_nor *nor, loff_t addr, size_t len, u8 *buf)
+ 	rdesc = nor->dirmap.rdesc;
+ 
+ 	nor->read_opcode = SPINOR_OP_RSECR;
+-	nor->addr_width = 3;
+ 	nor->read_dummy = 8;
+ 	nor->read_proto = SNOR_PROTO_1_1_1;
+ 	nor->dirmap.rdesc = NULL;
+@@ -84,7 +83,6 @@ int spi_nor_otp_write_secr(struct spi_nor *nor, loff_t addr, size_t len,
+ 	wdesc = nor->dirmap.wdesc;
+ 
+ 	nor->program_opcode = SPINOR_OP_PSECR;
+-	nor->addr_width = 3;
+ 	nor->write_proto = SNOR_PROTO_1_1_1;
+ 	nor->dirmap.wdesc = NULL;
+ 
+@@ -240,6 +238,29 @@ out:
+ 	return ret;
+ }
+ 
++static int spi_nor_mtd_otp_range_is_locked(struct spi_nor *nor, loff_t ofs,
++					   size_t len)
++{
++	const struct spi_nor_otp_ops *ops = nor->params->otp.ops;
++	unsigned int region;
++	int locked;
++
++	/*
++	 * If any of the affected OTP regions are locked the entire range is
++	 * considered locked.
++	 */
++	for (region = spi_nor_otp_offset_to_region(nor, ofs);
++	     region <= spi_nor_otp_offset_to_region(nor, ofs + len - 1);
++	     region++) {
++		locked = ops->is_locked(nor, region);
++		/* take the branch it is locked or in case of an error */
++		if (locked)
++			return locked;
++	}
++
++	return 0;
++}
++
+ static int spi_nor_mtd_otp_read_write(struct mtd_info *mtd, loff_t ofs,
+ 				      size_t total_len, size_t *retlen,
+ 				      const u8 *buf, bool is_write)
+@@ -255,14 +276,26 @@ static int spi_nor_mtd_otp_read_write(struct mtd_info *mtd, loff_t ofs,
+ 	if (ofs < 0 || ofs >= spi_nor_otp_size(nor))
+ 		return 0;
+ 
++	/* don't access beyond the end */
++	total_len = min_t(size_t, total_len, spi_nor_otp_size(nor) - ofs);
++
++	if (!total_len)
++		return 0;
++
+ 	ret = spi_nor_lock_and_prep(nor);
+ 	if (ret)
+ 		return ret;
+ 
+-	/* don't access beyond the end */
+-	total_len = min_t(size_t, total_len, spi_nor_otp_size(nor) - ofs);
++	if (is_write) {
++		ret = spi_nor_mtd_otp_range_is_locked(nor, ofs, total_len);
++		if (ret < 0) {
++			goto out;
++		} else if (ret) {
++			ret = -EROFS;
++			goto out;
++		}
++	}
+ 
+-	*retlen = 0;
+ 	while (total_len) {
+ 		/*
+ 		 * The OTP regions are mapped into a contiguous area starting
+diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
+index 74dc8e249faa3..9b12a8e110f43 100644
+--- a/drivers/net/Kconfig
++++ b/drivers/net/Kconfig
+@@ -431,6 +431,7 @@ config VSOCKMON
+ config MHI_NET
+ 	tristate "MHI network driver"
+ 	depends on MHI_BUS
++	select WWAN
+ 	help
+ 	  This is the network driver for MHI bus.  It can be used with
+ 	  QCOM based WWAN modems (like SDX55).  Say Y or M.
+diff --git a/drivers/net/can/peak_canfd/peak_canfd.c b/drivers/net/can/peak_canfd/peak_canfd.c
+index 00847cbaf7b62..d08718e98e110 100644
+--- a/drivers/net/can/peak_canfd/peak_canfd.c
++++ b/drivers/net/can/peak_canfd/peak_canfd.c
+@@ -351,8 +351,8 @@ static int pucan_handle_status(struct peak_canfd_priv *priv,
+ 				return err;
+ 		}
+ 
+-		/* start network queue (echo_skb array is empty) */
+-		netif_start_queue(ndev);
++		/* wake network queue up (echo_skb array is empty) */
++		netif_wake_queue(ndev);
+ 
+ 		return 0;
+ 	}
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index 5af69787d9d5d..0a37af4a3fa40 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -1053,7 +1053,6 @@ static void ems_usb_disconnect(struct usb_interface *intf)
+ 
+ 	if (dev) {
+ 		unregister_netdev(dev->netdev);
+-		free_candev(dev->netdev);
+ 
+ 		unlink_all_urbs(dev);
+ 
+@@ -1061,6 +1060,8 @@ static void ems_usb_disconnect(struct usb_interface *intf)
+ 
+ 		kfree(dev->intr_in_buffer);
+ 		kfree(dev->tx_msg_buffer);
++
++		free_candev(dev->netdev);
+ 	}
+ }
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index eca285aaf72f8..961fa6b75cad8 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -1618,9 +1618,6 @@ static int mv88e6xxx_port_check_hw_vlan(struct dsa_switch *ds, int port,
+ 	struct mv88e6xxx_vtu_entry vlan;
+ 	int i, err;
+ 
+-	if (!vid)
+-		return -EOPNOTSUPP;
+-
+ 	/* DSA and CPU ports have to be members of multiple vlans */
+ 	if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
+ 		return 0;
+@@ -2109,6 +2106,9 @@ static int mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
+ 	u8 member;
+ 	int err;
+ 
++	if (!vlan->vid)
++		return 0;
++
+ 	err = mv88e6xxx_port_vlan_prepare(ds, port, vlan);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index b88d9ef45a1f1..ebe4d33cda276 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -1798,6 +1798,12 @@ static int sja1105_reload_cbs(struct sja1105_private *priv)
+ {
+ 	int rc = 0, i;
+ 
++	/* The credit based shapers are only allocated if
++	 * CONFIG_NET_SCH_CBS is enabled.
++	 */
++	if (!priv->cbs)
++		return 0;
++
+ 	for (i = 0; i < priv->info->num_cbs_shapers; i++) {
+ 		struct sja1105_cbs_entry *cbs = &priv->cbs[i];
+ 
+diff --git a/drivers/net/ethernet/aeroflex/greth.c b/drivers/net/ethernet/aeroflex/greth.c
+index d77fafbc15301..c560ad06f0be3 100644
+--- a/drivers/net/ethernet/aeroflex/greth.c
++++ b/drivers/net/ethernet/aeroflex/greth.c
+@@ -1539,10 +1539,11 @@ static int greth_of_remove(struct platform_device *of_dev)
+ 	mdiobus_unregister(greth->mdio);
+ 
+ 	unregister_netdev(ndev);
+-	free_netdev(ndev);
+ 
+ 	of_iounmap(&of_dev->resource[0], greth->regs, resource_size(&of_dev->resource[0]));
+ 
++	free_netdev(ndev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
+index f5fba8b8cdea9..a47e2710487ec 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_macsec.h
+@@ -91,7 +91,7 @@ struct aq_macsec_txsc {
+ 	u32 hw_sc_idx;
+ 	unsigned long tx_sa_idx_busy;
+ 	const struct macsec_secy *sw_secy;
+-	u8 tx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN];
++	u8 tx_sa_key[MACSEC_NUM_AN][MACSEC_MAX_KEY_LEN];
+ 	struct aq_macsec_tx_sc_stats stats;
+ 	struct aq_macsec_tx_sa_stats tx_sa_stats[MACSEC_NUM_AN];
+ };
+@@ -101,7 +101,7 @@ struct aq_macsec_rxsc {
+ 	unsigned long rx_sa_idx_busy;
+ 	const struct macsec_secy *sw_secy;
+ 	const struct macsec_rx_sc *sw_rxsc;
+-	u8 rx_sa_key[MACSEC_NUM_AN][MACSEC_KEYID_LEN];
++	u8 rx_sa_key[MACSEC_NUM_AN][MACSEC_MAX_KEY_LEN];
+ 	struct aq_macsec_rx_sa_stats rx_sa_stats[MACSEC_NUM_AN];
+ };
+ 
+diff --git a/drivers/net/ethernet/broadcom/bcm4908_enet.c b/drivers/net/ethernet/broadcom/bcm4908_enet.c
+index 60d908507f51d..02a569500234c 100644
+--- a/drivers/net/ethernet/broadcom/bcm4908_enet.c
++++ b/drivers/net/ethernet/broadcom/bcm4908_enet.c
+@@ -174,9 +174,6 @@ static int bcm4908_dma_alloc_buf_descs(struct bcm4908_enet *enet,
+ 	if (!ring->slots)
+ 		goto err_free_buf_descs;
+ 
+-	ring->read_idx = 0;
+-	ring->write_idx = 0;
+-
+ 	return 0;
+ 
+ err_free_buf_descs:
+@@ -304,6 +301,9 @@ static void bcm4908_enet_dma_ring_init(struct bcm4908_enet *enet,
+ 
+ 	enet_write(enet, ring->st_ram_block + ENET_DMA_CH_STATE_RAM_BASE_DESC_PTR,
+ 		   (uint32_t)ring->dma_addr);
++
++	ring->read_idx = 0;
++	ring->write_idx = 0;
+ }
+ 
+ static void bcm4908_enet_dma_uninit(struct bcm4908_enet *enet)
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index fcca023f22e54..41f7f078cd27c 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -4296,3 +4296,4 @@ MODULE_AUTHOR("Broadcom Corporation");
+ MODULE_DESCRIPTION("Broadcom GENET Ethernet controller driver");
+ MODULE_ALIAS("platform:bcmgenet");
+ MODULE_LICENSE("GPL");
++MODULE_SOFTDEP("pre: mdio-bcm-unimac");
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index 701c12c9e0337..649c5c429bd7c 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -550,7 +550,7 @@ int be_process_mcc(struct be_adapter *adapter)
+ 	int num = 0, status = 0;
+ 	struct be_mcc_obj *mcc_obj = &adapter->mcc_obj;
+ 
+-	spin_lock_bh(&adapter->mcc_cq_lock);
++	spin_lock(&adapter->mcc_cq_lock);
+ 
+ 	while ((compl = be_mcc_compl_get(adapter))) {
+ 		if (compl->flags & CQE_FLAGS_ASYNC_MASK) {
+@@ -566,7 +566,7 @@ int be_process_mcc(struct be_adapter *adapter)
+ 	if (num)
+ 		be_cq_notify(adapter, mcc_obj->cq.id, mcc_obj->rearm_cq, num);
+ 
+-	spin_unlock_bh(&adapter->mcc_cq_lock);
++	spin_unlock(&adapter->mcc_cq_lock);
+ 	return status;
+ }
+ 
+@@ -581,7 +581,9 @@ static int be_mcc_wait_compl(struct be_adapter *adapter)
+ 		if (be_check_error(adapter, BE_ERROR_ANY))
+ 			return -EIO;
+ 
++		local_bh_disable();
+ 		status = be_process_mcc(adapter);
++		local_bh_enable();
+ 
+ 		if (atomic_read(&mcc_obj->q.used) == 0)
+ 			break;
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 7968568bbe214..361c1c87c1830 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -5501,7 +5501,9 @@ static void be_worker(struct work_struct *work)
+ 	 * mcc completions
+ 	 */
+ 	if (!netif_running(adapter->netdev)) {
++		local_bh_disable();
+ 		be_process_mcc(adapter);
++		local_bh_enable();
+ 		goto reschedule;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/ezchip/nps_enet.c b/drivers/net/ethernet/ezchip/nps_enet.c
+index e3954d8835e71..49957598301b5 100644
+--- a/drivers/net/ethernet/ezchip/nps_enet.c
++++ b/drivers/net/ethernet/ezchip/nps_enet.c
+@@ -607,7 +607,7 @@ static s32 nps_enet_probe(struct platform_device *pdev)
+ 
+ 	/* Get IRQ number */
+ 	priv->irq = platform_get_irq(pdev, 0);
+-	if (!priv->irq) {
++	if (priv->irq < 0) {
+ 		dev_err(dev, "failed to retrieve <irq Rx-Tx> value from device tree\n");
+ 		err = -ENODEV;
+ 		goto out_netdev;
+@@ -642,8 +642,8 @@ static s32 nps_enet_remove(struct platform_device *pdev)
+ 	struct nps_enet_priv *priv = netdev_priv(ndev);
+ 
+ 	unregister_netdev(ndev);
+-	free_netdev(ndev);
+ 	netif_napi_del(&priv->napi);
++	free_netdev(ndev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index 04421aec2dfd6..11dbbfd38770c 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -1830,14 +1830,17 @@ static int ftgmac100_probe(struct platform_device *pdev)
+ 	if (np && of_get_property(np, "use-ncsi", NULL)) {
+ 		if (!IS_ENABLED(CONFIG_NET_NCSI)) {
+ 			dev_err(&pdev->dev, "NCSI stack not enabled\n");
++			err = -EINVAL;
+ 			goto err_phy_connect;
+ 		}
+ 
+ 		dev_info(&pdev->dev, "Using NCSI interface\n");
+ 		priv->use_ncsi = true;
+ 		priv->ndev = ncsi_register_dev(netdev, ftgmac100_ncsi_handler);
+-		if (!priv->ndev)
++		if (!priv->ndev) {
++			err = -EINVAL;
+ 			goto err_phy_connect;
++		}
+ 	} else if (np && of_get_property(np, "phy-handle", NULL)) {
+ 		struct phy_device *phy;
+ 
+@@ -1856,6 +1859,7 @@ static int ftgmac100_probe(struct platform_device *pdev)
+ 					     &ftgmac100_adjust_link);
+ 		if (!phy) {
+ 			dev_err(&pdev->dev, "Failed to connect to phy\n");
++			err = -EINVAL;
+ 			goto err_phy_connect;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index bbc423e931223..79cefe85a799f 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -1295,8 +1295,8 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	gve_write_version(&reg_bar->driver_version);
+ 	/* Get max queues to alloc etherdev */
+-	max_rx_queues = ioread32be(&reg_bar->max_tx_queues);
+-	max_tx_queues = ioread32be(&reg_bar->max_rx_queues);
++	max_tx_queues = ioread32be(&reg_bar->max_tx_queues);
++	max_rx_queues = ioread32be(&reg_bar->max_rx_queues);
+ 	/* Alloc and setup the netdev and priv */
+ 	dev = alloc_etherdev_mqs(sizeof(*priv), max_tx_queues, max_rx_queues);
+ 	if (!dev) {
+diff --git a/drivers/net/ethernet/ibm/ehea/ehea_main.c b/drivers/net/ethernet/ibm/ehea/ehea_main.c
+index ea55314b209db..d105bfbc7c1c0 100644
+--- a/drivers/net/ethernet/ibm/ehea/ehea_main.c
++++ b/drivers/net/ethernet/ibm/ehea/ehea_main.c
+@@ -2618,10 +2618,8 @@ static int ehea_restart_qps(struct net_device *dev)
+ 	u16 dummy16 = 0;
+ 
+ 	cb0 = (void *)get_zeroed_page(GFP_KERNEL);
+-	if (!cb0) {
+-		ret = -ENOMEM;
+-		goto out;
+-	}
++	if (!cb0)
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < (port->num_def_qps); i++) {
+ 		struct ehea_port_res *pr =  &port->port_res[i];
+@@ -2641,6 +2639,7 @@ static int ehea_restart_qps(struct net_device *dev)
+ 					    cb0);
+ 		if (hret != H_SUCCESS) {
+ 			netdev_err(dev, "query_ehea_qp failed (1)\n");
++			ret = -EFAULT;
+ 			goto out;
+ 		}
+ 
+@@ -2653,6 +2652,7 @@ static int ehea_restart_qps(struct net_device *dev)
+ 					     &dummy64, &dummy16, &dummy16);
+ 		if (hret != H_SUCCESS) {
+ 			netdev_err(dev, "modify_ehea_qp failed (1)\n");
++			ret = -EFAULT;
+ 			goto out;
+ 		}
+ 
+@@ -2661,6 +2661,7 @@ static int ehea_restart_qps(struct net_device *dev)
+ 					    cb0);
+ 		if (hret != H_SUCCESS) {
+ 			netdev_err(dev, "query_ehea_qp failed (2)\n");
++			ret = -EFAULT;
+ 			goto out;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 5788bb956d733..ede65b32f8212 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -106,6 +106,8 @@ static void release_crq_queue(struct ibmvnic_adapter *);
+ static int __ibmvnic_set_mac(struct net_device *, u8 *);
+ static int init_crq_queue(struct ibmvnic_adapter *adapter);
+ static int send_query_phys_parms(struct ibmvnic_adapter *adapter);
++static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
++					 struct ibmvnic_sub_crq_queue *tx_scrq);
+ 
+ struct ibmvnic_stat {
+ 	char name[ETH_GSTRING_LEN];
+@@ -209,12 +211,11 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
+ 	mutex_lock(&adapter->fw_lock);
+ 	adapter->fw_done_rc = 0;
+ 	reinit_completion(&adapter->fw_done);
+-	rc = send_request_map(adapter, ltb->addr,
+-			      ltb->size, ltb->map_id);
++
++	rc = send_request_map(adapter, ltb->addr, ltb->size, ltb->map_id);
+ 	if (rc) {
+-		dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
+-		mutex_unlock(&adapter->fw_lock);
+-		return rc;
++		dev_err(dev, "send_request_map failed, rc = %d\n", rc);
++		goto out;
+ 	}
+ 
+ 	rc = ibmvnic_wait_for_completion(adapter, &adapter->fw_done, 10000);
+@@ -222,20 +223,23 @@ static int alloc_long_term_buff(struct ibmvnic_adapter *adapter,
+ 		dev_err(dev,
+ 			"Long term map request aborted or timed out,rc = %d\n",
+ 			rc);
+-		dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
+-		mutex_unlock(&adapter->fw_lock);
+-		return rc;
++		goto out;
+ 	}
+ 
+ 	if (adapter->fw_done_rc) {
+ 		dev_err(dev, "Couldn't map long term buffer,rc = %d\n",
+ 			adapter->fw_done_rc);
++		rc = -1;
++		goto out;
++	}
++	rc = 0;
++out:
++	if (rc) {
+ 		dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
+-		mutex_unlock(&adapter->fw_lock);
+-		return -1;
++		ltb->buff = NULL;
+ 	}
+ 	mutex_unlock(&adapter->fw_lock);
+-	return 0;
++	return rc;
+ }
+ 
+ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
+@@ -255,14 +259,44 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter,
+ 	    adapter->reset_reason != VNIC_RESET_TIMEOUT)
+ 		send_request_unmap(adapter, ltb->map_id);
+ 	dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr);
++	ltb->buff = NULL;
++	ltb->map_id = 0;
+ }
+ 
+-static int reset_long_term_buff(struct ibmvnic_long_term_buff *ltb)
++static int reset_long_term_buff(struct ibmvnic_adapter *adapter,
++				struct ibmvnic_long_term_buff *ltb)
+ {
+-	if (!ltb->buff)
+-		return -EINVAL;
++	struct device *dev = &adapter->vdev->dev;
++	int rc;
+ 
+ 	memset(ltb->buff, 0, ltb->size);
++
++	mutex_lock(&adapter->fw_lock);
++	adapter->fw_done_rc = 0;
++
++	reinit_completion(&adapter->fw_done);
++	rc = send_request_map(adapter, ltb->addr, ltb->size, ltb->map_id);
++	if (rc) {
++		mutex_unlock(&adapter->fw_lock);
++		return rc;
++	}
++
++	rc = ibmvnic_wait_for_completion(adapter, &adapter->fw_done, 10000);
++	if (rc) {
++		dev_info(dev,
++			 "Reset failed, long term map request timed out or aborted\n");
++		mutex_unlock(&adapter->fw_lock);
++		return rc;
++	}
++
++	if (adapter->fw_done_rc) {
++		dev_info(dev,
++			 "Reset failed, attempting to free and reallocate buffer\n");
++		free_long_term_buff(adapter, ltb);
++		mutex_unlock(&adapter->fw_lock);
++		return alloc_long_term_buff(adapter, ltb, ltb->size);
++	}
++	mutex_unlock(&adapter->fw_lock);
+ 	return 0;
+ }
+ 
+@@ -298,7 +332,14 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
+ 
+ 	rx_scrq = adapter->rx_scrq[pool->index];
+ 	ind_bufp = &rx_scrq->ind_buf;
+-	for (i = 0; i < count; ++i) {
++
++	/* netdev_skb_alloc() could have failed after we saved a few skbs
++	 * in the indir_buf and we would not have sent them to VIOS yet.
++	 * To account for them, start the loop at ind_bufp->index rather
++	 * than 0. If we pushed all the skbs to VIOS, ind_bufp->index will
++	 * be 0.
++	 */
++	for (i = ind_bufp->index; i < count; ++i) {
+ 		skb = netdev_alloc_skb(adapter->netdev, pool->buff_size);
+ 		if (!skb) {
+ 			dev_err(dev, "Couldn't replenish rx buff\n");
+@@ -484,7 +525,8 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter)
+ 						  rx_pool->size *
+ 						  rx_pool->buff_size);
+ 		} else {
+-			rc = reset_long_term_buff(&rx_pool->long_term_buff);
++			rc = reset_long_term_buff(adapter,
++						  &rx_pool->long_term_buff);
+ 		}
+ 
+ 		if (rc)
+@@ -607,11 +649,12 @@ static int init_rx_pools(struct net_device *netdev)
+ 	return 0;
+ }
+ 
+-static int reset_one_tx_pool(struct ibmvnic_tx_pool *tx_pool)
++static int reset_one_tx_pool(struct ibmvnic_adapter *adapter,
++			     struct ibmvnic_tx_pool *tx_pool)
+ {
+ 	int rc, i;
+ 
+-	rc = reset_long_term_buff(&tx_pool->long_term_buff);
++	rc = reset_long_term_buff(adapter, &tx_pool->long_term_buff);
+ 	if (rc)
+ 		return rc;
+ 
+@@ -638,10 +681,11 @@ static int reset_tx_pools(struct ibmvnic_adapter *adapter)
+ 
+ 	tx_scrqs = adapter->num_active_tx_pools;
+ 	for (i = 0; i < tx_scrqs; i++) {
+-		rc = reset_one_tx_pool(&adapter->tso_pool[i]);
++		ibmvnic_tx_scrq_clean_buffer(adapter, adapter->tx_scrq[i]);
++		rc = reset_one_tx_pool(adapter, &adapter->tso_pool[i]);
+ 		if (rc)
+ 			return rc;
+-		rc = reset_one_tx_pool(&adapter->tx_pool[i]);
++		rc = reset_one_tx_pool(adapter, &adapter->tx_pool[i]);
+ 		if (rc)
+ 			return rc;
+ 	}
+@@ -734,8 +778,11 @@ static int init_tx_pools(struct net_device *netdev)
+ 
+ 	adapter->tso_pool = kcalloc(tx_subcrqs,
+ 				    sizeof(struct ibmvnic_tx_pool), GFP_KERNEL);
+-	if (!adapter->tso_pool)
++	if (!adapter->tso_pool) {
++		kfree(adapter->tx_pool);
++		adapter->tx_pool = NULL;
+ 		return -1;
++	}
+ 
+ 	adapter->num_active_tx_pools = tx_subcrqs;
+ 
+@@ -1180,6 +1227,11 @@ static int __ibmvnic_open(struct net_device *netdev)
+ 
+ 	netif_tx_start_all_queues(netdev);
+ 
++	if (prev_state == VNIC_CLOSED) {
++		for (i = 0; i < adapter->req_rx_queues; i++)
++			napi_schedule(&adapter->napi[i]);
++	}
++
+ 	adapter->state = VNIC_OPEN;
+ 	return rc;
+ }
+@@ -1583,7 +1635,8 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
+ 	ind_bufp->index = 0;
+ 	if (atomic_sub_return(entries, &tx_scrq->used) <=
+ 	    (adapter->req_tx_entries_per_subcrq / 2) &&
+-	    __netif_subqueue_stopped(adapter->netdev, queue_num)) {
++	    __netif_subqueue_stopped(adapter->netdev, queue_num) &&
++	    !test_bit(0, &adapter->resetting)) {
+ 		netif_wake_subqueue(adapter->netdev, queue_num);
+ 		netdev_dbg(adapter->netdev, "Started queue %d\n",
+ 			   queue_num);
+@@ -1676,7 +1729,6 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 		tx_send_failed++;
+ 		tx_dropped++;
+ 		ret = NETDEV_TX_OK;
+-		ibmvnic_tx_scrq_flush(adapter, tx_scrq);
+ 		goto out;
+ 	}
+ 
+@@ -3140,6 +3192,7 @@ static void release_sub_crqs(struct ibmvnic_adapter *adapter, bool do_h_free)
+ 
+ 			netdev_dbg(adapter->netdev, "Releasing tx_scrq[%d]\n",
+ 				   i);
++			ibmvnic_tx_scrq_clean_buffer(adapter, adapter->tx_scrq[i]);
+ 			if (adapter->tx_scrq[i]->irq) {
+ 				free_irq(adapter->tx_scrq[i]->irq,
+ 					 adapter->tx_scrq[i]);
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 88e9035b75cf7..dc0ded7e5e614 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -5223,18 +5223,20 @@ static void e1000_watchdog_task(struct work_struct *work)
+ 			pm_runtime_resume(netdev->dev.parent);
+ 
+ 			/* Checking if MAC is in DMoff state*/
+-			pcim_state = er32(STATUS);
+-			while (pcim_state & E1000_STATUS_PCIM_STATE) {
+-				if (tries++ == dmoff_exit_timeout) {
+-					e_dbg("Error in exiting dmoff\n");
+-					break;
+-				}
+-				usleep_range(10000, 20000);
++			if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID) {
+ 				pcim_state = er32(STATUS);
+-
+-				/* Checking if MAC exited DMoff state */
+-				if (!(pcim_state & E1000_STATUS_PCIM_STATE))
+-					e1000_phy_hw_reset(&adapter->hw);
++				while (pcim_state & E1000_STATUS_PCIM_STATE) {
++					if (tries++ == dmoff_exit_timeout) {
++						e_dbg("Error in exiting dmoff\n");
++						break;
++					}
++					usleep_range(10000, 20000);
++					pcim_state = er32(STATUS);
++
++					/* Checking if MAC exited DMoff state */
++					if (!(pcim_state & E1000_STATUS_PCIM_STATE))
++						e1000_phy_hw_reset(&adapter->hw);
++				}
+ 			}
+ 
+ 			/* update snapshot of PHY registers on LSC */
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index ccd5b9486ea98..3e822bad48513 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -1262,8 +1262,7 @@ static int i40e_set_link_ksettings(struct net_device *netdev,
+ 			if (ethtool_link_ksettings_test_link_mode(&safe_ks,
+ 								  supported,
+ 								  Autoneg) &&
+-			    hw->phy.link_info.phy_type !=
+-			    I40E_PHY_TYPE_10GBASE_T) {
++			    hw->phy.media_type != I40E_MEDIA_TYPE_BASET) {
+ 				netdev_info(netdev, "Autoneg cannot be disabled on this phy\n");
+ 				err = -EINVAL;
+ 				goto done;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 704e474879c5b..f9fe500d4ec44 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -32,7 +32,7 @@ static void i40e_vsi_reinit_locked(struct i40e_vsi *vsi);
+ static void i40e_handle_reset_warning(struct i40e_pf *pf, bool lock_acquired);
+ static int i40e_add_vsi(struct i40e_vsi *vsi);
+ static int i40e_add_veb(struct i40e_veb *veb, struct i40e_vsi *vsi);
+-static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit);
++static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit, bool lock_acquired);
+ static int i40e_setup_misc_vector(struct i40e_pf *pf);
+ static void i40e_determine_queue_usage(struct i40e_pf *pf);
+ static int i40e_setup_pf_filter_control(struct i40e_pf *pf);
+@@ -8703,6 +8703,8 @@ int i40e_vsi_open(struct i40e_vsi *vsi)
+ 			 dev_driver_string(&pf->pdev->dev),
+ 			 dev_name(&pf->pdev->dev));
+ 		err = i40e_vsi_request_irq(vsi, int_name);
++		if (err)
++			goto err_setup_rx;
+ 
+ 	} else {
+ 		err = -EINVAL;
+@@ -10569,7 +10571,7 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ #endif /* CONFIG_I40E_DCB */
+ 	if (!lock_acquired)
+ 		rtnl_lock();
+-	ret = i40e_setup_pf_switch(pf, reinit);
++	ret = i40e_setup_pf_switch(pf, reinit, true);
+ 	if (ret)
+ 		goto end_unlock;
+ 
+@@ -14627,10 +14629,11 @@ int i40e_fetch_switch_configuration(struct i40e_pf *pf, bool printconfig)
+  * i40e_setup_pf_switch - Setup the HW switch on startup or after reset
+  * @pf: board private structure
+  * @reinit: if the Main VSI needs to re-initialized.
++ * @lock_acquired: indicates whether or not the lock has been acquired
+  *
+  * Returns 0 on success, negative value on failure
+  **/
+-static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
++static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit, bool lock_acquired)
+ {
+ 	u16 flags = 0;
+ 	int ret;
+@@ -14732,9 +14735,15 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
+ 
+ 	i40e_ptp_init(pf);
+ 
++	if (!lock_acquired)
++		rtnl_lock();
++
+ 	/* repopulate tunnel port filters */
+ 	udp_tunnel_nic_reset_ntf(pf->vsi[pf->lan_vsi]->netdev);
+ 
++	if (!lock_acquired)
++		rtnl_unlock();
++
+ 	return ret;
+ }
+ 
+@@ -15528,7 +15537,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 			pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
+ 	}
+ #endif
+-	err = i40e_setup_pf_switch(pf, false);
++	err = i40e_setup_pf_switch(pf, false, false);
+ 	if (err) {
+ 		dev_info(&pdev->dev, "setup_pf_switch failed: %d\n", err);
+ 		goto err_vsis;
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index d39c7639cdbab..b3041fe6c0aed 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -7588,6 +7588,8 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_port_probe:
++	fwnode_handle_put(port_fwnode);
++
+ 	i = 0;
+ 	fwnode_for_each_available_child_node(fwnode, port_fwnode) {
+ 		if (priv->port_list[i])
+diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
+index e967867828d89..9b48ae4bac39f 100644
+--- a/drivers/net/ethernet/marvell/pxa168_eth.c
++++ b/drivers/net/ethernet/marvell/pxa168_eth.c
+@@ -1528,6 +1528,7 @@ static int pxa168_eth_remove(struct platform_device *pdev)
+ 	struct net_device *dev = platform_get_drvdata(pdev);
+ 	struct pxa168_eth_private *pep = netdev_priv(dev);
+ 
++	cancel_work_sync(&pep->tx_timeout_task);
+ 	if (pep->htpr) {
+ 		dma_free_coherent(pep->dev->dev.parent, HASH_ADDR_TABLE_SIZE,
+ 				  pep->htpr, pep->htpr_dma);
+@@ -1539,7 +1540,6 @@ static int pxa168_eth_remove(struct platform_device *pdev)
+ 	clk_disable_unprepare(pep->clk);
+ 	mdiobus_unregister(pep->smi_bus);
+ 	mdiobus_free(pep->smi_bus);
+-	cancel_work_sync(&pep->tx_timeout_task);
+ 	unregister_netdev(dev);
+ 	free_netdev(dev);
+ 	return 0;
+diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
+index 04d067243457b..1ed25e48f6165 100644
+--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
++++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
+@@ -1230,8 +1230,10 @@ static int mana_create_txq(struct mana_port_context *apc,
+ 
+ 		cq->gdma_id = cq->gdma_cq->id;
+ 
+-		if (WARN_ON(cq->gdma_id >= gc->max_num_cqs))
+-			return -EINVAL;
++		if (WARN_ON(cq->gdma_id >= gc->max_num_cqs)) {
++			err = -EINVAL;
++			goto out;
++		}
+ 
+ 		gc->cq_table[cq->gdma_id] = cq->gdma_cq;
+ 
+diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+index 334af49e5add1..3dc29b282a884 100644
+--- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
++++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+@@ -2532,9 +2532,13 @@ static int pch_gbe_probe(struct pci_dev *pdev,
+ 	adapter->pdev = pdev;
+ 	adapter->hw.back = adapter;
+ 	adapter->hw.reg = pcim_iomap_table(pdev)[PCH_GBE_PCI_BAR];
++
+ 	adapter->pdata = (struct pch_gbe_privdata *)pci_id->driver_data;
+-	if (adapter->pdata && adapter->pdata->platform_init)
+-		adapter->pdata->platform_init(pdev);
++	if (adapter->pdata && adapter->pdata->platform_init) {
++		ret = adapter->pdata->platform_init(pdev);
++		if (ret)
++			goto err_free_netdev;
++	}
+ 
+ 	adapter->ptp_pdev =
+ 		pci_get_domain_bus_and_slot(pci_domain_nr(adapter->pdev->bus),
+@@ -2629,7 +2633,7 @@ err_free_netdev:
+  */
+ static int pch_gbe_minnow_platform_init(struct pci_dev *pdev)
+ {
+-	unsigned long flags = GPIOF_DIR_OUT | GPIOF_INIT_HIGH | GPIOF_EXPORT;
++	unsigned long flags = GPIOF_OUT_INIT_HIGH;
+ 	unsigned gpio = MINNOW_PHY_RESET_GPIO;
+ 	int ret;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index b6cd43eda7acc..8aa55612d0949 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -75,7 +75,7 @@ struct stmmac_tx_queue {
+ 	unsigned int cur_tx;
+ 	unsigned int dirty_tx;
+ 	dma_addr_t dma_tx_phy;
+-	u32 tx_tail_addr;
++	dma_addr_t tx_tail_addr;
+ 	u32 mss;
+ };
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index c87202cbd3d6d..91cd5073ddb26 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -5138,7 +5138,7 @@ read_again:
+ 
+ 		/* Buffer is good. Go on. */
+ 
+-		prefetch(page_address(buf->page));
++		prefetch(page_address(buf->page) + buf->page_offset);
+ 		if (buf->sec_page)
+ 			prefetch(page_address(buf->sec_page));
+ 
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 6a67b026df0b6..718539cdd2f2e 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -1506,12 +1506,12 @@ static void am65_cpsw_nuss_free_tx_chns(void *data)
+ 	for (i = 0; i < common->tx_ch_num; i++) {
+ 		struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
+ 
+-		if (!IS_ERR_OR_NULL(tx_chn->tx_chn))
+-			k3_udma_glue_release_tx_chn(tx_chn->tx_chn);
+-
+ 		if (!IS_ERR_OR_NULL(tx_chn->desc_pool))
+ 			k3_cppi_desc_pool_destroy(tx_chn->desc_pool);
+ 
++		if (!IS_ERR_OR_NULL(tx_chn->tx_chn))
++			k3_udma_glue_release_tx_chn(tx_chn->tx_chn);
++
+ 		memset(tx_chn, 0, sizeof(*tx_chn));
+ 	}
+ }
+@@ -1531,12 +1531,12 @@ void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common)
+ 
+ 		netif_napi_del(&tx_chn->napi_tx);
+ 
+-		if (!IS_ERR_OR_NULL(tx_chn->tx_chn))
+-			k3_udma_glue_release_tx_chn(tx_chn->tx_chn);
+-
+ 		if (!IS_ERR_OR_NULL(tx_chn->desc_pool))
+ 			k3_cppi_desc_pool_destroy(tx_chn->desc_pool);
+ 
++		if (!IS_ERR_OR_NULL(tx_chn->tx_chn))
++			k3_udma_glue_release_tx_chn(tx_chn->tx_chn);
++
+ 		memset(tx_chn, 0, sizeof(*tx_chn));
+ 	}
+ }
+@@ -1624,11 +1624,11 @@ static void am65_cpsw_nuss_free_rx_chns(void *data)
+ 
+ 	rx_chn = &common->rx_chns;
+ 
+-	if (!IS_ERR_OR_NULL(rx_chn->rx_chn))
+-		k3_udma_glue_release_rx_chn(rx_chn->rx_chn);
+-
+ 	if (!IS_ERR_OR_NULL(rx_chn->desc_pool))
+ 		k3_cppi_desc_pool_destroy(rx_chn->desc_pool);
++
++	if (!IS_ERR_OR_NULL(rx_chn->rx_chn))
++		k3_udma_glue_release_rx_chn(rx_chn->rx_chn);
+ }
+ 
+ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
+diff --git a/drivers/net/ieee802154/mac802154_hwsim.c b/drivers/net/ieee802154/mac802154_hwsim.c
+index da9135231c079..ebc976b7fcc2a 100644
+--- a/drivers/net/ieee802154/mac802154_hwsim.c
++++ b/drivers/net/ieee802154/mac802154_hwsim.c
+@@ -480,7 +480,7 @@ static int hwsim_del_edge_nl(struct sk_buff *msg, struct genl_info *info)
+ 	struct hwsim_edge *e;
+ 	u32 v0, v1;
+ 
+-	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] &&
++	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] ||
+ 	    !info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE])
+ 		return -EINVAL;
+ 
+@@ -715,6 +715,8 @@ static int hwsim_subscribe_all_others(struct hwsim_phy *phy)
+ 
+ 	return 0;
+ 
++sub_fail:
++	hwsim_edge_unsubscribe_me(phy);
+ me_fail:
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(e, &phy->edges, list) {
+@@ -722,8 +724,6 @@ me_fail:
+ 		hwsim_free_edge(e);
+ 	}
+ 	rcu_read_unlock();
+-sub_fail:
+-	hwsim_edge_unsubscribe_me(phy);
+ 	return -ENOMEM;
+ }
+ 
+@@ -824,12 +824,17 @@ err_pib:
+ static void hwsim_del(struct hwsim_phy *phy)
+ {
+ 	struct hwsim_pib *pib;
++	struct hwsim_edge *e;
+ 
+ 	hwsim_edge_unsubscribe_me(phy);
+ 
+ 	list_del(&phy->list);
+ 
+ 	rcu_read_lock();
++	list_for_each_entry_rcu(e, &phy->edges, list) {
++		list_del_rcu(&e->list);
++		hwsim_free_edge(e);
++	}
+ 	pib = rcu_dereference(phy->pib);
+ 	rcu_read_unlock();
+ 
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 92425e1fd70c0..93dc48b9b4f24 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -1819,7 +1819,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
+ 		ctx.sa.rx_sa = rx_sa;
+ 		ctx.secy = secy;
+ 		memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
+-		       MACSEC_KEYID_LEN);
++		       secy->key_len);
+ 
+ 		err = macsec_offload(ops->mdo_add_rxsa, &ctx);
+ 		if (err)
+@@ -2061,7 +2061,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
+ 		ctx.sa.tx_sa = tx_sa;
+ 		ctx.secy = secy;
+ 		memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
+-		       MACSEC_KEYID_LEN);
++		       secy->key_len);
+ 
+ 		err = macsec_offload(ops->mdo_add_txsa, &ctx);
+ 		if (err)
+diff --git a/drivers/net/phy/mscc/mscc_macsec.c b/drivers/net/phy/mscc/mscc_macsec.c
+index 10be266e48e8b..b7b2521c73fb6 100644
+--- a/drivers/net/phy/mscc/mscc_macsec.c
++++ b/drivers/net/phy/mscc/mscc_macsec.c
+@@ -501,7 +501,7 @@ static u32 vsc8584_macsec_flow_context_id(struct macsec_flow *flow)
+ }
+ 
+ /* Derive the AES key to get a key for the hash autentication */
+-static int vsc8584_macsec_derive_key(const u8 key[MACSEC_KEYID_LEN],
++static int vsc8584_macsec_derive_key(const u8 key[MACSEC_MAX_KEY_LEN],
+ 				     u16 key_len, u8 hkey[16])
+ {
+ 	const u8 input[AES_BLOCK_SIZE] = {0};
+diff --git a/drivers/net/phy/mscc/mscc_macsec.h b/drivers/net/phy/mscc/mscc_macsec.h
+index 9c6d25e36de2a..453304bae7784 100644
+--- a/drivers/net/phy/mscc/mscc_macsec.h
++++ b/drivers/net/phy/mscc/mscc_macsec.h
+@@ -81,7 +81,7 @@ struct macsec_flow {
+ 	/* Highest takes precedence [0..15] */
+ 	u8 priority;
+ 
+-	u8 key[MACSEC_KEYID_LEN];
++	u8 key[MACSEC_MAX_KEY_LEN];
+ 
+ 	union {
+ 		struct macsec_rx_sa *rx_sa;
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 28a6c4cfe9b8c..414afcb0a23f8 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1366,22 +1366,22 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ 	int orig_iif = skb->skb_iif;
+ 	bool need_strict = rt6_need_strict(&ipv6_hdr(skb)->daddr);
+ 	bool is_ndisc = ipv6_ndisc_frame(skb);
+-	bool is_ll_src;
+ 
+ 	/* loopback, multicast & non-ND link-local traffic; do not push through
+ 	 * packet taps again. Reset pkt_type for upper layers to process skb.
+-	 * for packets with lladdr src, however, skip so that the dst can be
+-	 * determine at input using original ifindex in the case that daddr
+-	 * needs strict
++	 * For strict packets with a source LLA, determine the dst using the
++	 * original ifindex.
+ 	 */
+-	is_ll_src = ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL;
+-	if (skb->pkt_type == PACKET_LOOPBACK ||
+-	    (need_strict && !is_ndisc && !is_ll_src)) {
++	if (skb->pkt_type == PACKET_LOOPBACK || (need_strict && !is_ndisc)) {
+ 		skb->dev = vrf_dev;
+ 		skb->skb_iif = vrf_dev->ifindex;
+ 		IP6CB(skb)->flags |= IP6SKB_L3SLAVE;
++
+ 		if (skb->pkt_type == PACKET_LOOPBACK)
+ 			skb->pkt_type = PACKET_HOST;
++		else if (ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)
++			vrf_ip6_input_dst(skb, vrf_dev, orig_iif);
++
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 02a14f1b938ad..5a8df5a195cb5 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -2164,6 +2164,7 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
+ 	struct neighbour *n;
+ 	struct nd_msg *msg;
+ 
++	rcu_read_lock();
+ 	in6_dev = __in6_dev_get(dev);
+ 	if (!in6_dev)
+ 		goto out;
+@@ -2215,6 +2216,7 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni)
+ 	}
+ 
+ out:
++	rcu_read_unlock();
+ 	consume_skb(skb);
+ 	return NETDEV_TX_OK;
+ }
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 5ce4f8d038b9b..c272b290fa73d 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -5592,6 +5592,7 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
+ 
+ 	if (arvif->nohwcrypt &&
+ 	    !test_bit(ATH10K_FLAG_RAW_MODE, &ar->dev_flags)) {
++		ret = -EINVAL;
+ 		ath10k_warn(ar, "cryptmode module param needed for sw crypto\n");
+ 		goto err;
+ 	}
+diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
+index e7fde635e0eef..71878ab35b93c 100644
+--- a/drivers/net/wireless/ath/ath10k/pci.c
++++ b/drivers/net/wireless/ath/ath10k/pci.c
+@@ -3685,8 +3685,10 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
+ 			ath10k_pci_soc_read32(ar, SOC_CHIP_ID_ADDRESS);
+ 		if (bus_params.chip_id != 0xffffffff) {
+ 			if (!ath10k_pci_chip_is_supported(pdev->device,
+-							  bus_params.chip_id))
++							  bus_params.chip_id)) {
++				ret = -ENODEV;
+ 				goto err_unsupported;
++			}
+ 		}
+ 	}
+ 
+@@ -3697,11 +3699,15 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
+ 	}
+ 
+ 	bus_params.chip_id = ath10k_pci_soc_read32(ar, SOC_CHIP_ID_ADDRESS);
+-	if (bus_params.chip_id == 0xffffffff)
++	if (bus_params.chip_id == 0xffffffff) {
++		ret = -ENODEV;
+ 		goto err_unsupported;
++	}
+ 
+-	if (!ath10k_pci_chip_is_supported(pdev->device, bus_params.chip_id))
+-		goto err_free_irq;
++	if (!ath10k_pci_chip_is_supported(pdev->device, bus_params.chip_id)) {
++		ret = -ENODEV;
++		goto err_unsupported;
++	}
+ 
+ 	ret = ath10k_core_register(ar, &bus_params);
+ 	if (ret) {
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index 77ce3347ab86d..595e83fe09904 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -488,7 +488,8 @@ static int ath11k_core_fetch_board_data_api_n(struct ath11k_base *ab,
+ 		if (len < ALIGN(ie_len, 4)) {
+ 			ath11k_err(ab, "invalid length for board ie_id %d ie_len %zu len %zu\n",
+ 				   ie_id, ie_len, len);
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto err;
+ 		}
+ 
+ 		switch (ie_id) {
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 9d0ff150ec30f..eb52332dbe3f1 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -5379,11 +5379,6 @@ ath11k_mac_update_vif_chan(struct ath11k *ar,
+ 		if (WARN_ON(!arvif->is_up))
+ 			continue;
+ 
+-		ret = ath11k_mac_setup_bcn_tmpl(arvif);
+-		if (ret)
+-			ath11k_warn(ab, "failed to update bcn tmpl during csa: %d\n",
+-				    ret);
+-
+ 		ret = ath11k_mac_vdev_restart(arvif, &vifs[i].new_ctx->def);
+ 		if (ret) {
+ 			ath11k_warn(ab, "failed to restart vdev %d: %d\n",
+@@ -5391,6 +5386,11 @@ ath11k_mac_update_vif_chan(struct ath11k *ar,
+ 			continue;
+ 		}
+ 
++		ret = ath11k_mac_setup_bcn_tmpl(arvif);
++		if (ret)
++			ath11k_warn(ab, "failed to update bcn tmpl during csa: %d\n",
++				    ret);
++
+ 		ret = ath11k_wmi_vdev_up(arvif->ar, arvif->vdev_id, arvif->aid,
+ 					 arvif->bssid);
+ 		if (ret) {
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index 45f6402478b50..97c3a53f9cef2 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -307,6 +307,11 @@ static int ath_reset_internal(struct ath_softc *sc, struct ath9k_channel *hchan)
+ 		hchan = ah->curchan;
+ 	}
+ 
++	if (!hchan) {
++		fastcc = false;
++		hchan = ath9k_cmn_get_channel(sc->hw, ah, &sc->cur_chan->chandef);
++	}
++
+ 	if (!ath_prepare_reset(sc))
+ 		fastcc = false;
+ 
+diff --git a/drivers/net/wireless/ath/carl9170/Kconfig b/drivers/net/wireless/ath/carl9170/Kconfig
+index b2d760873992f..ba9bea79381c5 100644
+--- a/drivers/net/wireless/ath/carl9170/Kconfig
++++ b/drivers/net/wireless/ath/carl9170/Kconfig
+@@ -16,13 +16,11 @@ config CARL9170
+ 
+ config CARL9170_LEDS
+ 	bool "SoftLED Support"
+-	depends on CARL9170
+-	select MAC80211_LEDS
+-	select LEDS_CLASS
+-	select NEW_LEDS
+ 	default y
++	depends on CARL9170
++	depends on MAC80211_LEDS
+ 	help
+-	  This option is necessary, if you want your device' LEDs to blink
++	  This option is necessary if you want your device's LEDs to blink.
+ 
+ 	  Say Y, unless you need the LEDs for firmware debugging.
+ 
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index afb4877eaad8f..dabed4e3ca457 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -293,23 +293,16 @@ static int wcn36xx_start(struct ieee80211_hw *hw)
+ 		goto out_free_dxe_pool;
+ 	}
+ 
+-	wcn->hal_buf = kmalloc(WCN36XX_HAL_BUF_SIZE, GFP_KERNEL);
+-	if (!wcn->hal_buf) {
+-		wcn36xx_err("Failed to allocate smd buf\n");
+-		ret = -ENOMEM;
+-		goto out_free_dxe_ctl;
+-	}
+-
+ 	ret = wcn36xx_smd_load_nv(wcn);
+ 	if (ret) {
+ 		wcn36xx_err("Failed to push NV to chip\n");
+-		goto out_free_smd_buf;
++		goto out_free_dxe_ctl;
+ 	}
+ 
+ 	ret = wcn36xx_smd_start(wcn);
+ 	if (ret) {
+ 		wcn36xx_err("Failed to start chip\n");
+-		goto out_free_smd_buf;
++		goto out_free_dxe_ctl;
+ 	}
+ 
+ 	if (!wcn36xx_is_fw_version(wcn, 1, 2, 2, 24)) {
+@@ -336,8 +329,6 @@ static int wcn36xx_start(struct ieee80211_hw *hw)
+ 
+ out_smd_stop:
+ 	wcn36xx_smd_stop(wcn);
+-out_free_smd_buf:
+-	kfree(wcn->hal_buf);
+ out_free_dxe_ctl:
+ 	wcn36xx_dxe_free_ctl_blks(wcn);
+ out_free_dxe_pool:
+@@ -372,8 +363,6 @@ static void wcn36xx_stop(struct ieee80211_hw *hw)
+ 
+ 	wcn36xx_dxe_free_mem_pools(wcn);
+ 	wcn36xx_dxe_free_ctl_blks(wcn);
+-
+-	kfree(wcn->hal_buf);
+ }
+ 
+ static void wcn36xx_change_ps(struct wcn36xx *wcn, bool enable)
+@@ -1401,6 +1390,12 @@ static int wcn36xx_probe(struct platform_device *pdev)
+ 	mutex_init(&wcn->hal_mutex);
+ 	mutex_init(&wcn->scan_lock);
+ 
++	wcn->hal_buf = devm_kmalloc(wcn->dev, WCN36XX_HAL_BUF_SIZE, GFP_KERNEL);
++	if (!wcn->hal_buf) {
++		ret = -ENOMEM;
++		goto out_wq;
++	}
++
+ 	ret = dma_set_mask_and_coherent(wcn->dev, DMA_BIT_MASK(32));
+ 	if (ret < 0) {
+ 		wcn36xx_err("failed to set DMA mask: %d\n", ret);
+diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c
+index 6746fd206d2a9..1ff2679963f06 100644
+--- a/drivers/net/wireless/ath/wil6210/cfg80211.c
++++ b/drivers/net/wireless/ath/wil6210/cfg80211.c
+@@ -2842,9 +2842,7 @@ void wil_p2p_wdev_free(struct wil6210_priv *wil)
+ 	wil->radio_wdev = wil->main_ndev->ieee80211_ptr;
+ 	mutex_unlock(&wil->vif_mutex);
+ 	if (p2p_wdev) {
+-		wiphy_lock(wil->wiphy);
+ 		cfg80211_unregister_wdev(p2p_wdev);
+-		wiphy_unlock(wil->wiphy);
+ 		kfree(p2p_wdev);
+ 	}
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index f4405d7861b69..d8822a01d277e 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -2767,8 +2767,9 @@ brcmf_cfg80211_get_station(struct wiphy *wiphy, struct net_device *ndev,
+ 	struct brcmf_sta_info_le sta_info_le;
+ 	u32 sta_flags;
+ 	u32 is_tdls_peer;
+-	s32 total_rssi;
+-	s32 count_rssi;
++	s32 total_rssi_avg = 0;
++	s32 total_rssi = 0;
++	s32 count_rssi = 0;
+ 	int rssi;
+ 	u32 i;
+ 
+@@ -2834,25 +2835,27 @@ brcmf_cfg80211_get_station(struct wiphy *wiphy, struct net_device *ndev,
+ 			sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_BYTES);
+ 			sinfo->rx_bytes = le64_to_cpu(sta_info_le.rx_tot_bytes);
+ 		}
+-		total_rssi = 0;
+-		count_rssi = 0;
+ 		for (i = 0; i < BRCMF_ANT_MAX; i++) {
+-			if (sta_info_le.rssi[i]) {
+-				sinfo->chain_signal_avg[count_rssi] =
+-					sta_info_le.rssi[i];
+-				sinfo->chain_signal[count_rssi] =
+-					sta_info_le.rssi[i];
+-				total_rssi += sta_info_le.rssi[i];
+-				count_rssi++;
+-			}
++			if (sta_info_le.rssi[i] == 0 ||
++			    sta_info_le.rx_lastpkt_rssi[i] == 0)
++				continue;
++			sinfo->chains |= BIT(count_rssi);
++			sinfo->chain_signal[count_rssi] =
++				sta_info_le.rx_lastpkt_rssi[i];
++			sinfo->chain_signal_avg[count_rssi] =
++				sta_info_le.rssi[i];
++			total_rssi += sta_info_le.rx_lastpkt_rssi[i];
++			total_rssi_avg += sta_info_le.rssi[i];
++			count_rssi++;
+ 		}
+ 		if (count_rssi) {
+-			sinfo->filled |= BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL);
+-			sinfo->chains = count_rssi;
+-
+ 			sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL);
+-			total_rssi /= count_rssi;
+-			sinfo->signal = total_rssi;
++			sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG);
++			sinfo->filled |= BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL);
++			sinfo->filled |=
++				BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL_AVG);
++			sinfo->signal = total_rssi / count_rssi;
++			sinfo->signal_avg = total_rssi_avg / count_rssi;
+ 		} else if (test_bit(BRCMF_VIF_STATUS_CONNECTED,
+ 			&ifp->vif->sme_state)) {
+ 			memset(&scb_val, 0, sizeof(scb_val));
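
The brcmfmac hunk above reworks get_station so that a chain only counts when both
the averaged and the last-packet RSSI are non-zero, records a chain bitmap, and
derives signal and signal_avg from separate running totals. A standalone sketch of
that bookkeeping (names and sample values are invented for illustration):

/*
 * Skip antennas reporting 0, record per-chain values, and average only
 * over the chains that actually contributed.
 */
#include <stdio.h>

#define ANT_MAX 4

int main(void)
{
	/* rssi[] is the long-term average, last[] the last packet, per antenna */
	int rssi[ANT_MAX] = { -54, 0, -61, 0 };
	int last[ANT_MAX] = { -50, 0, -66, 0 };
	unsigned int chains = 0;
	int total = 0, total_avg = 0, count = 0;

	for (int i = 0; i < ANT_MAX; i++) {
		if (rssi[i] == 0 || last[i] == 0)
			continue;	/* unpopulated antenna */
		chains |= 1u << count;
		total += last[i];
		total_avg += rssi[i];
		count++;
	}

	if (count)
		printf("chains=0x%x signal=%d signal_avg=%d\n",
		       chains, total / count, total_avg / count);
	return 0;
}
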
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 16ed325795a8b..faf5f8e5eee33 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -626,8 +626,8 @@ BRCMF_FW_DEF(4373, "brcmfmac4373-sdio");
+ BRCMF_FW_DEF(43012, "brcmfmac43012-sdio");
+ 
+ /* firmware config files */
+-MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcm/brcmfmac*-sdio.*.txt");
+-MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcm/brcmfmac*-pcie.*.txt");
++MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-sdio.*.txt");
++MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.*.txt");
+ 
+ static const struct brcmf_firmware_mapping brcmf_sdio_fwnames[] = {
+ 	BRCMF_FW_ENTRY(BRCM_CC_43143_CHIP_ID, 0xFFFFFFFF, 43143),
+@@ -4162,7 +4162,6 @@ static int brcmf_sdio_bus_reset(struct device *dev)
+ 	if (ret) {
+ 		brcmf_err("Failed to probe after sdio device reset: ret %d\n",
+ 			  ret);
+-		brcmf_sdiod_remove(sdiodev);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
+index 39f3af2d0439b..eadac0f5590fc 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c
+@@ -1220,6 +1220,7 @@ static int brcms_bcma_probe(struct bcma_device *pdev)
+ {
+ 	struct brcms_info *wl;
+ 	struct ieee80211_hw *hw;
++	int ret;
+ 
+ 	dev_info(&pdev->dev, "mfg %x core %x rev %d class %d irq %d\n",
+ 		 pdev->id.manuf, pdev->id.id, pdev->id.rev, pdev->id.class,
+@@ -1244,11 +1245,16 @@ static int brcms_bcma_probe(struct bcma_device *pdev)
+ 	wl = brcms_attach(pdev);
+ 	if (!wl) {
+ 		pr_err("%s: brcms_attach failed!\n", __func__);
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err_free_ieee80211;
+ 	}
+ 	brcms_led_register(wl);
+ 
+ 	return 0;
++
++err_free_ieee80211:
++	ieee80211_free_hw(hw);
++	return ret;
+ }
+ 
+ static int brcms_suspend(struct bcma_device *pdev)
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h
+index e4f91bce222d8..61d3d4e0b7d94 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.h
+@@ -1,7 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /******************************************************************************
+  *
+- * Copyright(c) 2020 Intel Corporation
++ * Copyright(c) 2020-2021 Intel Corporation
+  *
+  *****************************************************************************/
+ 
+@@ -10,7 +10,7 @@
+ 
+ #include "fw/notif-wait.h"
+ 
+-#define MVM_UCODE_PNVM_TIMEOUT	(HZ / 10)
++#define MVM_UCODE_PNVM_TIMEOUT	(HZ / 4)
+ 
+ int iwl_pnvm_load(struct iwl_trans *trans,
+ 		  struct iwl_notif_wait_data *notif_wait);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 1ad621d13ad3a..0a13c2bda2eed 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1032,6 +1032,9 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ 	if (WARN_ON_ONCE(mvmsta->sta_id == IWL_MVM_INVALID_STA))
+ 		return -1;
+ 
++	if (unlikely(ieee80211_is_any_nullfunc(fc)) && sta->he_cap.has_he)
++		return -1;
++
+ 	if (unlikely(ieee80211_is_probe_resp(fc)))
+ 		iwl_mvm_probe_resp_set_noa(mvm, skb);
+ 
+diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
+index 94228b316df1b..46517515ba728 100644
+--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
+@@ -1231,7 +1231,7 @@ static int mwifiex_pcie_delete_cmdrsp_buf(struct mwifiex_adapter *adapter)
+ static int mwifiex_pcie_alloc_sleep_cookie_buf(struct mwifiex_adapter *adapter)
+ {
+ 	struct pcie_service_card *card = adapter->card;
+-	u32 tmp;
++	u32 *cookie;
+ 
+ 	card->sleep_cookie_vbase = dma_alloc_coherent(&card->dev->dev,
+ 						      sizeof(u32),
+@@ -1242,13 +1242,11 @@ static int mwifiex_pcie_alloc_sleep_cookie_buf(struct mwifiex_adapter *adapter)
+ 			    "dma_alloc_coherent failed!\n");
+ 		return -ENOMEM;
+ 	}
++	cookie = (u32 *)card->sleep_cookie_vbase;
+ 	/* Init val of Sleep Cookie */
+-	tmp = FW_AWAKE_COOKIE;
+-	put_unaligned(tmp, card->sleep_cookie_vbase);
++	*cookie = FW_AWAKE_COOKIE;
+ 
+-	mwifiex_dbg(adapter, INFO,
+-		    "alloc_scook: sleep cookie=0x%x\n",
+-		    get_unaligned(card->sleep_cookie_vbase));
++	mwifiex_dbg(adapter, INFO, "alloc_scook: sleep cookie=0x%x\n", *cookie);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index aa42af9ebfd6a..ae2191371f511 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -411,6 +411,9 @@ mt7615_mcu_rx_csa_notify(struct mt7615_dev *dev, struct sk_buff *skb)
+ 
+ 	c = (struct mt7615_mcu_csa_notify *)skb->data;
+ 
++	if (c->omac_idx > EXT_BSSID_MAX)
++		return;
++
+ 	if (ext_phy && ext_phy->omac_mask & BIT_ULL(c->omac_idx))
+ 		mphy = dev->mt76.phy2;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
+index d7cbef752f9fd..cc278d8cb8886 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
+@@ -131,20 +131,21 @@ int mt7615_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ 			  struct mt76_tx_info *tx_info)
+ {
+ 	struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76);
+-	struct mt7615_sta *msta = container_of(wcid, struct mt7615_sta, wcid);
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx_info->skb);
+ 	struct ieee80211_key_conf *key = info->control.hw_key;
+ 	int pid, id;
+ 	u8 *txwi = (u8 *)txwi_ptr;
+ 	struct mt76_txwi_cache *t;
++	struct mt7615_sta *msta;
+ 	void *txp;
+ 
++	msta = wcid ? container_of(wcid, struct mt7615_sta, wcid) : NULL;
+ 	if (!wcid)
+ 		wcid = &dev->mt76.global_wcid;
+ 
+ 	pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
+ 
+-	if (info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) {
++	if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) && msta) {
+ 		struct mt7615_phy *phy = &dev->phy;
+ 
+ 		if ((info->hw_queue & MT_TX_HW_QUEUE_EXT_PHY) && mdev->phy2)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c b/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
+index f8d3673c2cae8..7010101f6b147 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
+@@ -191,14 +191,15 @@ int mt7663_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ 				   struct ieee80211_sta *sta,
+ 				   struct mt76_tx_info *tx_info)
+ {
+-	struct mt7615_sta *msta = container_of(wcid, struct mt7615_sta, wcid);
+ 	struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76);
+ 	struct sk_buff *skb = tx_info->skb;
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
++	struct mt7615_sta *msta;
+ 	int pad;
+ 
++	msta = wcid ? container_of(wcid, struct mt7615_sta, wcid) : NULL;
+ 	if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) &&
+-	    !msta->rate_probe) {
++	    msta && !msta->rate_probe) {
+ 		/* request to configure sampling rate */
+ 		spin_lock_bh(&dev->mt76.lock);
+ 		mt7615_mac_set_rates(&dev->phy, msta, &info->control.rates[0],
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac.h b/drivers/net/wireless/mediatek/mt76/mt76_connac.h
+index 6c889b90fd12a..c26cfef425ed8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac.h
+@@ -12,7 +12,7 @@
+ #define MT76_CONNAC_MAX_SCAN_MATCH		16
+ 
+ #define MT76_CONNAC_COREDUMP_TIMEOUT		(HZ / 20)
+-#define MT76_CONNAC_COREDUMP_SZ			(128 * 1024)
++#define MT76_CONNAC_COREDUMP_SZ			(1300 * 1024)
+ 
+ enum {
+ 	CMD_CBW_20MHZ = IEEE80211_STA_RX_BW_20,
+@@ -45,6 +45,7 @@ enum {
+ 
+ struct mt76_connac_pm {
+ 	bool enable;
++	bool suspended;
+ 
+ 	spinlock_t txq_lock;
+ 	struct {
+@@ -127,8 +128,12 @@ mt76_connac_pm_unref(struct mt76_connac_pm *pm)
+ static inline bool
+ mt76_connac_skip_fw_pmctrl(struct mt76_phy *phy, struct mt76_connac_pm *pm)
+ {
++	struct mt76_dev *dev = phy->dev;
+ 	bool ret;
+ 
++	if (dev->token_count)
++		return true;
++
+ 	spin_lock_bh(&pm->wake.lock);
+ 	ret = pm->wake.count || test_and_set_bit(MT76_STATE_PM, &phy->state);
+ 	spin_unlock_bh(&pm->wake.lock);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+index 6f180c92d4132..5f2705fbd6803 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+@@ -17,6 +17,9 @@ int mt76_connac_pm_wake(struct mt76_phy *phy, struct mt76_connac_pm *pm)
+ 	if (!test_bit(MT76_STATE_PM, &phy->state))
+ 		return 0;
+ 
++	if (pm->suspended)
++		return 0;
++
+ 	queue_work(dev->wq, &pm->wake_work);
+ 	if (!wait_event_timeout(pm->wait,
+ 				!test_bit(MT76_STATE_PM, &phy->state),
+@@ -40,6 +43,9 @@ void mt76_connac_power_save_sched(struct mt76_phy *phy,
+ 	if (!pm->enable)
+ 		return;
+ 
++	if (pm->suspended)
++		return;
++
+ 	pm->last_activity = jiffies;
+ 
+ 	if (!test_bit(MT76_STATE_PM, &phy->state)) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index 619561606f96d..eb19721f9d79a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -1939,7 +1939,7 @@ mt76_connac_mcu_set_wow_pattern(struct mt76_dev *dev,
+ 	ptlv->index = index;
+ 
+ 	memcpy(ptlv->pattern, pattern->pattern, pattern->pattern_len);
+-	memcpy(ptlv->mask, pattern->mask, pattern->pattern_len / 8);
++	memcpy(ptlv->mask, pattern->mask, DIV_ROUND_UP(pattern->pattern_len, 8));
+ 
+ 	return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD_SUSPEND, true);
+ }
+@@ -1974,14 +1974,17 @@ mt76_connac_mcu_set_wow_ctrl(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 	};
+ 
+ 	if (wowlan->magic_pkt)
+-		req.wow_ctrl_tlv.trigger |= BIT(0);
++		req.wow_ctrl_tlv.trigger |= UNI_WOW_DETECT_TYPE_MAGIC;
+ 	if (wowlan->disconnect)
+-		req.wow_ctrl_tlv.trigger |= BIT(2);
++		req.wow_ctrl_tlv.trigger |= (UNI_WOW_DETECT_TYPE_DISCONNECT |
++					     UNI_WOW_DETECT_TYPE_BCN_LOST);
+ 	if (wowlan->nd_config) {
+ 		mt76_connac_mcu_sched_scan_req(phy, vif, wowlan->nd_config);
+-		req.wow_ctrl_tlv.trigger |= BIT(5);
++		req.wow_ctrl_tlv.trigger |= UNI_WOW_DETECT_TYPE_SCH_SCAN_HIT;
+ 		mt76_connac_mcu_sched_scan_enable(phy, vif, suspend);
+ 	}
++	if (wowlan->n_patterns)
++		req.wow_ctrl_tlv.trigger |= UNI_WOW_DETECT_TYPE_BITMAP;
+ 
+ 	if (mt76_is_mmio(dev))
+ 		req.wow_ctrl_tlv.wakeup_hif = WOW_PCIE;
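
The memcpy fix above matters because a wake-on-WLAN mask carries one bit per
pattern byte: rounding the byte count down drops the mask bits of any pattern whose
length is not a multiple of 8. A small demonstration of the difference
(DIV_ROUND_UP is reimplemented locally so the example is self-contained):

/*
 * A pattern of N bytes needs ceil(N / 8) mask bytes; plain division
 * truncates the partial trailing byte away.
 */
#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	for (unsigned int len = 1; len <= 17; len += 8)
		printf("pattern_len=%2u  len/8=%u  DIV_ROUND_UP=%u\n",
		       len, len / 8, DIV_ROUND_UP(len, 8));
	return 0;
}
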
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index a1096861d04a3..3bcae732872ed 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -590,6 +590,14 @@ enum {
+ 	UNI_OFFLOAD_OFFLOAD_BMC_RPY_DETECT,
+ };
+ 
++#define UNI_WOW_DETECT_TYPE_MAGIC		BIT(0)
++#define UNI_WOW_DETECT_TYPE_ANY			BIT(1)
++#define UNI_WOW_DETECT_TYPE_DISCONNECT		BIT(2)
++#define UNI_WOW_DETECT_TYPE_GTK_REKEY_FAIL	BIT(3)
++#define UNI_WOW_DETECT_TYPE_BCN_LOST		BIT(4)
++#define UNI_WOW_DETECT_TYPE_SCH_SCAN_HIT	BIT(5)
++#define UNI_WOW_DETECT_TYPE_BITMAP		BIT(6)
++
+ enum {
+ 	UNI_SUSPEND_MODE_SETTING,
+ 	UNI_SUSPEND_WOW_CTRL,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h
+index 033fb592bdf02..30bf41b8ed152 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h
+@@ -33,7 +33,7 @@ enum mt7915_eeprom_field {
+ #define MT_EE_WIFI_CAL_GROUP			BIT(0)
+ #define MT_EE_WIFI_CAL_DPD			GENMASK(2, 1)
+ #define MT_EE_CAL_UNIT				1024
+-#define MT_EE_CAL_GROUP_SIZE			(44 * MT_EE_CAL_UNIT)
++#define MT_EE_CAL_GROUP_SIZE			(49 * MT_EE_CAL_UNIT + 16)
+ #define MT_EE_CAL_DPD_SIZE			(54 * MT_EE_CAL_UNIT)
+ 
+ #define MT_EE_WIFI_CONF0_TX_PATH		GENMASK(2, 0)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index b3f14ff67c5ae..764f25a828fa2 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -3440,8 +3440,9 @@ int mt7915_mcu_apply_tx_dpd(struct mt7915_phy *phy)
+ {
+ 	struct mt7915_dev *dev = phy->dev;
+ 	struct cfg80211_chan_def *chandef = &phy->mt76->chandef;
+-	u16 total = 2, idx, center_freq = chandef->center_freq1;
++	u16 total = 2, center_freq = chandef->center_freq1;
+ 	u8 *cal = dev->cal, *eep = dev->mt76.eeprom.data;
++	int idx;
+ 
+ 	if (!(eep[MT_EE_DO_PRE_CAL] & MT_EE_WIFI_CAL_DPD))
+ 		return 0;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/testmode.c b/drivers/net/wireless/mediatek/mt76/mt7915/testmode.c
+index f9d81e36ef09a..b220b334906bc 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/testmode.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/testmode.c
+@@ -464,10 +464,17 @@ mt7915_tm_set_tx_frames(struct mt7915_phy *phy, bool en)
+ static void
+ mt7915_tm_set_rx_frames(struct mt7915_phy *phy, bool en)
+ {
+-	if (en)
++	mt7915_tm_set_trx(phy, TM_MAC_RX_RXV, false);
++
++	if (en) {
++		struct mt7915_dev *dev = phy->dev;
++
+ 		mt7915_tm_update_channel(phy);
+ 
+-	mt7915_tm_set_trx(phy, TM_MAC_RX_RXV, en);
++		/* read-clear */
++		mt76_rr(dev, MT_MIB_SDR3(phy != &dev->phy));
++		mt7915_tm_set_trx(phy, TM_MAC_RX_RXV, en);
++	}
+ }
+ 
+ static int
+@@ -690,7 +697,11 @@ static int
+ mt7915_tm_dump_stats(struct mt76_phy *mphy, struct sk_buff *msg)
+ {
+ 	struct mt7915_phy *phy = mphy->priv;
++	struct mt7915_dev *dev = phy->dev;
++	bool ext_phy = phy != &dev->phy;
++	enum mt76_rxq_id q;
+ 	void *rx, *rssi;
++	u16 fcs_err;
+ 	int i;
+ 
+ 	rx = nla_nest_start(msg, MT76_TM_STATS_ATTR_LAST_RX);
+@@ -735,6 +746,12 @@ mt7915_tm_dump_stats(struct mt76_phy *mphy, struct sk_buff *msg)
+ 
+ 	nla_nest_end(msg, rx);
+ 
++	fcs_err = mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
++				 MT_MIB_SDR3_FCS_ERR_MASK);
++	q = ext_phy ? MT_RXQ_EXT : MT_RXQ_MAIN;
++	mphy->test.rx_stats.packets[q] += fcs_err;
++	mphy->test.rx_stats.fcs_error[q] += fcs_err;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
+index 6ee423dd4027c..6602903c0d026 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
+@@ -184,7 +184,10 @@ mt7921_txpwr(struct seq_file *s, void *data)
+ 	struct mt7921_txpwr txpwr;
+ 	int ret;
+ 
++	mt7921_mutex_acquire(dev);
+ 	ret = mt7921_get_txpwr_info(dev, &txpwr);
++	mt7921_mutex_release(dev);
++
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+index 71e664ee76520..bd9143dc865f9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+@@ -313,9 +313,9 @@ static int mt7921_dma_reset(struct mt7921_dev *dev, bool force)
+ 
+ int mt7921_wfsys_reset(struct mt7921_dev *dev)
+ {
+-	mt76_set(dev, 0x70002600, BIT(0));
+-	msleep(200);
+-	mt76_clear(dev, 0x70002600, BIT(0));
++	mt76_clear(dev, MT_WFSYS_SW_RST_B, WFSYS_SW_RST_B);
++	msleep(50);
++	mt76_set(dev, MT_WFSYS_SW_RST_B, WFSYS_SW_RST_B);
+ 
+ 	if (!__mt76_poll_msec(&dev->mt76, MT_WFSYS_SW_RST_B,
+ 			      WFSYS_SW_INIT_DONE, WFSYS_SW_INIT_DONE, 500))
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+index 1763ea0614ce2..2cb0252e63b21 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+@@ -73,6 +73,7 @@ static void
+ mt7921_init_wiphy(struct ieee80211_hw *hw)
+ {
+ 	struct mt7921_phy *phy = mt7921_hw_phy(hw);
++	struct mt7921_dev *dev = phy->dev;
+ 	struct wiphy *wiphy = hw->wiphy;
+ 
+ 	hw->queues = 4;
+@@ -110,36 +111,21 @@ mt7921_init_wiphy(struct ieee80211_hw *hw)
+ 	ieee80211_hw_set(hw, SUPPORTS_PS);
+ 	ieee80211_hw_set(hw, SUPPORTS_DYNAMIC_PS);
+ 
++	if (dev->pm.enable)
++		ieee80211_hw_set(hw, CONNECTION_MONITOR);
++
+ 	hw->max_tx_fragments = 4;
+ }
+ 
+ static void
+ mt7921_mac_init_band(struct mt7921_dev *dev, u8 band)
+ {
+-	u32 mask, set;
+-
+ 	mt76_rmw_field(dev, MT_TMAC_CTCR0(band),
+ 		       MT_TMAC_CTCR0_INS_DDLMT_REFTIME, 0x3f);
+ 	mt76_set(dev, MT_TMAC_CTCR0(band),
+ 		 MT_TMAC_CTCR0_INS_DDLMT_VHT_SMPDU_EN |
+ 		 MT_TMAC_CTCR0_INS_DDLMT_EN);
+ 
+-	mask = MT_MDP_RCFR0_MCU_RX_MGMT |
+-	       MT_MDP_RCFR0_MCU_RX_CTL_NON_BAR |
+-	       MT_MDP_RCFR0_MCU_RX_CTL_BAR;
+-	set = FIELD_PREP(MT_MDP_RCFR0_MCU_RX_MGMT, MT_MDP_TO_HIF) |
+-	      FIELD_PREP(MT_MDP_RCFR0_MCU_RX_CTL_NON_BAR, MT_MDP_TO_HIF) |
+-	      FIELD_PREP(MT_MDP_RCFR0_MCU_RX_CTL_BAR, MT_MDP_TO_HIF);
+-	mt76_rmw(dev, MT_MDP_BNRCFR0(band), mask, set);
+-
+-	mask = MT_MDP_RCFR1_MCU_RX_BYPASS |
+-	       MT_MDP_RCFR1_RX_DROPPED_UCAST |
+-	       MT_MDP_RCFR1_RX_DROPPED_MCAST;
+-	set = FIELD_PREP(MT_MDP_RCFR1_MCU_RX_BYPASS, MT_MDP_TO_HIF) |
+-	      FIELD_PREP(MT_MDP_RCFR1_RX_DROPPED_UCAST, MT_MDP_TO_HIF) |
+-	      FIELD_PREP(MT_MDP_RCFR1_RX_DROPPED_MCAST, MT_MDP_TO_HIF);
+-	mt76_rmw(dev, MT_MDP_BNRCFR1(band), mask, set);
+-
+ 	mt76_set(dev, MT_WF_RMAC_MIB_TIME0(band), MT_WF_RMAC_MIB_RXTIME_EN);
+ 	mt76_set(dev, MT_WF_RMAC_MIB_AIRTIME0(band), MT_WF_RMAC_MIB_RXTIME_EN);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index decf2d5f0ce3a..493c2aba2f791 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -444,16 +444,19 @@ int mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
+ 		status->chain_signal[1] = to_rssi(MT_PRXV_RCPI1, v1);
+ 		status->chain_signal[2] = to_rssi(MT_PRXV_RCPI2, v1);
+ 		status->chain_signal[3] = to_rssi(MT_PRXV_RCPI3, v1);
+-		status->signal = status->chain_signal[0];
+-
+-		for (i = 1; i < hweight8(mphy->antenna_mask); i++) {
+-			if (!(status->chains & BIT(i)))
++		status->signal = -128;
++		for (i = 0; i < hweight8(mphy->antenna_mask); i++) {
++			if (!(status->chains & BIT(i)) ||
++			    status->chain_signal[i] >= 0)
+ 				continue;
+ 
+ 			status->signal = max(status->signal,
+ 					     status->chain_signal[i]);
+ 		}
+ 
++		if (status->signal == -128)
++			status->flag |= RX_FLAG_NO_SIGNAL_VAL;
++
+ 		stbc = FIELD_GET(MT_PRXV_STBC, v0);
+ 		gi = FIELD_GET(MT_PRXV_SGI, v0);
+ 		cck = false;
+@@ -1196,7 +1199,8 @@ mt7921_vif_connect_iter(void *priv, u8 *mac,
+ 	struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
+ 	struct mt7921_dev *dev = mvif->phy->dev;
+ 
+-	ieee80211_disconnect(vif, true);
++	if (vif->type == NL80211_IFTYPE_STATION)
++		ieee80211_disconnect(vif, true);
+ 
+ 	mt76_connac_mcu_uni_add_dev(&dev->mphy, vif, &mvif->sta.wcid, true);
+ 	mt7921_mcu_set_tx(dev, vif);
+@@ -1269,6 +1273,7 @@ void mt7921_mac_reset_work(struct work_struct *work)
+ 	hw = mt76_hw(dev);
+ 
+ 	dev_err(dev->mt76.dev, "chip reset\n");
++	dev->hw_full_reset = true;
+ 	ieee80211_stop_queues(hw);
+ 
+ 	cancel_delayed_work_sync(&dev->mphy.mac_work);
+@@ -1293,6 +1298,7 @@ void mt7921_mac_reset_work(struct work_struct *work)
+ 		ieee80211_scan_completed(dev->mphy.hw, &info);
+ 	}
+ 
++	dev->hw_full_reset = false;
+ 	ieee80211_wake_queues(hw);
+ 	ieee80211_iterate_active_interfaces(hw,
+ 					    IEEE80211_IFACE_ITER_RESUME_ALL,
+@@ -1303,7 +1309,11 @@ void mt7921_reset(struct mt76_dev *mdev)
+ {
+ 	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+ 
+-	queue_work(dev->mt76.wq, &dev->reset_work);
++	if (!test_bit(MT76_STATE_RUNNING, &dev->mphy.state))
++		return;
++
++	if (!dev->hw_full_reset)
++		queue_work(dev->mt76.wq, &dev->reset_work);
+ }
+ 
+ static void
+@@ -1494,7 +1504,7 @@ void mt7921_coredump_work(struct work_struct *work)
+ 			break;
+ 
+ 		skb_pull(skb, sizeof(struct mt7921_mcu_rxd));
+-		if (data + skb->len - dump > MT76_CONNAC_COREDUMP_SZ) {
++		if (!dump || data + skb->len - dump > MT76_CONNAC_COREDUMP_SZ) {
+ 			dev_kfree_skb(skb);
+ 			continue;
+ 		}
+@@ -1504,7 +1514,10 @@ void mt7921_coredump_work(struct work_struct *work)
+ 
+ 		dev_kfree_skb(skb);
+ 	}
+-	dev_coredumpv(dev->mt76.dev, dump, MT76_CONNAC_COREDUMP_SZ,
+-		      GFP_KERNEL);
++
++	if (dump)
++		dev_coredumpv(dev->mt76.dev, dump, MT76_CONNAC_COREDUMP_SZ,
++			      GFP_KERNEL);
++
+ 	mt7921_reset(&dev->mt76);
+ }
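
In the mt7921_mac_fill_rx() hunk above, the reported signal becomes the maximum
over valid per-chain values, starting from a -128 sentinel and flagging the frame
when no chain contributes. A standalone sketch of that selection (the chain bitmap
and dBm values are made up for illustration):

/*
 * Take the max over chains that are present and carry a plausible
 * (negative dBm) value; fall back to a "no signal" flag otherwise.
 */
#include <stdio.h>

int main(void)
{
	int chain_signal[4] = { -62, 0, -55, -70 };	/* 0 = invalid */
	unsigned int chains = 0xd;	/* bitmap: chains 0, 2, 3 valid */
	int signal = -128;
	int no_signal;

	for (int i = 0; i < 4; i++) {
		if (!(chains & (1u << i)) || chain_signal[i] >= 0)
			continue;
		if (chain_signal[i] > signal)
			signal = chain_signal[i];
	}
	no_signal = (signal == -128);

	printf("signal=%d dBm, no_signal=%d\n", signal, no_signal);
	return 0;
}
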
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 97a0ef331ac32..bd77a04a15fb2 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -223,54 +223,6 @@ static void mt7921_stop(struct ieee80211_hw *hw)
+ 	mt7921_mutex_release(dev);
+ }
+ 
+-static inline int get_free_idx(u32 mask, u8 start, u8 end)
+-{
+-	return ffs(~mask & GENMASK(end, start));
+-}
+-
+-static int get_omac_idx(enum nl80211_iftype type, u64 mask)
+-{
+-	int i;
+-
+-	switch (type) {
+-	case NL80211_IFTYPE_STATION:
+-		/* prefer hw bssid slot 1-3 */
+-		i = get_free_idx(mask, HW_BSSID_1, HW_BSSID_3);
+-		if (i)
+-			return i - 1;
+-
+-		/* next, try to find a free repeater entry for the sta */
+-		i = get_free_idx(mask >> REPEATER_BSSID_START, 0,
+-				 REPEATER_BSSID_MAX - REPEATER_BSSID_START);
+-		if (i)
+-			return i + 32 - 1;
+-
+-		i = get_free_idx(mask, EXT_BSSID_1, EXT_BSSID_MAX);
+-		if (i)
+-			return i - 1;
+-
+-		if (~mask & BIT(HW_BSSID_0))
+-			return HW_BSSID_0;
+-
+-		break;
+-	case NL80211_IFTYPE_MONITOR:
+-		/* ap uses hw bssid 0 and ext bssid */
+-		if (~mask & BIT(HW_BSSID_0))
+-			return HW_BSSID_0;
+-
+-		i = get_free_idx(mask, EXT_BSSID_1, EXT_BSSID_MAX);
+-		if (i)
+-			return i - 1;
+-
+-		break;
+-	default:
+-		WARN_ON(1);
+-		break;
+-	}
+-
+-	return -1;
+-}
+-
+ static int mt7921_add_interface(struct ieee80211_hw *hw,
+ 				struct ieee80211_vif *vif)
+ {
+@@ -292,12 +244,7 @@ static int mt7921_add_interface(struct ieee80211_hw *hw,
+ 		goto out;
+ 	}
+ 
+-	idx = get_omac_idx(vif->type, phy->omac_mask);
+-	if (idx < 0) {
+-		ret = -ENOSPC;
+-		goto out;
+-	}
+-	mvif->mt76.omac_idx = idx;
++	mvif->mt76.omac_idx = mvif->mt76.idx;
+ 	mvif->phy = phy;
+ 	mvif->mt76.band_idx = 0;
+ 	mvif->mt76.wmm_idx = mvif->mt76.idx % MT7921_MAX_WMM_SETS;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index 67dc4b4cc0945..7c68182cad552 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -450,22 +450,33 @@ mt7921_mcu_scan_event(struct mt7921_dev *dev, struct sk_buff *skb)
+ }
+ 
+ static void
+-mt7921_mcu_beacon_loss_event(struct mt7921_dev *dev, struct sk_buff *skb)
++mt7921_mcu_connection_loss_iter(void *priv, u8 *mac,
++				struct ieee80211_vif *vif)
++{
++	struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
++	struct mt76_connac_beacon_loss_event *event = priv;
++
++	if (mvif->idx != event->bss_idx)
++		return;
++
++	if (!(vif->driver_flags & IEEE80211_VIF_BEACON_FILTER))
++		return;
++
++	ieee80211_connection_loss(vif);
++}
++
++static void
++mt7921_mcu_connection_loss_event(struct mt7921_dev *dev, struct sk_buff *skb)
+ {
+ 	struct mt76_connac_beacon_loss_event *event;
+-	struct mt76_phy *mphy;
+-	u8 band_idx = 0; /* DBDC support */
++	struct mt76_phy *mphy = &dev->mt76.phy;
+ 
+ 	skb_pull(skb, sizeof(struct mt7921_mcu_rxd));
+ 	event = (struct mt76_connac_beacon_loss_event *)skb->data;
+-	if (band_idx && dev->mt76.phy2)
+-		mphy = dev->mt76.phy2;
+-	else
+-		mphy = &dev->mt76.phy;
+ 
+ 	ieee80211_iterate_active_interfaces_atomic(mphy->hw,
+ 					IEEE80211_IFACE_ITER_RESUME_ALL,
+-					mt76_connac_mcu_beacon_loss_iter, event);
++					mt7921_mcu_connection_loss_iter, event);
+ }
+ 
+ static void
+@@ -530,7 +541,7 @@ mt7921_mcu_rx_unsolicited_event(struct mt7921_dev *dev, struct sk_buff *skb)
+ 
+ 	switch (rxd->eid) {
+ 	case MCU_EVENT_BSS_BEACON_LOSS:
+-		mt7921_mcu_beacon_loss_event(dev, skb);
++		mt7921_mcu_connection_loss_event(dev, skb);
+ 		break;
+ 	case MCU_EVENT_SCHED_SCAN_DONE:
+ 	case MCU_EVENT_SCAN_DONE:
+@@ -1368,6 +1379,7 @@ mt7921_pm_interface_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
+ {
+ 	struct mt7921_phy *phy = priv;
+ 	struct mt7921_dev *dev = phy->dev;
++	struct ieee80211_hw *hw = mt76_hw(dev);
+ 	int ret;
+ 
+ 	if (dev->pm.enable)
+@@ -1380,9 +1392,11 @@ mt7921_pm_interface_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
+ 
+ 	if (dev->pm.enable) {
+ 		vif->driver_flags |= IEEE80211_VIF_BEACON_FILTER;
++		ieee80211_hw_set(hw, CONNECTION_MONITOR);
+ 		mt76_set(dev, MT_WF_RFCR(0), MT_WF_RFCR_DROP_OTHER_BEACON);
+ 	} else {
+ 		vif->driver_flags &= ~IEEE80211_VIF_BEACON_FILTER;
++		__clear_bit(IEEE80211_HW_CONNECTION_MONITOR, hw->flags);
+ 		mt76_clear(dev, MT_WF_RFCR(0), MT_WF_RFCR_DROP_OTHER_BEACON);
+ 	}
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index 59862ea4951ce..4cc8a372b2772 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -156,6 +156,7 @@ struct mt7921_dev {
+ 	u16 chainmask;
+ 
+ 	struct work_struct reset_work;
++	bool hw_full_reset;
+ 
+ 	struct list_head sta_poll_list;
+ 	spinlock_t sta_poll_lock;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index fa02d934f0bff..13263f50dc00a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -188,21 +188,26 @@ static int mt7921_pci_suspend(struct pci_dev *pdev, pm_message_t state)
+ {
+ 	struct mt76_dev *mdev = pci_get_drvdata(pdev);
+ 	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++	struct mt76_connac_pm *pm = &dev->pm;
+ 	bool hif_suspend;
+ 	int i, err;
+ 
+-	err = mt76_connac_pm_wake(&dev->mphy, &dev->pm);
++	pm->suspended = true;
++	cancel_delayed_work_sync(&pm->ps_work);
++	cancel_work_sync(&pm->wake_work);
++
++	err = mt7921_mcu_drv_pmctrl(dev);
+ 	if (err < 0)
+-		return err;
++		goto restore_suspend;
+ 
+ 	hif_suspend = !test_bit(MT76_STATE_SUSPEND, &dev->mphy.state);
+ 	if (hif_suspend) {
+ 		err = mt76_connac_mcu_set_hif_suspend(mdev, true);
+ 		if (err)
+-			return err;
++			goto restore_suspend;
+ 	}
+ 
+-	if (!dev->pm.enable)
++	if (!pm->enable)
+ 		mt76_connac_mcu_set_deep_sleep(&dev->mt76, true);
+ 
+ 	napi_disable(&mdev->tx_napi);
+@@ -231,27 +236,30 @@ static int mt7921_pci_suspend(struct pci_dev *pdev, pm_message_t state)
+ 
+ 	err = mt7921_mcu_fw_pmctrl(dev);
+ 	if (err)
+-		goto restore;
++		goto restore_napi;
+ 
+ 	pci_save_state(pdev);
+ 	err = pci_set_power_state(pdev, pci_choose_state(pdev, state));
+ 	if (err)
+-		goto restore;
++		goto restore_napi;
+ 
+ 	return 0;
+ 
+-restore:
++restore_napi:
+ 	mt76_for_each_q_rx(mdev, i) {
+ 		napi_enable(&mdev->napi[i]);
+ 	}
+ 	napi_enable(&mdev->tx_napi);
+ 
+-	if (!dev->pm.enable)
++	if (!pm->enable)
+ 		mt76_connac_mcu_set_deep_sleep(&dev->mt76, false);
+ 
+ 	if (hif_suspend)
+ 		mt76_connac_mcu_set_hif_suspend(mdev, false);
+ 
++restore_suspend:
++	pm->suspended = false;
++
+ 	return err;
+ }
+ 
+@@ -261,6 +269,7 @@ static int mt7921_pci_resume(struct pci_dev *pdev)
+ 	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+ 	int i, err;
+ 
++	dev->pm.suspended = false;
+ 	err = pci_set_power_state(pdev, PCI_D0);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/net/wireless/mediatek/mt76/testmode.c b/drivers/net/wireless/mediatek/mt76/testmode.c
+index 001d0ba5f73e6..f614c887f3233 100644
+--- a/drivers/net/wireless/mediatek/mt76/testmode.c
++++ b/drivers/net/wireless/mediatek/mt76/testmode.c
+@@ -158,19 +158,18 @@ int mt76_testmode_alloc_skb(struct mt76_phy *phy, u32 len)
+ 			frag_len = MT_TXP_MAX_LEN;
+ 
+ 		frag = alloc_skb(frag_len, GFP_KERNEL);
+-		if (!frag)
++		if (!frag) {
++			mt76_testmode_free_skb(phy);
++			dev_kfree_skb(head);
+ 			return -ENOMEM;
++		}
+ 
+ 		__skb_put_zero(frag, frag_len);
+ 		head->len += frag->len;
+ 		head->data_len += frag->len;
+ 
+-		if (*frag_tail) {
+-			(*frag_tail)->next = frag;
+-			frag_tail = &frag;
+-		} else {
+-			*frag_tail = frag;
+-		}
++		*frag_tail = frag;
++		frag_tail = &(*frag_tail)->next;
+ 	}
+ 
+ 	mt76_testmode_free_skb(phy);
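
The testmode hunk above restores the classic pointer-to-pointer tail append:
frag_tail always addresses the link to fill next, so every fragment, first or not,
is chained by the same two statements. A minimal sketch of the idiom outside the
skb code (struct node is illustrative):

/*
 * Append with "*tail = node; tail = &node->next;", with no special case
 * for the empty list.
 */
#include <stdio.h>
#include <stdlib.h>

struct node {
	int len;
	struct node *next;
};

int main(void)
{
	struct node *head = NULL, **tail = &head;

	for (int i = 1; i <= 3; i++) {
		struct node *n = calloc(1, sizeof(*n));
		if (!n)
			return 1;
		n->len = i * 100;
		*tail = n;		/* hook into the list ... */
		tail = &n->next;	/* ... and advance to its link */
	}

	while (head) {
		struct node *n = head;
		printf("frag len=%d\n", n->len);
		head = n->next;
		free(n);
	}
	return 0;
}
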
+diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
+index 53ea8de82df06..441d06e30b1a5 100644
+--- a/drivers/net/wireless/mediatek/mt76/tx.c
++++ b/drivers/net/wireless/mediatek/mt76/tx.c
+@@ -285,7 +285,7 @@ mt76_tx(struct mt76_phy *phy, struct ieee80211_sta *sta,
+ 		skb_set_queue_mapping(skb, qid);
+ 	}
+ 
+-	if (!(wcid->tx_info & MT_WCID_TX_INFO_SET))
++	if (wcid && !(wcid->tx_info & MT_WCID_TX_INFO_SET))
+ 		ieee80211_get_tx_rates(info->control.vif, sta, skb,
+ 				       info->control.rates, 1);
+ 
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index 6cb593cc33c2d..6d06f26a48942 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -4371,26 +4371,28 @@ static void rtw8822c_pwrtrack_set(struct rtw_dev *rtwdev, u8 rf_path)
+ 	}
+ }
+ 
+-static void rtw8822c_pwr_track_path(struct rtw_dev *rtwdev,
+-				    struct rtw_swing_table *swing_table,
+-				    u8 path)
++static void rtw8822c_pwr_track_stats(struct rtw_dev *rtwdev, u8 path)
+ {
+-	struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+-	u8 thermal_value, delta;
++	u8 thermal_value;
+ 
+ 	if (rtwdev->efuse.thermal_meter[path] == 0xff)
+ 		return;
+ 
+ 	thermal_value = rtw_read_rf(rtwdev, path, RF_T_METER, 0x7e);
+-
+ 	rtw_phy_pwrtrack_avg(rtwdev, thermal_value, path);
++}
+ 
+-	delta = rtw_phy_pwrtrack_get_delta(rtwdev, path);
++static void rtw8822c_pwr_track_path(struct rtw_dev *rtwdev,
++				    struct rtw_swing_table *swing_table,
++				    u8 path)
++{
++	struct rtw_dm_info *dm_info = &rtwdev->dm_info;
++	u8 delta;
+ 
++	delta = rtw_phy_pwrtrack_get_delta(rtwdev, path);
+ 	dm_info->delta_power_index[path] =
+ 		rtw_phy_pwrtrack_get_pwridx(rtwdev, swing_table, path, path,
+ 					    delta);
+-
+ 	rtw8822c_pwrtrack_set(rtwdev, path);
+ }
+ 
+@@ -4401,12 +4403,12 @@ static void __rtw8822c_pwr_track(struct rtw_dev *rtwdev)
+ 
+ 	rtw_phy_config_swing_table(rtwdev, &swing_table);
+ 
++	for (i = 0; i < rtwdev->hal.rf_path_num; i++)
++		rtw8822c_pwr_track_stats(rtwdev, i);
+ 	if (rtw_phy_pwrtrack_need_lck(rtwdev))
+ 		rtw8822c_do_lck(rtwdev);
+-
+ 	for (i = 0; i < rtwdev->hal.rf_path_num; i++)
+ 		rtw8822c_pwr_track_path(rtwdev, &swing_table, i);
+-
+ }
+ 
+ static void rtw8822c_pwr_track(struct rtw_dev *rtwdev)
+diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
+index ce9892152f4d4..99b21a2c83861 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
+@@ -203,7 +203,7 @@ int rsi_prepare_data_desc(struct rsi_common *common, struct sk_buff *skb)
+ 		wh->frame_control |= cpu_to_le16(RSI_SET_PS_ENABLE);
+ 
+ 	if ((!(info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT)) &&
+-	    (common->secinfo.security_enable)) {
++	    info->control.hw_key) {
+ 		if (rsi_is_cipher_wep(common))
+ 			ieee80211_size += 4;
+ 		else
+@@ -470,9 +470,9 @@ int rsi_prepare_beacon(struct rsi_common *common, struct sk_buff *skb)
+ 	}
+ 
+ 	if (common->band == NL80211_BAND_2GHZ)
+-		bcn_frm->bbp_info |= cpu_to_le16(RSI_RATE_1);
++		bcn_frm->rate_info |= cpu_to_le16(RSI_RATE_1);
+ 	else
+-		bcn_frm->bbp_info |= cpu_to_le16(RSI_RATE_6);
++		bcn_frm->rate_info |= cpu_to_le16(RSI_RATE_6);
+ 
+ 	if (mac_bcn->data[tim_offset + 2] == 0)
+ 		bcn_frm->frame_info |= cpu_to_le16(RSI_DATA_DESC_DTIM_BEACON);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mac80211.c b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+index 16025300cddb3..57c9e3559dfd1 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mac80211.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+@@ -1028,7 +1028,6 @@ static int rsi_mac80211_set_key(struct ieee80211_hw *hw,
+ 	mutex_lock(&common->mutex);
+ 	switch (cmd) {
+ 	case SET_KEY:
+-		secinfo->security_enable = true;
+ 		status = rsi_hal_key_config(hw, vif, key, sta);
+ 		if (status) {
+ 			mutex_unlock(&common->mutex);
+@@ -1047,8 +1046,6 @@ static int rsi_mac80211_set_key(struct ieee80211_hw *hw,
+ 		break;
+ 
+ 	case DISABLE_KEY:
+-		if (vif->type == NL80211_IFTYPE_STATION)
+-			secinfo->security_enable = false;
+ 		rsi_dbg(ERR_ZONE, "%s: RSI del key\n", __func__);
+ 		memset(key, 0, sizeof(struct ieee80211_key_conf));
+ 		status = rsi_hal_key_config(hw, vif, key, sta);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mgmt.c b/drivers/net/wireless/rsi/rsi_91x_mgmt.c
+index 33c76d39a8e96..b6d050a2fbe7e 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mgmt.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mgmt.c
+@@ -1803,8 +1803,7 @@ int rsi_send_wowlan_request(struct rsi_common *common, u16 flags,
+ 			RSI_WIFI_MGMT_Q);
+ 	cmd_frame->desc.desc_dword0.frame_type = WOWLAN_CONFIG_PARAMS;
+ 	cmd_frame->host_sleep_status = sleep_status;
+-	if (common->secinfo.security_enable &&
+-	    common->secinfo.gtk_cipher)
++	if (common->secinfo.gtk_cipher)
+ 		flags |= RSI_WOW_GTK_REKEY;
+ 	if (sleep_status)
+ 		cmd_frame->wow_flags = flags;
+diff --git a/drivers/net/wireless/rsi/rsi_main.h b/drivers/net/wireless/rsi/rsi_main.h
+index a1065e5a92b43..0f535850a3836 100644
+--- a/drivers/net/wireless/rsi/rsi_main.h
++++ b/drivers/net/wireless/rsi/rsi_main.h
+@@ -151,7 +151,6 @@ enum edca_queue {
+ };
+ 
+ struct security_info {
+-	bool security_enable;
+ 	u32 ptk_cipher;
+ 	u32 gtk_cipher;
+ };
+diff --git a/drivers/net/wireless/st/cw1200/scan.c b/drivers/net/wireless/st/cw1200/scan.c
+index 988581cc134b7..1f856fbbc0ea4 100644
+--- a/drivers/net/wireless/st/cw1200/scan.c
++++ b/drivers/net/wireless/st/cw1200/scan.c
+@@ -75,30 +75,27 @@ int cw1200_hw_scan(struct ieee80211_hw *hw,
+ 	if (req->n_ssids > WSM_SCAN_MAX_NUM_OF_SSIDS)
+ 		return -EINVAL;
+ 
+-	/* will be unlocked in cw1200_scan_work() */
+-	down(&priv->scan.lock);
+-	mutex_lock(&priv->conf_mutex);
+-
+ 	frame.skb = ieee80211_probereq_get(hw, priv->vif->addr, NULL, 0,
+ 		req->ie_len);
+-	if (!frame.skb) {
+-		mutex_unlock(&priv->conf_mutex);
+-		up(&priv->scan.lock);
++	if (!frame.skb)
+ 		return -ENOMEM;
+-	}
+ 
+ 	if (req->ie_len)
+ 		skb_put_data(frame.skb, req->ie, req->ie_len);
+ 
++	/* will be unlocked in cw1200_scan_work() */
++	down(&priv->scan.lock);
++	mutex_lock(&priv->conf_mutex);
++
+ 	ret = wsm_set_template_frame(priv, &frame);
+ 	if (!ret) {
+ 		/* Host want to be the probe responder. */
+ 		ret = wsm_set_probe_responder(priv, true);
+ 	}
+ 	if (ret) {
+-		dev_kfree_skb(frame.skb);
+ 		mutex_unlock(&priv->conf_mutex);
+ 		up(&priv->scan.lock);
++		dev_kfree_skb(frame.skb);
+ 		return ret;
+ 	}
+ 
+@@ -120,8 +117,8 @@ int cw1200_hw_scan(struct ieee80211_hw *hw,
+ 		++priv->scan.n_ssids;
+ 	}
+ 
+-	dev_kfree_skb(frame.skb);
+ 	mutex_unlock(&priv->conf_mutex);
++	dev_kfree_skb(frame.skb);
+ 	queue_work(priv->workqueue, &priv->scan.work);
+ 	return 0;
+ }
+diff --git a/drivers/net/wwan/Kconfig b/drivers/net/wwan/Kconfig
+index 7ad1920120bcb..e9d8a1c25e433 100644
+--- a/drivers/net/wwan/Kconfig
++++ b/drivers/net/wwan/Kconfig
+@@ -3,15 +3,9 @@
+ # Wireless WAN device configuration
+ #
+ 
+-menuconfig WWAN
+-	bool "Wireless WAN"
+-	help
+-	  This section contains Wireless WAN configuration for WWAN framework
+-	  and drivers.
+-
+-if WWAN
++menu "Wireless WAN"
+ 
+-config WWAN_CORE
++config WWAN
+ 	tristate "WWAN Driver Core"
+ 	help
+ 	  Say Y here if you want to use the WWAN driver core. This driver
+@@ -20,9 +14,10 @@ config WWAN_CORE
+ 	  To compile this driver as a module, choose M here: the module will be
+ 	  called wwan.
+ 
++if WWAN
++
+ config MHI_WWAN_CTRL
+ 	tristate "MHI WWAN control driver for QCOM-based PCIe modems"
+-	select WWAN_CORE
+ 	depends on MHI_BUS
+ 	help
+ 	  MHI WWAN CTRL allows QCOM-based PCIe modems to expose different modem
+@@ -35,3 +30,5 @@ config MHI_WWAN_CTRL
+ 	  called mhi_wwan_ctrl.
+ 
+ endif # WWAN
++
++endmenu
+diff --git a/drivers/net/wwan/Makefile b/drivers/net/wwan/Makefile
+index 556cd90958cae..289771a4f952e 100644
+--- a/drivers/net/wwan/Makefile
++++ b/drivers/net/wwan/Makefile
+@@ -3,7 +3,7 @@
+ # Makefile for the Linux WWAN device drivers.
+ #
+ 
+-obj-$(CONFIG_WWAN_CORE) += wwan.o
++obj-$(CONFIG_WWAN) += wwan.o
+ wwan-objs += wwan_core.o
+ 
+ obj-$(CONFIG_MHI_WWAN_CTRL) += mhi_wwan_ctrl.o
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index a29b170701fc6..42ad75ff13481 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1032,7 +1032,7 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
+ 
+ static inline void nvme_update_cq_head(struct nvme_queue *nvmeq)
+ {
+-	u16 tmp = nvmeq->cq_head + 1;
++	u32 tmp = nvmeq->cq_head + 1;
+ 
+ 	if (tmp == nvmeq->q_depth) {
+ 		nvmeq->cq_head = 0;
+@@ -2831,10 +2831,7 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev)
+ #ifdef CONFIG_ACPI
+ static bool nvme_acpi_storage_d3(struct pci_dev *dev)
+ {
+-	struct acpi_device *adev;
+-	struct pci_dev *root;
+-	acpi_handle handle;
+-	acpi_status status;
++	struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
+ 	u8 val;
+ 
+ 	/*
+@@ -2842,28 +2839,9 @@ static bool nvme_acpi_storage_d3(struct pci_dev *dev)
+ 	 * must use D3 to support deep platform power savings during
+ 	 * suspend-to-idle.
+ 	 */
+-	root = pcie_find_root_port(dev);
+-	if (!root)
+-		return false;
+ 
+-	adev = ACPI_COMPANION(&root->dev);
+ 	if (!adev)
+ 		return false;
+-
+-	/*
+-	 * The property is defined in the PXSX device for South complex ports
+-	 * and in the PEGP device for North complex ports.
+-	 */
+-	status = acpi_get_handle(adev->handle, "PXSX", &handle);
+-	if (ACPI_FAILURE(status)) {
+-		status = acpi_get_handle(adev->handle, "PEGP", &handle);
+-		if (ACPI_FAILURE(status))
+-			return false;
+-	}
+-
+-	if (acpi_bus_get_device(handle, &adev))
+-		return false;
+-
+ 	if (fwnode_property_read_u8(acpi_fwnode_handle(adev), "StorageD3Enable",
+ 			&val))
+ 		return false;
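
The first nvme/pci.c hunk above widens the temporary in nvme_update_cq_head() from
u16 to u32. With a 16-bit temporary, cq_head + 1 silently wraps to 0 at 0xffff, so
a comparison against a depth of 0x10000 could never match. A small demonstration
(the depth value is assumed purely for illustration):

/*
 * The 16-bit sum truncates before the comparison; the 32-bit sum does not.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t q_depth = 0x10000;	/* assumed depth for illustration */
	uint16_t head16 = 0xffff;
	uint16_t tmp16 = head16 + 1;	/* truncated: wraps to 0 */
	uint32_t tmp32 = (uint32_t)head16 + 1;

	printf("u16 tmp=%u matches depth? %s\n",
	       (unsigned)tmp16, tmp16 == q_depth ? "yes" : "no");
	printf("u32 tmp=%u matches depth? %s\n",
	       (unsigned)tmp32, tmp32 == q_depth ? "yes" : "no");
	return 0;
}
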
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 34f4b3402f7c1..79a463090dd30 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1973,11 +1973,13 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
+ 		return ret;
+ 
+ 	if (ctrl->icdoff) {
++		ret = -EOPNOTSUPP;
+ 		dev_err(ctrl->device, "icdoff is not supported!\n");
+ 		goto destroy_admin;
+ 	}
+ 
+ 	if (!(ctrl->sgls & ((1 << 0) | (1 << 1)))) {
++		ret = -EOPNOTSUPP;
+ 		dev_err(ctrl->device, "Mandatory sgls are not supported!\n");
+ 		goto destroy_admin;
+ 	}
+diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
+index 19e113240fff9..22b5108168a6a 100644
+--- a/drivers/nvme/target/fc.c
++++ b/drivers/nvme/target/fc.c
+@@ -2510,13 +2510,6 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
+ 	u32 xfrlen = be32_to_cpu(cmdiu->data_len);
+ 	int ret;
+ 
+-	/*
+-	 * if there is no nvmet mapping to the targetport there
+-	 * shouldn't be requests. just terminate them.
+-	 */
+-	if (!tgtport->pe)
+-		goto transport_error;
+-
+ 	/*
+ 	 * Fused commands are currently not supported in the linux
+ 	 * implementation.
+@@ -2544,7 +2537,8 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
+ 
+ 	fod->req.cmd = &fod->cmdiubuf.sqe;
+ 	fod->req.cqe = &fod->rspiubuf.cqe;
+-	fod->req.port = tgtport->pe->port;
++	if (tgtport->pe)
++		fod->req.port = tgtport->pe->port;
+ 
+ 	/* clear any response payload */
+ 	memset(&fod->rspiubuf, 0, sizeof(fod->rspiubuf));
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index ba17a80b8c79c..cc71e0b3eed9f 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -510,11 +510,11 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
+ 
+ 		if (size &&
+ 		    early_init_dt_reserve_memory_arch(base, size, nomap) == 0)
+-			pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %ld MiB\n",
+-				uname, &base, (unsigned long)size / SZ_1M);
++			pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %lu MiB\n",
++				uname, &base, (unsigned long)(size / SZ_1M));
+ 		else
+-			pr_info("Reserved memory: failed to reserve memory for node '%s': base %pa, size %ld MiB\n",
+-				uname, &base, (unsigned long)size / SZ_1M);
++			pr_info("Reserved memory: failed to reserve memory for node '%s': base %pa, size %lu MiB\n",
++				uname, &base, (unsigned long)(size / SZ_1M));
+ 
+ 		len -= t_len;
+ 		if (first) {
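
The format-string fix above is really an operator-precedence fix: a cast binds
tighter than division, so on a 32-bit build "(unsigned long)size / SZ_1M" truncates
a 64-bit size before dividing, while "(unsigned long)(size / SZ_1M)" divides first.
A sketch using fixed-width types to reproduce the 32-bit behaviour (the 6 GiB
reservation size is an assumed example):

/*
 * Casting first keeps only the low 32 bits of the size; dividing first
 * yields the correct MiB count.
 */
#include <stdio.h>
#include <stdint.h>

#define SZ_1M 0x00100000UL

int main(void)
{
	uint64_t size = 0x180000000ULL;			/* 6 GiB */
	uint32_t cast_first = (uint32_t)size / SZ_1M;	/* low bits only */
	uint32_t div_first  = (uint32_t)(size / SZ_1M);	/* 6144 */

	printf("cast-then-divide: %u MiB\n", (unsigned)cast_first);
	printf("divide-then-cast: %u MiB\n", (unsigned)div_first);
	return 0;
}
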
+diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
+index 15e2417974d67..3502ba522c397 100644
+--- a/drivers/of/of_reserved_mem.c
++++ b/drivers/of/of_reserved_mem.c
+@@ -134,9 +134,9 @@ static int __init __reserved_mem_alloc_size(unsigned long node,
+ 			ret = early_init_dt_alloc_reserved_memory_arch(size,
+ 					align, start, end, nomap, &base);
+ 			if (ret == 0) {
+-				pr_debug("allocated memory for '%s' node: base %pa, size %ld MiB\n",
++				pr_debug("allocated memory for '%s' node: base %pa, size %lu MiB\n",
+ 					uname, &base,
+-					(unsigned long)size / SZ_1M);
++					(unsigned long)(size / SZ_1M));
+ 				break;
+ 			}
+ 			len -= t_len;
+@@ -146,8 +146,8 @@ static int __init __reserved_mem_alloc_size(unsigned long node,
+ 		ret = early_init_dt_alloc_reserved_memory_arch(size, align,
+ 							0, 0, nomap, &base);
+ 		if (ret == 0)
+-			pr_debug("allocated memory for '%s' node: base %pa, size %ld MiB\n",
+-				uname, &base, (unsigned long)size / SZ_1M);
++			pr_debug("allocated memory for '%s' node: base %pa, size %lu MiB\n",
++				uname, &base, (unsigned long)(size / SZ_1M));
+ 	}
+ 
+ 	if (base == 0) {
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 6511648271b23..bebe3eeebc4e1 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -3476,6 +3476,9 @@ static void __exit exit_hv_pci_drv(void)
+ 
+ static int __init init_hv_pci_drv(void)
+ {
++	if (!hv_is_hyperv_initialized())
++		return -ENODEV;
++
+ 	/* Set the invalid domain number's bit, so it will not be used */
+ 	set_bit(HVPCI_DOM_INVALID, hvpci_dom_map);
+ 
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 56a5c355701d0..49016f2f505e0 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -1212,7 +1212,7 @@ static int arm_cmn_init_irqs(struct arm_cmn *cmn)
+ 		irq = cmn->dtc[i].irq;
+ 		for (j = i; j--; ) {
+ 			if (cmn->dtc[j].irq == irq) {
+-				cmn->dtc[j].irq_friend = j - i;
++				cmn->dtc[j].irq_friend = i - j;
+ 				goto next;
+ 			}
+ 		}
+diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
+index ff6fab4bae30d..863d9f702aa17 100644
+--- a/drivers/perf/arm_smmuv3_pmu.c
++++ b/drivers/perf/arm_smmuv3_pmu.c
+@@ -277,7 +277,7 @@ static int smmu_pmu_apply_event_filter(struct smmu_pmu *smmu_pmu,
+ 				       struct perf_event *event, int idx)
+ {
+ 	u32 span, sid;
+-	unsigned int num_ctrs = smmu_pmu->num_counters;
++	unsigned int cur_idx, num_ctrs = smmu_pmu->num_counters;
+ 	bool filter_en = !!get_filter_enable(event);
+ 
+ 	span = filter_en ? get_filter_span(event) :
+@@ -285,17 +285,19 @@ static int smmu_pmu_apply_event_filter(struct smmu_pmu *smmu_pmu,
+ 	sid = filter_en ? get_filter_stream_id(event) :
+ 			   SMMU_PMCG_DEFAULT_FILTER_SID;
+ 
+-	/* Support individual filter settings */
+-	if (!smmu_pmu->global_filter) {
++	cur_idx = find_first_bit(smmu_pmu->used_counters, num_ctrs);
++	/*
++	 * Per-counter filtering, or scheduling the first globally-filtered
++	 * event into an empty PMU so idx == 0 and it works out equivalent.
++	 */
++	if (!smmu_pmu->global_filter || cur_idx == num_ctrs) {
+ 		smmu_pmu_set_event_filter(event, idx, span, sid);
+ 		return 0;
+ 	}
+ 
+-	/* Requested settings same as current global settings*/
+-	idx = find_first_bit(smmu_pmu->used_counters, num_ctrs);
+-	if (idx == num_ctrs ||
+-	    smmu_pmu_check_global_filter(smmu_pmu->events[idx], event)) {
+-		smmu_pmu_set_event_filter(event, 0, span, sid);
++	/* Otherwise, must match whatever's currently scheduled */
++	if (smmu_pmu_check_global_filter(smmu_pmu->events[cur_idx], event)) {
++		smmu_pmu_set_evtyper(smmu_pmu, idx, get_event(event));
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/perf/fsl_imx8_ddr_perf.c b/drivers/perf/fsl_imx8_ddr_perf.c
+index 2bbb931880649..7b87aaf267d57 100644
+--- a/drivers/perf/fsl_imx8_ddr_perf.c
++++ b/drivers/perf/fsl_imx8_ddr_perf.c
+@@ -705,8 +705,10 @@ static int ddr_perf_probe(struct platform_device *pdev)
+ 
+ 	name = devm_kasprintf(&pdev->dev, GFP_KERNEL, DDR_PERF_DEV_NAME "%d",
+ 			      num);
+-	if (!name)
+-		return -ENOMEM;
++	if (!name) {
++		ret = -ENOMEM;
++		goto cpuhp_state_err;
++	}
+ 
+ 	pmu->devtype_data = of_device_get_match_data(&pdev->dev);
+ 
+diff --git a/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c b/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c
+index 0316fabe32f1a..acc864bded2be 100644
+--- a/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c
++++ b/drivers/perf/hisilicon/hisi_uncore_hha_pmu.c
+@@ -90,7 +90,7 @@ static void hisi_hha_pmu_config_ds(struct perf_event *event)
+ 
+ 		val = readl(hha_pmu->base + HHA_DATSRC_CTRL);
+ 		val |= HHA_DATSRC_SKT_EN;
+-		writel(ds_skt, hha_pmu->base + HHA_DATSRC_CTRL);
++		writel(val, hha_pmu->base + HHA_DATSRC_CTRL);
+ 	}
+ }
+ 
+@@ -104,7 +104,7 @@ static void hisi_hha_pmu_clear_ds(struct perf_event *event)
+ 
+ 		val = readl(hha_pmu->base + HHA_DATSRC_CTRL);
+ 		val &= ~HHA_DATSRC_SKT_EN;
+-		writel(ds_skt, hha_pmu->base + HHA_DATSRC_CTRL);
++		writel(val, hha_pmu->base + HHA_DATSRC_CTRL);
+ 	}
+ }
+ 
+diff --git a/drivers/phy/ralink/phy-mt7621-pci.c b/drivers/phy/ralink/phy-mt7621-pci.c
+index 2a9465f4bb3a9..3b1245fc5a02e 100644
+--- a/drivers/phy/ralink/phy-mt7621-pci.c
++++ b/drivers/phy/ralink/phy-mt7621-pci.c
+@@ -272,8 +272,8 @@ static struct phy *mt7621_pcie_phy_of_xlate(struct device *dev,
+ 
+ 	mt7621_phy->has_dual_port = args->args[0];
+ 
+-	dev_info(dev, "PHY for 0x%08x (dual port = %d)\n",
+-		 (unsigned int)mt7621_phy->port_base, mt7621_phy->has_dual_port);
++	dev_dbg(dev, "PHY for 0x%px (dual port = %d)\n",
++		mt7621_phy->port_base, mt7621_phy->has_dual_port);
+ 
+ 	return mt7621_phy->phy;
+ }
+diff --git a/drivers/phy/socionext/phy-uniphier-pcie.c b/drivers/phy/socionext/phy-uniphier-pcie.c
+index e4adab375c737..6bdbd1f214dd4 100644
+--- a/drivers/phy/socionext/phy-uniphier-pcie.c
++++ b/drivers/phy/socionext/phy-uniphier-pcie.c
+@@ -24,11 +24,13 @@
+ #define PORT_SEL_1		FIELD_PREP(PORT_SEL_MASK, 1)
+ 
+ #define PCL_PHY_TEST_I		0x2000
+-#define PCL_PHY_TEST_O		0x2004
+ #define TESTI_DAT_MASK		GENMASK(13, 6)
+ #define TESTI_ADR_MASK		GENMASK(5, 1)
+ #define TESTI_WR_EN		BIT(0)
+ 
++#define PCL_PHY_TEST_O		0x2004
++#define TESTO_DAT_MASK		GENMASK(7, 0)
++
+ #define PCL_PHY_RESET		0x200c
+ #define PCL_PHY_RESET_N_MNMODE	BIT(8)	/* =1:manual */
+ #define PCL_PHY_RESET_N		BIT(0)	/* =1:deasssert */
+@@ -77,11 +79,12 @@ static void uniphier_pciephy_set_param(struct uniphier_pciephy_priv *priv,
+ 	val  = FIELD_PREP(TESTI_DAT_MASK, 1);
+ 	val |= FIELD_PREP(TESTI_ADR_MASK, reg);
+ 	uniphier_pciephy_testio_write(priv, val);
+-	val = readl(priv->base + PCL_PHY_TEST_O);
++	val = readl(priv->base + PCL_PHY_TEST_O) & TESTO_DAT_MASK;
+ 
+ 	/* update value */
+-	val &= ~FIELD_PREP(TESTI_DAT_MASK, mask);
+-	val  = FIELD_PREP(TESTI_DAT_MASK, mask & param);
++	val &= ~mask;
++	val |= mask & param;
++	val = FIELD_PREP(TESTI_DAT_MASK, val);
+ 	val |= FIELD_PREP(TESTI_ADR_MASK, reg);
+ 	uniphier_pciephy_testio_write(priv, val);
+ 	uniphier_pciephy_testio_write(priv, val | TESTI_WR_EN);
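
The uniphier PHY hunk above fixes the read-modify-write of the TESTI data field:
the caller's mask applies to the register value itself, so the update must be
"val &= ~mask; val |= mask & param;" before the result is re-encoded into the
field, rather than clearing via FIELD_PREP of the mask. A standalone sketch of
that RMW on an 8-bit value (the sample readback, mask and parameter are invented):

/*
 * Clear only the masked bits of the readback, then merge in the masked
 * parameter; bits outside the mask survive untouched.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t readback = 0xa5;	/* 8-bit value read from TEST_O */
	uint8_t mask = 0x0f;		/* update only the low nibble */
	uint8_t param = 0x03;

	uint8_t val = readback;
	val &= ~mask;			/* keep high nibble 0xa0 intact */
	val |= mask & param;		/* write 0x03 into low nibble */

	printf("readback=0x%02x -> updated=0x%02x (expect 0xa3)\n",
	       (unsigned)readback, (unsigned)val);
	return 0;
}
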
+diff --git a/drivers/phy/ti/phy-dm816x-usb.c b/drivers/phy/ti/phy-dm816x-usb.c
+index 57adc08a89b2d..9fe6ea6fdae55 100644
+--- a/drivers/phy/ti/phy-dm816x-usb.c
++++ b/drivers/phy/ti/phy-dm816x-usb.c
+@@ -242,19 +242,28 @@ static int dm816x_usb_phy_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(phy->dev);
+ 	generic_phy = devm_phy_create(phy->dev, NULL, &ops);
+-	if (IS_ERR(generic_phy))
+-		return PTR_ERR(generic_phy);
++	if (IS_ERR(generic_phy)) {
++		error = PTR_ERR(generic_phy);
++		goto clk_unprepare;
++	}
+ 
+ 	phy_set_drvdata(generic_phy, phy);
+ 
+ 	phy_provider = devm_of_phy_provider_register(phy->dev,
+ 						     of_phy_simple_xlate);
+-	if (IS_ERR(phy_provider))
+-		return PTR_ERR(phy_provider);
++	if (IS_ERR(phy_provider)) {
++		error = PTR_ERR(phy_provider);
++		goto clk_unprepare;
++	}
+ 
+ 	usb_add_phy_dev(&phy->phy);
+ 
+ 	return 0;
++
++clk_unprepare:
++	pm_runtime_disable(phy->dev);
++	clk_unprepare(phy->refclk);
++	return error;
+ }
+ 
+ static int dm816x_usb_phy_remove(struct platform_device *pdev)
+diff --git a/drivers/pinctrl/renesas/pfc-r8a7796.c b/drivers/pinctrl/renesas/pfc-r8a7796.c
+index 44e9d2eea484a..bbb1b436ded31 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a7796.c
++++ b/drivers/pinctrl/renesas/pfc-r8a7796.c
+@@ -67,6 +67,7 @@
+ 	PIN_NOGP_CFG(QSPI1_MOSI_IO0, "QSPI1_MOSI_IO0", fn, CFG_FLAGS),	\
+ 	PIN_NOGP_CFG(QSPI1_SPCLK, "QSPI1_SPCLK", fn, CFG_FLAGS),	\
+ 	PIN_NOGP_CFG(QSPI1_SSL, "QSPI1_SSL", fn, CFG_FLAGS),		\
++	PIN_NOGP_CFG(PRESET_N, "PRESET#", fn, SH_PFC_PIN_CFG_PULL_DOWN),\
+ 	PIN_NOGP_CFG(RPC_INT_N, "RPC_INT#", fn, CFG_FLAGS),		\
+ 	PIN_NOGP_CFG(RPC_RESET_N, "RPC_RESET#", fn, CFG_FLAGS),		\
+ 	PIN_NOGP_CFG(RPC_WP_N, "RPC_WP#", fn, CFG_FLAGS),		\
+@@ -6218,7 +6219,7 @@ static const struct pinmux_bias_reg pinmux_bias_regs[] = {
+ 		[ 4] = RCAR_GP_PIN(6, 29),	/* USB30_OVC */
+ 		[ 5] = RCAR_GP_PIN(6, 30),	/* GP6_30 */
+ 		[ 6] = RCAR_GP_PIN(6, 31),	/* GP6_31 */
+-		[ 7] = SH_PFC_PIN_NONE,
++		[ 7] = PIN_PRESET_N,		/* PRESET# */
+ 		[ 8] = SH_PFC_PIN_NONE,
+ 		[ 9] = SH_PFC_PIN_NONE,
+ 		[10] = SH_PFC_PIN_NONE,
+diff --git a/drivers/pinctrl/renesas/pfc-r8a77990.c b/drivers/pinctrl/renesas/pfc-r8a77990.c
+index d040eb3e305da..eeebbab4dd811 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a77990.c
++++ b/drivers/pinctrl/renesas/pfc-r8a77990.c
+@@ -53,10 +53,10 @@
+ 	PIN_NOGP_CFG(FSCLKST_N, "FSCLKST_N", fn, CFG_FLAGS),		\
+ 	PIN_NOGP_CFG(MLB_REF, "MLB_REF", fn, CFG_FLAGS),		\
+ 	PIN_NOGP_CFG(PRESETOUT_N, "PRESETOUT_N", fn, CFG_FLAGS),	\
+-	PIN_NOGP_CFG(TCK, "TCK", fn, CFG_FLAGS),			\
+-	PIN_NOGP_CFG(TDI, "TDI", fn, CFG_FLAGS),			\
+-	PIN_NOGP_CFG(TMS, "TMS", fn, CFG_FLAGS),			\
+-	PIN_NOGP_CFG(TRST_N, "TRST_N", fn, CFG_FLAGS)
++	PIN_NOGP_CFG(TCK, "TCK", fn, SH_PFC_PIN_CFG_PULL_UP),		\
++	PIN_NOGP_CFG(TDI, "TDI", fn, SH_PFC_PIN_CFG_PULL_UP),		\
++	PIN_NOGP_CFG(TMS, "TMS", fn, SH_PFC_PIN_CFG_PULL_UP),		\
++	PIN_NOGP_CFG(TRST_N, "TRST_N", fn, SH_PFC_PIN_CFG_PULL_UP)
+ 
+ /*
+  * F_() : just information
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index d41d7ad14be0d..0cb927f0f301a 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -110,11 +110,6 @@ static struct quirk_entry quirk_asus_forceals = {
+ 	.wmi_force_als_set = true,
+ };
+ 
+-static struct quirk_entry quirk_asus_vendor_backlight = {
+-	.wmi_backlight_power = true,
+-	.wmi_backlight_set_devstate = true,
+-};
+-
+ static struct quirk_entry quirk_asus_use_kbd_dock_devid = {
+ 	.use_kbd_dock_devid = true,
+ };
+@@ -425,78 +420,6 @@ static const struct dmi_system_id asus_quirks[] = {
+ 		},
+ 		.driver_data = &quirk_asus_forceals,
+ 	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA401IH",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA401IH"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA401II",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA401II"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA401IU",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA401IU"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA401IV",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA401IV"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA401IVC",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA401IVC"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-		{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA502II",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA502II"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA502IU",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA502IU"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+-	{
+-		.callback = dmi_matched,
+-		.ident = "ASUSTeK COMPUTER INC. GA502IV",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "GA502IV"),
+-		},
+-		.driver_data = &quirk_asus_vendor_backlight,
+-	},
+ 	{
+ 		.callback = dmi_matched,
+ 		.ident = "Asus Transformer T100TA / T100HA / T100CHI",
+diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c
+index fa7232ad8c395..352508d304675 100644
+--- a/drivers/platform/x86/toshiba_acpi.c
++++ b/drivers/platform/x86/toshiba_acpi.c
+@@ -2831,6 +2831,7 @@ static int toshiba_acpi_setup_keyboard(struct toshiba_acpi_dev *dev)
+ 
+ 	if (!dev->info_supported && !dev->system_event_supported) {
+ 		pr_warn("No hotkey query interface found\n");
++		error = -EINVAL;
+ 		goto err_remove_filter;
+ 	}
+ 
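
The toshiba_acpi one-liner above fixes a common bug class: jumping to a cleanup label without setting the return variable, so the function unwinds but still reports success. A minimal sketch (names are illustrative):

#include <errno.h>
#include <stdio.h>

static int setup_keyboard(int hotkey_iface_found)
{
	int error = 0;

	/* ... event filter registered here ... */

	if (!hotkey_iface_found) {
		/* Without this assignment, the cleanup below would
		 * still run but the function would return 0. */
		error = -EINVAL;
		goto err_remove_filter;
	}

	return 0;

err_remove_filter:
	/* ... unregister the event filter here ... */
	return error;
}

int main(void)
{
	printf("setup_keyboard() = %d\n", setup_keyboard(0));
	return 0;
}
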
+diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
+index bde740d6120e1..424cf2a847448 100644
+--- a/drivers/platform/x86/touchscreen_dmi.c
++++ b/drivers/platform/x86/touchscreen_dmi.c
+@@ -299,6 +299,35 @@ static const struct ts_dmi_data estar_beauty_hd_data = {
+ 	.properties	= estar_beauty_hd_props,
+ };
+ 
++/* Generic props + data for upside-down mounted GDIX1001 touchscreens */
++static const struct property_entry gdix1001_upside_down_props[] = {
++	PROPERTY_ENTRY_BOOL("touchscreen-inverted-x"),
++	PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
++	{ }
++};
++
++static const struct ts_dmi_data gdix1001_00_upside_down_data = {
++	.acpi_name	= "GDIX1001:00",
++	.properties	= gdix1001_upside_down_props,
++};
++
++static const struct ts_dmi_data gdix1001_01_upside_down_data = {
++	.acpi_name	= "GDIX1001:01",
++	.properties	= gdix1001_upside_down_props,
++};
++
++static const struct property_entry glavey_tm800a550l_props[] = {
++	PROPERTY_ENTRY_STRING("firmware-name", "gt912-glavey-tm800a550l.fw"),
++	PROPERTY_ENTRY_STRING("goodix,config-name", "gt912-glavey-tm800a550l.cfg"),
++	PROPERTY_ENTRY_U32("goodix,main-clk", 54),
++	{ }
++};
++
++static const struct ts_dmi_data glavey_tm800a550l_data = {
++	.acpi_name	= "GDIX1001:00",
++	.properties	= glavey_tm800a550l_props,
++};
++
+ static const struct property_entry gp_electronic_t701_props[] = {
+ 	PROPERTY_ENTRY_U32("touchscreen-size-x", 960),
+ 	PROPERTY_ENTRY_U32("touchscreen-size-y", 640),
+@@ -1038,6 +1067,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "eSTAR BEAUTY HD Intel Quad core"),
+ 		},
+ 	},
++	{	/* Glavey TM800A550L */
++		.driver_data = (void *)&glavey_tm800a550l_data,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
++			DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
++			/* Above strings are too generic, also match on BIOS version */
++			DMI_MATCH(DMI_BIOS_VERSION, "ZY-8-BI-PX4S70VTR400-X423B-005-D"),
++		},
++	},
+ 	{
+ 		/* GP-electronic T701 */
+ 		.driver_data = (void *)&gp_electronic_t701_data,
+@@ -1330,6 +1368,24 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "X3 Plus"),
+ 		},
+ 	},
++	{
++		/* Teclast X89 (Android version / BIOS) */
++		.driver_data = (void *)&gdix1001_00_upside_down_data,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "WISKY"),
++			DMI_MATCH(DMI_BOARD_NAME, "3G062i"),
++		},
++	},
++	{
++		/* Teclast X89 (Windows version / BIOS) */
++		.driver_data = (void *)&gdix1001_01_upside_down_data,
++		.matches = {
++			/* tPAD is too generic, also match on bios date */
++			DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
++			DMI_MATCH(DMI_BOARD_NAME, "tPAD"),
++			DMI_MATCH(DMI_BIOS_DATE, "12/19/2014"),
++		},
++	},
+ 	{
+ 		/* Teclast X98 Plus II */
+ 		.driver_data = (void *)&teclast_x98plus2_data,
+@@ -1338,6 +1394,19 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "X98 Plus II"),
+ 		},
+ 	},
++	{
++		/* Teclast X98 Pro */
++		.driver_data = (void *)&gdix1001_00_upside_down_data,
++		.matches = {
++			/*
++			 * Only match BIOS date, because the manufacturer's
++			 * BIOS does not report the board name at all
++			 * (sometimes)...
++			 */
++			DMI_MATCH(DMI_BOARD_VENDOR, "TECLAST"),
++			DMI_MATCH(DMI_BIOS_DATE, "10/28/2015"),
++		},
++	},
+ 	{
+ 		/* Trekstor Primebook C11 */
+ 		.driver_data = (void *)&trekstor_primebook_c11_data,
+@@ -1413,6 +1482,22 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "VINGA Twizzle J116"),
+ 		},
+ 	},
++	{
++		/* "WinBook TW100" */
++		.driver_data = (void *)&gdix1001_00_upside_down_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "WinBook"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TW100")
++		}
++	},
++	{
++		/* WinBook TW700 */
++		.driver_data = (void *)&gdix1001_00_upside_down_data,
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "WinBook"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TW700")
++		},
++	},
+ 	{
+ 		/* Yours Y8W81, same case and touchscreen as Chuwi Vi8 */
+ 		.driver_data = (void *)&chuwi_vi8_data,
+diff --git a/drivers/regulator/Kconfig b/drivers/regulator/Kconfig
+index 3e7a38525cb3f..fc9e8f589d16e 100644
+--- a/drivers/regulator/Kconfig
++++ b/drivers/regulator/Kconfig
+@@ -207,6 +207,7 @@ config REGULATOR_BD70528
+ config REGULATOR_BD71815
+ 	tristate "ROHM BD71815 Power Regulator"
+ 	depends on MFD_ROHM_BD71828
++	select REGULATOR_ROHM
+ 	help
+ 	  This driver supports voltage regulators on ROHM BD71815 PMIC.
+ 	  This will enable support for the software controllable buck
+diff --git a/drivers/regulator/bd9576-regulator.c b/drivers/regulator/bd9576-regulator.c
+index 204a2da054f53..cdf30481a5820 100644
+--- a/drivers/regulator/bd9576-regulator.c
++++ b/drivers/regulator/bd9576-regulator.c
+@@ -312,8 +312,8 @@ static int bd957x_probe(struct platform_device *pdev)
+ }
+ 
+ static const struct platform_device_id bd957x_pmic_id[] = {
+-	{ "bd9573-pmic", ROHM_CHIP_TYPE_BD9573 },
+-	{ "bd9576-pmic", ROHM_CHIP_TYPE_BD9576 },
++	{ "bd9573-regulator", ROHM_CHIP_TYPE_BD9573 },
++	{ "bd9576-regulator", ROHM_CHIP_TYPE_BD9576 },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(platform, bd957x_pmic_id);
+diff --git a/drivers/regulator/da9052-regulator.c b/drivers/regulator/da9052-regulator.c
+index e18d291c7f21c..23fa429ebe760 100644
+--- a/drivers/regulator/da9052-regulator.c
++++ b/drivers/regulator/da9052-regulator.c
+@@ -250,7 +250,8 @@ static int da9052_regulator_set_voltage_time_sel(struct regulator_dev *rdev,
+ 	case DA9052_ID_BUCK3:
+ 	case DA9052_ID_LDO2:
+ 	case DA9052_ID_LDO3:
+-		ret = (new_sel - old_sel) * info->step_uV / 6250;
++		ret = DIV_ROUND_UP(abs(new_sel - old_sel) * info->step_uV,
++				   6250);
+ 		break;
+ 	}
+ 
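
The da9052 change matters for two reasons: the old expression went negative when ramping the voltage down, and integer division truncated the ramp time toward zero. The fix takes the magnitude of the selector change and rounds up. A quick arithmetic check:

#include <stdio.h>
#include <stdlib.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	int old_sel = 40, new_sel = 37, step_uV = 25000;

	/* old: negative result for a downward ramp, and truncated */
	int bad = (new_sel - old_sel) * step_uV / 6250;

	/* fixed: magnitude of the change, rounded up to whole units */
	int good = DIV_ROUND_UP(abs(new_sel - old_sel) * step_uV, 6250);

	printf("old formula: %d, fixed: %d\n", bad, good);	/* -12 vs 12 */
	return 0;
}
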
+diff --git a/drivers/regulator/fan53555.c b/drivers/regulator/fan53555.c
+index 26f06f685b1b6..b2ee38c5b573a 100644
+--- a/drivers/regulator/fan53555.c
++++ b/drivers/regulator/fan53555.c
+@@ -293,6 +293,9 @@ static int fan53526_voltages_setup_fairchild(struct fan53555_device_info *di)
+ 		return -EINVAL;
+ 	}
+ 
++	di->slew_reg = FAN53555_CONTROL;
++	di->slew_mask = CTL_SLEW_MASK;
++	di->slew_shift = CTL_SLEW_SHIFT;
+ 	di->vsel_count = FAN53526_NVOLTAGES;
+ 
+ 	return 0;
+diff --git a/drivers/regulator/fan53880.c b/drivers/regulator/fan53880.c
+index 1684faf82ed25..94f02f3099dd4 100644
+--- a/drivers/regulator/fan53880.c
++++ b/drivers/regulator/fan53880.c
+@@ -79,7 +79,7 @@ static const struct regulator_desc fan53880_regulators[] = {
+ 		.n_linear_ranges = 2,
+ 		.n_voltages =	   0xf8,
+ 		.vsel_reg =	   FAN53880_BUCKVOUT,
+-		.vsel_mask =	   0x7f,
++		.vsel_mask =	   0xff,
+ 		.enable_reg =	   FAN53880_ENABLE,
+ 		.enable_mask =	   0x10,
+ 		.enable_time =	   480,
+diff --git a/drivers/regulator/hi6421v600-regulator.c b/drivers/regulator/hi6421v600-regulator.c
+index d6340bb492967..d1e9406b2e3e2 100644
+--- a/drivers/regulator/hi6421v600-regulator.c
++++ b/drivers/regulator/hi6421v600-regulator.c
+@@ -129,7 +129,7 @@ static unsigned int hi6421_spmi_regulator_get_mode(struct regulator_dev *rdev)
+ {
+ 	struct hi6421_spmi_reg_info *sreg = rdev_get_drvdata(rdev);
+ 	struct hi6421_spmi_pmic *pmic = sreg->pmic;
+-	u32 reg_val;
++	unsigned int reg_val;
+ 
+ 	regmap_read(pmic->regmap, rdev->desc->enable_reg, &reg_val);
+ 
+@@ -144,14 +144,17 @@ static int hi6421_spmi_regulator_set_mode(struct regulator_dev *rdev,
+ {
+ 	struct hi6421_spmi_reg_info *sreg = rdev_get_drvdata(rdev);
+ 	struct hi6421_spmi_pmic *pmic = sreg->pmic;
+-	u32 val;
++	unsigned int val;
+ 
+ 	switch (mode) {
+ 	case REGULATOR_MODE_NORMAL:
+ 		val = 0;
+ 		break;
+ 	case REGULATOR_MODE_IDLE:
+-		val = sreg->eco_mode_mask << (ffs(sreg->eco_mode_mask) - 1);
++		if (!sreg->eco_mode_mask)
++			return -EINVAL;
++
++		val = sreg->eco_mode_mask;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/regulator/hi655x-regulator.c b/drivers/regulator/hi655x-regulator.c
+index 68cdb173196d6..556bb73f33292 100644
+--- a/drivers/regulator/hi655x-regulator.c
++++ b/drivers/regulator/hi655x-regulator.c
+@@ -72,7 +72,7 @@ enum hi655x_regulator_id {
+ static int hi655x_is_enabled(struct regulator_dev *rdev)
+ {
+ 	unsigned int value = 0;
+-	struct hi655x_regulator *regulator = rdev_get_drvdata(rdev);
++	const struct hi655x_regulator *regulator = rdev_get_drvdata(rdev);
+ 
+ 	regmap_read(rdev->regmap, regulator->status_reg, &value);
+ 	return (value & rdev->desc->enable_mask);
+@@ -80,7 +80,7 @@ static int hi655x_is_enabled(struct regulator_dev *rdev)
+ 
+ static int hi655x_disable(struct regulator_dev *rdev)
+ {
+-	struct hi655x_regulator *regulator = rdev_get_drvdata(rdev);
++	const struct hi655x_regulator *regulator = rdev_get_drvdata(rdev);
+ 
+ 	return regmap_write(rdev->regmap, regulator->disable_reg,
+ 			    rdev->desc->enable_mask);
+@@ -169,7 +169,6 @@ static const struct hi655x_regulator regulators[] = {
+ static int hi655x_regulator_probe(struct platform_device *pdev)
+ {
+ 	unsigned int i;
+-	struct hi655x_regulator *regulator;
+ 	struct hi655x_pmic *pmic;
+ 	struct regulator_config config = { };
+ 	struct regulator_dev *rdev;
+@@ -180,22 +179,17 @@ static int hi655x_regulator_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	regulator = devm_kzalloc(&pdev->dev, sizeof(*regulator), GFP_KERNEL);
+-	if (!regulator)
+-		return -ENOMEM;
+-
+-	platform_set_drvdata(pdev, regulator);
+-
+ 	config.dev = pdev->dev.parent;
+ 	config.regmap = pmic->regmap;
+-	config.driver_data = regulator;
+ 	for (i = 0; i < ARRAY_SIZE(regulators); i++) {
++		config.driver_data = (void *) &regulators[i];
++
+ 		rdev = devm_regulator_register(&pdev->dev,
+ 					       &regulators[i].rdesc,
+ 					       &config);
+ 		if (IS_ERR(rdev)) {
+ 			dev_err(&pdev->dev, "failed to register regulator %s\n",
+-				regulator->rdesc.name);
++				regulators[i].rdesc.name);
+ 			return PTR_ERR(rdev);
+ 		}
+ 	}
+diff --git a/drivers/regulator/mt6315-regulator.c b/drivers/regulator/mt6315-regulator.c
+index 6b8be52c3772a..7514702f78cf7 100644
+--- a/drivers/regulator/mt6315-regulator.c
++++ b/drivers/regulator/mt6315-regulator.c
+@@ -223,8 +223,8 @@ static int mt6315_regulator_probe(struct spmi_device *pdev)
+ 	int i;
+ 
+ 	regmap = devm_regmap_init_spmi_ext(pdev, &mt6315_regmap_config);
+-	if (!regmap)
+-		return -ENODEV;
++	if (IS_ERR(regmap))
++		return PTR_ERR(regmap);
+ 
+ 	chip = devm_kzalloc(dev, sizeof(struct mt6315_chip), GFP_KERNEL);
+ 	if (!chip)
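
devm_regmap_init_spmi_ext() signals failure through the ERR_PTR convention rather than NULL, so the old `if (!regmap)` check could never fire. A userspace re-creation of the convention (the init stub is hypothetical):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)	{ return (void *)error; }
static inline long PTR_ERR(const void *ptr)	{ return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* stand-in for devm_regmap_init_spmi_ext() failing */
static void *regmap_init_stub(void)	{ return ERR_PTR(-ENOMEM); }

int main(void)
{
	void *regmap = regmap_init_stub();

	if (!regmap)		/* the old check: never true */
		puts("unreachable");

	if (IS_ERR(regmap))	/* the fixed check */
		printf("init failed: %ld\n", PTR_ERR(regmap));

	return 0;
}
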
+diff --git a/drivers/regulator/mt6358-regulator.c b/drivers/regulator/mt6358-regulator.c
+index 13cb6ac9a8929..1d4eb5dc4fac8 100644
+--- a/drivers/regulator/mt6358-regulator.c
++++ b/drivers/regulator/mt6358-regulator.c
+@@ -457,7 +457,7 @@ static struct mt6358_regulator_info mt6358_regulators[] = {
+ 	MT6358_REG_FIXED("ldo_vaud28", VAUD28,
+ 			 MT6358_LDO_VAUD28_CON0, 0, 2800000),
+ 	MT6358_LDO("ldo_vdram2", VDRAM2, vdram2_voltages, vdram2_idx,
+-		   MT6358_LDO_VDRAM2_CON0, 0, MT6358_LDO_VDRAM2_ELR0, 0x10, 0),
++		   MT6358_LDO_VDRAM2_CON0, 0, MT6358_LDO_VDRAM2_ELR0, 0xf, 0),
+ 	MT6358_LDO("ldo_vsim1", VSIM1, vsim_voltages, vsim_idx,
+ 		   MT6358_LDO_VSIM1_CON0, 0, MT6358_VSIM1_ANA_CON0, 0xf00, 8),
+ 	MT6358_LDO("ldo_vibr", VIBR, vibr_voltages, vibr_idx,
+diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
+index 22fec370fa610..ac79dc34f9e8b 100644
+--- a/drivers/regulator/qcom-rpmh-regulator.c
++++ b/drivers/regulator/qcom-rpmh-regulator.c
+@@ -1070,6 +1070,7 @@ static const struct rpmh_vreg_init_data pm7325_vreg_data[] = {
+ 	RPMH_VREG("ldo17",  "ldo%s17", &pmic5_pldo_lv,   "vdd-l11-l17-l18-l19"),
+ 	RPMH_VREG("ldo18",  "ldo%s18", &pmic5_pldo_lv,   "vdd-l11-l17-l18-l19"),
+ 	RPMH_VREG("ldo19",  "ldo%s19", &pmic5_pldo_lv,   "vdd-l11-l17-l18-l19"),
++	{}
+ };
+ 
+ static const struct rpmh_vreg_init_data pmr735a_vreg_data[] = {
+@@ -1083,6 +1084,7 @@ static const struct rpmh_vreg_init_data pmr735a_vreg_data[] = {
+ 	RPMH_VREG("ldo5",   "ldo%s5",  &pmic5_nldo,      "vdd-l5-l6"),
+ 	RPMH_VREG("ldo6",   "ldo%s6",  &pmic5_nldo,      "vdd-l5-l6"),
+ 	RPMH_VREG("ldo7",   "ldo%s7",  &pmic5_pldo,      "vdd-l7-bob"),
++	{}
+ };
+ 
+ static int rpmh_regulator_probe(struct platform_device *pdev)
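
The added `{}` entries are load-bearing: the rpmh probe code appears to walk these tables entry-by-entry until it reaches a zeroed sentinel, so a table without one is walked off its end. A minimal sketch of the convention:

#include <stdio.h>

struct vreg_init_data {
	const char *name;
};

static const struct vreg_init_data vreg_data[] = {
	{ "ldo1" },
	{ "ldo2" },
	{}			/* sentinel terminates the walk */
};

int main(void)
{
	const struct vreg_init_data *v;

	for (v = vreg_data; v->name; v++)
		printf("registering %s\n", v->name);
	return 0;
}
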
+diff --git a/drivers/regulator/uniphier-regulator.c b/drivers/regulator/uniphier-regulator.c
+index 2e02e26b516c4..e75b0973e3256 100644
+--- a/drivers/regulator/uniphier-regulator.c
++++ b/drivers/regulator/uniphier-regulator.c
+@@ -201,6 +201,7 @@ static const struct of_device_id uniphier_regulator_match[] = {
+ 	},
+ 	{ /* Sentinel */ },
+ };
++MODULE_DEVICE_TABLE(of, uniphier_regulator_match);
+ 
+ static struct platform_driver uniphier_regulator_driver = {
+ 	.probe = uniphier_regulator_probe,
+diff --git a/drivers/rtc/rtc-stm32.c b/drivers/rtc/rtc-stm32.c
+index 75a8924ba12b3..ac9e228b56d0b 100644
+--- a/drivers/rtc/rtc-stm32.c
++++ b/drivers/rtc/rtc-stm32.c
+@@ -754,7 +754,7 @@ static int stm32_rtc_probe(struct platform_device *pdev)
+ 
+ 	ret = clk_prepare_enable(rtc->rtc_ck);
+ 	if (ret)
+-		goto err;
++		goto err_no_rtc_ck;
+ 
+ 	if (rtc->data->need_dbp)
+ 		regmap_update_bits(rtc->dbp, rtc->dbp_reg,
+@@ -830,10 +830,12 @@ static int stm32_rtc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	return 0;
++
+ err:
++	clk_disable_unprepare(rtc->rtc_ck);
++err_no_rtc_ck:
+ 	if (rtc->data->has_pclk)
+ 		clk_disable_unprepare(rtc->pclk);
+-	clk_disable_unprepare(rtc->rtc_ck);
+ 
+ 	if (rtc->data->need_dbp)
+ 		regmap_update_bits(rtc->dbp, rtc->dbp_reg, rtc->dbp_mask, 0);
+diff --git a/drivers/s390/cio/chp.c b/drivers/s390/cio/chp.c
+index e421138254152..1097e76982a5d 100644
+--- a/drivers/s390/cio/chp.c
++++ b/drivers/s390/cio/chp.c
+@@ -255,6 +255,9 @@ static ssize_t chp_status_write(struct device *dev,
+ 	if (!num_args)
+ 		return count;
+ 
++	/* Wait until previous actions have settled. */
++	css_wait_for_slow_path();
++
+ 	if (!strncasecmp(cmd, "on", 2) || !strcmp(cmd, "1")) {
+ 		mutex_lock(&cp->lock);
+ 		error = s390_vary_chpid(cp->chpid, 1);
+diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c
+index c22d9ee27ba19..297fb399363cc 100644
+--- a/drivers/s390/cio/chsc.c
++++ b/drivers/s390/cio/chsc.c
+@@ -801,8 +801,6 @@ int chsc_chp_vary(struct chp_id chpid, int on)
+ {
+ 	struct channel_path *chp = chpid_to_chp(chpid);
+ 
+-	/* Wait until previous actions have settled. */
+-	css_wait_for_slow_path();
+ 	/*
+ 	 * Redo PathVerification on the devices the chpid connects to
+ 	 */
+diff --git a/drivers/scsi/FlashPoint.c b/drivers/scsi/FlashPoint.c
+index 0464e37c806a4..2e25ef67825ac 100644
+--- a/drivers/scsi/FlashPoint.c
++++ b/drivers/scsi/FlashPoint.c
+@@ -40,7 +40,7 @@ struct sccb_mgr_info {
+ 	u16 si_per_targ_ultra_nego;
+ 	u16 si_per_targ_no_disc;
+ 	u16 si_per_targ_wide_nego;
+-	u16 si_flags;
++	u16 si_mflags;
+ 	unsigned char si_card_family;
+ 	unsigned char si_bustype;
+ 	unsigned char si_card_model[3];
+@@ -1073,22 +1073,22 @@ static int FlashPoint_ProbeHostAdapter(struct sccb_mgr_info *pCardInfo)
+ 		ScamFlg =
+ 		    (unsigned char)FPT_utilEERead(ioport, SCAM_CONFIG / 2);
+ 
+-	pCardInfo->si_flags = 0x0000;
++	pCardInfo->si_mflags = 0x0000;
+ 
+ 	if (i & 0x01)
+-		pCardInfo->si_flags |= SCSI_PARITY_ENA;
++		pCardInfo->si_mflags |= SCSI_PARITY_ENA;
+ 
+ 	if (!(i & 0x02))
+-		pCardInfo->si_flags |= SOFT_RESET;
++		pCardInfo->si_mflags |= SOFT_RESET;
+ 
+ 	if (i & 0x10)
+-		pCardInfo->si_flags |= EXTENDED_TRANSLATION;
++		pCardInfo->si_mflags |= EXTENDED_TRANSLATION;
+ 
+ 	if (ScamFlg & SCAM_ENABLED)
+-		pCardInfo->si_flags |= FLAG_SCAM_ENABLED;
++		pCardInfo->si_mflags |= FLAG_SCAM_ENABLED;
+ 
+ 	if (ScamFlg & SCAM_LEVEL2)
+-		pCardInfo->si_flags |= FLAG_SCAM_LEVEL2;
++		pCardInfo->si_mflags |= FLAG_SCAM_LEVEL2;
+ 
+ 	j = (RD_HARPOON(ioport + hp_bm_ctrl) & ~SCSI_TERM_ENA_L);
+ 	if (i & 0x04) {
+@@ -1104,7 +1104,7 @@ static int FlashPoint_ProbeHostAdapter(struct sccb_mgr_info *pCardInfo)
+ 
+ 	if (!(RD_HARPOON(ioport + hp_page_ctrl) & NARROW_SCSI_CARD))
+ 
+-		pCardInfo->si_flags |= SUPPORT_16TAR_32LUN;
++		pCardInfo->si_mflags |= SUPPORT_16TAR_32LUN;
+ 
+ 	pCardInfo->si_card_family = HARPOON_FAMILY;
+ 	pCardInfo->si_bustype = BUSTYPE_PCI;
+@@ -1140,15 +1140,15 @@ static int FlashPoint_ProbeHostAdapter(struct sccb_mgr_info *pCardInfo)
+ 
+ 	if (pCardInfo->si_card_model[1] == '3') {
+ 		if (RD_HARPOON(ioport + hp_ee_ctrl) & BIT(7))
+-			pCardInfo->si_flags |= LOW_BYTE_TERM;
++			pCardInfo->si_mflags |= LOW_BYTE_TERM;
+ 	} else if (pCardInfo->si_card_model[2] == '0') {
+ 		temp = RD_HARPOON(ioport + hp_xfer_pad);
+ 		WR_HARPOON(ioport + hp_xfer_pad, (temp & ~BIT(4)));
+ 		if (RD_HARPOON(ioport + hp_ee_ctrl) & BIT(7))
+-			pCardInfo->si_flags |= LOW_BYTE_TERM;
++			pCardInfo->si_mflags |= LOW_BYTE_TERM;
+ 		WR_HARPOON(ioport + hp_xfer_pad, (temp | BIT(4)));
+ 		if (RD_HARPOON(ioport + hp_ee_ctrl) & BIT(7))
+-			pCardInfo->si_flags |= HIGH_BYTE_TERM;
++			pCardInfo->si_mflags |= HIGH_BYTE_TERM;
+ 		WR_HARPOON(ioport + hp_xfer_pad, temp);
+ 	} else {
+ 		temp = RD_HARPOON(ioport + hp_ee_ctrl);
+@@ -1166,9 +1166,9 @@ static int FlashPoint_ProbeHostAdapter(struct sccb_mgr_info *pCardInfo)
+ 		WR_HARPOON(ioport + hp_ee_ctrl, temp);
+ 		WR_HARPOON(ioport + hp_xfer_pad, temp2);
+ 		if (!(temp3 & BIT(7)))
+-			pCardInfo->si_flags |= LOW_BYTE_TERM;
++			pCardInfo->si_mflags |= LOW_BYTE_TERM;
+ 		if (!(temp3 & BIT(6)))
+-			pCardInfo->si_flags |= HIGH_BYTE_TERM;
++			pCardInfo->si_mflags |= HIGH_BYTE_TERM;
+ 	}
+ 
+ 	ARAM_ACCESS(ioport);
+@@ -1275,7 +1275,7 @@ static void *FlashPoint_HardwareResetHostAdapter(struct sccb_mgr_info
+ 	WR_HARPOON(ioport + hp_arb_id, pCardInfo->si_id);
+ 	CurrCard->ourId = pCardInfo->si_id;
+ 
+-	i = (unsigned char)pCardInfo->si_flags;
++	i = (unsigned char)pCardInfo->si_mflags;
+ 	if (i & SCSI_PARITY_ENA)
+ 		WR_HARPOON(ioport + hp_portctrl_1, (HOST_MODE8 | CHK_SCSI_P));
+ 
+@@ -1289,14 +1289,14 @@ static void *FlashPoint_HardwareResetHostAdapter(struct sccb_mgr_info
+ 		j |= SCSI_TERM_ENA_H;
+ 	WR_HARPOON(ioport + hp_ee_ctrl, j);
+ 
+-	if (!(pCardInfo->si_flags & SOFT_RESET)) {
++	if (!(pCardInfo->si_mflags & SOFT_RESET)) {
+ 
+ 		FPT_sresb(ioport, thisCard);
+ 
+ 		FPT_scini(thisCard, pCardInfo->si_id, 0);
+ 	}
+ 
+-	if (pCardInfo->si_flags & POST_ALL_UNDERRRUNS)
++	if (pCardInfo->si_mflags & POST_ALL_UNDERRRUNS)
+ 		CurrCard->globalFlags |= F_NO_FILTER;
+ 
+ 	if (pCurrNvRam) {
+diff --git a/drivers/scsi/be2iscsi/be_iscsi.c b/drivers/scsi/be2iscsi/be_iscsi.c
+index 0e935c49b57bd..dd419e295184d 100644
+--- a/drivers/scsi/be2iscsi/be_iscsi.c
++++ b/drivers/scsi/be2iscsi/be_iscsi.c
+@@ -182,6 +182,7 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
+ 	struct beiscsi_endpoint *beiscsi_ep;
+ 	struct iscsi_endpoint *ep;
+ 	uint16_t cri_index;
++	int rc = 0;
+ 
+ 	ep = iscsi_lookup_endpoint(transport_fd);
+ 	if (!ep)
+@@ -189,15 +190,17 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
+ 
+ 	beiscsi_ep = ep->dd_data;
+ 
+-	if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
+-		return -EINVAL;
++	if (iscsi_conn_bind(cls_session, cls_conn, is_leading)) {
++		rc = -EINVAL;
++		goto put_ep;
++	}
+ 
+ 	if (beiscsi_ep->phba != phba) {
+ 		beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
+ 			    "BS_%d : beiscsi_ep->hba=%p not equal to phba=%p\n",
+ 			    beiscsi_ep->phba, phba);
+-
+-		return -EEXIST;
++		rc = -EEXIST;
++		goto put_ep;
+ 	}
+ 	cri_index = BE_GET_CRI_FROM_CID(beiscsi_ep->ep_cid);
+ 	if (phba->conn_table[cri_index]) {
+@@ -209,7 +212,8 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
+ 				      beiscsi_ep->ep_cid,
+ 				      beiscsi_conn,
+ 				      phba->conn_table[cri_index]);
+-			return -EINVAL;
++			rc = -EINVAL;
++			goto put_ep;
+ 		}
+ 	}
+ 
+@@ -226,7 +230,10 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
+ 		    "BS_%d : cid %d phba->conn_table[%u]=%p\n",
+ 		    beiscsi_ep->ep_cid, cri_index, beiscsi_conn);
+ 	phba->conn_table[cri_index] = beiscsi_conn;
+-	return 0;
++
++put_ep:
++	iscsi_put_endpoint(ep);
++	return rc;
+ }
+ 
+ static int beiscsi_iface_create_ipv4(struct beiscsi_hba *phba)
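
This and the following iSCSI driver hunks share one theme: iscsi_lookup_endpoint() now returns with a reference held on the endpoint (see the scsi_transport_iscsi.c hunk further down), so bind_conn() must funnel every exit path through a single put. A toy refcount sketch of that discipline:

#include <errno.h>
#include <stdio.h>

struct endpoint {
	int refcount;
};

static struct endpoint the_ep = { .refcount = 1 };

/* lookup now returns with an extra reference held... */
static struct endpoint *lookup_endpoint(void)
{
	the_ep.refcount++;
	return &the_ep;
}

/* ...so each successful lookup needs exactly one put. */
static void put_endpoint(struct endpoint *ep)
{
	ep->refcount--;
}

static int conn_bind(int fail_early)
{
	struct endpoint *ep = lookup_endpoint();
	int rc = 0;

	if (fail_early) {
		rc = -EINVAL;
		goto put_ep;	/* no early return may skip the put */
	}

	/* ... bind work ... */

put_ep:
	put_endpoint(ep);
	return rc;
}

int main(void)
{
	conn_bind(1);
	conn_bind(0);
	printf("refcount after both binds: %d\n", the_ep.refcount);
	return 0;
}
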
+diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
+index 22cf7f4b8d8c8..27c4f1598f765 100644
+--- a/drivers/scsi/be2iscsi/be_main.c
++++ b/drivers/scsi/be2iscsi/be_main.c
+@@ -5809,6 +5809,7 @@ struct iscsi_transport beiscsi_iscsi_transport = {
+ 	.destroy_session = beiscsi_session_destroy,
+ 	.create_conn = beiscsi_conn_create,
+ 	.bind_conn = beiscsi_conn_bind,
++	.unbind_conn = iscsi_conn_unbind,
+ 	.destroy_conn = iscsi_conn_teardown,
+ 	.attr_is_visible = beiscsi_attr_is_visible,
+ 	.set_iface_param = beiscsi_iface_set_param,
+diff --git a/drivers/scsi/bnx2i/bnx2i_iscsi.c b/drivers/scsi/bnx2i/bnx2i_iscsi.c
+index 1e6d8f62ea3c2..2ad85c6b99fd2 100644
+--- a/drivers/scsi/bnx2i/bnx2i_iscsi.c
++++ b/drivers/scsi/bnx2i/bnx2i_iscsi.c
+@@ -1420,17 +1420,23 @@ static int bnx2i_conn_bind(struct iscsi_cls_session *cls_session,
+ 	 * Forcefully terminate all in progress connection recovery at the
+ 	 * earliest, either in bind(), send_pdu(LOGIN), or conn_start()
+ 	 */
+-	if (bnx2i_adapter_ready(hba))
+-		return -EIO;
++	if (bnx2i_adapter_ready(hba)) {
++		ret_code = -EIO;
++		goto put_ep;
++	}
+ 
+ 	bnx2i_ep = ep->dd_data;
+ 	if ((bnx2i_ep->state == EP_STATE_TCP_FIN_RCVD) ||
+-	    (bnx2i_ep->state == EP_STATE_TCP_RST_RCVD))
++	    (bnx2i_ep->state == EP_STATE_TCP_RST_RCVD)) {
+ 		/* Peer disconnect via' FIN or RST */
+-		return -EINVAL;
++		ret_code = -EINVAL;
++		goto put_ep;
++	}
+ 
+-	if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
+-		return -EINVAL;
++	if (iscsi_conn_bind(cls_session, cls_conn, is_leading)) {
++		ret_code = -EINVAL;
++		goto put_ep;
++	}
+ 
+ 	if (bnx2i_ep->hba != hba) {
+ 		/* Error - TCP connection does not belong to this device
+@@ -1441,7 +1447,8 @@ static int bnx2i_conn_bind(struct iscsi_cls_session *cls_session,
+ 		iscsi_conn_printk(KERN_ALERT, cls_conn->dd_data,
+ 				  "belong to hba (%s)\n",
+ 				  hba->netdev->name);
+-		return -EEXIST;
++		ret_code = -EEXIST;
++		goto put_ep;
+ 	}
+ 	bnx2i_ep->conn = bnx2i_conn;
+ 	bnx2i_conn->ep = bnx2i_ep;
+@@ -1458,6 +1465,8 @@ static int bnx2i_conn_bind(struct iscsi_cls_session *cls_session,
+ 		bnx2i_put_rq_buf(bnx2i_conn, 0);
+ 
+ 	bnx2i_arm_cq_event_coalescing(bnx2i_conn->ep, CNIC_ARM_CQE);
++put_ep:
++	iscsi_put_endpoint(ep);
+ 	return ret_code;
+ }
+ 
+@@ -2276,6 +2285,7 @@ struct iscsi_transport bnx2i_iscsi_transport = {
+ 	.destroy_session	= bnx2i_session_destroy,
+ 	.create_conn		= bnx2i_conn_create,
+ 	.bind_conn		= bnx2i_conn_bind,
++	.unbind_conn		= iscsi_conn_unbind,
+ 	.destroy_conn		= bnx2i_conn_destroy,
+ 	.attr_is_visible	= bnx2i_attr_is_visible,
+ 	.set_param		= iscsi_set_param,
+diff --git a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
+index 203f938fca7e5..f949a4e007834 100644
+--- a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
++++ b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
+@@ -117,6 +117,7 @@ static struct iscsi_transport cxgb3i_iscsi_transport = {
+ 	/* connection management */
+ 	.create_conn	= cxgbi_create_conn,
+ 	.bind_conn	= cxgbi_bind_conn,
++	.unbind_conn	= iscsi_conn_unbind,
+ 	.destroy_conn	= iscsi_tcp_conn_teardown,
+ 	.start_conn	= iscsi_conn_start,
+ 	.stop_conn	= iscsi_conn_stop,
+diff --git a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
+index 2c3491528d424..efb3e2b3398e2 100644
+--- a/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
++++ b/drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
+@@ -134,6 +134,7 @@ static struct iscsi_transport cxgb4i_iscsi_transport = {
+ 	/* connection management */
+ 	.create_conn	= cxgbi_create_conn,
+ 	.bind_conn		= cxgbi_bind_conn,
++	.unbind_conn	= iscsi_conn_unbind,
+ 	.destroy_conn	= iscsi_tcp_conn_teardown,
+ 	.start_conn		= iscsi_conn_start,
+ 	.stop_conn		= iscsi_conn_stop,
+diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
+index f078b3c4e083f..f6bcae829c29b 100644
+--- a/drivers/scsi/cxgbi/libcxgbi.c
++++ b/drivers/scsi/cxgbi/libcxgbi.c
+@@ -2690,11 +2690,13 @@ int cxgbi_bind_conn(struct iscsi_cls_session *cls_session,
+ 	err = csk->cdev->csk_ddp_setup_pgidx(csk, csk->tid,
+ 					     ppm->tformat.pgsz_idx_dflt);
+ 	if (err < 0)
+-		return err;
++		goto put_ep;
+ 
+ 	err = iscsi_conn_bind(cls_session, cls_conn, is_leading);
+-	if (err)
+-		return -EINVAL;
++	if (err) {
++		err = -EINVAL;
++		goto put_ep;
++	}
+ 
+ 	/*  calculate the tag idx bits needed for this conn based on cmds_max */
+ 	cconn->task_idx_bits = (__ilog2_u32(conn->session->cmds_max - 1)) + 1;
+@@ -2715,7 +2717,9 @@ int cxgbi_bind_conn(struct iscsi_cls_session *cls_session,
+ 	/*  init recv engine */
+ 	iscsi_tcp_hdr_recv_prep(tcp_conn);
+ 
+-	return 0;
++put_ep:
++	iscsi_put_endpoint(ep);
++	return err;
+ }
+ EXPORT_SYMBOL_GPL(cxgbi_bind_conn);
+ 
+diff --git a/drivers/scsi/libfc/fc_encode.h b/drivers/scsi/libfc/fc_encode.h
+index 602c97a651bc0..9ea4ceadb5594 100644
+--- a/drivers/scsi/libfc/fc_encode.h
++++ b/drivers/scsi/libfc/fc_encode.h
+@@ -166,9 +166,11 @@ static inline int fc_ct_ns_fill(struct fc_lport *lport,
+ static inline void fc_ct_ms_fill_attr(struct fc_fdmi_attr_entry *entry,
+ 				    const char *in, size_t len)
+ {
+-	int copied = strscpy(entry->value, in, len);
+-	if (copied > 0)
+-		memset(entry->value, copied, len - copied);
++	int copied;
++
++	copied = strscpy((char *)&entry->value, in, len);
++	if (copied > 0 && (copied + 1) < len)
++		memset((entry->value + copied + 1), 0, len - copied - 1);
+ }
+ 
+ /**
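
The fc_ct_ms_fill_attr() fix is worth spelling out: the old code passed the strscpy() return value as the memset() fill byte and started the memset at offset 0, clobbering the string it had just copied; the intent was to zero the tail of the buffer after the terminator. A sketch with a userspace stand-in for strscpy():

#include <stdio.h>
#include <string.h>

/* Stand-in for the kernel's strscpy(): copy at most size-1 bytes,
 * always NUL-terminate, return the length copied (excluding the
 * NUL) or -1 on truncation. */
static long strscpy_stub(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (!size)
		return -1;
	if (len >= size) {
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return -1;
	}
	memcpy(dst, src, len + 1);
	return (long)len;
}

int main(void)
{
	char value[16];
	long copied = strscpy_stub(value, "host01", sizeof(value));

	/* Zero everything past the terminator, as the fixed helper
	 * does; the copied string itself stays intact. */
	if (copied > 0 && (size_t)(copied + 1) < sizeof(value))
		memset(value + copied + 1, 0,
		       sizeof(value) - copied - 1);

	printf("attr value: %s\n", value);
	return 0;
}
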
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 4834219497eeb..2aaf836786548 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -1387,23 +1387,32 @@ void iscsi_session_failure(struct iscsi_session *session,
+ }
+ EXPORT_SYMBOL_GPL(iscsi_session_failure);
+ 
+-void iscsi_conn_failure(struct iscsi_conn *conn, enum iscsi_err err)
++static bool iscsi_set_conn_failed(struct iscsi_conn *conn)
+ {
+ 	struct iscsi_session *session = conn->session;
+ 
+-	spin_lock_bh(&session->frwd_lock);
+-	if (session->state == ISCSI_STATE_FAILED) {
+-		spin_unlock_bh(&session->frwd_lock);
+-		return;
+-	}
++	if (session->state == ISCSI_STATE_FAILED)
++		return false;
+ 
+ 	if (conn->stop_stage == 0)
+ 		session->state = ISCSI_STATE_FAILED;
+-	spin_unlock_bh(&session->frwd_lock);
+ 
+ 	set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
+ 	set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_rx);
+-	iscsi_conn_error_event(conn->cls_conn, err);
++	return true;
++}
++
++void iscsi_conn_failure(struct iscsi_conn *conn, enum iscsi_err err)
++{
++	struct iscsi_session *session = conn->session;
++	bool needs_evt;
++
++	spin_lock_bh(&session->frwd_lock);
++	needs_evt = iscsi_set_conn_failed(conn);
++	spin_unlock_bh(&session->frwd_lock);
++
++	if (needs_evt)
++		iscsi_conn_error_event(conn->cls_conn, err);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_conn_failure);
+ 
+@@ -2180,6 +2189,51 @@ done:
+ 	spin_unlock(&session->frwd_lock);
+ }
+ 
++/**
++ * iscsi_conn_unbind - prevent queueing to conn.
++ * @cls_conn: iscsi conn ep is bound to.
++ * @is_active: is the conn in use for boot or is this for EH/termination
++ *
++ * This must be called by drivers implementing the ep_disconnect callout.
++ * It disables queueing to the connection from libiscsi in preparation for
++ * an ep_disconnect call.
++ */
++void iscsi_conn_unbind(struct iscsi_cls_conn *cls_conn, bool is_active)
++{
++	struct iscsi_session *session;
++	struct iscsi_conn *conn;
++
++	if (!cls_conn)
++		return;
++
++	conn = cls_conn->dd_data;
++	session = conn->session;
++	/*
++	 * Wait for iscsi_eh calls to exit. We don't wait for the tmf to
++	 * complete or time out. The caller just wants to know that whatever
++	 * is still running is everything that needs to be cleaned up, and
++	 * that no new cmds will be queued.
++	 */
++	mutex_lock(&session->eh_mutex);
++
++	iscsi_suspend_queue(conn);
++	iscsi_suspend_tx(conn);
++
++	spin_lock_bh(&session->frwd_lock);
++	if (!is_active) {
++		/*
++		 * if logout timed out before userspace could even send a PDU
++		 * the state might still be in ISCSI_STATE_LOGGED_IN and
++		 * allowing new cmds and TMFs.
++		 */
++		if (session->state == ISCSI_STATE_LOGGED_IN)
++			iscsi_set_conn_failed(conn);
++	}
++	spin_unlock_bh(&session->frwd_lock);
++	mutex_unlock(&session->eh_mutex);
++}
++EXPORT_SYMBOL_GPL(iscsi_conn_unbind);
++
+ static void iscsi_prep_abort_task_pdu(struct iscsi_task *task,
+ 				      struct iscsi_tm *hdr)
+ {
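
The iscsi_conn_failure() split follows a standard locking shape: flip the state and make the decision under the spinlock, deliver the error event after dropping it, and expose iscsi_set_conn_failed() for callers, like iscsi_conn_unbind() above, that already hold frwd_lock. A minimal pthread sketch of the shape:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t frwd_lock = PTHREAD_MUTEX_INITIALIZER;
static int state;			/* 0 = up, 1 = failed */

static void conn_error_event(void)	{ puts("error event"); }

static bool set_conn_failed(void)	/* caller holds frwd_lock */
{
	if (state == 1)
		return false;
	state = 1;
	return true;
}

static void conn_failure(void)
{
	bool needs_evt;

	pthread_mutex_lock(&frwd_lock);
	needs_evt = set_conn_failed();
	pthread_mutex_unlock(&frwd_lock);

	/* the event is delivered outside the lock */
	if (needs_evt)
		conn_error_event();
}

int main(void)
{
	conn_failure();
	conn_failure();		/* already failed: no second event */
	return 0;
}
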
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index 658a962832b35..7bddd74658b9e 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -868,11 +868,8 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 		len += scnprintf(buf+len, size-len,
+ 				"WWNN x%llx ",
+ 				wwn_to_u64(ndlp->nlp_nodename.u.wwn));
+-		if (ndlp->nlp_flag & NLP_RPI_REGISTERED)
+-			len += scnprintf(buf+len, size-len, "RPI:%04d ",
+-					ndlp->nlp_rpi);
+-		else
+-			len += scnprintf(buf+len, size-len, "RPI:none ");
++		len += scnprintf(buf+len, size-len, "RPI:x%04x ",
++				 ndlp->nlp_rpi);
+ 		len +=  scnprintf(buf+len, size-len, "flag:x%08x ",
+ 			ndlp->nlp_flag);
+ 		if (!ndlp->nlp_type)
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 21108f322c995..c3ca2ccf9f828 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -1998,9 +1998,20 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 			lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+ 						NLP_EVT_CMPL_PLOGI);
+ 
+-		/* As long as this node is not registered with the scsi or nvme
+-		 * transport, it is no longer an active node.  Otherwise
+-		 * devloss handles the final cleanup.
++		/* If a PLOGI collision occurred, the node needs to continue
++		 * with the reglogin process.
++		 */
++		spin_lock_irq(&ndlp->lock);
++		if ((ndlp->nlp_flag & (NLP_ACC_REGLOGIN | NLP_RCV_PLOGI)) &&
++		    ndlp->nlp_state == NLP_STE_REG_LOGIN_ISSUE) {
++			spin_unlock_irq(&ndlp->lock);
++			goto out;
++		}
++		spin_unlock_irq(&ndlp->lock);
++
++		/* No PLOGI collision and the node is not registered with the
++		 * scsi or nvme transport. It is no longer an active node. Just
++		 * start the device remove process.
+ 		 */
+ 		if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD))) {
+ 			spin_lock_irq(&ndlp->lock);
+@@ -2869,6 +2880,11 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 	 * log into the remote port.
+ 	 */
+ 	if (ndlp->nlp_flag & NLP_TARGET_REMOVE) {
++		spin_lock_irq(&ndlp->lock);
++		if (phba->sli_rev == LPFC_SLI_REV4)
++			ndlp->nlp_flag |= NLP_RELEASE_RPI;
++		ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
++		spin_unlock_irq(&ndlp->lock);
+ 		lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+ 					NLP_EVT_DEVICE_RM);
+ 		lpfc_els_free_iocb(phba, cmdiocb);
+@@ -4371,6 +4387,7 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
+ 	struct lpfc_vport *vport = cmdiocb->vport;
+ 	IOCB_t *irsp;
++	u32 xpt_flags = 0, did_mask = 0;
+ 
+ 	irsp = &rspiocb->iocb;
+ 	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
+@@ -4386,9 +4403,20 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 	if (ndlp->nlp_state == NLP_STE_NPR_NODE) {
+ 		/* NPort Recovery mode or node is just allocated */
+ 		if (!lpfc_nlp_not_used(ndlp)) {
+-			/* If the ndlp is being used by another discovery
+-			 * thread, just unregister the RPI.
++			/* A LOGO is completing and the node is in NPR state.
++			 * If this is a fabric node that cleared its transport
++			 * registration, release the rpi.
+ 			 */
++			xpt_flags = SCSI_XPT_REGD | NVME_XPT_REGD;
++			did_mask = ndlp->nlp_DID & Fabric_DID_MASK;
++			if (did_mask == Fabric_DID_MASK &&
++			    !(ndlp->fc4_xpt_flags & xpt_flags)) {
++				spin_lock_irq(&ndlp->lock);
++				ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
++				if (phba->sli_rev == LPFC_SLI_REV4)
++					ndlp->nlp_flag |= NLP_RELEASE_RPI;
++				spin_unlock_irq(&ndlp->lock);
++			}
+ 			lpfc_unreg_rpi(vport, ndlp);
+ 		} else {
+ 			/* Indicate the node has already released, should
+@@ -4424,28 +4452,37 @@ lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ {
+ 	struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *)(pmb->ctx_buf);
+ 	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp;
++	u32 mbx_flag = pmb->mbox_flag;
++	u32 mbx_cmd = pmb->u.mb.mbxCommand;
+ 
+ 	pmb->ctx_buf = NULL;
+ 	pmb->ctx_ndlp = NULL;
+ 
+-	lpfc_mbuf_free(phba, mp->virt, mp->phys);
+-	kfree(mp);
+-	mempool_free(pmb, phba->mbox_mem_pool);
+ 	if (ndlp) {
+ 		lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NODE,
+-				 "0006 rpi x%x DID:%x flg:%x %d x%px\n",
++				 "0006 rpi x%x DID:%x flg:%x %d x%px "
++				 "mbx_cmd x%x mbx_flag x%x x%px\n",
+ 				 ndlp->nlp_rpi, ndlp->nlp_DID, ndlp->nlp_flag,
+-				 kref_read(&ndlp->kref),
+-				 ndlp);
+-		/* This is the end of the default RPI cleanup logic for
+-		 * this ndlp and it could get released.  Clear the nlp_flags to
+-		 * prevent any further processing.
++				 kref_read(&ndlp->kref), ndlp, mbx_cmd,
++				 mbx_flag, pmb);
++
++		/* This ends the default/temporary RPI cleanup logic for this
++		 * ndlp, and the node and rpi need to be released. Free the rpi
++		 * first on an UNREG_LOGIN and then release the final
++		 * references.
+ 		 */
++		spin_lock_irq(&ndlp->lock);
+ 		ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND;
++		if (mbx_cmd == MBX_UNREG_LOGIN)
++			ndlp->nlp_flag &= ~NLP_UNREG_INP;
++		spin_unlock_irq(&ndlp->lock);
+ 		lpfc_nlp_put(ndlp);
+-		lpfc_nlp_not_used(ndlp);
++		lpfc_drop_node(ndlp->vport, ndlp);
+ 	}
+ 
++	lpfc_mbuf_free(phba, mp->virt, mp->phys);
++	kfree(mp);
++	mempool_free(pmb, phba->mbox_mem_pool);
+ 	return;
+ }
+ 
+@@ -4503,11 +4540,11 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 	/* ELS response tag <ulpIoTag> completes */
+ 	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ 			 "0110 ELS response tag x%x completes "
+-			 "Data: x%x x%x x%x x%x x%x x%x x%x\n",
++			 "Data: x%x x%x x%x x%x x%x x%x x%x x%x x%px\n",
+ 			 cmdiocb->iocb.ulpIoTag, rspiocb->iocb.ulpStatus,
+ 			 rspiocb->iocb.un.ulpWord[4], rspiocb->iocb.ulpTimeout,
+ 			 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
+-			 ndlp->nlp_rpi);
++			 ndlp->nlp_rpi, kref_read(&ndlp->kref), mbox);
+ 	if (mbox) {
+ 		if ((rspiocb->iocb.ulpStatus == 0) &&
+ 		    (ndlp->nlp_flag & NLP_ACC_REGLOGIN)) {
+@@ -4587,6 +4624,20 @@ out:
+ 		spin_unlock_irq(&ndlp->lock);
+ 	}
+ 
++	/* An SLI4 NPIV instance wants to drop the node at this point under
++	 * these conditions and release the RPI.
++	 */
++	if (phba->sli_rev == LPFC_SLI_REV4 &&
++	    (vport && vport->port_type == LPFC_NPIV_PORT) &&
++	    ndlp->nlp_flag & NLP_RELEASE_RPI) {
++		lpfc_sli4_free_rpi(phba, ndlp->nlp_rpi);
++		spin_lock_irq(&ndlp->lock);
++		ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR;
++		ndlp->nlp_flag &= ~NLP_RELEASE_RPI;
++		spin_unlock_irq(&ndlp->lock);
++		lpfc_drop_node(vport, ndlp);
++	}
++
+ 	/* Release the originating I/O reference. */
+ 	lpfc_els_free_iocb(phba, cmdiocb);
+ 	lpfc_nlp_put(ndlp);
+@@ -4775,10 +4826,10 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag,
+ 	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ 			 "0128 Xmit ELS ACC response Status: x%x, IoTag: x%x, "
+ 			 "XRI: x%x, DID: x%x, nlp_flag: x%x nlp_state: x%x "
+-			 "RPI: x%x, fc_flag x%x\n",
++			 "RPI: x%x, fc_flag x%x refcnt %d\n",
+ 			 rc, elsiocb->iotag, elsiocb->sli4_xritag,
+ 			 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
+-			 ndlp->nlp_rpi, vport->fc_flag);
++			 ndlp->nlp_rpi, vport->fc_flag, kref_read(&ndlp->kref));
+ 	return 0;
+ }
+ 
+@@ -4856,6 +4907,17 @@ lpfc_els_rsp_reject(struct lpfc_vport *vport, uint32_t rejectError,
+ 		return 1;
+ 	}
+ 
++	/* The NPIV instance is rejecting this unsolicited ELS. Make sure the
++	 * node's assigned RPI is released, as this node will get
++	 * freed.
++	 */
++	if (phba->sli_rev == LPFC_SLI_REV4 &&
++	    vport->port_type == LPFC_NPIV_PORT) {
++		spin_lock_irq(&ndlp->lock);
++		ndlp->nlp_flag |= NLP_RELEASE_RPI;
++		spin_unlock_irq(&ndlp->lock);
++	}
++
+ 	rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, elsiocb, 0);
+ 	if (rc == IOCB_ERROR) {
+ 		lpfc_els_free_iocb(phba, elsiocb);
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index f5a898c2c9043..3ea07034ab97c 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -4789,12 +4789,17 @@ lpfc_nlp_logo_unreg(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ 		ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING;
+ 		lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
+ 	} else {
++		/* NLP_RELEASE_RPI is only set for SLI4 ports. */
+ 		if (ndlp->nlp_flag & NLP_RELEASE_RPI) {
+ 			lpfc_sli4_free_rpi(vport->phba, ndlp->nlp_rpi);
++			spin_lock_irq(&ndlp->lock);
+ 			ndlp->nlp_flag &= ~NLP_RELEASE_RPI;
+ 			ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR;
++			spin_unlock_irq(&ndlp->lock);
+ 		}
++		spin_lock_irq(&ndlp->lock);
+ 		ndlp->nlp_flag &= ~NLP_UNREG_INP;
++		spin_unlock_irq(&ndlp->lock);
+ 	}
+ }
+ 
+@@ -5129,8 +5134,10 @@ lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	list_del_init(&ndlp->dev_loss_evt.evt_listp);
+ 	list_del_init(&ndlp->recovery_evt.evt_listp);
+ 	lpfc_cleanup_vports_rrqs(vport, ndlp);
++
+ 	if (phba->sli_rev == LPFC_SLI_REV4)
+ 		ndlp->nlp_flag |= NLP_RELEASE_RPI;
++
+ 	return 0;
+ }
+ 
+@@ -6176,8 +6183,23 @@ lpfc_nlp_release(struct kref *kref)
+ 	lpfc_cancel_retry_delay_tmo(vport, ndlp);
+ 	lpfc_cleanup_node(vport, ndlp);
+ 
+-	/* Clear Node key fields to give other threads notice
+-	 * that this node memory is not valid anymore.
++	/* Not all ELS transactions have registered the RPI with the port.
++	 * In these cases the rpi usage is temporary and the node is
++	 * released when the WQE is completed.  Catch this case to free the
++	 * RPI to the pool.  Because this node is in the release path, a lock
++	 * is unnecessary.  All references are gone and the node has been
++	 * dequeued.
++	 */
++	if (ndlp->nlp_flag & NLP_RELEASE_RPI) {
++		if (ndlp->nlp_rpi != LPFC_RPI_ALLOC_ERROR &&
++		    !(ndlp->nlp_flag & (NLP_RPI_REGISTERED | NLP_UNREG_INP))) {
++			lpfc_sli4_free_rpi(vport->phba, ndlp->nlp_rpi);
++			ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR;
++		}
++	}
++
++	/* The node is not freed back to memory, it is released to a pool so
++	 * the node fields need to be cleaned up.
+ 	 */
+ 	ndlp->vport = NULL;
+ 	ndlp->nlp_state = NLP_STE_FREED_NODE;
+@@ -6257,6 +6279,7 @@ lpfc_nlp_not_used(struct lpfc_nodelist *ndlp)
+ 		"node not used:   did:x%x flg:x%x refcnt:x%x",
+ 		ndlp->nlp_DID, ndlp->nlp_flag,
+ 		kref_read(&ndlp->kref));
++
+ 	if (kref_read(&ndlp->kref) == 1)
+ 		if (lpfc_nlp_put(ndlp))
+ 			return 1;
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 5f018d02bf562..f81dfa3cb0a1e 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -3532,13 +3532,6 @@ lpfc_offline_prep(struct lpfc_hba *phba, int mbx_action)
+ 			list_for_each_entry_safe(ndlp, next_ndlp,
+ 						 &vports[i]->fc_nodes,
+ 						 nlp_listp) {
+-				if (ndlp->nlp_state == NLP_STE_UNUSED_NODE) {
+-					/* Driver must assume RPI is invalid for
+-					 * any unused or inactive node.
+-					 */
+-					ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR;
+-					continue;
+-				}
+ 
+ 				spin_lock_irq(&ndlp->lock);
+ 				ndlp->nlp_flag &= ~NLP_NPR_ADISC;
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index bb4e65a32ecc6..3dac116c405bf 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -567,15 +567,24 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ 		/* no deferred ACC */
+ 		kfree(save_iocb);
+ 
+-		/* In order to preserve RPIs, we want to cleanup
+-		 * the default RPI the firmware created to rcv
+-		 * this ELS request. The only way to do this is
+-		 * to register, then unregister the RPI.
++		/* This is an NPIV SLI4 instance that does not need to register
++		 * a default RPI.
+ 		 */
+-		spin_lock_irq(&ndlp->lock);
+-		ndlp->nlp_flag |= (NLP_RM_DFLT_RPI | NLP_ACC_REGLOGIN |
+-				   NLP_RCV_PLOGI);
+-		spin_unlock_irq(&ndlp->lock);
++		if (phba->sli_rev == LPFC_SLI_REV4) {
++			mempool_free(login_mbox, phba->mbox_mem_pool);
++			login_mbox = NULL;
++		} else {
++			/* In order to preserve RPIs, we want to cleanup
++			 * the default RPI the firmware created to rcv
++			 * this ELS request. The only way to do this is
++			 * to register, then unregister the RPI.
++			 */
++			spin_lock_irq(&ndlp->lock);
++			ndlp->nlp_flag |= (NLP_RM_DFLT_RPI | NLP_ACC_REGLOGIN |
++					   NLP_RCV_PLOGI);
++			spin_unlock_irq(&ndlp->lock);
++		}
++
+ 		stat.un.b.lsRjtRsnCode = LSRJT_INVALID_CMD;
+ 		stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE;
+ 		rc = lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb,
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index fc3682f15f509..bc0bcb0dccc9a 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -13625,9 +13625,15 @@ lpfc_sli4_sp_handle_mbox_event(struct lpfc_hba *phba, struct lpfc_mcqe *mcqe)
+ 		if (mcqe_status == MB_CQE_STATUS_SUCCESS) {
+ 			mp = (struct lpfc_dmabuf *)(pmb->ctx_buf);
+ 			ndlp = (struct lpfc_nodelist *)pmb->ctx_ndlp;
+-			/* Reg_LOGIN of dflt RPI was successful. Now lets get
+-			 * RID of the PPI using the same mbox buffer.
++
++			/* Reg_LOGIN of dflt RPI was successful. Mark the
++			 * node as having an UNREG_LOGIN in progress to stop
++			 * an unsolicited PLOGI from the same NPortId from
++			 * starting another mailbox transaction.
+ 			 */
++			spin_lock_irqsave(&ndlp->lock, iflags);
++			ndlp->nlp_flag |= NLP_UNREG_INP;
++			spin_unlock_irqrestore(&ndlp->lock, iflags);
+ 			lpfc_unreg_login(phba, vport->vpi,
+ 					 pmbox->un.varWords[0], pmb);
+ 			pmb->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi;
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 2221175ae051f..cd94a0c81f835 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -3203,6 +3203,8 @@ megasas_build_io_fusion(struct megasas_instance *instance,
+ {
+ 	int sge_count;
+ 	u8  cmd_type;
++	u16 pd_index = 0;
++	u8 drive_type = 0;
+ 	struct MPI2_RAID_SCSI_IO_REQUEST *io_request = cmd->io_request;
+ 	struct MR_PRIV_DEVICE *mr_device_priv_data;
+ 	mr_device_priv_data = scp->device->hostdata;
+@@ -3237,8 +3239,12 @@ megasas_build_io_fusion(struct megasas_instance *instance,
+ 		megasas_build_syspd_fusion(instance, scp, cmd, true);
+ 		break;
+ 	case NON_READ_WRITE_SYSPDIO:
+-		if (instance->secure_jbod_support ||
+-		    mr_device_priv_data->is_tm_capable)
++		pd_index = MEGASAS_PD_INDEX(scp);
++		drive_type = instance->pd_list[pd_index].driveType;
++		if ((instance->secure_jbod_support ||
++		     mr_device_priv_data->is_tm_capable) ||
++		     (instance->adapter_type >= VENTURA_SERIES &&
++		     drive_type == TYPE_ENCLOSURE))
+ 			megasas_build_syspd_fusion(instance, scp, cmd, false);
+ 		else
+ 			megasas_build_syspd_fusion(instance, scp, cmd, true);
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index d00aca3c77cec..a5f70f0e02871 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -6884,8 +6884,10 @@ _scsih_expander_add(struct MPT3SAS_ADAPTER *ioc, u16 handle)
+ 		 handle, parent_handle,
+ 		 (u64)sas_expander->sas_address, sas_expander->num_phys);
+ 
+-	if (!sas_expander->num_phys)
++	if (!sas_expander->num_phys) {
++		rc = -1;
+ 		goto out_fail;
++	}
+ 	sas_expander->phy = kcalloc(sas_expander->num_phys,
+ 	    sizeof(struct _sas_phy), GFP_KERNEL);
+ 	if (!sas_expander->phy) {
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index 08c05403cd720..087c7ff28cd52 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -377,6 +377,7 @@ static int qedi_conn_bind(struct iscsi_cls_session *cls_session,
+ 	struct qedi_ctx *qedi = iscsi_host_priv(shost);
+ 	struct qedi_endpoint *qedi_ep;
+ 	struct iscsi_endpoint *ep;
++	int rc = 0;
+ 
+ 	ep = iscsi_lookup_endpoint(transport_fd);
+ 	if (!ep)
+@@ -384,11 +385,16 @@ static int qedi_conn_bind(struct iscsi_cls_session *cls_session,
+ 
+ 	qedi_ep = ep->dd_data;
+ 	if ((qedi_ep->state == EP_STATE_TCP_FIN_RCVD) ||
+-	    (qedi_ep->state == EP_STATE_TCP_RST_RCVD))
+-		return -EINVAL;
++	    (qedi_ep->state == EP_STATE_TCP_RST_RCVD)) {
++		rc = -EINVAL;
++		goto put_ep;
++	}
++
++	if (iscsi_conn_bind(cls_session, cls_conn, is_leading)) {
++		rc = -EINVAL;
++		goto put_ep;
++	}
+ 
+-	if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
+-		return -EINVAL;
+ 
+ 	qedi_ep->conn = qedi_conn;
+ 	qedi_conn->ep = qedi_ep;
+@@ -398,13 +404,18 @@ static int qedi_conn_bind(struct iscsi_cls_session *cls_session,
+ 	qedi_conn->cmd_cleanup_req = 0;
+ 	qedi_conn->cmd_cleanup_cmpl = 0;
+ 
+-	if (qedi_bind_conn_to_iscsi_cid(qedi, qedi_conn))
+-		return -EINVAL;
++	if (qedi_bind_conn_to_iscsi_cid(qedi, qedi_conn)) {
++		rc = -EINVAL;
++		goto put_ep;
++	}
++
+ 
+ 	spin_lock_init(&qedi_conn->tmf_work_lock);
+ 	INIT_LIST_HEAD(&qedi_conn->tmf_work_list);
+ 	init_waitqueue_head(&qedi_conn->wait_queue);
+-	return 0;
++put_ep:
++	iscsi_put_endpoint(ep);
++	return rc;
+ }
+ 
+ static int qedi_iscsi_update_conn(struct qedi_ctx *qedi,
+@@ -1401,6 +1412,7 @@ struct iscsi_transport qedi_iscsi_transport = {
+ 	.destroy_session = qedi_session_destroy,
+ 	.create_conn = qedi_conn_create,
+ 	.bind_conn = qedi_conn_bind,
++	.unbind_conn = iscsi_conn_unbind,
+ 	.start_conn = qedi_conn_start,
+ 	.stop_conn = iscsi_conn_stop,
+ 	.destroy_conn = qedi_conn_destroy,
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index ad3afe30f617d..0e7a7e82e0284 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -259,6 +259,7 @@ static struct iscsi_transport qla4xxx_iscsi_transport = {
+ 	.start_conn             = qla4xxx_conn_start,
+ 	.create_conn            = qla4xxx_conn_create,
+ 	.bind_conn              = qla4xxx_conn_bind,
++	.unbind_conn		= iscsi_conn_unbind,
+ 	.stop_conn              = iscsi_conn_stop,
+ 	.destroy_conn           = qla4xxx_conn_destroy,
+ 	.set_param              = iscsi_set_param,
+@@ -3234,6 +3235,7 @@ static int qla4xxx_conn_bind(struct iscsi_cls_session *cls_session,
+ 	conn = cls_conn->dd_data;
+ 	qla_conn = conn->dd_data;
+ 	qla_conn->qla_ep = ep->dd_data;
++	iscsi_put_endpoint(ep);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 532304d42f00e..269bfb8f91655 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -728,6 +728,7 @@ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
+ 				case 0x07: /* operation in progress */
+ 				case 0x08: /* Long write in progress */
+ 				case 0x09: /* self test in progress */
++				case 0x11: /* notify (enable spinup) required */
+ 				case 0x14: /* space allocation in progress */
+ 				case 0x1a: /* start stop unit in progress */
+ 				case 0x1b: /* sanitize in progress */
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 441f0152193f7..6ce1cc992d1d0 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -86,16 +86,10 @@ struct iscsi_internal {
+ 	struct transport_container session_cont;
+ };
+ 
+-/* Worker to perform connection failure on unresponsive connections
+- * completely in kernel space.
+- */
+-static void stop_conn_work_fn(struct work_struct *work);
+-static DECLARE_WORK(stop_conn_work, stop_conn_work_fn);
+-
+ static atomic_t iscsi_session_nr; /* sysfs session id for next new session */
+ static struct workqueue_struct *iscsi_eh_timer_workq;
+ 
+-static struct workqueue_struct *iscsi_destroy_workq;
++static struct workqueue_struct *iscsi_conn_cleanup_workq;
+ 
+ static DEFINE_IDA(iscsi_sess_ida);
+ /*
+@@ -268,9 +262,20 @@ void iscsi_destroy_endpoint(struct iscsi_endpoint *ep)
+ }
+ EXPORT_SYMBOL_GPL(iscsi_destroy_endpoint);
+ 
++void iscsi_put_endpoint(struct iscsi_endpoint *ep)
++{
++	put_device(&ep->dev);
++}
++EXPORT_SYMBOL_GPL(iscsi_put_endpoint);
++
++/**
++ * iscsi_lookup_endpoint - get ep from handle
++ * @handle: endpoint handle
++ *
++ * Caller must do an iscsi_put_endpoint() when done with the ep.
++ */
+ struct iscsi_endpoint *iscsi_lookup_endpoint(u64 handle)
+ {
+-	struct iscsi_endpoint *ep;
+ 	struct device *dev;
+ 
+ 	dev = class_find_device(&iscsi_endpoint_class, NULL, &handle,
+@@ -278,13 +283,7 @@ struct iscsi_endpoint *iscsi_lookup_endpoint(u64 handle)
+ 	if (!dev)
+ 		return NULL;
+ 
+-	ep = iscsi_dev_to_endpoint(dev);
+-	/*
+-	 * we can drop this now because the interface will prevent
+-	 * removals and lookups from racing.
+-	 */
+-	put_device(dev);
+-	return ep;
++	return iscsi_dev_to_endpoint(dev);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_lookup_endpoint);
+ 
+@@ -1620,12 +1619,6 @@ static DECLARE_TRANSPORT_CLASS(iscsi_connection_class,
+ static struct sock *nls;
+ static DEFINE_MUTEX(rx_queue_mutex);
+ 
+-/*
+- * conn_mutex protects the {start,bind,stop,destroy}_conn from racing
+- * against the kernel stop_connection recovery mechanism
+- */
+-static DEFINE_MUTEX(conn_mutex);
+-
+ static LIST_HEAD(sesslist);
+ static DEFINE_SPINLOCK(sesslock);
+ static LIST_HEAD(connlist);
+@@ -1976,6 +1969,8 @@ static void __iscsi_unblock_session(struct work_struct *work)
+  */
+ void iscsi_unblock_session(struct iscsi_cls_session *session)
+ {
++	flush_work(&session->block_work);
++
+ 	queue_work(iscsi_eh_timer_workq, &session->unblock_work);
+ 	/*
+ 	 * Blocking the session can be done from any context so we only
+@@ -2242,6 +2237,123 @@ void iscsi_remove_session(struct iscsi_cls_session *session)
+ }
+ EXPORT_SYMBOL_GPL(iscsi_remove_session);
+ 
++static void iscsi_stop_conn(struct iscsi_cls_conn *conn, int flag)
++{
++	ISCSI_DBG_TRANS_CONN(conn, "Stopping conn.\n");
++
++	switch (flag) {
++	case STOP_CONN_RECOVER:
++		conn->state = ISCSI_CONN_FAILED;
++		break;
++	case STOP_CONN_TERM:
++		conn->state = ISCSI_CONN_DOWN;
++		break;
++	default:
++		iscsi_cls_conn_printk(KERN_ERR, conn, "invalid stop flag %d\n",
++				      flag);
++		return;
++	}
++
++	conn->transport->stop_conn(conn, flag);
++	ISCSI_DBG_TRANS_CONN(conn, "Stopping conn done.\n");
++}
++
++static int iscsi_if_stop_conn(struct iscsi_transport *transport,
++			      struct iscsi_uevent *ev)
++{
++	int flag = ev->u.stop_conn.flag;
++	struct iscsi_cls_conn *conn;
++
++	conn = iscsi_conn_lookup(ev->u.stop_conn.sid, ev->u.stop_conn.cid);
++	if (!conn)
++		return -EINVAL;
++
++	ISCSI_DBG_TRANS_CONN(conn, "iscsi if conn stop.\n");
++	/*
++	 * If this is a termination we have to call stop_conn with that flag
++	 * so the correct states get set. If we haven't run the work yet try to
++	 * avoid the extra run.
++	 */
++	if (flag == STOP_CONN_TERM) {
++		cancel_work_sync(&conn->cleanup_work);
++		iscsi_stop_conn(conn, flag);
++	} else {
++		/*
++		 * Figure out if it was the kernel or userspace initiating this.
++		 */
++		if (!test_and_set_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
++			iscsi_stop_conn(conn, flag);
++		} else {
++			ISCSI_DBG_TRANS_CONN(conn,
++					     "flush kernel conn cleanup.\n");
++			flush_work(&conn->cleanup_work);
++		}
++		/*
++		 * Only clear for recovery to avoid extra cleanup runs during
++		 * termination.
++		 */
++		clear_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags);
++	}
++	ISCSI_DBG_TRANS_CONN(conn, "iscsi if conn stop done.\n");
++	return 0;
++}
++
++static void iscsi_ep_disconnect(struct iscsi_cls_conn *conn, bool is_active)
++{
++	struct iscsi_cls_session *session = iscsi_conn_to_session(conn);
++	struct iscsi_endpoint *ep;
++
++	ISCSI_DBG_TRANS_CONN(conn, "disconnect ep.\n");
++	conn->state = ISCSI_CONN_FAILED;
++
++	if (!conn->ep || !session->transport->ep_disconnect)
++		return;
++
++	ep = conn->ep;
++	conn->ep = NULL;
++
++	session->transport->unbind_conn(conn, is_active);
++	session->transport->ep_disconnect(ep);
++	ISCSI_DBG_TRANS_CONN(conn, "disconnect ep done.\n");
++}
++
++static void iscsi_cleanup_conn_work_fn(struct work_struct *work)
++{
++	struct iscsi_cls_conn *conn = container_of(work, struct iscsi_cls_conn,
++						   cleanup_work);
++	struct iscsi_cls_session *session = iscsi_conn_to_session(conn);
++
++	mutex_lock(&conn->ep_mutex);
++	/*
++	 * If we are not at least bound there is nothing for us to do. Userspace
++	 * will do an ep_disconnect call if offload is used, but will not be
++	 * doing a stop since there is nothing to clean up, so we have to clear
++	 * the cleanup bit here.
++	 */
++	if (conn->state != ISCSI_CONN_BOUND && conn->state != ISCSI_CONN_UP) {
++		ISCSI_DBG_TRANS_CONN(conn, "Got error while conn is already failed. Ignoring.\n");
++		clear_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags);
++		mutex_unlock(&conn->ep_mutex);
++		return;
++	}
++
++	iscsi_ep_disconnect(conn, false);
++
++	if (system_state != SYSTEM_RUNNING) {
++		/*
++		 * If the user has set up for the session to never timeout
++		 * then hang like they wanted. For all other cases fail right
++		 * away since userspace is not going to relogin.
++		 */
++		if (session->recovery_tmo > 0)
++			session->recovery_tmo = 0;
++	}
++
++	iscsi_stop_conn(conn, STOP_CONN_RECOVER);
++	mutex_unlock(&conn->ep_mutex);
++	ISCSI_DBG_TRANS_CONN(conn, "cleanup done.\n");
++}
++
+ void iscsi_free_session(struct iscsi_cls_session *session)
+ {
+ 	ISCSI_DBG_TRANS_SESSION(session, "Freeing session\n");
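The functions added in this hunk replace the global conn_mutex removed above with per-connection state: ISCSI_CLS_CONN_BIT_CLEANUP acts as an ownership token, so whichever path claims it first runs the stop itself, while everyone else only flushes the already-queued cleanup_work. The idiom, reduced to a sketch with hypothetical do_cleanup()/wait_for_cleanup() helpers:

	/* First claimant does the work; later callers wait for it. */
	if (!test_and_set_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags))
		do_cleanup(conn);		/* we won the race */
	else
		wait_for_cleanup(conn);		/* e.g. flush_work() on it */

	clear_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags);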
+@@ -2281,7 +2393,7 @@ iscsi_create_conn(struct iscsi_cls_session *session, int dd_size, uint32_t cid)
+ 
+ 	mutex_init(&conn->ep_mutex);
+ 	INIT_LIST_HEAD(&conn->conn_list);
+-	INIT_LIST_HEAD(&conn->conn_list_err);
++	INIT_WORK(&conn->cleanup_work, iscsi_cleanup_conn_work_fn);
+ 	conn->transport = transport;
+ 	conn->cid = cid;
+ 	conn->state = ISCSI_CONN_DOWN;
+@@ -2338,7 +2450,6 @@ int iscsi_destroy_conn(struct iscsi_cls_conn *conn)
+ 
+ 	spin_lock_irqsave(&connlock, flags);
+ 	list_del(&conn->conn_list);
+-	list_del(&conn->conn_list_err);
+ 	spin_unlock_irqrestore(&connlock, flags);
+ 
+ 	transport_unregister_device(&conn->dev);
+@@ -2453,77 +2564,6 @@ int iscsi_offload_mesg(struct Scsi_Host *shost,
+ }
+ EXPORT_SYMBOL_GPL(iscsi_offload_mesg);
+ 
+-/*
+- * This can be called without the rx_queue_mutex, if invoked by the kernel
+- * stop work. But, in that case, it is guaranteed not to race with
+- * iscsi_destroy by conn_mutex.
+- */
+-static void iscsi_if_stop_conn(struct iscsi_cls_conn *conn, int flag)
+-{
+-	/*
+-	 * It is important that this path doesn't rely on
+-	 * rx_queue_mutex, otherwise, a thread doing allocation on a
+-	 * start_session/start_connection could sleep waiting on a
+-	 * writeback to a failed iscsi device, that cannot be recovered
+-	 * because the lock is held.  If we don't hold it here, the
+-	 * kernel stop_conn_work_fn has a chance to stop the broken
+-	 * session and resolve the allocation.
+-	 *
+-	 * Still, the user invoked .stop_conn() needs to be serialized
+-	 * with stop_conn_work_fn by a private mutex.  Not pretty, but
+-	 * it works.
+-	 */
+-	mutex_lock(&conn_mutex);
+-	switch (flag) {
+-	case STOP_CONN_RECOVER:
+-		conn->state = ISCSI_CONN_FAILED;
+-		break;
+-	case STOP_CONN_TERM:
+-		conn->state = ISCSI_CONN_DOWN;
+-		break;
+-	default:
+-		iscsi_cls_conn_printk(KERN_ERR, conn,
+-				      "invalid stop flag %d\n", flag);
+-		goto unlock;
+-	}
+-
+-	conn->transport->stop_conn(conn, flag);
+-unlock:
+-	mutex_unlock(&conn_mutex);
+-}
+-
+-static void stop_conn_work_fn(struct work_struct *work)
+-{
+-	struct iscsi_cls_conn *conn, *tmp;
+-	unsigned long flags;
+-	LIST_HEAD(recovery_list);
+-
+-	spin_lock_irqsave(&connlock, flags);
+-	if (list_empty(&connlist_err)) {
+-		spin_unlock_irqrestore(&connlock, flags);
+-		return;
+-	}
+-	list_splice_init(&connlist_err, &recovery_list);
+-	spin_unlock_irqrestore(&connlock, flags);
+-
+-	list_for_each_entry_safe(conn, tmp, &recovery_list, conn_list_err) {
+-		uint32_t sid = iscsi_conn_get_sid(conn);
+-		struct iscsi_cls_session *session;
+-
+-		session = iscsi_session_lookup(sid);
+-		if (session) {
+-			if (system_state != SYSTEM_RUNNING) {
+-				session->recovery_tmo = 0;
+-				iscsi_if_stop_conn(conn, STOP_CONN_TERM);
+-			} else {
+-				iscsi_if_stop_conn(conn, STOP_CONN_RECOVER);
+-			}
+-		}
+-
+-		list_del_init(&conn->conn_list_err);
+-	}
+-}
+-
+ void iscsi_conn_error_event(struct iscsi_cls_conn *conn, enum iscsi_err error)
+ {
+ 	struct nlmsghdr	*nlh;
+@@ -2531,12 +2571,9 @@ void iscsi_conn_error_event(struct iscsi_cls_conn *conn, enum iscsi_err error)
+ 	struct iscsi_uevent *ev;
+ 	struct iscsi_internal *priv;
+ 	int len = nlmsg_total_size(sizeof(*ev));
+-	unsigned long flags;
+ 
+-	spin_lock_irqsave(&connlock, flags);
+-	list_add(&conn->conn_list_err, &connlist_err);
+-	spin_unlock_irqrestore(&connlock, flags);
+-	queue_work(system_unbound_wq, &stop_conn_work);
++	if (!test_and_set_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags))
++		queue_work(iscsi_conn_cleanup_workq, &conn->cleanup_work);
+ 
+ 	priv = iscsi_if_transport_lookup(conn->transport);
+ 	if (!priv)
+@@ -2866,26 +2903,17 @@ static int
+ iscsi_if_destroy_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ {
+ 	struct iscsi_cls_conn *conn;
+-	unsigned long flags;
+ 
+ 	conn = iscsi_conn_lookup(ev->u.d_conn.sid, ev->u.d_conn.cid);
+ 	if (!conn)
+ 		return -EINVAL;
+ 
+-	spin_lock_irqsave(&connlock, flags);
+-	if (!list_empty(&conn->conn_list_err)) {
+-		spin_unlock_irqrestore(&connlock, flags);
+-		return -EAGAIN;
+-	}
+-	spin_unlock_irqrestore(&connlock, flags);
+-
++	ISCSI_DBG_TRANS_CONN(conn, "Flushing cleanup during destruction\n");
++	flush_work(&conn->cleanup_work);
+ 	ISCSI_DBG_TRANS_CONN(conn, "Destroying transport conn\n");
+ 
+-	mutex_lock(&conn_mutex);
+ 	if (transport->destroy_conn)
+ 		transport->destroy_conn(conn);
+-	mutex_unlock(&conn_mutex);
+-
+ 	return 0;
+ }
+ 
+@@ -2975,15 +3003,31 @@ static int iscsi_if_ep_disconnect(struct iscsi_transport *transport,
+ 	ep = iscsi_lookup_endpoint(ep_handle);
+ 	if (!ep)
+ 		return -EINVAL;
++
+ 	conn = ep->conn;
+-	if (conn) {
+-		mutex_lock(&conn->ep_mutex);
+-		conn->ep = NULL;
++	if (!conn) {
++		/*
++		 * conn was not even bound yet, so we can't get iscsi conn
++		 * failures yet.
++		 */
++		transport->ep_disconnect(ep);
++		goto put_ep;
++	}
++
++	mutex_lock(&conn->ep_mutex);
++	/* Check if this was a conn error and the kernel took ownership */
++	if (test_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
++		ISCSI_DBG_TRANS_CONN(conn, "flush kernel conn cleanup.\n");
+ 		mutex_unlock(&conn->ep_mutex);
+-		conn->state = ISCSI_CONN_FAILED;
++
++		flush_work(&conn->cleanup_work);
++		goto put_ep;
+ 	}
+ 
+-	transport->ep_disconnect(ep);
++	iscsi_ep_disconnect(conn, false);
++	mutex_unlock(&conn->ep_mutex);
++put_ep:
++	iscsi_put_endpoint(ep);
+ 	return 0;
+ }
+ 
+@@ -3009,6 +3053,7 @@ iscsi_if_transport_ep(struct iscsi_transport *transport,
+ 
+ 		ev->r.retcode = transport->ep_poll(ep,
+ 						   ev->u.ep_poll.timeout_ms);
++		iscsi_put_endpoint(ep);
+ 		break;
+ 	case ISCSI_UEVENT_TRANSPORT_EP_DISCONNECT:
+ 		rc = iscsi_if_ep_disconnect(transport,
+@@ -3639,18 +3684,129 @@ exit_host_stats:
+ 	return err;
+ }
+ 
++static int iscsi_if_transport_conn(struct iscsi_transport *transport,
++				   struct nlmsghdr *nlh)
++{
++	struct iscsi_uevent *ev = nlmsg_data(nlh);
++	struct iscsi_cls_session *session;
++	struct iscsi_cls_conn *conn = NULL;
++	struct iscsi_endpoint *ep;
++	uint32_t pdu_len;
++	int err = 0;
++
++	switch (nlh->nlmsg_type) {
++	case ISCSI_UEVENT_CREATE_CONN:
++		return iscsi_if_create_conn(transport, ev);
++	case ISCSI_UEVENT_DESTROY_CONN:
++		return iscsi_if_destroy_conn(transport, ev);
++	case ISCSI_UEVENT_STOP_CONN:
++		return iscsi_if_stop_conn(transport, ev);
++	}
++
++	/*
++	 * The following cmds need to be run under the ep_mutex so in kernel
++	 * conn cleanup (ep_disconnect + unbind and conn) is not done while
++	 * these are running. They also must not run if we have just run a conn
++	 * cleanup because they would set the state in a way that might allow
++	 * IO or send IO themselves.
++	 */
++	switch (nlh->nlmsg_type) {
++	case ISCSI_UEVENT_START_CONN:
++		conn = iscsi_conn_lookup(ev->u.start_conn.sid,
++					 ev->u.start_conn.cid);
++		break;
++	case ISCSI_UEVENT_BIND_CONN:
++		conn = iscsi_conn_lookup(ev->u.b_conn.sid, ev->u.b_conn.cid);
++		break;
++	case ISCSI_UEVENT_SEND_PDU:
++		conn = iscsi_conn_lookup(ev->u.send_pdu.sid, ev->u.send_pdu.cid);
++		break;
++	}
++
++	if (!conn)
++		return -EINVAL;
++
++	mutex_lock(&conn->ep_mutex);
++	if (test_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
++		mutex_unlock(&conn->ep_mutex);
++		ev->r.retcode = -ENOTCONN;
++		return 0;
++	}
++
++	switch (nlh->nlmsg_type) {
++	case ISCSI_UEVENT_BIND_CONN:
++		if (conn->ep) {
++			/*
++			 * For offload boot support where iscsid is restarted
++			 * during the pivot root stage, the ep will be intact
++			 * here when the new iscsid instance starts up and
++			 * reconnects.
++			 */
++			iscsi_ep_disconnect(conn, true);
++		}
++
++		session = iscsi_session_lookup(ev->u.b_conn.sid);
++		if (!session) {
++			err = -EINVAL;
++			break;
++		}
++
++		ev->r.retcode =	transport->bind_conn(session, conn,
++						ev->u.b_conn.transport_eph,
++						ev->u.b_conn.is_leading);
++		if (!ev->r.retcode)
++			conn->state = ISCSI_CONN_BOUND;
++
++		if (ev->r.retcode || !transport->ep_connect)
++			break;
++
++		ep = iscsi_lookup_endpoint(ev->u.b_conn.transport_eph);
++		if (ep) {
++			ep->conn = conn;
++			conn->ep = ep;
++			iscsi_put_endpoint(ep);
++		} else {
++			err = -ENOTCONN;
++			iscsi_cls_conn_printk(KERN_ERR, conn,
++					      "Could not set ep conn binding\n");
++		}
++		break;
++	case ISCSI_UEVENT_START_CONN:
++		ev->r.retcode = transport->start_conn(conn);
++		if (!ev->r.retcode)
++			conn->state = ISCSI_CONN_UP;
++		break;
++	case ISCSI_UEVENT_SEND_PDU:
++		pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev);
++
++		if ((ev->u.send_pdu.hdr_size > pdu_len) ||
++		    (ev->u.send_pdu.data_size > (pdu_len - ev->u.send_pdu.hdr_size))) {
++			err = -EINVAL;
++			break;
++		}
++
++		ev->r.retcode =	transport->send_pdu(conn,
++				(struct iscsi_hdr *)((char *)ev + sizeof(*ev)),
++				(char *)ev + sizeof(*ev) + ev->u.send_pdu.hdr_size,
++				ev->u.send_pdu.data_size);
++		break;
++	default:
++		err = -ENOSYS;
++	}
++
++	mutex_unlock(&conn->ep_mutex);
++	return err;
++}
+ 
+ static int
+ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ {
+ 	int err = 0;
+ 	u32 portid;
+-	u32 pdu_len;
+ 	struct iscsi_uevent *ev = nlmsg_data(nlh);
+ 	struct iscsi_transport *transport = NULL;
+ 	struct iscsi_internal *priv;
+ 	struct iscsi_cls_session *session;
+-	struct iscsi_cls_conn *conn;
+ 	struct iscsi_endpoint *ep = NULL;
+ 
+ 	if (!netlink_capable(skb, CAP_SYS_ADMIN))
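In the new iscsi_if_transport_conn() above, the SEND_PDU case revalidates the payload before use: pdu_len is whatever remains of the netlink message once the netlink header and the event structure are subtracted, and both the iSCSI header and the data must fit inside it. The same arithmetic as a standalone check, assuming nlh and ev as above:

	uint32_t pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev);

	if (ev->u.send_pdu.hdr_size > pdu_len ||
	    ev->u.send_pdu.data_size > pdu_len - ev->u.send_pdu.hdr_size)
		return -EINVAL;	/* header or data would overrun the message */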
+@@ -3691,6 +3847,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 					ev->u.c_bound_session.initial_cmdsn,
+ 					ev->u.c_bound_session.cmds_max,
+ 					ev->u.c_bound_session.queue_depth);
++		iscsi_put_endpoint(ep);
+ 		break;
+ 	case ISCSI_UEVENT_DESTROY_SESSION:
+ 		session = iscsi_session_lookup(ev->u.d_session.sid);
+@@ -3715,7 +3872,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 			list_del_init(&session->sess_list);
+ 			spin_unlock_irqrestore(&sesslock, flags);
+ 
+-			queue_work(iscsi_destroy_workq, &session->destroy_work);
++			queue_work(system_unbound_wq, &session->destroy_work);
+ 		}
+ 		break;
+ 	case ISCSI_UEVENT_UNBIND_SESSION:
+@@ -3726,89 +3883,16 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 		else
+ 			err = -EINVAL;
+ 		break;
+-	case ISCSI_UEVENT_CREATE_CONN:
+-		err = iscsi_if_create_conn(transport, ev);
+-		break;
+-	case ISCSI_UEVENT_DESTROY_CONN:
+-		err = iscsi_if_destroy_conn(transport, ev);
+-		break;
+-	case ISCSI_UEVENT_BIND_CONN:
+-		session = iscsi_session_lookup(ev->u.b_conn.sid);
+-		conn = iscsi_conn_lookup(ev->u.b_conn.sid, ev->u.b_conn.cid);
+-
+-		if (conn && conn->ep)
+-			iscsi_if_ep_disconnect(transport, conn->ep->id);
+-
+-		if (!session || !conn) {
+-			err = -EINVAL;
+-			break;
+-		}
+-
+-		mutex_lock(&conn_mutex);
+-		ev->r.retcode =	transport->bind_conn(session, conn,
+-						ev->u.b_conn.transport_eph,
+-						ev->u.b_conn.is_leading);
+-		if (!ev->r.retcode)
+-			conn->state = ISCSI_CONN_BOUND;
+-		mutex_unlock(&conn_mutex);
+-
+-		if (ev->r.retcode || !transport->ep_connect)
+-			break;
+-
+-		ep = iscsi_lookup_endpoint(ev->u.b_conn.transport_eph);
+-		if (ep) {
+-			ep->conn = conn;
+-
+-			mutex_lock(&conn->ep_mutex);
+-			conn->ep = ep;
+-			mutex_unlock(&conn->ep_mutex);
+-		} else
+-			iscsi_cls_conn_printk(KERN_ERR, conn,
+-					      "Could not set ep conn "
+-					      "binding\n");
+-		break;
+ 	case ISCSI_UEVENT_SET_PARAM:
+ 		err = iscsi_set_param(transport, ev);
+ 		break;
+-	case ISCSI_UEVENT_START_CONN:
+-		conn = iscsi_conn_lookup(ev->u.start_conn.sid, ev->u.start_conn.cid);
+-		if (conn) {
+-			mutex_lock(&conn_mutex);
+-			ev->r.retcode = transport->start_conn(conn);
+-			if (!ev->r.retcode)
+-				conn->state = ISCSI_CONN_UP;
+-			mutex_unlock(&conn_mutex);
+-		}
+-		else
+-			err = -EINVAL;
+-		break;
++	case ISCSI_UEVENT_CREATE_CONN:
++	case ISCSI_UEVENT_DESTROY_CONN:
+ 	case ISCSI_UEVENT_STOP_CONN:
+-		conn = iscsi_conn_lookup(ev->u.stop_conn.sid, ev->u.stop_conn.cid);
+-		if (conn)
+-			iscsi_if_stop_conn(conn, ev->u.stop_conn.flag);
+-		else
+-			err = -EINVAL;
+-		break;
++	case ISCSI_UEVENT_START_CONN:
++	case ISCSI_UEVENT_BIND_CONN:
+ 	case ISCSI_UEVENT_SEND_PDU:
+-		pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev);
+-
+-		if ((ev->u.send_pdu.hdr_size > pdu_len) ||
+-		    (ev->u.send_pdu.data_size > (pdu_len - ev->u.send_pdu.hdr_size))) {
+-			err = -EINVAL;
+-			break;
+-		}
+-
+-		conn = iscsi_conn_lookup(ev->u.send_pdu.sid, ev->u.send_pdu.cid);
+-		if (conn) {
+-			mutex_lock(&conn_mutex);
+-			ev->r.retcode =	transport->send_pdu(conn,
+-				(struct iscsi_hdr*)((char*)ev + sizeof(*ev)),
+-				(char*)ev + sizeof(*ev) + ev->u.send_pdu.hdr_size,
+-				ev->u.send_pdu.data_size);
+-			mutex_unlock(&conn_mutex);
+-		}
+-		else
+-			err = -EINVAL;
++		err = iscsi_if_transport_conn(transport, nlh);
+ 		break;
+ 	case ISCSI_UEVENT_GET_STATS:
+ 		err = iscsi_if_get_stats(transport, nlh);
+@@ -4656,6 +4740,7 @@ iscsi_register_transport(struct iscsi_transport *tt)
+ 	int err;
+ 
+ 	BUG_ON(!tt);
++	WARN_ON(tt->ep_disconnect && !tt->unbind_conn);
+ 
+ 	priv = iscsi_if_transport_lookup(tt);
+ 	if (priv)
+@@ -4810,10 +4895,10 @@ static __init int iscsi_transport_init(void)
+ 		goto release_nls;
+ 	}
+ 
+-	iscsi_destroy_workq = alloc_workqueue("%s",
+-			WQ_SYSFS | __WQ_LEGACY | WQ_MEM_RECLAIM | WQ_UNBOUND,
+-			1, "iscsi_destroy");
+-	if (!iscsi_destroy_workq) {
++	iscsi_conn_cleanup_workq = alloc_workqueue("%s",
++			WQ_SYSFS | WQ_MEM_RECLAIM | WQ_UNBOUND, 0,
++			"iscsi_conn_cleanup");
++	if (!iscsi_conn_cleanup_workq) {
+ 		err = -ENOMEM;
+ 		goto destroy_wq;
+ 	}
+@@ -4843,7 +4928,7 @@ unregister_transport_class:
+ 
+ static void __exit iscsi_transport_exit(void)
+ {
+-	destroy_workqueue(iscsi_destroy_workq);
++	destroy_workqueue(iscsi_conn_cleanup_workq);
+ 	destroy_workqueue(iscsi_eh_timer_workq);
+ 	netlink_kernel_release(nls);
+ 	bus_unregister(&iscsi_flashnode_bus);
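The workqueue swap at the end also changes behavior: the old iscsi_destroy_workq was an ordered legacy queue (max_active 1, __WQ_LEGACY), while the new iscsi_conn_cleanup_workq passes 0 for max_active, which alloc_workqueue() treats as "use the default limit", so cleanup works for independent connections are no longer serialized behind one another. A sketch of the allocation, keeping WQ_MEM_RECLAIM on the assumption that cleanup may run under memory pressure:

	wq = alloc_workqueue("%s", WQ_SYSFS | WQ_MEM_RECLAIM | WQ_UNBOUND,
			     0 /* max_active: default */, "iscsi_conn_cleanup");
	if (!wq)
		return -ENOMEM;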
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index 1eaedaaba0944..1a18308f4ef4f 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -422,7 +422,6 @@ static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus,
+ 	struct completion *port_ready;
+ 	struct sdw_dpn_prop *dpn_prop;
+ 	struct sdw_prepare_ch prep_ch;
+-	unsigned int time_left;
+ 	bool intr = false;
+ 	int ret = 0, val;
+ 	u32 addr;
+@@ -479,15 +478,15 @@ static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus,
+ 
+ 		/* Wait for completion on port ready */
+ 		port_ready = &s_rt->slave->port_ready[prep_ch.num];
+-		time_left = wait_for_completion_timeout(port_ready,
+-				msecs_to_jiffies(dpn_prop->ch_prep_timeout));
++		wait_for_completion_timeout(port_ready,
++			msecs_to_jiffies(dpn_prop->ch_prep_timeout));
+ 
+ 		val = sdw_read(s_rt->slave, SDW_DPN_PREPARESTATUS(p_rt->num));
+-		val &= p_rt->ch_mask;
+-		if (!time_left || val) {
++		if ((val < 0) || (val & p_rt->ch_mask)) {
++			ret = (val < 0) ? val : -ETIMEDOUT;
+ 			dev_err(&s_rt->slave->dev,
+-				"Chn prep failed for port:%d\n", prep_ch.num);
+-			return -ETIMEDOUT;
++				"Chn prep failed for port %d: %d\n", prep_ch.num, ret);
++			return ret;
+ 		}
+ 	}
+ 
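In the soundwire hunk the timeout result of wait_for_completion_timeout() is deliberately discarded; the DPN_PrepareStatus read that follows is treated as authoritative, presumably because the port may become ready between the timeout firing and the register read. The resulting decision logic, restated compactly (slave, port_num and ch_mask stand in for the s_rt/p_rt fields used above):

	val = sdw_read(slave, SDW_DPN_PREPARESTATUS(port_num));
	if (val < 0)
		ret = val;		/* the read itself failed */
	else if (val & ch_mask)
		ret = -ETIMEDOUT;	/* our channels are still busy */
	else
		ret = 0;		/* port prepared successfully */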
+diff --git a/drivers/spi/spi-loopback-test.c b/drivers/spi/spi-loopback-test.c
+index f1cf2232f0b5e..4d4f77a186a98 100644
+--- a/drivers/spi/spi-loopback-test.c
++++ b/drivers/spi/spi-loopback-test.c
+@@ -875,7 +875,7 @@ static int spi_test_run_iter(struct spi_device *spi,
+ 		test.transfers[i].len = len;
+ 		if (test.transfers[i].tx_buf)
+ 			test.transfers[i].tx_buf += tx_off;
+-		if (test.transfers[i].tx_buf)
++		if (test.transfers[i].rx_buf)
+ 			test.transfers[i].rx_buf += rx_off;
+ 	}
+ 
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index ecba6b4a5d85d..b2c4621db34d7 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -725,7 +725,7 @@ static int meson_spicc_probe(struct platform_device *pdev)
+ 	ret = clk_prepare_enable(spicc->pclk);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "pclk clock enable failed\n");
+-		goto out_master;
++		goto out_core_clk;
+ 	}
+ 
+ 	device_reset_optional(&pdev->dev);
+@@ -752,7 +752,7 @@ static int meson_spicc_probe(struct platform_device *pdev)
+ 	ret = meson_spicc_clk_init(spicc);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "clock registration failed\n");
+-		goto out_master;
++		goto out_clk;
+ 	}
+ 
+ 	ret = devm_spi_register_master(&pdev->dev, master);
+@@ -764,9 +764,11 @@ static int meson_spicc_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ out_clk:
+-	clk_disable_unprepare(spicc->core);
+ 	clk_disable_unprepare(spicc->pclk);
+ 
++out_core_clk:
++	clk_disable_unprepare(spicc->core);
++
+ out_master:
+ 	spi_master_put(master);
+ 
+diff --git a/drivers/spi/spi-omap-100k.c b/drivers/spi/spi-omap-100k.c
+index 7062f29022539..f104470605b38 100644
+--- a/drivers/spi/spi-omap-100k.c
++++ b/drivers/spi/spi-omap-100k.c
+@@ -241,7 +241,7 @@ static int omap1_spi100k_setup_transfer(struct spi_device *spi,
+ 	else
+ 		word_len = spi->bits_per_word;
+ 
+-	if (spi->bits_per_word > 32)
++	if (word_len > 32)
+ 		return -EINVAL;
+ 	cs->word_len = word_len;
+ 
+diff --git a/drivers/spi/spi-sun6i.c b/drivers/spi/spi-sun6i.c
+index cc8401980125d..23ad052528dbe 100644
+--- a/drivers/spi/spi-sun6i.c
++++ b/drivers/spi/spi-sun6i.c
+@@ -379,6 +379,10 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
+ 	}
+ 
+ 	sun6i_spi_write(sspi, SUN6I_CLK_CTL_REG, reg);
++	/* Finally enable the bus - doing so before might raise SCK to HIGH */
++	reg = sun6i_spi_read(sspi, SUN6I_GBL_CTL_REG);
++	reg |= SUN6I_GBL_CTL_BUS_ENABLE;
++	sun6i_spi_write(sspi, SUN6I_GBL_CTL_REG, reg);
+ 
+ 	/* Setup the transfer now... */
+ 	if (sspi->tx_buf)
+@@ -504,7 +508,7 @@ static int sun6i_spi_runtime_resume(struct device *dev)
+ 	}
+ 
+ 	sun6i_spi_write(sspi, SUN6I_GBL_CTL_REG,
+-			SUN6I_GBL_CTL_BUS_ENABLE | SUN6I_GBL_CTL_MASTER | SUN6I_GBL_CTL_TP);
++			SUN6I_GBL_CTL_MASTER | SUN6I_GBL_CTL_TP);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/spi/spi-topcliff-pch.c b/drivers/spi/spi-topcliff-pch.c
+index b8870784fc6ef..8c4615b763398 100644
+--- a/drivers/spi/spi-topcliff-pch.c
++++ b/drivers/spi/spi-topcliff-pch.c
+@@ -580,8 +580,10 @@ static void pch_spi_set_tx(struct pch_spi_data *data, int *bpw)
+ 	data->pkt_tx_buff = kzalloc(size, GFP_KERNEL);
+ 	if (data->pkt_tx_buff != NULL) {
+ 		data->pkt_rx_buff = kzalloc(size, GFP_KERNEL);
+-		if (!data->pkt_rx_buff)
++		if (!data->pkt_rx_buff) {
+ 			kfree(data->pkt_tx_buff);
++			data->pkt_tx_buff = NULL;
++		}
+ 	}
+ 
+ 	if (!data->pkt_rx_buff) {
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index e353b7a9e54eb..56c173869d975 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -2057,6 +2057,7 @@ of_register_spi_device(struct spi_controller *ctlr, struct device_node *nc)
+ 	/* Store a pointer to the node in the device structure */
+ 	of_node_get(nc);
+ 	spi->dev.of_node = nc;
++	spi->dev.fwnode = of_fwnode_handle(nc);
+ 
+ 	/* Register the new device */
+ 	rc = spi_add_device(spi);
+@@ -2621,9 +2622,10 @@ static int spi_get_gpio_descs(struct spi_controller *ctlr)
+ 		native_cs_mask |= BIT(i);
+ 	}
+ 
+-	ctlr->unused_native_cs = ffz(native_cs_mask);
+-	if (num_cs_gpios && ctlr->max_native_cs &&
+-	    ctlr->unused_native_cs >= ctlr->max_native_cs) {
++	ctlr->unused_native_cs = ffs(~native_cs_mask) - 1;
++
++	if ((ctlr->flags & SPI_MASTER_GPIO_SS) && num_cs_gpios &&
++	    ctlr->max_native_cs && ctlr->unused_native_cs >= ctlr->max_native_cs) {
+ 		dev_err(dev, "No unused native chip select available\n");
+ 		return -EINVAL;
+ 	}
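The spi.c hunk swaps ffz() for the equivalent ffs(~mask) - 1 (both yield the index of the lowest clear bit) and, more importantly, gates the "no unused native chip select" error on SPI_MASTER_GPIO_SS, so only controllers that must drive a native CS alongside a GPIO CS can fail here. A worked instance of the bit arithmetic:

	/* native_cs_mask = 0x7: CS 0-2 in use, bit 3 is the lowest clear bit. */
	unsigned int first_free = ffs(~0x7u) - 1;	/* ffs() is 1-based: 4 - 1 = 3 */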
+diff --git a/drivers/ssb/scan.c b/drivers/ssb/scan.c
+index f49ab1aa2149a..4161e5d1f276e 100644
+--- a/drivers/ssb/scan.c
++++ b/drivers/ssb/scan.c
+@@ -325,6 +325,7 @@ int ssb_bus_scan(struct ssb_bus *bus,
+ 	if (bus->nr_devices > ARRAY_SIZE(bus->devices)) {
+ 		pr_err("More than %d ssb cores found (%d)\n",
+ 		       SSB_MAX_NR_CORES, bus->nr_devices);
++		err = -EINVAL;
+ 		goto err_unmap;
+ 	}
+ 	if (bus->bustype == SSB_BUSTYPE_SSB) {
+diff --git a/drivers/ssb/sdio.c b/drivers/ssb/sdio.c
+index 7fe0afb42234f..66c5c2169704b 100644
+--- a/drivers/ssb/sdio.c
++++ b/drivers/ssb/sdio.c
+@@ -411,7 +411,6 @@ static void ssb_sdio_block_write(struct ssb_device *dev, const void *buffer,
+ 	sdio_claim_host(bus->host_sdio);
+ 	if (unlikely(ssb_sdio_switch_core(bus, dev))) {
+ 		error = -EIO;
+-		memset((void *)buffer, 0xff, count);
+ 		goto err_out;
+ 	}
+ 	offset |= bus->sdio_sbaddr & 0xffff;
+diff --git a/drivers/staging/fbtft/fb_agm1264k-fl.c b/drivers/staging/fbtft/fb_agm1264k-fl.c
+index eeeeec97ad278..b545c2ca80a41 100644
+--- a/drivers/staging/fbtft/fb_agm1264k-fl.c
++++ b/drivers/staging/fbtft/fb_agm1264k-fl.c
+@@ -84,9 +84,9 @@ static void reset(struct fbtft_par *par)
+ 
+ 	dev_dbg(par->info->device, "%s()\n", __func__);
+ 
+-	gpiod_set_value(par->gpio.reset, 0);
+-	udelay(20);
+ 	gpiod_set_value(par->gpio.reset, 1);
++	udelay(20);
++	gpiod_set_value(par->gpio.reset, 0);
+ 	mdelay(120);
+ }
+ 
+@@ -194,12 +194,12 @@ static void write_reg8_bus8(struct fbtft_par *par, int len, ...)
+ 	/* select chip */
+ 	if (*buf) {
+ 		/* cs1 */
+-		gpiod_set_value(par->CS0, 1);
+-		gpiod_set_value(par->CS1, 0);
+-	} else {
+-		/* cs0 */
+ 		gpiod_set_value(par->CS0, 0);
+ 		gpiod_set_value(par->CS1, 1);
++	} else {
++		/* cs0 */
++		gpiod_set_value(par->CS0, 1);
++		gpiod_set_value(par->CS1, 0);
+ 	}
+ 
+ 	gpiod_set_value(par->RS, 0); /* RS->0 (command mode) */
+@@ -397,8 +397,8 @@ static int write_vmem(struct fbtft_par *par, size_t offset, size_t len)
+ 	}
+ 	kfree(convert_buf);
+ 
+-	gpiod_set_value(par->CS0, 1);
+-	gpiod_set_value(par->CS1, 1);
++	gpiod_set_value(par->CS0, 0);
++	gpiod_set_value(par->CS1, 0);
+ 
+ 	return ret;
+ }
+@@ -419,10 +419,10 @@ static int write(struct fbtft_par *par, void *buf, size_t len)
+ 		for (i = 0; i < 8; ++i)
+ 			gpiod_set_value(par->gpio.db[i], data & (1 << i));
+ 		/* set E */
+-		gpiod_set_value(par->EPIN, 1);
++		gpiod_set_value(par->EPIN, 0);
+ 		udelay(5);
+ 		/* unset E - write */
+-		gpiod_set_value(par->EPIN, 0);
++		gpiod_set_value(par->EPIN, 1);
+ 		udelay(1);
+ 	}
+ 
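This hunk and the fbtft ones that follow all invert gpiod_set_value() arguments for the same reason: the gpiod API takes logical levels, where 1 means "assert the line" and any active-low polarity declared in the device tree is applied by gpiolib, so code carried over from the legacy GPIO number API had the sense reversed. The convention the fixes restore, with reset_gpio as a placeholder descriptor:

	/* Logical levels: 1 = assert, 0 = deassert; gpiolib handles ACTIVE_LOW. */
	gpiod_set_value(reset_gpio, 1);		/* assert reset */
	udelay(20);
	gpiod_set_value(reset_gpio, 0);		/* release reset */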
+diff --git a/drivers/staging/fbtft/fb_bd663474.c b/drivers/staging/fbtft/fb_bd663474.c
+index e2c7646588f8c..1629c2c440a97 100644
+--- a/drivers/staging/fbtft/fb_bd663474.c
++++ b/drivers/staging/fbtft/fb_bd663474.c
+@@ -12,7 +12,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ 
+ #include "fbtft.h"
+@@ -24,9 +23,6 @@
+ 
+ static int init_display(struct fbtft_par *par)
+ {
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+-
+ 	par->fbtftops.reset(par);
+ 
+ 	/* Initialization sequence from Lib_UTFT */
+diff --git a/drivers/staging/fbtft/fb_ili9163.c b/drivers/staging/fbtft/fb_ili9163.c
+index 05648c3ffe474..6582a2c90aafc 100644
+--- a/drivers/staging/fbtft/fb_ili9163.c
++++ b/drivers/staging/fbtft/fb_ili9163.c
+@@ -11,7 +11,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ #include <video/mipi_display.h>
+ 
+@@ -77,9 +76,6 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	par->fbtftops.reset(par);
+ 
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+-
+ 	write_reg(par, MIPI_DCS_SOFT_RESET); /* software reset */
+ 	mdelay(500);
+ 	write_reg(par, MIPI_DCS_EXIT_SLEEP_MODE); /* exit sleep */
+diff --git a/drivers/staging/fbtft/fb_ili9320.c b/drivers/staging/fbtft/fb_ili9320.c
+index f2e72d14431db..a8f4c618b754c 100644
+--- a/drivers/staging/fbtft/fb_ili9320.c
++++ b/drivers/staging/fbtft/fb_ili9320.c
+@@ -8,7 +8,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/spi/spi.h>
+ #include <linux/delay.h>
+ 
+diff --git a/drivers/staging/fbtft/fb_ili9325.c b/drivers/staging/fbtft/fb_ili9325.c
+index c9aa4cb431236..16d3b17ca2798 100644
+--- a/drivers/staging/fbtft/fb_ili9325.c
++++ b/drivers/staging/fbtft/fb_ili9325.c
+@@ -10,7 +10,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ 
+ #include "fbtft.h"
+@@ -85,9 +84,6 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	par->fbtftops.reset(par);
+ 
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+-
+ 	bt &= 0x07;
+ 	vc &= 0x07;
+ 	vrh &= 0x0f;
+diff --git a/drivers/staging/fbtft/fb_ili9340.c b/drivers/staging/fbtft/fb_ili9340.c
+index 415183c7054a8..704236bcaf3ff 100644
+--- a/drivers/staging/fbtft/fb_ili9340.c
++++ b/drivers/staging/fbtft/fb_ili9340.c
+@@ -8,7 +8,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ #include <video/mipi_display.h>
+ 
+diff --git a/drivers/staging/fbtft/fb_s6d1121.c b/drivers/staging/fbtft/fb_s6d1121.c
+index 8c7de32903434..62f27172f8449 100644
+--- a/drivers/staging/fbtft/fb_s6d1121.c
++++ b/drivers/staging/fbtft/fb_s6d1121.c
+@@ -12,7 +12,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ 
+ #include "fbtft.h"
+@@ -29,9 +28,6 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	par->fbtftops.reset(par);
+ 
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+-
+ 	/* Initialization sequence from Lib_UTFT */
+ 
+ 	write_reg(par, 0x0011, 0x2004);
+diff --git a/drivers/staging/fbtft/fb_sh1106.c b/drivers/staging/fbtft/fb_sh1106.c
+index 6f7249493ea3b..7b9ab39e1c1a8 100644
+--- a/drivers/staging/fbtft/fb_sh1106.c
++++ b/drivers/staging/fbtft/fb_sh1106.c
+@@ -9,7 +9,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ 
+ #include "fbtft.h"
+diff --git a/drivers/staging/fbtft/fb_ssd1289.c b/drivers/staging/fbtft/fb_ssd1289.c
+index 7a3fe022cc69d..f27bab38b3ec4 100644
+--- a/drivers/staging/fbtft/fb_ssd1289.c
++++ b/drivers/staging/fbtft/fb_ssd1289.c
+@@ -10,7 +10,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ 
+ #include "fbtft.h"
+ 
+@@ -28,9 +27,6 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	par->fbtftops.reset(par);
+ 
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+-
+ 	write_reg(par, 0x00, 0x0001);
+ 	write_reg(par, 0x03, 0xA8A4);
+ 	write_reg(par, 0x0C, 0x0000);
+diff --git a/drivers/staging/fbtft/fb_ssd1325.c b/drivers/staging/fbtft/fb_ssd1325.c
+index 8a3140d41d8bb..796a2ac3e1948 100644
+--- a/drivers/staging/fbtft/fb_ssd1325.c
++++ b/drivers/staging/fbtft/fb_ssd1325.c
+@@ -35,8 +35,6 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	par->fbtftops.reset(par);
+ 
+-	gpiod_set_value(par->gpio.cs, 0);
+-
+ 	write_reg(par, 0xb3);
+ 	write_reg(par, 0xf0);
+ 	write_reg(par, 0xae);
+diff --git a/drivers/staging/fbtft/fb_ssd1331.c b/drivers/staging/fbtft/fb_ssd1331.c
+index 37622c9462aa7..ec5eced7f8cbd 100644
+--- a/drivers/staging/fbtft/fb_ssd1331.c
++++ b/drivers/staging/fbtft/fb_ssd1331.c
+@@ -81,8 +81,7 @@ static void write_reg8_bus8(struct fbtft_par *par, int len, ...)
+ 	va_start(args, len);
+ 
+ 	*buf = (u8)va_arg(args, unsigned int);
+-	if (par->gpio.dc)
+-		gpiod_set_value(par->gpio.dc, 0);
++	gpiod_set_value(par->gpio.dc, 0);
+ 	ret = par->fbtftops.write(par, par->buf, sizeof(u8));
+ 	if (ret < 0) {
+ 		va_end(args);
+@@ -104,8 +103,7 @@ static void write_reg8_bus8(struct fbtft_par *par, int len, ...)
+ 			return;
+ 		}
+ 	}
+-	if (par->gpio.dc)
+-		gpiod_set_value(par->gpio.dc, 1);
++	gpiod_set_value(par->gpio.dc, 1);
+ 	va_end(args);
+ }
+ 
+diff --git a/drivers/staging/fbtft/fb_ssd1351.c b/drivers/staging/fbtft/fb_ssd1351.c
+index 900b28d826b28..cf263a58a1489 100644
+--- a/drivers/staging/fbtft/fb_ssd1351.c
++++ b/drivers/staging/fbtft/fb_ssd1351.c
+@@ -2,7 +2,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/spi/spi.h>
+ #include <linux/delay.h>
+ 
+diff --git a/drivers/staging/fbtft/fb_upd161704.c b/drivers/staging/fbtft/fb_upd161704.c
+index c77832ae5e5ba..c680160d63807 100644
+--- a/drivers/staging/fbtft/fb_upd161704.c
++++ b/drivers/staging/fbtft/fb_upd161704.c
+@@ -12,7 +12,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ 
+ #include "fbtft.h"
+@@ -26,9 +25,6 @@ static int init_display(struct fbtft_par *par)
+ {
+ 	par->fbtftops.reset(par);
+ 
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+-
+ 	/* Initialization sequence from Lib_UTFT */
+ 
+ 	/* register reset */
+diff --git a/drivers/staging/fbtft/fb_watterott.c b/drivers/staging/fbtft/fb_watterott.c
+index 76b25df376b8f..a57e1f4feef35 100644
+--- a/drivers/staging/fbtft/fb_watterott.c
++++ b/drivers/staging/fbtft/fb_watterott.c
+@@ -8,7 +8,6 @@
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/delay.h>
+ 
+ #include "fbtft.h"
+diff --git a/drivers/staging/fbtft/fbtft-bus.c b/drivers/staging/fbtft/fbtft-bus.c
+index 63c65dd67b175..3d422bc116411 100644
+--- a/drivers/staging/fbtft/fbtft-bus.c
++++ b/drivers/staging/fbtft/fbtft-bus.c
+@@ -135,8 +135,7 @@ int fbtft_write_vmem16_bus8(struct fbtft_par *par, size_t offset, size_t len)
+ 	remain = len / 2;
+ 	vmem16 = (u16 *)(par->info->screen_buffer + offset);
+ 
+-	if (par->gpio.dc)
+-		gpiod_set_value(par->gpio.dc, 1);
++	gpiod_set_value(par->gpio.dc, 1);
+ 
+ 	/* non buffered write */
+ 	if (!par->txbuf.buf)
+diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c
+index 4f362dad4436a..3723269890d5f 100644
+--- a/drivers/staging/fbtft/fbtft-core.c
++++ b/drivers/staging/fbtft/fbtft-core.c
+@@ -38,8 +38,7 @@ int fbtft_write_buf_dc(struct fbtft_par *par, void *buf, size_t len, int dc)
+ {
+ 	int ret;
+ 
+-	if (par->gpio.dc)
+-		gpiod_set_value(par->gpio.dc, dc);
++	gpiod_set_value(par->gpio.dc, dc);
+ 
+ 	ret = par->fbtftops.write(par, buf, len);
+ 	if (ret < 0)
+@@ -76,20 +75,16 @@ static int fbtft_request_one_gpio(struct fbtft_par *par,
+ 				  struct gpio_desc **gpiop)
+ {
+ 	struct device *dev = par->info->device;
+-	int ret = 0;
+ 
+ 	*gpiop = devm_gpiod_get_index_optional(dev, name, index,
+-					       GPIOD_OUT_HIGH);
+-	if (IS_ERR(*gpiop)) {
+-		ret = PTR_ERR(*gpiop);
+-		dev_err(dev,
+-			"Failed to request %s GPIO: %d\n", name, ret);
+-		return ret;
+-	}
++					       GPIOD_OUT_LOW);
++	if (IS_ERR(*gpiop))
++		return dev_err_probe(dev, PTR_ERR(*gpiop), "Failed to request %s GPIO\n", name);
++
+ 	fbtft_par_dbg(DEBUG_REQUEST_GPIOS, par, "%s: '%s' GPIO\n",
+ 		      __func__, name);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int fbtft_request_gpios(struct fbtft_par *par)
+@@ -226,11 +221,15 @@ static void fbtft_reset(struct fbtft_par *par)
+ {
+ 	if (!par->gpio.reset)
+ 		return;
++
+ 	fbtft_par_dbg(DEBUG_RESET, par, "%s()\n", __func__);
++
+ 	gpiod_set_value_cansleep(par->gpio.reset, 1);
+ 	usleep_range(20, 40);
+ 	gpiod_set_value_cansleep(par->gpio.reset, 0);
+ 	msleep(120);
++
++	gpiod_set_value_cansleep(par->gpio.cs, 1);  /* Activate chip */
+ }
+ 
+ static void fbtft_update_display(struct fbtft_par *par, unsigned int start_line,
+@@ -922,8 +921,6 @@ static int fbtft_init_display_from_property(struct fbtft_par *par)
+ 		goto out_free;
+ 
+ 	par->fbtftops.reset(par);
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+ 
+ 	index = -1;
+ 	val = values[++index];
+@@ -1018,8 +1015,6 @@ int fbtft_init_display(struct fbtft_par *par)
+ 	}
+ 
+ 	par->fbtftops.reset(par);
+-	if (par->gpio.cs)
+-		gpiod_set_value(par->gpio.cs, 0);  /* Activate chip */
+ 
+ 	i = 0;
+ 	while (i < FBTFT_MAX_INIT_SEQUENCE) {
+diff --git a/drivers/staging/fbtft/fbtft-io.c b/drivers/staging/fbtft/fbtft-io.c
+index 0863d257d7620..de1904a443c27 100644
+--- a/drivers/staging/fbtft/fbtft-io.c
++++ b/drivers/staging/fbtft/fbtft-io.c
+@@ -142,12 +142,12 @@ int fbtft_write_gpio8_wr(struct fbtft_par *par, void *buf, size_t len)
+ 		data = *(u8 *)buf;
+ 
+ 		/* Start writing by pulling down /WR */
+-		gpiod_set_value(par->gpio.wr, 0);
++		gpiod_set_value(par->gpio.wr, 1);
+ 
+ 		/* Set data */
+ #ifndef DO_NOT_OPTIMIZE_FBTFT_WRITE_GPIO
+ 		if (data == prev_data) {
+-			gpiod_set_value(par->gpio.wr, 0); /* used as delay */
++			gpiod_set_value(par->gpio.wr, 1); /* used as delay */
+ 		} else {
+ 			for (i = 0; i < 8; i++) {
+ 				if ((data & 1) != (prev_data & 1))
+@@ -165,7 +165,7 @@ int fbtft_write_gpio8_wr(struct fbtft_par *par, void *buf, size_t len)
+ #endif
+ 
+ 		/* Pullup /WR */
+-		gpiod_set_value(par->gpio.wr, 1);
++		gpiod_set_value(par->gpio.wr, 0);
+ 
+ #ifndef DO_NOT_OPTIMIZE_FBTFT_WRITE_GPIO
+ 		prev_data = *(u8 *)buf;
+@@ -192,12 +192,12 @@ int fbtft_write_gpio16_wr(struct fbtft_par *par, void *buf, size_t len)
+ 		data = *(u16 *)buf;
+ 
+ 		/* Start writing by pulling down /WR */
+-		gpiod_set_value(par->gpio.wr, 0);
++		gpiod_set_value(par->gpio.wr, 1);
+ 
+ 		/* Set data */
+ #ifndef DO_NOT_OPTIMIZE_FBTFT_WRITE_GPIO
+ 		if (data == prev_data) {
+-			gpiod_set_value(par->gpio.wr, 0); /* used as delay */
++			gpiod_set_value(par->gpio.wr, 1); /* used as delay */
+ 		} else {
+ 			for (i = 0; i < 16; i++) {
+ 				if ((data & 1) != (prev_data & 1))
+@@ -215,7 +215,7 @@ int fbtft_write_gpio16_wr(struct fbtft_par *par, void *buf, size_t len)
+ #endif
+ 
+ 		/* Pullup /WR */
+-		gpiod_set_value(par->gpio.wr, 1);
++		gpiod_set_value(par->gpio.wr, 0);
+ 
+ #ifndef DO_NOT_OPTIMIZE_FBTFT_WRITE_GPIO
+ 		prev_data = *(u16 *)buf;
+diff --git a/drivers/staging/gdm724x/gdm_lte.c b/drivers/staging/gdm724x/gdm_lte.c
+index 571f47d394843..bd5f874334043 100644
+--- a/drivers/staging/gdm724x/gdm_lte.c
++++ b/drivers/staging/gdm724x/gdm_lte.c
+@@ -611,10 +611,12 @@ static void gdm_lte_netif_rx(struct net_device *dev, char *buf,
+ 						  * bytes (99,130,83,99 dec)
+ 						  */
+ 			} __packed;
+-			void *addr = buf + sizeof(struct iphdr) +
+-				sizeof(struct udphdr) +
+-				offsetof(struct dhcp_packet, chaddr);
+-			ether_addr_copy(nic->dest_mac_addr, addr);
++			int offset = sizeof(struct iphdr) +
++				     sizeof(struct udphdr) +
++				     offsetof(struct dhcp_packet, chaddr);
++			if (offset + ETH_ALEN > len)
++				return;
++			ether_addr_copy(nic->dest_mac_addr, buf + offset);
+ 		}
+ 	}
+ 
+@@ -677,6 +679,7 @@ static void gdm_lte_multi_sdu_pkt(struct phy_dev *phy_dev, char *buf, int len)
+ 	struct sdu *sdu = NULL;
+ 	u8 endian = phy_dev->get_endian(phy_dev->priv_dev);
+ 	u8 *data = (u8 *)multi_sdu->data;
++	int copied;
+ 	u16 i = 0;
+ 	u16 num_packet;
+ 	u16 hci_len;
+@@ -688,6 +691,12 @@ static void gdm_lte_multi_sdu_pkt(struct phy_dev *phy_dev, char *buf, int len)
+ 	num_packet = gdm_dev16_to_cpu(endian, multi_sdu->num_packet);
+ 
+ 	for (i = 0; i < num_packet; i++) {
++		copied = data - multi_sdu->data;
++		if (len < copied + sizeof(*sdu)) {
++			pr_err("rx prevent buffer overflow");
++			return;
++		}
++
+ 		sdu = (struct sdu *)data;
+ 
+ 		cmd_evt  = gdm_dev16_to_cpu(endian, sdu->cmd_evt);
+@@ -698,7 +707,8 @@ static void gdm_lte_multi_sdu_pkt(struct phy_dev *phy_dev, char *buf, int len)
+ 			pr_err("rx sdu wrong hci %04x\n", cmd_evt);
+ 			return;
+ 		}
+-		if (hci_len < 12) {
++		if (hci_len < 12 ||
++		    len < copied + sizeof(*sdu) + (hci_len - 12)) {
+ 			pr_err("rx sdu invalid len %d\n", hci_len);
+ 			return;
+ 		}
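Both gdm_lte changes are bounds checks on lengths taken from the received frame: the DHCP path now confirms the computed chaddr offset plus ETH_ALEN lies within the buffer, and the multi-SDU loop checks, on every iteration, that the next SDU header and its payload fit inside len before dereferencing them. The loop invariant, written as one condition (names as in the hunk; per its arithmetic, hci_len counts a 12-byte header):

	int copied = data - multi_sdu->data;	/* bytes consumed so far */

	/* Header and payload of the next SDU must both lie inside the buffer. */
	if (len < copied + sizeof(*sdu) ||
	    hci_len < 12 ||
	    len < copied + sizeof(*sdu) + (hci_len - 12))
		return;		/* truncated or malicious frame */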
+diff --git a/drivers/staging/media/hantro/hantro_drv.c b/drivers/staging/media/hantro/hantro_drv.c
+index 595e82a827287..eea2009fa17bd 100644
+--- a/drivers/staging/media/hantro/hantro_drv.c
++++ b/drivers/staging/media/hantro/hantro_drv.c
+@@ -56,16 +56,12 @@ dma_addr_t hantro_get_ref(struct hantro_ctx *ctx, u64 ts)
+ 	return hantro_get_dec_buf_addr(ctx, buf);
+ }
+ 
+-static void hantro_job_finish(struct hantro_dev *vpu,
+-			      struct hantro_ctx *ctx,
+-			      enum vb2_buffer_state result)
++static void hantro_job_finish_no_pm(struct hantro_dev *vpu,
++				    struct hantro_ctx *ctx,
++				    enum vb2_buffer_state result)
+ {
+ 	struct vb2_v4l2_buffer *src, *dst;
+ 
+-	pm_runtime_mark_last_busy(vpu->dev);
+-	pm_runtime_put_autosuspend(vpu->dev);
+-	clk_bulk_disable(vpu->variant->num_clocks, vpu->clocks);
+-
+ 	src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ 
+@@ -81,6 +77,18 @@ static void hantro_job_finish(struct hantro_dev *vpu,
+ 					 result);
+ }
+ 
++static void hantro_job_finish(struct hantro_dev *vpu,
++			      struct hantro_ctx *ctx,
++			      enum vb2_buffer_state result)
++{
++	pm_runtime_mark_last_busy(vpu->dev);
++	pm_runtime_put_autosuspend(vpu->dev);
++
++	clk_bulk_disable(vpu->variant->num_clocks, vpu->clocks);
++
++	hantro_job_finish_no_pm(vpu, ctx, result);
++}
++
+ void hantro_irq_done(struct hantro_dev *vpu,
+ 		     enum vb2_buffer_state result)
+ {
+@@ -152,12 +160,15 @@ static void device_run(void *priv)
+ 	src = hantro_get_src_buf(ctx);
+ 	dst = hantro_get_dst_buf(ctx);
+ 
++	ret = pm_runtime_get_sync(ctx->dev->dev);
++	if (ret < 0) {
++		pm_runtime_put_noidle(ctx->dev->dev);
++		goto err_cancel_job;
++	}
++
+ 	ret = clk_bulk_enable(ctx->dev->variant->num_clocks, ctx->dev->clocks);
+ 	if (ret)
+ 		goto err_cancel_job;
+-	ret = pm_runtime_get_sync(ctx->dev->dev);
+-	if (ret < 0)
+-		goto err_cancel_job;
+ 
+ 	v4l2_m2m_buf_copy_metadata(src, dst, true);
+ 
+@@ -165,7 +176,7 @@ static void device_run(void *priv)
+ 	return;
+ 
+ err_cancel_job:
+-	hantro_job_finish(ctx->dev, ctx, VB2_BUF_STATE_ERROR);
++	hantro_job_finish_no_pm(ctx->dev, ctx, VB2_BUF_STATE_ERROR);
+ }
+ 
+ static struct v4l2_m2m_ops vpu_m2m_ops = {
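The reordering in device_run() exists for the sake of the error path: pm_runtime_get_sync() raises the device usage count even when it fails, so the failure branch must drop that count with pm_runtime_put_noidle(), and since neither power nor clocks are held when a job is cancelled early, the cancel path now uses hantro_job_finish_no_pm(). The pm_runtime idiom in isolation:

	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		/* get_sync bumped the usage count even though it failed */
		pm_runtime_put_noidle(dev);	/* drop the count, skip callbacks */
		return ret;
	}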
+diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c
+index 1bc118e375a12..7ccc6405036ae 100644
+--- a/drivers/staging/media/hantro/hantro_v4l2.c
++++ b/drivers/staging/media/hantro/hantro_v4l2.c
+@@ -639,7 +639,14 @@ static int hantro_buf_prepare(struct vb2_buffer *vb)
+ 	ret = hantro_buf_plane_check(vb, pix_fmt);
+ 	if (ret)
+ 		return ret;
+-	vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
++	/*
++	 * Buffer's bytesused must be written by driver for CAPTURE buffers.
++	 * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets
++	 * it to buffer length).
++	 */
++	if (V4L2_TYPE_IS_CAPTURE(vq->type))
++		vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
+index e3bfd635a89ae..6a94fff49bf6b 100644
+--- a/drivers/staging/media/imx/imx-media-csi.c
++++ b/drivers/staging/media/imx/imx-media-csi.c
+@@ -750,9 +750,10 @@ static int csi_setup(struct csi_priv *priv)
+ 
+ static int csi_start(struct csi_priv *priv)
+ {
+-	struct v4l2_fract *output_fi;
++	struct v4l2_fract *input_fi, *output_fi;
+ 	int ret;
+ 
++	input_fi = &priv->frame_interval[CSI_SINK_PAD];
+ 	output_fi = &priv->frame_interval[priv->active_output_pad];
+ 
+ 	/* start upstream */
+@@ -761,6 +762,17 @@ static int csi_start(struct csi_priv *priv)
+ 	if (ret)
+ 		return ret;
+ 
++	/* Skip first few frames from a BT.656 source */
++	if (priv->upstream_ep.bus_type == V4L2_MBUS_BT656) {
++		u32 delay_usec, bad_frames = 20;
++
++		delay_usec = DIV_ROUND_UP_ULL((u64)USEC_PER_SEC *
++			input_fi->numerator * bad_frames,
++			input_fi->denominator);
++
++		usleep_range(delay_usec, delay_usec + 1000);
++	}
++
+ 	if (priv->dest == IPU_CSI_DEST_IDMAC) {
+ 		ret = csi_idmac_start(priv);
+ 		if (ret)
+diff --git a/drivers/staging/media/imx/imx7-mipi-csis.c b/drivers/staging/media/imx/imx7-mipi-csis.c
+index 025fdc488bd66..25d0f89b2e53e 100644
+--- a/drivers/staging/media/imx/imx7-mipi-csis.c
++++ b/drivers/staging/media/imx/imx7-mipi-csis.c
+@@ -666,13 +666,15 @@ static void mipi_csis_clear_counters(struct csi_state *state)
+ 
+ static void mipi_csis_log_counters(struct csi_state *state, bool non_errors)
+ {
+-	int i = non_errors ? MIPI_CSIS_NUM_EVENTS : MIPI_CSIS_NUM_EVENTS - 4;
++	unsigned int num_events = non_errors ? MIPI_CSIS_NUM_EVENTS
++				: MIPI_CSIS_NUM_EVENTS - 6;
+ 	struct device *dev = &state->pdev->dev;
+ 	unsigned long flags;
++	unsigned int i;
+ 
+ 	spin_lock_irqsave(&state->slock, flags);
+ 
+-	for (i--; i >= 0; i--) {
++	for (i = 0; i < num_events; ++i) {
+ 		if (state->events[i].counter > 0 || state->debug)
+ 			dev_info(dev, "%s events: %d\n", state->events[i].name,
+ 				 state->events[i].counter);
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index d821661d30f38..7131156c1f2cf 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -481,7 +481,15 @@ static int rkvdec_buf_prepare(struct vb2_buffer *vb)
+ 		if (vb2_plane_size(vb, i) < sizeimage)
+ 			return -EINVAL;
+ 	}
+-	vb2_set_plane_payload(vb, 0, f->fmt.pix_mp.plane_fmt[0].sizeimage);
++
++	/*
++	 * Buffer's bytesused must be written by driver for CAPTURE buffers.
++	 * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets
++	 * it to buffer length).
++	 */
++	if (V4L2_TYPE_IS_CAPTURE(vq->type))
++		vb2_set_plane_payload(vb, 0, f->fmt.pix_mp.plane_fmt[0].sizeimage);
++
+ 	return 0;
+ }
+ 
+@@ -658,7 +666,7 @@ static void rkvdec_device_run(void *priv)
+ 	if (WARN_ON(!desc))
+ 		return;
+ 
+-	ret = pm_runtime_get_sync(rkvdec->dev);
++	ret = pm_runtime_resume_and_get(rkvdec->dev);
+ 	if (ret < 0) {
+ 		rkvdec_job_finish_no_pm(ctx, VB2_BUF_STATE_ERROR);
+ 		return;
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+index ce497d0197dfc..10744fab7ceaa 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+@@ -477,8 +477,8 @@ static void cedrus_h265_setup(struct cedrus_ctx *ctx,
+ 				slice_params->flags);
+ 
+ 	reg |= VE_DEC_H265_FLAG(VE_DEC_H265_DEC_SLICE_HDR_INFO0_FLAG_DEPENDENT_SLICE_SEGMENT,
+-				V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT,
+-				pps->flags);
++				V4L2_HEVC_SLICE_PARAMS_FLAG_DEPENDENT_SLICE_SEGMENT,
++				slice_params->flags);
+ 
+ 	/* FIXME: For multi-slice support. */
+ 	reg |= VE_DEC_H265_DEC_SLICE_HDR_INFO0_FLAG_FIRST_SLICE_SEGMENT_IN_PIC;
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_video.c b/drivers/staging/media/sunxi/cedrus/cedrus_video.c
+index b62eb8e840573..bf731caf2ed51 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_video.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_video.c
+@@ -457,7 +457,13 @@ static int cedrus_buf_prepare(struct vb2_buffer *vb)
+ 	if (vb2_plane_size(vb, 0) < pix_fmt->sizeimage)
+ 		return -EINVAL;
+ 
+-	vb2_set_plane_payload(vb, 0, pix_fmt->sizeimage);
++	/*
++	 * Buffer's bytesused must be written by driver for CAPTURE buffers.
++	 * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets
++	 * it to buffer length).
++	 */
++	if (V4L2_TYPE_IS_CAPTURE(vq->type))
++		vb2_set_plane_payload(vb, 0, pix_fmt->sizeimage);
+ 
+ 	return 0;
+ }
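cedrus is the third decoder in this series (after hantro and rkvdec above) to pick up the same V4L2 rule: drivers own bytesused only on CAPTURE buffers, while for OUTPUT buffers a zero bytesused from userspace is already replaced by the buffer length in the v4l2 core, so setting it unconditionally clobbered real userspace-supplied sizes. The shared guard, as a pattern:

	/* Only CAPTURE queues get their payload written by the driver. */
	if (V4L2_TYPE_IS_CAPTURE(vq->type))
		vb2_set_plane_payload(vb, 0, sizeimage);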
+diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
+index f0c9ae757bcd9..d6628e5f4f66c 100644
+--- a/drivers/staging/mt7621-dts/mt7621.dtsi
++++ b/drivers/staging/mt7621-dts/mt7621.dtsi
+@@ -498,7 +498,7 @@
+ 
+ 		bus-range = <0 255>;
+ 		ranges = <
+-			0x02000000 0 0x00000000 0x60000000 0 0x10000000 /* pci memory */
++			0x02000000 0 0x60000000 0x60000000 0 0x10000000 /* pci memory */
+ 			0x01000000 0 0x00000000 0x1e160000 0 0x00010000 /* io space */
+ 		>;
+ 
+diff --git a/drivers/staging/rtl8712/hal_init.c b/drivers/staging/rtl8712/hal_init.c
+index 715f1fe8b4726..22974277afa08 100644
+--- a/drivers/staging/rtl8712/hal_init.c
++++ b/drivers/staging/rtl8712/hal_init.c
+@@ -40,7 +40,10 @@ static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context)
+ 		dev_err(&udev->dev, "r8712u: Firmware request failed\n");
+ 		usb_put_dev(udev);
+ 		usb_set_intfdata(usb_intf, NULL);
++		r8712_free_drv_sw(adapter);
++		adapter->dvobj_deinit(adapter);
+ 		complete(&adapter->rtl8712_fw_ready);
++		free_netdev(adapter->pnetdev);
+ 		return;
+ 	}
+ 	adapter->fw = firmware;
+diff --git a/drivers/staging/rtl8712/os_intfs.c b/drivers/staging/rtl8712/os_intfs.c
+index 0c3ae8495afb7..2214aca097308 100644
+--- a/drivers/staging/rtl8712/os_intfs.c
++++ b/drivers/staging/rtl8712/os_intfs.c
+@@ -328,8 +328,6 @@ int r8712_init_drv_sw(struct _adapter *padapter)
+ 
+ void r8712_free_drv_sw(struct _adapter *padapter)
+ {
+-	struct net_device *pnetdev = padapter->pnetdev;
+-
+ 	r8712_free_cmd_priv(&padapter->cmdpriv);
+ 	r8712_free_evt_priv(&padapter->evtpriv);
+ 	r8712_DeInitSwLeds(padapter);
+@@ -339,8 +337,6 @@ void r8712_free_drv_sw(struct _adapter *padapter)
+ 	_r8712_free_sta_priv(&padapter->stapriv);
+ 	_r8712_free_recv_priv(&padapter->recvpriv);
+ 	mp871xdeinit(padapter);
+-	if (pnetdev)
+-		free_netdev(pnetdev);
+ }
+ 
+ static void enable_video_mode(struct _adapter *padapter, int cbw40_value)
+diff --git a/drivers/staging/rtl8712/rtl871x_recv.c b/drivers/staging/rtl8712/rtl871x_recv.c
+index db2add5764189..c23f6b376111e 100644
+--- a/drivers/staging/rtl8712/rtl871x_recv.c
++++ b/drivers/staging/rtl8712/rtl871x_recv.c
+@@ -374,7 +374,7 @@ static sint ap2sta_data_frame(struct _adapter *adapter,
+ 	if (check_fwstate(pmlmepriv, WIFI_STATION_STATE) &&
+ 	    check_fwstate(pmlmepriv, _FW_LINKED)) {
+ 		/* if NULL-frame, drop packet */
+-		if ((GetFrameSubType(ptr)) == IEEE80211_STYPE_NULLFUNC)
++		if ((GetFrameSubType(ptr)) == (IEEE80211_FTYPE_DATA | IEEE80211_STYPE_NULLFUNC))
+ 			return _FAIL;
+ 		/* drop QoS-SubType Data, including QoS NULL,
+ 		 * excluding QoS-Data
+diff --git a/drivers/staging/rtl8712/rtl871x_security.c b/drivers/staging/rtl8712/rtl871x_security.c
+index 63d63f7be481a..e0a1c30a8fe66 100644
+--- a/drivers/staging/rtl8712/rtl871x_security.c
++++ b/drivers/staging/rtl8712/rtl871x_security.c
+@@ -1045,9 +1045,9 @@ static void aes_cipher(u8 *key, uint hdrlen,
+ 	else
+ 		a4_exists = 1;
+ 
+-	if ((frtype == IEEE80211_STYPE_DATA_CFACK) ||
+-	    (frtype == IEEE80211_STYPE_DATA_CFPOLL) ||
+-	    (frtype == IEEE80211_STYPE_DATA_CFACKPOLL)) {
++	if ((frtype == (IEEE80211_FTYPE_DATA | IEEE80211_STYPE_DATA_CFACK)) ||
++	    (frtype == (IEEE80211_FTYPE_DATA | IEEE80211_STYPE_DATA_CFPOLL)) ||
++	    (frtype == (IEEE80211_FTYPE_DATA | IEEE80211_STYPE_DATA_CFACKPOLL))) {
+ 		qc_exists = 1;
+ 		if (hdrlen !=  WLAN_HDR_A3_QOS_LEN)
+ 			hdrlen += 2;
+@@ -1225,9 +1225,9 @@ static void aes_decipher(u8 *key, uint hdrlen,
+ 		a4_exists = 0;
+ 	else
+ 		a4_exists = 1;
+-	if ((frtype == IEEE80211_STYPE_DATA_CFACK) ||
+-	    (frtype == IEEE80211_STYPE_DATA_CFPOLL) ||
+-	    (frtype == IEEE80211_STYPE_DATA_CFACKPOLL)) {
++	if ((frtype == (IEEE80211_FTYPE_DATA | IEEE80211_STYPE_DATA_CFACK)) ||
++	    (frtype == (IEEE80211_FTYPE_DATA | IEEE80211_STYPE_DATA_CFPOLL)) ||
++	    (frtype == (IEEE80211_FTYPE_DATA | IEEE80211_STYPE_DATA_CFACKPOLL))) {
+ 		qc_exists = 1;
+ 		if (hdrlen != WLAN_HDR_A3_QOS_LEN)
+ 			hdrlen += 2;
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index dc21e7743349c..b760bc3559373 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -361,7 +361,7 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf,
+ 	/* step 1. */
+ 	pnetdev = r8712_init_netdev();
+ 	if (!pnetdev)
+-		goto error;
++		goto put_dev;
+ 	padapter = netdev_priv(pnetdev);
+ 	disable_ht_for_spec_devid(pdid, padapter);
+ 	pdvobjpriv = &padapter->dvobjpriv;
+@@ -381,16 +381,16 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf,
+ 	 * initialize the dvobj_priv
+ 	 */
+ 	if (!padapter->dvobj_init) {
+-		goto error;
++		goto put_dev;
+ 	} else {
+ 		status = padapter->dvobj_init(padapter);
+ 		if (status != _SUCCESS)
+-			goto error;
++			goto free_netdev;
+ 	}
+ 	/* step 4. */
+ 	status = r8712_init_drv_sw(padapter);
+ 	if (status)
+-		goto error;
++		goto dvobj_deinit;
+ 	/* step 5. read efuse/eeprom data and get mac_addr */
+ 	{
+ 		int i, offset;
+@@ -570,17 +570,20 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf,
+ 	}
+ 	/* step 6. Load the firmware asynchronously */
+ 	if (rtl871x_load_fw(padapter))
+-		goto error;
++		goto deinit_drv_sw;
+ 	spin_lock_init(&padapter->lock_rx_ff0_filter);
+ 	mutex_init(&padapter->mutex_start);
+ 	return 0;
+-error:
++
++deinit_drv_sw:
++	r8712_free_drv_sw(padapter);
++dvobj_deinit:
++	padapter->dvobj_deinit(padapter);
++free_netdev:
++	free_netdev(pnetdev);
++put_dev:
+ 	usb_put_dev(udev);
+ 	usb_set_intfdata(pusb_intf, NULL);
+-	if (padapter && padapter->dvobj_deinit)
+-		padapter->dvobj_deinit(padapter);
+-	if (pnetdev)
+-		free_netdev(pnetdev);
+ 	return -ENODEV;
+ }
+ 
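The r871xu_drv_init() rework converts a single catch-all error label into the kernel's ordered-unwind idiom: each label undoes exactly the steps that succeeded before the failing one, in reverse order, which is what lets the old defensive NULL checks disappear. The skeleton of the idiom, with hypothetical setup/teardown pairs:

	if (setup_a())			/* e.g. allocate the netdev */
		goto out;
	if (setup_b())			/* e.g. dvobj_init */
		goto undo_a;
	if (setup_c())			/* e.g. init_drv_sw */
		goto undo_b;
	return 0;

undo_b:
	teardown_b();
undo_a:
	teardown_a();
out:
	return -ENODEV;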
+@@ -612,6 +615,7 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
+ 		r8712_stop_drv_timers(padapter);
+ 		r871x_dev_unload(padapter);
+ 		r8712_free_drv_sw(padapter);
++		free_netdev(pnetdev);
+ 
+ 		/* decrease the reference count of the usb device structure
+ 		 * when disconnect
+diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
+index 5088c3731b6df..6d0d0beed402f 100644
+--- a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
++++ b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
+@@ -420,8 +420,10 @@ static int wpa_set_encryption(struct net_device *dev, struct ieee_param *param,
+ 			wep_key_len = wep_key_len <= 5 ? 5 : 13;
+ 			wep_total_len = wep_key_len + FIELD_OFFSET(struct ndis_802_11_wep, KeyMaterial);
+ 			pwep = kzalloc(wep_total_len, GFP_KERNEL);
+-			if (!pwep)
++			if (!pwep) {
++				ret = -ENOMEM;
+ 				goto exit;
++			}
+ 
+ 			pwep->KeyLength = wep_key_len;
+ 			pwep->Length = wep_total_len;
+diff --git a/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c b/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
+index 06bca7be5203f..76d3f03999647 100644
+--- a/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
++++ b/drivers/staging/vc04_services/vchiq-mmal/mmal-vchiq.c
+@@ -1862,7 +1862,7 @@ int vchiq_mmal_init(struct vchiq_mmal_instance **out_instance)
+ 	int status;
+ 	int err = -ENODEV;
+ 	struct vchiq_mmal_instance *instance;
+-	static struct vchiq_instance *vchiq_instance;
++	struct vchiq_instance *vchiq_instance;
+ 	struct vchiq_service_params_kernel params = {
+ 		.version		= VC_MMAL_VER,
+ 		.version_min		= VC_MMAL_MIN_VER,
+diff --git a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
+index af35251232eb3..b044999ad002b 100644
+--- a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
++++ b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
+@@ -265,12 +265,13 @@ void cxgbit_unmap_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
+ 	struct cxgbit_cmd *ccmd = iscsit_priv_cmd(cmd);
+ 
+ 	if (ccmd->release) {
+-		struct cxgbi_task_tag_info *ttinfo = &ccmd->ttinfo;
+-
+-		if (ttinfo->sgl) {
++		if (cmd->se_cmd.se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
++			put_page(sg_page(&ccmd->sg));
++		} else {
+ 			struct cxgbit_sock *csk = conn->context;
+ 			struct cxgbit_device *cdev = csk->com.cdev;
+ 			struct cxgbi_ppm *ppm = cdev2ppm(cdev);
++			struct cxgbi_task_tag_info *ttinfo = &ccmd->ttinfo;
+ 
+ 			/* Abort the TCP conn if DDP is not complete to
+ 			 * avoid any possibility of DDP after freeing
+@@ -280,14 +281,14 @@ void cxgbit_unmap_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
+ 				     cmd->se_cmd.data_length))
+ 				cxgbit_abort_conn(csk);
+ 
++			if (unlikely(ttinfo->sgl)) {
++				dma_unmap_sg(&ppm->pdev->dev, ttinfo->sgl,
++					     ttinfo->nents, DMA_FROM_DEVICE);
++				ttinfo->nents = 0;
++				ttinfo->sgl = NULL;
++			}
+ 			cxgbi_ppm_ppod_release(ppm, ttinfo->idx);
+-
+-			dma_unmap_sg(&ppm->pdev->dev, ttinfo->sgl,
+-				     ttinfo->nents, DMA_FROM_DEVICE);
+-		} else {
+-			put_page(sg_page(&ccmd->sg));
+ 		}
+-
+ 		ccmd->release = false;
+ 	}
+ }
+diff --git a/drivers/target/iscsi/cxgbit/cxgbit_target.c b/drivers/target/iscsi/cxgbit/cxgbit_target.c
+index b926e1d6c7b8e..282297ffc4044 100644
+--- a/drivers/target/iscsi/cxgbit/cxgbit_target.c
++++ b/drivers/target/iscsi/cxgbit/cxgbit_target.c
+@@ -997,17 +997,18 @@ static int cxgbit_handle_iscsi_dataout(struct cxgbit_sock *csk)
+ 	struct scatterlist *sg_start;
+ 	struct iscsi_conn *conn = csk->conn;
+ 	struct iscsi_cmd *cmd = NULL;
++	struct cxgbit_cmd *ccmd;
++	struct cxgbi_task_tag_info *ttinfo;
+ 	struct cxgbit_lro_pdu_cb *pdu_cb = cxgbit_rx_pdu_cb(csk->skb);
+ 	struct iscsi_data *hdr = (struct iscsi_data *)pdu_cb->hdr;
+ 	u32 data_offset = be32_to_cpu(hdr->offset);
+-	u32 data_len = pdu_cb->dlen;
++	u32 data_len = ntoh24(hdr->dlength);
+ 	int rc, sg_nents, sg_off;
+ 	bool dcrc_err = false;
+ 
+ 	if (pdu_cb->flags & PDUCBF_RX_DDP_CMP) {
+ 		u32 offset = be32_to_cpu(hdr->offset);
+ 		u32 ddp_data_len;
+-		u32 payload_length = ntoh24(hdr->dlength);
+ 		bool success = false;
+ 
+ 		cmd = iscsit_find_cmd_from_itt_or_dump(conn, hdr->itt, 0);
+@@ -1022,7 +1023,7 @@ static int cxgbit_handle_iscsi_dataout(struct cxgbit_sock *csk)
+ 		cmd->data_sn = be32_to_cpu(hdr->datasn);
+ 
+ 		rc = __iscsit_check_dataout_hdr(conn, (unsigned char *)hdr,
+-						cmd, payload_length, &success);
++						cmd, data_len, &success);
+ 		if (rc < 0)
+ 			return rc;
+ 		else if (!success)
+@@ -1060,6 +1061,20 @@ static int cxgbit_handle_iscsi_dataout(struct cxgbit_sock *csk)
+ 		cxgbit_skb_copy_to_sg(csk->skb, sg_start, sg_nents, skip);
+ 	}
+ 
++	ccmd = iscsit_priv_cmd(cmd);
++	ttinfo = &ccmd->ttinfo;
++
++	if (ccmd->release && ttinfo->sgl &&
++	    (cmd->se_cmd.data_length ==	(cmd->write_data_done + data_len))) {
++		struct cxgbit_device *cdev = csk->com.cdev;
++		struct cxgbi_ppm *ppm = cdev2ppm(cdev);
++
++		dma_unmap_sg(&ppm->pdev->dev, ttinfo->sgl, ttinfo->nents,
++			     DMA_FROM_DEVICE);
++		ttinfo->nents = 0;
++		ttinfo->sgl = NULL;
++	}
++
+ check_payload:
+ 
+ 	rc = iscsit_check_dataout_payload(cmd, hdr, dcrc_err);
+diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
+index eeb4e4b76c0be..43b1ae8a77893 100644
+--- a/drivers/thermal/cpufreq_cooling.c
++++ b/drivers/thermal/cpufreq_cooling.c
+@@ -478,7 +478,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
+ 	ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
+ 	if (ret >= 0) {
+ 		cpufreq_cdev->cpufreq_state = state;
+-		cpus = cpufreq_cdev->policy->cpus;
++		cpus = cpufreq_cdev->policy->related_cpus;
+ 		max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
+ 		capacity = frequency * max_capacity;
+ 		capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;
+diff --git a/drivers/thunderbolt/test.c b/drivers/thunderbolt/test.c
+index 5ff5a03bc9cef..6e0a5391fcd7c 100644
+--- a/drivers/thunderbolt/test.c
++++ b/drivers/thunderbolt/test.c
+@@ -260,14 +260,14 @@ static struct tb_switch *alloc_dev_default(struct kunit *test,
+ 	if (port->dual_link_port && upstream_port->dual_link_port) {
+ 		port->dual_link_port->remote = upstream_port->dual_link_port;
+ 		upstream_port->dual_link_port->remote = port->dual_link_port;
+-	}
+ 
+-	if (bonded) {
+-		/* Bonding is used */
+-		port->bonded = true;
+-		port->dual_link_port->bonded = true;
+-		upstream_port->bonded = true;
+-		upstream_port->dual_link_port->bonded = true;
++		if (bonded) {
++			/* Bonding is used */
++			port->bonded = true;
++			port->dual_link_port->bonded = true;
++			upstream_port->bonded = true;
++			upstream_port->dual_link_port->bonded = true;
++		}
+ 	}
+ 
+ 	return sw;
+diff --git a/drivers/tty/nozomi.c b/drivers/tty/nozomi.c
+index 9a2d78ace49be..ce3a79e95fb55 100644
+--- a/drivers/tty/nozomi.c
++++ b/drivers/tty/nozomi.c
+@@ -1378,7 +1378,7 @@ static int nozomi_card_init(struct pci_dev *pdev,
+ 			NOZOMI_NAME, dc);
+ 	if (unlikely(ret)) {
+ 		dev_err(&pdev->dev, "can't request irq %d\n", pdev->irq);
+-		goto err_free_kfifo;
++		goto err_free_all_kfifo;
+ 	}
+ 
+ 	DBG1("base_addr: %p", dc->base_addr);
+@@ -1416,12 +1416,15 @@ static int nozomi_card_init(struct pci_dev *pdev,
+ 	return 0;
+ 
+ err_free_tty:
+-	for (i = 0; i < MAX_PORT; ++i) {
++	for (i--; i >= 0; i--) {
+ 		tty_unregister_device(ntty_driver, dc->index_start + i);
+ 		tty_port_destroy(&dc->port[i].port);
+ 	}
++	free_irq(pdev->irq, dc);
++err_free_all_kfifo:
++	i = MAX_PORT;
+ err_free_kfifo:
+-	for (i = 0; i < MAX_PORT; i++)
++	for (i--; i >= PORT_MDM; i--)
+ 		kfifo_free(&dc->port[i].fifo_ul);
+ err_free_sbuf:
+ 	kfree(dc->send_buf);
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 8ac11eaeca51b..79418d4beb48f 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -43,6 +43,7 @@
+ #define UART_ERRATA_CLOCK_DISABLE	(1 << 3)
+ #define	UART_HAS_EFR2			BIT(4)
+ #define UART_HAS_RHR_IT_DIS		BIT(5)
++#define UART_RX_TIMEOUT_QUIRK		BIT(6)
+ 
+ #define OMAP_UART_FCR_RX_TRIG		6
+ #define OMAP_UART_FCR_TX_TRIG		4
+@@ -104,6 +105,9 @@
+ #define UART_OMAP_EFR2			0x23
+ #define UART_OMAP_EFR2_TIMEOUT_BEHAVE	BIT(6)
+ 
++/* RX FIFO occupancy indicator */
++#define UART_OMAP_RX_LVL		0x64
++
+ struct omap8250_priv {
+ 	int line;
+ 	u8 habit;
+@@ -611,6 +615,7 @@ static int omap_8250_dma_handle_irq(struct uart_port *port);
+ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ {
+ 	struct uart_port *port = dev_id;
++	struct omap8250_priv *priv = port->private_data;
+ 	struct uart_8250_port *up = up_to_u8250p(port);
+ 	unsigned int iir;
+ 	int ret;
+@@ -625,6 +630,18 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 	serial8250_rpm_get(up);
+ 	iir = serial_port_in(port, UART_IIR);
+ 	ret = serial8250_handle_irq(port, iir);
++
++	/*
++	 * On K3 SoCs, it is observed that RX TIMEOUT is signalled after
++	 * the FIFO has been drained, in which case a dummy read of the
++	 * RX FIFO is required to clear the RX TIMEOUT condition.
++	 */
++	if (priv->habit & UART_RX_TIMEOUT_QUIRK &&
++	    (iir & UART_IIR_RX_TIMEOUT) == UART_IIR_RX_TIMEOUT &&
++	    serial_port_in(port, UART_OMAP_RX_LVL) == 0) {
++		serial_port_in(port, UART_RX);
++	}
++
+ 	serial8250_rpm_put(up);
+ 
+ 	return IRQ_RETVAL(ret);
+@@ -813,7 +830,7 @@ static void __dma_rx_do_complete(struct uart_8250_port *p)
+ 			       poll_count--)
+ 				cpu_relax();
+ 
+-			if (!poll_count)
++			if (poll_count == -1)
+ 				dev_err(p->port.dev, "teardown incomplete\n");
+ 		}
+ 	}
+@@ -1218,7 +1235,8 @@ static struct omap8250_dma_params am33xx_dma = {
+ 
+ static struct omap8250_platdata am654_platdata = {
+ 	.dma_params	= &am654_dma,
+-	.habit		= UART_HAS_EFR2 | UART_HAS_RHR_IT_DIS,
++	.habit		= UART_HAS_EFR2 | UART_HAS_RHR_IT_DIS |
++			  UART_RX_TIMEOUT_QUIRK,
+ };
+ 
+ static struct omap8250_platdata am33xx_platdata = {
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index fc5ab20322821..ff3f13693def7 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2629,6 +2629,21 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
+ 					     struct ktermios *old)
+ {
+ 	unsigned int tolerance = port->uartclk / 100;
++	unsigned int min;
++	unsigned int max;
++
++	/*
++	 * Handle magic divisors for baud rates above baud_base on SMSC
++	 * Super I/O chips.  Enable custom rates of clk/4 and clk/8, but
++	 * disable divisor values beyond 32767, which are unavailable.
++	 */
++	if (port->flags & UPF_MAGIC_MULTIPLIER) {
++		min = port->uartclk / 16 / UART_DIV_MAX >> 1;
++		max = (port->uartclk + tolerance) / 4;
++	} else {
++		min = port->uartclk / 16 / UART_DIV_MAX;
++		max = (port->uartclk + tolerance) / 16;
++	}
+ 
+ 	/*
+ 	 * Ask the core to calculate the divisor for us.
+@@ -2636,9 +2651,7 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
+ 	 * slower than nominal still match standard baud rates without
+ 	 * causing transmission errors.
+ 	 */
+-	return uart_get_baud_rate(port, termios, old,
+-				  port->uartclk / 16 / UART_DIV_MAX,
+-				  (port->uartclk + tolerance) / 16);
++	return uart_get_baud_rate(port, termios, old, min, max);
+ }
+ 
+ /*
+diff --git a/drivers/tty/serial/8250/serial_cs.c b/drivers/tty/serial/8250/serial_cs.c
+index 63ea9c4da3d5a..53f2697014a02 100644
+--- a/drivers/tty/serial/8250/serial_cs.c
++++ b/drivers/tty/serial/8250/serial_cs.c
+@@ -777,6 +777,7 @@ static const struct pcmcia_device_id serial_ids[] = {
+ 	PCMCIA_DEVICE_PROD_ID12("Multi-Tech", "MT2834LT", 0x5f73be51, 0x4cd7c09e),
+ 	PCMCIA_DEVICE_PROD_ID12("OEM      ", "C288MX     ", 0xb572d360, 0xd2385b7a),
+ 	PCMCIA_DEVICE_PROD_ID12("Option International", "V34bis GSM/PSTN Data/Fax Modem", 0x9d7cd6f5, 0x5cb8bf41),
++	PCMCIA_DEVICE_PROD_ID12("Option International", "GSM-Ready 56K/ISDN", 0x9d7cd6f5, 0xb23844aa),
+ 	PCMCIA_DEVICE_PROD_ID12("PCMCIA   ", "C336MX     ", 0x99bcafe9, 0xaa25bcab),
+ 	PCMCIA_DEVICE_PROD_ID12("Quatech Inc", "PCMCIA Dual RS-232 Serial Port Card", 0xc4420b35, 0x92abc92f),
+ 	PCMCIA_DEVICE_PROD_ID12("Quatech Inc", "Dual RS-232 Serial Port PC Card", 0xc4420b35, 0x031a380d),
+@@ -804,7 +805,6 @@ static const struct pcmcia_device_id serial_ids[] = {
+ 	PCMCIA_DEVICE_CIS_PROD_ID12("ADVANTECH", "COMpad-32/85B-4", 0x96913a85, 0xcec8f102, "cis/COMpad4.cis"),
+ 	PCMCIA_DEVICE_CIS_PROD_ID123("ADVANTECH", "COMpad-32/85", "1.0", 0x96913a85, 0x8fbe92ae, 0x0877b627, "cis/COMpad2.cis"),
+ 	PCMCIA_DEVICE_CIS_PROD_ID2("RS-COM 2P", 0xad20b156, "cis/RS-COM-2P.cis"),
+-	PCMCIA_DEVICE_CIS_MANF_CARD(0x0013, 0x0000, "cis/GLOBETROTTER.cis"),
+ 	PCMCIA_DEVICE_PROD_ID12("ELAN DIGITAL SYSTEMS LTD, c1997.", "SERIAL CARD: SL100  1.00.", 0x19ca78af, 0xf964f42b),
+ 	PCMCIA_DEVICE_PROD_ID12("ELAN DIGITAL SYSTEMS LTD, c1997.", "SERIAL CARD: SL100", 0x19ca78af, 0x71d98e83),
+ 	PCMCIA_DEVICE_PROD_ID12("ELAN DIGITAL SYSTEMS LTD, c1997.", "SERIAL CARD: SL232  1.00.", 0x19ca78af, 0x69fb7490),
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 794035041744f..9c78e43e669d7 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1408,17 +1408,7 @@ static unsigned int lpuart_get_mctrl(struct uart_port *port)
+ 
+ static unsigned int lpuart32_get_mctrl(struct uart_port *port)
+ {
+-	unsigned int temp = 0;
+-	unsigned long reg;
+-
+-	reg = lpuart32_read(port, UARTMODIR);
+-	if (reg & UARTMODIR_TXCTSE)
+-		temp |= TIOCM_CTS;
+-
+-	if (reg & UARTMODIR_RXRTSE)
+-		temp |= TIOCM_RTS;
+-
+-	return temp;
++	return 0;
+ }
+ 
+ static void lpuart_set_mctrl(struct uart_port *port, unsigned int mctrl)
+@@ -1625,7 +1615,7 @@ static void lpuart_rx_dma_startup(struct lpuart_port *sport)
+ 	sport->lpuart_dma_rx_use = true;
+ 	rx_dma_timer_init(sport);
+ 
+-	if (sport->port.has_sysrq) {
++	if (sport->port.has_sysrq && !lpuart_is_32(sport)) {
+ 		cr3 = readb(sport->port.membase + UARTCR3);
+ 		cr3 |= UARTCR3_FEIE;
+ 		writeb(cr3, sport->port.membase + UARTCR3);
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index 51b0ecabf2ec9..1e26220c78527 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -445,12 +445,11 @@ static void mvebu_uart_shutdown(struct uart_port *port)
+ 
+ static int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud)
+ {
+-	struct mvebu_uart *mvuart = to_mvuart(port);
+ 	unsigned int d_divisor, m_divisor;
+ 	u32 brdv, osamp;
+ 
+-	if (IS_ERR(mvuart->clk))
+-		return -PTR_ERR(mvuart->clk);
++	if (!port->uartclk)
++		return -EOPNOTSUPP;
+ 
+ 	/*
+ 	 * The baudrate is derived from the UART clock thanks to two divisors:
+@@ -463,7 +462,7 @@ static int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud)
+ 	 * makes use of D to configure the desired baudrate.
+ 	 */
+ 	m_divisor = OSAMP_DEFAULT_DIVISOR;
+-	d_divisor = DIV_ROUND_UP(port->uartclk, baud * m_divisor);
++	d_divisor = DIV_ROUND_CLOSEST(port->uartclk, baud * m_divisor);
+ 
+ 	brdv = readl(port->membase + UART_BRDV);
+ 	brdv &= ~BRDV_BAUD_MASK;
+@@ -482,7 +481,7 @@ static void mvebu_uart_set_termios(struct uart_port *port,
+ 				   struct ktermios *old)
+ {
+ 	unsigned long flags;
+-	unsigned int baud;
++	unsigned int baud, min_baud, max_baud;
+ 
+ 	spin_lock_irqsave(&port->lock, flags);
+ 
+@@ -501,16 +500,21 @@ static void mvebu_uart_set_termios(struct uart_port *port,
+ 		port->ignore_status_mask |= STAT_RX_RDY(port) | STAT_BRK_ERR;
+ 
+ 	/*
++	 * The maximal divisor is 1023 * 16 when using the default (x16) scheme.
+ 	 * Maximum achievable frequency with simple baudrate divisor is 230400.
+ 	 * Since the error per bit frame would be of more than 15%, achieving
+ 	 * higher frequencies would require to implement the fractional divisor
+ 	 * feature.
+ 	 */
+-	baud = uart_get_baud_rate(port, termios, old, 0, 230400);
++	min_baud = DIV_ROUND_UP(port->uartclk, 1023 * 16);
++	max_baud = 230400;
++
++	baud = uart_get_baud_rate(port, termios, old, min_baud, max_baud);
+ 	if (mvebu_uart_baud_rate_set(port, baud)) {
+ 		/* No clock available, baudrate cannot be changed */
+ 		if (old)
+-			baud = uart_get_baud_rate(port, old, NULL, 0, 230400);
++			baud = uart_get_baud_rate(port, old, NULL,
++						  min_baud, max_baud);
+ 	} else {
+ 		tty_termios_encode_baud_rate(termios, baud, baud);
+ 		uart_update_timeout(port, termios->c_cflag, baud);
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 4baf1316ea729..2d5487bf68559 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -610,6 +610,14 @@ static void sci_stop_tx(struct uart_port *port)
+ 	ctrl &= ~SCSCR_TIE;
+ 
+ 	serial_port_out(port, SCSCR, ctrl);
++
++#ifdef CONFIG_SERIAL_SH_SCI_DMA
++	if (to_sci_port(port)->chan_tx &&
++	    !dma_submit_error(to_sci_port(port)->cookie_tx)) {
++		dmaengine_terminate_async(to_sci_port(port)->chan_tx);
++		to_sci_port(port)->cookie_tx = -EINVAL;
++	}
++#endif
+ }
+ 
+ static void sci_start_rx(struct uart_port *port)
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index ca7a61190dd93..d50b606d09aae 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1959,6 +1959,11 @@ static const struct usb_device_id acm_ids[] = {
+ 	.driver_info = IGNORE_DEVICE,
+ 	},
+ 
++	/* Exclude Heimann Sensor GmbH USB appset demo */
++	{ USB_DEVICE(0x32a7, 0x0000),
++	.driver_info = IGNORE_DEVICE,
++	},
++
+ 	/* control interfaces without any protocol set */
+ 	{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
+ 		USB_CDC_PROTO_NONE) },
+diff --git a/drivers/usb/dwc2/core.c b/drivers/usb/dwc2/core.c
+index 6f70ab9577b4e..272ae5722c861 100644
+--- a/drivers/usb/dwc2/core.c
++++ b/drivers/usb/dwc2/core.c
+@@ -1111,15 +1111,6 @@ static int dwc2_hs_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
+ 		usbcfg &= ~(GUSBCFG_ULPI_UTMI_SEL | GUSBCFG_PHYIF16);
+ 		if (hsotg->params.phy_utmi_width == 16)
+ 			usbcfg |= GUSBCFG_PHYIF16;
+-
+-		/* Set turnaround time */
+-		if (dwc2_is_device_mode(hsotg)) {
+-			usbcfg &= ~GUSBCFG_USBTRDTIM_MASK;
+-			if (hsotg->params.phy_utmi_width == 16)
+-				usbcfg |= 5 << GUSBCFG_USBTRDTIM_SHIFT;
+-			else
+-				usbcfg |= 9 << GUSBCFG_USBTRDTIM_SHIFT;
+-		}
+ 		break;
+ 	default:
+ 		dev_err(hsotg->dev, "FS PHY selected at HS!\n");
+@@ -1141,6 +1132,24 @@ static int dwc2_hs_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
+ 	return retval;
+ }
+ 
++static void dwc2_set_turnaround_time(struct dwc2_hsotg *hsotg)
++{
++	u32 usbcfg;
++
++	if (hsotg->params.phy_type != DWC2_PHY_TYPE_PARAM_UTMI)
++		return;
++
++	usbcfg = dwc2_readl(hsotg, GUSBCFG);
++
++	usbcfg &= ~GUSBCFG_USBTRDTIM_MASK;
++	if (hsotg->params.phy_utmi_width == 16)
++		usbcfg |= 5 << GUSBCFG_USBTRDTIM_SHIFT;
++	else
++		usbcfg |= 9 << GUSBCFG_USBTRDTIM_SHIFT;
++
++	dwc2_writel(hsotg, usbcfg, GUSBCFG);
++}
++
+ int dwc2_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
+ {
+ 	u32 usbcfg;
+@@ -1158,6 +1167,9 @@ int dwc2_phy_init(struct dwc2_hsotg *hsotg, bool select_phy)
+ 		retval = dwc2_hs_phy_init(hsotg, select_phy);
+ 		if (retval)
+ 			return retval;
++
++		if (dwc2_is_device_mode(hsotg))
++			dwc2_set_turnaround_time(hsotg);
+ 	}
+ 
+ 	if (hsotg->hw_params.hs_phy_type == GHWCFG2_HS_PHY_TYPE_ULPI &&
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 4ac397e43e19b..bca720c81799c 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1616,17 +1616,18 @@ static int dwc3_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	dwc3_check_params(dwc);
++	dwc3_debugfs_init(dwc);
+ 
+ 	ret = dwc3_core_init_mode(dwc);
+ 	if (ret)
+ 		goto err5;
+ 
+-	dwc3_debugfs_init(dwc);
+ 	pm_runtime_put(dev);
+ 
+ 	return 0;
+ 
+ err5:
++	dwc3_debugfs_exit(dwc);
+ 	dwc3_event_buffers_cleanup(dwc);
+ 
+ 	usb_phy_shutdown(dwc->usb2_phy);
+diff --git a/drivers/usb/gadget/function/f_eem.c b/drivers/usb/gadget/function/f_eem.c
+index 2cd9942707b46..5d38f29bda720 100644
+--- a/drivers/usb/gadget/function/f_eem.c
++++ b/drivers/usb/gadget/function/f_eem.c
+@@ -30,6 +30,11 @@ struct f_eem {
+ 	u8				ctrl_id;
+ };
+ 
++struct in_context {
++	struct sk_buff	*skb;
++	struct usb_ep	*ep;
++};
++
+ static inline struct f_eem *func_to_eem(struct usb_function *f)
+ {
+ 	return container_of(f, struct f_eem, port.func);
+@@ -320,9 +325,12 @@ fail:
+ 
+ static void eem_cmd_complete(struct usb_ep *ep, struct usb_request *req)
+ {
+-	struct sk_buff *skb = (struct sk_buff *)req->context;
++	struct in_context *ctx = req->context;
+ 
+-	dev_kfree_skb_any(skb);
++	dev_kfree_skb_any(ctx->skb);
++	kfree(req->buf);
++	usb_ep_free_request(ctx->ep, req);
++	kfree(ctx);
+ }
+ 
+ /*
+@@ -410,7 +418,9 @@ static int eem_unwrap(struct gether *port,
+ 		 * b15:		bmType (0 == data, 1 == command)
+ 		 */
+ 		if (header & BIT(15)) {
+-			struct usb_request	*req = cdev->req;
++			struct usb_request	*req;
++			struct in_context	*ctx;
++			struct usb_ep		*ep;
+ 			u16			bmEEMCmd;
+ 
+ 			/* EEM command packet format:
+@@ -439,11 +449,36 @@ static int eem_unwrap(struct gether *port,
+ 				skb_trim(skb2, len);
+ 				put_unaligned_le16(BIT(15) | BIT(11) | len,
+ 							skb_push(skb2, 2));
++
++				ep = port->in_ep;
++				req = usb_ep_alloc_request(ep, GFP_ATOMIC);
++				if (!req) {
++					dev_kfree_skb_any(skb2);
++					goto next;
++				}
++
++				req->buf = kmalloc(skb2->len, GFP_KERNEL);
++				if (!req->buf) {
++					usb_ep_free_request(ep, req);
++					dev_kfree_skb_any(skb2);
++					goto next;
++				}
++
++				ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
++				if (!ctx) {
++					kfree(req->buf);
++					usb_ep_free_request(ep, req);
++					dev_kfree_skb_any(skb2);
++					goto next;
++				}
++				ctx->skb = skb2;
++				ctx->ep = ep;
++
+ 				skb_copy_bits(skb2, 0, req->buf, skb2->len);
+ 				req->length = skb2->len;
+ 				req->complete = eem_cmd_complete;
+ 				req->zero = 1;
+-				req->context = skb2;
++				req->context = ctx;
+ 				if (usb_ep_queue(port->in_ep, req, GFP_ATOMIC))
+ 					DBG(cdev, "echo response queue fail\n");
+ 				break;
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index d4844afeaffc2..9c0c393abb39b 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -250,8 +250,8 @@ EXPORT_SYMBOL_GPL(ffs_lock);
+ static struct ffs_dev *_ffs_find_dev(const char *name);
+ static struct ffs_dev *_ffs_alloc_dev(void);
+ static void _ffs_free_dev(struct ffs_dev *dev);
+-static void *ffs_acquire_dev(const char *dev_name);
+-static void ffs_release_dev(struct ffs_data *ffs_data);
++static int ffs_acquire_dev(const char *dev_name, struct ffs_data *ffs_data);
++static void ffs_release_dev(struct ffs_dev *ffs_dev);
+ static int ffs_ready(struct ffs_data *ffs);
+ static void ffs_closed(struct ffs_data *ffs);
+ 
+@@ -1554,8 +1554,8 @@ unmapped_value:
+ static int ffs_fs_get_tree(struct fs_context *fc)
+ {
+ 	struct ffs_sb_fill_data *ctx = fc->fs_private;
+-	void *ffs_dev;
+ 	struct ffs_data	*ffs;
++	int ret;
+ 
+ 	ENTER();
+ 
+@@ -1574,13 +1574,12 @@ static int ffs_fs_get_tree(struct fs_context *fc)
+ 		return -ENOMEM;
+ 	}
+ 
+-	ffs_dev = ffs_acquire_dev(ffs->dev_name);
+-	if (IS_ERR(ffs_dev)) {
++	ret = ffs_acquire_dev(ffs->dev_name, ffs);
++	if (ret) {
+ 		ffs_data_put(ffs);
+-		return PTR_ERR(ffs_dev);
++		return ret;
+ 	}
+ 
+-	ffs->private_data = ffs_dev;
+ 	ctx->ffs_data = ffs;
+ 	return get_tree_nodev(fc, ffs_sb_fill);
+ }
+@@ -1591,7 +1590,6 @@ static void ffs_fs_free_fc(struct fs_context *fc)
+ 
+ 	if (ctx) {
+ 		if (ctx->ffs_data) {
+-			ffs_release_dev(ctx->ffs_data);
+ 			ffs_data_put(ctx->ffs_data);
+ 		}
+ 
+@@ -1630,10 +1628,8 @@ ffs_fs_kill_sb(struct super_block *sb)
+ 	ENTER();
+ 
+ 	kill_litter_super(sb);
+-	if (sb->s_fs_info) {
+-		ffs_release_dev(sb->s_fs_info);
++	if (sb->s_fs_info)
+ 		ffs_data_closed(sb->s_fs_info);
+-	}
+ }
+ 
+ static struct file_system_type ffs_fs_type = {
+@@ -1703,6 +1699,7 @@ static void ffs_data_put(struct ffs_data *ffs)
+ 	if (refcount_dec_and_test(&ffs->ref)) {
+ 		pr_info("%s(): freeing\n", __func__);
+ 		ffs_data_clear(ffs);
++		ffs_release_dev(ffs->private_data);
+ 		BUG_ON(waitqueue_active(&ffs->ev.waitq) ||
+ 		       swait_active(&ffs->ep0req_completion.wait) ||
+ 		       waitqueue_active(&ffs->wait));
+@@ -3032,6 +3029,7 @@ static inline struct f_fs_opts *ffs_do_functionfs_bind(struct usb_function *f,
+ 	struct ffs_function *func = ffs_func_from_usb(f);
+ 	struct f_fs_opts *ffs_opts =
+ 		container_of(f->fi, struct f_fs_opts, func_inst);
++	struct ffs_data *ffs_data;
+ 	int ret;
+ 
+ 	ENTER();
+@@ -3046,12 +3044,13 @@ static inline struct f_fs_opts *ffs_do_functionfs_bind(struct usb_function *f,
+ 	if (!ffs_opts->no_configfs)
+ 		ffs_dev_lock();
+ 	ret = ffs_opts->dev->desc_ready ? 0 : -ENODEV;
+-	func->ffs = ffs_opts->dev->ffs_data;
++	ffs_data = ffs_opts->dev->ffs_data;
+ 	if (!ffs_opts->no_configfs)
+ 		ffs_dev_unlock();
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
++	func->ffs = ffs_data;
+ 	func->conf = c;
+ 	func->gadget = c->cdev->gadget;
+ 
+@@ -3506,6 +3505,7 @@ static void ffs_free_inst(struct usb_function_instance *f)
+ 	struct f_fs_opts *opts;
+ 
+ 	opts = to_f_fs_opts(f);
++	ffs_release_dev(opts->dev);
+ 	ffs_dev_lock();
+ 	_ffs_free_dev(opts->dev);
+ 	ffs_dev_unlock();
+@@ -3693,47 +3693,48 @@ static void _ffs_free_dev(struct ffs_dev *dev)
+ {
+ 	list_del(&dev->entry);
+ 
+-	/* Clear the private_data pointer to stop incorrect dev access */
+-	if (dev->ffs_data)
+-		dev->ffs_data->private_data = NULL;
+-
+ 	kfree(dev);
+ 	if (list_empty(&ffs_devices))
+ 		functionfs_cleanup();
+ }
+ 
+-static void *ffs_acquire_dev(const char *dev_name)
++static int ffs_acquire_dev(const char *dev_name, struct ffs_data *ffs_data)
+ {
++	int ret = 0;
+ 	struct ffs_dev *ffs_dev;
+ 
+ 	ENTER();
+ 	ffs_dev_lock();
+ 
+ 	ffs_dev = _ffs_find_dev(dev_name);
+-	if (!ffs_dev)
+-		ffs_dev = ERR_PTR(-ENOENT);
+-	else if (ffs_dev->mounted)
+-		ffs_dev = ERR_PTR(-EBUSY);
+-	else if (ffs_dev->ffs_acquire_dev_callback &&
+-	    ffs_dev->ffs_acquire_dev_callback(ffs_dev))
+-		ffs_dev = ERR_PTR(-ENOENT);
+-	else
++	if (!ffs_dev) {
++		ret = -ENOENT;
++	} else if (ffs_dev->mounted) {
++		ret = -EBUSY;
++	} else if (ffs_dev->ffs_acquire_dev_callback &&
++		   ffs_dev->ffs_acquire_dev_callback(ffs_dev)) {
++		ret = -ENOENT;
++	} else {
+ 		ffs_dev->mounted = true;
++		ffs_dev->ffs_data = ffs_data;
++		ffs_data->private_data = ffs_dev;
++	}
+ 
+ 	ffs_dev_unlock();
+-	return ffs_dev;
++	return ret;
+ }
+ 
+-static void ffs_release_dev(struct ffs_data *ffs_data)
++static void ffs_release_dev(struct ffs_dev *ffs_dev)
+ {
+-	struct ffs_dev *ffs_dev;
+-
+ 	ENTER();
+ 	ffs_dev_lock();
+ 
+-	ffs_dev = ffs_data->private_data;
+-	if (ffs_dev) {
++	if (ffs_dev && ffs_dev->mounted) {
+ 		ffs_dev->mounted = false;
++		if (ffs_dev->ffs_data) {
++			ffs_dev->ffs_data->private_data = NULL;
++			ffs_dev->ffs_data = NULL;
++		}
+ 
+ 		if (ffs_dev->ffs_release_dev_callback)
+ 			ffs_dev->ffs_release_dev_callback(ffs_dev);
+@@ -3761,7 +3762,6 @@ static int ffs_ready(struct ffs_data *ffs)
+ 	}
+ 
+ 	ffs_obj->desc_ready = true;
+-	ffs_obj->ffs_data = ffs;
+ 
+ 	if (ffs_obj->ffs_ready_callback) {
+ 		ret = ffs_obj->ffs_ready_callback(ffs);
+@@ -3789,7 +3789,6 @@ static void ffs_closed(struct ffs_data *ffs)
+ 		goto done;
+ 
+ 	ffs_obj->desc_ready = false;
+-	ffs_obj->ffs_data = NULL;
+ 
+ 	if (test_and_clear_bit(FFS_FL_CALL_CLOSED_CALLBACK, &ffs->flags) &&
+ 	    ffs_obj->ffs_closed_callback)
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index f66815fe84822..e4b0c0420b376 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1924,6 +1924,7 @@ no_bw:
+ 	xhci->hw_ports = NULL;
+ 	xhci->rh_bw = NULL;
+ 	xhci->ext_caps = NULL;
++	xhci->port_caps = NULL;
+ 
+ 	xhci->page_size = 0;
+ 	xhci->page_shift = 0;
+diff --git a/drivers/usb/host/xhci-pci-renesas.c b/drivers/usb/host/xhci-pci-renesas.c
+index f97ac9f52bf4d..431213cdf9e0e 100644
+--- a/drivers/usb/host/xhci-pci-renesas.c
++++ b/drivers/usb/host/xhci-pci-renesas.c
+@@ -207,7 +207,8 @@ static int renesas_check_rom_state(struct pci_dev *pdev)
+ 			return 0;
+ 
+ 		case RENESAS_ROM_STATUS_NO_RESULT: /* No result yet */
+-			return 0;
++			dev_dbg(&pdev->dev, "Unknown ROM status ...\n");
++			break;
+ 
+ 		case RENESAS_ROM_STATUS_ERROR: /* Error State */
+ 		default: /* All other states are marked as "Reserved states" */
+@@ -224,13 +225,12 @@ static int renesas_fw_check_running(struct pci_dev *pdev)
+ 	u8 fw_state;
+ 	int err;
+ 
+-	/* Check if device has ROM and loaded, if so skip everything */
+-	err = renesas_check_rom(pdev);
+-	if (err) { /* we have rom */
+-		err = renesas_check_rom_state(pdev);
+-		if (!err)
+-			return err;
+-	}
++	/*
++	 * Only if the device has a ROM with loaded FW can we skip loading and
++	 * return success. Otherwise (even in an unknown state), attempt to load the FW.
++	 */
++	if (renesas_check_rom(pdev) && !renesas_check_rom_state(pdev))
++		return 0;
+ 
+ 	/*
+ 	 * Test if the device is actually needing the firmware. As most
+diff --git a/drivers/usb/phy/phy-tegra-usb.c b/drivers/usb/phy/phy-tegra-usb.c
+index a48452a6172b6..c0f432d509aab 100644
+--- a/drivers/usb/phy/phy-tegra-usb.c
++++ b/drivers/usb/phy/phy-tegra-usb.c
+@@ -58,12 +58,12 @@
+ #define   USB_WAKEUP_DEBOUNCE_COUNT(x)		(((x) & 0x7) << 16)
+ 
+ #define USB_PHY_VBUS_SENSORS			0x404
+-#define   B_SESS_VLD_WAKEUP_EN			BIT(6)
+-#define   B_VBUS_VLD_WAKEUP_EN			BIT(14)
++#define   B_SESS_VLD_WAKEUP_EN			BIT(14)
+ #define   A_SESS_VLD_WAKEUP_EN			BIT(22)
+ #define   A_VBUS_VLD_WAKEUP_EN			BIT(30)
+ 
+ #define USB_PHY_VBUS_WAKEUP_ID			0x408
++#define   VBUS_WAKEUP_STS			BIT(10)
+ #define   VBUS_WAKEUP_WAKEUP_EN			BIT(30)
+ 
+ #define USB1_LEGACY_CTRL			0x410
+@@ -544,7 +544,7 @@ static int utmi_phy_power_on(struct tegra_usb_phy *phy)
+ 
+ 		val = readl_relaxed(base + USB_PHY_VBUS_SENSORS);
+ 		val &= ~(A_VBUS_VLD_WAKEUP_EN | A_SESS_VLD_WAKEUP_EN);
+-		val &= ~(B_VBUS_VLD_WAKEUP_EN | B_SESS_VLD_WAKEUP_EN);
++		val &= ~(B_SESS_VLD_WAKEUP_EN);
+ 		writel_relaxed(val, base + USB_PHY_VBUS_SENSORS);
+ 
+ 		val = readl_relaxed(base + UTMIP_BAT_CHRG_CFG0);
+@@ -642,6 +642,15 @@ static int utmi_phy_power_off(struct tegra_usb_phy *phy)
+ 	void __iomem *base = phy->regs;
+ 	u32 val;
+ 
++	/*
++	 * Give the hardware time to settle down after VBUS disconnection,
++	 * otherwise the PHY will immediately wake up from suspend.
++	 */
++	if (phy->wakeup_enabled && phy->mode != USB_DR_MODE_HOST)
++		readl_relaxed_poll_timeout(base + USB_PHY_VBUS_WAKEUP_ID,
++					   val, !(val & VBUS_WAKEUP_STS),
++					   5000, 100000);
++
+ 	utmi_phy_clk_disable(phy);
+ 
+ 	/* PHY won't resume if reset is asserted */
+diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
+index b9429c9f65f6c..aeef453aa6585 100644
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -517,8 +517,10 @@ typec_register_altmode(struct device *parent,
+ 	int ret;
+ 
+ 	alt = kzalloc(sizeof(*alt), GFP_KERNEL);
+-	if (!alt)
++	if (!alt) {
++		altmode_id_remove(parent, id);
+ 		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	alt->adev.svid = desc->svid;
+ 	alt->adev.mode = desc->mode;
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index 25b480752266e..98d84243c630c 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -21,8 +21,12 @@
+ #define	PD_RETRY_COUNT_DEFAULT			3
+ #define	PD_RETRY_COUNT_3_0_OR_HIGHER		2
+ #define	AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV	3500
+-#define	AUTO_DISCHARGE_PD_HEADROOM_MV		850
+-#define	AUTO_DISCHARGE_PPS_HEADROOM_MV		1250
++#define	VSINKPD_MIN_IR_DROP_MV			750
++#define	VSRC_NEW_MIN_PERCENT			95
++#define	VSRC_VALID_MIN_MV			500
++#define	VPPS_NEW_MIN_PERCENT			95
++#define	VPPS_VALID_MIN_MV			100
++#define	VSINKDISCONNECT_PD_MIN_PERCENT		90
+ 
+ #define tcpc_presenting_rd(reg, cc) \
+ 	(!(TCPC_ROLE_CTRL_DRP & (reg)) && \
+@@ -324,11 +328,13 @@ static int tcpci_set_auto_vbus_discharge_threshold(struct tcpc_dev *dev, enum ty
+ 		threshold = AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV;
+ 	} else if (mode == TYPEC_PWR_MODE_PD) {
+ 		if (pps_active)
+-			threshold = (95 * requested_vbus_voltage_mv / 100) -
+-				AUTO_DISCHARGE_PD_HEADROOM_MV;
++			threshold = ((VPPS_NEW_MIN_PERCENT * requested_vbus_voltage_mv / 100) -
++				     VSINKPD_MIN_IR_DROP_MV - VPPS_VALID_MIN_MV) *
++				     VSINKDISCONNECT_PD_MIN_PERCENT / 100;
+ 		else
+-			threshold = (95 * requested_vbus_voltage_mv / 100) -
+-				AUTO_DISCHARGE_PPS_HEADROOM_MV;
++			threshold = ((VSRC_NEW_MIN_PERCENT * requested_vbus_voltage_mv / 100) -
++				     VSINKPD_MIN_IR_DROP_MV - VSRC_VALID_MIN_MV) *
++				     VSINKDISCONNECT_PD_MIN_PERCENT / 100;
+ 	} else {
+ 		/* 3.5V for non-pd sink */
+ 		threshold = AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV;
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 63470cf7f4cd9..1b7f18d35df45 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -2576,6 +2576,11 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 			} else {
+ 				next_state = SNK_WAIT_CAPABILITIES;
+ 			}
++
++			/* Threshold was relaxed before sending the Request. Restore it. */
++			tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
++							       port->pps_data.active,
++							       port->supply_voltage);
+ 			tcpm_set_state(port, next_state, 0);
+ 			break;
+ 		case SNK_NEGOTIATE_PPS_CAPABILITIES:
+@@ -2589,6 +2594,11 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 			    port->send_discover)
+ 				port->vdm_sm_running = true;
+ 
++			/* Threshold was relaxed before sending the Request. Restore it. */
++			tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
++							       port->pps_data.active,
++							       port->supply_voltage);
++
+ 			tcpm_set_state(port, SNK_READY, 0);
+ 			break;
+ 		case DR_SWAP_SEND:
+@@ -3308,6 +3318,12 @@ static int tcpm_pd_send_request(struct tcpm_port *port)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	/*
++	 * Relax the threshold as voltage will be adjusted after Accept Message plus tSrcTransition.
++	 * It is safer to modify the threshold here.
++	 */
++	tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, 0);
++
+ 	memset(&msg, 0, sizeof(msg));
+ 	msg.header = PD_HEADER_LE(PD_DATA_REQUEST,
+ 				  port->pwr_role,
+@@ -3405,6 +3421,9 @@ static int tcpm_pd_send_pps_request(struct tcpm_port *port)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	/* Relax the threshold as voltage will be adjusted right after Accept Message. */
++	tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, 0);
++
+ 	memset(&msg, 0, sizeof(msg));
+ 	msg.header = PD_HEADER_LE(PD_DATA_REQUEST,
+ 				  port->pwr_role,
+@@ -4186,6 +4205,10 @@ static void run_state_machine(struct tcpm_port *port)
+ 		port->hard_reset_count = 0;
+ 		ret = tcpm_pd_send_request(port);
+ 		if (ret < 0) {
++			/* Restore the original threshold */
++			tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
++							       port->pps_data.active,
++							       port->supply_voltage);
+ 			/* Let the Source send capabilities again. */
+ 			tcpm_set_state(port, SNK_WAIT_CAPABILITIES, 0);
+ 		} else {
+@@ -4196,6 +4219,10 @@ static void run_state_machine(struct tcpm_port *port)
+ 	case SNK_NEGOTIATE_PPS_CAPABILITIES:
+ 		ret = tcpm_pd_send_pps_request(port);
+ 		if (ret < 0) {
++			/* Restore the original threshold */
++			tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
++							       port->pps_data.active,
++							       port->supply_voltage);
+ 			port->pps_status = ret;
+ 			/*
+ 			 * If this was called due to updates to sink
+@@ -5198,6 +5225,9 @@ static void _tcpm_pd_vbus_vsafe0v(struct tcpm_port *port)
+ 				tcpm_set_state(port, SNK_UNATTACHED, 0);
+ 		}
+ 		break;
++	case PR_SWAP_SNK_SRC_SINK_OFF:
++		/* Do nothing, vsafe0v is expected during transition */
++		break;
+ 	default:
+ 		if (port->pwr_role == TYPEC_SINK && port->auto_vbus_discharge_enabled)
+ 			tcpm_set_state(port, SNK_UNATTACHED, 0);
+diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
+index bd7c482c948aa..b94958552eb87 100644
+--- a/drivers/vfio/pci/vfio_pci.c
++++ b/drivers/vfio/pci/vfio_pci.c
+@@ -1594,6 +1594,7 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+ 	struct vfio_pci_device *vdev = vma->vm_private_data;
++	struct vfio_pci_mmap_vma *mmap_vma;
+ 	vm_fault_t ret = VM_FAULT_NOPAGE;
+ 
+ 	mutex_lock(&vdev->vma_lock);
+@@ -1601,24 +1602,36 @@ static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
+ 
+ 	if (!__vfio_pci_memory_enabled(vdev)) {
+ 		ret = VM_FAULT_SIGBUS;
+-		mutex_unlock(&vdev->vma_lock);
+ 		goto up_out;
+ 	}
+ 
+-	if (__vfio_pci_add_vma(vdev, vma)) {
+-		ret = VM_FAULT_OOM;
+-		mutex_unlock(&vdev->vma_lock);
+-		goto up_out;
++	/*
++	 * We populate the whole vma on fault, so we need to test whether
++	 * the vma has already been mapped, such as for concurrent faults
++	 * to the same vma.  io_remap_pfn_range() will trigger a BUG_ON if
++	 * we ask it to fill the same range again.
++	 */
++	list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next) {
++		if (mmap_vma->vma == vma)
++			goto up_out;
+ 	}
+ 
+-	mutex_unlock(&vdev->vma_lock);
+-
+ 	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+-			       vma->vm_end - vma->vm_start, vma->vm_page_prot))
++			       vma->vm_end - vma->vm_start,
++			       vma->vm_page_prot)) {
+ 		ret = VM_FAULT_SIGBUS;
++		zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
++		goto up_out;
++	}
++
++	if (__vfio_pci_add_vma(vdev, vma)) {
++		ret = VM_FAULT_OOM;
++		zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
++	}
+ 
+ up_out:
+ 	up_read(&vdev->memory_lock);
++	mutex_unlock(&vdev->vma_lock);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/video/backlight/lm3630a_bl.c b/drivers/video/backlight/lm3630a_bl.c
+index e88a2b0e59046..662029d6a3dc9 100644
+--- a/drivers/video/backlight/lm3630a_bl.c
++++ b/drivers/video/backlight/lm3630a_bl.c
+@@ -482,8 +482,10 @@ static int lm3630a_parse_node(struct lm3630a_chip *pchip,
+ 
+ 	device_for_each_child_node(pchip->dev, node) {
+ 		ret = lm3630a_parse_bank(pdata, node, &seen_led_sources);
+-		if (ret)
++		if (ret) {
++			fwnode_handle_put(node);
+ 			return ret;
++		}
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
+index 7f8debd2da065..ad598257ab386 100644
+--- a/drivers/video/fbdev/imxfb.c
++++ b/drivers/video/fbdev/imxfb.c
+@@ -992,7 +992,7 @@ static int imxfb_probe(struct platform_device *pdev)
+ 	info->screen_buffer = dma_alloc_wc(&pdev->dev, fbi->map_size,
+ 					   &fbi->map_dma, GFP_KERNEL);
+ 	if (!info->screen_buffer) {
+-		dev_err(&pdev->dev, "Failed to allocate video RAM: %d\n", ret);
++		dev_err(&pdev->dev, "Failed to allocate video RAM\n");
+ 		ret = -ENOMEM;
+ 		goto failed_map;
+ 	}
+diff --git a/drivers/visorbus/visorchipset.c b/drivers/visorbus/visorchipset.c
+index cb1eb7e05f871..5668cad86e374 100644
+--- a/drivers/visorbus/visorchipset.c
++++ b/drivers/visorbus/visorchipset.c
+@@ -1561,7 +1561,7 @@ schedule_out:
+ 
+ static int visorchipset_init(struct acpi_device *acpi_device)
+ {
+-	int err = -ENODEV;
++	int err = -ENOMEM;
+ 	struct visorchannel *controlvm_channel;
+ 
+ 	chipset_dev = kzalloc(sizeof(*chipset_dev), GFP_KERNEL);
+@@ -1584,8 +1584,10 @@ static int visorchipset_init(struct acpi_device *acpi_device)
+ 				 "controlvm",
+ 				 sizeof(struct visor_controlvm_channel),
+ 				 VISOR_CONTROLVM_CHANNEL_VERSIONID,
+-				 VISOR_CHANNEL_SIGNATURE))
++				 VISOR_CHANNEL_SIGNATURE)) {
++		err = -ENODEV;
+ 		goto error_delete_groups;
++	}
+ 	/* if booting in a crash kernel */
+ 	if (is_kdump_kernel())
+ 		INIT_DELAYED_WORK(&chipset_dev->periodic_controlvm_work,
+diff --git a/fs/btrfs/Kconfig b/fs/btrfs/Kconfig
+index 68b95ad82126e..520a0f6a7d9e9 100644
+--- a/fs/btrfs/Kconfig
++++ b/fs/btrfs/Kconfig
+@@ -18,6 +18,8 @@ config BTRFS_FS
+ 	select RAID6_PQ
+ 	select XOR_BLOCKS
+ 	select SRCU
++	depends on !PPC_256K_PAGES	# powerpc
++	depends on !PAGE_SIZE_256KB	# hexagon
+ 
+ 	help
+ 	  Btrfs is a general purpose copy-on-write filesystem with extents,
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index a484fb72a01f0..4bc3ca2cbd7d4 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -596,7 +596,6 @@ noinline int btrfs_cow_block(struct btrfs_trans_handle *trans,
+ 		       trans->transid, fs_info->generation);
+ 
+ 	if (!should_cow_block(trans, root, buf)) {
+-		trans->dirty = true;
+ 		*cow_ret = buf;
+ 		return 0;
+ 	}
+@@ -1788,10 +1787,8 @@ again:
+ 			 * then we don't want to set the path blocking,
+ 			 * so we test it here
+ 			 */
+-			if (!should_cow_block(trans, root, b)) {
+-				trans->dirty = true;
++			if (!should_cow_block(trans, root, b))
+ 				goto cow_done;
+-			}
+ 
+ 			/*
+ 			 * must have write locks on this node and the
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index 1a88f6214ebc0..3bb8b919d2c19 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -1009,12 +1009,10 @@ static int __btrfs_update_delayed_inode(struct btrfs_trans_handle *trans,
+ 	nofs_flag = memalloc_nofs_save();
+ 	ret = btrfs_lookup_inode(trans, root, path, &key, mod);
+ 	memalloc_nofs_restore(nofs_flag);
+-	if (ret > 0) {
+-		btrfs_release_path(path);
+-		return -ENOENT;
+-	} else if (ret < 0) {
+-		return ret;
+-	}
++	if (ret > 0)
++		ret = -ENOENT;
++	if (ret < 0)
++		goto out;
+ 
+ 	leaf = path->nodes[0];
+ 	inode_item = btrfs_item_ptr(leaf, path->slots[0],
+@@ -1052,6 +1050,14 @@ err_out:
+ 	btrfs_delayed_inode_release_metadata(fs_info, node, (ret < 0));
+ 	btrfs_release_delayed_inode(node);
+ 
++	/*
++	 * If we fail to update the delayed inode, we need to abort the
++	 * transaction, because we could otherwise leave the inode behind
++	 * with improper counts.
++	 */
++	if (ret && ret != -ENOENT)
++		btrfs_abort_transaction(trans, ret);
++
+ 	return ret;
+ 
+ search:
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 3d5c35e4cb76e..d2f39a122d89d 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4784,7 +4784,6 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ 		set_extent_dirty(&trans->transaction->dirty_pages, buf->start,
+ 			 buf->start + buf->len - 1, GFP_NOFS);
+ 	}
+-	trans->dirty = true;
+ 	/* this returns a buffer locked for blocking */
+ 	return buf;
+ }
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 46f392943f4d0..9229549697ce7 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -603,7 +603,7 @@ again:
+ 	 * inode has not been flagged as nocompress.  This flag can
+ 	 * change at any time if we discover bad compression ratios.
+ 	 */
+-	if (inode_need_compress(BTRFS_I(inode), start, end)) {
++	if (nr_pages > 1 && inode_need_compress(BTRFS_I(inode), start, end)) {
+ 		WARN_ON(pages);
+ 		pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
+ 		if (!pages) {
+@@ -8390,7 +8390,19 @@ static void btrfs_invalidatepage(struct page *page, unsigned int offset,
+ 	 */
+ 	wait_on_page_writeback(page);
+ 
+-	if (offset) {
++	/*
++	 * For the subpage case, we have call sites like
++	 * btrfs_punch_hole_lock_range() which pass in ranges that are not
++	 * aligned to the sectorsize.
++	 * If the range doesn't cover the full page, we don't need to, and
++	 * shouldn't, clear the page's extent-mapped state, as page->private
++	 * can still record subpage dirty bits for other parts of the range.
++	 *
++	 * For cases that invalidate the full page even though the range
++	 * doesn't cover the full page, like invalidating the last page,
++	 * we are still safe to wait for the ordered extent to finish.
++	 */
++	if (!(offset == 0 && length == PAGE_SIZE)) {
+ 		btrfs_releasepage(page, GFP_NOFS);
+ 		return;
+ 	}
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index bd69db72acc5e..a2b3c594379d6 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -4064,6 +4064,17 @@ static int process_recorded_refs(struct send_ctx *sctx, int *pending_move)
+ 				if (ret < 0)
+ 					goto out;
+ 			} else {
++				/*
++				 * If we previously orphanized a directory that
++				 * collided with a new reference that we already
++				 * processed, recompute the current path because
++				 * that directory may be part of the path.
++				 */
++				if (orphanized_dir) {
++					ret = refresh_ref_path(sctx, cur);
++					if (ret < 0)
++						goto out;
++				}
+ 				ret = send_unlink(sctx, cur->full_path);
+ 				if (ret < 0)
+ 					goto out;
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 4a396c1147f17..bc613218c8c5b 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -299,17 +299,6 @@ void __btrfs_abort_transaction(struct btrfs_trans_handle *trans,
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+ 
+ 	WRITE_ONCE(trans->aborted, errno);
+-	/* Nothing used. The other threads that have joined this
+-	 * transaction may be able to continue. */
+-	if (!trans->dirty && list_empty(&trans->new_bgs)) {
+-		const char *errstr;
+-
+-		errstr = btrfs_decode_error(errno);
+-		btrfs_warn(fs_info,
+-		           "%s:%d: Aborting unused transaction(%s).",
+-		           function, line, errstr);
+-		return;
+-	}
+ 	WRITE_ONCE(trans->transaction->aborted, errno);
+ 	/* Wake up anybody who may be waiting on this transaction */
+ 	wake_up(&fs_info->transaction_wait);
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index 436ac7b4b3346..4f5b14cd3a199 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -429,7 +429,7 @@ static ssize_t btrfs_discard_bitmap_bytes_show(struct kobject *kobj,
+ {
+ 	struct btrfs_fs_info *fs_info = discard_to_fs_info(kobj);
+ 
+-	return scnprintf(buf, PAGE_SIZE, "%lld\n",
++	return scnprintf(buf, PAGE_SIZE, "%llu\n",
+ 			fs_info->discard_ctl.discard_bitmap_bytes);
+ }
+ BTRFS_ATTR(discard, discard_bitmap_bytes, btrfs_discard_bitmap_bytes_show);
+@@ -451,7 +451,7 @@ static ssize_t btrfs_discard_extent_bytes_show(struct kobject *kobj,
+ {
+ 	struct btrfs_fs_info *fs_info = discard_to_fs_info(kobj);
+ 
+-	return scnprintf(buf, PAGE_SIZE, "%lld\n",
++	return scnprintf(buf, PAGE_SIZE, "%llu\n",
+ 			fs_info->discard_ctl.discard_extent_bytes);
+ }
+ BTRFS_ATTR(discard, discard_extent_bytes, btrfs_discard_extent_bytes_show);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index f75de9f6c0ada..37450c7644ca0 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1406,8 +1406,10 @@ int btrfs_defrag_root(struct btrfs_root *root)
+ 
+ 	while (1) {
+ 		trans = btrfs_start_transaction(root, 0);
+-		if (IS_ERR(trans))
+-			return PTR_ERR(trans);
++		if (IS_ERR(trans)) {
++			ret = PTR_ERR(trans);
++			break;
++		}
+ 
+ 		ret = btrfs_defrag_leaves(trans, root);
+ 
+@@ -1476,7 +1478,7 @@ static int qgroup_account_snapshot(struct btrfs_trans_handle *trans,
+ 	ret = btrfs_run_delayed_refs(trans, (unsigned long)-1);
+ 	if (ret) {
+ 		btrfs_abort_transaction(trans, ret);
+-		goto out;
++		return ret;
+ 	}
+ 
+ 	/*
+@@ -2074,14 +2076,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 
+ 	ASSERT(refcount_read(&trans->use_count) == 1);
+ 
+-	/*
+-	 * Some places just start a transaction to commit it.  We need to make
+-	 * sure that if this commit fails that the abort code actually marks the
+-	 * transaction as failed, so set trans->dirty to make the abort code do
+-	 * the right thing.
+-	 */
+-	trans->dirty = true;
+-
+ 	/* Stop the commit early if ->aborted is set */
+ 	if (TRANS_ABORTED(cur_trans)) {
+ 		ret = cur_trans->aborted;
+diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h
+index 364cfbb4c5c59..c49e2266b28ba 100644
+--- a/fs/btrfs/transaction.h
++++ b/fs/btrfs/transaction.h
+@@ -143,7 +143,6 @@ struct btrfs_trans_handle {
+ 	bool allocating_chunk;
+ 	bool can_flush_pending_bgs;
+ 	bool reloc_reserved;
+-	bool dirty;
+ 	bool in_fsync;
+ 	struct btrfs_root *root;
+ 	struct btrfs_fs_info *fs_info;
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index dbcf8bb2f3b9a..760d950752f51 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -6371,6 +6371,7 @@ next:
+ error:
+ 	if (wc.trans)
+ 		btrfs_end_transaction(wc.trans);
++	clear_bit(BTRFS_FS_LOG_RECOVERING, &fs_info->flags);
+ 	btrfs_free_path(path);
+ 	return ret;
+ }
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index f1f3b10d1dbbe..c7243d392ca8e 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -1140,6 +1140,10 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
+ 		}
+ 
+ 		if (zone.type == BLK_ZONE_TYPE_CONVENTIONAL) {
++			btrfs_err_in_rcu(fs_info,
++	"zoned: unexpected conventional zone %llu on device %s (devid %llu)",
++				zone.start << SECTOR_SHIFT,
++				rcu_str_deref(device->name), device->devid);
+ 			ret = -EIO;
+ 			goto out;
+ 		}
+@@ -1200,6 +1204,13 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
+ 
+ 	switch (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
+ 	case 0: /* single */
++		if (alloc_offsets[0] == WP_MISSING_DEV) {
++			btrfs_err(fs_info,
++			"zoned: cannot recover write pointer for zone %llu",
++				physical);
++			ret = -EIO;
++			goto out;
++		}
+ 		cache->alloc_offset = alloc_offsets[0];
+ 		break;
+ 	case BTRFS_BLOCK_GROUP_DUP:
+@@ -1217,6 +1228,13 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
+ 	}
+ 
+ out:
++	if (cache->alloc_offset > fs_info->zone_size) {
++		btrfs_err(fs_info,
++			"zoned: invalid write pointer %llu in block group %llu",
++			cache->alloc_offset, cache->start);
++		ret = -EIO;
++	}
++
+ 	/* An extent is allocated after the write pointer */
+ 	if (!ret && num_conventional && last_alloc > cache->alloc_offset) {
+ 		btrfs_err(fs_info,
+diff --git a/fs/cifs/cifs_swn.c b/fs/cifs/cifs_swn.c
+index d829b8bf833e3..93b47818c6c2d 100644
+--- a/fs/cifs/cifs_swn.c
++++ b/fs/cifs/cifs_swn.c
+@@ -447,15 +447,13 @@ static int cifs_swn_store_swn_addr(const struct sockaddr_storage *new,
+ 				   const struct sockaddr_storage *old,
+ 				   struct sockaddr_storage *dst)
+ {
+-	__be16 port;
++	__be16 port = cpu_to_be16(CIFS_PORT);
+ 
+ 	if (old->ss_family == AF_INET) {
+ 		struct sockaddr_in *ipv4 = (struct sockaddr_in *)old;
+ 
+ 		port = ipv4->sin_port;
+-	}
+-
+-	if (old->ss_family == AF_INET6) {
++	} else if (old->ss_family == AF_INET6) {
+ 		struct sockaddr_in6 *ipv6 = (struct sockaddr_in6 *)old;
+ 
+ 		port = ipv6->sin6_port;
+@@ -465,9 +463,7 @@ static int cifs_swn_store_swn_addr(const struct sockaddr_storage *new,
+ 		struct sockaddr_in *ipv4 = (struct sockaddr_in *)new;
+ 
+ 		ipv4->sin_port = port;
+-	}
+-
+-	if (new->ss_family == AF_INET6) {
++	} else if (new->ss_family == AF_INET6) {
+ 		struct sockaddr_in6 *ipv6 = (struct sockaddr_in6 *)new;
+ 
+ 		ipv6->sin6_port = port;
+diff --git a/fs/cifs/cifsacl.c b/fs/cifs/cifsacl.c
+index 784407f9280fd..a18dee071fcd2 100644
+--- a/fs/cifs/cifsacl.c
++++ b/fs/cifs/cifsacl.c
+@@ -1308,7 +1308,7 @@ static int build_sec_desc(struct cifs_ntsd *pntsd, struct cifs_ntsd *pnntsd,
+ 		ndacl_ptr = (struct cifs_acl *)((char *)pnntsd + ndacloffset);
+ 		ndacl_ptr->revision =
+ 			dacloffset ? dacl_ptr->revision : cpu_to_le16(ACL_REVISION);
+-		ndacl_ptr->num_aces = dacl_ptr->num_aces;
++		ndacl_ptr->num_aces = dacl_ptr ? dacl_ptr->num_aces : 0;
+ 
+ 		if (uid_valid(uid)) { /* chown */
+ 			uid_t id;
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 8488d70244620..706a2aeba1dec 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -896,7 +896,7 @@ struct cifs_ses {
+ 	struct mutex session_mutex;
+ 	struct TCP_Server_Info *server;	/* pointer to server info */
+ 	int ses_count;		/* reference counter */
+-	enum statusEnum status;
++	enum statusEnum status;  /* updates protected by GlobalMid_Lock */
+ 	unsigned overrideSecFlg;  /* if non-zero override global sec flags */
+ 	char *serverOS;		/* name of operating system underlying server */
+ 	char *serverNOS;	/* name of network operating system of server */
+@@ -1795,6 +1795,7 @@ require use of the stronger protocol */
+  *	list operations on pending_mid_q and oplockQ
+  *      updates to XID counters, multiplex id  and SMB sequence numbers
+  *      list operations on global DnotifyReqList
++ *      updates to ses->status
+  *  tcp_ses_lock protects:
+  *	list operations on tcp and SMB session lists
+  *  tcon->open_file_lock protects the list of open files hanging off the tcon
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 495c395f9defd..eb6c10fa67410 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -1617,9 +1617,12 @@ void cifs_put_smb_ses(struct cifs_ses *ses)
+ 		spin_unlock(&cifs_tcp_ses_lock);
+ 		return;
+ 	}
++	spin_unlock(&cifs_tcp_ses_lock);
++
++	spin_lock(&GlobalMid_Lock);
+ 	if (ses->status == CifsGood)
+ 		ses->status = CifsExiting;
+-	spin_unlock(&cifs_tcp_ses_lock);
++	spin_unlock(&GlobalMid_Lock);
+ 
+ 	cifs_free_ipc(ses);
+ 
+diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
+index b1fa30fefe1f6..8e16ee1e5fd10 100644
+--- a/fs/cifs/dfs_cache.c
++++ b/fs/cifs/dfs_cache.c
+@@ -25,8 +25,7 @@
+ #define CACHE_HTABLE_SIZE 32
+ #define CACHE_MAX_ENTRIES 64
+ 
+-#define IS_INTERLINK_SET(v) ((v) & (DFSREF_REFERRAL_SERVER | \
+-				    DFSREF_STORAGE_SERVER))
++#define IS_DFS_INTERLINK(v) (((v) & DFSREF_REFERRAL_SERVER) && !((v) & DFSREF_STORAGE_SERVER))
+ 
+ struct cache_dfs_tgt {
+ 	char *name;
+@@ -171,7 +170,7 @@ static int dfscache_proc_show(struct seq_file *m, void *v)
+ 				   "cache entry: path=%s,type=%s,ttl=%d,etime=%ld,hdr_flags=0x%x,ref_flags=0x%x,interlink=%s,path_consumed=%d,expired=%s\n",
+ 				   ce->path, ce->srvtype == DFS_TYPE_ROOT ? "root" : "link",
+ 				   ce->ttl, ce->etime.tv_nsec, ce->ref_flags, ce->hdr_flags,
+-				   IS_INTERLINK_SET(ce->hdr_flags) ? "yes" : "no",
++				   IS_DFS_INTERLINK(ce->hdr_flags) ? "yes" : "no",
+ 				   ce->path_consumed, cache_entry_expired(ce) ? "yes" : "no");
+ 
+ 			list_for_each_entry(t, &ce->tlist, list) {
+@@ -240,7 +239,7 @@ static inline void dump_ce(const struct cache_entry *ce)
+ 		 ce->srvtype == DFS_TYPE_ROOT ? "root" : "link", ce->ttl,
+ 		 ce->etime.tv_nsec,
+ 		 ce->hdr_flags, ce->ref_flags,
+-		 IS_INTERLINK_SET(ce->hdr_flags) ? "yes" : "no",
++		 IS_DFS_INTERLINK(ce->hdr_flags) ? "yes" : "no",
+ 		 ce->path_consumed,
+ 		 cache_entry_expired(ce) ? "yes" : "no");
+ 	dump_tgts(ce);
+diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c
+index 6bcd3e8f7cdae..7c641f9a3dac2 100644
+--- a/fs/cifs/dir.c
++++ b/fs/cifs/dir.c
+@@ -630,6 +630,7 @@ cifs_lookup(struct inode *parent_dir_inode, struct dentry *direntry,
+ 	struct inode *newInode = NULL;
+ 	const char *full_path;
+ 	void *page;
++	int retry_count = 0;
+ 
+ 	xid = get_xid();
+ 
+@@ -673,6 +674,7 @@ cifs_lookup(struct inode *parent_dir_inode, struct dentry *direntry,
+ 	cifs_dbg(FYI, "Full path: %s inode = 0x%p\n",
+ 		 full_path, d_inode(direntry));
+ 
++again:
+ 	if (pTcon->posix_extensions)
+ 		rc = smb311_posix_get_inode_info(&newInode, full_path, parent_dir_inode->i_sb, xid);
+ 	else if (pTcon->unix_ext) {
+@@ -687,6 +689,8 @@ cifs_lookup(struct inode *parent_dir_inode, struct dentry *direntry,
+ 		/* since paths are not looked up by component - the parent
+ 		   directories are presumed to be good here */
+ 		renew_parental_timestamps(direntry);
++	} else if (rc == -EAGAIN && retry_count++ < 10) {
++		goto again;
+ 	} else if (rc == -ENOENT) {
+ 		cifs_set_time(direntry, jiffies);
+ 		newInode = NULL;
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 1dfa57982522b..f60f068d33e86 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -367,9 +367,12 @@ cifs_get_file_info_unix(struct file *filp)
+ 	} else if (rc == -EREMOTE) {
+ 		cifs_create_dfs_fattr(&fattr, inode->i_sb);
+ 		rc = 0;
+-	}
++	} else
++		goto cifs_gfiunix_out;
+ 
+ 	rc = cifs_fattr_to_inode(inode, &fattr);
++
++cifs_gfiunix_out:
+ 	free_xid(xid);
+ 	return rc;
+ }
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 21ef51d338e0c..903de7449aa33 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -2325,6 +2325,7 @@ smb2_query_dir_first(const unsigned int xid, struct cifs_tcon *tcon,
+ 	struct smb2_query_directory_rsp *qd_rsp = NULL;
+ 	struct smb2_create_rsp *op_rsp = NULL;
+ 	struct TCP_Server_Info *server = cifs_pick_channel(tcon->ses);
++	int retry_count = 0;
+ 
+ 	utf16_path = cifs_convert_path_to_utf16(path, cifs_sb);
+ 	if (!utf16_path)
+@@ -2372,10 +2373,14 @@ smb2_query_dir_first(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ 	smb2_set_related(&rqst[1]);
+ 
++again:
+ 	rc = compound_send_recv(xid, tcon->ses, server,
+ 				flags, 2, rqst,
+ 				resp_buftype, rsp_iov);
+ 
++	if (rc == -EAGAIN && retry_count++ < 10)
++		goto again;
++
+ 	/* If the open failed there is nothing to do */
+ 	op_rsp = (struct smb2_create_rsp *)rsp_iov[0].iov_base;
+ 	if (op_rsp == NULL || op_rsp->sync_hdr.Status != STATUS_SUCCESS) {
+@@ -3601,6 +3606,119 @@ static long smb3_punch_hole(struct file *file, struct cifs_tcon *tcon,
+ 	return rc;
+ }
+ 
++static int smb3_simple_fallocate_write_range(unsigned int xid,
++					     struct cifs_tcon *tcon,
++					     struct cifsFileInfo *cfile,
++					     loff_t off, loff_t len,
++					     char *buf)
++{
++	struct cifs_io_parms io_parms = {0};
++	int nbytes;
++	struct kvec iov[2];
++
++	io_parms.netfid = cfile->fid.netfid;
++	io_parms.pid = current->tgid;
++	io_parms.tcon = tcon;
++	io_parms.persistent_fid = cfile->fid.persistent_fid;
++	io_parms.volatile_fid = cfile->fid.volatile_fid;
++	io_parms.offset = off;
++	io_parms.length = len;
++
++	/* iov[0] is reserved for smb header */
++	iov[1].iov_base = buf;
++	iov[1].iov_len = io_parms.length;
++	return SMB2_write(xid, &io_parms, &nbytes, iov, 1);
++}
++
++static int smb3_simple_fallocate_range(unsigned int xid,
++				       struct cifs_tcon *tcon,
++				       struct cifsFileInfo *cfile,
++				       loff_t off, loff_t len)
++{
++	struct file_allocated_range_buffer in_data, *out_data = NULL, *tmp_data;
++	u32 out_data_len;
++	char *buf = NULL;
++	loff_t l;
++	int rc;
++
++	in_data.file_offset = cpu_to_le64(off);
++	in_data.length = cpu_to_le64(len);
++	rc = SMB2_ioctl(xid, tcon, cfile->fid.persistent_fid,
++			cfile->fid.volatile_fid,
++			FSCTL_QUERY_ALLOCATED_RANGES, true,
++			(char *)&in_data, sizeof(in_data),
++			1024 * sizeof(struct file_allocated_range_buffer),
++			(char **)&out_data, &out_data_len);
++	if (rc)
++		goto out;
++	/*
++	 * The region is already fully allocated.
++	 */
++	if (out_data_len == 0)
++		goto out;
++
++	buf = kzalloc(1024 * 1024, GFP_KERNEL);
++	if (buf == NULL) {
++		rc = -ENOMEM;
++		goto out;
++	}
++
++	tmp_data = out_data;
++	while (len) {
++		/*
++		 * The rest of the region is unmapped so write it all.
++		 */
++		if (out_data_len == 0) {
++			rc = smb3_simple_fallocate_write_range(xid, tcon,
++					       cfile, off, len, buf);
++			goto out;
++		}
++
++		if (out_data_len < sizeof(struct file_allocated_range_buffer)) {
++			rc = -EINVAL;
++			goto out;
++		}
++
++		if (off < le64_to_cpu(tmp_data->file_offset)) {
++			/*
++			 * We are at a hole. Write until the end of the region
++			 * or until the next allocated data,
++			 * whichever comes next.
++			 */
++			l = le64_to_cpu(tmp_data->file_offset) - off;
++			if (len < l)
++				l = len;
++			rc = smb3_simple_fallocate_write_range(xid, tcon,
++					       cfile, off, l, buf);
++			if (rc)
++				goto out;
++			off = off + l;
++			len = len - l;
++			if (len == 0)
++				goto out;
++		}
++		/*
++		 * We are at a section of allocated data, just skip forward
++		 * until the end of the data or the end of the region
++		 * we are supposed to fallocate, whichever comes first.
++		 */
++		l = le64_to_cpu(tmp_data->length);
++		if (len < l)
++			l = len;
++		off += l;
++		len -= l;
++
++		tmp_data = &tmp_data[1];
++		out_data_len -= sizeof(struct file_allocated_range_buffer);
++	}
++
++ out:
++	kfree(out_data);
++	kfree(buf);
++	return rc;
++}
++
++
+ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+ 			    loff_t off, loff_t len, bool keep_size)
+ {
+@@ -3661,6 +3779,26 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+ 	}
+ 
+ 	if ((keep_size == true) || (i_size_read(inode) >= off + len)) {
++		/*
++		 * At this point, we are trying to fallocate an internal
++		 * region of a sparse file. Since smb2 does not have a
++		 * fallocate command we have two options for emulating this.
++		 * We can either make the entire file non-sparse, which we
++		 * only do if the fallocate covers virtually the whole file,
++		 * or we can overwrite the region with zeroes using
++		 * SMB2_write, which could be prohibitively expensive
++		 * if len is large.
++		 */
++		/*
++		 * We are only trying to fallocate a small region, so
++		 * just overwrite it with zeroes.
++		 */
++		if (len <= 1024 * 1024) {
++			rc = smb3_simple_fallocate_range(xid, tcon, cfile,
++							 off, len);
++			goto out;
++		}
++
+ 		/*
+ 		 * Check if falloc starts within first few pages of file
+ 		 * and ends within a few pages of the end of file to
+diff --git a/fs/configfs/file.c b/fs/configfs/file.c
+index e26060dae70a3..b4b0fbabd62e2 100644
+--- a/fs/configfs/file.c
++++ b/fs/configfs/file.c
+@@ -480,13 +480,13 @@ static int configfs_release_bin_file(struct inode *inode, struct file *file)
+ 					buffer->bin_buffer_size);
+ 		}
+ 		up_read(&frag->frag_sem);
+-		/* vfree on NULL is safe */
+-		vfree(buffer->bin_buffer);
+-		buffer->bin_buffer = NULL;
+-		buffer->bin_buffer_size = 0;
+-		buffer->needs_read_fill = 1;
+ 	}
+ 
++	vfree(buffer->bin_buffer);
++	buffer->bin_buffer = NULL;
++	buffer->bin_buffer_size = 0;
++	buffer->needs_read_fill = 1;
++
+ 	configfs_release(inode, file);
+ 	return 0;
+ }
+diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c
+index 6ca7d16593ff6..d00455440d087 100644
+--- a/fs/crypto/fname.c
++++ b/fs/crypto/fname.c
+@@ -344,13 +344,9 @@ int fscrypt_fname_disk_to_usr(const struct inode *inode,
+ 		     offsetof(struct fscrypt_nokey_name, sha256));
+ 	BUILD_BUG_ON(BASE64_CHARS(FSCRYPT_NOKEY_NAME_MAX) > NAME_MAX);
+ 
+-	if (hash) {
+-		nokey_name.dirhash[0] = hash;
+-		nokey_name.dirhash[1] = minor_hash;
+-	} else {
+-		nokey_name.dirhash[0] = 0;
+-		nokey_name.dirhash[1] = 0;
+-	}
++	nokey_name.dirhash[0] = hash;
++	nokey_name.dirhash[1] = minor_hash;
++
+ 	if (iname->len <= sizeof(nokey_name.bytes)) {
+ 		memcpy(nokey_name.bytes, iname->name, iname->len);
+ 		size = offsetof(struct fscrypt_nokey_name, bytes[iname->len]);
+diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
+index 261293fb70974..bca9c6658a7c5 100644
+--- a/fs/crypto/keysetup.c
++++ b/fs/crypto/keysetup.c
+@@ -210,15 +210,40 @@ out_unlock:
+ 	return err;
+ }
+ 
++/*
++ * Derive a SipHash key from the given fscrypt master key and the given
++ * application-specific information string.
++ *
++ * Note that the KDF produces a byte array, but the SipHash APIs expect the key
++ * as a pair of 64-bit words.  Therefore, on big endian CPUs we have to do an
++ * endianness swap in order to get the same results as on little endian CPUs.
++ */
++static int fscrypt_derive_siphash_key(const struct fscrypt_master_key *mk,
++				      u8 context, const u8 *info,
++				      unsigned int infolen, siphash_key_t *key)
++{
++	int err;
++
++	err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf, context, info, infolen,
++				  (u8 *)key, sizeof(*key));
++	if (err)
++		return err;
++
++	BUILD_BUG_ON(sizeof(*key) != 16);
++	BUILD_BUG_ON(ARRAY_SIZE(key->key) != 2);
++	le64_to_cpus(&key->key[0]);
++	le64_to_cpus(&key->key[1]);
++	return 0;
++}
++
+ int fscrypt_derive_dirhash_key(struct fscrypt_info *ci,
+ 			       const struct fscrypt_master_key *mk)
+ {
+ 	int err;
+ 
+-	err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf, HKDF_CONTEXT_DIRHASH_KEY,
+-				  ci->ci_nonce, FSCRYPT_FILE_NONCE_SIZE,
+-				  (u8 *)&ci->ci_dirhash_key,
+-				  sizeof(ci->ci_dirhash_key));
++	err = fscrypt_derive_siphash_key(mk, HKDF_CONTEXT_DIRHASH_KEY,
++					 ci->ci_nonce, FSCRYPT_FILE_NONCE_SIZE,
++					 &ci->ci_dirhash_key);
+ 	if (err)
+ 		return err;
+ 	ci->ci_dirhash_key_initialized = true;
+@@ -253,10 +278,9 @@ static int fscrypt_setup_iv_ino_lblk_32_key(struct fscrypt_info *ci,
+ 		if (mk->mk_ino_hash_key_initialized)
+ 			goto unlock;
+ 
+-		err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf,
+-					  HKDF_CONTEXT_INODE_HASH_KEY, NULL, 0,
+-					  (u8 *)&mk->mk_ino_hash_key,
+-					  sizeof(mk->mk_ino_hash_key));
++		err = fscrypt_derive_siphash_key(mk,
++						 HKDF_CONTEXT_INODE_HASH_KEY,
++						 NULL, 0, &mk->mk_ino_hash_key);
+ 		if (err)
+ 			goto unlock;
+ 		/* pairs with smp_load_acquire() above */
+diff --git a/fs/dax.c b/fs/dax.c
+index 62352cbcf0f40..da41f9363568e 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -488,10 +488,11 @@ static void *grab_mapping_entry(struct xa_state *xas,
+ 		struct address_space *mapping, unsigned int order)
+ {
+ 	unsigned long index = xas->xa_index;
+-	bool pmd_downgrade = false; /* splitting PMD entry into PTE entries? */
++	bool pmd_downgrade;	/* splitting PMD entry into PTE entries? */
+ 	void *entry;
+ 
+ retry:
++	pmd_downgrade = false;
+ 	xas_lock_irq(xas);
+ 	entry = get_unlocked_entry(xas, order);
+ 
+diff --git a/fs/dlm/config.c b/fs/dlm/config.c
+index 88d95d96e36c5..52bcda64172aa 100644
+--- a/fs/dlm/config.c
++++ b/fs/dlm/config.c
+@@ -79,6 +79,9 @@ struct dlm_cluster {
+ 	unsigned int cl_new_rsb_count;
+ 	unsigned int cl_recover_callbacks;
+ 	char cl_cluster_name[DLM_LOCKSPACE_LEN];
++
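++	/* stashed here so release_cluster() can free them */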
++	struct dlm_spaces *sps;
++	struct dlm_comms *cms;
+ };
+ 
+ static struct dlm_cluster *config_item_to_cluster(struct config_item *i)
+@@ -409,6 +412,9 @@ static struct config_group *make_cluster(struct config_group *g,
+ 	if (!cl || !sps || !cms)
+ 		goto fail;
+ 
++	cl->sps = sps;
++	cl->cms = cms;
++
+ 	config_group_init_type_name(&cl->group, name, &cluster_type);
+ 	config_group_init_type_name(&sps->ss_group, "spaces", &spaces_type);
+ 	config_group_init_type_name(&cms->cs_group, "comms", &comms_type);
+@@ -458,6 +464,9 @@ static void drop_cluster(struct config_group *g, struct config_item *i)
+ static void release_cluster(struct config_item *i)
+ {
+ 	struct dlm_cluster *cl = config_item_to_cluster(i);
++
++	kfree(cl->sps);
++	kfree(cl->cms);
+ 	kfree(cl);
+ }
+ 
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index 166e36fcf3e4c..9bf920bee292e 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -79,14 +79,20 @@ struct connection {
+ #define CF_CLOSING 8
+ #define CF_SHUTDOWN 9
+ #define CF_CONNECTED 10
++#define CF_RECONNECT 11
++#define CF_DELAY_CONNECT 12
++#define CF_EOF 13
+ 	struct list_head writequeue;  /* List of outgoing writequeue_entries */
+ 	spinlock_t writequeue_lock;
++	atomic_t writequeue_cnt;
+ 	void (*connect_action) (struct connection *);	/* What to do to connect */
+ 	void (*shutdown_action)(struct connection *con); /* What to do to shutdown */
++	bool (*eof_condition)(struct connection *con); /* How to check for eof */
+ 	int retries;
+ #define MAX_CONNECT_RETRIES 3
+ 	struct hlist_node list;
+ 	struct connection *othercon;
++	struct connection *sendcon;
+ 	struct work_struct rwork; /* Receive workqueue */
+ 	struct work_struct swork; /* Send workqueue */
+ 	wait_queue_head_t shutdown_wait; /* wait for graceful shutdown */
+@@ -113,6 +119,7 @@ struct writequeue_entry {
+ 	int len;
+ 	int end;
+ 	int users;
++	int idx; /* srcu idx passed from get_buffer() to commit_buffer() */
+ 	struct connection *con;
+ };
+ 
+@@ -163,25 +170,23 @@ static inline int nodeid_hash(int nodeid)
+ 	return nodeid & (CONN_HASH_SIZE-1);
+ }
+ 
+-static struct connection *__find_con(int nodeid)
++static struct connection *__find_con(int nodeid, int r)
+ {
+-	int r, idx;
+ 	struct connection *con;
+ 
+-	r = nodeid_hash(nodeid);
+-
+-	idx = srcu_read_lock(&connections_srcu);
+ 	hlist_for_each_entry_rcu(con, &connection_hash[r], list) {
+-		if (con->nodeid == nodeid) {
+-			srcu_read_unlock(&connections_srcu, idx);
++		if (con->nodeid == nodeid)
+ 			return con;
+-		}
+ 	}
+-	srcu_read_unlock(&connections_srcu, idx);
+ 
+ 	return NULL;
+ }
+ 
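++/*
++ * True while entries are still pending on the writequeue; in that case
++ * the final close on EOF is deferred until send_to_sock() has flushed
++ * them (see the CF_EOF handling there).
++ */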
++static bool tcp_eof_condition(struct connection *con)
++{
++	return atomic_read(&con->writequeue_cnt);
++}
++
+ static int dlm_con_init(struct connection *con, int nodeid)
+ {
+ 	con->rx_buflen = dlm_config.ci_buffer_size;
+@@ -193,6 +198,7 @@ static int dlm_con_init(struct connection *con, int nodeid)
+ 	mutex_init(&con->sock_mutex);
+ 	INIT_LIST_HEAD(&con->writequeue);
+ 	spin_lock_init(&con->writequeue_lock);
++	atomic_set(&con->writequeue_cnt, 0);
+ 	INIT_WORK(&con->swork, process_send_sockets);
+ 	INIT_WORK(&con->rwork, process_recv_sockets);
+ 	init_waitqueue_head(&con->shutdown_wait);
+@@ -200,6 +206,7 @@ static int dlm_con_init(struct connection *con, int nodeid)
+ 	if (dlm_config.ci_protocol == 0) {
+ 		con->connect_action = tcp_connect_to_sock;
+ 		con->shutdown_action = dlm_tcp_shutdown;
++		con->eof_condition = tcp_eof_condition;
+ 	} else {
+ 		con->connect_action = sctp_connect_to_sock;
+ 	}
+@@ -216,7 +223,8 @@ static struct connection *nodeid2con(int nodeid, gfp_t alloc)
+ 	struct connection *con, *tmp;
+ 	int r, ret;
+ 
+-	con = __find_con(nodeid);
++	r = nodeid_hash(nodeid);
++	con = __find_con(nodeid, r);
+ 	if (con || !alloc)
+ 		return con;
+ 
+@@ -230,8 +238,6 @@ static struct connection *nodeid2con(int nodeid, gfp_t alloc)
+ 		return NULL;
+ 	}
+ 
+-	r = nodeid_hash(nodeid);
+-
+ 	spin_lock(&connections_lock);
+ 	/* Because multiple workqueues/threads call this function it can
+ 	 * race on multiple CPUs. Instead of locking the hot path in
+ 	 * __find_con() we just check here, under protection of
+ 	 * connections_lock, whether such a nodeid already exists. If so,
+ 	 * we abort our connection creation and return the existing
+ 	 * connection.
+ 	 */
+-	tmp = __find_con(nodeid);
++	tmp = __find_con(nodeid, r);
+ 	if (tmp) {
+ 		spin_unlock(&connections_lock);
+ 		kfree(con->rx_buf);
+@@ -256,15 +262,13 @@ static struct connection *nodeid2con(int nodeid, gfp_t alloc)
+ /* Loop round all connections */
+ static void foreach_conn(void (*conn_func)(struct connection *c))
+ {
+-	int i, idx;
++	int i;
+ 	struct connection *con;
+ 
+-	idx = srcu_read_lock(&connections_srcu);
+ 	for (i = 0; i < CONN_HASH_SIZE; i++) {
+ 		hlist_for_each_entry_rcu(con, &connection_hash[i], list)
+ 			conn_func(con);
+ 	}
+-	srcu_read_unlock(&connections_srcu, idx);
+ }
+ 
+ static struct dlm_node_addr *find_node_addr(int nodeid)
+@@ -518,14 +522,21 @@ static void lowcomms_state_change(struct sock *sk)
+ int dlm_lowcomms_connect_node(int nodeid)
+ {
+ 	struct connection *con;
++	int idx;
+ 
+ 	if (nodeid == dlm_our_nodeid())
+ 		return 0;
+ 
++	idx = srcu_read_lock(&connections_srcu);
+ 	con = nodeid2con(nodeid, GFP_NOFS);
+-	if (!con)
++	if (!con) {
++		srcu_read_unlock(&connections_srcu, idx);
+ 		return -ENOMEM;
++	}
++
+ 	lowcomms_connect_sock(con);
++	srcu_read_unlock(&connections_srcu, idx);
++
+ 	return 0;
+ }
+ 
+@@ -587,6 +598,22 @@ static void lowcomms_error_report(struct sock *sk)
+ 				   dlm_config.ci_tcp_port, sk->sk_err,
+ 				   sk->sk_err_soft);
+ 	}
++
++	/* the handling below applies to the send connection only */
++	if (test_bit(CF_IS_OTHERCON, &con->flags))
++		con = con->sendcon;
++
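++	/*
++	 * A refused connection is retried with a delay: process_send_sockets()
++	 * sleeps before reconnecting when CF_DELAY_CONNECT is set.
++	 */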
++	switch (sk->sk_err) {
++	case ECONNREFUSED:
++		set_bit(CF_DELAY_CONNECT, &con->flags);
++		break;
++	default:
++		break;
++	}
++
++	if (!test_and_set_bit(CF_RECONNECT, &con->flags))
++		queue_work(send_workqueue, &con->swork);
++
+ out:
+ 	read_unlock_bh(&sk->sk_callback_lock);
+ 	if (orig_report)
+@@ -698,12 +725,15 @@ static void close_connection(struct connection *con, bool and_other,
+ 
+ 	if (con->othercon && and_other) {
+ 		/* Will only re-enter once. */
+-		close_connection(con->othercon, false, true, true);
++		close_connection(con->othercon, false, tx, rx);
+ 	}
+ 
+ 	con->rx_leftover = 0;
+ 	con->retries = 0;
+ 	clear_bit(CF_CONNECTED, &con->flags);
++	clear_bit(CF_DELAY_CONNECT, &con->flags);
++	clear_bit(CF_RECONNECT, &con->flags);
++	clear_bit(CF_EOF, &con->flags);
+ 	mutex_unlock(&con->sock_mutex);
+ 	clear_bit(CF_CLOSING, &con->flags);
+ }
+@@ -841,19 +871,26 @@ out_resched:
+ 	return -EAGAIN;
+ 
+ out_close:
+-	mutex_unlock(&con->sock_mutex);
+-	if (ret != -EAGAIN) {
+-		/* Reconnect when there is something to send */
+-		close_connection(con, false, true, false);
+-		if (ret == 0) {
+-			log_print("connection %p got EOF from %d",
+-				  con, con->nodeid);
++	if (ret == 0) {
++		log_print("connection %p got EOF from %d",
++			  con, con->nodeid);
++
++		if (con->eof_condition && con->eof_condition(con)) {
++			set_bit(CF_EOF, &con->flags);
++			mutex_unlock(&con->sock_mutex);
++		} else {
++			mutex_unlock(&con->sock_mutex);
++			close_connection(con, false, true, false);
++
+ 			/* handling for tcp shutdown */
+ 			clear_bit(CF_SHUTDOWN, &con->flags);
+ 			wake_up(&con->shutdown_wait);
+-			/* signal to breaking receive worker */
+-			ret = -1;
+ 		}
++
++		/* signal the receive worker to break out */
++		ret = -1;
++	} else {
++		mutex_unlock(&con->sock_mutex);
+ 	}
+ 	return ret;
+ }
+@@ -864,7 +901,7 @@ static int accept_from_sock(struct listen_connection *con)
+ 	int result;
+ 	struct sockaddr_storage peeraddr;
+ 	struct socket *newsock;
+-	int len;
++	int len, idx;
+ 	int nodeid;
+ 	struct connection *newcon;
+ 	struct connection *addcon;
+@@ -907,8 +944,10 @@ static int accept_from_sock(struct listen_connection *con)
+ 	 *  the same time and the connections cross on the wire.
+ 	 *  In this case we store the incoming one in "othercon"
+ 	 */
++	idx = srcu_read_lock(&connections_srcu);
+ 	newcon = nodeid2con(nodeid, GFP_NOFS);
+ 	if (!newcon) {
++		srcu_read_unlock(&connections_srcu, idx);
+ 		result = -ENOMEM;
+ 		goto accept_err;
+ 	}
+@@ -924,6 +963,7 @@ static int accept_from_sock(struct listen_connection *con)
+ 			if (!othercon) {
+ 				log_print("failed to allocate incoming socket");
+ 				mutex_unlock(&newcon->sock_mutex);
++				srcu_read_unlock(&connections_srcu, idx);
+ 				result = -ENOMEM;
+ 				goto accept_err;
+ 			}
+@@ -932,11 +972,13 @@ static int accept_from_sock(struct listen_connection *con)
+ 			if (result < 0) {
+ 				kfree(othercon);
+ 				mutex_unlock(&newcon->sock_mutex);
++				srcu_read_unlock(&connections_srcu, idx);
+ 				goto accept_err;
+ 			}
+ 
+ 			lockdep_set_subclass(&othercon->sock_mutex, 1);
+ 			newcon->othercon = othercon;
++			othercon->sendcon = newcon;
+ 		} else {
+ 			/* close other sock con if we have something new */
+ 			close_connection(othercon, false, true, false);
+@@ -966,6 +1008,8 @@ static int accept_from_sock(struct listen_connection *con)
+ 	if (!test_and_set_bit(CF_READ_PENDING, &addcon->flags))
+ 		queue_work(recv_workqueue, &addcon->rwork);
+ 
++	srcu_read_unlock(&connections_srcu, idx);
++
+ 	return 0;
+ 
+ accept_err:
+@@ -997,6 +1041,7 @@ static void writequeue_entry_complete(struct writequeue_entry *e, int completed)
+ 
+ 	if (e->len == 0 && e->users == 0) {
+ 		list_del(&e->list);
++		atomic_dec(&e->con->writequeue_cnt);
+ 		free_entry(e);
+ 	}
+ }
+@@ -1393,6 +1438,7 @@ static struct writequeue_entry *new_wq_entry(struct connection *con, int len,
+ 
+ 	*ppc = page_address(e->page);
+ 	e->end += len;
++	atomic_inc(&con->writequeue_cnt);
+ 
+ 	spin_lock(&con->writequeue_lock);
+ 	list_add_tail(&e->list, &con->writequeue);
+@@ -1403,7 +1449,9 @@ static struct writequeue_entry *new_wq_entry(struct connection *con, int len,
+ 
+ void *dlm_lowcomms_get_buffer(int nodeid, int len, gfp_t allocation, char **ppc)
+ {
++	struct writequeue_entry *e;
+ 	struct connection *con;
++	int idx;
+ 
+ 	if (len > DEFAULT_BUFFER_SIZE ||
+ 	    len < sizeof(struct dlm_header)) {
+@@ -1413,11 +1461,23 @@ void *dlm_lowcomms_get_buffer(int nodeid, int len, gfp_t allocation, char **ppc)
+ 		return NULL;
+ 	}
+ 
++	idx = srcu_read_lock(&connections_srcu);
+ 	con = nodeid2con(nodeid, allocation);
+-	if (!con)
++	if (!con) {
++		srcu_read_unlock(&connections_srcu, idx);
+ 		return NULL;
++	}
+ 
+-	return new_wq_entry(con, len, allocation, ppc);
++	e = new_wq_entry(con, len, allocation, ppc);
++	if (!e) {
++		srcu_read_unlock(&connections_srcu, idx);
++		return NULL;
++	}
++
++	/* we assume the caller will call commit() on success; it releases idx */
++	e->idx = idx;
++
++	return e;
+ }
+ 
+ void dlm_lowcomms_commit_buffer(void *mh)
+@@ -1435,10 +1495,12 @@ void dlm_lowcomms_commit_buffer(void *mh)
+ 	spin_unlock(&con->writequeue_lock);
+ 
+ 	queue_work(send_workqueue, &con->swork);
++	srcu_read_unlock(&connections_srcu, e->idx);
+ 	return;
+ 
+ out:
+ 	spin_unlock(&con->writequeue_lock);
++	srcu_read_unlock(&connections_srcu, e->idx);
+ 	return;
+ }
+ 
+@@ -1483,7 +1545,7 @@ static void send_to_sock(struct connection *con)
+ 				cond_resched();
+ 				goto out;
+ 			} else if (ret < 0)
+-				goto send_error;
++				goto out;
+ 		}
+ 
+ 		/* Don't starve people filling buffers */
+@@ -1496,16 +1558,23 @@ static void send_to_sock(struct connection *con)
+ 		writequeue_entry_complete(e, ret);
+ 	}
+ 	spin_unlock(&con->writequeue_lock);
+-out:
+-	mutex_unlock(&con->sock_mutex);
++
++	/* close if we got EOF */
++	if (test_and_clear_bit(CF_EOF, &con->flags)) {
++		mutex_unlock(&con->sock_mutex);
++		close_connection(con, false, false, true);
++
++		/* handling for tcp shutdown */
++		clear_bit(CF_SHUTDOWN, &con->flags);
++		wake_up(&con->shutdown_wait);
++	} else {
++		mutex_unlock(&con->sock_mutex);
++	}
++
+ 	return;
+ 
+-send_error:
++out:
+ 	mutex_unlock(&con->sock_mutex);
+-	close_connection(con, false, false, true);
+-	/* Requeue the send work. When the work daemon runs again, it will try
+-	   a new connection, then call this function again. */
+-	queue_work(send_workqueue, &con->swork);
+ 	return;
+ 
+ out_connect:
+@@ -1532,8 +1601,10 @@ int dlm_lowcomms_close(int nodeid)
+ {
+ 	struct connection *con;
+ 	struct dlm_node_addr *na;
++	int idx;
+ 
+ 	log_print("closing connection to node %d", nodeid);
++	idx = srcu_read_lock(&connections_srcu);
+ 	con = nodeid2con(nodeid, 0);
+ 	if (con) {
+ 		set_bit(CF_CLOSE, &con->flags);
+@@ -1542,6 +1613,7 @@ int dlm_lowcomms_close(int nodeid)
+ 		if (con->othercon)
+ 			clean_one_writequeue(con->othercon);
+ 	}
++	srcu_read_unlock(&connections_srcu, idx);
+ 
+ 	spin_lock(&dlm_node_addrs_spin);
+ 	na = find_node_addr(nodeid);
+@@ -1579,18 +1651,30 @@ static void process_send_sockets(struct work_struct *work)
+ 	struct connection *con = container_of(work, struct connection, swork);
+ 
+ 	clear_bit(CF_WRITE_PENDING, &con->flags);
+-	if (con->sock == NULL) /* not mutex protected so check it inside too */
++
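++	/* tear down the old socket first if the error report asked for a reconnect */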
++	if (test_and_clear_bit(CF_RECONNECT, &con->flags))
++		close_connection(con, false, false, true);
++
++	if (con->sock == NULL) { /* not mutex protected so check it inside too */
++		if (test_and_clear_bit(CF_DELAY_CONNECT, &con->flags))
++			msleep(1000);
+ 		con->connect_action(con);
++	}
+ 	if (!list_empty(&con->writequeue))
+ 		send_to_sock(con);
+ }
+ 
+ static void work_stop(void)
+ {
+-	if (recv_workqueue)
++	if (recv_workqueue) {
+ 		destroy_workqueue(recv_workqueue);
+-	if (send_workqueue)
++		recv_workqueue = NULL;
++	}
++
++	if (send_workqueue) {
+ 		destroy_workqueue(send_workqueue);
++		send_workqueue = NULL;
++	}
+ }
+ 
+ static int work_start(void)
+@@ -1607,6 +1691,7 @@ static int work_start(void)
+ 	if (!send_workqueue) {
+ 		log_print("can't start dlm_send");
+ 		destroy_workqueue(recv_workqueue);
++		recv_workqueue = NULL;
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -1621,6 +1706,8 @@ static void shutdown_conn(struct connection *con)
+ 
+ void dlm_lowcomms_shutdown(void)
+ {
++	int idx;
++
+ 	/* Set all the flags to prevent any
+ 	 * socket activity.
+ 	 */
+@@ -1633,7 +1720,9 @@ void dlm_lowcomms_shutdown(void)
+ 
+ 	dlm_close_sock(&listen_con.sock);
+ 
++	idx = srcu_read_lock(&connections_srcu);
+ 	foreach_conn(shutdown_conn);
++	srcu_read_unlock(&connections_srcu, idx);
+ }
+ 
+ static void _stop_conn(struct connection *con, bool and_other)
+@@ -1682,7 +1771,7 @@ static void free_conn(struct connection *con)
+ 
+ static void work_flush(void)
+ {
+-	int ok, idx;
++	int ok;
+ 	int i;
+ 	struct connection *con;
+ 
+@@ -1693,7 +1782,6 @@ static void work_flush(void)
+ 			flush_workqueue(recv_workqueue);
+ 		if (send_workqueue)
+ 			flush_workqueue(send_workqueue);
+-		idx = srcu_read_lock(&connections_srcu);
+ 		for (i = 0; i < CONN_HASH_SIZE && ok; i++) {
+ 			hlist_for_each_entry_rcu(con, &connection_hash[i],
+ 						 list) {
+@@ -1707,14 +1795,17 @@ static void work_flush(void)
+ 				}
+ 			}
+ 		}
+-		srcu_read_unlock(&connections_srcu, idx);
+ 	} while (!ok);
+ }
+ 
+ void dlm_lowcomms_stop(void)
+ {
++	int idx;
++
++	idx = srcu_read_lock(&connections_srcu);
+ 	work_flush();
+ 	foreach_conn(free_conn);
++	srcu_read_unlock(&connections_srcu, idx);
+ 	work_stop();
+ 	deinit_local();
+ }
+@@ -1738,7 +1829,7 @@ int dlm_lowcomms_start(void)
+ 
+ 	error = work_start();
+ 	if (error)
+-		goto fail;
++		goto fail_local;
+ 
+ 	dlm_allow_conn = 1;
+ 
+@@ -1755,6 +1846,9 @@ int dlm_lowcomms_start(void)
+ fail_unlisten:
+ 	dlm_allow_conn = 0;
+ 	dlm_close_sock(&listen_con.sock);
++	work_stop();
++fail_local:
++	deinit_local();
+ fail:
+ 	return error;
+ }
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index bbf3bbd908e08..22991d22af5a2 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -285,6 +285,7 @@ static int erofs_read_superblock(struct super_block *sb)
+ 			goto out;
+ 	}
+ 
++	ret = -EINVAL;
+ 	blkszbits = dsb->blkszbits;
+ 	/* 9(512 bytes) + LOG_SECTORS_PER_BLOCK == LOG_BLOCK_SIZE */
+ 	if (blkszbits != LOG_BLOCK_SIZE) {
+diff --git a/fs/exec.c b/fs/exec.c
+index 18594f11c31fe..d7c4187ca023e 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1360,6 +1360,10 @@ int begin_new_exec(struct linux_binprm * bprm)
+ 	WRITE_ONCE(me->self_exec_id, me->self_exec_id + 1);
+ 	flush_signal_handlers(me, 0);
+ 
++	retval = set_cred_ucounts(bprm->cred);
++	if (retval < 0)
++		goto out_unlock;
++
+ 	/*
+ 	 * install the new credentials for this executable
+ 	 */
+diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
+index c4523648472a0..cb1c0d8c17141 100644
+--- a/fs/exfat/dir.c
++++ b/fs/exfat/dir.c
+@@ -63,7 +63,7 @@ static void exfat_get_uniname_from_ext_entry(struct super_block *sb,
+ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_entry *dir_entry)
+ {
+ 	int i, dentries_per_clu, dentries_per_clu_bits = 0, num_ext;
+-	unsigned int type, clu_offset;
++	unsigned int type, clu_offset, max_dentries;
+ 	sector_t sector;
+ 	struct exfat_chain dir, clu;
+ 	struct exfat_uni_name uni_name;
+@@ -86,6 +86,8 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
+ 
+ 	dentries_per_clu = sbi->dentries_per_clu;
+ 	dentries_per_clu_bits = ilog2(dentries_per_clu);
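++	/* bound the directory scan by the number of dentries the volume can hold */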
++	max_dentries = (unsigned int)min_t(u64, MAX_EXFAT_DENTRIES,
++					   (u64)sbi->num_clusters << dentries_per_clu_bits);
+ 
+ 	clu_offset = dentry >> dentries_per_clu_bits;
+ 	exfat_chain_dup(&clu, &dir);
+@@ -109,7 +111,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
+ 		}
+ 	}
+ 
+-	while (clu.dir != EXFAT_EOF_CLUSTER) {
++	while (clu.dir != EXFAT_EOF_CLUSTER && dentry < max_dentries) {
+ 		i = dentry & (dentries_per_clu - 1);
+ 
+ 		for ( ; i < dentries_per_clu; i++, dentry++) {
+@@ -245,7 +247,7 @@ static int exfat_iterate(struct file *filp, struct dir_context *ctx)
+ 	if (err)
+ 		goto unlock;
+ get_new:
+-	if (cpos >= i_size_read(inode))
++	if (ei->flags == ALLOC_NO_FAT_CHAIN && cpos >= i_size_read(inode))
+ 		goto end_of_dir;
+ 
+ 	err = exfat_readdir(inode, &cpos, &de);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index cbf37b2cf871e..1293de50c8d48 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -825,6 +825,7 @@ void ext4_ext_tree_init(handle_t *handle, struct inode *inode)
+ 	eh->eh_entries = 0;
+ 	eh->eh_magic = EXT4_EXT_MAGIC;
+ 	eh->eh_max = cpu_to_le16(ext4_ext_space_root(inode, 0));
++	eh->eh_generation = 0;
+ 	ext4_mark_inode_dirty(handle, inode);
+ }
+ 
+@@ -1090,6 +1091,7 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ 	neh->eh_max = cpu_to_le16(ext4_ext_space_block(inode, 0));
+ 	neh->eh_magic = EXT4_EXT_MAGIC;
+ 	neh->eh_depth = 0;
++	neh->eh_generation = 0;
+ 
+ 	/* move remainder of path[depth] to the new leaf */
+ 	if (unlikely(path[depth].p_hdr->eh_entries !=
+@@ -1167,6 +1169,7 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ 		neh->eh_magic = EXT4_EXT_MAGIC;
+ 		neh->eh_max = cpu_to_le16(ext4_ext_space_block_idx(inode, 0));
+ 		neh->eh_depth = cpu_to_le16(depth - i);
++		neh->eh_generation = 0;
+ 		fidx = EXT_FIRST_INDEX(neh);
+ 		fidx->ei_block = border;
+ 		ext4_idx_store_pblock(fidx, oldblock);
+diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
+index 0a729027322dd..9a3a8996aacf7 100644
+--- a/fs/ext4/extents_status.c
++++ b/fs/ext4/extents_status.c
+@@ -1574,11 +1574,9 @@ static unsigned long ext4_es_scan(struct shrinker *shrink,
+ 	ret = percpu_counter_read_positive(&sbi->s_es_stats.es_stats_shk_cnt);
+ 	trace_ext4_es_shrink_scan_enter(sbi->s_sb, nr_to_scan, ret);
+ 
+-	if (!nr_to_scan)
+-		return ret;
+-
+ 	nr_shrunk = __es_shrink(sbi, nr_to_scan, NULL);
+ 
++	ret = percpu_counter_read_positive(&sbi->s_es_stats.es_stats_shk_cnt);
+ 	trace_ext4_es_shrink_scan_exit(sbi->s_sb, nr_shrunk, ret);
+ 	return nr_shrunk;
+ }
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index 9bab7fd4ccd57..e89fc0f770b03 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -402,7 +402,7 @@ static void get_orlov_stats(struct super_block *sb, ext4_group_t g,
+  *
+  * We always try to spread first-level directories.
+  *
+- * If there are blockgroups with both free inodes and free blocks counts
++ * If there are blockgroups with both free inodes and free clusters counts
+  * not worse than average we return one with smallest directory count.
+  * Otherwise we simply return a random group.
+  *
+@@ -411,7 +411,7 @@ static void get_orlov_stats(struct super_block *sb, ext4_group_t g,
+  * It's OK to put directory into a group unless
+  * it has too many directories already (max_dirs) or
+  * it has too few free inodes left (min_inodes) or
+- * it has too few free blocks left (min_blocks) or
++ * it has too few free clusters left (min_clusters) or
+  * Parent's group is preferred, if it doesn't satisfy these
+  * conditions we search cyclically through the rest. If none
+  * of the groups look good we just look for a group with more
+@@ -427,7 +427,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
+ 	ext4_group_t real_ngroups = ext4_get_groups_count(sb);
+ 	int inodes_per_group = EXT4_INODES_PER_GROUP(sb);
+ 	unsigned int freei, avefreei, grp_free;
+-	ext4_fsblk_t freeb, avefreec;
++	ext4_fsblk_t freec, avefreec;
+ 	unsigned int ndirs;
+ 	int max_dirs, min_inodes;
+ 	ext4_grpblk_t min_clusters;
+@@ -446,9 +446,8 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
+ 
+ 	freei = percpu_counter_read_positive(&sbi->s_freeinodes_counter);
+ 	avefreei = freei / ngroups;
+-	freeb = EXT4_C2B(sbi,
+-		percpu_counter_read_positive(&sbi->s_freeclusters_counter));
+-	avefreec = freeb;
++	freec = percpu_counter_read_positive(&sbi->s_freeclusters_counter);
++	avefreec = freec;
+ 	do_div(avefreec, ngroups);
+ 	ndirs = percpu_counter_read_positive(&sbi->s_dirs_counter);
+ 
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index fe6045a465993..211acfba3af7c 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -3418,7 +3418,7 @@ retry:
+ 	 * i_disksize out to i_size. This could be beyond where direct I/O is
+ 	 * happening and thus expose allocated blocks to direct I/O reads.
+ 	 */
+-	else if ((map->m_lblk * (1 << blkbits)) >= i_size_read(inode))
++	else if (((loff_t)map->m_lblk << blkbits) >= i_size_read(inode))
+ 		m_flags = EXT4_GET_BLOCKS_CREATE;
+ 	else if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+ 		m_flags = EXT4_GET_BLOCKS_IO_CREATE_EXT;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index c2c22c2baac0b..089c958aa2c34 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -1909,10 +1909,11 @@ static int mb_find_extent(struct ext4_buddy *e4b, int block,
+ 	if (ex->fe_start + ex->fe_len > EXT4_CLUSTERS_PER_GROUP(e4b->bd_sb)) {
+ 		/* Should never happen! (but apparently sometimes does?!?) */
+ 		WARN_ON(1);
+-		ext4_error(e4b->bd_sb, "corruption or bug in mb_find_extent "
+-			   "block=%d, order=%d needed=%d ex=%u/%d/%d@%u",
+-			   block, order, needed, ex->fe_group, ex->fe_start,
+-			   ex->fe_len, ex->fe_logical);
++		ext4_grp_locked_error(e4b->bd_sb, e4b->bd_group, 0, 0,
++			"corruption or bug in mb_find_extent "
++			"block=%d, order=%d needed=%d ex=%u/%d/%d@%u",
++			block, order, needed, ex->fe_group, ex->fe_start,
++			ex->fe_len, ex->fe_logical);
+ 		ex->fe_len = 0;
+ 		ex->fe_start = 0;
+ 		ex->fe_group = 0;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index d29f6aa7d96ee..736724ce86d73 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3101,8 +3101,15 @@ static void ext4_orphan_cleanup(struct super_block *sb,
+ 			inode_lock(inode);
+ 			truncate_inode_pages(inode->i_mapping, inode->i_size);
+ 			ret = ext4_truncate(inode);
+-			if (ret)
++			if (ret) {
++				/*
++				 * We need to clean up the in-core orphan list
++				 * manually if ext4_truncate() failed to get a
++				 * transaction handle.
++				 */
++				ext4_orphan_del(NULL, inode);
+ 				ext4_std_error(inode->i_sb, ret);
++			}
+ 			inode_unlock(inode);
+ 			nr_truncates++;
+ 		} else {
+@@ -5058,6 +5065,7 @@ no_journal:
+ 			ext4_msg(sb, KERN_ERR,
+ 			       "unable to initialize "
+ 			       "flex_bg meta info!");
++			ret = -ENOMEM;
+ 			goto failed_mount6;
+ 		}
+ 
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 009a09fb9d88c..e2d0c7d9673e0 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -4067,6 +4067,12 @@ static int f2fs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ 	if (f2fs_readonly(F2FS_I_SB(inode)->sb))
+ 		return -EROFS;
+ 
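++	/*
++	 * LFS mode writes blocks out of place, so a swapfile's on-disk
++	 * block addresses would not stay stable.
++	 */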
++	if (f2fs_lfs_mode(F2FS_I_SB(inode))) {
++		f2fs_err(F2FS_I_SB(inode),
++			"Swapfile not supported in LFS mode");
++		return -EINVAL;
++	}
++
+ 	ret = f2fs_convert_inline_inode(inode);
+ 	if (ret)
+ 		return ret;
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 39b522ec73e7e..e5dbe87e65b45 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -562,6 +562,7 @@ enum feat_id {
+ 	FEAT_CASEFOLD,
+ 	FEAT_COMPRESSION,
+ 	FEAT_TEST_DUMMY_ENCRYPTION_V2,
++	FEAT_ENCRYPTED_CASEFOLD,
+ };
+ 
+ static ssize_t f2fs_feature_show(struct f2fs_attr *a,
+@@ -583,6 +584,7 @@ static ssize_t f2fs_feature_show(struct f2fs_attr *a,
+ 	case FEAT_CASEFOLD:
+ 	case FEAT_COMPRESSION:
+ 	case FEAT_TEST_DUMMY_ENCRYPTION_V2:
++	case FEAT_ENCRYPTED_CASEFOLD:
+ 		return sprintf(buf, "supported\n");
+ 	}
+ 	return 0;
+@@ -687,7 +689,10 @@ F2FS_GENERAL_RO_ATTR(avg_vblocks);
+ #ifdef CONFIG_FS_ENCRYPTION
+ F2FS_FEATURE_RO_ATTR(encryption, FEAT_CRYPTO);
+ F2FS_FEATURE_RO_ATTR(test_dummy_encryption_v2, FEAT_TEST_DUMMY_ENCRYPTION_V2);
++#ifdef CONFIG_UNICODE
++F2FS_FEATURE_RO_ATTR(encrypted_casefold, FEAT_ENCRYPTED_CASEFOLD);
+ #endif
++#endif /* CONFIG_FS_ENCRYPTION */
+ #ifdef CONFIG_BLK_DEV_ZONED
+ F2FS_FEATURE_RO_ATTR(block_zoned, FEAT_BLKZONED);
+ #endif
+@@ -786,7 +791,10 @@ static struct attribute *f2fs_feat_attrs[] = {
+ #ifdef CONFIG_FS_ENCRYPTION
+ 	ATTR_LIST(encryption),
+ 	ATTR_LIST(test_dummy_encryption_v2),
++#ifdef CONFIG_UNICODE
++	ATTR_LIST(encrypted_casefold),
+ #endif
++#endif /* CONFIG_FS_ENCRYPTION */
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 	ATTR_LIST(block_zoned),
+ #endif
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index e91980f493884..8d4130b01423b 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -505,12 +505,19 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
+ 	if (!isw)
+ 		return;
+ 
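++	/* account the switch as in-flight up front; dropped again on failure below */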
++	atomic_inc(&isw_nr_in_flight);
++
+ 	/* find and pin the new wb */
+ 	rcu_read_lock();
+ 	memcg_css = css_from_id(new_wb_id, &memory_cgrp_subsys);
+-	if (memcg_css)
+-		isw->new_wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
++	if (memcg_css && !css_tryget(memcg_css))
++		memcg_css = NULL;
+ 	rcu_read_unlock();
++	if (!memcg_css)
++		goto out_free;
++
++	isw->new_wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
++	css_put(memcg_css);
+ 	if (!isw->new_wb)
+ 		goto out_free;
+ 
+@@ -535,11 +542,10 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
+ 	 * Let's continue after I_WB_SWITCH is guaranteed to be visible.
+ 	 */
+ 	call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
+-
+-	atomic_inc(&isw_nr_in_flight);
+ 	return;
+ 
+ out_free:
++	atomic_dec(&isw_nr_in_flight);
+ 	if (isw->new_wb)
+ 		wb_put(isw->new_wb);
+ 	kfree(isw);
+@@ -2205,28 +2211,6 @@ int dirtytime_interval_handler(struct ctl_table *table, int write,
+ 	return ret;
+ }
+ 
+-static noinline void block_dump___mark_inode_dirty(struct inode *inode)
+-{
+-	if (inode->i_ino || strcmp(inode->i_sb->s_id, "bdev")) {
+-		struct dentry *dentry;
+-		const char *name = "?";
+-
+-		dentry = d_find_alias(inode);
+-		if (dentry) {
+-			spin_lock(&dentry->d_lock);
+-			name = (const char *) dentry->d_name.name;
+-		}
+-		printk(KERN_DEBUG
+-		       "%s(%d): dirtied inode %lu (%s) on %s\n",
+-		       current->comm, task_pid_nr(current), inode->i_ino,
+-		       name, inode->i_sb->s_id);
+-		if (dentry) {
+-			spin_unlock(&dentry->d_lock);
+-			dput(dentry);
+-		}
+-	}
+-}
+-
+ /**
+  * __mark_inode_dirty -	internal function to mark an inode dirty
+  *
+@@ -2296,9 +2280,6 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ 	    (dirtytime && (inode->i_state & I_DIRTY_INODE)))
+ 		return;
+ 
+-	if (unlikely(block_dump))
+-		block_dump___mark_inode_dirty(inode);
+-
+ 	spin_lock(&inode->i_lock);
+ 	if (dirtytime && (inode->i_state & I_DIRTY_INODE))
+ 		goto out_unlock_inode;
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index a5ceccc5ef00f..b8d58aa082062 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -783,6 +783,7 @@ static int fuse_check_page(struct page *page)
+ 	       1 << PG_uptodate |
+ 	       1 << PG_lru |
+ 	       1 << PG_active |
++	       1 << PG_workingset |
+ 	       1 << PG_reclaim |
+ 	       1 << PG_waiters))) {
+ 		dump_page(page, "fuse: trying to steal weird page");
+@@ -1271,6 +1272,15 @@ static ssize_t fuse_dev_do_read(struct fuse_dev *fud, struct file *file,
+ 		goto restart;
+ 	}
+ 	spin_lock(&fpq->lock);
++	/*
++	 * Must not put the request on the fpq->io queue after it has been
++	 * shut down by fuse_abort_conn()
++	 */
++	if (!fpq->connected) {
++		req->out.h.error = err = -ECONNABORTED;
++		goto out_end;
++	}
+ 	list_add(&req->list, &fpq->io);
+ 	spin_unlock(&fpq->lock);
+ 	cs->req = req;
+@@ -1857,7 +1867,7 @@ static ssize_t fuse_dev_do_write(struct fuse_dev *fud,
+ 	}
+ 
+ 	err = -EINVAL;
+-	if (oh.error <= -1000 || oh.error > 0)
++	if (oh.error <= -512 || oh.error > 0)
+ 		goto copy_finish;
+ 
+ 	spin_lock(&fpq->lock);
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 1b6c001a7dd12..3fa8604c21d52 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -339,18 +339,33 @@ static struct vfsmount *fuse_dentry_automount(struct path *path)
+ 
+ 	/* Initialize superblock, making @mp_fi its root */
+ 	err = fuse_fill_super_submount(sb, mp_fi);
+-	if (err)
++	if (err) {
++		fuse_conn_put(fc);
++		kfree(fm);
++		sb->s_fs_info = NULL;
+ 		goto out_put_sb;
++	}
++
++	down_write(&fc->killsb);
++	list_add_tail(&fm->fc_entry, &fc->mounts);
++	up_write(&fc->killsb);
+ 
+ 	sb->s_flags |= SB_ACTIVE;
+ 	fsc->root = dget(sb->s_root);
++
++	/*
++	 * FIXME: setting SB_BORN requires a write barrier for
++	 *        super_cache_count(). We should actually come
++	 *        up with a proper ->get_tree() implementation
++	 *        for submounts and call vfs_get_tree() to take
++	 *        care of the write barrier.
++	 */
++	smp_wmb();
++	sb->s_flags |= SB_BORN;
++
+ 	/* We are done configuring the superblock, so unlock it */
+ 	up_write(&sb->s_umount);
+ 
+-	down_write(&fc->killsb);
+-	list_add_tail(&fm->fc_entry, &fc->mounts);
+-	up_write(&fc->killsb);
+-
+ 	/* Create the submount */
+ 	mnt = vfs_create_mount(fsc);
+ 	if (IS_ERR(mnt)) {
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index 493a83e3f5906..13ca4fe47a6e7 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -450,8 +450,8 @@ static vm_fault_t gfs2_page_mkwrite(struct vm_fault *vmf)
+ 	file_update_time(vmf->vma->vm_file);
+ 
+ 	/* page is wholly or partially inside EOF */
+-	if (offset > size - PAGE_SIZE)
+-		length = offset_in_page(size);
++	if (size - offset < PAGE_SIZE)
++		length = size - offset;
+ 	else
+ 		length = PAGE_SIZE;
+ 
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index 826f77d9cff5d..5f4504dd0875a 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -687,6 +687,7 @@ static int init_statfs(struct gfs2_sbd *sdp)
+ 	}
+ 
+ 	iput(pn);
++	pn = NULL;
+ 	ip = GFS2_I(sdp->sd_sc_inode);
+ 	error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, 0,
+ 				   &sdp->sd_sc_gh);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index fa8794c61af7b..ad1f31fafe445 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2621,7 +2621,7 @@ static bool __io_file_supports_async(struct file *file, int rw)
+ 			return true;
+ 		return false;
+ 	}
+-	if (S_ISCHR(mode) || S_ISSOCK(mode))
++	if (S_ISSOCK(mode))
+ 		return true;
+ 	if (S_ISREG(mode)) {
+ 		if (IS_ENABLED(CONFIG_BLOCK) &&
+@@ -3453,6 +3453,10 @@ static int io_renameat_prep(struct io_kiocb *req,
+ 	struct io_rename *ren = &req->rename;
+ 	const char __user *oldf, *newf;
+ 
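++	/* renameat takes no ioprio or buf_index and is not supported under IOPOLL */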
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->buf_index)
++		return -EINVAL;
+ 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
+ 		return -EBADF;
+ 
+@@ -3500,6 +3504,10 @@ static int io_unlinkat_prep(struct io_kiocb *req,
+ 	struct io_unlink *un = &req->unlink;
+ 	const char __user *fname;
+ 
++	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
++		return -EINVAL;
++	if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index)
++		return -EINVAL;
+ 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
+ 		return -EBADF;
+ 
+diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c
+index f5c058b3192ce..4474adb393ca8 100644
+--- a/fs/ntfs/inode.c
++++ b/fs/ntfs/inode.c
+@@ -477,7 +477,7 @@ err_corrupt_attr:
+ 		}
+ 		file_name_attr = (FILE_NAME_ATTR*)((u8*)attr +
+ 				le16_to_cpu(attr->data.resident.value_offset));
+-		p2 = (u8*)attr + le32_to_cpu(attr->data.resident.value_length);
++		p2 = (u8 *)file_name_attr + le32_to_cpu(attr->data.resident.value_length);
+ 		if (p2 < (u8*)attr || p2 > p)
+ 			goto err_corrupt_attr;
+ 		/* This attribute is ok, but is it in the $Extend directory? */
+diff --git a/fs/ocfs2/filecheck.c b/fs/ocfs2/filecheck.c
+index 90b8d300c1eea..de56e6231af87 100644
+--- a/fs/ocfs2/filecheck.c
++++ b/fs/ocfs2/filecheck.c
+@@ -326,11 +326,7 @@ static ssize_t ocfs2_filecheck_attr_show(struct kobject *kobj,
+ 		ret = snprintf(buf + total, remain, "%lu\t\t%u\t%s\n",
+ 			       p->fe_ino, p->fe_done,
+ 			       ocfs2_filecheck_error(p->fe_status));
+-		if (ret < 0) {
+-			total = ret;
+-			break;
+-		}
+-		if (ret == remain) {
++		if (ret >= remain) {
+ 			/* snprintf() didn't fit */
+ 			total = -E2BIG;
+ 			break;
+diff --git a/fs/ocfs2/stackglue.c b/fs/ocfs2/stackglue.c
+index d50e8b8dfea47..16f1bfc407f2a 100644
+--- a/fs/ocfs2/stackglue.c
++++ b/fs/ocfs2/stackglue.c
+@@ -500,11 +500,7 @@ static ssize_t ocfs2_loaded_cluster_plugins_show(struct kobject *kobj,
+ 	list_for_each_entry(p, &ocfs2_stack_list, sp_list) {
+ 		ret = snprintf(buf, remain, "%s\n",
+ 			       p->sp_name);
+-		if (ret < 0) {
+-			total = ret;
+-			break;
+-		}
+-		if (ret == remain) {
++		if (ret >= remain) {
+ 			/* snprintf() didn't fit */
+ 			total = -E2BIG;
+ 			break;
+@@ -531,7 +527,7 @@ static ssize_t ocfs2_active_cluster_plugin_show(struct kobject *kobj,
+ 	if (active_stack) {
+ 		ret = snprintf(buf, PAGE_SIZE, "%s\n",
+ 			       active_stack->sp_name);
+-		if (ret == PAGE_SIZE)
++		if (ret >= PAGE_SIZE)
+ 			ret = -E2BIG;
+ 	}
+ 	spin_unlock(&ocfs2_stack_lock);
+diff --git a/fs/open.c b/fs/open.c
+index e53af13b5835f..53bc0573c0eca 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -1002,12 +1002,20 @@ inline struct open_how build_open_how(int flags, umode_t mode)
+ 
+ inline int build_open_flags(const struct open_how *how, struct open_flags *op)
+ {
+-	int flags = how->flags;
++	u64 flags = how->flags;
++	u64 strip = FMODE_NONOTIFY | O_CLOEXEC;
+ 	int lookup_flags = 0;
+ 	int acc_mode = ACC_MODE(flags);
+ 
+-	/* Must never be set by userspace */
+-	flags &= ~(FMODE_NONOTIFY | O_CLOEXEC);
++	BUILD_BUG_ON_MSG(upper_32_bits(VALID_OPEN_FLAGS),
++			 "struct open_flags doesn't yet handle flags > 32 bits");
++
++	/*
++	 * Strip flags that either shouldn't be set by userspace, like
++	 * FMODE_NONOTIFY, or that aren't relevant in determining struct
++	 * open_flags, like O_CLOEXEC.
++	 */
++	flags &= ~strip;
+ 
+ 	/*
+ 	 * Older syscalls implicitly clear all of the invalid flags or argument
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index fc9784544b241..7389df326edde 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -832,7 +832,7 @@ static int show_smap(struct seq_file *m, void *v)
+ 	__show_smap(m, &mss, false);
+ 
+ 	seq_printf(m, "THPeligible:    %d\n",
+-		   transparent_hugepage_enabled(vma));
++		   transparent_hugepage_active(vma));
+ 
+ 	if (arch_pkeys_enabled())
+ 		seq_printf(m, "ProtectionKey:  %8u\n", vma_pkey(vma));
+diff --git a/fs/pstore/Kconfig b/fs/pstore/Kconfig
+index 8adabde685f13..328da35da3908 100644
+--- a/fs/pstore/Kconfig
++++ b/fs/pstore/Kconfig
+@@ -173,6 +173,7 @@ config PSTORE_BLK
+ 	tristate "Log panic/oops to a block device"
+ 	depends on PSTORE
+ 	depends on BLOCK
++	depends on BROKEN
+ 	select PSTORE_ZONE
+ 	default n
+ 	help
+diff --git a/include/asm-generic/pgtable-nop4d.h b/include/asm-generic/pgtable-nop4d.h
+index ce2cbb3c380ff..2f6b1befb1292 100644
+--- a/include/asm-generic/pgtable-nop4d.h
++++ b/include/asm-generic/pgtable-nop4d.h
+@@ -9,7 +9,6 @@
+ typedef struct { pgd_t pgd; } p4d_t;
+ 
+ #define P4D_SHIFT		PGDIR_SHIFT
+-#define MAX_PTRS_PER_P4D	1
+ #define PTRS_PER_P4D		1
+ #define P4D_SIZE		(1UL << P4D_SHIFT)
+ #define P4D_MASK		(~(P4D_SIZE-1))
+diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
+index d683f5e6d7913..b4d43a4af5f79 100644
+--- a/include/asm-generic/preempt.h
++++ b/include/asm-generic/preempt.h
+@@ -29,7 +29,7 @@ static __always_inline void preempt_count_set(int pc)
+ } while (0)
+ 
+ #define init_idle_preempt_count(p, cpu) do { \
+-	task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
++	task_thread_info(p)->preempt_count = PREEMPT_DISABLED; \
+ } while (0)
+ 
+ static __always_inline void set_preempt_need_resched(void)
+diff --git a/include/clocksource/timer-ti-dm.h b/include/clocksource/timer-ti-dm.h
+index 4c61dade8835f..f6da8a1326398 100644
+--- a/include/clocksource/timer-ti-dm.h
++++ b/include/clocksource/timer-ti-dm.h
+@@ -74,6 +74,7 @@
+ #define OMAP_TIMER_ERRATA_I103_I767			0x80000000
+ 
+ struct timer_regs {
++	u32 ocp_cfg;
+ 	u32 tidr;
+ 	u32 tier;
+ 	u32 twer;
+diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
+index 0a288dddcf5be..25806141db591 100644
+--- a/include/crypto/internal/hash.h
++++ b/include/crypto/internal/hash.h
+@@ -75,13 +75,7 @@ void crypto_unregister_ahashes(struct ahash_alg *algs, int count);
+ int ahash_register_instance(struct crypto_template *tmpl,
+ 			    struct ahash_instance *inst);
+ 
+-int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
+-		    unsigned int keylen);
+-
+-static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
+-{
+-	return alg->setkey != shash_no_setkey;
+-}
++bool crypto_shash_alg_has_setkey(struct shash_alg *alg);
+ 
+ static inline bool crypto_shash_alg_needs_key(struct shash_alg *alg)
+ {
+diff --git a/include/dt-bindings/clock/imx8mq-clock.h b/include/dt-bindings/clock/imx8mq-clock.h
+index 82e907ce7bdd3..afa74d7ba1009 100644
+--- a/include/dt-bindings/clock/imx8mq-clock.h
++++ b/include/dt-bindings/clock/imx8mq-clock.h
+@@ -405,25 +405,6 @@
+ 
+ #define IMX8MQ_VIDEO2_PLL1_REF_SEL		266
+ 
+-#define IMX8MQ_SYS1_PLL_40M_CG			267
+-#define IMX8MQ_SYS1_PLL_80M_CG			268
+-#define IMX8MQ_SYS1_PLL_100M_CG			269
+-#define IMX8MQ_SYS1_PLL_133M_CG			270
+-#define IMX8MQ_SYS1_PLL_160M_CG			271
+-#define IMX8MQ_SYS1_PLL_200M_CG			272
+-#define IMX8MQ_SYS1_PLL_266M_CG			273
+-#define IMX8MQ_SYS1_PLL_400M_CG			274
+-#define IMX8MQ_SYS1_PLL_800M_CG			275
+-#define IMX8MQ_SYS2_PLL_50M_CG			276
+-#define IMX8MQ_SYS2_PLL_100M_CG			277
+-#define IMX8MQ_SYS2_PLL_125M_CG			278
+-#define IMX8MQ_SYS2_PLL_166M_CG			279
+-#define IMX8MQ_SYS2_PLL_200M_CG			280
+-#define IMX8MQ_SYS2_PLL_250M_CG			281
+-#define IMX8MQ_SYS2_PLL_333M_CG			282
+-#define IMX8MQ_SYS2_PLL_500M_CG			283
+-#define IMX8MQ_SYS2_PLL_1000M_CG		284
+-
+ #define IMX8MQ_CLK_GPU_CORE			285
+ #define IMX8MQ_CLK_GPU_SHADER			286
+ #define IMX8MQ_CLK_M4_CORE			287
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index a0b4cfdf62a43..d2b98efb5cc50 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -44,9 +44,6 @@ static inline unsigned int bio_max_segs(unsigned int nr_segs)
+ #define bio_offset(bio)		bio_iter_offset((bio), (bio)->bi_iter)
+ #define bio_iovec(bio)		bio_iter_iovec((bio), (bio)->bi_iter)
+ 
+-#define bio_multiple_segments(bio)				\
+-	((bio)->bi_iter.bi_size != bio_iovec(bio).bv_len)
+-
+ #define bvec_iter_sectors(iter)	((iter).bi_size >> 9)
+ #define bvec_iter_end_sector(iter) ((iter).bi_sector + bvec_iter_sectors((iter)))
+ 
+@@ -271,7 +268,7 @@ static inline void bio_clear_flag(struct bio *bio, unsigned int bit)
+ 
+ static inline void bio_get_first_bvec(struct bio *bio, struct bio_vec *bv)
+ {
+-	*bv = bio_iovec(bio);
++	*bv = mp_bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
+ }
+ 
+ static inline void bio_get_last_bvec(struct bio *bio, struct bio_vec *bv)
+@@ -279,10 +276,9 @@ static inline void bio_get_last_bvec(struct bio *bio, struct bio_vec *bv)
+ 	struct bvec_iter iter = bio->bi_iter;
+ 	int idx;
+ 
+-	if (unlikely(!bio_multiple_segments(bio))) {
+-		*bv = bio_iovec(bio);
+-		return;
+-	}
++	bio_get_first_bvec(bio, bv);
++	if (bv->bv_len == bio->bi_iter.bi_size)
++		return;		/* this bio only has a single bvec */
+ 
+ 	bio_advance_iter(bio, &iter, iter.bi_size);
+ 
+diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
+index d6ab416ee2d2c..7f83d51c0fd7b 100644
+--- a/include/linux/clocksource.h
++++ b/include/linux/clocksource.h
+@@ -137,7 +137,7 @@ struct clocksource {
+ #define CLOCK_SOURCE_UNSTABLE			0x40
+ #define CLOCK_SOURCE_SUSPEND_NONSTOP		0x80
+ #define CLOCK_SOURCE_RESELECT			0x100
+-
++#define CLOCK_SOURCE_VERIFY_PERCPU		0x200
+ /* simplify initialization of mask field */
+ #define CLOCKSOURCE_MASK(bits) GENMASK_ULL((bits) - 1, 0)
+ 
+diff --git a/include/linux/cred.h b/include/linux/cred.h
+index 14971322e1a05..65014e50d5fab 100644
+--- a/include/linux/cred.h
++++ b/include/linux/cred.h
+@@ -143,6 +143,7 @@ struct cred {
+ #endif
+ 	struct user_struct *user;	/* real user ID subscription */
+ 	struct user_namespace *user_ns; /* user_ns the caps and keyrings are relative to. */
++	struct ucounts *ucounts;
+ 	struct group_info *group_info;	/* supplementary groups for euid/fsgid */
+ 	/* RCU deletion */
+ 	union {
+@@ -169,6 +170,7 @@ extern int set_security_override_from_ctx(struct cred *, const char *);
+ extern int set_create_files_as(struct cred *, struct inode *);
+ extern int cred_fscmp(const struct cred *, const struct cred *);
+ extern void __init cred_init(void);
++extern int set_cred_ucounts(struct cred *);
+ 
+ /*
+  * check for validity of credentials
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index 2a8ebe6c222ef..b4e1ebaae825a 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -115,9 +115,34 @@ extern struct kobj_attribute shmem_enabled_attr;
+ 
+ extern unsigned long transparent_hugepage_flags;
+ 
++static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
++		unsigned long haddr)
++{
++	/* Don't have to check pgoff for anonymous vma */
++	if (!vma_is_anonymous(vma)) {
++		if (!IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
++				HPAGE_PMD_NR))
++			return false;
++	}
++
++	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
++		return false;
++	return true;
++}
++
++static inline bool transhuge_vma_enabled(struct vm_area_struct *vma,
++					  unsigned long vm_flags)
++{
++	/* Explicitly disabled through madvise. */
++	if ((vm_flags & VM_NOHUGEPAGE) ||
++	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
++		return false;
++	return true;
++}
++
+ /*
+  * to be used on vmas which are known to support THP.
+- * Use transparent_hugepage_enabled otherwise
++ * Use transparent_hugepage_active otherwise
+  */
+ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
+ {
+@@ -128,15 +153,12 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
+ 	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_NEVER_DAX))
+ 		return false;
+ 
+-	if (vma->vm_flags & VM_NOHUGEPAGE)
++	if (!transhuge_vma_enabled(vma, vma->vm_flags))
+ 		return false;
+ 
+ 	if (vma_is_temporary_stack(vma))
+ 		return false;
+ 
+-	if (test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+-		return false;
+-
+ 	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_FLAG))
+ 		return true;
+ 
+@@ -150,24 +172,7 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
+ 	return false;
+ }
+ 
+-bool transparent_hugepage_enabled(struct vm_area_struct *vma);
+-
+-#define HPAGE_CACHE_INDEX_MASK (HPAGE_PMD_NR - 1)
+-
+-static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
+-		unsigned long haddr)
+-{
+-	/* Don't have to check pgoff for anonymous vma */
+-	if (!vma_is_anonymous(vma)) {
+-		if (((vma->vm_start >> PAGE_SHIFT) & HPAGE_CACHE_INDEX_MASK) !=
+-			(vma->vm_pgoff & HPAGE_CACHE_INDEX_MASK))
+-			return false;
+-	}
+-
+-	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
+-		return false;
+-	return true;
+-}
++bool transparent_hugepage_active(struct vm_area_struct *vma);
+ 
+ #define transparent_hugepage_use_zero_page()				\
+ 	(transparent_hugepage_flags &					\
+@@ -354,7 +359,7 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
+ 	return false;
+ }
+ 
+-static inline bool transparent_hugepage_enabled(struct vm_area_struct *vma)
++static inline bool transparent_hugepage_active(struct vm_area_struct *vma)
+ {
+ 	return false;
+ }
+@@ -365,6 +370,12 @@ static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
+ 	return false;
+ }
+ 
++static inline bool transhuge_vma_enabled(struct vm_area_struct *vma,
++					  unsigned long vm_flags)
++{
++	return false;
++}
++
+ static inline void prep_transhuge_page(struct page *page) {}
+ 
+ static inline bool is_transparent_hugepage(struct page *page)
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 3c0117656745a..28a110ec2a0d5 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -875,6 +875,11 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
+ #else	/* CONFIG_HUGETLB_PAGE */
+ struct hstate {};
+ 
++static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
++{
++	return NULL;
++}
++
+ static inline int isolate_or_dissolve_huge_page(struct page *page,
+ 						struct list_head *list)
+ {
+diff --git a/include/linux/iio/common/cros_ec_sensors_core.h b/include/linux/iio/common/cros_ec_sensors_core.h
+index 7ce8a8adad587..c582e1a142320 100644
+--- a/include/linux/iio/common/cros_ec_sensors_core.h
++++ b/include/linux/iio/common/cros_ec_sensors_core.h
+@@ -77,7 +77,7 @@ struct cros_ec_sensors_core_state {
+ 		u16 scale;
+ 	} calib[CROS_EC_SENSOR_MAX_AXIS];
+ 	s8 sign[CROS_EC_SENSOR_MAX_AXIS];
+-	u8 samples[CROS_EC_SAMPLE_SIZE];
++	u8 samples[CROS_EC_SAMPLE_SIZE] __aligned(8);
+ 
+ 	int (*read_ec_sensors_data)(struct iio_dev *indio_dev,
+ 				    unsigned long scan_mask, s16 *data);
+diff --git a/include/linux/kthread.h b/include/linux/kthread.h
+index 2484ed97e72f5..d9133d6db3084 100644
+--- a/include/linux/kthread.h
++++ b/include/linux/kthread.h
+@@ -33,6 +33,8 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
+ 					  unsigned int cpu,
+ 					  const char *namefmt);
+ 
++void set_kthread_struct(struct task_struct *p);
++
+ void kthread_set_per_cpu(struct task_struct *k, int cpu);
+ bool kthread_is_per_cpu(struct task_struct *k);
+ 
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 8ae31622deeff..9afb8998e7e5d 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2474,7 +2474,6 @@ extern void set_dma_reserve(unsigned long new_dma_reserve);
+ extern void memmap_init_range(unsigned long, int, unsigned long,
+ 		unsigned long, unsigned long, enum meminit_context,
+ 		struct vmem_altmap *, int migratetype);
+-extern void memmap_init_zone(struct zone *zone);
+ extern void setup_per_zone_wmarks(void);
+ extern int __meminit init_per_zone_wmark_min(void);
+ extern void mem_init(void);
+diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
+index a43047b1030dc..c32600c9e1ade 100644
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -1592,4 +1592,26 @@ typedef unsigned int pgtbl_mod_mask;
+ #define pte_leaf_size(x) PAGE_SIZE
+ #endif
+ 
++/*
++ * Some architectures have MMUs that are configurable or selectable at boot
++ * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
++ * helps to have a static maximum value.
++ */
++
++#ifndef MAX_PTRS_PER_PTE
++#define MAX_PTRS_PER_PTE PTRS_PER_PTE
++#endif
++
++#ifndef MAX_PTRS_PER_PMD
++#define MAX_PTRS_PER_PMD PTRS_PER_PMD
++#endif
++
++#ifndef MAX_PTRS_PER_PUD
++#define MAX_PTRS_PER_PUD PTRS_PER_PUD
++#endif
++
++#ifndef MAX_PTRS_PER_P4D
++#define MAX_PTRS_PER_P4D PTRS_PER_P4D
++#endif
++
+ #endif /* _LINUX_PGTABLE_H */
+diff --git a/include/linux/prandom.h b/include/linux/prandom.h
+index bbf4b4ad61dfd..056d31317e499 100644
+--- a/include/linux/prandom.h
++++ b/include/linux/prandom.h
+@@ -111,7 +111,7 @@ static inline u32 __seed(u32 x, u32 m)
+  */
+ static inline void prandom_seed_state(struct rnd_state *state, u64 seed)
+ {
+-	u32 i = (seed >> 32) ^ (seed << 10) ^ seed;
++	u32 i = ((seed >> 32) ^ (seed << 10) ^ seed) & 0xffffffffUL;
+ 
+ 	state->s1 = __seed(i,   2U);
+ 	state->s2 = __seed(i,   8U);
+diff --git a/include/linux/swap.h b/include/linux/swap.h
+index 144727041e78b..a84f76db50702 100644
+--- a/include/linux/swap.h
++++ b/include/linux/swap.h
+@@ -526,6 +526,15 @@ static inline struct swap_info_struct *swp_swap_info(swp_entry_t entry)
+ 	return NULL;
+ }
+ 
++static inline struct swap_info_struct *get_swap_device(swp_entry_t entry)
++{
++	return NULL;
++}
++
++static inline void put_swap_device(struct swap_info_struct *si)
++{
++}
++
+ #define swap_address_space(entry)		(NULL)
+ #define get_nr_swap_pages()			0L
+ #define total_swap_pages			0L
+diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
+index 13f65420f188e..ab58696d0ddd1 100644
+--- a/include/linux/tracepoint.h
++++ b/include/linux/tracepoint.h
+@@ -41,7 +41,17 @@ extern int
+ tracepoint_probe_register_prio(struct tracepoint *tp, void *probe, void *data,
+ 			       int prio);
+ extern int
++tracepoint_probe_register_prio_may_exist(struct tracepoint *tp, void *probe, void *data,
++					 int prio);
++extern int
+ tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data);
++static inline int
++tracepoint_probe_register_may_exist(struct tracepoint *tp, void *probe,
++				    void *data)
++{
++	return tracepoint_probe_register_prio_may_exist(tp, probe, data,
++							TRACEPOINT_DEFAULT_PRIO);
++}
+ extern void
+ for_each_kernel_tracepoint(void (*fct)(struct tracepoint *tp, void *priv),
+ 		void *priv);
+diff --git a/include/linux/user_namespace.h b/include/linux/user_namespace.h
+index 1d08dbbcfe32a..bfa6463f8a95d 100644
+--- a/include/linux/user_namespace.h
++++ b/include/linux/user_namespace.h
+@@ -104,11 +104,15 @@ struct ucounts {
+ };
+ 
+ extern struct user_namespace init_user_ns;
++extern struct ucounts init_ucounts;
+ 
+ bool setup_userns_sysctls(struct user_namespace *ns);
+ void retire_userns_sysctls(struct user_namespace *ns);
+ struct ucounts *inc_ucount(struct user_namespace *ns, kuid_t uid, enum ucount_type type);
+ void dec_ucount(struct ucounts *ucounts, enum ucount_type type);
++struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid);
++struct ucounts *get_ucounts(struct ucounts *ucounts);
++void put_ucounts(struct ucounts *ucounts);
+ 
+ #ifdef CONFIG_USER_NS
+ 
+diff --git a/include/media/hevc-ctrls.h b/include/media/hevc-ctrls.h
+index b4cb2ef02f171..226fcfa0e0261 100644
+--- a/include/media/hevc-ctrls.h
++++ b/include/media/hevc-ctrls.h
+@@ -81,7 +81,7 @@ struct v4l2_ctrl_hevc_sps {
+ 	__u64	flags;
+ };
+ 
+-#define V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT		(1ULL << 0)
++#define V4L2_HEVC_PPS_FLAG_DEPENDENT_SLICE_SEGMENT_ENABLED	(1ULL << 0)
+ #define V4L2_HEVC_PPS_FLAG_OUTPUT_FLAG_PRESENT			(1ULL << 1)
+ #define V4L2_HEVC_PPS_FLAG_SIGN_DATA_HIDING_ENABLED		(1ULL << 2)
+ #define V4L2_HEVC_PPS_FLAG_CABAC_INIT_PRESENT			(1ULL << 3)
+@@ -160,6 +160,7 @@ struct v4l2_hevc_pred_weight_table {
+ #define V4L2_HEVC_SLICE_PARAMS_FLAG_USE_INTEGER_MV		(1ULL << 6)
+ #define V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_DEBLOCKING_FILTER_DISABLED (1ULL << 7)
+ #define V4L2_HEVC_SLICE_PARAMS_FLAG_SLICE_LOOP_FILTER_ACROSS_SLICES_ENABLED (1ULL << 8)
++#define V4L2_HEVC_SLICE_PARAMS_FLAG_DEPENDENT_SLICE_SEGMENT	(1ULL << 9)
+ 
+ struct v4l2_ctrl_hevc_slice_params {
+ 	__u32	bit_size;
+diff --git a/include/media/media-dev-allocator.h b/include/media/media-dev-allocator.h
+index b35ea6062596b..2ab54d426c644 100644
+--- a/include/media/media-dev-allocator.h
++++ b/include/media/media-dev-allocator.h
+@@ -19,7 +19,7 @@
+ 
+ struct usb_device;
+ 
+-#if defined(CONFIG_MEDIA_CONTROLLER) && defined(CONFIG_USB)
++#if defined(CONFIG_MEDIA_CONTROLLER) && IS_ENABLED(CONFIG_USB)
+ /**
+  * media_device_usb_allocate() - Allocate and return struct &media device
+  *
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index ea4ae551c4268..18b135dc968b9 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -1774,13 +1774,15 @@ struct hci_cp_ext_adv_set {
+ 	__u8  max_events;
+ } __packed;
+ 
++#define HCI_MAX_EXT_AD_LENGTH	251
++
+ #define HCI_OP_LE_SET_EXT_ADV_DATA		0x2037
+ struct hci_cp_le_set_ext_adv_data {
+ 	__u8  handle;
+ 	__u8  operation;
+ 	__u8  frag_pref;
+ 	__u8  length;
+-	__u8  data[HCI_MAX_AD_LENGTH];
++	__u8  data[];
+ } __packed;
+ 
+ #define HCI_OP_LE_SET_EXT_SCAN_RSP_DATA		0x2038
+@@ -1789,7 +1791,7 @@ struct hci_cp_le_set_ext_scan_rsp_data {
+ 	__u8  operation;
+ 	__u8  frag_pref;
+ 	__u8  length;
+-	__u8  data[HCI_MAX_AD_LENGTH];
++	__u8  data[];
+ } __packed;
+ 
+ #define LE_SET_ADV_DATA_OP_COMPLETE	0x03
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index c73ac52af1869..89c8406dddb4a 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -228,9 +228,9 @@ struct adv_info {
+ 	__u16	remaining_time;
+ 	__u16	duration;
+ 	__u16	adv_data_len;
+-	__u8	adv_data[HCI_MAX_AD_LENGTH];
++	__u8	adv_data[HCI_MAX_EXT_AD_LENGTH];
+ 	__u16	scan_rsp_len;
+-	__u8	scan_rsp_data[HCI_MAX_AD_LENGTH];
++	__u8	scan_rsp_data[HCI_MAX_EXT_AD_LENGTH];
+ 	__s8	tx_power;
+ 	__u32   min_interval;
+ 	__u32   max_interval;
+@@ -550,9 +550,9 @@ struct hci_dev {
+ 	DECLARE_BITMAP(dev_flags, __HCI_NUM_FLAGS);
+ 
+ 	__s8			adv_tx_power;
+-	__u8			adv_data[HCI_MAX_AD_LENGTH];
++	__u8			adv_data[HCI_MAX_EXT_AD_LENGTH];
+ 	__u8			adv_data_len;
+-	__u8			scan_rsp_data[HCI_MAX_AD_LENGTH];
++	__u8			scan_rsp_data[HCI_MAX_EXT_AD_LENGTH];
+ 	__u8			scan_rsp_data_len;
+ 
+ 	struct list_head	adv_instances;
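
The hci.h hunks turn the extended-advertising command buffers into
flexible array members sized per command, while hci_core.h grows the
cached copies to the 251-byte extended limit. A minimal userspace model
of the flexible-array pattern (struct and values illustrative, not the
real HCI definitions):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ext_adv_data_cmd {
	uint8_t handle;
	uint8_t operation;
	uint8_t frag_pref;
	uint8_t length;
	uint8_t data[];			/* sized at allocation time */
};

static struct ext_adv_data_cmd *cmd_alloc(const uint8_t *adv, uint8_t len)
{
	struct ext_adv_data_cmd *cmd = malloc(sizeof(*cmd) + len);

	if (!cmd)
		return NULL;
	cmd->handle = 0;
	cmd->operation = 0x03;		/* "complete data", illustrative */
	cmd->frag_pref = 0;
	cmd->length = len;
	memcpy(cmd->data, adv, len);
	return cmd;
}

int main(void)
{
	uint8_t adv[] = { 0x02, 0x01, 0x06 };	/* a flags AD structure */
	struct ext_adv_data_cmd *cmd = cmd_alloc(adv, sizeof(adv));

	if (!cmd)
		return 1;
	printf("allocated %zu bytes for a %u-byte payload\n",
	       sizeof(*cmd) + (size_t)cmd->length, (unsigned)cmd->length);
	free(cmd);
	return 0;
}
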
+diff --git a/include/net/ip.h b/include/net/ip.h
+index e20874059f826..d9683bef86840 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -31,6 +31,7 @@
+ #include <net/flow.h>
+ #include <net/flow_dissector.h>
+ #include <net/netns/hash.h>
++#include <net/lwtunnel.h>
+ 
+ #define IPV4_MAX_PMTU		65535U		/* RFC 2675, Section 5.1 */
+ #define IPV4_MIN_MTU		68			/* RFC 791 */
+@@ -445,22 +446,25 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
+ 
+ 	/* 'forwarding = true' case should always honour route mtu */
+ 	mtu = dst_metric_raw(dst, RTAX_MTU);
+-	if (mtu)
+-		return mtu;
++	if (!mtu)
++		mtu = min(READ_ONCE(dst->dev->mtu), IP_MAX_MTU);
+ 
+-	return min(READ_ONCE(dst->dev->mtu), IP_MAX_MTU);
++	return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
+ }
+ 
+ static inline unsigned int ip_skb_dst_mtu(struct sock *sk,
+ 					  const struct sk_buff *skb)
+ {
++	unsigned int mtu;
++
+ 	if (!sk || !sk_fullsock(sk) || ip_sk_use_pmtu(sk)) {
+ 		bool forwarding = IPCB(skb)->flags & IPSKB_FORWARDED;
+ 
+ 		return ip_dst_mtu_maybe_forward(skb_dst(skb), forwarding);
+ 	}
+ 
+-	return min(READ_ONCE(skb_dst(skb)->dev->mtu), IP_MAX_MTU);
++	mtu = min(READ_ONCE(skb_dst(skb)->dev->mtu), IP_MAX_MTU);
++	return mtu - lwtunnel_headroom(skb_dst(skb)->lwtstate, mtu);
+ }
+ 
+ struct dst_metrics *ip_fib_metrics_init(struct net *net, struct nlattr *fc_mx,
+diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h
+index f51a118bfce8b..f14149df5a654 100644
+--- a/include/net/ip6_route.h
++++ b/include/net/ip6_route.h
+@@ -265,11 +265,18 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 
+ static inline int ip6_skb_dst_mtu(struct sk_buff *skb)
+ {
++	int mtu;
++
+ 	struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ?
+ 				inet6_sk(skb->sk) : NULL;
+ 
+-	return (np && np->pmtudisc >= IPV6_PMTUDISC_PROBE) ?
+-	       skb_dst(skb)->dev->mtu : dst_mtu(skb_dst(skb));
++	if (np && np->pmtudisc >= IPV6_PMTUDISC_PROBE) {
++		mtu = READ_ONCE(skb_dst(skb)->dev->mtu);
++		mtu -= lwtunnel_headroom(skb_dst(skb)->lwtstate, mtu);
++	} else
++		mtu = dst_mtu(skb_dst(skb));
++
++	return mtu;
+ }
+ 
+ static inline bool ip6_sk_accept_pmtu(const struct sock *sk)
+@@ -317,7 +324,7 @@ static inline unsigned int ip6_dst_mtu_forward(const struct dst_entry *dst)
+ 	if (dst_metric_locked(dst, RTAX_MTU)) {
+ 		mtu = dst_metric_raw(dst, RTAX_MTU);
+ 		if (mtu)
+-			return mtu;
++			goto out;
+ 	}
+ 
+ 	mtu = IPV6_MIN_MTU;
+@@ -327,7 +334,8 @@ static inline unsigned int ip6_dst_mtu_forward(const struct dst_entry *dst)
+ 		mtu = idev->cnf.mtu6;
+ 	rcu_read_unlock();
+ 
+-	return mtu;
++out:
++	return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
+ }
+ 
+ u32 ip6_mtu_from_fib6(const struct fib6_result *res,
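
Both address families now subtract the lightweight-tunnel encapsulation
headroom before an MTU is reported upward, so PMTU users see the payload
space that actually survives encapsulation. The computation in a toy
model (all numbers illustrative):

#include <stdio.h>

static unsigned int route_mtu_metric(void)	{ return 0; }	/* unset */
static unsigned int device_mtu(void)		{ return 1500; }
static unsigned int lwtunnel_headroom(void)	{ return 8; }	/* e.g. MPLS */

static unsigned int dst_mtu_maybe_forward(void)
{
	unsigned int mtu = route_mtu_metric();

	if (!mtu)
		mtu = device_mtu();
	/* the fix: report what is left after tunnel encapsulation */
	return mtu - lwtunnel_headroom();
}

int main(void)
{
	printf("usable MTU: %u\n", dst_mtu_maybe_forward());	/* 1492 */
	return 0;
}
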
+diff --git a/include/net/macsec.h b/include/net/macsec.h
+index 52874cdfe2260..d6fa6b97f6efa 100644
+--- a/include/net/macsec.h
++++ b/include/net/macsec.h
+@@ -241,7 +241,7 @@ struct macsec_context {
+ 	struct macsec_rx_sc *rx_sc;
+ 	struct {
+ 		unsigned char assoc_num;
+-		u8 key[MACSEC_KEYID_LEN];
++		u8 key[MACSEC_MAX_KEY_LEN];
+ 		union {
+ 			struct macsec_rx_sa *rx_sa;
+ 			struct macsec_tx_sa *tx_sa;
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 1e625519ae968..57710303908c6 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -163,6 +163,12 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
+ 		if (spin_trylock(&qdisc->seqlock))
+ 			goto nolock_empty;
+ 
++		/* Paired with smp_mb__after_atomic() to make sure
++		 * STATE_MISSED checking is synchronized with clearing
++		 * in pfifo_fast_dequeue().
++		 */
++		smp_mb__before_atomic();
++
+ 		/* If the MISSED flag is set, it means other thread has
+ 		 * set the MISSED flag before second spin_trylock(), so
+ 		 * we can return false here to avoid multi cpus doing
+@@ -180,6 +186,12 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
+ 		 */
+ 		set_bit(__QDISC_STATE_MISSED, &qdisc->state);
+ 
++		/* spin_trylock() only has load-acquire semantic, so use
++		 * smp_mb__after_atomic() to ensure STATE_MISSED is set
++		 * before doing the second spin_trylock().
++		 */
++		smp_mb__after_atomic();
++
+ 		/* Retry again in case other CPU may not see the new flag
+ 		 * after it releases the lock at the end of qdisc_run_end().
+ 		 */
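
This is the classic "set flag, full barrier, retry" idiom: without the
barrier the second spin_trylock() could be reordered before the MISSED
store and both CPUs could bail with the qdisc never run. A userspace
approximation using C11 atomics and a pthread mutex (a sketch of the
idiom only; smp_mb__after_atomic() is modelled by a seq_cst fence):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t seqlock = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool missed;

static bool run_begin(void)
{
	if (pthread_mutex_trylock(&seqlock) == 0)
		return true;

	/* paired with the ordering on the side that clears the flag */
	if (atomic_load_explicit(&missed, memory_order_acquire))
		return false;		/* the lock holder will rerun for us */

	atomic_store(&missed, true);
	atomic_thread_fence(memory_order_seq_cst); /* smp_mb__after_atomic() */

	/* retry: the holder may have unlocked before seeing MISSED */
	return pthread_mutex_trylock(&seqlock) == 0;
}

int main(void)
{
	if (run_begin()) {
		puts("got the run lock");
		pthread_mutex_unlock(&seqlock);
	}
	return 0;
}
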
+diff --git a/include/net/tc_act/tc_vlan.h b/include/net/tc_act/tc_vlan.h
+index f051046ba0344..f94b8bc26f9ec 100644
+--- a/include/net/tc_act/tc_vlan.h
++++ b/include/net/tc_act/tc_vlan.h
+@@ -16,6 +16,7 @@ struct tcf_vlan_params {
+ 	u16               tcfv_push_vid;
+ 	__be16            tcfv_push_proto;
+ 	u8                tcfv_push_prio;
++	bool              tcfv_push_prio_exists;
+ 	struct rcu_head   rcu;
+ };
+ 
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index c58a6d4eb6103..6232a5f048bde 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1546,6 +1546,7 @@ void xfrm_sad_getinfo(struct net *net, struct xfrmk_sadinfo *si);
+ void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si);
+ u32 xfrm_replay_seqhi(struct xfrm_state *x, __be32 net_seq);
+ int xfrm_init_replay(struct xfrm_state *x);
++u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu);
+ u32 xfrm_state_mtu(struct xfrm_state *x, int mtu);
+ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload);
+ int xfrm_init_state(struct xfrm_state *x);
+diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
+index eaa8386dbc630..7a9a23e7a604a 100644
+--- a/include/net/xsk_buff_pool.h
++++ b/include/net/xsk_buff_pool.h
+@@ -147,11 +147,16 @@ static inline bool xp_desc_crosses_non_contig_pg(struct xsk_buff_pool *pool,
+ {
+ 	bool cross_pg = (addr & (PAGE_SIZE - 1)) + len > PAGE_SIZE;
+ 
+-	if (pool->dma_pages_cnt && cross_pg) {
++	if (likely(!cross_pg))
++		return false;
++
++	if (pool->dma_pages_cnt) {
+ 		return !(pool->dma_pages[addr >> PAGE_SHIFT] &
+ 			 XSK_NEXT_PG_CONTIG_MASK);
+ 	}
+-	return false;
++
++	/* skb path */
++	return addr + len > pool->addrs_cnt;
+ }
+ 
+ static inline u64 xp_aligned_extract_addr(struct xsk_buff_pool *pool, u64 addr)
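
The rewrite answers the cheap question first -- does [addr, addr + len)
cross a page boundary at all -- and only then branches on the DMA case
(per-page contiguity bits) or the new skb case (bounds check against the
pool). The boundary test itself, runnable standalone:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL

static bool crosses_page(uint64_t addr, uint64_t len)
{
	/* offset within the page plus length spills past the page end? */
	return (addr & (PAGE_SIZE - 1)) + len > PAGE_SIZE;
}

int main(void)
{
	printf("%d\n", crosses_page(4090, 4));	/* 0: fits in the page */
	printf("%d\n", crosses_page(4090, 10));	/* 1: spills over */
	return 0;
}
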
+diff --git a/include/scsi/fc/fc_ms.h b/include/scsi/fc/fc_ms.h
+index 9e273fed0a85f..800d53dc94705 100644
+--- a/include/scsi/fc/fc_ms.h
++++ b/include/scsi/fc/fc_ms.h
+@@ -63,8 +63,8 @@ enum fc_fdmi_hba_attr_type {
+  * HBA Attribute Length
+  */
+ #define FC_FDMI_HBA_ATTR_NODENAME_LEN		8
+-#define FC_FDMI_HBA_ATTR_MANUFACTURER_LEN	80
+-#define FC_FDMI_HBA_ATTR_SERIALNUMBER_LEN	80
++#define FC_FDMI_HBA_ATTR_MANUFACTURER_LEN	64
++#define FC_FDMI_HBA_ATTR_SERIALNUMBER_LEN	64
+ #define FC_FDMI_HBA_ATTR_MODEL_LEN		256
+ #define FC_FDMI_HBA_ATTR_MODELDESCR_LEN		256
+ #define FC_FDMI_HBA_ATTR_HARDWAREVERSION_LEN	256
+diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
+index 02f966e9358f6..091f284bd6e93 100644
+--- a/include/scsi/libiscsi.h
++++ b/include/scsi/libiscsi.h
+@@ -424,6 +424,7 @@ extern int iscsi_conn_start(struct iscsi_cls_conn *);
+ extern void iscsi_conn_stop(struct iscsi_cls_conn *, int);
+ extern int iscsi_conn_bind(struct iscsi_cls_session *, struct iscsi_cls_conn *,
+ 			   int);
++extern void iscsi_conn_unbind(struct iscsi_cls_conn *cls_conn, bool is_active);
+ extern void iscsi_conn_failure(struct iscsi_conn *conn, enum iscsi_err err);
+ extern void iscsi_session_failure(struct iscsi_session *session,
+ 				  enum iscsi_err err);
+diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
+index fc5a39839b4b0..3974329d4d023 100644
+--- a/include/scsi/scsi_transport_iscsi.h
++++ b/include/scsi/scsi_transport_iscsi.h
+@@ -82,6 +82,7 @@ struct iscsi_transport {
+ 	void (*destroy_session) (struct iscsi_cls_session *session);
+ 	struct iscsi_cls_conn *(*create_conn) (struct iscsi_cls_session *sess,
+ 				uint32_t cid);
++	void (*unbind_conn) (struct iscsi_cls_conn *conn, bool is_active);
+ 	int (*bind_conn) (struct iscsi_cls_session *session,
+ 			  struct iscsi_cls_conn *cls_conn,
+ 			  uint64_t transport_eph, int is_leading);
+@@ -196,15 +197,23 @@ enum iscsi_connection_state {
+ 	ISCSI_CONN_BOUND,
+ };
+ 
++#define ISCSI_CLS_CONN_BIT_CLEANUP	1
++
+ struct iscsi_cls_conn {
+ 	struct list_head conn_list;	/* item in connlist */
+-	struct list_head conn_list_err;	/* item in connlist_err */
+ 	void *dd_data;			/* LLD private data */
+ 	struct iscsi_transport *transport;
+ 	uint32_t cid;			/* connection id */
++	/*
++	 * This protects the conn startup and binding/unbinding of the ep to
++	 * the conn. Unbinding includes ep_disconnect and stop_conn.
++	 */
+ 	struct mutex ep_mutex;
+ 	struct iscsi_endpoint *ep;
+ 
++	unsigned long flags;
++	struct work_struct cleanup_work;
++
+ 	struct device dev;		/* sysfs transport/container device */
+ 	enum iscsi_connection_state state;
+ };
+@@ -441,6 +450,7 @@ extern int iscsi_scan_finished(struct Scsi_Host *shost, unsigned long time);
+ extern struct iscsi_endpoint *iscsi_create_endpoint(int dd_size);
+ extern void iscsi_destroy_endpoint(struct iscsi_endpoint *ep);
+ extern struct iscsi_endpoint *iscsi_lookup_endpoint(u64 handle);
++extern void iscsi_put_endpoint(struct iscsi_endpoint *ep);
+ extern int iscsi_block_scsi_eh(struct scsi_cmnd *cmd);
+ extern struct iscsi_iface *iscsi_create_iface(struct Scsi_Host *shost,
+ 					      struct iscsi_transport *t,
+diff --git a/include/uapi/linux/seccomp.h b/include/uapi/linux/seccomp.h
+index 6ba18b82a02e4..78074254ab98a 100644
+--- a/include/uapi/linux/seccomp.h
++++ b/include/uapi/linux/seccomp.h
+@@ -115,6 +115,7 @@ struct seccomp_notif_resp {
+ 
+ /* valid flags for seccomp_notif_addfd */
+ #define SECCOMP_ADDFD_FLAG_SETFD	(1UL << 0) /* Specify remote fd */
++#define SECCOMP_ADDFD_FLAG_SEND		(1UL << 1) /* Addfd and return it, atomically */
+ 
+ /**
+  * struct seccomp_notif_addfd
+diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h
+index d43bec5f1afd0..5afc19c687049 100644
+--- a/include/uapi/linux/v4l2-controls.h
++++ b/include/uapi/linux/v4l2-controls.h
+@@ -50,6 +50,7 @@
+ #ifndef __LINUX_V4L2_CONTROLS_H
+ #define __LINUX_V4L2_CONTROLS_H
+ 
++#include <linux/const.h>
+ #include <linux/types.h>
+ 
+ /* Control classes */
+@@ -1602,30 +1603,30 @@ struct v4l2_ctrl_h264_decode_params {
+ #define V4L2_FWHT_VERSION			3
+ 
+ /* Set if this is an interlaced format */
+-#define V4L2_FWHT_FL_IS_INTERLACED		BIT(0)
++#define V4L2_FWHT_FL_IS_INTERLACED		_BITUL(0)
+ /* Set if this is a bottom-first (NTSC) interlaced format */
+-#define V4L2_FWHT_FL_IS_BOTTOM_FIRST		BIT(1)
++#define V4L2_FWHT_FL_IS_BOTTOM_FIRST		_BITUL(1)
+ /* Set if each 'frame' contains just one field */
+-#define V4L2_FWHT_FL_IS_ALTERNATE		BIT(2)
++#define V4L2_FWHT_FL_IS_ALTERNATE		_BITUL(2)
+ /*
+  * If V4L2_FWHT_FL_IS_ALTERNATE was set, then this is set if this
+  * 'frame' is the bottom field, else it is the top field.
+  */
+-#define V4L2_FWHT_FL_IS_BOTTOM_FIELD		BIT(3)
++#define V4L2_FWHT_FL_IS_BOTTOM_FIELD		_BITUL(3)
+ /* Set if the Y' plane is uncompressed */
+-#define V4L2_FWHT_FL_LUMA_IS_UNCOMPRESSED	BIT(4)
++#define V4L2_FWHT_FL_LUMA_IS_UNCOMPRESSED	_BITUL(4)
+ /* Set if the Cb plane is uncompressed */
+-#define V4L2_FWHT_FL_CB_IS_UNCOMPRESSED		BIT(5)
++#define V4L2_FWHT_FL_CB_IS_UNCOMPRESSED		_BITUL(5)
+ /* Set if the Cr plane is uncompressed */
+-#define V4L2_FWHT_FL_CR_IS_UNCOMPRESSED		BIT(6)
++#define V4L2_FWHT_FL_CR_IS_UNCOMPRESSED		_BITUL(6)
+ /* Set if the chroma plane is full height, if cleared it is half height */
+-#define V4L2_FWHT_FL_CHROMA_FULL_HEIGHT		BIT(7)
++#define V4L2_FWHT_FL_CHROMA_FULL_HEIGHT		_BITUL(7)
+ /* Set if the chroma plane is full width, if cleared it is half width */
+-#define V4L2_FWHT_FL_CHROMA_FULL_WIDTH		BIT(8)
++#define V4L2_FWHT_FL_CHROMA_FULL_WIDTH		_BITUL(8)
+ /* Set if the alpha plane is uncompressed */
+-#define V4L2_FWHT_FL_ALPHA_IS_UNCOMPRESSED	BIT(9)
++#define V4L2_FWHT_FL_ALPHA_IS_UNCOMPRESSED	_BITUL(9)
+ /* Set if this is an I Frame */
+-#define V4L2_FWHT_FL_I_FRAME			BIT(10)
++#define V4L2_FWHT_FL_I_FRAME			_BITUL(10)
+ 
+ /* A 4-values flag - the number of components - 1 */
+ #define V4L2_FWHT_FL_COMPONENTS_NUM_MSK		GENMASK(18, 16)
+diff --git a/init/main.c b/init/main.c
+index e9c42a183e339..e6836a9400d56 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -941,11 +941,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ 	 * time - but meanwhile we still have a functioning scheduler.
+ 	 */
+ 	sched_init();
+-	/*
+-	 * Disable preemption - early bootup scheduling is extremely
+-	 * fragile until we cpu_idle() for the first time.
+-	 */
+-	preempt_disable();
++
+ 	if (WARN(!irqs_disabled(),
+ 		 "Interrupts were enabled *very* early, fixing it\n"))
+ 		local_irq_disable();
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index aa516472ce46f..3b45c23286c03 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -92,7 +92,7 @@ static struct hlist_head *dev_map_create_hash(unsigned int entries,
+ 	int i;
+ 	struct hlist_head *hash;
+ 
+-	hash = bpf_map_area_alloc(entries * sizeof(*hash), numa_node);
++	hash = bpf_map_area_alloc((u64) entries * sizeof(*hash), numa_node);
+ 	if (hash != NULL)
+ 		for (i = 0; i < entries; i++)
+ 			INIT_HLIST_HEAD(&hash[i]);
+@@ -143,7 +143,7 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
+ 
+ 		spin_lock_init(&dtab->index_lock);
+ 	} else {
+-		dtab->netdev_map = bpf_map_area_alloc(dtab->map.max_entries *
++		dtab->netdev_map = bpf_map_area_alloc((u64) dtab->map.max_entries *
+ 						      sizeof(struct bpf_dtab_netdev *),
+ 						      dtab->map.numa_node);
+ 		if (!dtab->netdev_map)
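
Both call sites multiplied a u32 entry count by a pointer size in 32-bit
arithmetic, so sufficiently large maps wrapped the allocation size and
got a tiny buffer. A standalone demonstration of the wrap and the (u64)
cast fix (assumes 8-byte pointers, i.e. an LP64 target):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t entries = 0x20000001;	/* just past UINT32_MAX / 8 */

	uint32_t wrapped = entries * (uint32_t)sizeof(void *);
	uint64_t exact   = (uint64_t)entries * sizeof(void *);

	printf("32-bit size: %u bytes\n", wrapped);		/* wraps to 8 */
	printf("64-bit size: %llu bytes\n", (unsigned long long)exact);
	return 0;
}
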
+diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
+index b4ebd60a6c164..80da1db47c686 100644
+--- a/kernel/bpf/inode.c
++++ b/kernel/bpf/inode.c
+@@ -543,7 +543,7 @@ int bpf_obj_get_user(const char __user *pathname, int flags)
+ 		return PTR_ERR(raw);
+ 
+ 	if (type == BPF_TYPE_PROG)
+-		ret = (f_flags != O_RDWR) ? -EINVAL : bpf_prog_new_fd(raw);
++		ret = bpf_prog_new_fd(raw);
+ 	else if (type == BPF_TYPE_MAP)
+ 		ret = bpf_map_new_fd(raw, f_flags);
+ 	else if (type == BPF_TYPE_LINK)
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index c6a27574242de..6e2ebcb0d66f0 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -11459,7 +11459,7 @@ static void adjust_subprog_starts(struct bpf_verifier_env *env, u32 off, u32 len
+ 	}
+ }
+ 
+-static void adjust_poke_descs(struct bpf_prog *prog, u32 len)
++static void adjust_poke_descs(struct bpf_prog *prog, u32 off, u32 len)
+ {
+ 	struct bpf_jit_poke_descriptor *tab = prog->aux->poke_tab;
+ 	int i, sz = prog->aux->size_poke_tab;
+@@ -11467,6 +11467,8 @@ static void adjust_poke_descs(struct bpf_prog *prog, u32 len)
+ 
+ 	for (i = 0; i < sz; i++) {
+ 		desc = &tab[i];
++		if (desc->insn_idx <= off)
++			continue;
+ 		desc->insn_idx += len - 1;
+ 	}
+ }
+@@ -11487,7 +11489,7 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
+ 	if (adjust_insn_aux_data(env, new_prog, off, len))
+ 		return NULL;
+ 	adjust_subprog_starts(env, off, len);
+-	adjust_poke_descs(new_prog, len);
++	adjust_poke_descs(new_prog, off, len);
+ 	return new_prog;
+ }
+ 
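
The fix threads the patch offset through so only poke descriptors past
the patched instruction get shifted; an index at or before the offset
must stay put. The adjustment as a standalone model:

#include <stdio.h>

struct poke_desc { int insn_idx; };

static void adjust_poke_descs(struct poke_desc *tab, int n, int off, int len)
{
	for (int i = 0; i < n; i++) {
		if (tab[i].insn_idx <= off)
			continue;		/* before the patch site */
		tab[i].insn_idx += len - 1;	/* one insn became len insns */
	}
}

int main(void)
{
	struct poke_desc tab[] = { { 3 }, { 10 } };

	adjust_poke_descs(tab, 2, 5, 4);	/* patch 4 insns in at insn 5 */
	printf("%d %d\n", tab[0].insn_idx, tab[1].insn_idx);	/* 3 13 */
	return 0;
}
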
+diff --git a/kernel/cred.c b/kernel/cred.c
+index e1d274cd741be..9c2759166bd82 100644
+--- a/kernel/cred.c
++++ b/kernel/cred.c
+@@ -60,6 +60,7 @@ struct cred init_cred = {
+ 	.user			= INIT_USER,
+ 	.user_ns		= &init_user_ns,
+ 	.group_info		= &init_groups,
++	.ucounts		= &init_ucounts,
+ };
+ 
+ static inline void set_cred_subscribers(struct cred *cred, int n)
+@@ -119,6 +120,8 @@ static void put_cred_rcu(struct rcu_head *rcu)
+ 	if (cred->group_info)
+ 		put_group_info(cred->group_info);
+ 	free_uid(cred->user);
++	if (cred->ucounts)
++		put_ucounts(cred->ucounts);
+ 	put_user_ns(cred->user_ns);
+ 	kmem_cache_free(cred_jar, cred);
+ }
+@@ -222,6 +225,7 @@ struct cred *cred_alloc_blank(void)
+ #ifdef CONFIG_DEBUG_CREDENTIALS
+ 	new->magic = CRED_MAGIC;
+ #endif
++	new->ucounts = get_ucounts(&init_ucounts);
+ 
+ 	if (security_cred_alloc_blank(new, GFP_KERNEL_ACCOUNT) < 0)
+ 		goto error;
+@@ -284,6 +288,11 @@ struct cred *prepare_creds(void)
+ 
+ 	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
+ 		goto error;
++
++	new->ucounts = get_ucounts(new->ucounts);
++	if (!new->ucounts)
++		goto error;
++
+ 	validate_creds(new);
+ 	return new;
+ 
+@@ -363,6 +372,9 @@ int copy_creds(struct task_struct *p, unsigned long clone_flags)
+ 		ret = create_user_ns(new);
+ 		if (ret < 0)
+ 			goto error_put;
++		ret = set_cred_ucounts(new);
++		if (ret < 0)
++			goto error_put;
+ 	}
+ 
+ #ifdef CONFIG_KEYS
+@@ -653,6 +665,31 @@ int cred_fscmp(const struct cred *a, const struct cred *b)
+ }
+ EXPORT_SYMBOL(cred_fscmp);
+ 
++int set_cred_ucounts(struct cred *new)
++{
++	struct task_struct *task = current;
++	const struct cred *old = task->real_cred;
++	struct ucounts *old_ucounts = new->ucounts;
++
++	if (new->user == old->user && new->user_ns == old->user_ns)
++		return 0;
++
++	/*
++	 * This optimization is needed because alloc_ucounts() uses locks
++	 * for table lookups.
++	 */
++	if (old_ucounts && old_ucounts->ns == new->user_ns && uid_eq(old_ucounts->uid, new->euid))
++		return 0;
++
++	if (!(new->ucounts = alloc_ucounts(new->user_ns, new->euid)))
++		return -EAGAIN;
++
++	if (old_ucounts)
++		put_ucounts(old_ucounts);
++
++	return 0;
++}
++
+ /*
+  * initialise the credentials stuff
+  */
+@@ -719,6 +756,10 @@ struct cred *prepare_kernel_cred(struct task_struct *daemon)
+ 	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
+ 		goto error;
+ 
++	new->ucounts = get_ucounts(new->ucounts);
++	if (!new->ucounts)
++		goto error;
++
+ 	put_cred(old);
+ 	validate_creds(new);
+ 	return new;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index fe88d6eea3c2c..9ebac2a794679 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -3821,9 +3821,16 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
+ 					struct task_struct *task)
+ {
+ 	struct perf_cpu_context *cpuctx;
+-	struct pmu *pmu = ctx->pmu;
++	struct pmu *pmu;
+ 
+ 	cpuctx = __get_cpu_context(ctx);
++
++	/*
++	 * HACK: for HETEROGENEOUS the task context might have switched to a
++	 * different PMU, force (re)set the context,
++	 * different PMU, force (re)set the context.
++	pmu = ctx->pmu = cpuctx->ctx.pmu;
++
+ 	if (cpuctx->task_ctx == ctx) {
+ 		if (cpuctx->sched_cb_usage)
+ 			__perf_pmu_sched_task(cpuctx, true);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index a070caed5c8ed..567fee3405003 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1999,7 +1999,7 @@ static __latent_entropy struct task_struct *copy_process(
+ 		goto bad_fork_cleanup_count;
+ 
+ 	delayacct_tsk_init(p);	/* Must remain after dup_task_struct() */
+-	p->flags &= ~(PF_SUPERPRIV | PF_WQ_WORKER | PF_IDLE);
++	p->flags &= ~(PF_SUPERPRIV | PF_WQ_WORKER | PF_IDLE | PF_NO_SETAFFINITY);
+ 	p->flags |= PF_FORKNOEXEC;
+ 	INIT_LIST_HEAD(&p->children);
+ 	INIT_LIST_HEAD(&p->sibling);
+@@ -2407,7 +2407,7 @@ static inline void init_idle_pids(struct task_struct *idle)
+ 	}
+ }
+ 
+-struct task_struct *fork_idle(int cpu)
++struct task_struct * __init fork_idle(int cpu)
+ {
+ 	struct task_struct *task;
+ 	struct kernel_clone_args args = {
+@@ -2997,6 +2997,12 @@ int ksys_unshare(unsigned long unshare_flags)
+ 	if (err)
+ 		goto bad_unshare_cleanup_cred;
+ 
++	if (new_cred) {
++		err = set_cred_ucounts(new_cred);
++		if (err)
++			goto bad_unshare_cleanup_cred;
++	}
++
+ 	if (new_fs || new_fd || do_sysvsem || new_cred || new_nsproxy) {
+ 		if (do_sysvsem) {
+ 			/*
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 0fccf7d0c6a16..08931e525dd92 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -68,16 +68,6 @@ enum KTHREAD_BITS {
+ 	KTHREAD_SHOULD_PARK,
+ };
+ 
+-static inline void set_kthread_struct(void *kthread)
+-{
+-	/*
+-	 * We abuse ->set_child_tid to avoid the new member and because it
+-	 * can't be wrongly copied by copy_process(). We also rely on fact
+-	 * that the caller can't exec, so PF_KTHREAD can't be cleared.
+-	 */
+-	current->set_child_tid = (__force void __user *)kthread;
+-}
+-
+ static inline struct kthread *to_kthread(struct task_struct *k)
+ {
+ 	WARN_ON(!(k->flags & PF_KTHREAD));
+@@ -103,6 +93,22 @@ static inline struct kthread *__to_kthread(struct task_struct *p)
+ 	return kthread;
+ }
+ 
++void set_kthread_struct(struct task_struct *p)
++{
++	struct kthread *kthread;
++
++	if (__to_kthread(p))
++		return;
++
++	kthread = kzalloc(sizeof(*kthread), GFP_KERNEL);
++	/*
++	 * We abuse ->set_child_tid to avoid the new member and because it
++	 * can't be wrongly copied by copy_process(). We also rely on the fact
++	 * that the caller can't exec, so PF_KTHREAD can't be cleared.
++	 */
++	p->set_child_tid = (__force void __user *)kthread;
++}
++
+ void free_kthread_struct(struct task_struct *k)
+ {
+ 	struct kthread *kthread;
+@@ -272,8 +278,8 @@ static int kthread(void *_create)
+ 	struct kthread *self;
+ 	int ret;
+ 
+-	self = kzalloc(sizeof(*self), GFP_KERNEL);
+-	set_kthread_struct(self);
++	set_kthread_struct(current);
++	self = to_kthread(current);
+ 
+ 	/* If user was SIGKILLed, I release the structure. */
+ 	done = xchg(&create->done, NULL);
+@@ -1156,14 +1162,14 @@ static bool __kthread_cancel_work(struct kthread_work *work)
+  * modify @dwork's timer so that it expires after @delay. If @delay is zero,
+  * @work is guaranteed to be queued immediately.
+  *
+- * Return: %true if @dwork was pending and its timer was modified,
+- * %false otherwise.
++ * Return: %false if @dwork was idle and queued, %true otherwise.
+  *
+  * A special case is when the work is being canceled in parallel.
+  * It might be caused either by the real kthread_cancel_delayed_work_sync()
+  * or yet another kthread_mod_delayed_work() call. We let the other command
+- * win and return %false here. The caller is supposed to synchronize these
+- * operations a reasonable way.
++ * win and return %true here. The return value can be used for reference
++ * counting and the number of queued works stays the same. Anyway, the caller
++ * is supposed to synchronize these operations a reasonable way.
+  *
+  * This function is safe to call from any context including IRQ handler.
+  * See __kthread_cancel_work() and kthread_delayed_work_timer_fn()
+@@ -1175,13 +1181,15 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
+ {
+ 	struct kthread_work *work = &dwork->work;
+ 	unsigned long flags;
+-	int ret = false;
++	int ret;
+ 
+ 	raw_spin_lock_irqsave(&worker->lock, flags);
+ 
+ 	/* Do not bother with canceling when never queued. */
+-	if (!work->worker)
++	if (!work->worker) {
++		ret = false;
+ 		goto fast_queue;
++	}
+ 
+ 	/* Work must not be used with >1 worker, see kthread_queue_work() */
+ 	WARN_ON_ONCE(work->worker != worker);
+@@ -1199,8 +1207,11 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker,
+ 	 * be used for reference counting.
+ 	 */
+ 	kthread_cancel_delayed_work_timer(work, &flags);
+-	if (work->canceling)
++	if (work->canceling) {
++		/* The number of works in the queue does not change. */
++		ret = true;
+ 		goto out;
++	}
+ 	ret = __kthread_cancel_work(work);
+ 
+ fast_queue:
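
The reworked contract: %false only when an idle work was newly queued,
%true whenever the number of queued works is unchanged (already pending,
or a cancel in flight wins) -- exactly what a reference-counting caller
needs. A toy state machine capturing just that contract (illustrative,
not the kthread API):

#include <stdbool.h>
#include <stdio.h>

enum work_state { IDLE, QUEUED, CANCELING };

static bool mod_delayed_work(enum work_state *st)
{
	switch (*st) {
	case IDLE:
		*st = QUEUED;
		return false;	/* one more work in flight: take a ref */
	case CANCELING:
		return true;	/* cancel wins; queued count unchanged */
	case QUEUED:
	default:
		return true;	/* re-armed in place; count unchanged */
	}
}

int main(void)
{
	enum work_state st = IDLE;

	printf("%d\n", mod_delayed_work(&st));	/* 0: newly queued */
	printf("%d\n", mod_delayed_work(&st));	/* 1: was already pending */
	return 0;
}
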
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index e32313072506d..9125bd419216b 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -2306,7 +2306,56 @@ static void print_lock_class_header(struct lock_class *class, int depth)
+ }
+ 
+ /*
+- * printk the shortest lock dependencies from @start to @end in reverse order:
++ * Dependency path printing:
++ *
++ * After BFS we get a lock dependency path (linked via ->parent of lock_list),
++ * printing out each lock in the dependency path will help in understanding how
++ * the deadlock could happen. Here are some details about dependency path
++ * printing:
++ *
++ * 1)	A lock_list can be either forwards or backwards for a lock dependency,
++ * 	for a lock dependency A -> B, there are two lock_lists:
++ *
++ * 	a)	lock_list in the ->locks_after list of A, whose ->class is B and
++ * 		->links_to is A. In this case, we can say the lock_list is
++ * 		"A -> B" (forwards case).
++ *
++ * 	b)	lock_list in the ->locks_before list of B, whose ->class is A
++ * 		and ->links_to is B. In this case, we can say the lock_list is
++ * 		"B <- A" (backwards case).
++ *
++ * 	The ->trace of both a) and b) point to the call trace where B was
++ * 	acquired with A held.
++ *
++ * 2)	A "helper" lock_list is introduced during BFS, this lock_list doesn't
++ * 	represent a certain lock dependency, it only provides an initial entry
++ * 	for BFS. For example, BFS may introduce a "helper" lock_list whose
++ * 	->class is A, as a result BFS will search all dependencies starting with
++ * 	A, e.g. A -> B or A -> C.
++ *
++ * 	The notation of a forwards helper lock_list is like "-> A", which means
++ * 	we should search the forwards dependencies starting with "A", e.g. A -> B
++ * 	or A -> C.
++ *
++ * 	The notation of a backwards helper lock_list is like "<- B", which means
++ * 	we should search the backwards dependencies ending with "B", e.g.
++ * 	B <- A or B <- C.
++ */
++
++/*
++ * printk the shortest lock dependencies from @root to @leaf in reverse order.
++ *
++ * We have a lock dependency path as follows:
++ *
++ *    @root                                                                 @leaf
++ *      |                                                                     |
++ *      V                                                                     V
++ *	          ->parent                                   ->parent
++ * | lock_list | <--------- | lock_list | ... | lock_list  | <--------- | lock_list |
++ * |    -> L1  |            | L1 -> L2  | ... |Ln-2 -> Ln-1|            | Ln-1 -> Ln|
++ *
++ * , so it's natural that we start from @leaf and print every ->class and
++ * ->trace until we reach the @root.
+  */
+ static void __used
+ print_shortest_lock_dependencies(struct lock_list *leaf,
+@@ -2334,6 +2383,61 @@ print_shortest_lock_dependencies(struct lock_list *leaf,
+ 	} while (entry && (depth >= 0));
+ }
+ 
++/*
++ * printk the shortest lock dependencies from @leaf to @root.
++ *
++ * We have a lock dependency path (from a backwards search) as follows:
++ *
++ *    @leaf                                                                 @root
++ *      |                                                                     |
++ *      V                                                                     V
++ *	          ->parent                                   ->parent
++ * | lock_list | ---------> | lock_list | ... | lock_list  | ---------> | lock_list |
++ * | L2 <- L1  |            | L3 <- L2  | ... | Ln <- Ln-1 |            |    <- Ln  |
++ *
++ * , so when we iterate from @leaf to @root, we actually print the lock
++ * dependency path L1 -> L2 -> .. -> Ln in the non-reverse order.
++ *
++ * Another thing to notice here is that ->class of L2 <- L1 is L1, while the
++ * ->trace of L2 <- L1 is the call trace of L2, in fact we don't have the call
++ * trace of L1 in the dependency path, which is alright, because most of the
++ * time we can figure out where L1 is held from the call trace of L2.
++ */
++static void __used
++print_shortest_lock_dependencies_backwards(struct lock_list *leaf,
++					   struct lock_list *root)
++{
++	struct lock_list *entry = leaf;
++	const struct lock_trace *trace = NULL;
++	int depth;
++
++	/* compute depth from generated tree by BFS */
++	depth = get_lock_depth(leaf);
++
++	do {
++		print_lock_class_header(entry->class, depth);
++		if (trace) {
++			printk("%*s ... acquired at:\n", depth, "");
++			print_lock_trace(trace, 2);
++			printk("\n");
++		}
++
++		/*
++		 * Record the pointer to the trace for the next lock_list
++		 * entry, see the comments for the function.
++		 */
++		trace = entry->trace;
++
++		if (depth == 0 && (entry != root)) {
++			printk("lockdep:%s bad path found in chain graph\n", __func__);
++			break;
++		}
++
++		entry = get_lock_parent(entry);
++		depth--;
++	} while (entry && (depth >= 0));
++}
++
+ static void
+ print_irq_lock_scenario(struct lock_list *safe_entry,
+ 			struct lock_list *unsafe_entry,
+@@ -2451,7 +2555,7 @@ print_bad_irq_dependency(struct task_struct *curr,
+ 	prev_root->trace = save_trace();
+ 	if (!prev_root->trace)
+ 		return;
+-	print_shortest_lock_dependencies(backwards_entry, prev_root);
++	print_shortest_lock_dependencies_backwards(backwards_entry, prev_root);
+ 
+ 	pr_warn("\nthe dependencies between the lock to be acquired");
+ 	pr_warn(" and %s-irq-unsafe lock:\n", irqclass);
+@@ -2669,8 +2773,18 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
+ 	 * Step 3: we found a bad match! Now retrieve a lock from the backward
+ 	 * list whose usage mask matches the exclusive usage mask from the
+ 	 * lock found on the forward list.
++	 *
++	 * Note, we should only keep the LOCKF_ENABLED_IRQ_ALL bits, considering
++	 * the following case:
++	 *
++	 * When trying to add A -> B to the graph, we find that there is a
++	 * hardirq-safe L, that L -> ... -> A, and another hardirq-unsafe M,
++	 * that B -> ... -> M. However M is **softirq-safe**, if we use exact
++	 * invert bits of M's usage_mask, we will find another lock N that is
++	 * **softirq-unsafe** and N -> ... -> A, however N -> .. -> M will not
++	 * cause an inversion deadlock.
+ 	 */
+-	backward_mask = original_mask(target_entry1->class->usage_mask);
++	backward_mask = original_mask(target_entry1->class->usage_mask & LOCKF_ENABLED_IRQ_ALL);
+ 
+ 	ret = find_usage_backwards(&this, backward_mask, &target_entry);
+ 	if (bfs_error(ret)) {
+@@ -4579,7 +4693,7 @@ static int check_wait_context(struct task_struct *curr, struct held_lock *next)
+ 	u8 curr_inner;
+ 	int depth;
+ 
+-	if (!curr->lockdep_depth || !next_inner || next->trylock)
++	if (!next_inner || next->trylock)
+ 		return 0;
+ 
+ 	if (!next_outer)
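
The subtlety the comments describe: a "L2 <- L1" entry carries L2's
acquisition trace, so the backwards printer emits each trace one
iteration late while walking ->parent from @leaf towards @root. A
minimal userspace model of that lagged walk (names illustrative):

#include <stdio.h>

struct lock_entry {
	const char *class_name;
	const char *trace;		/* "acquired at" site */
	struct lock_entry *parent;
};

static void print_backwards(struct lock_entry *leaf)
{
	const char *trace = NULL;

	for (struct lock_entry *e = leaf; e; e = e->parent) {
		printf("class %s\n", e->class_name);
		if (trace)
			printf("  ... acquired at: %s\n", trace);
		trace = e->trace;	/* printed with the next entry */
	}
}

int main(void)
{
	struct lock_entry root = { "L3", "take_l3()", NULL };
	struct lock_entry mid  = { "L2", "take_l2()", &root };
	struct lock_entry leaf = { "L1", "take_l1()", &mid  };

	print_backwards(&leaf);
	return 0;
}
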
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 8e78b2430c168..9a1396a70c520 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -2911,7 +2911,6 @@ static int __init rcu_spawn_core_kthreads(void)
+ 		  "%s: Could not start rcuc kthread, OOM is now expected behavior\n", __func__);
+ 	return 0;
+ }
+-early_initcall(rcu_spawn_core_kthreads);
+ 
+ /*
+  * Handle any core-RCU processing required by a call_rcu() invocation.
+@@ -4472,6 +4471,7 @@ static int __init rcu_spawn_gp_kthread(void)
+ 	wake_up_process(t);
+ 	rcu_spawn_nocb_kthreads();
+ 	rcu_spawn_boost_kthreads();
++	rcu_spawn_core_kthreads();
+ 	return 0;
+ }
+ early_initcall(rcu_spawn_gp_kthread);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 4ca80df205ce6..e5858999b54de 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1065,9 +1065,10 @@ static void uclamp_sync_util_min_rt_default(void)
+ static inline struct uclamp_se
+ uclamp_tg_restrict(struct task_struct *p, enum uclamp_id clamp_id)
+ {
++	/* Copy by value as we could modify it */
+ 	struct uclamp_se uc_req = p->uclamp_req[clamp_id];
+ #ifdef CONFIG_UCLAMP_TASK_GROUP
+-	struct uclamp_se uc_max;
++	unsigned int tg_min, tg_max, value;
+ 
+ 	/*
+ 	 * Tasks in autogroups or root task group will be
+@@ -1078,9 +1079,11 @@ uclamp_tg_restrict(struct task_struct *p, enum uclamp_id clamp_id)
+ 	if (task_group(p) == &root_task_group)
+ 		return uc_req;
+ 
+-	uc_max = task_group(p)->uclamp[clamp_id];
+-	if (uc_req.value > uc_max.value || !uc_req.user_defined)
+-		return uc_max;
++	tg_min = task_group(p)->uclamp[UCLAMP_MIN].value;
++	tg_max = task_group(p)->uclamp[UCLAMP_MAX].value;
++	value = uc_req.value;
++	value = clamp(value, tg_min, tg_max);
++	uclamp_se_set(&uc_req, value, false);
+ #endif
+ 
+ 	return uc_req;
+@@ -1279,8 +1282,9 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
+ }
+ 
+ static inline void
+-uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
++uclamp_update_active(struct task_struct *p)
+ {
++	enum uclamp_id clamp_id;
+ 	struct rq_flags rf;
+ 	struct rq *rq;
+ 
+@@ -1300,9 +1304,11 @@ uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
+ 	 * affecting a valid clamp bucket, the next time it's enqueued,
+ 	 * it will already see the updated clamp bucket value.
+ 	 */
+-	if (p->uclamp[clamp_id].active) {
+-		uclamp_rq_dec_id(rq, p, clamp_id);
+-		uclamp_rq_inc_id(rq, p, clamp_id);
++	for_each_clamp_id(clamp_id) {
++		if (p->uclamp[clamp_id].active) {
++			uclamp_rq_dec_id(rq, p, clamp_id);
++			uclamp_rq_inc_id(rq, p, clamp_id);
++		}
+ 	}
+ 
+ 	task_rq_unlock(rq, p, &rf);
+@@ -1310,20 +1316,14 @@ uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
+ 
+ #ifdef CONFIG_UCLAMP_TASK_GROUP
+ static inline void
+-uclamp_update_active_tasks(struct cgroup_subsys_state *css,
+-			   unsigned int clamps)
++uclamp_update_active_tasks(struct cgroup_subsys_state *css)
+ {
+-	enum uclamp_id clamp_id;
+ 	struct css_task_iter it;
+ 	struct task_struct *p;
+ 
+ 	css_task_iter_start(css, 0, &it);
+-	while ((p = css_task_iter_next(&it))) {
+-		for_each_clamp_id(clamp_id) {
+-			if ((0x1 << clamp_id) & clamps)
+-				uclamp_update_active(p, clamp_id);
+-		}
+-	}
++	while ((p = css_task_iter_next(&it)))
++		uclamp_update_active(p);
+ 	css_task_iter_end(&it);
+ }
+ 
+@@ -1916,7 +1916,6 @@ static int migration_cpu_stop(void *data)
+ 	struct migration_arg *arg = data;
+ 	struct set_affinity_pending *pending = arg->pending;
+ 	struct task_struct *p = arg->task;
+-	int dest_cpu = arg->dest_cpu;
+ 	struct rq *rq = this_rq();
+ 	bool complete = false;
+ 	struct rq_flags rf;
+@@ -1954,19 +1953,15 @@ static int migration_cpu_stop(void *data)
+ 		if (pending) {
+ 			p->migration_pending = NULL;
+ 			complete = true;
+-		}
+ 
+-		if (dest_cpu < 0) {
+ 			if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
+ 				goto out;
+-
+-			dest_cpu = cpumask_any_distribute(&p->cpus_mask);
+ 		}
+ 
+ 		if (task_on_rq_queued(p))
+-			rq = __migrate_task(rq, &rf, p, dest_cpu);
++			rq = __migrate_task(rq, &rf, p, arg->dest_cpu);
+ 		else
+-			p->wake_cpu = dest_cpu;
++			p->wake_cpu = arg->dest_cpu;
+ 
+ 		/*
+ 		 * XXX __migrate_task() can fail, at which point we might end
+@@ -2249,7 +2244,7 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
+ 			init_completion(&my_pending.done);
+ 			my_pending.arg = (struct migration_arg) {
+ 				.task = p,
+-				.dest_cpu = -1,		/* any */
++				.dest_cpu = dest_cpu,
+ 				.pending = &my_pending,
+ 			};
+ 
+@@ -2257,6 +2252,15 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
+ 		} else {
+ 			pending = p->migration_pending;
+ 			refcount_inc(&pending->refs);
++			/*
++			 * Affinity has changed, but we've already installed a
++			 * pending. migration_cpu_stop() *must* see this, else
++			 * we risk a completion of the pending despite having a
++			 * task on a disallowed CPU.
++			 *
++			 * Serialized by p->pi_lock, so this is safe.
++			 */
++			pending->arg.dest_cpu = dest_cpu;
+ 		}
+ 	}
+ 	pending = p->migration_pending;
+@@ -7433,19 +7437,32 @@ void show_state_filter(unsigned long state_filter)
+  * NOTE: this function does not set the idle thread's NEED_RESCHED
+  * flag, to make booting more robust.
+  */
+-void init_idle(struct task_struct *idle, int cpu)
++void __init init_idle(struct task_struct *idle, int cpu)
+ {
+ 	struct rq *rq = cpu_rq(cpu);
+ 	unsigned long flags;
+ 
+ 	__sched_fork(0, idle);
+ 
++	/*
++	 * The idle task doesn't need the kthread struct to function, but it
++	 * is dressed up as a per-CPU kthread and thus needs to play the part
++	 * if we want to avoid special-casing it in code that deals with per-CPU
++	 * kthreads.
++	 */
++	set_kthread_struct(idle);
++
+ 	raw_spin_lock_irqsave(&idle->pi_lock, flags);
+ 	raw_spin_lock(&rq->lock);
+ 
+ 	idle->state = TASK_RUNNING;
+ 	idle->se.exec_start = sched_clock();
+-	idle->flags |= PF_IDLE;
++	/*
++	 * PF_KTHREAD should already be set at this point; regardless, make it
++	 * look like a proper per-CPU kthread.
++	 */
++	idle->flags |= PF_IDLE | PF_KTHREAD | PF_NO_SETAFFINITY;
++	kthread_set_per_cpu(idle, cpu);
+ 
+ 	scs_task_reset(idle);
+ 	kasan_unpoison_task_stack(idle);
+@@ -7662,12 +7679,8 @@ static void balance_push(struct rq *rq)
+ 	/*
+ 	 * Both the cpu-hotplug and stop task are in this case and are
+ 	 * required to complete the hotplug process.
+-	 *
+-	 * XXX: the idle task does not match kthread_is_per_cpu() due to
+-	 * histerical raisins.
+ 	 */
+-	if (rq->idle == push_task ||
+-	    kthread_is_per_cpu(push_task) ||
++	if (kthread_is_per_cpu(push_task) ||
+ 	    is_migration_disabled(push_task)) {
+ 
+ 		/*
+@@ -8680,7 +8693,11 @@ static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
+ 
+ #ifdef CONFIG_UCLAMP_TASK_GROUP
+ 	/* Propagate the effective uclamp value for the new group */
++	mutex_lock(&uclamp_mutex);
++	rcu_read_lock();
+ 	cpu_util_update_eff(css);
++	rcu_read_unlock();
++	mutex_unlock(&uclamp_mutex);
+ #endif
+ 
+ 	return 0;
+@@ -8770,6 +8787,9 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
+ 	enum uclamp_id clamp_id;
+ 	unsigned int clamps;
+ 
++	lockdep_assert_held(&uclamp_mutex);
++	SCHED_WARN_ON(!rcu_read_lock_held());
++
+ 	css_for_each_descendant_pre(css, top_css) {
+ 		uc_parent = css_tg(css)->parent
+ 			? css_tg(css)->parent->uclamp : NULL;
+@@ -8802,7 +8822,7 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
+ 		}
+ 
+ 		/* Immediately update descendants RUNNABLE tasks */
+-		uclamp_update_active_tasks(css, clamps);
++		uclamp_update_active_tasks(css);
+ 	}
+ }
+ 
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 9a2989749b8d1..2f9964b467e03 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2486,6 +2486,8 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
+ 			check_preempt_curr_dl(rq, p, 0);
+ 		else
+ 			resched_curr(rq);
++	} else {
++		update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
+ 	}
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 23663318fb81a..e807b743353d1 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3139,7 +3139,7 @@ void reweight_task(struct task_struct *p, int prio)
+  *
+  *                     tg->weight * grq->load.weight
+  *   ge->load.weight = -----------------------------               (1)
+- *			  \Sum grq->load.weight
++ *                       \Sum grq->load.weight
+  *
+  * Now, because computing that sum is prohibitively expensive to compute (been
+  * there, done that) we approximate it with this average stuff. The average
+@@ -3153,7 +3153,7 @@ void reweight_task(struct task_struct *p, int prio)
+  *
+  *                     tg->weight * grq->avg.load_avg
+  *   ge->load.weight = ------------------------------              (3)
+- *				tg->load_avg
++ *                             tg->load_avg
+  *
+  * Where: tg->load_avg ~= \Sum grq->avg.load_avg
+  *
+@@ -3169,7 +3169,7 @@ void reweight_task(struct task_struct *p, int prio)
+  *
+  *                     tg->weight * grq->load.weight
+  *   ge->load.weight = ----------------------------- = tg->weight   (4)
+- *			    grp->load.weight
++ *                         grp->load.weight
+  *
+  * That is, the sum collapses because all other CPUs are idle; the UP scenario.
+  *
+@@ -3188,7 +3188,7 @@ void reweight_task(struct task_struct *p, int prio)
+  *
+  *                     tg->weight * grq->load.weight
+  *   ge->load.weight = -----------------------------		   (6)
+- *				tg_load_avg'
++ *                             tg_load_avg'
+  *
+  * Where:
+  *
+@@ -6620,8 +6620,11 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
+ 	struct cpumask *pd_mask = perf_domain_span(pd);
+ 	unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
+ 	unsigned long max_util = 0, sum_util = 0;
++	unsigned long _cpu_cap = cpu_cap;
+ 	int cpu;
+ 
++	_cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));
++
+ 	/*
+ 	 * The capacity state of CPUs of the current rd can be driven by CPUs
+ 	 * of another rd if they belong to the same pd. So, account for the
+@@ -6657,8 +6660,10 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
+ 		 * is already enough to scale the EM reported power
+ 		 * consumption at the (eventually clamped) cpu_capacity.
+ 		 */
+-		sum_util += effective_cpu_util(cpu, util_running, cpu_cap,
+-					       ENERGY_UTIL, NULL);
++		cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
++					      ENERGY_UTIL, NULL);
++
++		sum_util += min(cpu_util, _cpu_cap);
+ 
+ 		/*
+ 		 * Performance domain frequency: utilization clamping
+@@ -6669,7 +6674,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
+ 		 */
+ 		cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
+ 					      FREQUENCY_UTIL, tsk);
+-		max_util = max(max_util, cpu_util);
++		max_util = max(max_util, min(cpu_util, _cpu_cap));
+ 	}
+ 
+ 	return em_cpu_energy(pd->em_pd, max_util, sum_util);
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index cc25a3cff41fb..58b36d17a09a0 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -182,6 +182,8 @@ struct psi_group psi_system = {
+ 
+ static void psi_avgs_work(struct work_struct *work);
+ 
++static void poll_timer_fn(struct timer_list *t);
++
+ static void group_init(struct psi_group *group)
+ {
+ 	int cpu;
+@@ -201,6 +203,8 @@ static void group_init(struct psi_group *group)
+ 	memset(group->polling_total, 0, sizeof(group->polling_total));
+ 	group->polling_next_update = ULLONG_MAX;
+ 	group->polling_until = 0;
++	init_waitqueue_head(&group->poll_wait);
++	timer_setup(&group->poll_timer, poll_timer_fn, 0);
+ 	rcu_assign_pointer(group->poll_task, NULL);
+ }
+ 
+@@ -1157,9 +1161,7 @@ struct psi_trigger *psi_trigger_create(struct psi_group *group,
+ 			return ERR_CAST(task);
+ 		}
+ 		atomic_set(&group->poll_wakeup, 0);
+-		init_waitqueue_head(&group->poll_wait);
+ 		wake_up_process(task);
+-		timer_setup(&group->poll_timer, poll_timer_fn, 0);
+ 		rcu_assign_pointer(group->poll_task, task);
+ 	}
+ 
+@@ -1211,6 +1213,7 @@ static void psi_trigger_destroy(struct kref *ref)
+ 					group->poll_task,
+ 					lockdep_is_held(&group->trigger_lock));
+ 			rcu_assign_pointer(group->poll_task, NULL);
++			del_timer(&group->poll_timer);
+ 		}
+ 	}
+ 
+@@ -1223,17 +1226,14 @@ static void psi_trigger_destroy(struct kref *ref)
+ 	 */
+ 	synchronize_rcu();
+ 	/*
+-	 * Destroy the kworker after releasing trigger_lock to prevent a
++	 * Stop kthread 'psimon' after releasing trigger_lock to prevent a
+ 	 * deadlock while waiting for psi_poll_work to acquire trigger_lock
+ 	 */
+ 	if (task_to_destroy) {
+ 		/*
+ 		 * After the RCU grace period has expired, the worker
+ 		 * can no longer be found through group->poll_task.
+-		 * But it might have been already scheduled before
+-		 * that - deschedule it cleanly before destroying it.
+ 		 */
+-		del_timer_sync(&group->poll_timer);
+ 		kthread_stop(task_to_destroy);
+ 	}
+ 	kfree(t);
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index c286e5ba3c942..3b1b8b025b746 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2331,13 +2331,20 @@ void __init init_sched_rt_class(void)
+ static void switched_to_rt(struct rq *rq, struct task_struct *p)
+ {
+ 	/*
+-	 * If we are already running, then there's nothing
+-	 * that needs to be done. But if we are not running
+-	 * we may need to preempt the current running task.
+-	 * If that current running task is also an RT task
++	 * If we are running, update the avg_rt tracking, as the running time
++	 * will from now on be accounted into the latter.
++	 */
++	if (task_current(rq, p)) {
++		update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
++		return;
++	}
++
++	/*
++	 * If we are not running we may need to preempt the current
++	 * running task. If that current running task is also an RT task
+ 	 * then see if we can move to another run queue.
+ 	 */
+-	if (task_on_rq_queued(p) && rq->curr != p) {
++	if (task_on_rq_queued(p)) {
+ #ifdef CONFIG_SMP
+ 		if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
+ 			rt_queue_push_tasks(rq);
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index 9f58049ac16d9..057e17f3215d5 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -107,6 +107,7 @@ struct seccomp_knotif {
+  *      installing process should allocate the fd as normal.
+  * @flags: The flags for the new file descriptor. At the moment, only O_CLOEXEC
+  *         is allowed.
++ * @ioctl_flags: The flags used for the seccomp_addfd ioctl.
+  * @ret: The return value of the installing process. It is set to the fd num
+  *       upon success (>= 0).
+  * @completion: Indicates that the installing process has completed fd
+@@ -118,6 +119,7 @@ struct seccomp_kaddfd {
+ 	struct file *file;
+ 	int fd;
+ 	unsigned int flags;
++	__u32 ioctl_flags;
+ 
+ 	union {
+ 		bool setfd;
+@@ -1065,18 +1067,37 @@ static u64 seccomp_next_notify_id(struct seccomp_filter *filter)
+ 	return filter->notif->next_id++;
+ }
+ 
+-static void seccomp_handle_addfd(struct seccomp_kaddfd *addfd)
++static void seccomp_handle_addfd(struct seccomp_kaddfd *addfd, struct seccomp_knotif *n)
+ {
++	int fd;
++
+ 	/*
+ 	 * Remove the notification, and reset the list pointers, indicating
+ 	 * that it has been handled.
+ 	 */
+ 	list_del_init(&addfd->list);
+ 	if (!addfd->setfd)
+-		addfd->ret = receive_fd(addfd->file, addfd->flags);
++		fd = receive_fd(addfd->file, addfd->flags);
+ 	else
+-		addfd->ret = receive_fd_replace(addfd->fd, addfd->file,
+-						addfd->flags);
++		fd = receive_fd_replace(addfd->fd, addfd->file, addfd->flags);
++	addfd->ret = fd;
++
++	if (addfd->ioctl_flags & SECCOMP_ADDFD_FLAG_SEND) {
++		/* If we fail, reset and return an error to the notifier */
++		if (fd < 0) {
++			n->state = SECCOMP_NOTIFY_SENT;
++		} else {
++			/* Return the FD we just added */
++			n->flags = 0;
++			n->error = 0;
++			n->val = fd;
++		}
++	}
++
++	/*
++	 * Mark the notification as completed. From this point, addfd mem
++	 * Mark the notification as completed. From this point, the addfd
++	 * memory might be invalidated and we can't safely read it anymore.
+ 	complete(&addfd->completion);
+ }
+ 
+@@ -1120,7 +1141,7 @@ static int seccomp_do_user_notification(int this_syscall,
+ 						 struct seccomp_kaddfd, list);
+ 		/* Check if we were woken up by a addfd message */
+ 		if (addfd)
+-			seccomp_handle_addfd(addfd);
++			seccomp_handle_addfd(addfd, &n);
+ 
+ 	}  while (n.state != SECCOMP_NOTIFY_REPLIED);
+ 
+@@ -1581,7 +1602,7 @@ static long seccomp_notify_addfd(struct seccomp_filter *filter,
+ 	if (addfd.newfd_flags & ~O_CLOEXEC)
+ 		return -EINVAL;
+ 
+-	if (addfd.flags & ~SECCOMP_ADDFD_FLAG_SETFD)
++	if (addfd.flags & ~(SECCOMP_ADDFD_FLAG_SETFD | SECCOMP_ADDFD_FLAG_SEND))
+ 		return -EINVAL;
+ 
+ 	if (addfd.newfd && !(addfd.flags & SECCOMP_ADDFD_FLAG_SETFD))
+@@ -1591,6 +1612,7 @@ static long seccomp_notify_addfd(struct seccomp_filter *filter,
+ 	if (!kaddfd.file)
+ 		return -EBADF;
+ 
++	kaddfd.ioctl_flags = addfd.flags;
+ 	kaddfd.flags = addfd.newfd_flags;
+ 	kaddfd.setfd = addfd.flags & SECCOMP_ADDFD_FLAG_SETFD;
+ 	kaddfd.fd = addfd.newfd;
+@@ -1616,6 +1638,23 @@ static long seccomp_notify_addfd(struct seccomp_filter *filter,
+ 		goto out_unlock;
+ 	}
+ 
++	if (addfd.flags & SECCOMP_ADDFD_FLAG_SEND) {
++		/*
++		 * Disallow queuing an atomic addfd + send reply while there are
++		 * some addfd requests still to process.
++		 *
++		 * There is no clear reason to support it, and it keeps
++		 * the loop on the other side straightforward.
++		 */
++		if (!list_empty(&knotif->addfd)) {
++			ret = -EBUSY;
++			goto out_unlock;
++		}
++
++		/* Allow exactly only one reply */
++		knotif->state = SECCOMP_NOTIFY_REPLIED;
++	}
++
+ 	list_add(&kaddfd.list, &knotif->addfd);
+ 	complete(&knotif->ready);
+ 	mutex_unlock(&filter->notify_lock);
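
With the new flag a supervisor installs the fd and answers the
notification in a single ioctl, closing the window in which the target
could be killed between a plain ADDFD and the NOTIF_SEND. A hedged
sketch of supervisor-side usage -- the fds, the notification id and all
error handling are illustrative, and SECCOMP_ADDFD_FLAG_SEND needs
headers that carry this change:

#include <fcntl.h>
#include <linux/seccomp.h>
#include <string.h>
#include <sys/ioctl.h>

static int addfd_and_reply(int notify_fd, __u64 id, int srcfd)
{
	struct seccomp_notif_addfd addfd;

	memset(&addfd, 0, sizeof(addfd));
	addfd.id = id;			/* from SECCOMP_IOCTL_NOTIF_RECV */
	addfd.srcfd = srcfd;		/* fd to inject into the target */
	addfd.newfd_flags = O_CLOEXEC;
	addfd.flags = SECCOMP_ADDFD_FLAG_SEND;	/* install + reply at once */

	/* on success: the fd number the target task received */
	return ioctl(notify_fd, SECCOMP_IOCTL_NOTIF_ADDFD, &addfd);
}
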
+diff --git a/kernel/smpboot.c b/kernel/smpboot.c
+index f25208e8df836..e4163042c4d66 100644
+--- a/kernel/smpboot.c
++++ b/kernel/smpboot.c
+@@ -33,7 +33,6 @@ struct task_struct *idle_thread_get(unsigned int cpu)
+ 
+ 	if (!tsk)
+ 		return ERR_PTR(-ENOMEM);
+-	init_idle(tsk, cpu);
+ 	return tsk;
+ }
+ 
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 3a583a29815fa..142ee040f5733 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -558,6 +558,10 @@ long __sys_setreuid(uid_t ruid, uid_t euid)
+ 	if (retval < 0)
+ 		goto error;
+ 
++	retval = set_cred_ucounts(new);
++	if (retval < 0)
++		goto error;
++
+ 	return commit_creds(new);
+ 
+ error:
+@@ -616,6 +620,10 @@ long __sys_setuid(uid_t uid)
+ 	if (retval < 0)
+ 		goto error;
+ 
++	retval = set_cred_ucounts(new);
++	if (retval < 0)
++		goto error;
++
+ 	return commit_creds(new);
+ 
+ error:
+@@ -691,6 +699,10 @@ long __sys_setresuid(uid_t ruid, uid_t euid, uid_t suid)
+ 	if (retval < 0)
+ 		goto error;
+ 
++	retval = set_cred_ucounts(new);
++	if (retval < 0)
++		goto error;
++
+ 	return commit_creds(new);
+ 
+ error:
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 2cd902592fc1f..cb12225bf0502 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -124,6 +124,13 @@ static void __clocksource_change_rating(struct clocksource *cs, int rating);
+ #define WATCHDOG_INTERVAL (HZ >> 1)
+ #define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 4)
+ 
++/*
++ * Maximum permissible delay between two readouts of the watchdog
++ * clocksource surrounding a read of the clocksource being validated.
++ * This delay could be due to SMIs, NMIs, or to VCPU preemptions.
++ */
++#define WATCHDOG_MAX_SKEW (100 * NSEC_PER_USEC)
++
+ static void clocksource_watchdog_work(struct work_struct *work)
+ {
+ 	/*
+@@ -184,12 +191,99 @@ void clocksource_mark_unstable(struct clocksource *cs)
+ 	spin_unlock_irqrestore(&watchdog_lock, flags);
+ }
+ 
++static ulong max_cswd_read_retries = 3;
++module_param(max_cswd_read_retries, ulong, 0644);
++
++static bool cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
++{
++	unsigned int nretries;
++	u64 wd_end, wd_delta;
++	int64_t wd_delay;
++
++	for (nretries = 0; nretries <= max_cswd_read_retries; nretries++) {
++		local_irq_disable();
++		*wdnow = watchdog->read(watchdog);
++		*csnow = cs->read(cs);
++		wd_end = watchdog->read(watchdog);
++		local_irq_enable();
++
++		wd_delta = clocksource_delta(wd_end, *wdnow, watchdog->mask);
++		wd_delay = clocksource_cyc2ns(wd_delta, watchdog->mult,
++					      watchdog->shift);
++		if (wd_delay <= WATCHDOG_MAX_SKEW) {
++			if (nretries > 1 || nretries >= max_cswd_read_retries) {
++				pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
++					smp_processor_id(), watchdog->name, nretries);
++			}
++			return true;
++		}
++	}
++
++	pr_warn("timekeeping watchdog on CPU%d: %s read-back delay of %lldns, attempt %d, marking unstable\n",
++		smp_processor_id(), watchdog->name, wd_delay, nretries);
++	return false;
++}
++
++static u64 csnow_mid;
++static cpumask_t cpus_ahead;
++static cpumask_t cpus_behind;
++
++static void clocksource_verify_one_cpu(void *csin)
++{
++	struct clocksource *cs = (struct clocksource *)csin;
++
++	csnow_mid = cs->read(cs);
++}
++
++static void clocksource_verify_percpu(struct clocksource *cs)
++{
++	int64_t cs_nsec, cs_nsec_max = 0, cs_nsec_min = LLONG_MAX;
++	u64 csnow_begin, csnow_end;
++	int cpu, testcpu;
++	s64 delta;
++
++	cpumask_clear(&cpus_ahead);
++	cpumask_clear(&cpus_behind);
++	preempt_disable();
++	testcpu = smp_processor_id();
++	pr_warn("Checking clocksource %s synchronization from CPU %d.\n", cs->name, testcpu);
++	for_each_online_cpu(cpu) {
++		if (cpu == testcpu)
++			continue;
++		csnow_begin = cs->read(cs);
++		smp_call_function_single(cpu, clocksource_verify_one_cpu, cs, 1);
++		csnow_end = cs->read(cs);
++		delta = (s64)((csnow_mid - csnow_begin) & cs->mask);
++		if (delta < 0)
++			cpumask_set_cpu(cpu, &cpus_behind);
++		delta = (csnow_end - csnow_mid) & cs->mask;
++		if (delta < 0)
++			cpumask_set_cpu(cpu, &cpus_ahead);
++		delta = clocksource_delta(csnow_end, csnow_begin, cs->mask);
++		cs_nsec = clocksource_cyc2ns(delta, cs->mult, cs->shift);
++		if (cs_nsec > cs_nsec_max)
++			cs_nsec_max = cs_nsec;
++		if (cs_nsec < cs_nsec_min)
++			cs_nsec_min = cs_nsec;
++	}
++	preempt_enable();
++	if (!cpumask_empty(&cpus_ahead))
++		pr_warn("        CPUs %*pbl ahead of CPU %d for clocksource %s.\n",
++			cpumask_pr_args(&cpus_ahead), testcpu, cs->name);
++	if (!cpumask_empty(&cpus_behind))
++		pr_warn("        CPUs %*pbl behind CPU %d for clocksource %s.\n",
++			cpumask_pr_args(&cpus_behind), testcpu, cs->name);
++	if (!cpumask_empty(&cpus_ahead) || !cpumask_empty(&cpus_behind))
++		pr_warn("        CPU %d check durations %lldns - %lldns for clocksource %s.\n",
++			testcpu, cs_nsec_min, cs_nsec_max, cs->name);
++}
++
+ static void clocksource_watchdog(struct timer_list *unused)
+ {
+-	struct clocksource *cs;
+ 	u64 csnow, wdnow, cslast, wdlast, delta;
+-	int64_t wd_nsec, cs_nsec;
+ 	int next_cpu, reset_pending;
++	int64_t wd_nsec, cs_nsec;
++	struct clocksource *cs;
+ 
+ 	spin_lock(&watchdog_lock);
+ 	if (!watchdog_running)
+@@ -206,10 +300,11 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 			continue;
+ 		}
+ 
+-		local_irq_disable();
+-		csnow = cs->read(cs);
+-		wdnow = watchdog->read(watchdog);
+-		local_irq_enable();
++		if (!cs_watchdog_read(cs, &csnow, &wdnow)) {
++			/* Clock readout unreliable, so give it up. */
++			__clocksource_unstable(cs);
++			continue;
++		}
+ 
+ 		/* Clocksource initialized ? */
+ 		if (!(cs->flags & CLOCK_SOURCE_WATCHDOG) ||
+@@ -407,6 +502,12 @@ static int __clocksource_watchdog_kthread(void)
+ 	unsigned long flags;
+ 	int select = 0;
+ 
++	/* Do any required per-CPU skew verification. */
++	if (curr_clocksource &&
++	    curr_clocksource->flags & CLOCK_SOURCE_UNSTABLE &&
++	    curr_clocksource->flags & CLOCK_SOURCE_VERIFY_PERCPU)
++		clocksource_verify_percpu(curr_clocksource);
++
+ 	spin_lock_irqsave(&watchdog_lock, flags);
+ 	list_for_each_entry_safe(cs, tmp, &watchdog_list, wd_list) {
+ 		if (cs->flags & CLOCK_SOURCE_UNSTABLE) {
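
cs_watchdog_read() brackets the suspect clocksource between two watchdog
reads and throws the sample away when the bracket itself exceeded
WATCHDOG_MAX_SKEW -- the delay then came from an SMI/NMI or vCPU
preemption, not from the clock. A userspace model of the bounded-retry
read (clock ids and limits illustrative):

#include <stdio.h>
#include <time.h>

#define MAX_SKEW_NS	100000LL	/* 100us, as WATCHDOG_MAX_SKEW */
#define MAX_RETRIES	3

static long long ns_now(clockid_t id)
{
	struct timespec ts;

	clock_gettime(id, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static int read_checked(long long *cs_ns)
{
	for (int retry = 0; retry <= MAX_RETRIES; retry++) {
		long long wd1 = ns_now(CLOCK_MONOTONIC_RAW);
		*cs_ns = ns_now(CLOCK_MONOTONIC);   /* clock under test */
		long long wd2 = ns_now(CLOCK_MONOTONIC_RAW);

		if (wd2 - wd1 <= MAX_SKEW_NS)
			return 0;	/* clean bracket, sample usable */
	}
	return -1;	/* persistently delayed: mark it unstable */
}

int main(void)
{
	long long cs;

	printf("read %s\n", read_checked(&cs) ? "failed" : "ok");
	return 0;
}
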
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 7a52bc1728414..f0568b3d6bd1e 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -1840,7 +1840,8 @@ static int __bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *
+ 	if (prog->aux->max_tp_access > btp->writable_size)
+ 		return -EINVAL;
+ 
+-	return tracepoint_probe_register(tp, (void *)btp->bpf_func, prog);
++	return tracepoint_probe_register_may_exist(tp, (void *)btp->bpf_func,
++						   prog);
+ }
+ 
+ int bpf_probe_register(struct bpf_raw_event_map *btp, struct bpf_prog *prog)
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index c1abd63f1d6c5..797e096bc1f3d 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1555,6 +1555,13 @@ static int contains_operator(char *str)
+ 
+ 	switch (*op) {
+ 	case '-':
++		/*
++		 * Unfortunately, the modifier ".sym-offset"
++		 * can confuse things.
++		 */
++		if (op - str >= 4 && !strncmp(op - 4, ".sym-offset", 11))
++			return FIELD_OP_NONE;
++
+ 		if (*str == '-')
+ 			field_op = FIELD_OP_UNARY_MINUS;
+ 		else
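
The guard in isolation: a '-' found by the operator scan must not be the
dash inside the ".sym-offset" field modifier. Runnable standalone:

#include <stdio.h>
#include <string.h>

static int dash_is_operator(const char *str)
{
	const char *op = strrchr(str, '-');

	if (!op)
		return 0;
	/* four bytes of ".sym" must precede the '-' for the modifier */
	if (op - str >= 4 && !strncmp(op - 4, ".sym-offset", 11))
		return 0;
	return 1;
}

int main(void)
{
	printf("%d\n", dash_is_operator("a-b"));		/* 1 */
	printf("%d\n", dash_is_operator("ip.sym-offset"));	/* 0 */
	return 0;
}
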
+diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
+index 9f478d29b9264..976bf8ce80396 100644
+--- a/kernel/tracepoint.c
++++ b/kernel/tracepoint.c
+@@ -273,7 +273,8 @@ static void tracepoint_update_call(struct tracepoint *tp, struct tracepoint_func
+  * Add the probe function to a tracepoint.
+  */
+ static int tracepoint_add_func(struct tracepoint *tp,
+-			       struct tracepoint_func *func, int prio)
++			       struct tracepoint_func *func, int prio,
++			       bool warn)
+ {
+ 	struct tracepoint_func *old, *tp_funcs;
+ 	int ret;
+@@ -288,7 +289,7 @@ static int tracepoint_add_func(struct tracepoint *tp,
+ 			lockdep_is_held(&tracepoints_mutex));
+ 	old = func_add(&tp_funcs, func, prio);
+ 	if (IS_ERR(old)) {
+-		WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM);
++		WARN_ON_ONCE(warn && PTR_ERR(old) != -ENOMEM);
+ 		return PTR_ERR(old);
+ 	}
+ 
+@@ -343,6 +344,32 @@ static int tracepoint_remove_func(struct tracepoint *tp,
+ 	return 0;
+ }
+ 
++/**
++ * tracepoint_probe_register_prio_may_exist -  Connect a probe to a tracepoint with priority
++ * @tp: tracepoint
++ * @probe: probe handler
++ * @data: tracepoint data
++ * @prio: priority of this function over other registered functions
++ *
++ * Same as tracepoint_probe_register_prio() except that it will not warn
++ * if the tracepoint is already registered.
++ */
++int tracepoint_probe_register_prio_may_exist(struct tracepoint *tp, void *probe,
++					     void *data, int prio)
++{
++	struct tracepoint_func tp_func;
++	int ret;
++
++	mutex_lock(&tracepoints_mutex);
++	tp_func.func = probe;
++	tp_func.data = data;
++	tp_func.prio = prio;
++	ret = tracepoint_add_func(tp, &tp_func, prio, false);
++	mutex_unlock(&tracepoints_mutex);
++	return ret;
++}
++EXPORT_SYMBOL_GPL(tracepoint_probe_register_prio_may_exist);
++
+ /**
+  * tracepoint_probe_register_prio -  Connect a probe to a tracepoint with priority
+  * @tp: tracepoint
+@@ -366,7 +393,7 @@ int tracepoint_probe_register_prio(struct tracepoint *tp, void *probe,
+ 	tp_func.func = probe;
+ 	tp_func.data = data;
+ 	tp_func.prio = prio;
+-	ret = tracepoint_add_func(tp, &tp_func, prio);
++	ret = tracepoint_add_func(tp, &tp_func, prio, true);
+ 	mutex_unlock(&tracepoints_mutex);
+ 	return ret;
+ }
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index 8d8874f1c35e2..1f4455874aa0d 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -8,6 +8,12 @@
+ #include <linux/kmemleak.h>
+ #include <linux/user_namespace.h>
+ 
++struct ucounts init_ucounts = {
++	.ns    = &init_user_ns,
++	.uid   = GLOBAL_ROOT_UID,
++	.count = 1,
++};
++
+ #define UCOUNTS_HASHTABLE_BITS 10
+ static struct hlist_head ucounts_hashtable[(1 << UCOUNTS_HASHTABLE_BITS)];
+ static DEFINE_SPINLOCK(ucounts_lock);
+@@ -129,7 +135,15 @@ static struct ucounts *find_ucounts(struct user_namespace *ns, kuid_t uid, struc
+ 	return NULL;
+ }
+ 
+-static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
++static void hlist_add_ucounts(struct ucounts *ucounts)
++{
++	struct hlist_head *hashent = ucounts_hashentry(ucounts->ns, ucounts->uid);
++	spin_lock_irq(&ucounts_lock);
++	hlist_add_head(&ucounts->node, hashent);
++	spin_unlock_irq(&ucounts_lock);
++}
++
++struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
+ {
+ 	struct hlist_head *hashent = ucounts_hashentry(ns, uid);
+ 	struct ucounts *ucounts, *new;
+@@ -164,7 +178,26 @@ static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
+ 	return ucounts;
+ }
+ 
+-static void put_ucounts(struct ucounts *ucounts)
++struct ucounts *get_ucounts(struct ucounts *ucounts)
++{
++	unsigned long flags;
++
++	if (!ucounts)
++		return NULL;
++
++	spin_lock_irqsave(&ucounts_lock, flags);
++	if (ucounts->count == INT_MAX) {
++		WARN_ONCE(1, "ucounts: counter has reached its maximum value");
++		ucounts = NULL;
++	} else {
++		ucounts->count += 1;
++	}
++	spin_unlock_irqrestore(&ucounts_lock, flags);
++
++	return ucounts;
++}
++
++void put_ucounts(struct ucounts *ucounts)
+ {
+ 	unsigned long flags;
+ 
+@@ -198,7 +231,7 @@ struct ucounts *inc_ucount(struct user_namespace *ns, kuid_t uid,
+ {
+ 	struct ucounts *ucounts, *iter, *bad;
+ 	struct user_namespace *tns;
+-	ucounts = get_ucounts(ns, uid);
++	ucounts = alloc_ucounts(ns, uid);
+ 	for (iter = ucounts; iter; iter = tns->ucounts) {
+ 		int max;
+ 		tns = iter->ns;
+@@ -241,6 +274,7 @@ static __init int user_namespace_sysctl_init(void)
+ 	BUG_ON(!user_header);
+ 	BUG_ON(!setup_userns_sysctls(&init_user_ns));
+ #endif
++	hlist_add_ucounts(&init_ucounts);
+ 	return 0;
+ }
+ subsys_initcall(user_namespace_sysctl_init);
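
The get_ucounts() hunk above turns the reference count into a saturating counter: once it reaches INT_MAX, further gets are refused rather than allowed to overflow. A minimal single-threaded userspace sketch of the same policy — the kernel holds ucounts_lock around this, locking is omitted here, and the struct and function names are made up:

#include <limits.h>
#include <stdio.h>

/* Toy stand-in for struct ucounts: just the reference count. */
struct counted {
	int count;
};

/* Take a reference, refusing once the counter would overflow. */
static struct counted *get_ref(struct counted *c)
{
	if (!c)
		return NULL;
	if (c->count == INT_MAX) {
		fprintf(stderr, "refcount saturated, refusing new ref\n");
		return NULL;
	}
	c->count += 1;
	return c;
}

int main(void)
{
	struct counted c = { .count = INT_MAX - 1 };

	printf("first get:  %s\n", get_ref(&c) ? "ok" : "refused");
	printf("second get: %s\n", get_ref(&c) ? "ok" : "refused");
	return 0;
}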
+diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
+index 8d62863721b05..27670ab7a4eda 100644
+--- a/kernel/user_namespace.c
++++ b/kernel/user_namespace.c
+@@ -1340,6 +1340,9 @@ static int userns_install(struct nsset *nsset, struct ns_common *ns)
+ 	put_user_ns(cred->user_ns);
+ 	set_cred_user_ns(cred, get_user_ns(user_ns));
+ 
++	if (set_cred_ucounts(cred) < 0)
++		return -EINVAL;
++
+ 	return 0;
+ }
+ 
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 678c13967580e..1e1bd6f4a13de 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1372,7 +1372,6 @@ config LOCKDEP
+ 	bool
+ 	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
+ 	select STACKTRACE
+-	depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
+ 	select KALLSYMS
+ 	select KALLSYMS_ALL
+ 
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index c701b7a187f2b..9eb7c31688cc8 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -476,7 +476,7 @@ int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes)
+ 	int err;
+ 	struct iovec v;
+ 
+-	if (!(i->type & (ITER_BVEC|ITER_KVEC))) {
++	if (iter_is_iovec(i)) {
+ 		iterate_iovec(i, bytes, v, iov, skip, ({
+ 			err = fault_in_pages_readable(v.iov_base, v.iov_len);
+ 			if (unlikely(err))
+@@ -957,23 +957,48 @@ static inline bool page_copy_sane(struct page *page, size_t offset, size_t n)
+ 	return false;
+ }
+ 
+-size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
++static size_t __copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
+ 			 struct iov_iter *i)
+ {
+-	if (unlikely(!page_copy_sane(page, offset, bytes)))
+-		return 0;
+ 	if (i->type & (ITER_BVEC | ITER_KVEC | ITER_XARRAY)) {
+ 		void *kaddr = kmap_atomic(page);
+ 		size_t wanted = copy_to_iter(kaddr + offset, bytes, i);
+ 		kunmap_atomic(kaddr);
+ 		return wanted;
+-	} else if (unlikely(iov_iter_is_discard(i)))
++	} else if (unlikely(iov_iter_is_discard(i))) {
++		if (unlikely(i->count < bytes))
++			bytes = i->count;
++		i->count -= bytes;
+ 		return bytes;
+-	else if (likely(!iov_iter_is_pipe(i)))
++	} else if (likely(!iov_iter_is_pipe(i)))
+ 		return copy_page_to_iter_iovec(page, offset, bytes, i);
+ 	else
+ 		return copy_page_to_iter_pipe(page, offset, bytes, i);
+ }
++
++size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
++			 struct iov_iter *i)
++{
++	size_t res = 0;
++	if (unlikely(!page_copy_sane(page, offset, bytes)))
++		return 0;
++	page += offset / PAGE_SIZE; // first subpage
++	offset %= PAGE_SIZE;
++	while (1) {
++		size_t n = __copy_page_to_iter(page, offset,
++				min(bytes, (size_t)PAGE_SIZE - offset), i);
++		res += n;
++		bytes -= n;
++		if (!bytes || !n)
++			break;
++		offset += n;
++		if (offset == PAGE_SIZE) {
++			page++;
++			offset = 0;
++		}
++	}
++	return res;
++}
+ EXPORT_SYMBOL(copy_page_to_iter);
+ 
+ size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
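
The rewritten copy_page_to_iter() above handles copies that span several subpages of a compound page by first normalizing (page, offset), then looping one page-sized chunk at a time and bailing out on a short copy. A rough userspace analogue of that loop shape, with a flat buffer and an always-successful chunk copy standing in for __copy_page_to_iter():

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Stand-in for __copy_page_to_iter(): copies at most one page-sized
 * chunk and may in principle return less than asked (a short copy). */
static size_t copy_chunk(char *dst, const char *src, size_t n)
{
	memcpy(dst, src, n);
	return n;		/* this sketch never copies short */
}

/* Same loop shape as the new copy_page_to_iter(): normalize the offset
 * into (page, offset-within-page), then walk chunk by chunk, stopping
 * as soon as everything is copied or a chunk comes up short. */
static size_t copy_spanning(char *dst, const char *base,
			    size_t offset, size_t bytes)
{
	const char *page = base + (offset / PAGE_SIZE) * PAGE_SIZE;
	size_t res = 0;

	offset %= PAGE_SIZE;
	while (1) {
		size_t want = bytes < PAGE_SIZE - offset ?
			      bytes : PAGE_SIZE - offset;
		size_t n = copy_chunk(dst + res, page + offset, want);

		res += n;
		bytes -= n;
		if (!bytes || !n)
			break;
		offset += n;
		if (offset == PAGE_SIZE) {
			page += PAGE_SIZE;
			offset = 0;
		}
	}
	return res;
}

int main(void)
{
	static char src[3 * PAGE_SIZE], dst[2 * PAGE_SIZE];

	memset(src, 'x', sizeof(src));
	/* 6000 bytes starting 100 bytes into the second page: crosses a
	 * page boundary, so the loop copies 3996 + 2004 bytes. */
	printf("copied %zu bytes\n",
	       copy_spanning(dst, src, PAGE_SIZE + 100, 6000));
	return 0;
}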
+diff --git a/lib/kstrtox.c b/lib/kstrtox.c
+index a118b0b1e9b2c..0b5fe8b411732 100644
+--- a/lib/kstrtox.c
++++ b/lib/kstrtox.c
+@@ -39,20 +39,22 @@ const char *_parse_integer_fixup_radix(const char *s, unsigned int *base)
+ 
+ /*
+  * Convert non-negative integer string representation in explicitly given radix
+- * to an integer.
++ * to an integer. A maximum of max_chars characters will be converted.
++ *
+  * Return number of characters consumed maybe or-ed with overflow bit.
+  * If overflow occurs, result integer (incorrect) is still returned.
+  *
+  * Don't you dare use this function.
+  */
+-unsigned int _parse_integer(const char *s, unsigned int base, unsigned long long *p)
++unsigned int _parse_integer_limit(const char *s, unsigned int base, unsigned long long *p,
++				  size_t max_chars)
+ {
+ 	unsigned long long res;
+ 	unsigned int rv;
+ 
+ 	res = 0;
+ 	rv = 0;
+-	while (1) {
++	while (max_chars--) {
+ 		unsigned int c = *s;
+ 		unsigned int lc = c | 0x20; /* don't tolower() this line */
+ 		unsigned int val;
+@@ -82,6 +84,11 @@ unsigned int _parse_integer(const char *s, unsigned int base, unsigned long long
+ 	return rv;
+ }
+ 
++unsigned int _parse_integer(const char *s, unsigned int base, unsigned long long *p)
++{
++	return _parse_integer_limit(s, base, p, INT_MAX);
++}
++
+ static int _kstrtoull(const char *s, unsigned int base, unsigned long long *res)
+ {
+ 	unsigned long long _res;
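
_parse_integer_limit() above is the old _parse_integer() loop with a character budget: `while (1)` becomes `while (max_chars--)`, so at most max_chars characters are consumed. A userspace sketch of the same loop, with the KSTRTOX_OVERFLOW handling left out for brevity:

#include <stddef.h>
#include <stdio.h>

/* Consume at most max_chars digits of the given base, returning how
 * many characters were actually used; the parsed value goes to *p. */
static unsigned int parse_integer_limit(const char *s, unsigned int base,
					unsigned long long *p,
					size_t max_chars)
{
	unsigned long long res = 0;
	unsigned int rv = 0;

	while (max_chars--) {
		unsigned int c = *s;
		unsigned int lc = c | 0x20;	/* cheap ASCII tolower() */
		unsigned int val;

		if (c >= '0' && c <= '9')
			val = c - '0';
		else if (lc >= 'a' && lc <= 'f')
			val = lc - 'a' + 10;
		else
			break;
		if (val >= base)
			break;
		res = res * base + val;
		rv++;
		s++;
	}
	*p = res;
	return rv;
}

int main(void)
{
	unsigned long long v;
	unsigned int used = parse_integer_limit("12345", 10, &v, 3);

	printf("consumed %u chars, value %llu\n", used, v);	/* 3, 123 */
	return 0;
}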
+diff --git a/lib/kstrtox.h b/lib/kstrtox.h
+index 3b4637bcd2540..158c400ca8658 100644
+--- a/lib/kstrtox.h
++++ b/lib/kstrtox.h
+@@ -4,6 +4,8 @@
+ 
+ #define KSTRTOX_OVERFLOW	(1U << 31)
+ const char *_parse_integer_fixup_radix(const char *s, unsigned int *base);
++unsigned int _parse_integer_limit(const char *s, unsigned int base, unsigned long long *res,
++				  size_t max_chars);
+ unsigned int _parse_integer(const char *s, unsigned int base, unsigned long long *res);
+ 
+ #endif
+diff --git a/lib/kunit/test.c b/lib/kunit/test.c
+index 2f6cc01232322..17973a4a44c29 100644
+--- a/lib/kunit/test.c
++++ b/lib/kunit/test.c
+@@ -376,7 +376,7 @@ static void kunit_run_case_catch_errors(struct kunit_suite *suite,
+ 	context.test_case = test_case;
+ 	kunit_try_catch_run(try_catch, &context);
+ 
+-	test_case->success = test->success;
++	test_case->success &= test->success;
+ }
+ 
+ int kunit_run_tests(struct kunit_suite *suite)
+@@ -388,7 +388,7 @@ int kunit_run_tests(struct kunit_suite *suite)
+ 
+ 	kunit_suite_for_each_test_case(suite, test_case) {
+ 		struct kunit test = { .param_value = NULL, .param_index = 0 };
+-		bool test_success = true;
++		test_case->success = true;
+ 
+ 		if (test_case->generate_params) {
+ 			/* Get initial param. */
+@@ -398,7 +398,6 @@ int kunit_run_tests(struct kunit_suite *suite)
+ 
+ 		do {
+ 			kunit_run_case_catch_errors(suite, test_case, &test);
+-			test_success &= test_case->success;
+ 
+ 			if (test_case->generate_params) {
+ 				if (param_desc[0] == '\0') {
+@@ -420,7 +419,7 @@ int kunit_run_tests(struct kunit_suite *suite)
+ 			}
+ 		} while (test.param_value);
+ 
+-		kunit_print_ok_not_ok(&test, true, test_success,
++		kunit_print_ok_not_ok(&test, true, test_case->success,
+ 				      kunit_test_case_num(suite, test_case),
+ 				      test_case->name);
+ 	}
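
The kunit change above moves success tracking onto test_case->success and accumulates each parameter run with `&=`, so a single failing parameter marks the whole case as failed. A toy harness showing why the AND matters (the names and the failing parameter are invented):

#include <stdbool.h>
#include <stdio.h>

struct test_case {
	const char *name;
	bool success;
};

/* Pretend each parameter run either passes or fails. */
static bool run_one_param(int param)
{
	return param != 2;	/* parameter 2 "fails" */
}

int main(void)
{
	struct test_case tc = { .name = "example", .success = true };

	/* Seed success once, then AND in every parameter's result so a
	 * single failure sticks for the whole case. */
	for (int param = 0; param < 4; param++)
		tc.success &= run_one_param(param);

	printf("%s: %s\n", tc.name, tc.success ? "ok" : "not ok");
	return 0;
}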
+diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
+index 2d85abac17448..0f6b262e09648 100644
+--- a/lib/locking-selftest.c
++++ b/lib/locking-selftest.c
+@@ -194,6 +194,7 @@ static void init_shared_classes(void)
+ #define HARDIRQ_ENTER()				\
+ 	local_irq_disable();			\
+ 	__irq_enter();				\
++	lockdep_hardirq_threaded();		\
+ 	WARN_ON(!in_irq());
+ 
+ #define HARDIRQ_EXIT()				\
+diff --git a/lib/math/rational.c b/lib/math/rational.c
+index 9781d521963d1..c0ab51d8fbb98 100644
+--- a/lib/math/rational.c
++++ b/lib/math/rational.c
+@@ -12,6 +12,7 @@
+ #include <linux/compiler.h>
+ #include <linux/export.h>
+ #include <linux/minmax.h>
++#include <linux/limits.h>
+ 
+ /*
+  * calculate best rational approximation for a given fraction
+@@ -78,13 +79,18 @@ void rational_best_approximation(
+ 		 * found below as 't'.
+ 		 */
+ 		if ((n2 > max_numerator) || (d2 > max_denominator)) {
+-			unsigned long t = min((max_numerator - n0) / n1,
+-					      (max_denominator - d0) / d1);
++			unsigned long t = ULONG_MAX;
+ 
+-			/* This tests if the semi-convergent is closer
+-			 * than the previous convergent.
++			if (d1)
++				t = (max_denominator - d0) / d1;
++			if (n1)
++				t = min(t, (max_numerator - n0) / n1);
++
++			/* This tests if the semi-convergent is closer than the previous
++			 * convergent.  If d1 is zero there is no previous convergent as this
++			 * is the 1st iteration, so always choose the semi-convergent.
+ 			 */
+-			if (2u * t > a || (2u * t == a && d0 * dp > d1 * d)) {
++			if (!d1 || 2u * t > a || (2u * t == a && d0 * dp > d1 * d)) {
+ 				n1 = n0 + t * n1;
+ 				d1 = d0 + t * d1;
+ 			}
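
For context on the rational.c fix: rational_best_approximation() walks continued-fraction convergents, and on the very first iteration d1 is still 0, so the old unconditional `(max_denominator - d0) / d1` divided by zero. A runnable userspace rendition of the patched algorithm, with the same variable roles and min() spelled out:

#include <limits.h>
#include <stdio.h>

/* Walk the continued-fraction convergents of n/d; when the next
 * convergent overflows the limits, fall back to the best
 * semi-convergent. The !d1 test keeps the first iteration from
 * dividing by zero, exactly as in the fix above. */
static void best_rational(unsigned long n, unsigned long d,
			  unsigned long max_n, unsigned long max_d,
			  unsigned long *out_n, unsigned long *out_d)
{
	unsigned long n0 = 0, n1 = 1, d0 = 1, d1 = 0;

	while (d != 0) {
		unsigned long dp = d, a = n / d;
		unsigned long n2, d2;

		d = n % d;
		n = dp;
		n2 = n0 + a * n1;
		d2 = d0 + a * d1;
		if (n2 > max_n || d2 > max_d) {
			unsigned long t = ULONG_MAX;

			if (d1)
				t = (max_d - d0) / d1;
			if (n1 && (max_n - n0) / n1 < t)
				t = (max_n - n0) / n1;
			if (!d1 || 2 * t > a ||
			    (2 * t == a && d0 * dp > d1 * d)) {
				n1 = n0 + t * n1;
				d1 = d0 + t * d1;
			}
			break;
		}
		n0 = n1; n1 = n2;
		d0 = d1; d1 = d2;
	}
	*out_n = n1;
	*out_d = d1;
}

int main(void)
{
	unsigned long bn, bd;

	best_rational(31415926, 10000000, 1000, 100, &bn, &bd);
	printf("pi ~= %lu/%lu\n", bn, bd);	/* 311/99 */
	return 0;
}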
+diff --git a/lib/seq_buf.c b/lib/seq_buf.c
+index 707453f5d58ee..89c26c393bdba 100644
+--- a/lib/seq_buf.c
++++ b/lib/seq_buf.c
+@@ -243,12 +243,14 @@ int seq_buf_putmem_hex(struct seq_buf *s, const void *mem,
+ 			break;
+ 
+ 		/* j increments twice per loop */
+-		len -= j / 2;
+ 		hex[j++] = ' ';
+ 
+ 		seq_buf_putmem(s, hex, j);
+ 		if (seq_buf_has_overflowed(s))
+ 			return -1;
++
++		len -= start_len;
++		data += start_len;
+ 	}
+ 	return 0;
+ }
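
The seq_buf_putmem_hex() fix above boils down to advancing both the remaining length and the data pointer by the bytes actually rendered, instead of shrinking len by a miscounted j / 2 while re-reading the same data every chunk. A small hex dumper showing the corrected bookkeeping:

#include <stdio.h>
#include <string.h>

#define HEX_CHARS 16	/* bytes rendered per chunk, as in the kernel loop */

/* Hex-dump mem in fixed-size chunks. The essential detail: after
 * emitting a chunk, advance *both* len and data by the chunk size,
 * otherwise every chunk re-dumps the first bytes. */
static void putmem_hex(const void *mem, size_t len)
{
	const unsigned char *data = mem;

	while (len) {
		size_t start_len = len < HEX_CHARS ? len : HEX_CHARS;
		size_t i;

		for (i = 0; i < start_len; i++)
			printf("%02x", data[i]);
		putchar('\n');

		len -= start_len;
		data += start_len;
	}
}

int main(void)
{
	const char msg[] = "abcdefghijklmnopqrstuvwxyz";

	putmem_hex(msg, strlen(msg));	/* two lines: 16 + 10 bytes */
	return 0;
}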
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index f0c35d9b65bff..077a4a7c6f00c 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -53,6 +53,31 @@
+ #include <linux/string_helpers.h>
+ #include "kstrtox.h"
+ 
++static unsigned long long simple_strntoull(const char *startp, size_t max_chars,
++					   char **endp, unsigned int base)
++{
++	const char *cp;
++	unsigned long long result = 0ULL;
++	size_t prefix_chars;
++	unsigned int rv;
++
++	cp = _parse_integer_fixup_radix(startp, &base);
++	prefix_chars = cp - startp;
++	if (prefix_chars < max_chars) {
++		rv = _parse_integer_limit(cp, base, &result, max_chars - prefix_chars);
++		/* FIXME */
++		cp += (rv & ~KSTRTOX_OVERFLOW);
++	} else {
++		/* Field too short for prefix + digit, skip over without converting */
++		cp = startp + max_chars;
++	}
++
++	if (endp)
++		*endp = (char *)cp;
++
++	return result;
++}
++
+ /**
+  * simple_strtoull - convert a string to an unsigned long long
+  * @cp: The start of the string
+@@ -63,18 +88,7 @@
+  */
+ unsigned long long simple_strtoull(const char *cp, char **endp, unsigned int base)
+ {
+-	unsigned long long result;
+-	unsigned int rv;
+-
+-	cp = _parse_integer_fixup_radix(cp, &base);
+-	rv = _parse_integer(cp, base, &result);
+-	/* FIXME */
+-	cp += (rv & ~KSTRTOX_OVERFLOW);
+-
+-	if (endp)
+-		*endp = (char *)cp;
+-
+-	return result;
++	return simple_strntoull(cp, INT_MAX, endp, base);
+ }
+ EXPORT_SYMBOL(simple_strtoull);
+ 
+@@ -109,6 +123,21 @@ long simple_strtol(const char *cp, char **endp, unsigned int base)
+ }
+ EXPORT_SYMBOL(simple_strtol);
+ 
++static long long simple_strntoll(const char *cp, size_t max_chars, char **endp,
++				 unsigned int base)
++{
++	/*
++	 * If cp[0] == '-' && max_chars == 1, simple_strntoull() is passed
++	 * max_chars - 1 == 0 below, which it handles safely.
++	 * If max_chars == 0 to begin with, we can likewise drop through to
++	 * simple_strntoull() and the content of *cp is irrelevant.
++	 */
++	if (*cp == '-' && max_chars > 0)
++		return -simple_strntoull(cp + 1, max_chars - 1, endp, base);
++
++	return simple_strntoull(cp, max_chars, endp, base);
++}
++
+ /**
+  * simple_strtoll - convert a string to a signed long long
+  * @cp: The start of the string
+@@ -119,10 +148,7 @@ EXPORT_SYMBOL(simple_strtol);
+  */
+ long long simple_strtoll(const char *cp, char **endp, unsigned int base)
+ {
+-	if (*cp == '-')
+-		return -simple_strtoull(cp + 1, endp, base);
+-
+-	return simple_strtoull(cp, endp, base);
++	return simple_strntoll(cp, INT_MAX, endp, base);
+ }
+ EXPORT_SYMBOL(simple_strtoll);
+ 
+@@ -3576,25 +3602,13 @@ int vsscanf(const char *buf, const char *fmt, va_list args)
+ 			break;
+ 
+ 		if (is_sign)
+-			val.s = qualifier != 'L' ?
+-				simple_strtol(str, &next, base) :
+-				simple_strtoll(str, &next, base);
++			val.s = simple_strntoll(str,
++						field_width >= 0 ? field_width : INT_MAX,
++						&next, base);
+ 		else
+-			val.u = qualifier != 'L' ?
+-				simple_strtoul(str, &next, base) :
+-				simple_strtoull(str, &next, base);
+-
+-		if (field_width > 0 && next - str > field_width) {
+-			if (base == 0)
+-				_parse_integer_fixup_radix(str, &base);
+-			while (next - str > field_width) {
+-				if (is_sign)
+-					val.s = div_s64(val.s, base);
+-				else
+-					val.u = div_u64(val.u, base);
+-				--next;
+-			}
+-		}
++			val.u = simple_strntoull(str,
++						 field_width >= 0 ? field_width : INT_MAX,
++						 &next, base);
+ 
+ 		switch (qualifier) {
+ 		case 'H':	/* that's 'hh' in format */
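
With simple_strntoll()/simple_strntoull() in place, vsscanf() above enforces a %Nd field width during parsing instead of parsing everything and dividing digits back out afterwards. A userspace sketch of width-limited signed parsing built on strtoll() over a bounded copy — strntoll is an invented name here, and the kernel versions parse in place rather than copying:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Signed parse looking at no more than max_chars characters of cp, so
 * a scanf-style field width can be enforced up front. */
static long long strntoll(const char *cp, size_t max_chars,
			  char **endp, int base)
{
	char buf[64];
	size_t n = strnlen(cp, max_chars);
	char *end;
	long long v;

	if (n >= sizeof(buf))
		n = sizeof(buf) - 1;
	memcpy(buf, cp, n);
	buf[n] = '\0';

	v = strtoll(buf, &end, base);
	if (endp)
		*endp = (char *)cp + (end - buf);
	return v;
}

int main(void)
{
	char *next;
	long long v = strntoll("-12345", 4, &next, 10);

	/* A width of 4 covers the sign plus three digits: -123. */
	printf("value %lld, rest \"%s\"\n", v, next);
	return 0;
}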
+diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
+index 297d1b349c197..92bfc37300dfc 100644
+--- a/mm/debug_vm_pgtable.c
++++ b/mm/debug_vm_pgtable.c
+@@ -146,13 +146,14 @@ static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
+ static void __init pmd_basic_tests(unsigned long pfn, int idx)
+ {
+ 	pgprot_t prot = protection_map[idx];
+-	pmd_t pmd = pfn_pmd(pfn, prot);
+ 	unsigned long val = idx, *ptr = &val;
++	pmd_t pmd;
+ 
+ 	if (!has_transparent_hugepage())
+ 		return;
+ 
+ 	pr_debug("Validating PMD basic (%pGv)\n", ptr);
++	pmd = pfn_pmd(pfn, prot);
+ 
+ 	/*
+ 	 * This test needs to be executed after the given page table entry
+@@ -185,7 +186,7 @@ static void __init pmd_advanced_tests(struct mm_struct *mm,
+ 				      unsigned long pfn, unsigned long vaddr,
+ 				      pgprot_t prot, pgtable_t pgtable)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pmd_t pmd;
+ 
+ 	if (!has_transparent_hugepage())
+ 		return;
+@@ -232,9 +233,14 @@ static void __init pmd_advanced_tests(struct mm_struct *mm,
+ 
+ static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pmd_t pmd;
++
++	if (!has_transparent_hugepage())
++		return;
+ 
+ 	pr_debug("Validating PMD leaf\n");
++	pmd = pfn_pmd(pfn, prot);
++
+ 	/*
+ 	 * PMD based THP is a leaf entry.
+ 	 */
+@@ -267,12 +273,16 @@ static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
+ 
+ static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pmd_t pmd;
+ 
+ 	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
+ 		return;
+ 
++	if (!has_transparent_hugepage())
++		return;
++
+ 	pr_debug("Validating PMD saved write\n");
++	pmd = pfn_pmd(pfn, prot);
+ 	WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
+ 	WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
+ }
+@@ -281,13 +291,14 @@ static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
+ static void __init pud_basic_tests(struct mm_struct *mm, unsigned long pfn, int idx)
+ {
+ 	pgprot_t prot = protection_map[idx];
+-	pud_t pud = pfn_pud(pfn, prot);
+ 	unsigned long val = idx, *ptr = &val;
++	pud_t pud;
+ 
+ 	if (!has_transparent_hugepage())
+ 		return;
+ 
+ 	pr_debug("Validating PUD basic (%pGv)\n", ptr);
++	pud = pfn_pud(pfn, prot);
+ 
+ 	/*
+ 	 * This test needs to be executed after the given page table entry
+@@ -323,7 +334,7 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
+ 				      unsigned long pfn, unsigned long vaddr,
+ 				      pgprot_t prot)
+ {
+-	pud_t pud = pfn_pud(pfn, prot);
++	pud_t pud;
+ 
+ 	if (!has_transparent_hugepage())
+ 		return;
+@@ -332,6 +343,7 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
+ 	/* Align the address wrt HPAGE_PUD_SIZE */
+ 	vaddr &= HPAGE_PUD_MASK;
+ 
++	pud = pfn_pud(pfn, prot);
+ 	set_pud_at(mm, vaddr, pudp, pud);
+ 	pudp_set_wrprotect(mm, vaddr, pudp);
+ 	pud = READ_ONCE(*pudp);
+@@ -370,9 +382,13 @@ static void __init pud_advanced_tests(struct mm_struct *mm,
+ 
+ static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pud_t pud = pfn_pud(pfn, prot);
++	pud_t pud;
++
++	if (!has_transparent_hugepage())
++		return;
+ 
+ 	pr_debug("Validating PUD leaf\n");
++	pud = pfn_pud(pfn, prot);
+ 	/*
+ 	 * PUD based THP is a leaf entry.
+ 	 */
+@@ -654,12 +670,16 @@ static void __init pte_protnone_tests(unsigned long pfn, pgprot_t prot)
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static void __init pmd_protnone_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pmd_t pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
++	pmd_t pmd;
+ 
+ 	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
+ 		return;
+ 
++	if (!has_transparent_hugepage())
++		return;
++
+ 	pr_debug("Validating PMD protnone\n");
++	pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
+ 	WARN_ON(!pmd_protnone(pmd));
+ 	WARN_ON(!pmd_present(pmd));
+ }
+@@ -679,18 +699,26 @@ static void __init pte_devmap_tests(unsigned long pfn, pgprot_t prot)
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static void __init pmd_devmap_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pmd_t pmd;
++
++	if (!has_transparent_hugepage())
++		return;
+ 
+ 	pr_debug("Validating PMD devmap\n");
++	pmd = pfn_pmd(pfn, prot);
+ 	WARN_ON(!pmd_devmap(pmd_mkdevmap(pmd)));
+ }
+ 
+ #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+ static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pud_t pud = pfn_pud(pfn, prot);
++	pud_t pud;
++
++	if (!has_transparent_hugepage())
++		return;
+ 
+ 	pr_debug("Validating PUD devmap\n");
++	pud = pfn_pud(pfn, prot);
+ 	WARN_ON(!pud_devmap(pud_mkdevmap(pud)));
+ }
+ #else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+@@ -733,25 +761,33 @@ static void __init pte_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ static void __init pmd_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pmd_t pmd;
+ 
+ 	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
+ 		return;
+ 
++	if (!has_transparent_hugepage())
++		return;
++
+ 	pr_debug("Validating PMD soft dirty\n");
++	pmd = pfn_pmd(pfn, prot);
+ 	WARN_ON(!pmd_soft_dirty(pmd_mksoft_dirty(pmd)));
+ 	WARN_ON(pmd_soft_dirty(pmd_clear_soft_dirty(pmd)));
+ }
+ 
+ static void __init pmd_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+ {
+-	pmd_t pmd = pfn_pmd(pfn, prot);
++	pmd_t pmd;
+ 
+ 	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) ||
+ 		!IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION))
+ 		return;
+ 
++	if (!has_transparent_hugepage())
++		return;
++
+ 	pr_debug("Validating PMD swap soft dirty\n");
++	pmd = pfn_pmd(pfn, prot);
+ 	WARN_ON(!pmd_swp_soft_dirty(pmd_swp_mksoft_dirty(pmd)));
+ 	WARN_ON(pmd_swp_soft_dirty(pmd_swp_clear_soft_dirty(pmd)));
+ }
+@@ -780,6 +816,9 @@ static void __init pmd_swap_tests(unsigned long pfn, pgprot_t prot)
+ 	swp_entry_t swp;
+ 	pmd_t pmd;
+ 
++	if (!has_transparent_hugepage())
++		return;
++
+ 	pr_debug("Validating PMD swap\n");
+ 	pmd = pfn_pmd(pfn, prot);
+ 	swp = __pmd_to_swp_entry(pmd);
+diff --git a/mm/gup.c b/mm/gup.c
+index 3ded6a5f26b25..90262e448552a 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -44,6 +44,23 @@ static void hpage_pincount_sub(struct page *page, int refs)
+ 	atomic_sub(refs, compound_pincount_ptr(page));
+ }
+ 
++/* Equivalent to calling put_page() @refs times. */
++static void put_page_refs(struct page *page, int refs)
++{
++#ifdef CONFIG_DEBUG_VM
++	if (VM_WARN_ON_ONCE_PAGE(page_ref_count(page) < refs, page))
++		return;
++#endif
++
++	/*
++	 * Calling put_page() for each ref is unnecessarily slow. Only the last
++	 * ref needs a put_page().
++	 */
++	if (refs > 1)
++		page_ref_sub(page, refs - 1);
++	put_page(page);
++}
++
+ /*
+  * Return the compound head page with ref appropriately incremented,
+  * or NULL if that failed.
+@@ -56,6 +73,21 @@ static inline struct page *try_get_compound_head(struct page *page, int refs)
+ 		return NULL;
+ 	if (unlikely(!page_cache_add_speculative(head, refs)))
+ 		return NULL;
++
++	/*
++	 * At this point we have a stable reference to the head page; but it
++	 * could be that between the compound_head() lookup and the refcount
++	 * increment, the compound page was split, in which case we'd end up
++	 * holding a reference on a page that has nothing to do with the page
++	 * we were given anymore.
++	 * So now that the head page is stable, recheck that the pages still
++	 * belong together.
++	 */
++	if (unlikely(compound_head(page) != head)) {
++		put_page_refs(head, refs);
++		return NULL;
++	}
++
+ 	return head;
+ }
+ 
+@@ -95,6 +127,14 @@ __maybe_unused struct page *try_grab_compound_head(struct page *page,
+ 			     !is_pinnable_page(page)))
+ 			return NULL;
+ 
++		/*
++		 * CAUTION: Don't use compound_head() on the page before this
++		 * point, the result won't be stable.
++		 */
++		page = try_get_compound_head(page, refs);
++		if (!page)
++			return NULL;
++
+ 		/*
+ 		 * When pinning a compound page of order > 1 (which is what
+ 		 * hpage_pincount_available() checks for), use an exact count to
+@@ -103,15 +143,10 @@ __maybe_unused struct page *try_grab_compound_head(struct page *page,
+ 		 * However, be sure to *also* increment the normal page refcount
+ 		 * field at least once, so that the page really is pinned.
+ 		 */
+-		if (!hpage_pincount_available(page))
+-			refs *= GUP_PIN_COUNTING_BIAS;
+-
+-		page = try_get_compound_head(page, refs);
+-		if (!page)
+-			return NULL;
+-
+ 		if (hpage_pincount_available(page))
+ 			hpage_pincount_add(page, refs);
++		else
++			page_ref_add(page, refs * (GUP_PIN_COUNTING_BIAS - 1));
+ 
+ 		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_ACQUIRED,
+ 				    orig_refs);
+@@ -135,14 +170,7 @@ static void put_compound_head(struct page *page, int refs, unsigned int flags)
+ 			refs *= GUP_PIN_COUNTING_BIAS;
+ 	}
+ 
+-	VM_BUG_ON_PAGE(page_ref_count(page) < refs, page);
+-	/*
+-	 * Calling put_page() for each ref is unnecessarily slow. Only the last
+-	 * ref needs a put_page().
+-	 */
+-	if (refs > 1)
+-		page_ref_sub(page, refs - 1);
+-	put_page(page);
++	put_page_refs(page, refs);
+ }
+ 
+ /**
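
put_page_refs() above factors out a batching trick: dropping N references costs one bulk subtraction of N - 1 plus a single real put that can observe the final reference going away. The same idea with C11 atomics and an invented object type:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
	atomic_int refcount;
};

static void obj_free(struct obj *o)
{
	printf("freeing object\n");
	free(o);
}

/* One decrement; frees the object when it drops the last reference. */
static void obj_put(struct obj *o)
{
	if (atomic_fetch_sub(&o->refcount, 1) == 1)
		obj_free(o);
}

/* Dropping refs references needs only one bulk subtraction of
 * refs - 1, then a single real put that can see the zero crossing. */
static void obj_put_refs(struct obj *o, int refs)
{
	if (refs > 1)
		atomic_fetch_sub(&o->refcount, refs - 1);
	obj_put(o);
}

int main(void)
{
	struct obj *o = malloc(sizeof(*o));

	atomic_init(&o->refcount, 5);
	obj_put_refs(o, 2);	/* 5 -> 3, object survives */
	obj_put_refs(o, 3);	/* 3 -> 0, object freed   */
	return 0;
}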
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 6d2a0119fc58e..8857ef1543eb6 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -64,7 +64,14 @@ static atomic_t huge_zero_refcount;
+ struct page *huge_zero_page __read_mostly;
+ unsigned long huge_zero_pfn __read_mostly = ~0UL;
+ 
+-bool transparent_hugepage_enabled(struct vm_area_struct *vma)
++static inline bool file_thp_enabled(struct vm_area_struct *vma)
++{
++	return transhuge_vma_enabled(vma, vma->vm_flags) && vma->vm_file &&
++	       !inode_is_open_for_write(vma->vm_file->f_inode) &&
++	       (vma->vm_flags & VM_EXEC);
++}
++
++bool transparent_hugepage_active(struct vm_area_struct *vma)
+ {
+ 	/* The addr is used to check if the vma size fits */
+ 	unsigned long addr = (vma->vm_end & HPAGE_PMD_MASK) - HPAGE_PMD_SIZE;
+@@ -75,6 +82,8 @@ bool transparent_hugepage_enabled(struct vm_area_struct *vma)
+ 		return __transparent_hugepage_enabled(vma);
+ 	if (vma_is_shmem(vma))
+ 		return shmem_huge_enabled(vma);
++	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS))
++		return file_thp_enabled(vma);
+ 
+ 	return false;
+ }
+@@ -1604,7 +1613,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ 	 * If other processes are mapping this page, we couldn't discard
+ 	 * the page unless they all do MADV_FREE so let's skip the page.
+ 	 */
+-	if (page_mapcount(page) != 1)
++	if (total_mapcount(page) != 1)
+ 		goto out;
+ 
+ 	if (!trylock_page(page))
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 5ba5a0da6d572..65e0e8642ded8 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1318,8 +1318,6 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+ 	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
+ }
+ 
+-static void prep_new_huge_page(struct hstate *h, struct page *page, int nid);
+-static void prep_compound_gigantic_page(struct page *page, unsigned int order);
+ #else /* !CONFIG_CONTIG_ALLOC */
+ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+ 					int nid, nodemask_t *nodemask)
+@@ -2625,16 +2623,10 @@ found:
+ 	return 1;
+ }
+ 
+-static void __init prep_compound_huge_page(struct page *page,
+-		unsigned int order)
+-{
+-	if (unlikely(order > (MAX_ORDER - 1)))
+-		prep_compound_gigantic_page(page, order);
+-	else
+-		prep_compound_page(page, order);
+-}
+-
+-/* Put bootmem huge pages into the standard lists after mem_map is up */
++/*
++ * Put bootmem huge pages into the standard lists after mem_map is up.
++ * Note: This only applies to gigantic (order >= MAX_ORDER) pages.
++ */
+ static void __init gather_bootmem_prealloc(void)
+ {
+ 	struct huge_bootmem_page *m;
+@@ -2643,20 +2635,19 @@ static void __init gather_bootmem_prealloc(void)
+ 		struct page *page = virt_to_page(m);
+ 		struct hstate *h = m->hstate;
+ 
++		VM_BUG_ON(!hstate_is_gigantic(h));
+ 		WARN_ON(page_count(page) != 1);
+-		prep_compound_huge_page(page, huge_page_order(h));
++		prep_compound_gigantic_page(page, huge_page_order(h));
+ 		WARN_ON(PageReserved(page));
+ 		prep_new_huge_page(h, page, page_to_nid(page));
+ 		put_page(page); /* free it into the hugepage allocator */
+ 
+ 		/*
+-		 * If we had gigantic hugepages allocated at boot time, we need
+-		 * to restore the 'stolen' pages to totalram_pages in order to
+-		 * fix confusing memory reports from free(1) and another
+-		 * side-effects, like CommitLimit going negative.
++		 * We need to restore the 'stolen' pages to totalram_pages
++		 * in order to fix confusing memory reports from free(1) and
++		 * other side-effects, like CommitLimit going negative.
+ 		 */
+-		if (hstate_is_gigantic(h))
+-			adjust_managed_page_count(page, pages_per_huge_page(h));
++		adjust_managed_page_count(page, pages_per_huge_page(h));
+ 		cond_resched();
+ 	}
+ }
+diff --git a/mm/kfence/core.c b/mm/kfence/core.c
+index 4d21ac44d5d35..d7666ace9d2e4 100644
+--- a/mm/kfence/core.c
++++ b/mm/kfence/core.c
+@@ -636,7 +636,7 @@ static void toggle_allocation_gate(struct work_struct *work)
+ 	/* Disable static key and reset timer. */
+ 	static_branch_disable(&kfence_allocation_key);
+ #endif
+-	queue_delayed_work(system_power_efficient_wq, &kfence_timer,
++	queue_delayed_work(system_unbound_wq, &kfence_timer,
+ 			   msecs_to_jiffies(kfence_sample_interval));
+ }
+ static DECLARE_DELAYED_WORK(kfence_timer, toggle_allocation_gate);
+@@ -666,7 +666,7 @@ void __init kfence_init(void)
+ 	}
+ 
+ 	WRITE_ONCE(kfence_enabled, true);
+-	queue_delayed_work(system_power_efficient_wq, &kfence_timer, 0);
++	queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
+ 	pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE,
+ 		CONFIG_KFENCE_NUM_OBJECTS, (void *)__kfence_pool,
+ 		(void *)(__kfence_pool + KFENCE_POOL_SIZE));
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 6c0185fdd8158..d97b20fad6e8e 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -442,9 +442,7 @@ static inline int khugepaged_test_exit(struct mm_struct *mm)
+ static bool hugepage_vma_check(struct vm_area_struct *vma,
+ 			       unsigned long vm_flags)
+ {
+-	/* Explicitly disabled through madvise. */
+-	if ((vm_flags & VM_NOHUGEPAGE) ||
+-	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
++	if (!transhuge_vma_enabled(vma, vm_flags))
+ 		return false;
+ 
+ 	/* Enabled via shmem mount options or sysfs settings. */
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 64ada9e650a51..f4f2d05c8c7ba 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -2739,6 +2739,13 @@ retry:
+ }
+ 
+ #ifdef CONFIG_MEMCG_KMEM
++/*
++ * The allocated objcg pointers array is not accounted directly.
++ * Moreover, it should not come from a DMA buffer and is not readily
++ * reclaimable. So those GFP bits should be masked off.
++ */
++#define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT)
++
+ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
+ 				 gfp_t gfp, bool new_page)
+ {
+@@ -2746,6 +2753,7 @@ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
+ 	unsigned long memcg_data;
+ 	void *vec;
+ 
++	gfp &= ~OBJCGS_CLEAR_MASK;
+ 	vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
+ 			   page_to_nid(page));
+ 	if (!vec)
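
The OBJCGS_CLEAR_MASK hunk above is plain flag masking: the objcg pointer array must not inherit placement and accounting bits that only make sense for the slab page it annotates. A sketch with invented DEMO_* flag values — the real GFP bits live in include/linux/gfp.h and differ:

#include <stdio.h>

/* Made-up flag values purely for illustration. */
#define DEMO_GFP_DMA		0x01u
#define DEMO_GFP_RECLAIMABLE	0x10u
#define DEMO_GFP_ACCOUNT	0x20u
#define DEMO_GFP_KERNEL		0x40u

#define DEMO_CLEAR_MASK \
	(DEMO_GFP_DMA | DEMO_GFP_RECLAIMABLE | DEMO_GFP_ACCOUNT)

int main(void)
{
	unsigned int gfp = DEMO_GFP_KERNEL | DEMO_GFP_ACCOUNT | DEMO_GFP_DMA;

	/* The pointer array inherits the caller's flags, minus the bits
	 * meant for the slab page itself. */
	gfp &= ~DEMO_CLEAR_MASK;
	printf("flags after masking: %#x\n", gfp);	/* DEMO_GFP_KERNEL only */
	return 0;
}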
+diff --git a/mm/memory.c b/mm/memory.c
+index 486f4a2874e72..b15367c285bde 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3353,6 +3353,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+ 	struct page *page = NULL, *swapcache;
++	struct swap_info_struct *si = NULL;
+ 	swp_entry_t entry;
+ 	pte_t pte;
+ 	int locked;
+@@ -3380,14 +3381,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ 		goto out;
+ 	}
+ 
++	/* Prevent swapoff from happening to us. */
++	si = get_swap_device(entry);
++	if (unlikely(!si))
++		goto out;
+ 
+ 	delayacct_set_flag(current, DELAYACCT_PF_SWAPIN);
+ 	page = lookup_swap_cache(entry, vma, vmf->address);
+ 	swapcache = page;
+ 
+ 	if (!page) {
+-		struct swap_info_struct *si = swp_swap_info(entry);
+-
+ 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
+ 		    __swap_count(entry) == 1) {
+ 			/* skip swapcache */
+@@ -3556,6 +3559,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ unlock:
+ 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+ out:
++	if (si)
++		put_swap_device(si);
+ 	return ret;
+ out_nomap:
+ 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+@@ -3567,6 +3572,8 @@ out_release:
+ 		unlock_page(swapcache);
+ 		put_page(swapcache);
+ 	}
++	if (si)
++		put_swap_device(si);
+ 	return ret;
+ }
+ 
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 41ff2c9896c4f..047209d6602eb 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -1288,7 +1288,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
+ 	 * page_mapping() set, hugetlbfs specific move page routine will not
+ 	 * be called and we could leak usage counts for subpools.
+ 	 */
+-	if (page_private(hpage) && !page_mapping(hpage)) {
++	if (hugetlb_page_subpool(hpage) && !page_mapping(hpage)) {
+ 		rc = -EBUSY;
+ 		goto out_unlock;
+ 	}
+diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
+index dcdde4f722a40..2ae3f33b85b16 100644
+--- a/mm/mmap_lock.c
++++ b/mm/mmap_lock.c
+@@ -11,6 +11,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/smp.h>
+ #include <linux/trace_events.h>
++#include <linux/local_lock.h>
+ 
+ EXPORT_TRACEPOINT_SYMBOL(mmap_lock_start_locking);
+ EXPORT_TRACEPOINT_SYMBOL(mmap_lock_acquire_returned);
+@@ -39,21 +40,30 @@ static int reg_refcount; /* Protected by reg_lock. */
+  */
+ #define CONTEXT_COUNT 4
+ 
+-static DEFINE_PER_CPU(char __rcu *, memcg_path_buf);
++struct memcg_path {
++	local_lock_t lock;
++	char __rcu *buf;
++	local_t buf_idx;
++};
++static DEFINE_PER_CPU(struct memcg_path, memcg_paths) = {
++	.lock = INIT_LOCAL_LOCK(lock),
++	.buf_idx = LOCAL_INIT(0),
++};
++
+ static char **tmp_bufs;
+-static DEFINE_PER_CPU(int, memcg_path_buf_idx);
+ 
+ /* Called with reg_lock held. */
+ static void free_memcg_path_bufs(void)
+ {
++	struct memcg_path *memcg_path;
+ 	int cpu;
+ 	char **old = tmp_bufs;
+ 
+ 	for_each_possible_cpu(cpu) {
+-		*(old++) = rcu_dereference_protected(
+-			per_cpu(memcg_path_buf, cpu),
++		memcg_path = per_cpu_ptr(&memcg_paths, cpu);
++		*(old++) = rcu_dereference_protected(memcg_path->buf,
+ 			lockdep_is_held(&reg_lock));
+-		rcu_assign_pointer(per_cpu(memcg_path_buf, cpu), NULL);
++		rcu_assign_pointer(memcg_path->buf, NULL);
+ 	}
+ 
+ 	/* Wait for inflight memcg_path_buf users to finish. */
+@@ -88,7 +98,7 @@ int trace_mmap_lock_reg(void)
+ 		new = kmalloc(MEMCG_PATH_BUF_SIZE * CONTEXT_COUNT, GFP_KERNEL);
+ 		if (new == NULL)
+ 			goto out_fail_free;
+-		rcu_assign_pointer(per_cpu(memcg_path_buf, cpu), new);
++		rcu_assign_pointer(per_cpu_ptr(&memcg_paths, cpu)->buf, new);
+ 		/* Don't need to wait for inflights, they'd have gotten NULL. */
+ 	}
+ 
+@@ -122,23 +132,24 @@ out:
+ 
+ static inline char *get_memcg_path_buf(void)
+ {
++	struct memcg_path *memcg_path = this_cpu_ptr(&memcg_paths);
+ 	char *buf;
+ 	int idx;
+ 
+ 	rcu_read_lock();
+-	buf = rcu_dereference(*this_cpu_ptr(&memcg_path_buf));
++	buf = rcu_dereference(memcg_path->buf);
+ 	if (buf == NULL) {
+ 		rcu_read_unlock();
+ 		return NULL;
+ 	}
+-	idx = this_cpu_add_return(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE) -
++	idx = local_add_return(MEMCG_PATH_BUF_SIZE, &memcg_path->buf_idx) -
+ 	      MEMCG_PATH_BUF_SIZE;
+ 	return &buf[idx];
+ }
+ 
+ static inline void put_memcg_path_buf(void)
+ {
+-	this_cpu_sub(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE);
++	local_sub(MEMCG_PATH_BUF_SIZE, &this_cpu_ptr(&memcg_paths)->buf_idx);
+ 	rcu_read_unlock();
+ }
+ 
+@@ -179,14 +190,14 @@ out:
+ #define TRACE_MMAP_LOCK_EVENT(type, mm, ...)                                   \
+ 	do {                                                                   \
+ 		const char *memcg_path;                                        \
+-		preempt_disable();                                             \
++		local_lock(&memcg_paths.lock);				       \
+ 		memcg_path = get_mm_memcg_path(mm);                            \
+ 		trace_mmap_lock_##type(mm,                                     \
+ 				       memcg_path != NULL ? memcg_path : "",   \
+ 				       ##__VA_ARGS__);                         \
+ 		if (likely(memcg_path != NULL))                                \
+ 			put_memcg_path_buf();                                  \
+-		preempt_enable();                                              \
++		local_unlock(&memcg_paths.lock);			       \
+ 	} while (0)
+ 
+ #else /* !CONFIG_MEMCG */
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 04220581579cd..fc5beebf69887 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6400,7 +6400,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
+ 		return;
+ 
+ 	/*
+-	 * The call to memmap_init_zone should have already taken care
++	 * The call to memmap_init should have already taken care
+ 	 * of the pages reserved for the memmap, so we can just jump to
+ 	 * the end of that region and start processing the device pages.
+ 	 */
+@@ -6465,7 +6465,7 @@ static void __meminit zone_init_free_lists(struct zone *zone)
+ /*
+  * Only struct pages that correspond to ranges defined by memblock.memory
+  * are zeroed and initialized by going through __init_single_page() during
+- * memmap_init_zone().
++ * memmap_init_zone_range().
+  *
+  * But, there could be struct pages that correspond to holes in
+  * memblock.memory. This can happen because of the following reasons:
+@@ -6484,9 +6484,9 @@ static void __meminit zone_init_free_lists(struct zone *zone)
+  *   zone/node above the hole except for the trailing pages in the last
+  *   section that will be appended to the zone/node below.
+  */
+-static u64 __meminit init_unavailable_range(unsigned long spfn,
+-					    unsigned long epfn,
+-					    int zone, int node)
++static void __init init_unavailable_range(unsigned long spfn,
++					  unsigned long epfn,
++					  int zone, int node)
+ {
+ 	unsigned long pfn;
+ 	u64 pgcnt = 0;
+@@ -6502,56 +6502,77 @@ static u64 __meminit init_unavailable_range(unsigned long spfn,
+ 		pgcnt++;
+ 	}
+ 
+-	return pgcnt;
++	if (pgcnt)
++		pr_info("On node %d, zone %s: %lld pages in unavailable ranges",
++			node, zone_names[zone], pgcnt);
+ }
+ #else
+-static inline u64 init_unavailable_range(unsigned long spfn, unsigned long epfn,
+-					 int zone, int node)
++static inline void init_unavailable_range(unsigned long spfn,
++					  unsigned long epfn,
++					  int zone, int node)
+ {
+-	return 0;
+ }
+ #endif
+ 
+-void __meminit __weak memmap_init_zone(struct zone *zone)
++static void __init memmap_init_zone_range(struct zone *zone,
++					  unsigned long start_pfn,
++					  unsigned long end_pfn,
++					  unsigned long *hole_pfn)
+ {
+ 	unsigned long zone_start_pfn = zone->zone_start_pfn;
+ 	unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
+-	int i, nid = zone_to_nid(zone), zone_id = zone_idx(zone);
+-	static unsigned long hole_pfn;
++	int nid = zone_to_nid(zone), zone_id = zone_idx(zone);
++
++	start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
++	end_pfn = clamp(end_pfn, zone_start_pfn, zone_end_pfn);
++
++	if (start_pfn >= end_pfn)
++		return;
++
++	memmap_init_range(end_pfn - start_pfn, nid, zone_id, start_pfn,
++			  zone_end_pfn, MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
++
++	if (*hole_pfn < start_pfn)
++		init_unavailable_range(*hole_pfn, start_pfn, zone_id, nid);
++
++	*hole_pfn = end_pfn;
++}
++
++static void __init memmap_init(void)
++{
+ 	unsigned long start_pfn, end_pfn;
+-	u64 pgcnt = 0;
++	unsigned long hole_pfn = 0;
++	int i, j, zone_id, nid;
+ 
+-	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
+-		start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
+-		end_pfn = clamp(end_pfn, zone_start_pfn, zone_end_pfn);
++	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
++		struct pglist_data *node = NODE_DATA(nid);
++
++		for (j = 0; j < MAX_NR_ZONES; j++) {
++			struct zone *zone = node->node_zones + j;
+ 
+-		if (end_pfn > start_pfn)
+-			memmap_init_range(end_pfn - start_pfn, nid,
+-					zone_id, start_pfn, zone_end_pfn,
+-					MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
++			if (!populated_zone(zone))
++				continue;
+ 
+-		if (hole_pfn < start_pfn)
+-			pgcnt += init_unavailable_range(hole_pfn, start_pfn,
+-							zone_id, nid);
+-		hole_pfn = end_pfn;
++			memmap_init_zone_range(zone, start_pfn, end_pfn,
++					       &hole_pfn);
++			zone_id = j;
++		}
+ 	}
+ 
+ #ifdef CONFIG_SPARSEMEM
+ 	/*
+-	 * Initialize the hole in the range [zone_end_pfn, section_end].
+-	 * If zone boundary falls in the middle of a section, this hole
+-	 * will be re-initialized during the call to this function for the
+-	 * higher zone.
++	 * Initialize the memory map for the hole in the range [memory_end,
++	 * section_end].
++	 * Append the pages in this hole to the highest zone in the last
++	 * node.
++	 * The call to init_unavailable_range() is outside the ifdef to
++	 * silence the compiler warning about zone_id set but not used;
++	 * for FLATMEM it is a nop anyway.
+ 	 */
+-	end_pfn = round_up(zone_end_pfn, PAGES_PER_SECTION);
++	end_pfn = round_up(end_pfn, PAGES_PER_SECTION);
+ 	if (hole_pfn < end_pfn)
+-		pgcnt += init_unavailable_range(hole_pfn, end_pfn,
+-						zone_id, nid);
+ #endif
+-
+-	if (pgcnt)
+-		pr_info("  %s zone: %llu pages in unavailable ranges\n",
+-			zone->name, pgcnt);
++		init_unavailable_range(hole_pfn, end_pfn, zone_id, nid);
+ }
+ 
+ static int zone_batchsize(struct zone *zone)
+@@ -7254,7 +7275,6 @@ static void __init free_area_init_core(struct pglist_data *pgdat)
+ 		set_pageblock_order();
+ 		setup_usemap(zone);
+ 		init_currently_empty_zone(zone, zone->zone_start_pfn, size);
+-		memmap_init_zone(zone);
+ 	}
+ }
+ 
+@@ -7780,6 +7800,8 @@ void __init free_area_init(unsigned long *max_zone_pfn)
+ 			node_set_state(nid, N_MEMORY);
+ 		check_for_memory(pgdat, nid);
+ 	}
++
++	memmap_init();
+ }
+ 
+ static int __init cmdline_parse_core(char *p, unsigned long *core,
+@@ -8065,14 +8087,14 @@ static void setup_per_zone_lowmem_reserve(void)
+ 			unsigned long managed_pages = 0;
+ 
+ 			for (j = i + 1; j < MAX_NR_ZONES; j++) {
+-				if (clear) {
+-					zone->lowmem_reserve[j] = 0;
+-				} else {
+-					struct zone *upper_zone = &pgdat->node_zones[j];
++				struct zone *upper_zone = &pgdat->node_zones[j];
+ 
+-					managed_pages += zone_managed_pages(upper_zone);
++				managed_pages += zone_managed_pages(upper_zone);
++
++				if (clear)
++					zone->lowmem_reserve[j] = 0;
++				else
+ 					zone->lowmem_reserve[j] = managed_pages / ratio;
+-				}
+ 			}
+ 		}
+ 	}
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 5d46611cba8dc..5fa21d66af203 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1696,7 +1696,8 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
+ 	struct address_space *mapping = inode->i_mapping;
+ 	struct shmem_inode_info *info = SHMEM_I(inode);
+ 	struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
+-	struct page *page;
++	struct swap_info_struct *si;
++	struct page *page = NULL;
+ 	swp_entry_t swap;
+ 	int error;
+ 
+@@ -1704,6 +1705,12 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
+ 	swap = radix_to_swp_entry(*pagep);
+ 	*pagep = NULL;
+ 
++	/* Prevent swapoff from happening to us. */
++	si = get_swap_device(swap);
++	if (!si) {
++		error = -EINVAL;
++		goto failed;
++	}
+ 	/* Look it up and read it in.. */
+ 	page = lookup_swap_cache(swap, NULL, 0);
+ 	if (!page) {
+@@ -1765,6 +1772,8 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
+ 	swap_free(swap);
+ 
+ 	*pagep = page;
++	if (si)
++		put_swap_device(si);
+ 	return 0;
+ failed:
+ 	if (!shmem_confirm_swap(mapping, index, swap))
+@@ -1775,6 +1784,9 @@ unlock:
+ 		put_page(page);
+ 	}
+ 
++	if (si)
++		put_swap_device(si);
++
+ 	return error;
+ }
+ 
+@@ -4028,8 +4040,7 @@ bool shmem_huge_enabled(struct vm_area_struct *vma)
+ 	loff_t i_size;
+ 	pgoff_t off;
+ 
+-	if ((vma->vm_flags & VM_NOHUGEPAGE) ||
+-	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
++	if (!transhuge_vma_enabled(vma, vma->vm_flags))
+ 		return false;
+ 	if (shmem_huge == SHMEM_HUGE_FORCE)
+ 		return true;
+diff --git a/mm/slab.h b/mm/slab.h
+index 18c1927cd196c..b3294712a6868 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -309,7 +309,6 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
+ 	if (!memcg_kmem_enabled() || !objcg)
+ 		return;
+ 
+-	flags &= ~__GFP_ACCOUNT;
+ 	for (i = 0; i < size; i++) {
+ 		if (likely(p[i])) {
+ 			page = virt_to_head_page(p[i]);
+diff --git a/mm/z3fold.c b/mm/z3fold.c
+index 7fe7adaaad013..ed0023dc5a3d2 100644
+--- a/mm/z3fold.c
++++ b/mm/z3fold.c
+@@ -1059,6 +1059,7 @@ static void z3fold_destroy_pool(struct z3fold_pool *pool)
+ 	destroy_workqueue(pool->compact_wq);
+ 	destroy_workqueue(pool->release_wq);
+ 	z3fold_unregister_migration(pool);
++	free_percpu(pool->unbuddied);
+ 	kfree(pool);
+ }
+ 
+@@ -1382,7 +1383,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
+ 			if (zhdr->foreign_handles ||
+ 			    test_and_set_bit(PAGE_CLAIMED, &page->private)) {
+ 				if (kref_put(&zhdr->refcount,
+-						release_z3fold_page))
++						release_z3fold_page_locked))
+ 					atomic64_dec(&pool->pages_nr);
+ 				else
+ 					z3fold_page_unlock(zhdr);
+diff --git a/mm/zswap.c b/mm/zswap.c
+index 20763267a219e..706e0f98125ad 100644
+--- a/mm/zswap.c
++++ b/mm/zswap.c
+@@ -967,6 +967,13 @@ static int zswap_writeback_entry(struct zpool *pool, unsigned long handle)
+ 	spin_unlock(&tree->lock);
+ 	BUG_ON(offset != entry->offset);
+ 
++	src = (u8 *)zhdr + sizeof(struct zswap_header);
++	if (!zpool_can_sleep_mapped(pool)) {
++		memcpy(tmp, src, entry->length);
++		src = tmp;
++		zpool_unmap_handle(pool, handle);
++	}
++
+ 	/* try to allocate swap cache page */
+ 	switch (zswap_get_swap_cache_page(swpentry, &page)) {
+ 	case ZSWAP_SWAPCACHE_FAIL: /* no memory or invalidate happened */
+@@ -982,17 +989,7 @@ static int zswap_writeback_entry(struct zpool *pool, unsigned long handle)
+ 	case ZSWAP_SWAPCACHE_NEW: /* page is locked */
+ 		/* decompress */
+ 		acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
+-
+ 		dlen = PAGE_SIZE;
+-		src = (u8 *)zhdr + sizeof(struct zswap_header);
+-
+-		if (!zpool_can_sleep_mapped(pool)) {
+-
+-			memcpy(tmp, src, entry->length);
+-			src = tmp;
+-
+-			zpool_unmap_handle(pool, handle);
+-		}
+ 
+ 		mutex_lock(acomp_ctx->mutex);
+ 		sg_init_one(&input, src, entry->length);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 016b2999f2195..b077d150ac529 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5296,8 +5296,19 @@ static void hci_le_ext_adv_term_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+ 
+-	if (ev->status)
++	if (ev->status) {
++		struct adv_info *adv;
++
++		adv = hci_find_adv_instance(hdev, ev->handle);
++		if (!adv)
++			return;
++
++		/* Remove advertising as it has been terminated */
++		hci_remove_adv_instance(hdev, ev->handle);
++		mgmt_advertising_removed(NULL, hdev, ev->handle);
++
+ 		return;
++	}
+ 
+ 	conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->conn_handle));
+ 	if (conn) {
+@@ -5441,7 +5452,7 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ 	struct hci_conn *conn;
+ 	bool match;
+ 	u32 flags;
+-	u8 *ptr, real_len;
++	u8 *ptr;
+ 
+ 	switch (type) {
+ 	case LE_ADV_IND:
+@@ -5472,14 +5483,10 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ 			break;
+ 	}
+ 
+-	real_len = ptr - data;
+-
+-	/* Adjust for actual length */
+-	if (len != real_len) {
+-		bt_dev_err_ratelimited(hdev, "advertising data len corrected %u -> %u",
+-				       len, real_len);
+-		len = real_len;
+-	}
++	/* Adjust for actual length. This handles the case when a remote
++	 * device advertises with an incorrect data length.
++	 */
++	len = ptr - data;
+ 
+ 	/* If the direct address is present, then this report is from
+ 	 * a LE Direct Advertising Report event. In that case it is
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index fa9125b782f85..b069f640394d0 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -1697,30 +1697,33 @@ void __hci_req_update_scan_rsp_data(struct hci_request *req, u8 instance)
+ 		return;
+ 
+ 	if (ext_adv_capable(hdev)) {
+-		struct hci_cp_le_set_ext_scan_rsp_data cp;
++		struct {
++			struct hci_cp_le_set_ext_scan_rsp_data cp;
++			u8 data[HCI_MAX_EXT_AD_LENGTH];
++		} pdu;
+ 
+-		memset(&cp, 0, sizeof(cp));
++		memset(&pdu, 0, sizeof(pdu));
+ 
+ 		if (instance)
+ 			len = create_instance_scan_rsp_data(hdev, instance,
+-							    cp.data);
++							    pdu.data);
+ 		else
+-			len = create_default_scan_rsp_data(hdev, cp.data);
++			len = create_default_scan_rsp_data(hdev, pdu.data);
+ 
+ 		if (hdev->scan_rsp_data_len == len &&
+-		    !memcmp(cp.data, hdev->scan_rsp_data, len))
++		    !memcmp(pdu.data, hdev->scan_rsp_data, len))
+ 			return;
+ 
+-		memcpy(hdev->scan_rsp_data, cp.data, sizeof(cp.data));
++		memcpy(hdev->scan_rsp_data, pdu.data, len);
+ 		hdev->scan_rsp_data_len = len;
+ 
+-		cp.handle = instance;
+-		cp.length = len;
+-		cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
+-		cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
++		pdu.cp.handle = instance;
++		pdu.cp.length = len;
++		pdu.cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
++		pdu.cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
+ 
+-		hci_req_add(req, HCI_OP_LE_SET_EXT_SCAN_RSP_DATA, sizeof(cp),
+-			    &cp);
++		hci_req_add(req, HCI_OP_LE_SET_EXT_SCAN_RSP_DATA,
++			    sizeof(pdu.cp) + len, &pdu.cp);
+ 	} else {
+ 		struct hci_cp_le_set_scan_rsp_data cp;
+ 
+@@ -1843,26 +1846,30 @@ void __hci_req_update_adv_data(struct hci_request *req, u8 instance)
+ 		return;
+ 
+ 	if (ext_adv_capable(hdev)) {
+-		struct hci_cp_le_set_ext_adv_data cp;
++		struct {
++			struct hci_cp_le_set_ext_adv_data cp;
++			u8 data[HCI_MAX_EXT_AD_LENGTH];
++		} pdu;
+ 
+-		memset(&cp, 0, sizeof(cp));
++		memset(&pdu, 0, sizeof(pdu));
+ 
+-		len = create_instance_adv_data(hdev, instance, cp.data);
++		len = create_instance_adv_data(hdev, instance, pdu.data);
+ 
+ 		/* There's nothing to do if the data hasn't changed */
+ 		if (hdev->adv_data_len == len &&
+-		    memcmp(cp.data, hdev->adv_data, len) == 0)
++		    memcmp(pdu.data, hdev->adv_data, len) == 0)
+ 			return;
+ 
+-		memcpy(hdev->adv_data, cp.data, sizeof(cp.data));
++		memcpy(hdev->adv_data, pdu.data, len);
+ 		hdev->adv_data_len = len;
+ 
+-		cp.length = len;
+-		cp.handle = instance;
+-		cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
+-		cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
++		pdu.cp.length = len;
++		pdu.cp.handle = instance;
++		pdu.cp.operation = LE_SET_ADV_DATA_OP_COMPLETE;
++		pdu.cp.frag_pref = LE_SET_ADV_DATA_NO_FRAG;
+ 
+-		hci_req_add(req, HCI_OP_LE_SET_EXT_ADV_DATA, sizeof(cp), &cp);
++		hci_req_add(req, HCI_OP_LE_SET_EXT_ADV_DATA,
++			    sizeof(pdu.cp) + len, &pdu.cp);
+ 	} else {
+ 		struct hci_cp_le_set_adv_data cp;
+ 
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index f9be7f9084d67..023a98f7c9922 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -7585,6 +7585,9 @@ static bool tlv_data_is_valid(struct hci_dev *hdev, u32 adv_flags, u8 *data,
+ 	for (i = 0, cur_len = 0; i < len; i += (cur_len + 1)) {
+ 		cur_len = data[i];
+ 
++		if (!cur_len)
++			continue;
++
+ 		if (data[i + 1] == EIR_FLAGS &&
+ 		    (!is_adv_data || flags_managed(adv_flags)))
+ 			return false;
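
tlv_data_is_valid() above walks length-prefixed advertising fields, and a zero-length field carries no type byte, so it must be stepped over before data[i + 1] is touched. A standalone walker over the same [len][type][value...] layout — the sample bytes are invented and full validation is omitted:

#include <stdint.h>
#include <stdio.h>

/* Walk Bluetooth-style advertising data: each field is
 * [len][type][len - 1 value bytes]. A len of 0 is padding with no
 * type byte, so it is skipped before data[i + 1] is read. */
static void walk_ad_fields(const uint8_t *data, unsigned int len)
{
	unsigned int i, cur_len;

	for (i = 0, cur_len = 0; i < len; i += cur_len + 1) {
		cur_len = data[i];
		if (!cur_len)
			continue;	/* zero-length field: step over one byte */
		if (i + cur_len >= len)
			break;		/* truncated field */
		printf("field type 0x%02x, %u value bytes\n",
		       data[i + 1], cur_len - 1);
	}
}

int main(void)
{
	/* Flags field, one zero-length pad byte, then a shortened name. */
	const uint8_t ad[] = { 0x02, 0x01, 0x06, 0x00, 0x03, 0x08, 'h', 'i' };

	walk_ad_fields(ad, sizeof(ad));
	return 0;
}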
+diff --git a/net/bpfilter/main.c b/net/bpfilter/main.c
+index 05e1cfc1e5cd1..291a925462463 100644
+--- a/net/bpfilter/main.c
++++ b/net/bpfilter/main.c
+@@ -57,7 +57,7 @@ int main(void)
+ {
+ 	debug_f = fopen("/dev/kmsg", "w");
+ 	setvbuf(debug_f, 0, _IOLBF, 0);
+-	fprintf(debug_f, "Started bpfilter\n");
++	fprintf(debug_f, "<5>Started bpfilter\n");
+ 	loop();
+ 	fclose(debug_f);
+ 	return 0;
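
On the bpfilter change: writes to /dev/kmsg may begin with "<N>", where N is the syslog level (0 emergency through 7 debug), so the "<5>" prefix logs the message at NOTICE rather than the default level. A tiny demonstration; opening /dev/kmsg needs privileges, so this sketch falls back to stderr:

#include <stdio.h>

int main(void)
{
	FILE *log = fopen("/dev/kmsg", "w");

	if (!log)
		log = stderr;
	setvbuf(log, NULL, _IOLBF, 0);
	fprintf(log, "<5>hello from userspace at NOTICE priority\n");
	if (log != stderr)
		fclose(log);
	return 0;
}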
+diff --git a/net/can/bcm.c b/net/can/bcm.c
+index f3e4d9528fa38..0928a39c4423b 100644
+--- a/net/can/bcm.c
++++ b/net/can/bcm.c
+@@ -785,6 +785,7 @@ static int bcm_delete_rx_op(struct list_head *ops, struct bcm_msg_head *mh,
+ 						  bcm_rx_handler, op);
+ 
+ 			list_del(&op->list);
++			synchronize_rcu();
+ 			bcm_remove_op(op);
+ 			return 1; /* done */
+ 		}
+@@ -1533,9 +1534,13 @@ static int bcm_release(struct socket *sock)
+ 					  REGMASK(op->can_id),
+ 					  bcm_rx_handler, op);
+ 
+-		bcm_remove_op(op);
+ 	}
+ 
++	synchronize_rcu();
++
++	list_for_each_entry_safe(op, next, &bo->rx_ops, list)
++		bcm_remove_op(op);
++
+ #if IS_ENABLED(CONFIG_PROC_FS)
+ 	/* remove procfs entry */
+ 	if (net->can.bcmproc_dir && bo->bcm_proc_read)
+diff --git a/net/can/gw.c b/net/can/gw.c
+index ba41248056029..d8861e862f157 100644
+--- a/net/can/gw.c
++++ b/net/can/gw.c
+@@ -596,6 +596,7 @@ static int cgw_notifier(struct notifier_block *nb,
+ 			if (gwj->src.dev == dev || gwj->dst.dev == dev) {
+ 				hlist_del(&gwj->list);
+ 				cgw_unregister_filter(net, gwj);
++				synchronize_rcu();
+ 				kmem_cache_free(cgw_cache, gwj);
+ 			}
+ 		}
+@@ -1154,6 +1155,7 @@ static void cgw_remove_all_jobs(struct net *net)
+ 	hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) {
+ 		hlist_del(&gwj->list);
+ 		cgw_unregister_filter(net, gwj);
++		synchronize_rcu();
+ 		kmem_cache_free(cgw_cache, gwj);
+ 	}
+ }
+@@ -1222,6 +1224,7 @@ static int cgw_remove_job(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 		hlist_del(&gwj->list);
+ 		cgw_unregister_filter(net, gwj);
++		synchronize_rcu();
+ 		kmem_cache_free(cgw_cache, gwj);
+ 		err = 0;
+ 		break;
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index be6183f8ca110..234cc4ad179a2 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1028,9 +1028,6 @@ static int isotp_release(struct socket *sock)
+ 
+ 	lock_sock(sk);
+ 
+-	hrtimer_cancel(&so->txtimer);
+-	hrtimer_cancel(&so->rxtimer);
+-
+ 	/* remove current filters & unregister */
+ 	if (so->bound && (!(so->opt.flags & CAN_ISOTP_SF_BROADCAST))) {
+ 		if (so->ifindex) {
+@@ -1042,10 +1039,14 @@ static int isotp_release(struct socket *sock)
+ 						  SINGLE_MASK(so->rxid),
+ 						  isotp_rcv, sk);
+ 				dev_put(dev);
++				synchronize_rcu();
+ 			}
+ 		}
+ 	}
+ 
++	hrtimer_cancel(&so->txtimer);
++	hrtimer_cancel(&so->rxtimer);
++
+ 	so->ifindex = 0;
+ 	so->bound = 0;
+ 
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index da3a7a7bcff2b..08c8606cfd9c7 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -193,6 +193,10 @@ static void j1939_can_rx_unregister(struct j1939_priv *priv)
+ 	can_rx_unregister(dev_net(ndev), ndev, J1939_CAN_ID, J1939_CAN_MASK,
+ 			  j1939_can_recv, priv);
+ 
++	/* The last reference of priv is dropped by the RCU deferred
++	 * j1939_sk_sock_destruct() of the last socket, so we can
++	 * safely drop this reference here.
++	 */
+ 	j1939_priv_put(priv);
+ }
+ 
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 56aa66147d5ac..e1a399821238f 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -398,6 +398,9 @@ static int j1939_sk_init(struct sock *sk)
+ 	atomic_set(&jsk->skb_pending, 0);
+ 	spin_lock_init(&jsk->sk_session_queue_lock);
+ 	INIT_LIST_HEAD(&jsk->sk_session_queue);
++
++	/* j1939_sk_sock_destruct() depends on SOCK_RCU_FREE flag */
++	sock_set_flag(sk, SOCK_RCU_FREE);
+ 	sk->sk_destruct = j1939_sk_sock_destruct;
+ 	sk->sk_protocol = CAN_J1939;
+ 
+@@ -673,7 +676,7 @@ static int j1939_sk_setsockopt(struct socket *sock, int level, int optname,
+ 
+ 	switch (optname) {
+ 	case SO_J1939_FILTER:
+-		if (!sockptr_is_null(optval)) {
++		if (!sockptr_is_null(optval) && optlen != 0) {
+ 			struct j1939_filter *f;
+ 			int c;
+ 
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 65ab4e21c087f..6541358a770bf 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -3263,8 +3263,6 @@ static int bpf_skb_proto_4_to_6(struct sk_buff *skb)
+ 			shinfo->gso_type |=  SKB_GSO_TCPV6;
+ 		}
+ 
+-		/* Due to IPv6 header, MSS needs to be downgraded. */
+-		skb_decrease_gso_size(shinfo, len_diff);
+ 		/* Header must be checked, and gso_segs recomputed. */
+ 		shinfo->gso_type |= SKB_GSO_DODGY;
+ 		shinfo->gso_segs = 0;
+@@ -3304,8 +3302,6 @@ static int bpf_skb_proto_6_to_4(struct sk_buff *skb)
+ 			shinfo->gso_type |=  SKB_GSO_TCPV4;
+ 		}
+ 
+-		/* Due to IPv4 header, MSS can be upgraded. */
+-		skb_increase_gso_size(shinfo, len_diff);
+ 		/* Header must be checked, and gso_segs recomputed. */
+ 		shinfo->gso_type |= SKB_GSO_DODGY;
+ 		shinfo->gso_segs = 0;
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index ec931b080156d..c6e75bd0035dc 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -543,7 +543,9 @@ static const struct rtnl_af_ops *rtnl_af_lookup(const int family)
+ {
+ 	const struct rtnl_af_ops *ops;
+ 
+-	list_for_each_entry_rcu(ops, &rtnl_af_ops, list) {
++	ASSERT_RTNL();
++
++	list_for_each_entry(ops, &rtnl_af_ops, list) {
+ 		if (ops->family == family)
+ 			return ops;
+ 	}
+@@ -2274,27 +2276,18 @@ static int validate_linkmsg(struct net_device *dev, struct nlattr *tb[])
+ 		nla_for_each_nested(af, tb[IFLA_AF_SPEC], rem) {
+ 			const struct rtnl_af_ops *af_ops;
+ 
+-			rcu_read_lock();
+ 			af_ops = rtnl_af_lookup(nla_type(af));
+-			if (!af_ops) {
+-				rcu_read_unlock();
++			if (!af_ops)
+ 				return -EAFNOSUPPORT;
+-			}
+ 
+-			if (!af_ops->set_link_af) {
+-				rcu_read_unlock();
++			if (!af_ops->set_link_af)
+ 				return -EOPNOTSUPP;
+-			}
+ 
+ 			if (af_ops->validate_link_af) {
+ 				err = af_ops->validate_link_af(dev, af);
+-				if (err < 0) {
+-					rcu_read_unlock();
++				if (err < 0)
+ 					return err;
+-				}
+ 			}
+-
+-			rcu_read_unlock();
+ 		}
+ 	}
+ 
+@@ -2868,17 +2861,12 @@ static int do_setlink(const struct sk_buff *skb,
+ 		nla_for_each_nested(af, tb[IFLA_AF_SPEC], rem) {
+ 			const struct rtnl_af_ops *af_ops;
+ 
+-			rcu_read_lock();
+-
+ 			BUG_ON(!(af_ops = rtnl_af_lookup(nla_type(af))));
+ 
+ 			err = af_ops->set_link_af(dev, af, extack);
+-			if (err < 0) {
+-				rcu_read_unlock();
++			if (err < 0)
+ 				goto errout;
+-			}
+ 
+-			rcu_read_unlock();
+ 			status |= DO_SETLINK_NOTIFY;
+ 		}
+ 	}
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 43ce17a6a5852..539c83a45665e 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -847,7 +847,7 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(sk_psock_msg_verdict);
+ 
+-static void sk_psock_skb_redirect(struct sk_buff *skb)
++static int sk_psock_skb_redirect(struct sk_buff *skb)
+ {
+ 	struct sk_psock *psock_other;
+ 	struct sock *sk_other;
+@@ -858,7 +858,7 @@ static void sk_psock_skb_redirect(struct sk_buff *skb)
+ 	 */
+ 	if (unlikely(!sk_other)) {
+ 		kfree_skb(skb);
+-		return;
++		return -EIO;
+ 	}
+ 	psock_other = sk_psock(sk_other);
+ 	/* This error indicates the socket is being torn down or had another
+@@ -866,19 +866,22 @@ static void sk_psock_skb_redirect(struct sk_buff *skb)
+ 	 * a socket that is in this state so we drop the skb.
+ 	 */
+ 	if (!psock_other || sock_flag(sk_other, SOCK_DEAD)) {
++		skb_bpf_redirect_clear(skb);
+ 		kfree_skb(skb);
+-		return;
++		return -EIO;
+ 	}
+ 	spin_lock_bh(&psock_other->ingress_lock);
+ 	if (!sk_psock_test_state(psock_other, SK_PSOCK_TX_ENABLED)) {
+ 		spin_unlock_bh(&psock_other->ingress_lock);
++		skb_bpf_redirect_clear(skb);
+ 		kfree_skb(skb);
+-		return;
++		return -EIO;
+ 	}
+ 
+ 	skb_queue_tail(&psock_other->ingress_skb, skb);
+ 	schedule_work(&psock_other->work);
+ 	spin_unlock_bh(&psock_other->ingress_lock);
++	return 0;
+ }
+ 
+ static void sk_psock_tls_verdict_apply(struct sk_buff *skb, struct sock *sk, int verdict)
+@@ -915,14 +918,15 @@ int sk_psock_tls_strp_read(struct sk_psock *psock, struct sk_buff *skb)
+ }
+ EXPORT_SYMBOL_GPL(sk_psock_tls_strp_read);
+ 
+-static void sk_psock_verdict_apply(struct sk_psock *psock,
+-				   struct sk_buff *skb, int verdict)
++static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
++				  int verdict)
+ {
+ 	struct sock *sk_other;
+-	int err = -EIO;
++	int err = 0;
+ 
+ 	switch (verdict) {
+ 	case __SK_PASS:
++		err = -EIO;
+ 		sk_other = psock->sk;
+ 		if (sock_flag(sk_other, SOCK_DEAD) ||
+ 		    !sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
+@@ -945,18 +949,25 @@ static void sk_psock_verdict_apply(struct sk_psock *psock,
+ 			if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
+ 				skb_queue_tail(&psock->ingress_skb, skb);
+ 				schedule_work(&psock->work);
++				err = 0;
+ 			}
+ 			spin_unlock_bh(&psock->ingress_lock);
++			if (err < 0) {
++				skb_bpf_redirect_clear(skb);
++				goto out_free;
++			}
+ 		}
+ 		break;
+ 	case __SK_REDIRECT:
+-		sk_psock_skb_redirect(skb);
++		err = sk_psock_skb_redirect(skb);
+ 		break;
+ 	case __SK_DROP:
+ 	default:
+ out_free:
+ 		kfree_skb(skb);
+ 	}
++
++	return err;
+ }
+ 
+ static void sk_psock_write_space(struct sock *sk)
+@@ -1123,7 +1134,8 @@ static int sk_psock_verdict_recv(read_descriptor_t *desc, struct sk_buff *skb,
+ 		ret = sk_psock_map_verd(ret, skb_bpf_redirect_fetch(skb));
+ 		skb->sk = NULL;
+ 	}
+-	sk_psock_verdict_apply(psock, skb, ret);
++	if (sk_psock_verdict_apply(psock, skb, ret) < 0)
++		len = 0;
+ out:
+ 	rcu_read_unlock();
+ 	return len;
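+
+The skmsg.c changes above share one pattern: the verdict path now reports
+success or failure, so sk_psock_verdict_recv() can tell its caller that zero
+bytes were consumed when the skb was dropped, instead of pretending the data
+was delivered. A minimal standalone sketch of that pattern (plain C with
+illustrative names, nothing here is kernel code):
+
+#include <stdio.h>
+
+enum verdict { PASS, REDIRECT, DROP };
+
+/* apply the verdict; report whether the skb counted as consumed */
+static int apply_verdict(enum verdict v)
+{
+	switch (v) {
+	case PASS:
+	case REDIRECT:
+		return 0;	/* queued for a socket: consumed */
+	case DROP:
+	default:
+		return -1;	/* freed: nothing was consumed */
+	}
+}
+
+int main(void)
+{
+	int len = 100;
+
+	/* like sk_psock_verdict_recv(): report 0 bytes on a drop */
+	if (apply_verdict(DROP) < 0)
+		len = 0;
+	printf("consumed=%d\n", len);
+	return 0;
+}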
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 6f1b82b8ad49a..60decd6420ca1 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -48,7 +48,7 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
+ 	bpf_map_init_from_attr(&stab->map, attr);
+ 	raw_spin_lock_init(&stab->lock);
+ 
+-	stab->sks = bpf_map_area_alloc(stab->map.max_entries *
++	stab->sks = bpf_map_area_alloc((u64) stab->map.max_entries *
+ 				       sizeof(struct sock *),
+ 				       stab->map.numa_node);
+ 	if (!stab->sks) {
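+
+The one-cast sock_map_alloc() fix above addresses a classic 32-bit
+multiplication overflow: max_entries * sizeof(struct sock *) was computed in
+32 bits before reaching the allocator, so a huge map could silently allocate
+a tiny array. A standalone illustration of the wrap (assumes a 64-bit size_t):
+
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+	uint32_t max_entries = 0x20000001;		/* ~537M entries */
+	size_t wrapped = (size_t)(max_entries * 8U);	/* 32-bit multiply wraps to 8 */
+	size_t correct = (uint64_t)max_entries * 8U;	/* widened first: ~4 GiB */
+
+	printf("wrapped=%zu correct=%zu\n", wrapped, correct);
+	return 0;
+}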
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index 1c6429c353a96..73721a4448bd4 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -1955,7 +1955,7 @@ static int inet_validate_link_af(const struct net_device *dev,
+ 	struct nlattr *a, *tb[IFLA_INET_MAX+1];
+ 	int err, rem;
+ 
+-	if (dev && !__in_dev_get_rcu(dev))
++	if (dev && !__in_dev_get_rtnl(dev))
+ 		return -EAFNOSUPPORT;
+ 
+ 	err = nla_parse_nested_deprecated(tb, IFLA_INET_MAX, nla,
+@@ -1981,7 +1981,7 @@ static int inet_validate_link_af(const struct net_device *dev,
+ static int inet_set_link_af(struct net_device *dev, const struct nlattr *nla,
+ 			    struct netlink_ext_ack *extack)
+ {
+-	struct in_device *in_dev = __in_dev_get_rcu(dev);
++	struct in_device *in_dev = __in_dev_get_rtnl(dev);
+ 	struct nlattr *a, *tb[IFLA_INET_MAX+1];
+ 	int rem;
+ 
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 35803ab7ac804..26171dec08c4d 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -673,7 +673,7 @@ static int esp_output(struct xfrm_state *x, struct sk_buff *skb)
+ 		struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
+ 		u32 padto;
+ 
+-		padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached));
++		padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
+ 		if (skb->len < padto)
+ 			esp.tfclen = padto - skb->len;
+ 	}
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 84bb707bd88d8..647bceab56c2d 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -371,6 +371,8 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
+ 		fl4.flowi4_proto = 0;
+ 		fl4.fl4_sport = 0;
+ 		fl4.fl4_dport = 0;
++	} else {
++		swap(fl4.fl4_sport, fl4.fl4_dport);
+ 	}
+ 
+ 	if (fib_lookup(net, &fl4, &res, 0))
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 6a36ac98476fa..78d1e5afc4520 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1306,7 +1306,7 @@ INDIRECT_CALLABLE_SCOPE unsigned int ipv4_mtu(const struct dst_entry *dst)
+ 		mtu = dst_metric_raw(dst, RTAX_MTU);
+ 
+ 	if (mtu)
+-		return mtu;
++		goto out;
+ 
+ 	mtu = READ_ONCE(dst->dev->mtu);
+ 
+@@ -1315,6 +1315,7 @@ INDIRECT_CALLABLE_SCOPE unsigned int ipv4_mtu(const struct dst_entry *dst)
+ 			mtu = 576;
+ 	}
+ 
++out:
+ 	mtu = min_t(unsigned int, mtu, IP_MAX_MTU);
+ 
+ 	return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
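+
+The ipv4_mtu() hunk above is pure control flow: an MTU taken from an explicit
+route metric used to be returned immediately, bypassing the final clamp to
+IP_MAX_MTU and the lwtunnel headroom subtraction that every other path
+received. Its shape, reduced to an illustrative helper (not the kernel
+function):
+
+/* every source of the MTU now flows through the same final clamp */
+static unsigned int route_mtu(unsigned int metric_mtu,
+			      unsigned int dev_mtu,
+			      unsigned int max_mtu,
+			      unsigned int tunnel_headroom)
+{
+	unsigned int mtu = metric_mtu ? metric_mtu : dev_mtu;
+
+	if (mtu > max_mtu)
+		mtu = max_mtu;
+	return mtu - tunnel_headroom;
+}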
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 1307ad0d3b9ed..8091276cb85b8 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1798,11 +1798,13 @@ int udp_read_sock(struct sock *sk, read_descriptor_t *desc,
+ 		if (used <= 0) {
+ 			if (!copied)
+ 				copied = used;
++			kfree_skb(skb);
+ 			break;
+ 		} else if (used <= skb->len) {
+ 			copied += used;
+ 		}
+ 
++		kfree_skb(skb);
+ 		if (!desc->count)
+ 			break;
+ 	}
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 393ae2b78e7d4..1654e4ce094f3 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -708,7 +708,7 @@ static int esp6_output(struct xfrm_state *x, struct sk_buff *skb)
+ 		struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb);
+ 		u32 padto;
+ 
+-		padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached));
++		padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached));
+ 		if (skb->len < padto)
+ 			esp.tfclen = padto - skb->len;
+ 	}
+diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c
+index 56e479d158b7c..26882e165c9e3 100644
+--- a/net/ipv6/exthdrs.c
++++ b/net/ipv6/exthdrs.c
+@@ -135,18 +135,23 @@ static bool ip6_parse_tlv(const struct tlvtype_proc *procs,
+ 	len -= 2;
+ 
+ 	while (len > 0) {
+-		int optlen = nh[off + 1] + 2;
+-		int i;
++		int optlen, i;
+ 
+-		switch (nh[off]) {
+-		case IPV6_TLV_PAD1:
+-			optlen = 1;
++		if (nh[off] == IPV6_TLV_PAD1) {
+ 			padlen++;
+ 			if (padlen > 7)
+ 				goto bad;
+-			break;
++			off++;
++			len--;
++			continue;
++		}
++		if (len < 2)
++			goto bad;
++		optlen = nh[off + 1] + 2;
++		if (optlen > len)
++			goto bad;
+ 
+-		case IPV6_TLV_PADN:
++		if (nh[off] == IPV6_TLV_PADN) {
+ 			/* RFC 2460 states that the purpose of PadN is
+ 			 * to align the containing header to multiples
+ 			 * of 8. 7 is therefore the highest valid value.
+@@ -163,12 +168,7 @@ static bool ip6_parse_tlv(const struct tlvtype_proc *procs,
+ 				if (nh[off + i] != 0)
+ 					goto bad;
+ 			}
+-			break;
+-
+-		default: /* Other TLV code so scan list */
+-			if (optlen > len)
+-				goto bad;
+-
++		} else {
+ 			tlv_count++;
+ 			if (tlv_count > max_count)
+ 				goto bad;
+@@ -188,7 +188,6 @@ static bool ip6_parse_tlv(const struct tlvtype_proc *procs,
+ 				return false;
+ 
+ 			padlen = 0;
+-			break;
+ 		}
+ 		off += optlen;
+ 		len -= optlen;
+@@ -306,7 +305,7 @@ fail_and_free:
+ #endif
+ 
+ 	if (ip6_parse_tlv(tlvprocdestopt_lst, skb,
+-			  init_net.ipv6.sysctl.max_dst_opts_cnt)) {
++			  net->ipv6.sysctl.max_dst_opts_cnt)) {
+ 		skb->transport_header += extlen;
+ 		opt = IP6CB(skb);
+ #if IS_ENABLED(CONFIG_IPV6_MIP6)
+@@ -1037,7 +1036,7 @@ fail_and_free:
+ 
+ 	opt->flags |= IP6SKB_HOPBYHOP;
+ 	if (ip6_parse_tlv(tlvprochopopt_lst, skb,
+-			  init_net.ipv6.sysctl.max_hbh_opts_cnt)) {
++			  net->ipv6.sysctl.max_hbh_opts_cnt)) {
+ 		skb->transport_header += extlen;
+ 		opt = IP6CB(skb);
+ 		opt->nhoff = sizeof(struct ipv6hdr);
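+
+The ip6_parse_tlv() rewrite above closes an out-of-bounds read: the old loop
+computed optlen = nh[off + 1] + 2 before verifying that a length byte was
+present at all, and only range-checked optlen on the default branch; it also
+switches the option-count limits from init_net's sysctls to the packet's own
+netns. The hardened walk, as a standalone sketch:
+
+#include <stddef.h>
+#include <stdio.h>
+
+/* returns 0 if all options parse cleanly, -1 on a malformed buffer */
+static int walk_tlvs(const unsigned char *nh, size_t len)
+{
+	size_t off = 0;
+
+	while (len > 0) {
+		if (nh[off] == 0) {		/* Pad1: a single byte */
+			off++;
+			len--;
+			continue;
+		}
+		if (len < 2)
+			return -1;		/* truncated type/length pair */
+		size_t optlen = (size_t)nh[off + 1] + 2;
+		if (optlen > len)
+			return -1;		/* option overruns the buffer */
+		/* ... dispatch on nh[off] here ... */
+		off += optlen;
+		len -= optlen;
+	}
+	return 0;
+}
+
+int main(void)
+{
+	unsigned char bad[] = { 0x05 };		/* type byte, length missing */
+
+	printf("%d\n", walk_tlvs(bad, sizeof(bad)));	/* prints -1 */
+	return 0;
+}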
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 288bafded9989..28ca70af014ad 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1239,8 +1239,6 @@ route_lookup:
+ 	if (max_headroom > dev->needed_headroom)
+ 		dev->needed_headroom = max_headroom;
+ 
+-	skb_set_inner_ipproto(skb, proto);
+-
+ 	err = ip6_tnl_encap(skb, t, &proto, fl6);
+ 	if (err)
+ 		return err;
+@@ -1377,6 +1375,8 @@ ipxip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev,
+ 	if (iptunnel_handle_offloads(skb, SKB_GSO_IPXIP6))
+ 		return -1;
+ 
++	skb_set_inner_ipproto(skb, protocol);
++
+ 	err = ip6_tnl_xmit(skb, dev, dsfield, &fl6, encap_limit, &mtu,
+ 			   protocol);
+ 	if (err != 0) {
+diff --git a/net/mac80211/he.c b/net/mac80211/he.c
+index 0c0b970835ceb..a87421c8637d6 100644
+--- a/net/mac80211/he.c
++++ b/net/mac80211/he.c
+@@ -111,7 +111,7 @@ ieee80211_he_cap_ie_to_sta_he_cap(struct ieee80211_sub_if_data *sdata,
+ 				  struct sta_info *sta)
+ {
+ 	struct ieee80211_sta_he_cap *he_cap = &sta->sta.he_cap;
+-	struct ieee80211_sta_he_cap own_he_cap = sband->iftype_data->he_cap;
++	struct ieee80211_sta_he_cap own_he_cap;
+ 	struct ieee80211_he_cap_elem *he_cap_ie_elem = (void *)he_cap_ie;
+ 	u8 he_ppe_size;
+ 	u8 mcs_nss_size;
+@@ -123,6 +123,8 @@ ieee80211_he_cap_ie_to_sta_he_cap(struct ieee80211_sub_if_data *sdata,
+ 	if (!he_cap_ie || !ieee80211_get_he_sta_cap(sband))
+ 		return;
+ 
++	own_he_cap = sband->iftype_data->he_cap;
++
+ 	/* Make sure size is OK */
+ 	mcs_nss_size = ieee80211_he_mcs_nss_size(he_cap_ie_elem);
+ 	he_ppe_size =
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 3f2aad2e74366..b1c44fa63a06f 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1094,11 +1094,6 @@ void ieee80211_send_nullfunc(struct ieee80211_local *local,
+ 	struct ieee80211_hdr_3addr *nullfunc;
+ 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+ 
+-	/* Don't send NDPs when STA is connected HE */
+-	if (sdata->vif.type == NL80211_IFTYPE_STATION &&
+-	    !(ifmgd->flags & IEEE80211_STA_DISABLE_HE))
+-		return;
+-
+ 	skb = ieee80211_nullfunc_get(&local->hw, &sdata->vif,
+ 		!ieee80211_hw_check(&local->hw, DOESNT_SUPPORT_QOS_NDP));
+ 	if (!skb)
+@@ -1130,10 +1125,6 @@ static void ieee80211_send_4addr_nullfunc(struct ieee80211_local *local,
+ 	if (WARN_ON(sdata->vif.type != NL80211_IFTYPE_STATION))
+ 		return;
+ 
+-	/* Don't send NDPs when connected HE */
+-	if (!(sdata->u.mgd.flags & IEEE80211_STA_DISABLE_HE))
+-		return;
+-
+ 	skb = dev_alloc_skb(local->hw.extra_tx_headroom + 30);
+ 	if (!skb)
+ 		return;
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index f2fb69da9b6e1..13250cadb4202 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -1398,11 +1398,6 @@ static void ieee80211_send_null_response(struct sta_info *sta, int tid,
+ 	struct ieee80211_tx_info *info;
+ 	struct ieee80211_chanctx_conf *chanctx_conf;
+ 
+-	/* Don't send NDPs when STA is connected HE */
+-	if (sdata->vif.type == NL80211_IFTYPE_STATION &&
+-	    !(sdata->u.mgd.flags & IEEE80211_STA_DISABLE_HE))
+-		return;
+-
+ 	if (qos) {
+ 		fc = cpu_to_le16(IEEE80211_FTYPE_DATA |
+ 				 IEEE80211_STYPE_QOS_NULLFUNC |
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 9b263f27ce9bd..b87e46f515fb8 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -896,19 +896,20 @@ reset:
+ 	return false;
+ }
+ 
+-static u64 expand_ack(u64 old_ack, u64 cur_ack, bool use_64bit)
++u64 __mptcp_expand_seq(u64 old_seq, u64 cur_seq)
+ {
+-	u32 old_ack32, cur_ack32;
+-
+-	if (use_64bit)
+-		return cur_ack;
+-
+-	old_ack32 = (u32)old_ack;
+-	cur_ack32 = (u32)cur_ack;
+-	cur_ack = (old_ack & GENMASK_ULL(63, 32)) + cur_ack32;
+-	if (unlikely(before(cur_ack32, old_ack32)))
+-		return cur_ack + (1LL << 32);
+-	return cur_ack;
++	u32 old_seq32, cur_seq32;
++
++	old_seq32 = (u32)old_seq;
++	cur_seq32 = (u32)cur_seq;
++	cur_seq = (old_seq & GENMASK_ULL(63, 32)) + cur_seq32;
++	if (unlikely(cur_seq32 < old_seq32 && before(old_seq32, cur_seq32)))
++		return cur_seq + (1LL << 32);
++
++	/* reverse wrap could happen, too */
++	if (unlikely(cur_seq32 > old_seq32 && after(old_seq32, cur_seq32)))
++		return cur_seq - (1LL << 32);
++	return cur_seq;
+ }
+ 
+ static void ack_update_msk(struct mptcp_sock *msk,
+@@ -926,7 +927,7 @@ static void ack_update_msk(struct mptcp_sock *msk,
+ 	 * more dangerous than missing an ack
+ 	 */
+ 	old_snd_una = msk->snd_una;
+-	new_snd_una = expand_ack(old_snd_una, mp_opt->data_ack, mp_opt->ack64);
++	new_snd_una = mptcp_expand_seq(old_snd_una, mp_opt->data_ack, mp_opt->ack64);
+ 
+ 	/* ACK for data not even sent yet? Ignore. */
+ 	if (after64(new_snd_una, snd_nxt))
+@@ -963,7 +964,7 @@ bool mptcp_update_rcv_data_fin(struct mptcp_sock *msk, u64 data_fin_seq, bool us
+ 		return false;
+ 
+ 	WRITE_ONCE(msk->rcv_data_fin_seq,
+-		   expand_ack(READ_ONCE(msk->ack_seq), data_fin_seq, use_64bit));
++		   mptcp_expand_seq(READ_ONCE(msk->ack_seq), data_fin_seq, use_64bit));
+ 	WRITE_ONCE(msk->rcv_data_fin, 1);
+ 
+ 	return true;
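+
+__mptcp_expand_seq() above generalizes the old expand_ack(): a 32-bit value
+is rebased onto the upper 32 bits of the last known 64-bit sequence, and a
+2^32 epoch is added or removed when the comparison says the 32-bit space
+wrapped in either direction. A userspace rendering of the same logic
+(before32/after32 are local stand-ins for the kernel's before()/after()):
+
+#include <stdint.h>
+#include <stdio.h>
+
+static int before32(uint32_t a, uint32_t b) { return (int32_t)(a - b) < 0; }
+static int after32(uint32_t a, uint32_t b)  { return before32(b, a); }
+
+static uint64_t expand_seq(uint64_t old_seq, uint64_t cur_seq)
+{
+	uint32_t old32 = (uint32_t)old_seq;
+	uint32_t cur32 = (uint32_t)cur_seq;
+
+	cur_seq = (old_seq & 0xffffffff00000000ULL) + cur32;
+	if (cur32 < old32 && before32(old32, cur32))	/* wrapped forward */
+		return cur_seq + (1ULL << 32);
+	if (cur32 > old32 && after32(old32, cur32))	/* wrapped backward */
+		return cur_seq - (1ULL << 32);
+	return cur_seq;
+}
+
+int main(void)
+{
+	/* old sequence just below a 32-bit boundary, new one just past it */
+	printf("0x%llx\n", (unsigned long long)
+	       expand_seq(0x1fffffff0ULL, 0x10ULL));	/* prints 0x200000010 */
+	return 0;
+}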
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 2469e06a3a9d6..3f5d90a20235a 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -971,8 +971,14 @@ skip_family:
+ 	if (tb[MPTCP_PM_ADDR_ATTR_FLAGS])
+ 		entry->flags = nla_get_u32(tb[MPTCP_PM_ADDR_ATTR_FLAGS]);
+ 
+-	if (tb[MPTCP_PM_ADDR_ATTR_PORT])
++	if (tb[MPTCP_PM_ADDR_ATTR_PORT]) {
++		if (!(entry->flags & MPTCP_PM_ADDR_FLAG_SIGNAL)) {
++			NL_SET_ERR_MSG_ATTR(info->extack, attr,
++					    "flags must have signal when using port");
++			return -EINVAL;
++		}
+ 		entry->addr.port = htons(nla_get_u16(tb[MPTCP_PM_ADDR_ATTR_PORT]));
++	}
+ 
+ 	return 0;
+ }
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 632350018fb66..8ead550df8b1e 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2946,6 +2946,11 @@ static void mptcp_release_cb(struct sock *sk)
+ 		spin_lock_bh(&sk->sk_lock.slock);
+ 	}
+ 
++	/* be sure to set the current sk state before taking actions
++	 * depending on sk_state
++	 */
++	if (test_and_clear_bit(MPTCP_CONNECTED, &mptcp_sk(sk)->flags))
++		__mptcp_set_connected(sk);
+ 	if (test_and_clear_bit(MPTCP_CLEAN_UNA, &mptcp_sk(sk)->flags))
+ 		__mptcp_clean_una_wakeup(sk);
+ 	if (test_and_clear_bit(MPTCP_ERROR_REPORT, &mptcp_sk(sk)->flags))
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 385796f0ef19b..7b634568f49cf 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -109,6 +109,7 @@
+ #define MPTCP_ERROR_REPORT	8
+ #define MPTCP_RETRANSMIT	9
+ #define MPTCP_WORK_SYNC_SETSOCKOPT 10
++#define MPTCP_CONNECTED		11
+ 
+ static inline bool before64(__u64 seq1, __u64 seq2)
+ {
+@@ -579,6 +580,7 @@ void mptcp_get_options(const struct sk_buff *skb,
+ 		       struct mptcp_options_received *mp_opt);
+ 
+ void mptcp_finish_connect(struct sock *sk);
++void __mptcp_set_connected(struct sock *sk);
+ static inline bool mptcp_is_fully_established(struct sock *sk)
+ {
+ 	return inet_sk_state_load(sk) == TCP_ESTABLISHED &&
+@@ -593,6 +595,14 @@ int mptcp_setsockopt(struct sock *sk, int level, int optname,
+ int mptcp_getsockopt(struct sock *sk, int level, int optname,
+ 		     char __user *optval, int __user *option);
+ 
++u64 __mptcp_expand_seq(u64 old_seq, u64 cur_seq);
++static inline u64 mptcp_expand_seq(u64 old_seq, u64 cur_seq, bool use_64bit)
++{
++	if (use_64bit)
++		return cur_seq;
++
++	return __mptcp_expand_seq(old_seq, cur_seq);
++}
+ void __mptcp_check_push(struct sock *sk, struct sock *ssk);
+ void __mptcp_data_acked(struct sock *sk);
+ void __mptcp_error_report(struct sock *sk);
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index be1de4084196b..cbc452d0901ec 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -371,6 +371,24 @@ static bool subflow_use_different_dport(struct mptcp_sock *msk, const struct soc
+ 	return inet_sk(sk)->inet_dport != inet_sk((struct sock *)msk)->inet_dport;
+ }
+ 
++void __mptcp_set_connected(struct sock *sk)
++{
++	if (sk->sk_state == TCP_SYN_SENT) {
++		inet_sk_state_store(sk, TCP_ESTABLISHED);
++		sk->sk_state_change(sk);
++	}
++}
++
++static void mptcp_set_connected(struct sock *sk)
++{
++	mptcp_data_lock(sk);
++	if (!sock_owned_by_user(sk))
++		__mptcp_set_connected(sk);
++	else
++		set_bit(MPTCP_CONNECTED, &mptcp_sk(sk)->flags);
++	mptcp_data_unlock(sk);
++}
++
+ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+@@ -379,10 +397,6 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 
+ 	subflow->icsk_af_ops->sk_rx_dst_set(sk, skb);
+ 
+-	if (inet_sk_state_load(parent) == TCP_SYN_SENT) {
+-		inet_sk_state_store(parent, TCP_ESTABLISHED);
+-		parent->sk_state_change(parent);
+-	}
+ 
+ 	/* be sure no special action on any packet other than syn-ack */
+ 	if (subflow->conn_finished)
+@@ -411,6 +425,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 			 subflow->remote_key);
+ 		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPCAPABLEACTIVEACK);
+ 		mptcp_finish_connect(sk);
++		mptcp_set_connected(parent);
+ 	} else if (subflow->request_join) {
+ 		u8 hmac[SHA256_DIGEST_SIZE];
+ 
+@@ -430,15 +445,15 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 			goto do_reset;
+ 		}
+ 
++		if (!mptcp_finish_join(sk))
++			goto do_reset;
++
+ 		subflow_generate_hmac(subflow->local_key, subflow->remote_key,
+ 				      subflow->local_nonce,
+ 				      subflow->remote_nonce,
+ 				      hmac);
+ 		memcpy(subflow->hmac, hmac, MPTCPOPT_HMAC_LEN);
+ 
+-		if (!mptcp_finish_join(sk))
+-			goto do_reset;
+-
+ 		subflow->mp_join = 1;
+ 		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_JOINSYNACKRX);
+ 
+@@ -451,6 +466,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 	} else if (mptcp_check_fallback(sk)) {
+ fallback:
+ 		mptcp_rcv_space_init(mptcp_sk(parent), sk);
++		mptcp_set_connected(parent);
+ 	}
+ 	return;
+ 
+@@ -558,6 +574,7 @@ static void mptcp_sock_destruct(struct sock *sk)
+ 
+ static void mptcp_force_close(struct sock *sk)
+ {
++	/* the msk is not yet exposed to user-space */
+ 	inet_sk_state_store(sk, TCP_CLOSE);
+ 	sk_common_release(sk);
+ }
+@@ -775,15 +792,6 @@ enum mapping_status {
+ 	MAPPING_DUMMY
+ };
+ 
+-static u64 expand_seq(u64 old_seq, u16 old_data_len, u64 seq)
+-{
+-	if ((u32)seq == (u32)old_seq)
+-		return old_seq;
+-
+-	/* Assume map covers data not mapped yet. */
+-	return seq | ((old_seq + old_data_len + 1) & GENMASK_ULL(63, 32));
+-}
+-
+ static void dbg_bad_map(struct mptcp_subflow_context *subflow, u32 ssn)
+ {
+ 	pr_debug("Bad mapping: ssn=%d map_seq=%d map_data_len=%d",
+@@ -907,13 +915,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ 		data_len--;
+ 	}
+ 
+-	if (!mpext->dsn64) {
+-		map_seq = expand_seq(subflow->map_seq, subflow->map_data_len,
+-				     mpext->data_seq);
+-		pr_debug("expanded seq=%llu", subflow->map_seq);
+-	} else {
+-		map_seq = mpext->data_seq;
+-	}
++	map_seq = mptcp_expand_seq(READ_ONCE(msk->ack_seq), mpext->data_seq, mpext->dsn64);
+ 	WRITE_ONCE(mptcp_sk(subflow->conn)->use_64bit_ack, !!mpext->dsn64);
+ 
+ 	if (subflow->map_valid) {
+@@ -1489,10 +1491,7 @@ static void subflow_state_change(struct sock *sk)
+ 		mptcp_rcv_space_init(mptcp_sk(parent), sk);
+ 		pr_fallback(mptcp_sk(parent));
+ 		subflow->conn_finished = 1;
+-		if (inet_sk_state_load(parent) == TCP_SYN_SENT) {
+-			inet_sk_state_store(parent, TCP_ESTABLISHED);
+-			parent->sk_state_change(parent);
+-		}
++		mptcp_set_connected(parent);
+ 	}
+ 
+ 	/* as recvmsg() does not acquire the subflow socket for ssk selection
+diff --git a/net/mptcp/token.c b/net/mptcp/token.c
+index 8f0270a780ce5..72a24e63b1314 100644
+--- a/net/mptcp/token.c
++++ b/net/mptcp/token.c
+@@ -156,9 +156,6 @@ int mptcp_token_new_connect(struct sock *sk)
+ 	int retries = TOKEN_MAX_RETRIES;
+ 	struct token_bucket *bucket;
+ 
+-	pr_debug("ssk=%p, local_key=%llu, token=%u, idsn=%llu\n",
+-		 sk, subflow->local_key, subflow->token, subflow->idsn);
+-
+ again:
+ 	mptcp_crypto_key_gen_sha(&subflow->local_key, &subflow->token,
+ 				 &subflow->idsn);
+@@ -172,6 +169,9 @@ again:
+ 		goto again;
+ 	}
+ 
++	pr_debug("ssk=%p, local_key=%llu, token=%u, idsn=%llu\n",
++		 sk, subflow->local_key, subflow->token, subflow->idsn);
++
+ 	WRITE_ONCE(msk->token, subflow->token);
+ 	__sk_nulls_add_node_rcu((struct sock *)msk, &bucket->msk_chain);
+ 	bucket->chain_len++;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index bf4d6ec9fc55c..fcb15b8904e87 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -571,7 +571,7 @@ static struct nft_table *nft_table_lookup(const struct net *net,
+ 		    table->family == family &&
+ 		    nft_active_genmask(table, genmask)) {
+ 			if (nft_table_has_owner(table) &&
+-			    table->nlpid != nlpid)
++			    nlpid && table->nlpid != nlpid)
+ 				return ERR_PTR(-EPERM);
+ 
+ 			return table;
+@@ -583,7 +583,7 @@ static struct nft_table *nft_table_lookup(const struct net *net,
+ 
+ static struct nft_table *nft_table_lookup_byhandle(const struct net *net,
+ 						   const struct nlattr *nla,
+-						   u8 genmask)
++						   u8 genmask, u32 nlpid)
+ {
+ 	struct nftables_pernet *nft_net;
+ 	struct nft_table *table;
+@@ -591,8 +591,13 @@ static struct nft_table *nft_table_lookup_byhandle(const struct net *net,
+ 	nft_net = nft_pernet(net);
+ 	list_for_each_entry(table, &nft_net->tables, list) {
+ 		if (be64_to_cpu(nla_get_be64(nla)) == table->handle &&
+-		    nft_active_genmask(table, genmask))
++		    nft_active_genmask(table, genmask)) {
++			if (nft_table_has_owner(table) &&
++			    nlpid && table->nlpid != nlpid)
++				return ERR_PTR(-EPERM);
++
+ 			return table;
++		}
+ 	}
+ 
+ 	return ERR_PTR(-ENOENT);
+@@ -1279,7 +1284,8 @@ static int nf_tables_deltable(struct sk_buff *skb, const struct nfnl_info *info,
+ 
+ 	if (nla[NFTA_TABLE_HANDLE]) {
+ 		attr = nla[NFTA_TABLE_HANDLE];
+-		table = nft_table_lookup_byhandle(net, attr, genmask);
++		table = nft_table_lookup_byhandle(net, attr, genmask,
++						  NETLINK_CB(skb).portid);
+ 	} else {
+ 		attr = nla[NFTA_TABLE_NAME];
+ 		table = nft_table_lookup(net, attr, family, genmask,
+@@ -3243,9 +3249,9 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 	u8 genmask = nft_genmask_next(info->net);
+ 	struct nft_rule *rule, *old_rule = NULL;
+ 	struct nft_expr_info *expr_info = NULL;
++	struct nft_flow_rule *flow = NULL;
+ 	int family = nfmsg->nfgen_family;
+ 	struct net *net = info->net;
+-	struct nft_flow_rule *flow;
+ 	struct nft_userdata *udata;
+ 	struct nft_table *table;
+ 	struct nft_chain *chain;
+@@ -3340,13 +3346,13 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 		nla_for_each_nested(tmp, nla[NFTA_RULE_EXPRESSIONS], rem) {
+ 			err = -EINVAL;
+ 			if (nla_type(tmp) != NFTA_LIST_ELEM)
+-				goto err1;
++				goto err_release_expr;
+ 			if (n == NFT_RULE_MAXEXPRS)
+-				goto err1;
++				goto err_release_expr;
+ 			err = nf_tables_expr_parse(&ctx, tmp, &expr_info[n]);
+ 			if (err < 0) {
+ 				NL_SET_BAD_ATTR(extack, tmp);
+-				goto err1;
++				goto err_release_expr;
+ 			}
+ 			size += expr_info[n].ops->size;
+ 			n++;
+@@ -3355,7 +3361,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 	/* Check for overflow of dlen field */
+ 	err = -EFBIG;
+ 	if (size >= 1 << 12)
+-		goto err1;
++		goto err_release_expr;
+ 
+ 	if (nla[NFTA_RULE_USERDATA]) {
+ 		ulen = nla_len(nla[NFTA_RULE_USERDATA]);
+@@ -3366,7 +3372,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 	err = -ENOMEM;
+ 	rule = kzalloc(sizeof(*rule) + size + usize, GFP_KERNEL);
+ 	if (rule == NULL)
+-		goto err1;
++		goto err_release_expr;
+ 
+ 	nft_activate_next(net, rule);
+ 
+@@ -3385,7 +3391,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 		err = nf_tables_newexpr(&ctx, &expr_info[i], expr);
+ 		if (err < 0) {
+ 			NL_SET_BAD_ATTR(extack, expr_info[i].attr);
+-			goto err2;
++			goto err_release_rule;
+ 		}
+ 
+ 		if (expr_info[i].ops->validate)
+@@ -3395,16 +3401,24 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 		expr = nft_expr_next(expr);
+ 	}
+ 
++	if (chain->flags & NFT_CHAIN_HW_OFFLOAD) {
++		flow = nft_flow_rule_create(net, rule);
++		if (IS_ERR(flow)) {
++			err = PTR_ERR(flow);
++			goto err_release_rule;
++		}
++	}
++
+ 	if (info->nlh->nlmsg_flags & NLM_F_REPLACE) {
+ 		trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule);
+ 		if (trans == NULL) {
+ 			err = -ENOMEM;
+-			goto err2;
++			goto err_destroy_flow_rule;
+ 		}
+ 		err = nft_delrule(&ctx, old_rule);
+ 		if (err < 0) {
+ 			nft_trans_destroy(trans);
+-			goto err2;
++			goto err_destroy_flow_rule;
+ 		}
+ 
+ 		list_add_tail_rcu(&rule->list, &old_rule->list);
+@@ -3412,7 +3426,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 		trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule);
+ 		if (!trans) {
+ 			err = -ENOMEM;
+-			goto err2;
++			goto err_destroy_flow_rule;
+ 		}
+ 
+ 		if (info->nlh->nlmsg_flags & NLM_F_APPEND) {
+@@ -3430,21 +3444,19 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 	kvfree(expr_info);
+ 	chain->use++;
+ 
++	if (flow)
++		nft_trans_flow_rule(trans) = flow;
++
+ 	if (nft_net->validate_state == NFT_VALIDATE_DO)
+ 		return nft_table_validate(net, table);
+ 
+-	if (chain->flags & NFT_CHAIN_HW_OFFLOAD) {
+-		flow = nft_flow_rule_create(net, rule);
+-		if (IS_ERR(flow))
+-			return PTR_ERR(flow);
+-
+-		nft_trans_flow_rule(trans) = flow;
+-	}
+-
+ 	return 0;
+-err2:
++
++err_destroy_flow_rule:
++	nft_flow_rule_destroy(flow);
++err_release_rule:
+ 	nf_tables_rule_release(&ctx, rule);
+-err1:
++err_release_expr:
+ 	for (i = 0; i < n; i++) {
+ 		if (expr_info[i].ops) {
+ 			module_put(expr_info[i].ops->type->owner);
+@@ -8839,11 +8851,16 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ 			nft_rule_expr_deactivate(&trans->ctx,
+ 						 nft_trans_rule(trans),
+ 						 NFT_TRANS_ABORT);
++			if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
++				nft_flow_rule_destroy(nft_trans_flow_rule(trans));
+ 			break;
+ 		case NFT_MSG_DELRULE:
+ 			trans->ctx.chain->use++;
+ 			nft_clear(trans->ctx.net, nft_trans_rule(trans));
+ 			nft_rule_expr_activate(&trans->ctx, nft_trans_rule(trans));
++			if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
++				nft_flow_rule_destroy(nft_trans_flow_rule(trans));
++
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_NEWSET:
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index a48c5fd53a80a..b58d73a965232 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -54,15 +54,10 @@ static void nft_flow_rule_transfer_vlan(struct nft_offload_ctx *ctx,
+ 					struct nft_flow_rule *flow)
+ {
+ 	struct nft_flow_match *match = &flow->match;
+-	struct nft_offload_ethertype ethertype;
+-
+-	if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_CONTROL) &&
+-	    match->key.basic.n_proto != htons(ETH_P_8021Q) &&
+-	    match->key.basic.n_proto != htons(ETH_P_8021AD))
+-		return;
+-
+-	ethertype.value = match->key.basic.n_proto;
+-	ethertype.mask = match->mask.basic.n_proto;
++	struct nft_offload_ethertype ethertype = {
++		.value	= match->key.basic.n_proto,
++		.mask	= match->mask.basic.n_proto,
++	};
+ 
+ 	if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_VLAN) &&
+ 	    (match->key.vlan.vlan_tpid == htons(ETH_P_8021Q) ||
+@@ -76,7 +71,9 @@ static void nft_flow_rule_transfer_vlan(struct nft_offload_ctx *ctx,
+ 		match->dissector.offset[FLOW_DISSECTOR_KEY_CVLAN] =
+ 			offsetof(struct nft_flow_key, cvlan);
+ 		match->dissector.used_keys |= BIT(FLOW_DISSECTOR_KEY_CVLAN);
+-	} else {
++	} else if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_BASIC) &&
++		   (match->key.basic.n_proto == htons(ETH_P_8021Q) ||
++		    match->key.basic.n_proto == htons(ETH_P_8021AD))) {
+ 		match->key.basic.n_proto = match->key.vlan.vlan_tpid;
+ 		match->mask.basic.n_proto = match->mask.vlan.vlan_tpid;
+ 		match->key.vlan.vlan_tpid = ethertype.value;
+@@ -594,23 +591,6 @@ int nft_flow_rule_offload_commit(struct net *net)
+ 		}
+ 	}
+ 
+-	list_for_each_entry(trans, &nft_net->commit_list, list) {
+-		if (trans->ctx.family != NFPROTO_NETDEV)
+-			continue;
+-
+-		switch (trans->msg_type) {
+-		case NFT_MSG_NEWRULE:
+-		case NFT_MSG_DELRULE:
+-			if (!(trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD))
+-				continue;
+-
+-			nft_flow_rule_destroy(nft_trans_flow_rule(trans));
+-			break;
+-		default:
+-			break;
+-		}
+-	}
+-
+ 	return err;
+ }
+ 
+diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c
+index f64f0017e9a53..670dd146fb2b1 100644
+--- a/net/netfilter/nft_exthdr.c
++++ b/net/netfilter/nft_exthdr.c
+@@ -42,6 +42,9 @@ static void nft_exthdr_ipv6_eval(const struct nft_expr *expr,
+ 	unsigned int offset = 0;
+ 	int err;
+ 
++	if (pkt->skb->protocol != htons(ETH_P_IPV6))
++		goto err;
++
+ 	err = ipv6_find_hdr(pkt->skb, &offset, priv->type, NULL, NULL);
+ 	if (priv->flags & NFT_EXTHDR_F_PRESENT) {
+ 		nft_reg_store8(dest, err >= 0);
+diff --git a/net/netfilter/nft_osf.c b/net/netfilter/nft_osf.c
+index ac61f708b82d2..d82677e83400b 100644
+--- a/net/netfilter/nft_osf.c
++++ b/net/netfilter/nft_osf.c
+@@ -28,6 +28,11 @@ static void nft_osf_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ 	struct nf_osf_data data;
+ 	struct tcphdr _tcph;
+ 
++	if (pkt->tprot != IPPROTO_TCP) {
++		regs->verdict.code = NFT_BREAK;
++		return;
++	}
++
+ 	tcp = skb_header_pointer(skb, ip_hdrlen(skb),
+ 				 sizeof(struct tcphdr), &_tcph);
+ 	if (!tcp) {
+diff --git a/net/netfilter/nft_tproxy.c b/net/netfilter/nft_tproxy.c
+index accef672088c7..5cb4d575d47ff 100644
+--- a/net/netfilter/nft_tproxy.c
++++ b/net/netfilter/nft_tproxy.c
+@@ -30,6 +30,12 @@ static void nft_tproxy_eval_v4(const struct nft_expr *expr,
+ 	__be16 tport = 0;
+ 	struct sock *sk;
+ 
++	if (pkt->tprot != IPPROTO_TCP &&
++	    pkt->tprot != IPPROTO_UDP) {
++		regs->verdict.code = NFT_BREAK;
++		return;
++	}
++
+ 	hp = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_hdr), &_hdr);
+ 	if (!hp) {
+ 		regs->verdict.code = NFT_BREAK;
+@@ -91,7 +97,8 @@ static void nft_tproxy_eval_v6(const struct nft_expr *expr,
+ 
+ 	memset(&taddr, 0, sizeof(taddr));
+ 
+-	if (!pkt->tprot_set) {
++	if (pkt->tprot != IPPROTO_TCP &&
++	    pkt->tprot != IPPROTO_UDP) {
+ 		regs->verdict.code = NFT_BREAK;
+ 		return;
+ 	}
+diff --git a/net/netlabel/netlabel_mgmt.c b/net/netlabel/netlabel_mgmt.c
+index ca52f50859899..e51ab37bbb038 100644
+--- a/net/netlabel/netlabel_mgmt.c
++++ b/net/netlabel/netlabel_mgmt.c
+@@ -76,6 +76,7 @@ static const struct nla_policy netlbl_mgmt_genl_policy[NLBL_MGMT_A_MAX + 1] = {
+ static int netlbl_mgmt_add_common(struct genl_info *info,
+ 				  struct netlbl_audit *audit_info)
+ {
++	void *pmap = NULL;
+ 	int ret_val = -EINVAL;
+ 	struct netlbl_domaddr_map *addrmap = NULL;
+ 	struct cipso_v4_doi *cipsov4 = NULL;
+@@ -175,6 +176,7 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
+ 			ret_val = -ENOMEM;
+ 			goto add_free_addrmap;
+ 		}
++		pmap = map;
+ 		map->list.addr = addr->s_addr & mask->s_addr;
+ 		map->list.mask = mask->s_addr;
+ 		map->list.valid = 1;
+@@ -183,10 +185,8 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
+ 			map->def.cipso = cipsov4;
+ 
+ 		ret_val = netlbl_af4list_add(&map->list, &addrmap->list4);
+-		if (ret_val != 0) {
+-			kfree(map);
+-			goto add_free_addrmap;
+-		}
++		if (ret_val != 0)
++			goto add_free_map;
+ 
+ 		entry->family = AF_INET;
+ 		entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
+@@ -223,6 +223,7 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
+ 			ret_val = -ENOMEM;
+ 			goto add_free_addrmap;
+ 		}
++		pmap = map;
+ 		map->list.addr = *addr;
+ 		map->list.addr.s6_addr32[0] &= mask->s6_addr32[0];
+ 		map->list.addr.s6_addr32[1] &= mask->s6_addr32[1];
+@@ -235,10 +236,8 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
+ 			map->def.calipso = calipso;
+ 
+ 		ret_val = netlbl_af6list_add(&map->list, &addrmap->list6);
+-		if (ret_val != 0) {
+-			kfree(map);
+-			goto add_free_addrmap;
+-		}
++		if (ret_val != 0)
++			goto add_free_map;
+ 
+ 		entry->family = AF_INET6;
+ 		entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
+@@ -248,10 +247,12 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
+ 
+ 	ret_val = netlbl_domhsh_add(entry, audit_info);
+ 	if (ret_val != 0)
+-		goto add_free_addrmap;
++		goto add_free_map;
+ 
+ 	return 0;
+ 
++add_free_map:
++	kfree(pmap);
+ add_free_addrmap:
+ 	kfree(addrmap);
+ add_doi_put_def:
+diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c
+index 8d00dfe8139e8..1990d496fcfc0 100644
+--- a/net/qrtr/ns.c
++++ b/net/qrtr/ns.c
+@@ -775,8 +775,10 @@ int qrtr_ns_init(void)
+ 	}
+ 
+ 	qrtr_ns.workqueue = alloc_workqueue("qrtr_ns_handler", WQ_UNBOUND, 1);
+-	if (!qrtr_ns.workqueue)
++	if (!qrtr_ns.workqueue) {
++		ret = -ENOMEM;
+ 		goto err_sock;
++	}
+ 
+ 	qrtr_ns.sock->sk->sk_data_ready = qrtr_ns_data_ready;
+ 
+diff --git a/net/sched/act_vlan.c b/net/sched/act_vlan.c
+index 1cac3c6fbb49c..a108469c664f7 100644
+--- a/net/sched/act_vlan.c
++++ b/net/sched/act_vlan.c
+@@ -70,7 +70,7 @@ static int tcf_vlan_act(struct sk_buff *skb, const struct tc_action *a,
+ 		/* replace the vid */
+ 		tci = (tci & ~VLAN_VID_MASK) | p->tcfv_push_vid;
+ 		/* replace prio bits, if tcfv_push_prio specified */
+-		if (p->tcfv_push_prio) {
++		if (p->tcfv_push_prio_exists) {
+ 			tci &= ~VLAN_PRIO_MASK;
+ 			tci |= p->tcfv_push_prio << VLAN_PRIO_SHIFT;
+ 		}
+@@ -121,6 +121,7 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla,
+ 	struct tc_action_net *tn = net_generic(net, vlan_net_id);
+ 	struct nlattr *tb[TCA_VLAN_MAX + 1];
+ 	struct tcf_chain *goto_ch = NULL;
++	bool push_prio_exists = false;
+ 	struct tcf_vlan_params *p;
+ 	struct tc_vlan *parm;
+ 	struct tcf_vlan *v;
+@@ -189,7 +190,8 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla,
+ 			push_proto = htons(ETH_P_8021Q);
+ 		}
+ 
+-		if (tb[TCA_VLAN_PUSH_VLAN_PRIORITY])
++		push_prio_exists = !!tb[TCA_VLAN_PUSH_VLAN_PRIORITY];
++		if (push_prio_exists)
+ 			push_prio = nla_get_u8(tb[TCA_VLAN_PUSH_VLAN_PRIORITY]);
+ 		break;
+ 	case TCA_VLAN_ACT_POP_ETH:
+@@ -241,6 +243,7 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla,
+ 	p->tcfv_action = action;
+ 	p->tcfv_push_vid = push_vid;
+ 	p->tcfv_push_prio = push_prio;
++	p->tcfv_push_prio_exists = push_prio_exists || action == TCA_VLAN_ACT_PUSH;
+ 	p->tcfv_push_proto = push_proto;
+ 
+ 	if (action == TCA_VLAN_ACT_PUSH_ETH) {
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index c4007b9cd16d6..5b274534264c2 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -304,7 +304,7 @@ static int tcindex_alloc_perfect_hash(struct net *net, struct tcindex_data *cp)
+ 	int i, err = 0;
+ 
+ 	cp->perfect = kcalloc(cp->hash, sizeof(struct tcindex_filter_result),
+-			      GFP_KERNEL);
++			      GFP_KERNEL | __GFP_NOWARN);
+ 	if (!cp->perfect)
+ 		return -ENOMEM;
+ 
+diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
+index 1db9d4a2ef5ef..b692a0de1ad5e 100644
+--- a/net/sched/sch_qfq.c
++++ b/net/sched/sch_qfq.c
+@@ -485,11 +485,6 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ 
+ 	if (cl->qdisc != &noop_qdisc)
+ 		qdisc_hash_add(cl->qdisc, true);
+-	sch_tree_lock(sch);
+-	qdisc_class_hash_insert(&q->clhash, &cl->common);
+-	sch_tree_unlock(sch);
+-
+-	qdisc_class_hash_grow(sch, &q->clhash);
+ 
+ set_change_agg:
+ 	sch_tree_lock(sch);
+@@ -507,8 +502,11 @@ set_change_agg:
+ 	}
+ 	if (existing)
+ 		qfq_deact_rm_from_agg(q, cl);
++	else
++		qdisc_class_hash_insert(&q->clhash, &cl->common);
+ 	qfq_add_to_agg(q, new_agg, cl);
+ 	sch_tree_unlock(sch);
++	qdisc_class_hash_grow(sch, &q->clhash);
+ 
+ 	*arg = (unsigned long)cl;
+ 	return 0;
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index 39ed0e0afe6d9..c045f63d11fa6 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -591,11 +591,21 @@ static struct rpc_task *__rpc_find_next_queued_priority(struct rpc_wait_queue *q
+ 	struct list_head *q;
+ 	struct rpc_task *task;
+ 
++	/*
++	 * Service the privileged queue.
++	 */
++	q = &queue->tasks[RPC_NR_PRIORITY - 1];
++	if (queue->maxpriority > RPC_PRIORITY_PRIVILEGED && !list_empty(q)) {
++		task = list_first_entry(q, struct rpc_task, u.tk_wait.list);
++		goto out;
++	}
++
+ 	/*
+ 	 * Service a batch of tasks from a single owner.
+ 	 */
+ 	q = &queue->tasks[queue->priority];
+-	if (!list_empty(q) && --queue->nr) {
++	if (!list_empty(q) && queue->nr) {
++		queue->nr--;
+ 		task = list_first_entry(q, struct rpc_task, u.tk_wait.list);
+ 		goto out;
+ 	}
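+
+The __rpc_find_next_queued_priority() hunk above makes two changes:
+privileged tasks are now serviced ahead of the per-owner batch, and the batch
+counter is decremented only when a task is actually dispatched (the old
+--queue->nr consumed the last unit of quota even on the path that then gave
+up). The counter fix in miniature (illustrative, not the RPC code):
+
+/* take from the batch only while quota remains, consuming one unit */
+static int take_from_batch(int *nr, int queue_len)
+{
+	if (queue_len > 0 && *nr) {
+		(*nr)--;
+		return 1;	/* task dispatched from this owner's batch */
+	}
+	return 0;		/* quota exhausted or queue empty */
+}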
+diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
+index d4beca895992d..593846d252143 100644
+--- a/net/tipc/bcast.c
++++ b/net/tipc/bcast.c
+@@ -699,7 +699,7 @@ int tipc_bcast_init(struct net *net)
+ 	spin_lock_init(&tipc_net(net)->bclock);
+ 
+ 	if (!tipc_link_bc_create(net, 0, 0, NULL,
+-				 FB_MTU,
++				 one_page_mtu,
+ 				 BCLINK_WIN_DEFAULT,
+ 				 BCLINK_WIN_DEFAULT,
+ 				 0,
+diff --git a/net/tipc/msg.c b/net/tipc/msg.c
+index ce6ab54822d8d..7053c22e393e7 100644
+--- a/net/tipc/msg.c
++++ b/net/tipc/msg.c
+@@ -44,12 +44,15 @@
+ #define MAX_FORWARD_SIZE 1024
+ #ifdef CONFIG_TIPC_CRYPTO
+ #define BUF_HEADROOM ALIGN(((LL_MAX_HEADER + 48) + EHDR_MAX_SIZE), 16)
+-#define BUF_TAILROOM (TIPC_AES_GCM_TAG_SIZE)
++#define BUF_OVERHEAD (BUF_HEADROOM + TIPC_AES_GCM_TAG_SIZE)
+ #else
+ #define BUF_HEADROOM (LL_MAX_HEADER + 48)
+-#define BUF_TAILROOM 16
++#define BUF_OVERHEAD BUF_HEADROOM
+ #endif
+ 
++const int one_page_mtu = PAGE_SIZE - SKB_DATA_ALIGN(BUF_OVERHEAD) -
++			 SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
++
+ static unsigned int align(unsigned int i)
+ {
+ 	return (i + 3) & ~3u;
+@@ -69,13 +72,8 @@ static unsigned int align(unsigned int i)
+ struct sk_buff *tipc_buf_acquire(u32 size, gfp_t gfp)
+ {
+ 	struct sk_buff *skb;
+-#ifdef CONFIG_TIPC_CRYPTO
+-	unsigned int buf_size = (BUF_HEADROOM + size + BUF_TAILROOM + 3) & ~3u;
+-#else
+-	unsigned int buf_size = (BUF_HEADROOM + size + 3) & ~3u;
+-#endif
+ 
+-	skb = alloc_skb_fclone(buf_size, gfp);
++	skb = alloc_skb_fclone(BUF_OVERHEAD + size, gfp);
+ 	if (skb) {
+ 		skb_reserve(skb, BUF_HEADROOM);
+ 		skb_put(skb, size);
+@@ -395,7 +393,8 @@ int tipc_msg_build(struct tipc_msg *mhdr, struct msghdr *m, int offset,
+ 		if (unlikely(!skb)) {
+ 			if (pktmax != MAX_MSG_SIZE)
+ 				return -ENOMEM;
+-			rc = tipc_msg_build(mhdr, m, offset, dsz, FB_MTU, list);
++			rc = tipc_msg_build(mhdr, m, offset, dsz,
++					    one_page_mtu, list);
+ 			if (rc != dsz)
+ 				return rc;
+ 			if (tipc_msg_assemble(list))
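+
+one_page_mtu above replaces the hard-coded FB_MTU of 3744 with a value
+derived from the real per-skb overhead, so the fallback-sized fclone
+allocation in tipc_buf_acquire() is guaranteed to fit in a single page. A
+rough userspace rendering of the arithmetic (the headroom and
+skb_shared_info sizes are assumed ballpark figures, not the kernel's exact
+values):
+
+#include <stdio.h>
+
+#define PAGE_SIZE	4096UL
+#define SMP_CACHE_BYTES	64UL
+#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))
+
+int main(void)
+{
+	unsigned long headroom = 192;	/* stand-in for BUF_OVERHEAD */
+	unsigned long shinfo   = 320;	/* stand-in for sizeof(struct skb_shared_info) */
+	unsigned long mtu = PAGE_SIZE
+			    - ALIGN_UP(headroom, SMP_CACHE_BYTES)
+			    - ALIGN_UP(shinfo, SMP_CACHE_BYTES);
+
+	printf("one_page_mtu ~ %lu\n", mtu);	/* 3584 with these figures */
+	return 0;
+}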
+diff --git a/net/tipc/msg.h b/net/tipc/msg.h
+index 5d64596ba9877..64ae4c4c44f8c 100644
+--- a/net/tipc/msg.h
++++ b/net/tipc/msg.h
+@@ -99,9 +99,10 @@ struct plist;
+ #define MAX_H_SIZE                60	/* Largest possible TIPC header size */
+ 
+ #define MAX_MSG_SIZE (MAX_H_SIZE + TIPC_MAX_USER_MSG_SIZE)
+-#define FB_MTU                  3744
+ #define TIPC_MEDIA_INFO_OFFSET	5
+ 
++extern const int one_page_mtu;
++
+ struct tipc_skb_cb {
+ 	union {
+ 		struct {
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 694de024d0ee6..74e5701034aa6 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1153,7 +1153,7 @@ static int tls_sw_do_sendpage(struct sock *sk, struct page *page,
+ 	int ret = 0;
+ 	bool eor;
+ 
+-	eor = !(flags & (MSG_MORE | MSG_SENDPAGE_NOTLAST));
++	eor = !(flags & MSG_SENDPAGE_NOTLAST);
+ 	sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
+ 
+ 	/* Call the sk_stream functions to manage the sndbuf mem. */
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index 9d2a89d793c01..9ae13cccfb28d 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -128,12 +128,15 @@ static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr)
+ static inline bool xp_aligned_validate_desc(struct xsk_buff_pool *pool,
+ 					    struct xdp_desc *desc)
+ {
+-	u64 chunk;
+-
+-	if (desc->len > pool->chunk_size)
+-		return false;
++	u64 chunk, chunk_end;
+ 
+ 	chunk = xp_aligned_extract_addr(pool, desc->addr);
++	if (likely(desc->len)) {
++		chunk_end = xp_aligned_extract_addr(pool, desc->addr + desc->len - 1);
++		if (chunk != chunk_end)
++			return false;
++	}
++
+ 	if (chunk >= pool->addrs_cnt)
+ 		return false;
+ 
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index 6d6917b68856f..e843b0d9e2a61 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -268,6 +268,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
+ 		xso->num_exthdrs = 0;
+ 		xso->flags = 0;
+ 		xso->dev = NULL;
++		xso->real_dev = NULL;
+ 		dev_put(dev);
+ 
+ 		if (err != -EOPNOTSUPP)
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index e4cb0ff4dcf41..ac907b9d32d1e 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -711,15 +711,8 @@ out:
+ static int xfrm6_extract_output(struct xfrm_state *x, struct sk_buff *skb)
+ {
+ #if IS_ENABLED(CONFIG_IPV6)
+-	unsigned int ptr = 0;
+ 	int err;
+ 
+-	if (x->outer_mode.encap == XFRM_MODE_BEET &&
+-	    ipv6_find_hdr(skb, &ptr, NEXTHDR_FRAGMENT, NULL, NULL) >= 0) {
+-		net_warn_ratelimited("BEET mode doesn't support inner IPv6 fragments\n");
+-		return -EAFNOSUPPORT;
+-	}
+-
+ 	err = xfrm6_tunnel_check_size(skb);
+ 	if (err)
+ 		return err;
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 4496f7efa2200..c25586156c6a7 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -2518,7 +2518,7 @@ void xfrm_state_delete_tunnel(struct xfrm_state *x)
+ }
+ EXPORT_SYMBOL(xfrm_state_delete_tunnel);
+ 
+-u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
++u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu)
+ {
+ 	const struct xfrm_type *type = READ_ONCE(x->type);
+ 	struct crypto_aead *aead;
+@@ -2549,7 +2549,17 @@ u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
+ 	return ((mtu - x->props.header_len - crypto_aead_authsize(aead) -
+ 		 net_adj) & ~(blksize - 1)) + net_adj - 2;
+ }
+-EXPORT_SYMBOL_GPL(xfrm_state_mtu);
++EXPORT_SYMBOL_GPL(__xfrm_state_mtu);
++
++u32 xfrm_state_mtu(struct xfrm_state *x, int mtu)
++{
++	mtu = __xfrm_state_mtu(x, mtu);
++
++	if (x->props.family == AF_INET6 && mtu < IPV6_MIN_MTU)
++		return IPV6_MIN_MTU;
++
++	return mtu;
++}
+ 
+ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload)
+ {
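+
+The xfrm_state.c hunk above splits the MTU helper: __xfrm_state_mtu() keeps
+the raw overhead-adjusted computation, which the esp4/esp6 hunks earlier
+switch to for TFC padding, while the xfrm_state_mtu() wrapper now floors the
+IPv6 result at the protocol minimum of 1280. The wrapper pattern in
+isolation (the overhead constant below is an assumption for illustration):
+
+#define IPV6_MIN_MTU 1280U
+
+/* stand-in for __xfrm_state_mtu(): raw value with ESP overhead removed */
+static unsigned int state_mtu_raw(unsigned int mtu)
+{
+	return mtu - 56U;	/* assumed header/trailer/ICV overhead */
+}
+
+/* stand-in for xfrm_state_mtu(): never report below the IPv6 minimum */
+static unsigned int state_mtu(int is_ipv6, unsigned int mtu)
+{
+	mtu = state_mtu_raw(mtu);
+	if (is_ipv6 && mtu < IPV6_MIN_MTU)
+		return IPV6_MIN_MTU;
+	return mtu;
+}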
+diff --git a/samples/bpf/xdp_redirect_user.c b/samples/bpf/xdp_redirect_user.c
+index 41d705c3a1f7f..93854e135134c 100644
+--- a/samples/bpf/xdp_redirect_user.c
++++ b/samples/bpf/xdp_redirect_user.c
+@@ -130,7 +130,7 @@ int main(int argc, char **argv)
+ 	if (!(xdp_flags & XDP_FLAGS_SKB_MODE))
+ 		xdp_flags |= XDP_FLAGS_DRV_MODE;
+ 
+-	if (optind == argc) {
++	if (optind + 2 != argc) {
+ 		printf("usage: %s <IFNAME|IFINDEX>_IN <IFNAME|IFINDEX>_OUT\n", argv[0]);
+ 		return 1;
+ 	}
+@@ -213,5 +213,5 @@ int main(int argc, char **argv)
+ 	poll_stats(2, ifindex_out);
+ 
+ out:
+-	return 0;
++	return ret;
+ }
+diff --git a/scripts/Makefile.build b/scripts/Makefile.build
+index 949f723efe538..34d257653fb47 100644
+--- a/scripts/Makefile.build
++++ b/scripts/Makefile.build
+@@ -268,7 +268,8 @@ define rule_as_o_S
+ endef
+ 
+ # Built-in and composite module parts
+-$(obj)/%.o: $(src)/%.c $(recordmcount_source) $(objtool_dep) FORCE
++.SECONDEXPANSION:
++$(obj)/%.o: $(src)/%.c $(recordmcount_source) $$(objtool_dep) FORCE
+ 	$(call if_changed_rule,cc_o_c)
+ 	$(call cmd,force_checksrc)
+ 
+@@ -349,7 +350,7 @@ cmd_modversions_S =								\
+ 	fi
+ endif
+ 
+-$(obj)/%.o: $(src)/%.S $(objtool_dep) FORCE
++$(obj)/%.o: $(src)/%.S $$(objtool_dep) FORCE
+ 	$(call if_changed_rule,as_o_S)
+ 
+ targets += $(filter-out $(subdir-builtin), $(real-obj-y))
+diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
+index 0e0f6466b18d6..475faa15854e6 100755
+--- a/scripts/link-vmlinux.sh
++++ b/scripts/link-vmlinux.sh
+@@ -235,6 +235,10 @@ gen_btf()
+ 
+ 	vmlinux_link ${1}
+ 
++	if [ "${pahole_ver}" -ge "118" ] && [ "${pahole_ver}" -le "121" ]; then
++		# pahole 1.18 through 1.21 can't handle zero-sized per-CPU vars
++		extra_paholeopt="${extra_paholeopt} --skip_encoding_btf_vars"
++	fi
+ 	if [ "${pahole_ver}" -ge "121" ]; then
+ 		extra_paholeopt="${extra_paholeopt} --btf_gen_floats"
+ 	fi
+diff --git a/scripts/tools-support-relr.sh b/scripts/tools-support-relr.sh
+index 45e8aa360b457..cb55878bd5b81 100755
+--- a/scripts/tools-support-relr.sh
++++ b/scripts/tools-support-relr.sh
+@@ -7,7 +7,8 @@ trap "rm -f $tmp_file.o $tmp_file $tmp_file.bin" EXIT
+ cat << "END" | $CC -c -x c - -o $tmp_file.o >/dev/null 2>&1
+ void *p = &p;
+ END
+-$LD $tmp_file.o -shared -Bsymbolic --pack-dyn-relocs=relr -o $tmp_file
++$LD $tmp_file.o -shared -Bsymbolic --pack-dyn-relocs=relr \
++  --use-android-relr-tags -o $tmp_file
+ 
+ # Despite printing an error message, GNU nm still exits with exit code 0 if it
+ # sees a relr section. So we need to check that nothing is printed to stderr.
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index 0de367aaa2d31..7ac5204c8d1f2 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -521,7 +521,7 @@ void evm_inode_post_setattr(struct dentry *dentry, int ia_valid)
+ }
+ 
+ /*
+- * evm_inode_init_security - initializes security.evm
++ * evm_inode_init_security - initializes security.evm HMAC value
+  */
+ int evm_inode_init_security(struct inode *inode,
+ 				 const struct xattr *lsm_xattr,
+@@ -530,7 +530,8 @@ int evm_inode_init_security(struct inode *inode,
+ 	struct evm_xattr *xattr_data;
+ 	int rc;
+ 
+-	if (!evm_key_loaded() || !evm_protected_xattr(lsm_xattr->name))
++	if (!(evm_initialized & EVM_INIT_HMAC) ||
++	    !evm_protected_xattr(lsm_xattr->name))
+ 		return 0;
+ 
+ 	xattr_data = kzalloc(sizeof(*xattr_data), GFP_NOFS);
+diff --git a/security/integrity/evm/evm_secfs.c b/security/integrity/evm/evm_secfs.c
+index bbc85637e18b2..5f0da41bccd07 100644
+--- a/security/integrity/evm/evm_secfs.c
++++ b/security/integrity/evm/evm_secfs.c
+@@ -66,12 +66,13 @@ static ssize_t evm_read_key(struct file *filp, char __user *buf,
+ static ssize_t evm_write_key(struct file *file, const char __user *buf,
+ 			     size_t count, loff_t *ppos)
+ {
+-	int i, ret;
++	unsigned int i;
++	int ret;
+ 
+ 	if (!capable(CAP_SYS_ADMIN) || (evm_initialized & EVM_SETUP_COMPLETE))
+ 		return -EPERM;
+ 
+-	ret = kstrtoint_from_user(buf, count, 0, &i);
++	ret = kstrtouint_from_user(buf, count, 0, &i);
+ 
+ 	if (ret)
+ 		return ret;
+@@ -80,12 +81,12 @@ static ssize_t evm_write_key(struct file *file, const char __user *buf,
+ 	if (!i || (i & ~EVM_INIT_MASK) != 0)
+ 		return -EINVAL;
+ 
+-	/* Don't allow a request to freshly enable metadata writes if
+-	 * keys are loaded.
++	/*
++	 * Don't allow a request to enable metadata writes if
++	 * an HMAC key is loaded.
+ 	 */
+ 	if ((i & EVM_ALLOW_METADATA_WRITES) &&
+-	    ((evm_initialized & EVM_KEY_MASK) != 0) &&
+-	    !(evm_initialized & EVM_ALLOW_METADATA_WRITES))
++	    (evm_initialized & EVM_INIT_HMAC) != 0)
+ 		return -EPERM;
+ 
+ 	if (i & EVM_INIT_HMAC) {
+diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
+index 4e5eb0236278a..55dac618f2a12 100644
+--- a/security/integrity/ima/ima_appraise.c
++++ b/security/integrity/ima/ima_appraise.c
+@@ -522,8 +522,6 @@ void ima_inode_post_setattr(struct user_namespace *mnt_userns,
+ 		return;
+ 
+ 	action = ima_must_appraise(mnt_userns, inode, MAY_ACCESS, POST_SETATTR);
+-	if (!action)
+-		__vfs_removexattr(&init_user_ns, dentry, XATTR_NAME_IMA);
+ 	iint = integrity_iint_find(inode);
+ 	if (iint) {
+ 		set_bit(IMA_CHANGE_ATTR, &iint->atomic_flags);
+diff --git a/sound/firewire/amdtp-stream.c b/sound/firewire/amdtp-stream.c
+index 5805c5de39fbf..7a282d8e71485 100644
+--- a/sound/firewire/amdtp-stream.c
++++ b/sound/firewire/amdtp-stream.c
+@@ -1404,14 +1404,17 @@ int amdtp_domain_start(struct amdtp_domain *d, unsigned int ir_delay_cycle)
+ 	unsigned int queue_size;
+ 	struct amdtp_stream *s;
+ 	int cycle;
++	bool found = false;
+ 	int err;
+ 
+ 	// Select an IT context as IRQ target.
+ 	list_for_each_entry(s, &d->streams, list) {
+-		if (s->direction == AMDTP_OUT_STREAM)
++		if (s->direction == AMDTP_OUT_STREAM) {
++			found = true;
+ 			break;
++		}
+ 	}
+-	if (!s)
++	if (!found)
+ 		return -ENXIO;
+ 	d->irq_target = s;
+ 
+diff --git a/sound/firewire/bebob/bebob_stream.c b/sound/firewire/bebob/bebob_stream.c
+index b612ee3e33b65..317a4242cfe9f 100644
+--- a/sound/firewire/bebob/bebob_stream.c
++++ b/sound/firewire/bebob/bebob_stream.c
+@@ -883,6 +883,11 @@ static int detect_midi_ports(struct snd_bebob *bebob,
+ 		err = avc_bridgeco_get_plug_ch_count(bebob->unit, addr, &ch_count);
+ 		if (err < 0)
+ 			break;
++		// Yamaha GO44, GO46, Terratec Phase 24 and Phase x24 report 0 for the number of
++		// channels in external output plug 3 (MIDI type) even though they have a pair of
++		// physical MIDI jacks. As a workaround, assume one channel.
++		if (ch_count == 0)
++			ch_count = 1;
+ 		*midi_ports += ch_count;
+ 	}
+ 
+@@ -961,12 +966,12 @@ int snd_bebob_stream_discover(struct snd_bebob *bebob)
+ 	if (err < 0)
+ 		goto end;
+ 
+-	err = detect_midi_ports(bebob, bebob->rx_stream_formations, addr, AVC_BRIDGECO_PLUG_DIR_IN,
++	err = detect_midi_ports(bebob, bebob->tx_stream_formations, addr, AVC_BRIDGECO_PLUG_DIR_IN,
+ 				plugs[2], &bebob->midi_input_ports);
+ 	if (err < 0)
+ 		goto end;
+ 
+-	err = detect_midi_ports(bebob, bebob->tx_stream_formations, addr, AVC_BRIDGECO_PLUG_DIR_OUT,
++	err = detect_midi_ports(bebob, bebob->rx_stream_formations, addr, AVC_BRIDGECO_PLUG_DIR_OUT,
+ 				plugs[3], &bebob->midi_output_ports);
+ 	if (err < 0)
+ 		goto end;
+diff --git a/sound/firewire/motu/motu-protocol-v2.c b/sound/firewire/motu/motu-protocol-v2.c
+index e59e69ab1538b..784073aa10265 100644
+--- a/sound/firewire/motu/motu-protocol-v2.c
++++ b/sound/firewire/motu/motu-protocol-v2.c
+@@ -353,6 +353,7 @@ const struct snd_motu_spec snd_motu_spec_8pre = {
+ 	.protocol_version = SND_MOTU_PROTOCOL_V2,
+ 	.flags = SND_MOTU_SPEC_RX_MIDI_2ND_Q |
+ 		 SND_MOTU_SPEC_TX_MIDI_2ND_Q,
+-	.tx_fixed_pcm_chunks = {10, 6, 0},
+-	.rx_fixed_pcm_chunks = {10, 6, 0},
++	// Two dummy chunks are always at the end of the data block.
++	.tx_fixed_pcm_chunks = {10, 10, 0},
++	.rx_fixed_pcm_chunks = {6, 6, 0},
+ };
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index ab5113cccffae..1ca320fef670f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -385,6 +385,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ 		alc_update_coef_idx(codec, 0x67, 0xf000, 0x3000);
+ 		fallthrough;
+ 	case 0x10ec0215:
++	case 0x10ec0230:
+ 	case 0x10ec0233:
+ 	case 0x10ec0235:
+ 	case 0x10ec0236:
+@@ -3153,6 +3154,7 @@ static void alc_disable_headset_jack_key(struct hda_codec *codec)
+ 		alc_update_coef_idx(codec, 0x49, 0x0045, 0x0);
+ 		alc_update_coef_idx(codec, 0x44, 0x0045 << 8, 0x0);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_write_coef_idx(codec, 0x48, 0x0);
+@@ -3180,6 +3182,7 @@ static void alc_enable_headset_jack_key(struct hda_codec *codec)
+ 		alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045);
+ 		alc_update_coef_idx(codec, 0x44, 0x007f << 8, 0x0045 << 8);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_write_coef_idx(codec, 0x48, 0xd011);
+@@ -4744,6 +4747,7 @@ static void alc_headset_mode_unplugged(struct hda_codec *codec)
+ 	case 0x10ec0255:
+ 		alc_process_coef_fw(codec, coef0255);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_process_coef_fw(codec, coef0256);
+@@ -4858,6 +4862,7 @@ static void alc_headset_mode_mic_in(struct hda_codec *codec, hda_nid_t hp_pin,
+ 		alc_process_coef_fw(codec, coef0255);
+ 		snd_hda_set_pin_ctl_cache(codec, mic_pin, PIN_VREF50);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_write_coef_idx(codec, 0x45, 0xc489);
+@@ -5007,6 +5012,7 @@ static void alc_headset_mode_default(struct hda_codec *codec)
+ 	case 0x10ec0255:
+ 		alc_process_coef_fw(codec, coef0255);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_write_coef_idx(codec, 0x1b, 0x0e4b);
+@@ -5105,6 +5111,7 @@ static void alc_headset_mode_ctia(struct hda_codec *codec)
+ 	case 0x10ec0255:
+ 		alc_process_coef_fw(codec, coef0255);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_process_coef_fw(codec, coef0256);
+@@ -5218,6 +5225,7 @@ static void alc_headset_mode_omtp(struct hda_codec *codec)
+ 	case 0x10ec0255:
+ 		alc_process_coef_fw(codec, coef0255);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_process_coef_fw(codec, coef0256);
+@@ -5318,6 +5326,7 @@ static void alc_determine_headset_type(struct hda_codec *codec)
+ 		val = alc_read_coef_idx(codec, 0x46);
+ 		is_ctia = (val & 0x0070) == 0x0070;
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_write_coef_idx(codec, 0x1b, 0x0e4b);
+@@ -5611,6 +5620,7 @@ static void alc255_set_default_jack_type(struct hda_codec *codec)
+ 	case 0x10ec0255:
+ 		alc_process_coef_fw(codec, alc255fw);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		alc_process_coef_fw(codec, alc256fw);
+@@ -6211,6 +6221,7 @@ static void alc_combo_jack_hp_jd_restart(struct hda_codec *codec)
+ 		alc_update_coef_idx(codec, 0x4a, 0x8000, 1 << 15); /* Reset HP JD */
+ 		alc_update_coef_idx(codec, 0x4a, 0x8000, 0 << 15);
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0235:
+ 	case 0x10ec0236:
+ 	case 0x10ec0255:
+@@ -6343,6 +6354,24 @@ static void alc_fixup_no_int_mic(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc285_fixup_hp_spectre_x360(struct hda_codec *codec,
++					  const struct hda_fixup *fix, int action)
++{
++	static const hda_nid_t conn[] = { 0x02 };
++	static const struct hda_pintbl pincfgs[] = {
++		{ 0x14, 0x90170110 },  /* rear speaker */
++		{ }
++	};
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		snd_hda_apply_pincfgs(codec, pincfgs);
++		/* force front speaker to DAC1 */
++		snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn);
++		break;
++	}
++}
++
+ /* for hda_fixup_thinkpad_acpi() */
+ #include "thinkpad_helper.c"
+ 
+@@ -7810,6 +7839,8 @@ static const struct hda_fixup alc269_fixups[] = {
+ 			{ 0x20, AC_VERB_SET_PROC_COEF, 0x4e4b },
+ 			{ }
+ 		},
++		.chained = true,
++		.chain_id = ALC289_FIXUP_ASUS_GA401,
+ 	},
+ 	[ALC285_FIXUP_HP_GPIO_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -8127,13 +8158,8 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chain_id = ALC269_FIXUP_HP_LINE1_MIC1_LED,
+ 	},
+ 	[ALC285_FIXUP_HP_SPECTRE_X360] = {
+-		.type = HDA_FIXUP_PINS,
+-		.v.pins = (const struct hda_pintbl[]) {
+-			{ 0x14, 0x90170110 }, /* enable top speaker */
+-			{}
+-		},
+-		.chained = true,
+-		.chain_id = ALC285_FIXUP_SPEAKER2_TO_DAC1,
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_hp_spectre_x360,
+ 	},
+ 	[ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -8319,6 +8345,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
++	SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
+ 	SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+@@ -8336,19 +8363,26 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 		      ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x87e7, "HP ProBook 450 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x87f1, "HP ProBook 630 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f2, "HP ProBook 640 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
++	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884c, "HP EliteBook 840 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8862, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8863, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x888d, "HP ZBook Power 15.6 inch G8 Mobile Workstation PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -9341,6 +9375,7 @@ static int patch_alc269(struct hda_codec *codec)
+ 		spec->shutup = alc256_shutup;
+ 		spec->init_hook = alc256_init;
+ 		break;
++	case 0x10ec0230:
+ 	case 0x10ec0236:
+ 	case 0x10ec0256:
+ 		spec->codec_variant = ALC269_TYPE_ALC256;
+@@ -10632,6 +10667,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = {
+ 	HDA_CODEC_ENTRY(0x10ec0221, "ALC221", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0222, "ALC222", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0225, "ALC225", patch_alc269),
++	HDA_CODEC_ENTRY(0x10ec0230, "ALC236", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0231, "ALC231", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0233, "ALC233", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0234, "ALC234", patch_alc269),
+diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
+index 5b124c4ad5725..11b398be0954f 100644
+--- a/sound/pci/intel8x0.c
++++ b/sound/pci/intel8x0.c
+@@ -692,7 +692,7 @@ static inline void snd_intel8x0_update(struct intel8x0 *chip, struct ichdev *ich
+ 	int status, civ, i, step;
+ 	int ack = 0;
+ 
+-	if (!ichdev->prepared || ichdev->suspended)
++	if (!(ichdev->prepared || chip->in_measurement) || ichdev->suspended)
+ 		return;
+ 
+ 	spin_lock_irqsave(&chip->reg_lock, flags);
+diff --git a/sound/soc/atmel/atmel-i2s.c b/sound/soc/atmel/atmel-i2s.c
+index 584656cc7d3cb..e5c4625b7771f 100644
+--- a/sound/soc/atmel/atmel-i2s.c
++++ b/sound/soc/atmel/atmel-i2s.c
+@@ -200,6 +200,7 @@ struct atmel_i2s_dev {
+ 	unsigned int				fmt;
+ 	const struct atmel_i2s_gck_param	*gck_param;
+ 	const struct atmel_i2s_caps		*caps;
++	int					clk_use_no;
+ };
+ 
+ static irqreturn_t atmel_i2s_interrupt(int irq, void *dev_id)
+@@ -321,9 +322,16 @@ static int atmel_i2s_hw_params(struct snd_pcm_substream *substream,
+ {
+ 	struct atmel_i2s_dev *dev = snd_soc_dai_get_drvdata(dai);
+ 	bool is_playback = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK);
+-	unsigned int mr = 0;
++	unsigned int mr = 0, mr_mask;
+ 	int ret;
+ 
++	mr_mask = ATMEL_I2SC_MR_FORMAT_MASK | ATMEL_I2SC_MR_MODE_MASK |
++		ATMEL_I2SC_MR_DATALENGTH_MASK;
++	if (is_playback)
++		mr_mask |= ATMEL_I2SC_MR_TXMONO;
++	else
++		mr_mask |= ATMEL_I2SC_MR_RXMONO;
++
+ 	switch (dev->fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ 	case SND_SOC_DAIFMT_I2S:
+ 		mr |= ATMEL_I2SC_MR_FORMAT_I2S;
+@@ -402,7 +410,7 @@ static int atmel_i2s_hw_params(struct snd_pcm_substream *substream,
+ 		return -EINVAL;
+ 	}
+ 
+-	return regmap_write(dev->regmap, ATMEL_I2SC_MR, mr);
++	return regmap_update_bits(dev->regmap, ATMEL_I2SC_MR, mr_mask, mr);
+ }
+ 
+ static int atmel_i2s_switch_mck_generator(struct atmel_i2s_dev *dev,
+@@ -495,18 +503,28 @@ static int atmel_i2s_trigger(struct snd_pcm_substream *substream, int cmd,
+ 	is_master = (mr & ATMEL_I2SC_MR_MODE_MASK) == ATMEL_I2SC_MR_MODE_MASTER;
+ 
+ 	/* If master starts, enable the audio clock. */
+-	if (is_master && mck_enabled)
+-		err = atmel_i2s_switch_mck_generator(dev, true);
+-	if (err)
+-		return err;
++	if (is_master && mck_enabled) {
++		if (!dev->clk_use_no) {
++			err = atmel_i2s_switch_mck_generator(dev, true);
++			if (err)
++				return err;
++		}
++		dev->clk_use_no++;
++	}
+ 
+ 	err = regmap_write(dev->regmap, ATMEL_I2SC_CR, cr);
+ 	if (err)
+ 		return err;
+ 
+ 	/* If master stops, disable the audio clock. */
+-	if (is_master && !mck_enabled)
+-		err = atmel_i2s_switch_mck_generator(dev, false);
++	if (is_master && !mck_enabled) {
++		if (dev->clk_use_no == 1) {
++			err = atmel_i2s_switch_mck_generator(dev, false);
++			if (err)
++				return err;
++		}
++		dev->clk_use_no--;
++	}
+ 
+ 	return err;
+ }
+@@ -542,6 +560,7 @@ static struct snd_soc_dai_driver atmel_i2s_dai = {
+ 	},
+ 	.ops = &atmel_i2s_dai_ops,
+ 	.symmetric_rate = 1,
++	.symmetric_sample_bits = 1,
+ };
+ 
+ static const struct snd_soc_component_driver atmel_i2s_component = {
+diff --git a/sound/soc/codecs/cs42l42.h b/sound/soc/codecs/cs42l42.h
+index 36b763f0d1a06..386c40f9ed311 100644
+--- a/sound/soc/codecs/cs42l42.h
++++ b/sound/soc/codecs/cs42l42.h
+@@ -79,7 +79,7 @@
+ #define CS42L42_HP_PDN_SHIFT		3
+ #define CS42L42_HP_PDN_MASK		(1 << CS42L42_HP_PDN_SHIFT)
+ #define CS42L42_ADC_PDN_SHIFT		2
+-#define CS42L42_ADC_PDN_MASK		(1 << CS42L42_HP_PDN_SHIFT)
++#define CS42L42_ADC_PDN_MASK		(1 << CS42L42_ADC_PDN_SHIFT)
+ #define CS42L42_PDN_ALL_SHIFT		0
+ #define CS42L42_PDN_ALL_MASK		(1 << CS42L42_PDN_ALL_SHIFT)
+ 
+diff --git a/sound/soc/codecs/max98373-sdw.c b/sound/soc/codecs/max98373-sdw.c
+index f3a12205cd484..dc520effc61cb 100644
+--- a/sound/soc/codecs/max98373-sdw.c
++++ b/sound/soc/codecs/max98373-sdw.c
+@@ -271,7 +271,7 @@ static __maybe_unused int max98373_resume(struct device *dev)
+ 	struct max98373_priv *max98373 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!max98373->hw_init)
++	if (!max98373->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+@@ -362,7 +362,7 @@ static int max98373_io_init(struct sdw_slave *slave)
+ 	struct device *dev = &slave->dev;
+ 	struct max98373_priv *max98373 = dev_get_drvdata(dev);
+ 
+-	if (max98373->pm_init_once) {
++	if (max98373->first_hw_init) {
+ 		regcache_cache_only(max98373->regmap, false);
+ 		regcache_cache_bypass(max98373->regmap, true);
+ 	}
+@@ -370,7 +370,7 @@ static int max98373_io_init(struct sdw_slave *slave)
+ 	/*
+ 	 * PM runtime is only enabled when a Slave reports as Attached
+ 	 */
+-	if (!max98373->pm_init_once) {
++	if (!max98373->first_hw_init) {
+ 		/* set autosuspend parameters */
+ 		pm_runtime_set_autosuspend_delay(dev, 3000);
+ 		pm_runtime_use_autosuspend(dev);
+@@ -462,12 +462,12 @@ static int max98373_io_init(struct sdw_slave *slave)
+ 	regmap_write(max98373->regmap, MAX98373_R20B5_BDE_EN, 1);
+ 	regmap_write(max98373->regmap, MAX98373_R20E2_LIMITER_EN, 1);
+ 
+-	if (max98373->pm_init_once) {
++	if (max98373->first_hw_init) {
+ 		regcache_cache_bypass(max98373->regmap, false);
+ 		regcache_mark_dirty(max98373->regmap);
+ 	}
+ 
+-	max98373->pm_init_once = true;
++	max98373->first_hw_init = true;
+ 	max98373->hw_init = true;
+ 
+ 	pm_runtime_mark_last_busy(dev);
+@@ -787,6 +787,8 @@ static int max98373_init(struct sdw_slave *slave, struct regmap *regmap)
+ 	max98373->cache = devm_kcalloc(dev, max98373->cache_num,
+ 				       sizeof(*max98373->cache),
+ 				       GFP_KERNEL);
++	if (!max98373->cache)
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < max98373->cache_num; i++)
+ 		max98373->cache[i].reg = max98373_sdw_cache_reg[i];
+@@ -795,7 +797,7 @@ static int max98373_init(struct sdw_slave *slave, struct regmap *regmap)
+ 	max98373_slot_config(dev, max98373);
+ 
+ 	max98373->hw_init = false;
+-	max98373->pm_init_once = false;
++	max98373->first_hw_init = false;
+ 
+ 	/* codec registration  */
+ 	ret = devm_snd_soc_register_component(dev, &soc_codec_dev_max98373_sdw,
+diff --git a/sound/soc/codecs/max98373.h b/sound/soc/codecs/max98373.h
+index 73a2cf69d84ad..e1810b3b1620b 100644
+--- a/sound/soc/codecs/max98373.h
++++ b/sound/soc/codecs/max98373.h
+@@ -226,7 +226,7 @@ struct max98373_priv {
+ 	/* variables to support soundwire */
+ 	struct sdw_slave *slave;
+ 	bool hw_init;
+-	bool pm_init_once;
++	bool first_hw_init;
+ 	int slot;
+ 	unsigned int rx_mask;
+ };
+diff --git a/sound/soc/codecs/rk3328_codec.c b/sound/soc/codecs/rk3328_codec.c
+index bfefefcc76d81..758d439e8c7a5 100644
+--- a/sound/soc/codecs/rk3328_codec.c
++++ b/sound/soc/codecs/rk3328_codec.c
+@@ -474,7 +474,8 @@ static int rk3328_platform_probe(struct platform_device *pdev)
+ 	rk3328->pclk = devm_clk_get(&pdev->dev, "pclk");
+ 	if (IS_ERR(rk3328->pclk)) {
+ 		dev_err(&pdev->dev, "can't get acodec pclk\n");
+-		return PTR_ERR(rk3328->pclk);
++		ret = PTR_ERR(rk3328->pclk);
++		goto err_unprepare_mclk;
+ 	}
+ 
+ 	ret = clk_prepare_enable(rk3328->pclk);
+@@ -484,19 +485,34 @@ static int rk3328_platform_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	base = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(base))
+-		return PTR_ERR(base);
++	if (IS_ERR(base)) {
++		ret = PTR_ERR(base);
++		goto err_unprepare_pclk;
++	}
+ 
+ 	rk3328->regmap = devm_regmap_init_mmio(&pdev->dev, base,
+ 					       &rk3328_codec_regmap_config);
+-	if (IS_ERR(rk3328->regmap))
+-		return PTR_ERR(rk3328->regmap);
++	if (IS_ERR(rk3328->regmap)) {
++		ret = PTR_ERR(rk3328->regmap);
++		goto err_unprepare_pclk;
++	}
+ 
+ 	platform_set_drvdata(pdev, rk3328);
+ 
+-	return devm_snd_soc_register_component(&pdev->dev, &soc_codec_rk3328,
++	ret = devm_snd_soc_register_component(&pdev->dev, &soc_codec_rk3328,
+ 					       rk3328_dai,
+ 					       ARRAY_SIZE(rk3328_dai));
++	if (ret)
++		goto err_unprepare_pclk;
++
++	return 0;
++
++err_unprepare_pclk:
++	clk_disable_unprepare(rk3328->pclk);
++
++err_unprepare_mclk:
++	clk_disable_unprepare(rk3328->mclk);
++	return ret;
+ }
+ 
+ static const struct of_device_id rk3328_codec_of_match[] __maybe_unused = {
+diff --git a/sound/soc/codecs/rt1308-sdw.c b/sound/soc/codecs/rt1308-sdw.c
+index 1c226994aebd8..f716668de6400 100644
+--- a/sound/soc/codecs/rt1308-sdw.c
++++ b/sound/soc/codecs/rt1308-sdw.c
+@@ -709,7 +709,7 @@ static int __maybe_unused rt1308_dev_resume(struct device *dev)
+ 	struct rt1308_sdw_priv *rt1308 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt1308->hw_init)
++	if (!rt1308->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/codecs/rt1316-sdw.c b/sound/soc/codecs/rt1316-sdw.c
+index 3b029c56467de..09b4914bba1bf 100644
+--- a/sound/soc/codecs/rt1316-sdw.c
++++ b/sound/soc/codecs/rt1316-sdw.c
+@@ -701,7 +701,7 @@ static int __maybe_unused rt1316_dev_resume(struct device *dev)
+ 	struct rt1316_sdw_priv *rt1316 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt1316->hw_init)
++	if (!rt1316->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/codecs/rt5682-i2c.c b/sound/soc/codecs/rt5682-i2c.c
+index 8ea9f1d9fec0e..cd964e023d96e 100644
+--- a/sound/soc/codecs/rt5682-i2c.c
++++ b/sound/soc/codecs/rt5682-i2c.c
+@@ -273,6 +273,7 @@ static void rt5682_i2c_shutdown(struct i2c_client *client)
+ {
+ 	struct rt5682_priv *rt5682 = i2c_get_clientdata(client);
+ 
++	disable_irq(client->irq);
+ 	cancel_delayed_work_sync(&rt5682->jack_detect_work);
+ 	cancel_delayed_work_sync(&rt5682->jd_check_work);
+ 
+diff --git a/sound/soc/codecs/rt5682-sdw.c b/sound/soc/codecs/rt5682-sdw.c
+index e78ba3b064c4f..54873730bec55 100644
+--- a/sound/soc/codecs/rt5682-sdw.c
++++ b/sound/soc/codecs/rt5682-sdw.c
+@@ -400,6 +400,11 @@ static int rt5682_io_init(struct device *dev, struct sdw_slave *slave)
+ 
+ 	pm_runtime_get_noresume(&slave->dev);
+ 
++	if (rt5682->first_hw_init) {
++		regcache_cache_only(rt5682->regmap, false);
++		regcache_cache_bypass(rt5682->regmap, true);
++	}
++
+ 	while (loop > 0) {
+ 		regmap_read(rt5682->regmap, RT5682_DEVICE_ID, &val);
+ 		if (val == DEVICE_ID)
+@@ -408,14 +413,11 @@ static int rt5682_io_init(struct device *dev, struct sdw_slave *slave)
+ 		usleep_range(30000, 30005);
+ 		loop--;
+ 	}
++
+ 	if (val != DEVICE_ID) {
+ 		dev_err(dev, "Device with ID register %x is not rt5682\n", val);
+-		return -ENODEV;
+-	}
+-
+-	if (rt5682->first_hw_init) {
+-		regcache_cache_only(rt5682->regmap, false);
+-		regcache_cache_bypass(rt5682->regmap, true);
++		ret = -ENODEV;
++		goto err_nodev;
+ 	}
+ 
+ 	rt5682_calibrate(rt5682);
+@@ -486,10 +488,11 @@ reinit:
+ 	rt5682->hw_init = true;
+ 	rt5682->first_hw_init = true;
+ 
++err_nodev:
+ 	pm_runtime_mark_last_busy(&slave->dev);
+ 	pm_runtime_put_autosuspend(&slave->dev);
+ 
+-	dev_dbg(&slave->dev, "%s hw_init complete\n", __func__);
++	dev_dbg(&slave->dev, "%s hw_init complete: %d\n", __func__, ret);
+ 
+ 	return ret;
+ }
+@@ -743,7 +746,7 @@ static int __maybe_unused rt5682_dev_resume(struct device *dev)
+ 	struct rt5682_priv *rt5682 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt5682->hw_init)
++	if (!rt5682->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/codecs/rt700-sdw.c b/sound/soc/codecs/rt700-sdw.c
+index ff9c081fd52af..d1d9c0f455b43 100644
+--- a/sound/soc/codecs/rt700-sdw.c
++++ b/sound/soc/codecs/rt700-sdw.c
+@@ -498,7 +498,7 @@ static int __maybe_unused rt700_dev_resume(struct device *dev)
+ 	struct rt700_priv *rt700 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt700->hw_init)
++	if (!rt700->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/codecs/rt711-sdca-sdw.c b/sound/soc/codecs/rt711-sdca-sdw.c
+index 9685c8905468a..03cd3e0142f99 100644
+--- a/sound/soc/codecs/rt711-sdca-sdw.c
++++ b/sound/soc/codecs/rt711-sdca-sdw.c
+@@ -75,6 +75,16 @@ static bool rt711_sdca_mbq_readable_register(struct device *dev, unsigned int re
+ 	case 0x5b00000 ... 0x5b000ff:
+ 	case 0x5f00000 ... 0x5f000ff:
+ 	case 0x6100000 ... 0x61000ff:
++	case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT711_SDCA_ENT_USER_FU05, RT711_SDCA_CTL_FU_VOLUME, CH_L):
++	case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT711_SDCA_ENT_USER_FU05, RT711_SDCA_CTL_FU_VOLUME, CH_R):
++	case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT711_SDCA_ENT_USER_FU1E, RT711_SDCA_CTL_FU_VOLUME, CH_L):
++	case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT711_SDCA_ENT_USER_FU1E, RT711_SDCA_CTL_FU_VOLUME, CH_R):
++	case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT711_SDCA_ENT_USER_FU0F, RT711_SDCA_CTL_FU_VOLUME, CH_L):
++	case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT711_SDCA_ENT_USER_FU0F, RT711_SDCA_CTL_FU_VOLUME, CH_R):
++	case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT711_SDCA_ENT_PLATFORM_FU44, RT711_SDCA_CTL_FU_CH_GAIN, CH_L):
++	case SDW_SDCA_CTL(FUNC_NUM_JACK_CODEC, RT711_SDCA_ENT_PLATFORM_FU44, RT711_SDCA_CTL_FU_CH_GAIN, CH_R):
++	case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT711_SDCA_ENT_PLATFORM_FU15, RT711_SDCA_CTL_FU_CH_GAIN, CH_L):
++	case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT711_SDCA_ENT_PLATFORM_FU15, RT711_SDCA_CTL_FU_CH_GAIN, CH_R):
+ 		return true;
+ 	default:
+ 		return false;
+@@ -380,7 +390,7 @@ static int __maybe_unused rt711_sdca_dev_resume(struct device *dev)
+ 	struct rt711_sdca_priv *rt711 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt711->hw_init)
++	if (!rt711->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/codecs/rt711-sdca.c b/sound/soc/codecs/rt711-sdca.c
+index 24a084e0b48a1..0b0c230dcf716 100644
+--- a/sound/soc/codecs/rt711-sdca.c
++++ b/sound/soc/codecs/rt711-sdca.c
+@@ -1500,6 +1500,8 @@ int rt711_sdca_io_init(struct device *dev, struct sdw_slave *slave)
+ 	if (rt711->first_hw_init) {
+ 		regcache_cache_only(rt711->regmap, false);
+ 		regcache_cache_bypass(rt711->regmap, true);
++		regcache_cache_only(rt711->mbq_regmap, false);
++		regcache_cache_bypass(rt711->mbq_regmap, true);
+ 	} else {
+ 		/*
+ 		 * PM runtime is only enabled when a Slave reports as Attached
+@@ -1565,6 +1567,8 @@ int rt711_sdca_io_init(struct device *dev, struct sdw_slave *slave)
+ 	if (rt711->first_hw_init) {
+ 		regcache_cache_bypass(rt711->regmap, false);
+ 		regcache_mark_dirty(rt711->regmap);
++		regcache_cache_bypass(rt711->mbq_regmap, false);
++		regcache_mark_dirty(rt711->mbq_regmap);
+ 	} else
+ 		rt711->first_hw_init = true;
+ 
+diff --git a/sound/soc/codecs/rt711-sdw.c b/sound/soc/codecs/rt711-sdw.c
+index 8f5ebe92d4076..15299084429f2 100644
+--- a/sound/soc/codecs/rt711-sdw.c
++++ b/sound/soc/codecs/rt711-sdw.c
+@@ -501,7 +501,7 @@ static int __maybe_unused rt711_dev_resume(struct device *dev)
+ 	struct rt711_priv *rt711 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt711->hw_init)
++	if (!rt711->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/codecs/rt715-sdca-sdw.c b/sound/soc/codecs/rt715-sdca-sdw.c
+index 1350798406f0a..a5c673f43d824 100644
+--- a/sound/soc/codecs/rt715-sdca-sdw.c
++++ b/sound/soc/codecs/rt715-sdca-sdw.c
+@@ -70,6 +70,7 @@ static bool rt715_sdca_mbq_readable_register(struct device *dev, unsigned int re
+ 	case 0x2000036:
+ 	case 0x2000037:
+ 	case 0x2000039:
++	case 0x2000044:
+ 	case 0x6100000:
+ 		return true;
+ 	default:
+@@ -224,7 +225,7 @@ static int __maybe_unused rt715_dev_resume(struct device *dev)
+ 	struct rt715_sdca_priv *rt715 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt715->hw_init)
++	if (!rt715->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/codecs/rt715-sdca-sdw.h b/sound/soc/codecs/rt715-sdca-sdw.h
+index cd365bb60747e..0cbc14844f8c2 100644
+--- a/sound/soc/codecs/rt715-sdca-sdw.h
++++ b/sound/soc/codecs/rt715-sdca-sdw.h
+@@ -113,6 +113,7 @@ static const struct reg_default rt715_mbq_reg_defaults_sdca[] = {
+ 	{ 0x2000036, 0x0000 },
+ 	{ 0x2000037, 0x0000 },
+ 	{ 0x2000039, 0xaa81 },
++	{ 0x2000044, 0x0202 },
+ 	{ 0x6100000, 0x0100 },
+ 	{ SDW_SDCA_CTL(FUN_MIC_ARRAY, RT715_SDCA_FU_ADC8_9_VOL,
+ 		RT715_SDCA_FU_VOL_CTRL, CH_01), 0x00 },
+diff --git a/sound/soc/codecs/rt715-sdca.c b/sound/soc/codecs/rt715-sdca.c
+index 7db76c19e0480..66e166568c508 100644
+--- a/sound/soc/codecs/rt715-sdca.c
++++ b/sound/soc/codecs/rt715-sdca.c
+@@ -997,7 +997,7 @@ int rt715_sdca_init(struct device *dev, struct regmap *mbq_regmap,
+ 	 * HW init will be performed when device reports present
+ 	 */
+ 	rt715->hw_init = false;
+-	rt715->first_init = false;
++	rt715->first_hw_init = false;
+ 
+ 	ret = devm_snd_soc_register_component(dev,
+ 			&soc_codec_dev_rt715_sdca,
+@@ -1018,7 +1018,7 @@ int rt715_sdca_io_init(struct device *dev, struct sdw_slave *slave)
+ 	/*
+ 	 * PM runtime is only enabled when a Slave reports as Attached
+ 	 */
+-	if (!rt715->first_init) {
++	if (!rt715->first_hw_init) {
+ 		/* set autosuspend parameters */
+ 		pm_runtime_set_autosuspend_delay(&slave->dev, 3000);
+ 		pm_runtime_use_autosuspend(&slave->dev);
+@@ -1031,7 +1031,7 @@ int rt715_sdca_io_init(struct device *dev, struct sdw_slave *slave)
+ 
+ 		pm_runtime_enable(&slave->dev);
+ 
+-		rt715->first_init = true;
++		rt715->first_hw_init = true;
+ 	}
+ 
+ 	pm_runtime_get_noresume(&slave->dev);
+@@ -1054,6 +1054,9 @@ int rt715_sdca_io_init(struct device *dev, struct sdw_slave *slave)
+ 		rt715_sdca_index_update_bits(rt715, RT715_VENDOR_REG,
+ 			RT715_REV_1, 0x40, 0x40);
+ 	}
++	/* DFLL Calibration trigger */
++	rt715_sdca_index_update_bits(rt715, RT715_VENDOR_REG,
++			RT715_DFLL_VAD, 0x1, 0x1);
+ 	/* trigger mode = VAD enable */
+ 	regmap_write(rt715->regmap,
+ 		SDW_SDCA_CTL(FUN_MIC_ARRAY, RT715_SDCA_SMPU_TRIG_ST_EN,
+diff --git a/sound/soc/codecs/rt715-sdca.h b/sound/soc/codecs/rt715-sdca.h
+index 85ce4d95e5eb2..90881b455ece9 100644
+--- a/sound/soc/codecs/rt715-sdca.h
++++ b/sound/soc/codecs/rt715-sdca.h
+@@ -27,7 +27,7 @@ struct rt715_sdca_priv {
+ 	enum sdw_slave_status status;
+ 	struct sdw_bus_params params;
+ 	bool hw_init;
+-	bool first_init;
++	bool first_hw_init;
+ 	int l_is_unmute;
+ 	int r_is_unmute;
+ 	int hw_sdw_ver;
+@@ -81,6 +81,7 @@ struct rt715_sdca_kcontrol_private {
+ #define RT715_AD_FUNC_EN				0x36
+ #define RT715_REV_1					0x37
+ #define RT715_SDW_INPUT_SEL				0x39
++#define RT715_DFLL_VAD					0x44
+ #define RT715_EXT_DMIC_CLK_CTRL2			0x54
+ 
+ /* Index (NID:61h) */
+diff --git a/sound/soc/codecs/rt715-sdw.c b/sound/soc/codecs/rt715-sdw.c
+index 81a1dd77b6f69..a7b21b03c08bb 100644
+--- a/sound/soc/codecs/rt715-sdw.c
++++ b/sound/soc/codecs/rt715-sdw.c
+@@ -541,7 +541,7 @@ static int __maybe_unused rt715_dev_resume(struct device *dev)
+ 	struct rt715_priv *rt715 = dev_get_drvdata(dev);
+ 	unsigned long time;
+ 
+-	if (!rt715->hw_init)
++	if (!rt715->first_hw_init)
+ 		return 0;
+ 
+ 	if (!slave->unattach_request)
+diff --git a/sound/soc/fsl/fsl_spdif.c b/sound/soc/fsl/fsl_spdif.c
+index c631de325a6e0..53499bc71fa99 100644
+--- a/sound/soc/fsl/fsl_spdif.c
++++ b/sound/soc/fsl/fsl_spdif.c
+@@ -1375,14 +1375,27 @@ static int fsl_spdif_probe(struct platform_device *pdev)
+ 					      &spdif_priv->cpu_dai_drv, 1);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to register DAI: %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	ret = imx_pcm_dma_init(pdev, IMX_SPDIF_DMABUF_SIZE);
+-	if (ret && ret != -EPROBE_DEFER)
+-		dev_err(&pdev->dev, "imx_pcm_dma_init failed: %d\n", ret);
++	if (ret) {
++		dev_err_probe(&pdev->dev, ret, "imx_pcm_dma_init failed\n");
++		goto err_pm_disable;
++	}
+ 
+ 	return ret;
++
++err_pm_disable:
++	pm_runtime_disable(&pdev->dev);
++	return ret;
++}
++
++static int fsl_spdif_remove(struct platform_device *pdev)
++{
++	pm_runtime_disable(&pdev->dev);
++
++	return 0;
+ }
+ 
+ #ifdef CONFIG_PM
+@@ -1391,6 +1404,9 @@ static int fsl_spdif_runtime_suspend(struct device *dev)
+ 	struct fsl_spdif_priv *spdif_priv = dev_get_drvdata(dev);
+ 	int i;
+ 
++	/* Disable all the interrupts */
++	regmap_update_bits(spdif_priv->regmap, REG_SPDIF_SIE, 0xffffff, 0);
++
+ 	regmap_read(spdif_priv->regmap, REG_SPDIF_SRPC,
+ 			&spdif_priv->regcache_srpc);
+ 	regcache_cache_only(spdif_priv->regmap, true);
+@@ -1487,6 +1503,7 @@ static struct platform_driver fsl_spdif_driver = {
+ 		.pm = &fsl_spdif_pm,
+ 	},
+ 	.probe = fsl_spdif_probe,
++	.remove = fsl_spdif_remove,
+ };
+ 
+ module_platform_driver(fsl_spdif_driver);
+diff --git a/sound/soc/fsl/fsl_xcvr.c b/sound/soc/fsl/fsl_xcvr.c
+index 6cb5581658485..46f3f2c687566 100644
+--- a/sound/soc/fsl/fsl_xcvr.c
++++ b/sound/soc/fsl/fsl_xcvr.c
+@@ -1233,6 +1233,16 @@ static __maybe_unused int fsl_xcvr_runtime_suspend(struct device *dev)
+ 	struct fsl_xcvr *xcvr = dev_get_drvdata(dev);
+ 	int ret;
+ 
++	/*
++	 * Clear interrupts, when streams starts or resumes after
++	 * suspend, interrupts are enabled in prepare(), so no need
++	 * to enable interrupts in resume().
++	 */
++	ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_IER0,
++				 FSL_XCVR_IRQ_EARC_ALL, 0);
++	if (ret < 0)
++		dev_err(dev, "Failed to clear IER0: %d\n", ret);
++
+ 	/* Assert M0+ reset */
+ 	ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL,
+ 				 FSL_XCVR_EXT_CTRL_CORE_RESET,
+diff --git a/sound/soc/hisilicon/hi6210-i2s.c b/sound/soc/hisilicon/hi6210-i2s.c
+index 907f5f1f7b445..ff05b9779e4be 100644
+--- a/sound/soc/hisilicon/hi6210-i2s.c
++++ b/sound/soc/hisilicon/hi6210-i2s.c
+@@ -102,18 +102,15 @@ static int hi6210_i2s_startup(struct snd_pcm_substream *substream,
+ 
+ 	for (n = 0; n < i2s->clocks; n++) {
+ 		ret = clk_prepare_enable(i2s->clk[n]);
+-		if (ret) {
+-			while (n--)
+-				clk_disable_unprepare(i2s->clk[n]);
+-			return ret;
+-		}
++		if (ret)
++			goto err_unprepare_clk;
+ 	}
+ 
+ 	ret = clk_set_rate(i2s->clk[CLK_I2S_BASE], 49152000);
+ 	if (ret) {
+ 		dev_err(i2s->dev, "%s: setting 49.152MHz base rate failed %d\n",
+ 			__func__, ret);
+-		return ret;
++		goto err_unprepare_clk;
+ 	}
+ 
+ 	/* enable clock before frequency division */
+@@ -165,6 +162,11 @@ static int hi6210_i2s_startup(struct snd_pcm_substream *substream,
+ 	hi6210_write_reg(i2s, HII2S_SW_RST_N, val);
+ 
+ 	return 0;
++
++err_unprepare_clk:
++	while (n--)
++		clk_disable_unprepare(i2s->clk[n]);
++	return ret;
+ }
+ 
+ static void hi6210_i2s_shutdown(struct snd_pcm_substream *substream,
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index ecd3f90f4bbea..dfad2ad129abb 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -196,6 +196,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(SOF_RT711_JD_SRC_JD1 |
+ 					SOF_SDW_TGL_HDMI |
++					SOF_RT715_DAI_ID_FIX |
+ 					SOF_SDW_PCH_DMIC),
+ 	},
+ 	{}
+diff --git a/sound/soc/mediatek/common/mtk-btcvsd.c b/sound/soc/mediatek/common/mtk-btcvsd.c
+index f85b5ea180ec0..d884bb7c0fc74 100644
+--- a/sound/soc/mediatek/common/mtk-btcvsd.c
++++ b/sound/soc/mediatek/common/mtk-btcvsd.c
+@@ -1281,7 +1281,7 @@ static const struct snd_soc_component_driver mtk_btcvsd_snd_platform = {
+ 
+ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
+ {
+-	int ret = 0;
++	int ret;
+ 	int irq_id;
+ 	u32 offset[5] = {0, 0, 0, 0, 0};
+ 	struct mtk_btcvsd_snd *btcvsd;
+@@ -1337,7 +1337,8 @@ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
+ 	btcvsd->bt_sram_bank2_base = of_iomap(dev->of_node, 1);
+ 	if (!btcvsd->bt_sram_bank2_base) {
+ 		dev_err(dev, "iomap bt_sram_bank2_base fail\n");
+-		return -EIO;
++		ret = -EIO;
++		goto unmap_pkv_err;
+ 	}
+ 
+ 	btcvsd->infra = syscon_regmap_lookup_by_phandle(dev->of_node,
+@@ -1345,7 +1346,8 @@ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
+ 	if (IS_ERR(btcvsd->infra)) {
+ 		dev_err(dev, "cannot find infra controller: %ld\n",
+ 			PTR_ERR(btcvsd->infra));
+-		return PTR_ERR(btcvsd->infra);
++		ret = PTR_ERR(btcvsd->infra);
++		goto unmap_bank2_err;
+ 	}
+ 
+ 	/* get offset */
+@@ -1354,7 +1356,7 @@ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
+ 					 ARRAY_SIZE(offset));
+ 	if (ret) {
+ 		dev_warn(dev, "%s(), get offset fail, ret %d\n", __func__, ret);
+-		return ret;
++		goto unmap_bank2_err;
+ 	}
+ 	btcvsd->infra_misc_offset = offset[0];
+ 	btcvsd->conn_bt_cvsd_mask = offset[1];
+@@ -1373,8 +1375,18 @@ static int mtk_btcvsd_snd_probe(struct platform_device *pdev)
+ 	mtk_btcvsd_snd_set_state(btcvsd, btcvsd->tx, BT_SCO_STATE_IDLE);
+ 	mtk_btcvsd_snd_set_state(btcvsd, btcvsd->rx, BT_SCO_STATE_IDLE);
+ 
+-	return devm_snd_soc_register_component(dev, &mtk_btcvsd_snd_platform,
+-					       NULL, 0);
++	ret = devm_snd_soc_register_component(dev, &mtk_btcvsd_snd_platform,
++					      NULL, 0);
++	if (ret)
++		goto unmap_bank2_err;
++
++	return 0;
++
++unmap_bank2_err:
++	iounmap(btcvsd->bt_sram_bank2_base);
++unmap_pkv_err:
++	iounmap(btcvsd->bt_pkv_base);
++	return ret;
+ }
+ 
+ static int mtk_btcvsd_snd_remove(struct platform_device *pdev)
+diff --git a/sound/soc/sh/rcar/adg.c b/sound/soc/sh/rcar/adg.c
+index 0b8ae3eee148f..93751099465d2 100644
+--- a/sound/soc/sh/rcar/adg.c
++++ b/sound/soc/sh/rcar/adg.c
+@@ -290,7 +290,6 @@ static void rsnd_adg_set_ssi_clk(struct rsnd_mod *ssi_mod, u32 val)
+ int rsnd_adg_clk_query(struct rsnd_priv *priv, unsigned int rate)
+ {
+ 	struct rsnd_adg *adg = rsnd_priv_to_adg(priv);
+-	struct clk *clk;
+ 	int i;
+ 	int sel_table[] = {
+ 		[CLKA] = 0x1,
+@@ -303,10 +302,9 @@ int rsnd_adg_clk_query(struct rsnd_priv *priv, unsigned int rate)
+ 	 * find suitable clock from
+ 	 * AUDIO_CLKA/AUDIO_CLKB/AUDIO_CLKC/AUDIO_CLKI.
+ 	 */
+-	for_each_rsnd_clk(clk, adg, i) {
++	for (i = 0; i < CLKMAX; i++)
+ 		if (rate == adg->clk_rate[i])
+ 			return sel_table[i];
+-	}
+ 
+ 	/*
+ 	 * find divided clock from BRGA/BRGB
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 2287f8c653150..eb216fef4ba75 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -223,9 +223,11 @@ static int parse_audio_format_rates_v1(struct snd_usb_audio *chip, struct audiof
+ 				continue;
+ 			/* C-Media CM6501 mislabels its 96 kHz altsetting */
+ 			/* Terratec Aureon 7.1 USB C-Media 6206, too */
++			/* Ozone Z90 USB C-Media, too */
+ 			if (rate == 48000 && nr_rates == 1 &&
+ 			    (chip->usb_id == USB_ID(0x0d8c, 0x0201) ||
+ 			     chip->usb_id == USB_ID(0x0d8c, 0x0102) ||
++			     chip->usb_id == USB_ID(0x0d8c, 0x0078) ||
+ 			     chip->usb_id == USB_ID(0x0ccd, 0x00b1)) &&
+ 			    fp->altsetting == 5 && fp->maxpacksize == 392)
+ 				rate = 96000;
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 428d581f988fe..30b3e128e28d8 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -3294,8 +3294,9 @@ static void snd_usb_mixer_dump_cval(struct snd_info_buffer *buffer,
+ 				    struct usb_mixer_elem_list *list)
+ {
+ 	struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list);
+-	static const char * const val_types[] = {"BOOLEAN", "INV_BOOLEAN",
+-				    "S8", "U8", "S16", "U16"};
++	static const char * const val_types[] = {
++		"BOOLEAN", "INV_BOOLEAN", "S8", "U8", "S16", "U16", "S32", "U32",
++	};
+ 	snd_iprintf(buffer, "    Info: id=%i, control=%i, cmask=0x%x, "
+ 			    "channels=%i, type=\"%s\"\n", cval->head.id,
+ 			    cval->control, cval->cmask, cval->channels,
+@@ -3605,6 +3606,9 @@ static int restore_mixer_value(struct usb_mixer_elem_list *list)
+ 	struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list);
+ 	int c, err, idx;
+ 
++	if (cval->val_type == USB_MIXER_BESPOKEN)
++		return 0;
++
+ 	if (cval->cmask) {
+ 		idx = 0;
+ 		for (c = 0; c < MAX_CHANNELS; c++) {
+diff --git a/sound/usb/mixer.h b/sound/usb/mixer.h
+index e5a01f17bf3c9..ea41e7a1f7bf2 100644
+--- a/sound/usb/mixer.h
++++ b/sound/usb/mixer.h
+@@ -55,6 +55,7 @@ enum {
+ 	USB_MIXER_U16,
+ 	USB_MIXER_S32,
+ 	USB_MIXER_U32,
++	USB_MIXER_BESPOKEN,	/* non-standard type */
+ };
+ 
+ typedef void (*usb_mixer_elem_dump_func_t)(struct snd_info_buffer *buffer,
+diff --git a/sound/usb/mixer_scarlett_gen2.c b/sound/usb/mixer_scarlett_gen2.c
+index 4caf379d5b991..bca3e7fe27df6 100644
+--- a/sound/usb/mixer_scarlett_gen2.c
++++ b/sound/usb/mixer_scarlett_gen2.c
+@@ -949,10 +949,15 @@ static int scarlett2_add_new_ctl(struct usb_mixer_interface *mixer,
+ 	if (!elem)
+ 		return -ENOMEM;
+ 
++	/* We set USB_MIXER_BESPOKEN type, so that the core USB mixer code
++	 * ignores them for resume and other operations.
++	 * Also, the head.id field is set to 0, as we don't use this field.
++	 */
+ 	elem->head.mixer = mixer;
+ 	elem->control = index;
+-	elem->head.id = index;
++	elem->head.id = 0;
+ 	elem->channels = channels;
++	elem->val_type = USB_MIXER_BESPOKEN;
+ 
+ 	kctl = snd_ctl_new1(ncontrol, elem);
+ 	if (!kctl) {
+diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
+index d9afb730136a4..0f36b9edd3f55 100644
+--- a/tools/bpf/bpftool/main.c
++++ b/tools/bpf/bpftool/main.c
+@@ -340,8 +340,10 @@ static int do_batch(int argc, char **argv)
+ 		n_argc = make_args(buf, n_argv, BATCH_ARG_NB_MAX, lines);
+ 		if (!n_argc)
+ 			continue;
+-		if (n_argc < 0)
++		if (n_argc < 0) {
++			err = n_argc;
+ 			goto err_close;
++		}
+ 
+ 		if (json_output) {
+ 			jsonw_start_object(json_wtr);
+diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
+index 7550fd9c31883..3ad9301b0f00c 100644
+--- a/tools/bpf/resolve_btfids/main.c
++++ b/tools/bpf/resolve_btfids/main.c
+@@ -655,6 +655,9 @@ static int symbols_patch(struct object *obj)
+ 	if (sets_patch(obj))
+ 		return -1;
+ 
++	/* Set type to ensure endian translation occurs. */
++	obj->efile.idlist->d_type = ELF_T_WORD;
++
+ 	elf_flagdata(obj->efile.idlist, ELF_C_SET, ELF_F_DIRTY);
+ 
+ 	err = elf_update(obj->efile.elf, ELF_C_WRITE);
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index 9de084b1c6993..f44f8a37f780e 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -1780,7 +1780,7 @@ static void sym_update_visibility(Elf64_Sym *sym, int sym_vis)
+ 	/* libelf doesn't provide setters for ST_VISIBILITY,
+ 	 * but it is stored in the lower 2 bits of st_other
+ 	 */
+-	sym->st_other &= 0x03;
++	sym->st_other &= ~0x03;
+ 	sym->st_other |= sym_vis;
+ }
+ 
+diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c
+index 523aa4157f801..bc821056aba90 100644
+--- a/tools/objtool/arch/x86/decode.c
++++ b/tools/objtool/arch/x86/decode.c
+@@ -684,7 +684,7 @@ static int elf_add_alternative(struct elf *elf,
+ 	sec = find_section_by_name(elf, ".altinstructions");
+ 	if (!sec) {
+ 		sec = elf_create_section(elf, ".altinstructions",
+-					 SHF_WRITE, size, 0);
++					 SHF_ALLOC, size, 0);
+ 
+ 		if (!sec) {
+ 			WARN_ELF("elf_create_section");
+diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c
+index 3ceaf7ef33013..cbd9b268f1684 100644
+--- a/tools/perf/util/llvm-utils.c
++++ b/tools/perf/util/llvm-utils.c
+@@ -504,6 +504,7 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
+ 			goto errout;
+ 		}
+ 
++		err = -ENOMEM;
+ 		if (asprintf(&pipe_template, "%s -emit-llvm | %s -march=bpf %s -filetype=obj -o -",
+ 			      template, llc_path, opts) < 0) {
+ 			pr_err("ERROR:\tnot enough memory to setup command line\n");
+@@ -524,6 +525,7 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
+ 
+ 	pr_debug("llvm compiling command template: %s\n", template);
+ 
++	err = -ENOMEM;
+ 	if (asprintf(&command_echo, "echo -n \"%s\"", template) < 0)
+ 		goto errout;
+ 
+diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
+index 4e4aa4c97ac59..3dfc543327af8 100644
+--- a/tools/perf/util/scripting-engines/trace-event-python.c
++++ b/tools/perf/util/scripting-engines/trace-event-python.c
+@@ -934,7 +934,7 @@ static PyObject *tuple_new(unsigned int sz)
+ 	return t;
+ }
+ 
+-static int tuple_set_u64(PyObject *t, unsigned int pos, u64 val)
++static int tuple_set_s64(PyObject *t, unsigned int pos, s64 val)
+ {
+ #if BITS_PER_LONG == 64
+ 	return PyTuple_SetItem(t, pos, _PyLong_FromLong(val));
+@@ -944,6 +944,22 @@ static int tuple_set_u64(PyObject *t, unsigned int pos, u64 val)
+ #endif
+ }
+ 
++/*
++ * Databases support only signed 64-bit numbers, so even though we are
++ * exporting a u64, it must be as s64.
++ */
++#define tuple_set_d64 tuple_set_s64
++
++static int tuple_set_u64(PyObject *t, unsigned int pos, u64 val)
++{
++#if BITS_PER_LONG == 64
++	return PyTuple_SetItem(t, pos, PyLong_FromUnsignedLong(val));
++#endif
++#if BITS_PER_LONG == 32
++	return PyTuple_SetItem(t, pos, PyLong_FromUnsignedLongLong(val));
++#endif
++}
++
+ static int tuple_set_s32(PyObject *t, unsigned int pos, s32 val)
+ {
+ 	return PyTuple_SetItem(t, pos, _PyLong_FromLong(val));
+@@ -967,7 +983,7 @@ static int python_export_evsel(struct db_export *dbe, struct evsel *evsel)
+ 
+ 	t = tuple_new(2);
+ 
+-	tuple_set_u64(t, 0, evsel->db_id);
++	tuple_set_d64(t, 0, evsel->db_id);
+ 	tuple_set_string(t, 1, evsel__name(evsel));
+ 
+ 	call_object(tables->evsel_handler, t, "evsel_table");
+@@ -985,7 +1001,7 @@ static int python_export_machine(struct db_export *dbe,
+ 
+ 	t = tuple_new(3);
+ 
+-	tuple_set_u64(t, 0, machine->db_id);
++	tuple_set_d64(t, 0, machine->db_id);
+ 	tuple_set_s32(t, 1, machine->pid);
+ 	tuple_set_string(t, 2, machine->root_dir ? machine->root_dir : "");
+ 
+@@ -1004,9 +1020,9 @@ static int python_export_thread(struct db_export *dbe, struct thread *thread,
+ 
+ 	t = tuple_new(5);
+ 
+-	tuple_set_u64(t, 0, thread->db_id);
+-	tuple_set_u64(t, 1, machine->db_id);
+-	tuple_set_u64(t, 2, main_thread_db_id);
++	tuple_set_d64(t, 0, thread->db_id);
++	tuple_set_d64(t, 1, machine->db_id);
++	tuple_set_d64(t, 2, main_thread_db_id);
+ 	tuple_set_s32(t, 3, thread->pid_);
+ 	tuple_set_s32(t, 4, thread->tid);
+ 
+@@ -1025,10 +1041,10 @@ static int python_export_comm(struct db_export *dbe, struct comm *comm,
+ 
+ 	t = tuple_new(5);
+ 
+-	tuple_set_u64(t, 0, comm->db_id);
++	tuple_set_d64(t, 0, comm->db_id);
+ 	tuple_set_string(t, 1, comm__str(comm));
+-	tuple_set_u64(t, 2, thread->db_id);
+-	tuple_set_u64(t, 3, comm->start);
++	tuple_set_d64(t, 2, thread->db_id);
++	tuple_set_d64(t, 3, comm->start);
+ 	tuple_set_s32(t, 4, comm->exec);
+ 
+ 	call_object(tables->comm_handler, t, "comm_table");
+@@ -1046,9 +1062,9 @@ static int python_export_comm_thread(struct db_export *dbe, u64 db_id,
+ 
+ 	t = tuple_new(3);
+ 
+-	tuple_set_u64(t, 0, db_id);
+-	tuple_set_u64(t, 1, comm->db_id);
+-	tuple_set_u64(t, 2, thread->db_id);
++	tuple_set_d64(t, 0, db_id);
++	tuple_set_d64(t, 1, comm->db_id);
++	tuple_set_d64(t, 2, thread->db_id);
+ 
+ 	call_object(tables->comm_thread_handler, t, "comm_thread_table");
+ 
+@@ -1068,8 +1084,8 @@ static int python_export_dso(struct db_export *dbe, struct dso *dso,
+ 
+ 	t = tuple_new(5);
+ 
+-	tuple_set_u64(t, 0, dso->db_id);
+-	tuple_set_u64(t, 1, machine->db_id);
++	tuple_set_d64(t, 0, dso->db_id);
++	tuple_set_d64(t, 1, machine->db_id);
+ 	tuple_set_string(t, 2, dso->short_name);
+ 	tuple_set_string(t, 3, dso->long_name);
+ 	tuple_set_string(t, 4, sbuild_id);
+@@ -1090,10 +1106,10 @@ static int python_export_symbol(struct db_export *dbe, struct symbol *sym,
+ 
+ 	t = tuple_new(6);
+ 
+-	tuple_set_u64(t, 0, *sym_db_id);
+-	tuple_set_u64(t, 1, dso->db_id);
+-	tuple_set_u64(t, 2, sym->start);
+-	tuple_set_u64(t, 3, sym->end);
++	tuple_set_d64(t, 0, *sym_db_id);
++	tuple_set_d64(t, 1, dso->db_id);
++	tuple_set_d64(t, 2, sym->start);
++	tuple_set_d64(t, 3, sym->end);
+ 	tuple_set_s32(t, 4, sym->binding);
+ 	tuple_set_string(t, 5, sym->name);
+ 
+@@ -1130,30 +1146,30 @@ static void python_export_sample_table(struct db_export *dbe,
+ 
+ 	t = tuple_new(24);
+ 
+-	tuple_set_u64(t, 0, es->db_id);
+-	tuple_set_u64(t, 1, es->evsel->db_id);
+-	tuple_set_u64(t, 2, es->al->maps->machine->db_id);
+-	tuple_set_u64(t, 3, es->al->thread->db_id);
+-	tuple_set_u64(t, 4, es->comm_db_id);
+-	tuple_set_u64(t, 5, es->dso_db_id);
+-	tuple_set_u64(t, 6, es->sym_db_id);
+-	tuple_set_u64(t, 7, es->offset);
+-	tuple_set_u64(t, 8, es->sample->ip);
+-	tuple_set_u64(t, 9, es->sample->time);
++	tuple_set_d64(t, 0, es->db_id);
++	tuple_set_d64(t, 1, es->evsel->db_id);
++	tuple_set_d64(t, 2, es->al->maps->machine->db_id);
++	tuple_set_d64(t, 3, es->al->thread->db_id);
++	tuple_set_d64(t, 4, es->comm_db_id);
++	tuple_set_d64(t, 5, es->dso_db_id);
++	tuple_set_d64(t, 6, es->sym_db_id);
++	tuple_set_d64(t, 7, es->offset);
++	tuple_set_d64(t, 8, es->sample->ip);
++	tuple_set_d64(t, 9, es->sample->time);
+ 	tuple_set_s32(t, 10, es->sample->cpu);
+-	tuple_set_u64(t, 11, es->addr_dso_db_id);
+-	tuple_set_u64(t, 12, es->addr_sym_db_id);
+-	tuple_set_u64(t, 13, es->addr_offset);
+-	tuple_set_u64(t, 14, es->sample->addr);
+-	tuple_set_u64(t, 15, es->sample->period);
+-	tuple_set_u64(t, 16, es->sample->weight);
+-	tuple_set_u64(t, 17, es->sample->transaction);
+-	tuple_set_u64(t, 18, es->sample->data_src);
++	tuple_set_d64(t, 11, es->addr_dso_db_id);
++	tuple_set_d64(t, 12, es->addr_sym_db_id);
++	tuple_set_d64(t, 13, es->addr_offset);
++	tuple_set_d64(t, 14, es->sample->addr);
++	tuple_set_d64(t, 15, es->sample->period);
++	tuple_set_d64(t, 16, es->sample->weight);
++	tuple_set_d64(t, 17, es->sample->transaction);
++	tuple_set_d64(t, 18, es->sample->data_src);
+ 	tuple_set_s32(t, 19, es->sample->flags & PERF_BRANCH_MASK);
+ 	tuple_set_s32(t, 20, !!(es->sample->flags & PERF_IP_FLAG_IN_TX));
+-	tuple_set_u64(t, 21, es->call_path_id);
+-	tuple_set_u64(t, 22, es->sample->insn_cnt);
+-	tuple_set_u64(t, 23, es->sample->cyc_cnt);
++	tuple_set_d64(t, 21, es->call_path_id);
++	tuple_set_d64(t, 22, es->sample->insn_cnt);
++	tuple_set_d64(t, 23, es->sample->cyc_cnt);
+ 
+ 	call_object(tables->sample_handler, t, "sample_table");
+ 
+@@ -1167,8 +1183,8 @@ static void python_export_synth(struct db_export *dbe, struct export_sample *es)
+ 
+ 	t = tuple_new(3);
+ 
+-	tuple_set_u64(t, 0, es->db_id);
+-	tuple_set_u64(t, 1, es->evsel->core.attr.config);
++	tuple_set_d64(t, 0, es->db_id);
++	tuple_set_d64(t, 1, es->evsel->core.attr.config);
+ 	tuple_set_bytes(t, 2, es->sample->raw_data, es->sample->raw_size);
+ 
+ 	call_object(tables->synth_handler, t, "synth_data");
+@@ -1200,10 +1216,10 @@ static int python_export_call_path(struct db_export *dbe, struct call_path *cp)
+ 
+ 	t = tuple_new(4);
+ 
+-	tuple_set_u64(t, 0, cp->db_id);
+-	tuple_set_u64(t, 1, parent_db_id);
+-	tuple_set_u64(t, 2, sym_db_id);
+-	tuple_set_u64(t, 3, cp->ip);
++	tuple_set_d64(t, 0, cp->db_id);
++	tuple_set_d64(t, 1, parent_db_id);
++	tuple_set_d64(t, 2, sym_db_id);
++	tuple_set_d64(t, 3, cp->ip);
+ 
+ 	call_object(tables->call_path_handler, t, "call_path_table");
+ 
+@@ -1221,20 +1237,20 @@ static int python_export_call_return(struct db_export *dbe,
+ 
+ 	t = tuple_new(14);
+ 
+-	tuple_set_u64(t, 0, cr->db_id);
+-	tuple_set_u64(t, 1, cr->thread->db_id);
+-	tuple_set_u64(t, 2, comm_db_id);
+-	tuple_set_u64(t, 3, cr->cp->db_id);
+-	tuple_set_u64(t, 4, cr->call_time);
+-	tuple_set_u64(t, 5, cr->return_time);
+-	tuple_set_u64(t, 6, cr->branch_count);
+-	tuple_set_u64(t, 7, cr->call_ref);
+-	tuple_set_u64(t, 8, cr->return_ref);
+-	tuple_set_u64(t, 9, cr->cp->parent->db_id);
++	tuple_set_d64(t, 0, cr->db_id);
++	tuple_set_d64(t, 1, cr->thread->db_id);
++	tuple_set_d64(t, 2, comm_db_id);
++	tuple_set_d64(t, 3, cr->cp->db_id);
++	tuple_set_d64(t, 4, cr->call_time);
++	tuple_set_d64(t, 5, cr->return_time);
++	tuple_set_d64(t, 6, cr->branch_count);
++	tuple_set_d64(t, 7, cr->call_ref);
++	tuple_set_d64(t, 8, cr->return_ref);
++	tuple_set_d64(t, 9, cr->cp->parent->db_id);
+ 	tuple_set_s32(t, 10, cr->flags);
+-	tuple_set_u64(t, 11, cr->parent_db_id);
+-	tuple_set_u64(t, 12, cr->insn_count);
+-	tuple_set_u64(t, 13, cr->cyc_count);
++	tuple_set_d64(t, 11, cr->parent_db_id);
++	tuple_set_d64(t, 12, cr->insn_count);
++	tuple_set_d64(t, 13, cr->cyc_count);
+ 
+ 	call_object(tables->call_return_handler, t, "call_return_table");
+ 
+@@ -1254,14 +1270,14 @@ static int python_export_context_switch(struct db_export *dbe, u64 db_id,
+ 
+ 	t = tuple_new(9);
+ 
+-	tuple_set_u64(t, 0, db_id);
+-	tuple_set_u64(t, 1, machine->db_id);
+-	tuple_set_u64(t, 2, sample->time);
++	tuple_set_d64(t, 0, db_id);
++	tuple_set_d64(t, 1, machine->db_id);
++	tuple_set_d64(t, 2, sample->time);
+ 	tuple_set_s32(t, 3, sample->cpu);
+-	tuple_set_u64(t, 4, th_out_id);
+-	tuple_set_u64(t, 5, comm_out_id);
+-	tuple_set_u64(t, 6, th_in_id);
+-	tuple_set_u64(t, 7, comm_in_id);
++	tuple_set_d64(t, 4, th_out_id);
++	tuple_set_d64(t, 5, comm_out_id);
++	tuple_set_d64(t, 6, th_in_id);
++	tuple_set_d64(t, 7, comm_in_id);
+ 	tuple_set_s32(t, 8, flags);
+ 
+ 	call_object(tables->context_switch_handler, t, "context_switch");
+diff --git a/tools/power/x86/intel-speed-select/isst-config.c b/tools/power/x86/intel-speed-select/isst-config.c
+index ab940c508ef0c..d4f0a7872e493 100644
+--- a/tools/power/x86/intel-speed-select/isst-config.c
++++ b/tools/power/x86/intel-speed-select/isst-config.c
+@@ -106,6 +106,22 @@ int is_skx_based_platform(void)
+ 	return 0;
+ }
+ 
++int is_spr_platform(void)
++{
++	if (cpu_model == 0x8F)
++		return 1;
++
++	return 0;
++}
++
++int is_icx_platform(void)
++{
++	if (cpu_model == 0x6A || cpu_model == 0x6C)
++		return 1;
++
++	return 0;
++}
++
+ static int update_cpu_model(void)
+ {
+ 	unsigned int ebx, ecx, edx;
+diff --git a/tools/power/x86/intel-speed-select/isst-core.c b/tools/power/x86/intel-speed-select/isst-core.c
+index 6a26d57699845..4431c8a0d40ae 100644
+--- a/tools/power/x86/intel-speed-select/isst-core.c
++++ b/tools/power/x86/intel-speed-select/isst-core.c
+@@ -201,6 +201,7 @@ void isst_get_uncore_mem_freq(int cpu, int config_index,
+ {
+ 	unsigned int resp;
+ 	int ret;
++
+ 	ret = isst_send_mbox_command(cpu, CONFIG_TDP, CONFIG_TDP_GET_MEM_FREQ,
+ 				     0, config_index, &resp);
+ 	if (ret) {
+@@ -209,6 +210,20 @@ void isst_get_uncore_mem_freq(int cpu, int config_index,
+ 	}
+ 
+ 	ctdp_level->mem_freq = resp & GENMASK(7, 0);
++	if (is_spr_platform()) {
++		ctdp_level->mem_freq *= 200;
++	} else if (is_icx_platform()) {
++		if (ctdp_level->mem_freq < 7) {
++			ctdp_level->mem_freq = (12 - ctdp_level->mem_freq) * 133.33 * 2 * 10;
++			ctdp_level->mem_freq /= 10;
++			if (ctdp_level->mem_freq % 10 > 5)
++				ctdp_level->mem_freq++;
++		} else {
++			ctdp_level->mem_freq = 0;
++		}
++	} else {
++		ctdp_level->mem_freq = 0;
++	}
+ 	debug_printf(
+ 		"cpu:%d ctdp:%d CONFIG_TDP_GET_MEM_FREQ resp:%x uncore mem_freq:%d\n",
+ 		cpu, config_index, resp, ctdp_level->mem_freq);
+diff --git a/tools/power/x86/intel-speed-select/isst-display.c b/tools/power/x86/intel-speed-select/isst-display.c
+index 3bf1820c0da11..f97d8859ada72 100644
+--- a/tools/power/x86/intel-speed-select/isst-display.c
++++ b/tools/power/x86/intel-speed-select/isst-display.c
+@@ -446,7 +446,7 @@ void isst_ctdp_display_information(int cpu, FILE *outf, int tdp_level,
+ 		if (ctdp_level->mem_freq) {
+ 			snprintf(header, sizeof(header), "mem-frequency(MHz)");
+ 			snprintf(value, sizeof(value), "%d",
+-				 ctdp_level->mem_freq * DISP_FREQ_MULTIPLIER);
++				 ctdp_level->mem_freq);
+ 			format_and_print(outf, level + 2, header, value);
+ 		}
+ 
+diff --git a/tools/power/x86/intel-speed-select/isst.h b/tools/power/x86/intel-speed-select/isst.h
+index 0cac6c54be873..1aa15d5ea57ce 100644
+--- a/tools/power/x86/intel-speed-select/isst.h
++++ b/tools/power/x86/intel-speed-select/isst.h
+@@ -257,5 +257,7 @@ extern int get_cpufreq_base_freq(int cpu);
+ extern int isst_read_pm_config(int cpu, int *cp_state, int *cp_cap);
+ extern void isst_display_error_info_message(int error, char *msg, int arg_valid, int arg);
+ extern int is_skx_based_platform(void);
++extern int is_spr_platform(void);
++extern int is_icx_platform(void);
+ extern void isst_trl_display_information(int cpu, FILE *outf, unsigned long long trl);
+ #endif
+diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
+index 4866f6a219018..d89efd9785d89 100644
+--- a/tools/testing/selftests/bpf/.gitignore
++++ b/tools/testing/selftests/bpf/.gitignore
+@@ -10,6 +10,7 @@ FEATURE-DUMP.libbpf
+ fixdep
+ test_dev_cgroup
+ /test_progs*
++!test_progs.h
+ test_verifier_log
+ feature
+ test_sock
+diff --git a/tools/testing/selftests/bpf/prog_tests/ringbuf.c b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
+index f9a8ae331963d..2a0549ae13f31 100644
+--- a/tools/testing/selftests/bpf/prog_tests/ringbuf.c
++++ b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
+@@ -102,7 +102,7 @@ void test_ringbuf(void)
+ 	if (CHECK(err != 0, "skel_load", "skeleton load failed\n"))
+ 		goto cleanup;
+ 
+-	rb_fd = bpf_map__fd(skel->maps.ringbuf);
++	rb_fd = skel->maps.ringbuf.map_fd;
+ 	/* good read/write cons_pos */
+ 	mmap_ptr = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, rb_fd, 0);
+ 	ASSERT_OK_PTR(mmap_ptr, "rw_cons_pos");
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+index 648d9ae898d2f..01ab11259809e 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+@@ -1610,6 +1610,7 @@ static void udp_redir_to_connected(int family, int sotype, int sock_mapfd,
+ 	struct sockaddr_storage addr;
+ 	int c0, c1, p0, p1;
+ 	unsigned int pass;
++	int retries = 100;
+ 	socklen_t len;
+ 	int err, n;
+ 	u64 value;
+@@ -1686,9 +1687,13 @@ static void udp_redir_to_connected(int family, int sotype, int sock_mapfd,
+ 	if (pass != 1)
+ 		FAIL("%s: want pass count 1, have %d", log_prefix, pass);
+ 
++again:
+ 	n = read(mode == REDIR_INGRESS ? p0 : c0, &b, 1);
+-	if (n < 0)
++	if (n < 0) {
++		if (errno == EAGAIN && retries--)
++			goto again;
+ 		FAIL_ERRNO("%s: read", log_prefix);
++	}
+ 	if (n == 0)
+ 		FAIL("%s: incomplete read", log_prefix);
+ 
+diff --git a/tools/testing/selftests/ftrace/test.d/event/event-no-pid.tc b/tools/testing/selftests/ftrace/test.d/event/event-no-pid.tc
+index e6eb78f0b9545..9933ed24f9012 100644
+--- a/tools/testing/selftests/ftrace/test.d/event/event-no-pid.tc
++++ b/tools/testing/selftests/ftrace/test.d/event/event-no-pid.tc
+@@ -57,6 +57,10 @@ enable_events() {
+     echo 1 > tracing_on
+ }
+ 
++other_task() {
++    sleep .001 || usleep 1 || sleep 1
++}
++
+ echo 0 > options/event-fork
+ 
+ do_reset
+@@ -94,6 +98,9 @@ child=$!
+ echo "child = $child"
+ wait $child
+ 
++# Be sure some other events will happen for small systems (e.g. 1 core)
++other_task
++
+ echo 0 > tracing_on
+ 
+ cnt=`count_pid $mypid`
+diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
+index 81edbd23d371c..b4d24f50aca62 100644
+--- a/tools/testing/selftests/kvm/dirty_log_test.c
++++ b/tools/testing/selftests/kvm/dirty_log_test.c
+@@ -16,7 +16,6 @@
+ #include <errno.h>
+ #include <linux/bitmap.h>
+ #include <linux/bitops.h>
+-#include <asm/barrier.h>
+ #include <linux/atomic.h>
+ 
+ #include "kvm_util.h"
+diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
+index a2b732cf96ea4..8ea854d7822d9 100644
+--- a/tools/testing/selftests/kvm/lib/kvm_util.c
++++ b/tools/testing/selftests/kvm/lib/kvm_util.c
+@@ -375,10 +375,6 @@ struct kvm_vm *vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
+ 		uint32_t vcpuid = vcpuids ? vcpuids[i] : i;
+ 
+ 		vm_vcpu_add_default(vm, vcpuid, guest_code);
+-
+-#ifdef __x86_64__
+-		vcpu_set_cpuid(vm, vcpuid, kvm_get_supported_cpuid());
+-#endif
+ 	}
+ 
+ 	return vm;
+diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
+index efe2350444213..595322b24e4cb 100644
+--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
++++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
+@@ -600,6 +600,9 @@ void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code)
+ 	/* Setup the MP state */
+ 	mp_state.mp_state = 0;
+ 	vcpu_set_mp_state(vm, vcpuid, &mp_state);
++
++	/* Setup supported CPUIDs */
++	vcpu_set_cpuid(vm, vcpuid, kvm_get_supported_cpuid());
+ }
+ 
+ /*
+diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
+index fcc840088c919..a6fe75cb9a6eb 100644
+--- a/tools/testing/selftests/kvm/steal_time.c
++++ b/tools/testing/selftests/kvm/steal_time.c
+@@ -73,8 +73,6 @@ static void steal_time_init(struct kvm_vm *vm)
+ 	for (i = 0; i < NR_VCPUS; ++i) {
+ 		int ret;
+ 
+-		vcpu_set_cpuid(vm, i, kvm_get_supported_cpuid());
+-
+ 		/* ST_GPA_BASE is identity mapped */
+ 		st_gva[i] = (void *)(ST_GPA_BASE + i * STEAL_TIME_SIZE);
+ 		sync_global_to_guest(vm, st_gva[i]);
+diff --git a/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c b/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c
+index 12c558fc8074a..c8d2bbe202d0e 100644
+--- a/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c
++++ b/tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c
+@@ -106,8 +106,6 @@ static void add_x86_vcpu(struct kvm_vm *vm, uint32_t vcpuid, bool bsp_code)
+ 		vm_vcpu_add_default(vm, vcpuid, guest_bsp_vcpu);
+ 	else
+ 		vm_vcpu_add_default(vm, vcpuid, guest_not_bsp_vcpu);
+-
+-	vcpu_set_cpuid(vm, vcpuid, kvm_get_supported_cpuid());
+ }
+ 
+ static void run_vm_bsp(uint32_t bsp_vcpu)
+diff --git a/tools/testing/selftests/lkdtm/run.sh b/tools/testing/selftests/lkdtm/run.sh
+index bb7a1775307b8..e95e79bd31268 100755
+--- a/tools/testing/selftests/lkdtm/run.sh
++++ b/tools/testing/selftests/lkdtm/run.sh
+@@ -76,10 +76,14 @@ fi
+ # Save existing dmesg so we can detect new content below
+ dmesg > "$DMESG"
+ 
+-# Most shells yell about signals and we're expecting the "cat" process
+-# to usually be killed by the kernel. So we have to run it in a sub-shell
+-# and silence errors.
+-($SHELL -c 'cat <(echo '"$test"') >'"$TRIGGER" 2>/dev/null) || true
++# Since the kernel is likely killing the process writing to the trigger
++# file, it must not be the script's shell itself. i.e. we cannot do:
++#     echo "$test" >"$TRIGGER"
++# Instead, use "cat" to take the signal. Since the shell will yell about
++# the signal that killed the subprocess, we must ignore the failure and
++# continue. However we don't silence stderr since there might be other
++# useful details reported there in the case of other unexpected conditions.
++echo "$test" | cat >"$TRIGGER" || true
+ 
+ # Record and dump the results
+ dmesg | comm --nocheck-order -13 "$DMESG" - > "$LOG" || true
+diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
+index 426d07875a48e..112d41d01b12d 100644
+--- a/tools/testing/selftests/net/tls.c
++++ b/tools/testing/selftests/net/tls.c
+@@ -25,6 +25,47 @@
+ #define TLS_PAYLOAD_MAX_LEN 16384
+ #define SOL_TLS 282
+ 
++struct tls_crypto_info_keys {
++	union {
++		struct tls12_crypto_info_aes_gcm_128 aes128;
++		struct tls12_crypto_info_chacha20_poly1305 chacha20;
++	};
++	size_t len;
++};
++
++static void tls_crypto_info_init(uint16_t tls_version, uint16_t cipher_type,
++				 struct tls_crypto_info_keys *tls12)
++{
++	memset(tls12, 0, sizeof(*tls12));
++
++	switch (cipher_type) {
++	case TLS_CIPHER_CHACHA20_POLY1305:
++		tls12->len = sizeof(struct tls12_crypto_info_chacha20_poly1305);
++		tls12->chacha20.info.version = tls_version;
++		tls12->chacha20.info.cipher_type = cipher_type;
++		break;
++	case TLS_CIPHER_AES_GCM_128:
++		tls12->len = sizeof(struct tls12_crypto_info_aes_gcm_128);
++		tls12->aes128.info.version = tls_version;
++		tls12->aes128.info.cipher_type = cipher_type;
++		break;
++	default:
++		break;
++	}
++}
++
++static void memrnd(void *s, size_t n)
++{
++	int *dword = s;
++	char *byte;
++
++	for (; n >= 4; n -= 4)
++		*dword++ = rand();
++	byte = (void *)dword;
++	while (n--)
++		*byte++ = rand();
++}
++
+ FIXTURE(tls_basic)
+ {
+ 	int fd, cfd;
+@@ -133,33 +174,16 @@ FIXTURE_VARIANT_ADD(tls, 13_chacha)
+ 
+ FIXTURE_SETUP(tls)
+ {
+-	union {
+-		struct tls12_crypto_info_aes_gcm_128 aes128;
+-		struct tls12_crypto_info_chacha20_poly1305 chacha20;
+-	} tls12;
++	struct tls_crypto_info_keys tls12;
+ 	struct sockaddr_in addr;
+ 	socklen_t len;
+ 	int sfd, ret;
+-	size_t tls12_sz;
+ 
+ 	self->notls = false;
+ 	len = sizeof(addr);
+ 
+-	memset(&tls12, 0, sizeof(tls12));
+-	switch (variant->cipher_type) {
+-	case TLS_CIPHER_CHACHA20_POLY1305:
+-		tls12_sz = sizeof(struct tls12_crypto_info_chacha20_poly1305);
+-		tls12.chacha20.info.version = variant->tls_version;
+-		tls12.chacha20.info.cipher_type = variant->cipher_type;
+-		break;
+-	case TLS_CIPHER_AES_GCM_128:
+-		tls12_sz = sizeof(struct tls12_crypto_info_aes_gcm_128);
+-		tls12.aes128.info.version = variant->tls_version;
+-		tls12.aes128.info.cipher_type = variant->cipher_type;
+-		break;
+-	default:
+-		tls12_sz = 0;
+-	}
++	tls_crypto_info_init(variant->tls_version, variant->cipher_type,
++			     &tls12);
+ 
+ 	addr.sin_family = AF_INET;
+ 	addr.sin_addr.s_addr = htonl(INADDR_ANY);
+@@ -187,7 +211,7 @@ FIXTURE_SETUP(tls)
+ 
+ 	if (!self->notls) {
+ 		ret = setsockopt(self->fd, SOL_TLS, TLS_TX, &tls12,
+-				 tls12_sz);
++				 tls12.len);
+ 		ASSERT_EQ(ret, 0);
+ 	}
+ 
+@@ -200,7 +224,7 @@ FIXTURE_SETUP(tls)
+ 		ASSERT_EQ(ret, 0);
+ 
+ 		ret = setsockopt(self->cfd, SOL_TLS, TLS_RX, &tls12,
+-				 tls12_sz);
++				 tls12.len);
+ 		ASSERT_EQ(ret, 0);
+ 	}
+ 
+@@ -308,6 +332,8 @@ TEST_F(tls, recv_max)
+ 	char recv_mem[TLS_PAYLOAD_MAX_LEN];
+ 	char buf[TLS_PAYLOAD_MAX_LEN];
+ 
++	memrnd(buf, sizeof(buf));
++
+ 	EXPECT_GE(send(self->fd, buf, send_len, 0), 0);
+ 	EXPECT_NE(recv(self->cfd, recv_mem, send_len, 0), -1);
+ 	EXPECT_EQ(memcmp(buf, recv_mem, send_len), 0);
+@@ -588,6 +614,8 @@ TEST_F(tls, recvmsg_single_max)
+ 	struct iovec vec;
+ 	struct msghdr hdr;
+ 
++	memrnd(send_mem, sizeof(send_mem));
++
+ 	EXPECT_EQ(send(self->fd, send_mem, send_len, 0), send_len);
+ 	vec.iov_base = (char *)recv_mem;
+ 	vec.iov_len = TLS_PAYLOAD_MAX_LEN;
+@@ -610,6 +638,8 @@ TEST_F(tls, recvmsg_multiple)
+ 	struct msghdr hdr;
+ 	int i;
+ 
++	memrnd(buf, sizeof(buf));
++
+ 	EXPECT_EQ(send(self->fd, buf, send_len, 0), send_len);
+ 	for (i = 0; i < msg_iovlen; i++) {
+ 		iov_base[i] = (char *)malloc(iov_len);
+@@ -634,6 +664,8 @@ TEST_F(tls, single_send_multiple_recv)
+ 	char send_mem[TLS_PAYLOAD_MAX_LEN * 2];
+ 	char recv_mem[TLS_PAYLOAD_MAX_LEN * 2];
+ 
++	memrnd(send_mem, sizeof(send_mem));
++
+ 	EXPECT_GE(send(self->fd, send_mem, total_len, 0), 0);
+ 	memset(recv_mem, 0, total_len);
+ 
+@@ -834,18 +866,17 @@ TEST_F(tls, bidir)
+ 	int ret;
+ 
+ 	if (!self->notls) {
+-		struct tls12_crypto_info_aes_gcm_128 tls12;
++		struct tls_crypto_info_keys tls12;
+ 
+-		memset(&tls12, 0, sizeof(tls12));
+-		tls12.info.version = variant->tls_version;
+-		tls12.info.cipher_type = TLS_CIPHER_AES_GCM_128;
++		tls_crypto_info_init(variant->tls_version, variant->cipher_type,
++				     &tls12);
+ 
+ 		ret = setsockopt(self->fd, SOL_TLS, TLS_RX, &tls12,
+-				 sizeof(tls12));
++				 tls12.len);
+ 		ASSERT_EQ(ret, 0);
+ 
+ 		ret = setsockopt(self->cfd, SOL_TLS, TLS_TX, &tls12,
+-				 sizeof(tls12));
++				 tls12.len);
+ 		ASSERT_EQ(ret, 0);
+ 	}
+ 
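
The refactor above moves the per-cipher setup into tls_crypto_info_init() and
records the struct size inside the union wrapper, so every caller passes
tls12.len instead of threading a separate tls12_sz through. A minimal sketch
of the intended call pattern, reusing the helper and struct from the hunk and
assuming fd is already a connected TCP socket with the "tls" ULP attached:

#include <linux/tls.h>
#include <stdio.h>
#include <sys/socket.h>

#define SOL_TLS 282	/* as defined at the top of tls.c */

/* struct tls_crypto_info_keys and tls_crypto_info_init() as in the hunk */

static int attach_tx(int fd)
{
	struct tls_crypto_info_keys tls12;

	tls_crypto_info_init(TLS_1_3_VERSION, TLS_CIPHER_AES_GCM_128, &tls12);
	/* one setsockopt path for either cipher; len matches the struct */
	if (setsockopt(fd, SOL_TLS, TLS_TX, &tls12, tls12.len)) {
		perror("setsockopt(TLS_TX)");
		return -1;
	}
	return 0;
}
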
+diff --git a/tools/testing/selftests/resctrl/README b/tools/testing/selftests/resctrl/README
+index 4b36b25b6ac05..3d2bbd4fa3aa1 100644
+--- a/tools/testing/selftests/resctrl/README
++++ b/tools/testing/selftests/resctrl/README
+@@ -47,7 +47,7 @@ Parameter '-h' shows usage information.
+ 
+ usage: resctrl_tests [-h] [-b "benchmark_cmd [options]"] [-t test list] [-n no_of_bits]
+         -b benchmark_cmd [options]: run specified benchmark for MBM, MBA and CMT default benchmark is builtin fill_buf
+-        -t test list: run tests specified in the test list, e.g. -t mbm, mba, cmt, cat
++        -t test list: run tests specified in the test list, e.g. -t mbm,mba,cmt,cat
+         -n no_of_bits: run cache tests using specified no of bits in cache bit mask
+         -p cpu_no: specify CPU number to run the test. 1 is default
+         -h: help
+diff --git a/tools/testing/selftests/resctrl/resctrl_tests.c b/tools/testing/selftests/resctrl/resctrl_tests.c
+index f51b5fc066a32..973f09a66e1ee 100644
+--- a/tools/testing/selftests/resctrl/resctrl_tests.c
++++ b/tools/testing/selftests/resctrl/resctrl_tests.c
+@@ -40,7 +40,7 @@ static void cmd_help(void)
+ 	printf("\t-b benchmark_cmd [options]: run specified benchmark for MBM, MBA and CMT\n");
+ 	printf("\t   default benchmark is builtin fill_buf\n");
+ 	printf("\t-t test list: run tests specified in the test list, ");
+-	printf("e.g. -t mbm, mba, cmt, cat\n");
++	printf("e.g. -t mbm,mba,cmt,cat\n");
+ 	printf("\t-n no_of_bits: run cache tests using specified no of bits in cache bit mask\n");
+ 	printf("\t-p cpu_no: specify CPU number to run the test. 1 is default\n");
+ 	printf("\t-h: help\n");
+@@ -173,7 +173,7 @@ int main(int argc, char **argv)
+ 
+ 					return -1;
+ 				}
+-				token = strtok(NULL, ":\t");
++				token = strtok(NULL, ",");
+ 			}
+ 			break;
+ 		case 'p':
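
The one-character fix above makes the -t option parser agree with its own
usage text: the list is documented as comma-separated, but the continuation
calls were splitting on ":\t". A standalone illustration of the comma
splitting the fix restores (not the selftest's actual option loop):

#include <stdio.h>
#include <string.h>

int main(void)
{
	char list[] = "mbm,mba,cmt,cat";	/* the documented -t syntax */
	char *token = strtok(list, ",");

	while (token) {
		printf("would run: %s\n", token);
		token = strtok(NULL, ",");	/* same delimiter every call */
	}
	return 0;
}
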
+diff --git a/tools/testing/selftests/sgx/load.c b/tools/testing/selftests/sgx/load.c
+index f441ac34b4d44..bae78c3263d94 100644
+--- a/tools/testing/selftests/sgx/load.c
++++ b/tools/testing/selftests/sgx/load.c
+@@ -150,16 +150,6 @@ bool encl_load(const char *path, struct encl *encl)
+ 		goto err;
+ 	}
+ 
+-	/*
+-	 * This just checks if the /dev file has these permission
+-	 * bits set.  It does not check that the current user is
+-	 * the owner or in the owning group.
+-	 */
+-	if (!(sb.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH))) {
+-		fprintf(stderr, "no execute permissions on device file %s\n", device_path);
+-		goto err;
+-	}
+-
+ 	ptr = mmap(NULL, PAGE_SIZE, PROT_READ, MAP_SHARED, fd, 0);
+ 	if (ptr == (void *)-1) {
+ 		perror("mmap for read");
+@@ -169,13 +159,13 @@ bool encl_load(const char *path, struct encl *encl)
+ 
+ #define ERR_MSG \
+ "mmap() succeeded for PROT_READ, but failed for PROT_EXEC.\n" \
+-" Check that current user has execute permissions on %s and \n" \
+-" that /dev does not have noexec set: mount | grep \"/dev .*noexec\"\n" \
++" Check that /dev does not have noexec set:\n" \
++" \tmount | grep \"/dev .*noexec\"\n" \
+ " If so, remount it executable: mount -o remount,exec /dev\n\n"
+ 
+ 	ptr = mmap(NULL, PAGE_SIZE, PROT_EXEC, MAP_SHARED, fd, 0);
+ 	if (ptr == (void *)-1) {
+-		fprintf(stderr, ERR_MSG, device_path);
++		fprintf(stderr, ERR_MSG);
+ 		goto err;
+ 	}
+ 	munmap(ptr, PAGE_SIZE);
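
The deleted check only inspected the permission bits on the device node, not
whether the current user could actually execute from it, so it produced both
false failures and false passes; the authoritative probe is the
mmap(PROT_EXEC) the code already performs, which fails when /dev is mounted
noexec. A standalone sketch of that probe, assuming the usual
/dev/sgx_enclave node:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/dev/sgx_enclave";	/* assumed device node */
	int fd = open(path, O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* let the kernel decide: this fails if /dev is mounted noexec */
	void *ptr = mmap(NULL, 4096, PROT_EXEC, MAP_SHARED, fd, 0);
	if (ptr == MAP_FAILED)
		perror("mmap(PROT_EXEC)");
	else
		munmap(ptr, 4096);
	close(fd);
	return 0;
}
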
+diff --git a/tools/testing/selftests/splice/short_splice_read.sh b/tools/testing/selftests/splice/short_splice_read.sh
+index 7810d3589d9ab..22b6c8910b182 100755
+--- a/tools/testing/selftests/splice/short_splice_read.sh
++++ b/tools/testing/selftests/splice/short_splice_read.sh
+@@ -1,21 +1,87 @@
+ #!/bin/sh
+ # SPDX-License-Identifier: GPL-2.0
++#
++# Test for mishandling of splice() on pseudofilesystems, which should catch
++# bugs like 11990a5bd7e5 ("module: Correctly truncate sysfs sections output")
++#
++# Since splice fallback was removed as part of the set_fs() rework, many of
++# these tests are now expected to fail. See https://lore.kernel.org/lkml/202009181443.C2179FB@keescook/
+ set -e
+ 
++DIR=$(dirname "$0")
++
+ ret=0
+ 
++expect_success()
++{
++	title="$1"
++	shift
++
++	echo "" >&2
++	echo "$title ..." >&2
++
++	set +e
++	"$@"
++	rc=$?
++	set -e
++
++	case "$rc" in
++	0)
++		echo "ok: $title succeeded" >&2
++		;;
++	1)
++		echo "FAIL: $title should work" >&2
++		ret=$(( ret + 1 ))
++		;;
++	*)
++		echo "FAIL: something else went wrong" >&2
++		ret=$(( ret + 1 ))
++		;;
++	esac
++}
++
++expect_failure()
++{
++	title="$1"
++	shift
++
++	echo "" >&2
++	echo "$title ..." >&2
++
++	set +e
++	"$@"
++	rc=$?
++	set -e
++
++	case "$rc" in
++	0)
++		echo "FAIL: $title unexpectedly worked" >&2
++		ret=$(( ret + 1 ))
++		;;
++	1)
++		echo "ok: $title correctly failed" >&2
++		;;
++	*)
++		echo "FAIL: something else went wrong" >&2
++		ret=$(( ret + 1 ))
++		;;
++	esac
++}
++
+ do_splice()
+ {
+ 	filename="$1"
+ 	bytes="$2"
+ 	expected="$3"
++	report="$4"
+ 
+-	out=$(./splice_read "$filename" "$bytes" | cat)
++	out=$("$DIR"/splice_read "$filename" "$bytes" | cat)
+ 	if [ "$out" = "$expected" ] ; then
+-		echo "ok: $filename $bytes"
++		echo "      matched $report" >&2
++		return 0
+ 	else
+-		echo "FAIL: $filename $bytes"
+-		ret=1
++		echo "      no match: '$out' vs $report" >&2
++		return 1
+ 	fi
+ }
+ 
+@@ -23,34 +89,45 @@ test_splice()
+ {
+ 	filename="$1"
+ 
++	echo "  checking $filename ..." >&2
++
+ 	full=$(cat "$filename")
++	rc=$?
++	if [ $rc -ne 0 ] ; then
++		return 2
++	fi
++
+ 	two=$(echo "$full" | grep -m1 . | cut -c-2)
+ 
+ 	# Make sure full splice has the same contents as a standard read.
+-	do_splice "$filename" 4096 "$full"
++	echo "    splicing 4096 bytes ..." >&2
++	if ! do_splice "$filename" 4096 "$full" "full read" ; then
++		return 1
++	fi
+ 
+ 	# Make sure a partial splice sees the first two characters.
+-	do_splice "$filename" 2 "$two"
++	echo "    splicing 2 bytes ..." >&2
++	if ! do_splice "$filename" 2 "$two" "'$two'" ; then
++		return 1
++	fi
++
++	return 0
+ }
+ 
+-# proc_single_open(), seq_read()
+-test_splice /proc/$$/limits
+-# special open, seq_read()
+-test_splice /proc/$$/comm
++### /proc/$pid/ has no splice interface; these should all fail.
++expect_failure "proc_single_open(), seq_read() splice" test_splice /proc/$$/limits
++expect_failure "special open(), seq_read() splice" test_splice /proc/$$/comm
+ 
+-# proc_handler, proc_dointvec_minmax
+-test_splice /proc/sys/fs/nr_open
+-# proc_handler, proc_dostring
+-test_splice /proc/sys/kernel/modprobe
+-# proc_handler, special read
+-test_splice /proc/sys/kernel/version
++### /proc/sys/ has a splice interface; these should all succeed.
++expect_success "proc_handler: proc_dointvec_minmax() splice" test_splice /proc/sys/fs/nr_open
++expect_success "proc_handler: proc_dostring() splice" test_splice /proc/sys/kernel/modprobe
++expect_success "proc_handler: special read splice" test_splice /proc/sys/kernel/version
+ 
++### /sys/ has no splice interface; these should all fail.
+ if ! [ -d /sys/module/test_module/sections ] ; then
+-	modprobe test_module
++	expect_success "test_module kernel module load" modprobe test_module
+ fi
+-# kernfs, attr
+-test_splice /sys/module/test_module/coresize
+-# kernfs, binattr
+-test_splice /sys/module/test_module/sections/.init.text
++expect_failure "kernfs attr splice" test_splice /sys/module/test_module/coresize
++expect_failure "kernfs binattr splice" test_splice /sys/module/test_module/sections/.init.text
+ 
+ exit $ret
+diff --git a/tools/testing/selftests/tc-testing/plugin-lib/scapyPlugin.py b/tools/testing/selftests/tc-testing/plugin-lib/scapyPlugin.py
+index 229ee185b27e1..a7b21658af9b4 100644
+--- a/tools/testing/selftests/tc-testing/plugin-lib/scapyPlugin.py
++++ b/tools/testing/selftests/tc-testing/plugin-lib/scapyPlugin.py
+@@ -36,7 +36,7 @@ class SubPlugin(TdcPlugin):
+         for k in scapy_keys:
+             if k not in scapyinfo:
+                 keyfail = True
+-                missing_keys.add(k)
++                missing_keys.append(k)
+         if keyfail:
+             print('{}: Scapy block present in the test, but is missing info:'
+                 .format(self.sub_class))
+diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c
+index fdbb602ecf325..87eecd5ba577b 100644
+--- a/tools/testing/selftests/vm/protection_keys.c
++++ b/tools/testing/selftests/vm/protection_keys.c
+@@ -510,7 +510,7 @@ int alloc_pkey(void)
+ 			" shadow: 0x%016llx\n",
+ 			__func__, __LINE__, ret, __read_pkey_reg(),
+ 			shadow_pkey_reg);
+-	if (ret) {
++	if (ret > 0) {
+ 		/* clear both the bits: */
+ 		shadow_pkey_reg = set_pkey_bits(shadow_pkey_reg, ret,
+ 						~PKEY_MASK);
+@@ -561,7 +561,6 @@ int alloc_random_pkey(void)
+ 	int nr_alloced = 0;
+ 	int random_index;
+ 	memset(alloced_pkeys, 0, sizeof(alloced_pkeys));
+-	srand((unsigned int)time(NULL));
+ 
+ 	/* allocate every possible key and make a note of which ones we got */
+ 	max_nr_pkey_allocs = NR_PKEYS;
+@@ -1449,6 +1448,13 @@ void test_implicit_mprotect_exec_only_memory(int *ptr, u16 pkey)
+ 	ret = mprotect(p1, PAGE_SIZE, PROT_EXEC);
+ 	pkey_assert(!ret);
+ 
++	/*
++	 * Reset the shadow, assuming that the above mprotect()
++	 * correctly changed PKRU, but to an unknown value since
++	 * the actual allocated pkey is unknown.
++	 */
++	shadow_pkey_reg = __read_pkey_reg();
++
+ 	dprintf2("pkey_reg: %016llx\n", read_pkey_reg());
+ 
+ 	/* Make sure this is an *instruction* fault */
+@@ -1552,6 +1558,8 @@ int main(void)
+ 	int nr_iterations = 22;
+ 	int pkeys_supported = is_pkeys_supported();
+ 
++	srand((unsigned int)time(NULL));
++
+ 	setup_handlers();
+ 
+ 	printf("has pkeys: %d\n", pkeys_supported);
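
Moving srand() out of alloc_random_pkey() and into main() matters because
time(NULL) only changes once per second: reseeding on every call restarts
rand() from the same seed, so consecutive "random" pkey picks repeat. A
standalone demonstration of the pitfall:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
	/* per-call reseeding: both draws restart the same sequence */
	srand((unsigned int)time(NULL));
	int a = rand();
	srand((unsigned int)time(NULL));
	int b = rand();
	printf("reseeded each call: %d %d (equal within one second)\n", a, b);

	/* seed once, as the fix does in main(), then draw freely */
	srand((unsigned int)time(NULL));
	printf("seeded once: %d %d\n", rand(), rand());
	return 0;
}
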


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-19 11:15 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-07-19 11:15 UTC (permalink / raw
  To: gentoo-commits

commit:     251a2869d4d1cff8f96751d70c93211e5b7a34ab
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jul 19 11:13:49 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jul 19 11:13:49 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=251a2869

Linux patch 5.13.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1002_linux-5.13.3.patch | 12204 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 12208 insertions(+)

diff --git a/0000_README b/0000_README
index e1ba986..13b88ab 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-5.13.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.2
 
+Patch:  1002_linux-5.13.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-5.13.3.patch b/1002_linux-5.13.3.patch
new file mode 100644
index 0000000..c335b1f
--- /dev/null
+++ b/1002_linux-5.13.3.patch
@@ -0,0 +1,12204 @@
+diff --git a/Documentation/Makefile b/Documentation/Makefile
+index 9c42dde97671f..c3feb657b6548 100644
+--- a/Documentation/Makefile
++++ b/Documentation/Makefile
+@@ -76,7 +76,7 @@ quiet_cmd_sphinx = SPHINX  $@ --> file://$(abspath $(BUILDDIR)/$3/$4)
+ 	PYTHONDONTWRITEBYTECODE=1 \
+ 	BUILDDIR=$(abspath $(BUILDDIR)) SPHINX_CONF=$(abspath $(srctree)/$(src)/$5/$(SPHINX_CONF)) \
+ 	$(PYTHON3) $(srctree)/scripts/jobserver-exec \
+-	$(SHELL) $(srctree)/Documentation/sphinx/parallel-wrapper.sh \
++	$(CONFIG_SHELL) $(srctree)/Documentation/sphinx/parallel-wrapper.sh \
+ 	$(SPHINXBUILD) \
+ 	-b $2 \
+ 	-c $(abspath $(srctree)/$(src)) \
+diff --git a/Makefile b/Makefile
+index 31bbcc5255357..83f4212e004f1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/alpha/include/uapi/asm/socket.h b/arch/alpha/include/uapi/asm/socket.h
+index 57420356ce4c1..6b3daba60987e 100644
+--- a/arch/alpha/include/uapi/asm/socket.h
++++ b/arch/alpha/include/uapi/asm/socket.h
+@@ -127,6 +127,8 @@
+ #define SO_PREFER_BUSY_POLL	69
+ #define SO_BUSY_POLL_BUDGET	70
+ 
++#define SO_NETNS_COOKIE		71
++
+ #if !defined(__KERNEL__)
+ 
+ #if __BITS_PER_LONG == 64
+diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
+index 61c97d3b58c70..c995d1f4594f6 100644
+--- a/arch/arm64/include/asm/tlb.h
++++ b/arch/arm64/include/asm/tlb.h
+@@ -28,6 +28,10 @@ static void tlb_flush(struct mmu_gather *tlb);
+  */
+ static inline int tlb_get_level(struct mmu_gather *tlb)
+ {
++	/* The TTL field is only valid for the leaf entry. */
++	if (tlb->freed_tables)
++		return 0;
++
+ 	if (tlb->cleared_ptes && !(tlb->cleared_pmds ||
+ 				   tlb->cleared_puds ||
+ 				   tlb->cleared_p4ds))
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index ed51970c08e75..344e6c622efdd 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -428,6 +428,8 @@ config MACH_INGENIC_SOC
+ 	select MIPS_GENERIC
+ 	select MACH_INGENIC
+ 	select SYS_SUPPORTS_ZBOOT_UART16550
++	select CPU_SUPPORTS_CPUFREQ
++	select MIPS_EXTERNAL_TIMER
+ 
+ config LANTIQ
+ 	bool "Lantiq based platforms"
+diff --git a/arch/mips/boot/dts/ingenic/ci20.dts b/arch/mips/boot/dts/ingenic/ci20.dts
+index 8877c62609de5..3a4eaf1f3f487 100644
+--- a/arch/mips/boot/dts/ingenic/ci20.dts
++++ b/arch/mips/boot/dts/ingenic/ci20.dts
+@@ -525,10 +525,10 @@
+ 
+ &tcu {
+ 	/*
+-	 * 750 kHz for the system timer and 3 MHz for the clocksource,
++	 * 750 kHz for the system timer and clocksource,
+ 	 * use channel #0 for the system timer, #1 for the clocksource.
+ 	 */
+ 	assigned-clocks = <&tcu TCU_CLK_TIMER0>, <&tcu TCU_CLK_TIMER1>,
+ 					  <&tcu TCU_CLK_OST>;
+-	assigned-clock-rates = <750000>, <3000000>, <3000000>;
++	assigned-clock-rates = <750000>, <750000>, <3000000>;
+ };
+diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
+index 336e02b3b3cea..3d71081afc55f 100644
+--- a/arch/mips/include/asm/cpu-features.h
++++ b/arch/mips/include/asm/cpu-features.h
+@@ -64,6 +64,8 @@
+ 	((MIPS_ISA_REV >= (ge)) && (MIPS_ISA_REV < (lt)))
+ #define __isa_range_or_flag(ge, lt, flag) \
+ 	(__isa_range(ge, lt) || ((MIPS_ISA_REV < (lt)) && __isa(flag)))
++#define __isa_range_and_ase(ge, lt, ase) \
++	(__isa_range(ge, lt) && __ase(ase))
+ 
+ /*
+  * SMP assumption: Options of CPU 0 are a superset of all processors.
+@@ -421,7 +423,7 @@
+ #endif
+ 
+ #ifndef cpu_has_mipsmt
+-#define cpu_has_mipsmt		__isa_lt_and_ase(6, MIPS_ASE_MIPSMT)
++#define cpu_has_mipsmt		__isa_range_and_ase(2, 6, MIPS_ASE_MIPSMT)
+ #endif
+ 
+ #ifndef cpu_has_vp
+diff --git a/arch/mips/include/asm/hugetlb.h b/arch/mips/include/asm/hugetlb.h
+index 10e3be870df78..c2144409c0c40 100644
+--- a/arch/mips/include/asm/hugetlb.h
++++ b/arch/mips/include/asm/hugetlb.h
+@@ -46,7 +46,13 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
+ 					 unsigned long addr, pte_t *ptep)
+ {
+-	flush_tlb_page(vma, addr & huge_page_mask(hstate_vma(vma)));
++	/*
++	 * Clear the huge pte entry first, so that other SMP threads cannot
++	 * pick up the old pte entry after flush_tlb_page() finishes and
++	 * before the new huge pte entry is written
++	 */
++	huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
++	flush_tlb_page(vma, addr);
+ }
+ 
+ #define __HAVE_ARCH_HUGE_PTE_NONE
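
The reordering above closes an SMP race: if the flush runs while the old pte
is still present in the page table, another CPU can take a TLB refill
immediately after the flush and keep using the stale translation until the
new entry lands. A schematic of the two orderings, commentary only, using the
calls from the hunk:

/* patched (safe) order:
 *	huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);  -- entry gone
 *	flush_tlb_page(vma, addr);                        -- cached copies gone
 *
 * old (racy) order:
 *	flush_tlb_page(vma, addr);
 *	    <-- another CPU faults here and re-caches the still-present pte
 *	huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
 */
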
+diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
+index 9c8099a6ffed1..acdf8c69220b0 100644
+--- a/arch/mips/include/asm/mipsregs.h
++++ b/arch/mips/include/asm/mipsregs.h
+@@ -2077,7 +2077,7 @@ _ASM_MACRO_0(tlbginvf, _ASM_INSN_IF_MIPS(0x4200000c)
+ ({ int __res;								\
+ 	__asm__ __volatile__(						\
+ 		".set\tpush\n\t"					\
+-		".set\tmips32r2\n\t"					\
++		".set\tmips32r5\n\t"					\
+ 		_ASM_SET_VIRT						\
+ 		"mfgc0\t%0, " #source ", %1\n\t"			\
+ 		".set\tpop"						\
+@@ -2090,7 +2090,7 @@ _ASM_MACRO_0(tlbginvf, _ASM_INSN_IF_MIPS(0x4200000c)
+ ({ unsigned long long __res;						\
+ 	__asm__ __volatile__(						\
+ 		".set\tpush\n\t"					\
+-		".set\tmips64r2\n\t"					\
++		".set\tmips64r5\n\t"					\
+ 		_ASM_SET_VIRT						\
+ 		"dmfgc0\t%0, " #source ", %1\n\t"			\
+ 		".set\tpop"						\
+@@ -2103,7 +2103,7 @@ _ASM_MACRO_0(tlbginvf, _ASM_INSN_IF_MIPS(0x4200000c)
+ do {									\
+ 	__asm__ __volatile__(						\
+ 		".set\tpush\n\t"					\
+-		".set\tmips32r2\n\t"					\
++		".set\tmips32r5\n\t"					\
+ 		_ASM_SET_VIRT						\
+ 		"mtgc0\t%z0, " #register ", %1\n\t"			\
+ 		".set\tpop"						\
+@@ -2115,7 +2115,7 @@ do {									\
+ do {									\
+ 	__asm__ __volatile__(						\
+ 		".set\tpush\n\t"					\
+-		".set\tmips64r2\n\t"					\
++		".set\tmips64r5\n\t"					\
+ 		_ASM_SET_VIRT						\
+ 		"dmtgc0\t%z0, " #register ", %1\n\t"			\
+ 		".set\tpop"						\
+diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
+index 8b18424b31208..d0cf997b4ba84 100644
+--- a/arch/mips/include/asm/pgalloc.h
++++ b/arch/mips/include/asm/pgalloc.h
+@@ -59,11 +59,15 @@ do {							\
+ 
+ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
+ {
+-	pmd_t *pmd;
++	pmd_t *pmd = NULL;
++	struct page *pg;
+ 
+-	pmd = (pmd_t *) __get_free_pages(GFP_KERNEL, PMD_ORDER);
+-	if (pmd)
++	pg = alloc_pages(GFP_KERNEL | __GFP_ACCOUNT, PMD_ORDER);
++	if (pg) {
++		pgtable_pmd_page_ctor(pg);
++		pmd = (pmd_t *)page_address(pg);
+ 		pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
++	}
+ 	return pmd;
+ }
+ 
+diff --git a/arch/mips/include/uapi/asm/socket.h b/arch/mips/include/uapi/asm/socket.h
+index 2d949969313b6..cdf404a831b25 100644
+--- a/arch/mips/include/uapi/asm/socket.h
++++ b/arch/mips/include/uapi/asm/socket.h
+@@ -138,6 +138,8 @@
+ #define SO_PREFER_BUSY_POLL	69
+ #define SO_BUSY_POLL_BUDGET	70
+ 
++#define SO_NETNS_COOKIE		71
++
+ #if !defined(__KERNEL__)
+ 
+ #if __BITS_PER_LONG == 64
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index 0ef240adefb59..630fcb4cb30e7 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1840,6 +1840,11 @@ static inline void cpu_probe_ingenic(struct cpuinfo_mips *c, unsigned int cpu)
+ 		 */
+ 		case PRID_COMP_INGENIC_D0:
+ 			c->isa_level &= ~MIPS_CPU_ISA_M32R2;
++
++			/* FPU is not properly detected on JZ4760(B). */
++			if (c->processor_id == 0x2ed0024f)
++				c->options |= MIPS_CPU_FPU;
++
+ 			fallthrough;
+ 
+ 		/*
+diff --git a/arch/mips/loongson64/numa.c b/arch/mips/loongson64/numa.c
+index fa9b4a487a479..e8e3e48c53330 100644
+--- a/arch/mips/loongson64/numa.c
++++ b/arch/mips/loongson64/numa.c
+@@ -129,6 +129,9 @@ static void __init node_mem_init(unsigned int node)
+ 		if (node_end_pfn(0) >= (0xffffffff >> PAGE_SHIFT))
+ 			memblock_reserve((node_addrspace_offset | 0xfe000000),
+ 					 32 << 20);
++
++		/* Reserve pfn range 0~node[0]->node_start_pfn */
++		memblock_reserve(0, PAGE_SIZE * start_pfn);
+ 	}
+ }
+ 
+diff --git a/arch/mips/loongson64/reset.c b/arch/mips/loongson64/reset.c
+index c97bfdc8c9226..758d5d26aaaa2 100644
+--- a/arch/mips/loongson64/reset.c
++++ b/arch/mips/loongson64/reset.c
+@@ -126,11 +126,12 @@ static void loongson_kexec_shutdown(void)
+ 	for_each_possible_cpu(cpu)
+ 		if (!cpu_online(cpu))
+ 			cpu_device_up(get_cpu_device(cpu));
++
++	secondary_kexec_args[0] = TO_UNCAC(0x3ff01000);
+ #endif
+ 	kexec_args[0] = kexec_argc;
+ 	kexec_args[1] = fw_arg1;
+ 	kexec_args[2] = fw_arg2;
+-	secondary_kexec_args[0] = TO_UNCAC(0x3ff01000);
+ 	memcpy((void *)fw_arg1, kexec_argv, KEXEC_ARGV_SIZE);
+ 	memcpy((void *)fw_arg2, kexec_envp, KEXEC_ENVP_SIZE);
+ }
+@@ -141,7 +142,9 @@ static void loongson_crash_shutdown(struct pt_regs *regs)
+ 	kexec_args[0] = kdump_argc;
+ 	kexec_args[1] = fw_arg1;
+ 	kexec_args[2] = fw_arg2;
++#ifdef CONFIG_SMP
+ 	secondary_kexec_args[0] = TO_UNCAC(0x3ff01000);
++#endif
+ 	memcpy((void *)fw_arg1, kdump_argv, KEXEC_ARGV_SIZE);
+ 	memcpy((void *)fw_arg2, kexec_envp, KEXEC_ENVP_SIZE);
+ }
+diff --git a/arch/parisc/include/uapi/asm/socket.h b/arch/parisc/include/uapi/asm/socket.h
+index f60904329bbcc..5b5351cdcb338 100644
+--- a/arch/parisc/include/uapi/asm/socket.h
++++ b/arch/parisc/include/uapi/asm/socket.h
+@@ -119,6 +119,8 @@
+ #define SO_PREFER_BUSY_POLL	0x4043
+ #define SO_BUSY_POLL_BUDGET	0x4044
+ 
++#define SO_NETNS_COOKIE		0x4045
++
+ #if !defined(__KERNEL__)
+ 
+ #if __BITS_PER_LONG == 64
+diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
+index 7ae29cfb06c07..f0e6872364842 100644
+--- a/arch/powerpc/include/asm/barrier.h
++++ b/arch/powerpc/include/asm/barrier.h
+@@ -46,6 +46,8 @@
+ #    define SMPWMB      eieio
+ #endif
+ 
++/* clang defines this macro for a builtin, which will not work with runtime patching */
++#undef __lwsync
+ #define __lwsync()	__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
+ #define dma_rmb()	__lwsync()
+ #define dma_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index 34f641d4a2fe9..a8d0ce85d39ad 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -199,9 +199,7 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
+ {
+ 	int is_exec = TRAP(regs) == INTERRUPT_INST_STORAGE;
+ 
+-	/* NX faults set DSISR_PROTFAULT on the 8xx, DSISR_NOEXEC_OR_G on others */
+-	if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT |
+-				      DSISR_PROTFAULT))) {
++	if (is_exec) {
+ 		pr_crit_ratelimited("kernel tried to execute %s page (%lx) - exploit attempt? (uid: %d)\n",
+ 				    address >= TASK_SIZE ? "exec-protected" : "user",
+ 				    address,
+diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
+index bbb16099e8c7f..68476780047ac 100644
+--- a/arch/powerpc/net/bpf_jit_comp32.c
++++ b/arch/powerpc/net/bpf_jit_comp32.c
+@@ -773,9 +773,17 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
+ 			break;
+ 
+ 		/*
+-		 * BPF_STX XADD (atomic_add)
++		 * BPF_STX ATOMIC (atomic ops)
+ 		 */
+-		case BPF_STX | BPF_XADD | BPF_W: /* *(u32 *)(dst + off) += src */
++		case BPF_STX | BPF_ATOMIC | BPF_W:
++			if (imm != BPF_ADD) {
++				pr_err_ratelimited("eBPF filter atomic op code %02x (@%d) unsupported\n",
++						   code, i);
++				return -ENOTSUPP;
++			}
++
++			/* *(u32 *)(dst + off) += src */
++
+ 			bpf_set_seen_register(ctx, tmp_reg);
+ 			/* Get offset into TMP_REG */
+ 			EMIT(PPC_RAW_LI(tmp_reg, off));
+@@ -789,7 +797,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
+ 			PPC_BCC_SHORT(COND_NE, (ctx->idx - 3) * 4);
+ 			break;
+ 
+-		case BPF_STX | BPF_XADD | BPF_DW: /* *(u64 *)(dst + off) += src */
++		case BPF_STX | BPF_ATOMIC | BPF_DW: /* *(u64 *)(dst + off) += src */
+ 			return -EOPNOTSUPP;
+ 
+ 		/*
+diff --git a/arch/powerpc/platforms/powernv/vas-window.c b/arch/powerpc/platforms/powernv/vas-window.c
+index 5f5fe63a3d1ce..7ba0840fc3b55 100644
+--- a/arch/powerpc/platforms/powernv/vas-window.c
++++ b/arch/powerpc/platforms/powernv/vas-window.c
+@@ -1093,9 +1093,9 @@ struct vas_window *vas_tx_win_open(int vasid, enum vas_cop_type cop,
+ 		/*
+ 		 * Process closes window during exit. In the case of
+ 		 * multithread application, the child thread can open
+-		 * window and can exit without closing it. Expects parent
+-		 * thread to use and close the window. So do not need
+-		 * to take pid reference for parent thread.
++		 * window and can exit without closing it. so takes tgid
++		 * reference until window closed to make sure tgid is not
++		 * reused.
+ 		 */
+ 		txwin->tgid = find_get_pid(task_tgid_vnr(current));
+ 		/*
+@@ -1339,8 +1339,9 @@ int vas_win_close(struct vas_window *window)
+ 	/* if send window, drop reference to matching receive window */
+ 	if (window->tx_win) {
+ 		if (window->user_win) {
+-			/* Drop references to pid and mm */
++			/* Drop references to pid, tgid and mm */
+ 			put_pid(window->pid);
++			put_pid(window->tgid);
+ 			if (window->mm) {
+ 				mm_context_remove_vas_window(window->mm);
+ 				mmdrop(window->mm);
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index a8304327072d3..dbdbbc2f1dc51 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -1153,11 +1153,10 @@ static int __init xive_request_ipi(void)
+ 		 * Since the HW interrupt number doesn't have any meaning,
+ 		 * simply use the node number.
+ 		 */
+-		xid->irq = irq_domain_alloc_irqs(ipi_domain, 1, node, &info);
+-		if (xid->irq < 0) {
+-			ret = xid->irq;
++		ret = irq_domain_alloc_irqs(ipi_domain, 1, node, &info);
++		if (ret < 0)
+ 			goto out_free_xive_ipis;
+-		}
++		xid->irq = ret;
+ 
+ 		snprintf(xid->name, sizeof(xid->name), "IPI-%d", node);
+ 
+diff --git a/arch/sparc/include/uapi/asm/socket.h b/arch/sparc/include/uapi/asm/socket.h
+index 848a22fbac20e..92675dc380fa2 100644
+--- a/arch/sparc/include/uapi/asm/socket.h
++++ b/arch/sparc/include/uapi/asm/socket.h
+@@ -120,6 +120,8 @@
+ #define SO_PREFER_BUSY_POLL	 0x0048
+ #define SO_BUSY_POLL_BUDGET	 0x0049
+ 
++#define SO_NETNS_COOKIE          0x0050
++
+ #if !defined(__KERNEL__)
+ 
+ 
+diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
+index 656460636ad34..e83af7bc75919 100644
+--- a/block/blk-rq-qos.c
++++ b/block/blk-rq-qos.c
+@@ -266,8 +266,8 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
+ 	if (!has_sleeper && acquire_inflight_cb(rqw, private_data))
+ 		return;
+ 
+-	prepare_to_wait_exclusive(&rqw->wait, &data.wq, TASK_UNINTERRUPTIBLE);
+-	has_sleeper = !wq_has_single_sleeper(&rqw->wait);
++	has_sleeper = !prepare_to_wait_exclusive(&rqw->wait, &data.wq,
++						 TASK_UNINTERRUPTIBLE);
+ 	do {
+ 		/* The memory barrier in set_task_state saves us here. */
+ 		if (data.got_token)
+diff --git a/drivers/ata/ahci_sunxi.c b/drivers/ata/ahci_sunxi.c
+index cb69b737cb499..56b695136977a 100644
+--- a/drivers/ata/ahci_sunxi.c
++++ b/drivers/ata/ahci_sunxi.c
+@@ -200,7 +200,7 @@ static void ahci_sunxi_start_engine(struct ata_port *ap)
+ }
+ 
+ static const struct ata_port_info ahci_sunxi_port_info = {
+-	.flags		= AHCI_FLAG_COMMON | ATA_FLAG_NCQ,
++	.flags		= AHCI_FLAG_COMMON | ATA_FLAG_NCQ | ATA_FLAG_NO_DIPM,
+ 	.pio_mask	= ATA_PIO4,
+ 	.udma_mask	= ATA_UDMA6,
+ 	.port_ops	= &ahci_platform_ops,
+diff --git a/drivers/atm/iphase.c b/drivers/atm/iphase.c
+index 933e3ff2ee8d1..3f2ebfb06afdf 100644
+--- a/drivers/atm/iphase.c
++++ b/drivers/atm/iphase.c
+@@ -3279,7 +3279,7 @@ static void __exit ia_module_exit(void)
+ {
+ 	pci_unregister_driver(&ia_driver);
+ 
+-        del_timer(&ia_timer);
++	del_timer_sync(&ia_timer);
+ }
+ 
+ module_init(ia_module_init);
+diff --git a/drivers/atm/nicstar.c b/drivers/atm/nicstar.c
+index 5c7e4df159b91..bc5a6ab6fa4b4 100644
+--- a/drivers/atm/nicstar.c
++++ b/drivers/atm/nicstar.c
+@@ -299,7 +299,7 @@ static void __exit nicstar_cleanup(void)
+ {
+ 	XPRINTK("nicstar: nicstar_cleanup() called.\n");
+ 
+-	del_timer(&ns_timer);
++	del_timer_sync(&ns_timer);
+ 
+ 	pci_unregister_driver(&nicstar_driver);
+ 
+@@ -527,6 +527,15 @@ static int ns_init_card(int i, struct pci_dev *pcidev)
+ 	/* Set the VPI/VCI MSb mask to zero so we can receive OAM cells */
+ 	writel(0x00000000, card->membase + VPM);
+ 
++	card->intcnt = 0;
++	if (request_irq
++	    (pcidev->irq, &ns_irq_handler, IRQF_SHARED, "nicstar", card) != 0) {
++		pr_err("nicstar%d: can't allocate IRQ %d.\n", i, pcidev->irq);
++		error = 9;
++		ns_init_card_error(card, error);
++		return error;
++	}
++
+ 	/* Initialize TSQ */
+ 	card->tsq.org = dma_alloc_coherent(&card->pcidev->dev,
+ 					   NS_TSQSIZE + NS_TSQ_ALIGNMENT,
+@@ -753,15 +762,6 @@ static int ns_init_card(int i, struct pci_dev *pcidev)
+ 
+ 	card->efbie = 1;
+ 
+-	card->intcnt = 0;
+-	if (request_irq
+-	    (pcidev->irq, &ns_irq_handler, IRQF_SHARED, "nicstar", card) != 0) {
+-		printk("nicstar%d: can't allocate IRQ %d.\n", i, pcidev->irq);
+-		error = 9;
+-		ns_init_card_error(card, error);
+-		return error;
+-	}
+-
+ 	/* Register device */
+ 	card->atmdev = atm_dev_register("nicstar", &card->pcidev->dev, &atm_ops,
+ 					-1, NULL);
+@@ -839,10 +839,12 @@ static void ns_init_card_error(ns_dev *card, int error)
+ 			dev_kfree_skb_any(hb);
+ 	}
+ 	if (error >= 12) {
+-		kfree(card->rsq.org);
++		dma_free_coherent(&card->pcidev->dev, NS_RSQSIZE + NS_RSQ_ALIGNMENT,
++				card->rsq.org, card->rsq.dma);
+ 	}
+ 	if (error >= 11) {
+-		kfree(card->tsq.org);
++		dma_free_coherent(&card->pcidev->dev, NS_TSQSIZE + NS_TSQ_ALIGNMENT,
++				card->tsq.org, card->tsq.dma);
+ 	}
+ 	if (error >= 10) {
+ 		free_irq(card->pcidev->irq, card);
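
Two things change in the nicstar error handling above: request_irq() moves
ahead of the queue allocations so the numbered error path unwinds cleanly,
and the TSQ/RSQ are now released with dma_free_coherent() rather than
kfree(). The latter is the important rule: memory from dma_alloc_coherent()
is not slab memory and must be returned through the matching free with the
same device, size and DMA handle. A minimal kernel-style sketch of the
pairing, assuming a valid struct device *dev:

dma_addr_t handle;
void *buf;

buf = dma_alloc_coherent(dev, NS_TSQSIZE + NS_TSQ_ALIGNMENT,
			 &handle, GFP_KERNEL);
if (!buf)
	return -ENOMEM;
/* ... on any later failure, undo with the matching free, never kfree() */
dma_free_coherent(dev, NS_TSQSIZE + NS_TSQ_ALIGNMENT, buf, handle);
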
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 7f6ba2c975ed4..6d23308119d16 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -270,6 +270,8 @@ static const struct usb_device_id blacklist_table[] = {
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0cf3, 0xe360), .driver_info = BTUSB_QCA_ROME |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	{ USB_DEVICE(0x0cf3, 0xe500), .driver_info = BTUSB_QCA_ROME |
++						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0489, 0xe092), .driver_info = BTUSB_QCA_ROME |
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0489, 0xe09f), .driver_info = BTUSB_QCA_ROME |
+@@ -1749,6 +1751,13 @@ static void btusb_work(struct work_struct *work)
+ 			 * which work with WBS at all.
+ 			 */
+ 			new_alts = btusb_find_altsetting(data, 6) ? 6 : 1;
++			/* mSBC frames do not need to be aligned to the SCO
++			 * packet boundary. If Alt 3 is supported, use it for
++			 * HCI payloads >= 60 bytes so that the air packet
++			 * data fills 60 bytes.
++			 */
++			if (new_alts == 1 && btusb_find_altsetting(data, 3))
++				new_alts = 3;
+ 		}
+ 
+ 		if (btusb_switch_alt_setting(hdev, new_alts) < 0)
+@@ -3312,11 +3321,6 @@ static int btusb_mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 	struct btmtk_wmt_hdr *hdr;
+ 	int err;
+ 
+-	/* Submit control IN URB on demand to process the WMT event */
+-	err = btusb_mtk_submit_wmt_recv_urb(hdev);
+-	if (err < 0)
+-		return err;
+-
+ 	/* Send the WMT command and wait until the WMT event returns */
+ 	hlen = sizeof(*hdr) + wmt_params->dlen;
+ 	if (hlen > 255)
+@@ -3342,6 +3346,11 @@ static int btusb_mtk_hci_wmt_sync(struct hci_dev *hdev,
+ 		goto err_free_wc;
+ 	}
+ 
++	/* Submit control IN URB on demand to process the WMT event */
++	err = btusb_mtk_submit_wmt_recv_urb(hdev);
++	if (err < 0)
++		return err;
++
+ 	/* The vendor specific WMT commands are all answered by a vendor
+ 	 * specific event and will have the Command Status or Command
+ 	 * Complete as with usual HCI command flow control.
+@@ -4062,6 +4071,11 @@ static int btusb_setup_qca_download_fw(struct hci_dev *hdev,
+ 	sent += size;
+ 	count -= size;
+ 
++	/* ep2 needs time to switch from the ACL function to the DFU
++	 * function, so add a 20 ms delay here.
++	 */
++	msleep(20);
++
+ 	while (count) {
+ 		size = min_t(size_t, count, QCA_DFU_PACKET_LEN);
+ 
+@@ -4154,9 +4168,15 @@ static int btusb_setup_qca_load_nvm(struct hci_dev *hdev,
+ 	int err;
+ 
+ 	if (((ver->flag >> 8) & 0xff) == QCA_FLAG_MULTI_NVM) {
+-		snprintf(fwname, sizeof(fwname), "qca/nvm_usb_%08x_%04x.bin",
+-			 le32_to_cpu(ver->rom_version),
+-			 le16_to_cpu(ver->board_id));
++		/* if board_id equals 0, use the default nvm without suffix */
++		if (le16_to_cpu(ver->board_id) == 0x0) {
++			snprintf(fwname, sizeof(fwname), "qca/nvm_usb_%08x.bin",
++				 le32_to_cpu(ver->rom_version));
++		} else {
++			snprintf(fwname, sizeof(fwname), "qca/nvm_usb_%08x_%04x.bin",
++				le32_to_cpu(ver->rom_version),
++				le16_to_cpu(ver->board_id));
++		}
+ 	} else {
+ 		snprintf(fwname, sizeof(fwname), "qca/nvm_usb_%08x.bin",
+ 			 le32_to_cpu(ver->rom_version));
+diff --git a/drivers/char/ipmi/ipmi_watchdog.c b/drivers/char/ipmi/ipmi_watchdog.c
+index 32c334e34d553..e4ff3b50de7f3 100644
+--- a/drivers/char/ipmi/ipmi_watchdog.c
++++ b/drivers/char/ipmi/ipmi_watchdog.c
+@@ -371,16 +371,18 @@ static int __ipmi_set_timeout(struct ipmi_smi_msg  *smi_msg,
+ 	data[0] = 0;
+ 	WDOG_SET_TIMER_USE(data[0], WDOG_TIMER_USE_SMS_OS);
+ 
+-	if ((ipmi_version_major > 1)
+-	    || ((ipmi_version_major == 1) && (ipmi_version_minor >= 5))) {
+-		/* This is an IPMI 1.5-only feature. */
+-		data[0] |= WDOG_DONT_STOP_ON_SET;
+-	} else if (ipmi_watchdog_state != WDOG_TIMEOUT_NONE) {
+-		/*
+-		 * In ipmi 1.0, setting the timer stops the watchdog, we
+-		 * need to start it back up again.
+-		 */
+-		hbnow = 1;
++	if (ipmi_watchdog_state != WDOG_TIMEOUT_NONE) {
++		if ((ipmi_version_major > 1) ||
++		    ((ipmi_version_major == 1) && (ipmi_version_minor >= 5))) {
++			/* This is an IPMI 1.5-only feature. */
++			data[0] |= WDOG_DONT_STOP_ON_SET;
++		} else {
++			/*
++			 * In ipmi 1.0, setting the timer stops the watchdog, we
++			 * need to start it back up again.
++			 */
++			hbnow = 1;
++		}
+ 	}
+ 
+ 	data[1] = 0;
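
The restructured condition above gates both behaviors on a timeout actually
being armed: with WDOG_TIMEOUT_NONE, neither the IPMI 1.5 "don't stop on set"
flag nor the IPMI 1.0 heartbeat restart is wanted. A hedged restatement of
the decision as a standalone helper (illustration only, not the driver's
code):

/* returns 1 when an IPMI 1.0 BMC needs a heartbeat to restart the
 * timer that the "set timeout" command just stopped */
static int needs_restart_heartbeat(int major, int minor, int timeout_armed)
{
	if (!timeout_armed)
		return 0;	/* nothing armed: no flag, no heartbeat */
	if (major > 1 || (major == 1 && minor >= 5))
		return 0;	/* 1.5+: WDOG_DONT_STOP_ON_SET keeps it running */
	return 1;		/* 1.0: setting the timer stops the watchdog */
}
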
+diff --git a/drivers/clk/renesas/r8a77995-cpg-mssr.c b/drivers/clk/renesas/r8a77995-cpg-mssr.c
+index 9cfd00cf4e69a..81c0bc1e78af8 100644
+--- a/drivers/clk/renesas/r8a77995-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a77995-cpg-mssr.c
+@@ -75,6 +75,7 @@ static const struct cpg_core_clk r8a77995_core_clks[] __initconst = {
+ 	DEF_RATE(".oco",       CLK_OCO,            8 * 1000 * 1000),
+ 
+ 	/* Core Clock Outputs */
++	DEF_FIXED("za2",       R8A77995_CLK_ZA2,   CLK_PLL0D3,     2, 1),
+ 	DEF_FIXED("z2",        R8A77995_CLK_Z2,    CLK_PLL0D3,     1, 1),
+ 	DEF_FIXED("ztr",       R8A77995_CLK_ZTR,   CLK_PLL1,       6, 1),
+ 	DEF_FIXED("zt",        R8A77995_CLK_ZT,    CLK_PLL1,       4, 1),
+diff --git a/drivers/clk/renesas/rcar-usb2-clock-sel.c b/drivers/clk/renesas/rcar-usb2-clock-sel.c
+index 34a85dc95beb8..9fb79bd794350 100644
+--- a/drivers/clk/renesas/rcar-usb2-clock-sel.c
++++ b/drivers/clk/renesas/rcar-usb2-clock-sel.c
+@@ -128,10 +128,8 @@ static int rcar_usb2_clock_sel_resume(struct device *dev)
+ static int rcar_usb2_clock_sel_remove(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+-	struct usb2_clock_sel_priv *priv = platform_get_drvdata(pdev);
+ 
+ 	of_clk_del_provider(dev->of_node);
+-	clk_hw_unregister(&priv->hw);
+ 	pm_runtime_put(dev);
+ 	pm_runtime_disable(dev);
+ 
+@@ -164,9 +162,6 @@ static int rcar_usb2_clock_sel_probe(struct platform_device *pdev)
+ 	if (IS_ERR(priv->rsts))
+ 		return PTR_ERR(priv->rsts);
+ 
+-	pm_runtime_enable(dev);
+-	pm_runtime_get_sync(dev);
+-
+ 	clk = devm_clk_get(dev, "usb_extal");
+ 	if (!IS_ERR(clk) && !clk_prepare_enable(clk)) {
+ 		priv->extal = !!clk_get_rate(clk);
+@@ -183,6 +178,8 @@ static int rcar_usb2_clock_sel_probe(struct platform_device *pdev)
+ 		return -ENOENT;
+ 	}
+ 
++	pm_runtime_enable(dev);
++	pm_runtime_get_sync(dev);
+ 	platform_set_drvdata(pdev, priv);
+ 	dev_set_drvdata(dev, priv);
+ 
+@@ -190,11 +187,20 @@ static int rcar_usb2_clock_sel_probe(struct platform_device *pdev)
+ 	init.ops = &usb2_clock_sel_clock_ops;
+ 	priv->hw.init = &init;
+ 
+-	clk = clk_register(NULL, &priv->hw);
+-	if (IS_ERR(clk))
+-		return PTR_ERR(clk);
++	ret = devm_clk_hw_register(NULL, &priv->hw);
++	if (ret)
++		goto pm_put;
++
++	ret = of_clk_add_hw_provider(np, of_clk_hw_simple_get, &priv->hw);
++	if (ret)
++		goto pm_put;
++
++	return 0;
+ 
+-	return of_clk_add_hw_provider(np, of_clk_hw_simple_get, &priv->hw);
++pm_put:
++	pm_runtime_put(dev);
++	pm_runtime_disable(dev);
++	return ret;
+ }
+ 
+ static const struct dev_pm_ops rcar_usb2_clock_sel_pm_ops = {
+diff --git a/drivers/clk/tegra/clk-periph-gate.c b/drivers/clk/tegra/clk-periph-gate.c
+index 4b31beefc9fc2..dc3f92678407b 100644
+--- a/drivers/clk/tegra/clk-periph-gate.c
++++ b/drivers/clk/tegra/clk-periph-gate.c
+@@ -48,18 +48,9 @@ static int clk_periph_is_enabled(struct clk_hw *hw)
+ 	return state;
+ }
+ 
+-static int clk_periph_enable(struct clk_hw *hw)
++static void clk_periph_enable_locked(struct clk_hw *hw)
+ {
+ 	struct tegra_clk_periph_gate *gate = to_clk_periph_gate(hw);
+-	unsigned long flags = 0;
+-
+-	spin_lock_irqsave(&periph_ref_lock, flags);
+-
+-	gate->enable_refcnt[gate->clk_num]++;
+-	if (gate->enable_refcnt[gate->clk_num] > 1) {
+-		spin_unlock_irqrestore(&periph_ref_lock, flags);
+-		return 0;
+-	}
+ 
+ 	write_enb_set(periph_clk_to_bit(gate), gate);
+ 	udelay(2);
+@@ -78,6 +69,32 @@ static int clk_periph_enable(struct clk_hw *hw)
+ 		udelay(1);
+ 		writel_relaxed(0, gate->clk_base + LVL2_CLK_GATE_OVRE);
+ 	}
++}
++
++static void clk_periph_disable_locked(struct clk_hw *hw)
++{
++	struct tegra_clk_periph_gate *gate = to_clk_periph_gate(hw);
++
++	/*
++	 * If the peripheral is on the APB bus, read the APB bus to
++	 * flush the pending write. This avoids peripheral access
++	 * after the clock is disabled.
++	 */
++	if (gate->flags & TEGRA_PERIPH_ON_APB)
++		tegra_read_chipid();
++
++	write_enb_clr(periph_clk_to_bit(gate), gate);
++}
++
++static int clk_periph_enable(struct clk_hw *hw)
++{
++	struct tegra_clk_periph_gate *gate = to_clk_periph_gate(hw);
++	unsigned long flags = 0;
++
++	spin_lock_irqsave(&periph_ref_lock, flags);
++
++	if (!gate->enable_refcnt[gate->clk_num]++)
++		clk_periph_enable_locked(hw);
+ 
+ 	spin_unlock_irqrestore(&periph_ref_lock, flags);
+ 
+@@ -91,21 +108,28 @@ static void clk_periph_disable(struct clk_hw *hw)
+ 
+ 	spin_lock_irqsave(&periph_ref_lock, flags);
+ 
+-	gate->enable_refcnt[gate->clk_num]--;
+-	if (gate->enable_refcnt[gate->clk_num] > 0) {
+-		spin_unlock_irqrestore(&periph_ref_lock, flags);
+-		return;
+-	}
++	WARN_ON(!gate->enable_refcnt[gate->clk_num]);
++
++	if (--gate->enable_refcnt[gate->clk_num] == 0)
++		clk_periph_disable_locked(hw);
++
++	spin_unlock_irqrestore(&periph_ref_lock, flags);
++}
++
++static void clk_periph_disable_unused(struct clk_hw *hw)
++{
++	struct tegra_clk_periph_gate *gate = to_clk_periph_gate(hw);
++	unsigned long flags = 0;
++
++	spin_lock_irqsave(&periph_ref_lock, flags);
+ 
+ 	/*
+-	 * If peripheral is in the APB bus then read the APB bus to
+-	 * flush the write operation in apb bus. This will avoid the
+-	 * peripheral access after disabling clock
++	 * Some clocks are duplicated and some of them are marked as critical,
++	 * like fuse and fuse_burn for example, thus the enable_refcnt will
++	 * be non-zero here if the "unused" duplicate is disabled by CCF.
+ 	 */
+-	if (gate->flags & TEGRA_PERIPH_ON_APB)
+-		tegra_read_chipid();
+-
+-	write_enb_clr(periph_clk_to_bit(gate), gate);
++	if (!gate->enable_refcnt[gate->clk_num])
++		clk_periph_disable_locked(hw);
+ 
+ 	spin_unlock_irqrestore(&periph_ref_lock, flags);
+ }
+@@ -114,6 +138,7 @@ const struct clk_ops tegra_clk_periph_gate_ops = {
+ 	.is_enabled = clk_periph_is_enabled,
+ 	.enable = clk_periph_enable,
+ 	.disable = clk_periph_disable,
++	.disable_unused = clk_periph_disable_unused,
+ };
+ 
+ struct clk *tegra_clk_register_periph_gate(const char *name,
+@@ -148,9 +173,6 @@ struct clk *tegra_clk_register_periph_gate(const char *name,
+ 	gate->enable_refcnt = enable_refcnt;
+ 	gate->regs = pregs;
+ 
+-	if (read_enb(gate) & periph_clk_to_bit(gate))
+-		enable_refcnt[clk_num]++;
+-
+ 	/* Data in .init is copied by clk_register(), so stack variable OK */
+ 	gate->hw.init = &init;
+ 
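
The gate rework above splits the raw hardware accesses into *_locked helpers
and drives them purely off the reference count: hardware is touched only on
the 0 -> 1 and 1 -> 0 transitions, and the new disable_unused hook tolerates
duplicated clocks whose refcount is legitimately zero. A minimal standalone
sketch of the counting pattern, with userspace stand-ins for the register
writes:

#include <assert.h>
#include <stdio.h>

static int refcnt;

static void hw_gate(int on) { printf("gate %s\n", on ? "on" : "off"); }

static void clk_enable(void)
{
	if (!refcnt++)		/* only the 0 -> 1 transition touches hw */
		hw_gate(1);
}

static void clk_disable(void)
{
	assert(refcnt);		/* mirrors the WARN_ON added by the patch */
	if (--refcnt == 0)	/* only the 1 -> 0 transition touches hw */
		hw_gate(0);
}

int main(void)
{
	clk_enable();
	clk_enable();	/* no hardware access: already on */
	clk_disable();	/* no hardware access: still referenced */
	clk_disable();	/* gates off */
	return 0;
}
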
+diff --git a/drivers/clk/tegra/clk-periph.c b/drivers/clk/tegra/clk-periph.c
+index 67620c7ecd9ee..79ca3aa072b70 100644
+--- a/drivers/clk/tegra/clk-periph.c
++++ b/drivers/clk/tegra/clk-periph.c
+@@ -100,6 +100,15 @@ static void clk_periph_disable(struct clk_hw *hw)
+ 	gate_ops->disable(gate_hw);
+ }
+ 
++static void clk_periph_disable_unused(struct clk_hw *hw)
++{
++	struct tegra_clk_periph *periph = to_clk_periph(hw);
++	const struct clk_ops *gate_ops = periph->gate_ops;
++	struct clk_hw *gate_hw = &periph->gate.hw;
++
++	gate_ops->disable_unused(gate_hw);
++}
++
+ static void clk_periph_restore_context(struct clk_hw *hw)
+ {
+ 	struct tegra_clk_periph *periph = to_clk_periph(hw);
+@@ -126,6 +135,7 @@ const struct clk_ops tegra_clk_periph_ops = {
+ 	.is_enabled = clk_periph_is_enabled,
+ 	.enable = clk_periph_enable,
+ 	.disable = clk_periph_disable,
++	.disable_unused = clk_periph_disable_unused,
+ 	.restore_context = clk_periph_restore_context,
+ };
+ 
+@@ -135,6 +145,7 @@ static const struct clk_ops tegra_clk_periph_nodiv_ops = {
+ 	.is_enabled = clk_periph_is_enabled,
+ 	.enable = clk_periph_enable,
+ 	.disable = clk_periph_disable,
++	.disable_unused = clk_periph_disable_unused,
+ 	.restore_context = clk_periph_restore_context,
+ };
+ 
+diff --git a/drivers/clk/tegra/clk-pll.c b/drivers/clk/tegra/clk-pll.c
+index 0193cebe8c5a3..823a567f2adc2 100644
+--- a/drivers/clk/tegra/clk-pll.c
++++ b/drivers/clk/tegra/clk-pll.c
+@@ -1131,7 +1131,8 @@ static int clk_pllu_enable(struct clk_hw *hw)
+ 	if (pll->lock)
+ 		spin_lock_irqsave(pll->lock, flags);
+ 
+-	_clk_pll_enable(hw);
++	if (!clk_pll_is_enabled(hw))
++		_clk_pll_enable(hw);
+ 
+ 	ret = clk_pll_wait_for_lock(pll);
+ 	if (ret < 0)
+@@ -1748,15 +1749,13 @@ static int clk_pllu_tegra114_enable(struct clk_hw *hw)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (clk_pll_is_enabled(hw))
+-		return 0;
+-
+ 	input_rate = clk_hw_get_rate(__clk_get_hw(osc));
+ 
+ 	if (pll->lock)
+ 		spin_lock_irqsave(pll->lock, flags);
+ 
+-	_clk_pll_enable(hw);
++	if (!clk_pll_is_enabled(hw))
++		_clk_pll_enable(hw);
+ 
+ 	ret = clk_pll_wait_for_lock(pll);
+ 	if (ret < 0)
+diff --git a/drivers/clk/tegra/clk-tegra124-emc.c b/drivers/clk/tegra/clk-tegra124-emc.c
+index bdf6f4a516176..74c1d894cca86 100644
+--- a/drivers/clk/tegra/clk-tegra124-emc.c
++++ b/drivers/clk/tegra/clk-tegra124-emc.c
+@@ -249,8 +249,10 @@ static int emc_set_timing(struct tegra_clk_emc *tegra,
+ 	div = timing->parent_rate / (timing->rate / 2) - 2;
+ 
+ 	err = tegra->prepare_timing_change(emc, timing->rate);
+-	if (err)
++	if (err) {
++		clk_disable_unprepare(timing->parent);
+ 		return err;
++	}
+ 
+ 	spin_lock_irqsave(tegra->lock, flags);
+ 
+diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
+index fe1a82627d570..5b412b51c21bf 100644
+--- a/drivers/clocksource/arm_arch_timer.c
++++ b/drivers/clocksource/arm_arch_timer.c
+@@ -365,7 +365,7 @@ static u64 notrace arm64_858921_read_cntvct_el0(void)
+ 	do {								\
+ 		_val = read_sysreg(reg);				\
+ 		_retries--;						\
+-	} while (((_val + 1) & GENMASK(9, 0)) <= 1 && _retries);	\
++	} while (((_val + 1) & GENMASK(8, 0)) <= 1 && _retries);	\
+ 									\
+ 	WARN_ON_ONCE(!_retries);					\
+ 	_val;								\
+diff --git a/drivers/extcon/extcon-intel-mrfld.c b/drivers/extcon/extcon-intel-mrfld.c
+index f47016fb28a84..cd1a5f230077c 100644
+--- a/drivers/extcon/extcon-intel-mrfld.c
++++ b/drivers/extcon/extcon-intel-mrfld.c
+@@ -197,6 +197,7 @@ static int mrfld_extcon_probe(struct platform_device *pdev)
+ 	struct intel_soc_pmic *pmic = dev_get_drvdata(dev->parent);
+ 	struct regmap *regmap = pmic->regmap;
+ 	struct mrfld_extcon_data *data;
++	unsigned int status;
+ 	unsigned int id;
+ 	int irq, ret;
+ 
+@@ -244,6 +245,14 @@ static int mrfld_extcon_probe(struct platform_device *pdev)
+ 	/* Get initial state */
+ 	mrfld_extcon_role_detect(data);
+ 
++	/*
++	 * The cached status value is used for cable detection (see the
++	 * comments in mrfld_extcon_cable_detect()), so we need to sync
++	 * the cached value with the real state of the hardware.
++	 */
++	regmap_read(regmap, BCOVE_SCHGRIRQ1, &status);
++	data->status = status;
++
+ 	mrfld_extcon_clear(data, BCOVE_MIRQLVL1, BCOVE_LVL1_CHGR);
+ 	mrfld_extcon_clear(data, BCOVE_MCHGRIRQ1, BCOVE_CHGRIRQ_ALL);
+ 
+diff --git a/drivers/firmware/qemu_fw_cfg.c b/drivers/firmware/qemu_fw_cfg.c
+index 0078260fbabea..172c751a4f6c2 100644
+--- a/drivers/firmware/qemu_fw_cfg.c
++++ b/drivers/firmware/qemu_fw_cfg.c
+@@ -299,15 +299,13 @@ static int fw_cfg_do_platform_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static ssize_t fw_cfg_showrev(struct kobject *k, struct attribute *a, char *buf)
++static ssize_t fw_cfg_showrev(struct kobject *k, struct kobj_attribute *a,
++			      char *buf)
+ {
+ 	return sprintf(buf, "%u\n", fw_cfg_rev);
+ }
+ 
+-static const struct {
+-	struct attribute attr;
+-	ssize_t (*show)(struct kobject *k, struct attribute *a, char *buf);
+-} fw_cfg_rev_attr = {
++static const struct kobj_attribute fw_cfg_rev_attr = {
+ 	.attr = { .name = "rev", .mode = S_IRUSR },
+ 	.show = fw_cfg_showrev,
+ };
+diff --git a/drivers/fpga/stratix10-soc.c b/drivers/fpga/stratix10-soc.c
+index 657a70c5fc996..9e34bbbce26e2 100644
+--- a/drivers/fpga/stratix10-soc.c
++++ b/drivers/fpga/stratix10-soc.c
+@@ -454,6 +454,7 @@ static int s10_remove(struct platform_device *pdev)
+ 	struct s10_priv *priv = mgr->priv;
+ 
+ 	fpga_mgr_unregister(mgr);
++	fpga_mgr_free(mgr);
+ 	stratix10_svc_free_channel(priv->chan);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index 7d4118c8128a5..5e69b5b50a196 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -50,12 +50,6 @@ static struct {
+ 	spinlock_t mem_limit_lock;
+ } kfd_mem_limit;
+ 
+-/* Struct used for amdgpu_amdkfd_bo_validate */
+-struct amdgpu_vm_parser {
+-	uint32_t        domain;
+-	bool            wait;
+-};
+-
+ static const char * const domain_bit_to_string[] = {
+ 		"CPU",
+ 		"GTT",
+@@ -346,11 +340,9 @@ validate_fail:
+ 	return ret;
+ }
+ 
+-static int amdgpu_amdkfd_validate(void *param, struct amdgpu_bo *bo)
++static int amdgpu_amdkfd_validate_vm_bo(void *_unused, struct amdgpu_bo *bo)
+ {
+-	struct amdgpu_vm_parser *p = param;
+-
+-	return amdgpu_amdkfd_bo_validate(bo, p->domain, p->wait);
++	return amdgpu_amdkfd_bo_validate(bo, bo->allowed_domains, false);
+ }
+ 
+ /* vm_validate_pt_pd_bos - Validate page table and directory BOs
+@@ -364,20 +356,15 @@ static int vm_validate_pt_pd_bos(struct amdgpu_vm *vm)
+ {
+ 	struct amdgpu_bo *pd = vm->root.base.bo;
+ 	struct amdgpu_device *adev = amdgpu_ttm_adev(pd->tbo.bdev);
+-	struct amdgpu_vm_parser param;
+ 	int ret;
+ 
+-	param.domain = AMDGPU_GEM_DOMAIN_VRAM;
+-	param.wait = false;
+-
+-	ret = amdgpu_vm_validate_pt_bos(adev, vm, amdgpu_amdkfd_validate,
+-					&param);
++	ret = amdgpu_vm_validate_pt_bos(adev, vm, amdgpu_amdkfd_validate_vm_bo, NULL);
+ 	if (ret) {
+ 		pr_err("failed to validate PT BOs\n");
+ 		return ret;
+ 	}
+ 
+-	ret = amdgpu_amdkfd_validate(&param, pd);
++	ret = amdgpu_amdkfd_validate_vm_bo(NULL, pd);
+ 	if (ret) {
+ 		pr_err("failed to validate PD\n");
+ 		return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 57ec108b59720..d83f2ee150b86 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2856,7 +2856,7 @@ static int amdgpu_device_ip_reinit_early_sriov(struct amdgpu_device *adev)
+ 		AMD_IP_BLOCK_TYPE_IH,
+ 	};
+ 
+-	for (i = 0; i < ARRAY_SIZE(ip_order); i++) {
++	for (i = 0; i < adev->num_ip_blocks; i++) {
+ 		int j;
+ 		struct amdgpu_ip_block *block;
+ 
+@@ -3181,8 +3181,8 @@ static int amdgpu_device_get_job_timeout_settings(struct amdgpu_device *adev)
+ 	int ret = 0;
+ 
+ 	/*
+-	 * By default timeout for non compute jobs is 10000.
+-	 * And there is no timeout enforced on compute jobs.
++	 * By default the timeout for non-compute jobs is 10000
++	 * and 60000 for compute jobs.
+ 	 * In SR-IOV or passthrough mode, timeout for compute
+ 	 * jobs are 60000 by default.
+ 	 */
+@@ -3191,10 +3191,8 @@ static int amdgpu_device_get_job_timeout_settings(struct amdgpu_device *adev)
+ 	if (amdgpu_sriov_vf(adev))
+ 		adev->compute_timeout = amdgpu_sriov_is_pp_one_vf(adev) ?
+ 					msecs_to_jiffies(60000) : msecs_to_jiffies(10000);
+-	else if (amdgpu_passthrough(adev))
+-		adev->compute_timeout =  msecs_to_jiffies(60000);
+ 	else
+-		adev->compute_timeout = MAX_SCHEDULE_TIMEOUT;
++		adev->compute_timeout =  msecs_to_jiffies(60000);
+ 
+ 	if (strnlen(input, AMDGPU_MAX_TIMEOUT_PARAM_LENGTH)) {
+ 		while ((timeout_setting = strsep(&input, ",")) &&
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index f93883db2b467..c576902ccf8e6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -287,9 +287,9 @@ module_param_named(msi, amdgpu_msi, int, 0444);
+  *   for SDMA and Video.
+  *
+  * By default(with no lockup_timeout settings), the timeout for all non-compute(GFX, SDMA and Video)
+- * jobs is 10000. And there is no timeout enforced on compute jobs.
++ * jobs is 10000. The timeout for compute jobs is 60000.
+  */
+-MODULE_PARM_DESC(lockup_timeout, "GPU lockup timeout in ms (default: for bare metal 10000 for non-compute jobs and infinity timeout for compute jobs; "
++MODULE_PARM_DESC(lockup_timeout, "GPU lockup timeout in ms (default: for bare metal 10000 for non-compute jobs and 60000 for compute jobs; "
+ 		"for passthrough or sriov, 10000 for all jobs."
+ 		" 0: keep default value. negative: infinity timeout), "
+ 		"format: for bare metal [Non-Compute] or [GFX,Compute,SDMA,Video]; "
+@@ -1179,6 +1179,7 @@ static const struct pci_device_id pciidlist[] = {
+ 	{0x1002, 0x73E0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
+ 	{0x1002, 0x73E1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
+ 	{0x1002, 0x73E2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
++	{0x1002, 0x73E3, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
+ 	{0x1002, 0x73FF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
+ 
+ 	/* Aldebaran */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index a2fe2dac32c16..98906a43fda3d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -130,7 +130,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 	struct amdgpu_device *adev = ring->adev;
+ 	struct amdgpu_ib *ib = &ibs[0];
+ 	struct dma_fence *tmp = NULL;
+-	bool skip_preamble, need_ctx_switch;
++	bool need_ctx_switch;
+ 	unsigned patch_offset = ~0;
+ 	struct amdgpu_vm *vm;
+ 	uint64_t fence_ctx;
+@@ -227,7 +227,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 	if (need_ctx_switch)
+ 		status |= AMDGPU_HAVE_CTX_SWITCH;
+ 
+-	skip_preamble = ring->current_ctx == fence_ctx;
+ 	if (job && ring->funcs->emit_cntxcntl) {
+ 		status |= job->preamble_status;
+ 		status |= job->preemption_status;
+@@ -245,14 +244,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 	for (i = 0; i < num_ibs; ++i) {
+ 		ib = &ibs[i];
+ 
+-		/* drop preamble IBs if we don't have a context switch */
+-		if ((ib->flags & AMDGPU_IB_FLAG_PREAMBLE) &&
+-		    skip_preamble &&
+-		    !(status & AMDGPU_PREAMBLE_IB_PRESENT_FIRST) &&
+-		    !amdgpu_mcbp &&
+-		    !amdgpu_sriov_vf(adev)) /* for SRIOV preemption, Preamble CE ib must be inserted anyway */
+-			continue;
+-
+ 		if (job && ring->funcs->emit_frame_cntl) {
+ 			if (secure != !!(ib->flags & AMDGPU_IB_FLAGS_SECURE)) {
+ 				amdgpu_ring_emit_frame_cntl(ring, false, secure);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_nbio.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_nbio.h
+index 25ee53545837d..45295dce5c3ec 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_nbio.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_nbio.h
+@@ -93,6 +93,8 @@ struct amdgpu_nbio_funcs {
+ 	void (*enable_aspm)(struct amdgpu_device *adev,
+ 			    bool enable);
+ 	void (*program_aspm)(struct amdgpu_device *adev);
++	void (*apply_lc_spc_mode_wa)(struct amdgpu_device *adev);
++	void (*apply_l1_link_width_reconfig_wa)(struct amdgpu_device *adev);
+ };
+ 
+ struct amdgpu_nbio {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index f9434bc2f9b21..db00de33caa32 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -1246,6 +1246,9 @@ int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer,
+ 
+ 	BUG_ON(bo->tbo.type == ttm_bo_type_kernel);
+ 	ubo = to_amdgpu_bo_user(bo);
++	if (metadata_size)
++		*metadata_size = ubo->metadata_size;
++
+ 	if (buffer) {
+ 		if (buffer_size < ubo->metadata_size)
+ 			return -EINVAL;
+@@ -1254,8 +1257,6 @@ int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer,
+ 			memcpy(buffer, ubo->metadata, ubo->metadata_size);
+ 	}
+ 
+-	if (metadata_size)
+-		*metadata_size = ubo->metadata_size;
+ 	if (flags)
+ 		*flags = ubo->metadata_flags;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.h
+index bbcccf53080dd..e5a75fb788ddb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.h
+@@ -21,6 +21,11 @@
+ #ifndef __AMDGPU_UMC_H__
+ #define __AMDGPU_UMC_H__
+ 
++/*
++ * (addr / 256) * 4096, the higher 26 bits in ErrorAddr
++ * are the index of the 4KB block
++ */
++#define ADDR_OF_4KB_BLOCK(addr)			(((addr) & ~0xffULL) << 4)
+ /*
+  * (addr / 256) * 8192, the higher 26 bits in ErrorAddr
+  * is the index of 8KB block
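
The new macro is the 4KB sibling of the existing 8KB one, and the bit
manipulation matches the comment: clearing the low 8 bits leaves
(addr / 256) * 256, and shifting left by 4 multiplies by 16, giving
(addr / 256) * 4096, the byte offset of the 4KB block indexed by the upper
26 bits. A standalone check of the identity:

#include <assert.h>
#include <stdint.h>

#define ADDR_OF_4KB_BLOCK(addr)		(((addr) & ~0xffULL) << 4)

int main(void)
{
	for (uint64_t addr = 0; addr < 1000000; addr += 7)
		assert(ADDR_OF_4KB_BLOCK(addr) == (addr / 256) * 4096);
	return 0;
}
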
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c b/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c
+index 05ddec7ba7e2e..754b11dea6f04 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c
+@@ -51,6 +51,8 @@
+ #define mmBIF_MMSCH1_DOORBELL_RANGE		0x01d8
+ #define mmBIF_MMSCH1_DOORBELL_RANGE_BASE_IDX	2
+ 
++#define smnPCIE_LC_LINK_WIDTH_CNTL		0x11140288
++
+ static void nbio_v2_3_remap_hdp_registers(struct amdgpu_device *adev)
+ {
+ 	WREG32_SOC15(NBIO, 0, mmREMAP_HDP_MEM_FLUSH_CNTL,
+@@ -463,6 +465,43 @@ static void nbio_v2_3_program_aspm(struct amdgpu_device *adev)
+ 		WREG32_PCIE(smnPCIE_LC_CNTL3, data);
+ }
+ 
++static void nbio_v2_3_apply_lc_spc_mode_wa(struct amdgpu_device *adev)
++{
++	uint32_t reg_data = 0;
++	uint32_t link_width = 0;
++
++	if (!((adev->asic_type >= CHIP_NAVI10) &&
++	     (adev->asic_type <= CHIP_NAVI12)))
++		return;
++
++	reg_data = RREG32_PCIE(smnPCIE_LC_LINK_WIDTH_CNTL);
++	link_width = (reg_data & PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD_MASK)
++		>> PCIE_LC_LINK_WIDTH_CNTL__LC_LINK_WIDTH_RD__SHIFT;
++
++	/*
++	 * Program PCIE_LC_CNTL6.LC_SPC_MODE_8GT to 0x2 (4 symbols per clock data)
++	 * if link_width is 0x3 (x4)
++	 */
++	if (0x3 == link_width) {
++		reg_data = RREG32_PCIE(smnPCIE_LC_CNTL6);
++		reg_data &= ~PCIE_LC_CNTL6__LC_SPC_MODE_8GT_MASK;
++		reg_data |= (0x2 << PCIE_LC_CNTL6__LC_SPC_MODE_8GT__SHIFT);
++		WREG32_PCIE(smnPCIE_LC_CNTL6, reg_data);
++	}
++}
++
++static void nbio_v2_3_apply_l1_link_width_reconfig_wa(struct amdgpu_device *adev)
++{
++	uint32_t reg_data = 0;
++
++	if (adev->asic_type != CHIP_NAVI10)
++		return;
++
++	reg_data = RREG32_PCIE(smnPCIE_LC_LINK_WIDTH_CNTL);
++	reg_data |= PCIE_LC_LINK_WIDTH_CNTL__LC_L1_RECONFIG_EN_MASK;
++	WREG32_PCIE(smnPCIE_LC_LINK_WIDTH_CNTL, reg_data);
++}
++
+ const struct amdgpu_nbio_funcs nbio_v2_3_funcs = {
+ 	.get_hdp_flush_req_offset = nbio_v2_3_get_hdp_flush_req_offset,
+ 	.get_hdp_flush_done_offset = nbio_v2_3_get_hdp_flush_done_offset,
+@@ -484,4 +523,6 @@ const struct amdgpu_nbio_funcs nbio_v2_3_funcs = {
+ 	.remap_hdp_registers = nbio_v2_3_remap_hdp_registers,
+ 	.enable_aspm = nbio_v2_3_enable_aspm,
+ 	.program_aspm =  nbio_v2_3_program_aspm,
++	.apply_lc_spc_mode_wa = nbio_v2_3_apply_lc_spc_mode_wa,
++	.apply_l1_link_width_reconfig_wa = nbio_v2_3_apply_l1_link_width_reconfig_wa,
+ };
+diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
+index d290ca0b06da8..d73a4e9842d2e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nv.c
++++ b/drivers/gpu/drm/amd/amdgpu/nv.c
+@@ -1194,6 +1194,12 @@ static int nv_common_hw_init(void *handle)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
++	if (adev->nbio.funcs->apply_lc_spc_mode_wa)
++		adev->nbio.funcs->apply_lc_spc_mode_wa(adev);
++
++	if (adev->nbio.funcs->apply_l1_link_width_reconfig_wa)
++		adev->nbio.funcs->apply_l1_link_width_reconfig_wa(adev);
++
+ 	/* enable pcie gen2/3 link */
+ 	nv_pcie_gen3_enable(adev);
+ 	/* enable aspm */
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index 5715be6770ecc..3bb996ac8d26a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -144,7 +144,7 @@ static const struct soc15_reg_golden golden_settings_sdma_4_1[] = {
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_IB_CNTL, 0x800f0111, 0x00000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+-	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003e0),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000)
+ };
+ 
+@@ -288,7 +288,7 @@ static const struct soc15_reg_golden golden_settings_sdma_4_3[] = {
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_POWER_CNTL, 0x003fff07, 0x40000051),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+-	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003e0),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x03fbe1fe)
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+index 240596b25fe4e..9ab23947a1515 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+@@ -145,9 +145,6 @@ static int sdma_v5_2_init_microcode(struct amdgpu_device *adev)
+ 	struct amdgpu_firmware_info *info = NULL;
+ 	const struct common_firmware_header *header = NULL;
+ 
+-	if (amdgpu_sriov_vf(adev) && (adev->asic_type == CHIP_SIENNA_CICHLID))
+-		return 0;
+-
+ 	DRM_DEBUG("\n");
+ 
+ 	switch (adev->asic_type) {
+@@ -182,6 +179,9 @@ static int sdma_v5_2_init_microcode(struct amdgpu_device *adev)
+ 		       (void *)&adev->sdma.instance[0],
+ 		       sizeof(struct amdgpu_sdma_instance));
+ 
++	if (amdgpu_sriov_vf(adev) && (adev->asic_type == CHIP_SIENNA_CICHLID))
++		return 0;
++
+ 	DRM_DEBUG("psp_load == '%s'\n",
+ 		  adev->firmware.load_type == AMDGPU_FW_LOAD_PSP ? "true" : "false");
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/umc_v8_7.c b/drivers/gpu/drm/amd/amdgpu/umc_v8_7.c
+index 89d20adfa001a..af59a35788e3e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/umc_v8_7.c
++++ b/drivers/gpu/drm/amd/amdgpu/umc_v8_7.c
+@@ -234,7 +234,7 @@ static void umc_v8_7_query_error_address(struct amdgpu_device *adev,
+ 		err_addr &= ~((0x1ULL << lsb) - 1);
+ 
+ 		/* translate umc channel address to soc pa, 3 parts are included */
+-		retired_page = ADDR_OF_8KB_BLOCK(err_addr) |
++		retired_page = ADDR_OF_4KB_BLOCK(err_addr) |
+ 				ADDR_OF_256B_BLOCK(channel_index) |
+ 				OFFSET_IN_256B_BLOCK(err_addr);
+ 
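The three OR'd terms land in disjoint bit ranges, which is why switching the
block macro changes only the high part of the retired page. A worked example,
assuming the 256B macros keep their usual amdgpu_umc.h definitions
(channel_index << 8 and addr & 0xff):

	/* err_addr = 0x12345, channel_index = 0x2:
	 *   ADDR_OF_4KB_BLOCK(0x12345)    = 0x123000  (bits 12 and up)
	 *   ADDR_OF_256B_BLOCK(0x2)       = 0x200     (bits 11:8 here)
	 *   OFFSET_IN_256B_BLOCK(0x12345) = 0x45      (bits 7:0)
	 *   retired_page = 0x123000 | 0x200 | 0x45 = 0x123245
	 */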
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index d3eaa1549bd78..f0bad74af2305 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -486,9 +486,6 @@ static int destroy_queue_nocpsch_locked(struct device_queue_manager *dqm,
+ 	if (retval == -ETIME)
+ 		qpd->reset_wavefronts = true;
+ 
+-
+-	mqd_mgr->free_mqd(mqd_mgr, q->mqd, q->mqd_mem_obj);
+-
+ 	list_del(&q->list);
+ 	if (list_empty(&qpd->queues_list)) {
+ 		if (qpd->reset_wavefronts) {
+@@ -523,6 +520,8 @@ static int destroy_queue_nocpsch(struct device_queue_manager *dqm,
+ 	int retval;
+ 	uint64_t sdma_val = 0;
+ 	struct kfd_process_device *pdd = qpd_to_pdd(qpd);
++	struct mqd_manager *mqd_mgr =
++		dqm->mqd_mgrs[get_mqd_type_from_queue_type(q->properties.type)];
+ 
+ 	/* Get the SDMA queue stats */
+ 	if ((q->properties.type == KFD_QUEUE_TYPE_SDMA) ||
+@@ -540,6 +539,8 @@ static int destroy_queue_nocpsch(struct device_queue_manager *dqm,
+ 		pdd->sdma_past_activity_counter += sdma_val;
+ 	dqm_unlock(dqm);
+ 
++	mqd_mgr->free_mqd(mqd_mgr, q->mqd, q->mqd_mem_obj);
++
+ 	return retval;
+ }
+ 
+@@ -1629,7 +1630,7 @@ out:
+ static int process_termination_nocpsch(struct device_queue_manager *dqm,
+ 		struct qcm_process_device *qpd)
+ {
+-	struct queue *q, *next;
++	struct queue *q;
+ 	struct device_process_node *cur, *next_dpn;
+ 	int retval = 0;
+ 	bool found = false;
+@@ -1637,12 +1638,19 @@ static int process_termination_nocpsch(struct device_queue_manager *dqm,
+ 	dqm_lock(dqm);
+ 
+ 	/* Clear all user mode queues */
+-	list_for_each_entry_safe(q, next, &qpd->queues_list, list) {
++	while (!list_empty(&qpd->queues_list)) {
++		struct mqd_manager *mqd_mgr;
+ 		int ret;
+ 
++		q = list_first_entry(&qpd->queues_list, struct queue, list);
++		mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
++				q->properties.type)];
+ 		ret = destroy_queue_nocpsch_locked(dqm, qpd, q);
+ 		if (ret)
+ 			retval = ret;
++		dqm_unlock(dqm);
++		mqd_mgr->free_mqd(mqd_mgr, q->mqd, q->mqd_mem_obj);
++		dqm_lock(dqm);
+ 	}
+ 
+ 	/* Unregister process */
+@@ -1674,36 +1682,34 @@ static int get_wave_state(struct device_queue_manager *dqm,
+ 			  u32 *save_area_used_size)
+ {
+ 	struct mqd_manager *mqd_mgr;
+-	int r;
+ 
+ 	dqm_lock(dqm);
+ 
+-	if (q->properties.type != KFD_QUEUE_TYPE_COMPUTE ||
+-	    q->properties.is_active || !q->device->cwsr_enabled) {
+-		r = -EINVAL;
+-		goto dqm_unlock;
+-	}
+-
+ 	mqd_mgr = dqm->mqd_mgrs[KFD_MQD_TYPE_CP];
+ 
+-	if (!mqd_mgr->get_wave_state) {
+-		r = -EINVAL;
+-		goto dqm_unlock;
++	if (q->properties.type != KFD_QUEUE_TYPE_COMPUTE ||
++	    q->properties.is_active || !q->device->cwsr_enabled ||
++	    !mqd_mgr->get_wave_state) {
++		dqm_unlock(dqm);
++		return -EINVAL;
+ 	}
+ 
+-	r = mqd_mgr->get_wave_state(mqd_mgr, q->mqd, ctl_stack,
+-			ctl_stack_used_size, save_area_used_size);
+-
+-dqm_unlock:
+ 	dqm_unlock(dqm);
+-	return r;
++
++	/*
++	 * get_wave_state is outside the dqm lock to prevent circular locking
++	 * and the queue should be protected against destruction by the process
++	 * lock.
++	 */
++	return mqd_mgr->get_wave_state(mqd_mgr, q->mqd, ctl_stack,
++			ctl_stack_used_size, save_area_used_size);
+ }
+ 
+ static int process_termination_cpsch(struct device_queue_manager *dqm,
+ 		struct qcm_process_device *qpd)
+ {
+ 	int retval;
+-	struct queue *q, *next;
++	struct queue *q;
+ 	struct kernel_queue *kq, *kq_next;
+ 	struct mqd_manager *mqd_mgr;
+ 	struct device_process_node *cur, *next_dpn;
+@@ -1760,24 +1766,26 @@ static int process_termination_cpsch(struct device_queue_manager *dqm,
+ 		qpd->reset_wavefronts = false;
+ 	}
+ 
+-	dqm_unlock(dqm);
+-
+-	/* Outside the DQM lock because under the DQM lock we can't do
+-	 * reclaim or take other locks that others hold while reclaiming.
+-	 */
+-	if (found)
+-		kfd_dec_compute_active(dqm->dev);
+-
+ 	/* Lastly, free mqd resources.
+ 	 * Do free_mqd() after dqm_unlock to avoid circular locking.
+ 	 */
+-	list_for_each_entry_safe(q, next, &qpd->queues_list, list) {
++	while (!list_empty(&qpd->queues_list)) {
++		q = list_first_entry(&qpd->queues_list, struct queue, list);
+ 		mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
+ 				q->properties.type)];
+ 		list_del(&q->list);
+ 		qpd->queue_count--;
++		dqm_unlock(dqm);
+ 		mqd_mgr->free_mqd(mqd_mgr, q->mqd, q->mqd_mem_obj);
++		dqm_lock(dqm);
+ 	}
++	dqm_unlock(dqm);
++
++	/* Outside the DQM lock because under the DQM lock we can't do
++	 * reclaim or take other locks that others hold while reclaiming.
++	 */
++	if (found)
++		kfd_dec_compute_active(dqm->dev);
+ 
+ 	return retval;
+ }
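Both termination paths above now share the same shape: pop one queue at a time
so the list walk stays valid, and drop the DQM lock around free_mqd() because
freeing can reclaim memory and take locks that must not nest inside dqm_lock().
A distilled sketch of the pattern (names follow the code above; this is not a
verbatim excerpt):

	dqm_lock(dqm);
	while (!list_empty(&qpd->queues_list)) {
		q = list_first_entry(&qpd->queues_list, struct queue, list);
		list_del(&q->list);	/* detach while still locked */
		dqm_unlock(dqm);	/* freeing may reclaim/sleep */
		mqd_mgr->free_mqd(mqd_mgr, q->mqd, q->mqd_mem_obj);
		dqm_lock(dqm);		/* retake before the next lookup */
	}
	dqm_unlock(dqm);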
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 2b2d7b9f26f16..eeaae1cf2bc2b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4032,6 +4032,23 @@ static int fill_dc_scaling_info(const struct drm_plane_state *state,
+ 	scaling_info->src_rect.x = state->src_x >> 16;
+ 	scaling_info->src_rect.y = state->src_y >> 16;
+ 
++	/*
++	 * For reasons we don't (yet) fully understand a non-zero
++	 * src_y coordinate into an NV12 buffer can cause a
++	 * system hang. To avoid hangs (and maybe be overly cautious)
++	 * let's reject both non-zero src_x and src_y.
++	 *
++	 * We currently know of only one use-case to reproduce a
++	 * scenario with non-zero src_x and src_y for NV12, which
++	 * is to gesture the YouTube Android app into full screen
++	 * on ChromeOS.
++	 */
++	if (state->fb &&
++	    state->fb->format->format == DRM_FORMAT_NV12 &&
++	    (scaling_info->src_rect.x != 0 ||
++	     scaling_info->src_rect.y != 0))
++		return -EINVAL;
++
+ 	/*
+ 	 * For reasons we don't (yet) fully understand a non-zero
+ 	 * src_y coordinate into an NV12 buffer can cause a
+@@ -9481,7 +9498,8 @@ skip_modeset:
+ 	BUG_ON(dm_new_crtc_state->stream == NULL);
+ 
+ 	/* Scaling or underscan settings */
+-	if (is_scaling_state_different(dm_old_conn_state, dm_new_conn_state))
++	if (is_scaling_state_different(dm_old_conn_state, dm_new_conn_state) ||
++				drm_atomic_crtc_needs_modeset(new_crtc_state))
+ 		update_stream_scaling_settings(
+ 			&new_crtc_state->mode, dm_new_conn_state, dm_new_crtc_state->stream);
+ 
+@@ -10048,6 +10066,10 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 			dm_old_crtc_state->dsc_force_changed == false)
+ 			continue;
+ 
++		ret = amdgpu_dm_verify_lut_sizes(new_crtc_state);
++		if (ret)
++			goto fail;
++
+ 		if (!new_crtc_state->enable)
+ 			continue;
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index b2f2ccfc20bbe..c8e5bbbb8bceb 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -591,6 +591,7 @@ void amdgpu_dm_trigger_timing_sync(struct drm_device *dev);
+ #define MAX_COLOR_LEGACY_LUT_ENTRIES 256
+ 
+ void amdgpu_dm_init_color_mod(void);
++int amdgpu_dm_verify_lut_sizes(const struct drm_crtc_state *crtc_state);
+ int amdgpu_dm_update_crtc_color_mgmt(struct dm_crtc_state *crtc);
+ int amdgpu_dm_update_plane_color_mgmt(struct dm_crtc_state *crtc,
+ 				      struct dc_plane_state *dc_plane_state);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c
+index 157fe4efbb599..a022e5bb30a5c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_color.c
+@@ -284,6 +284,37 @@ static int __set_input_tf(struct dc_transfer_func *func,
+ 	return res ? 0 : -ENOMEM;
+ }
+ 
++/**
++ * Verifies that the Degamma and Gamma LUTs attached to the @crtc_state are of
++ * the expected size.
++ * Returns 0 on success.
++ */
++int amdgpu_dm_verify_lut_sizes(const struct drm_crtc_state *crtc_state)
++{
++	const struct drm_color_lut *lut = NULL;
++	uint32_t size = 0;
++
++	lut = __extract_blob_lut(crtc_state->degamma_lut, &size);
++	if (lut && size != MAX_COLOR_LUT_ENTRIES) {
++		DRM_DEBUG_DRIVER(
++			"Invalid Degamma LUT size. Should be %u but got %u.\n",
++			MAX_COLOR_LUT_ENTRIES, size);
++		return -EINVAL;
++	}
++
++	lut = __extract_blob_lut(crtc_state->gamma_lut, &size);
++	if (lut && size != MAX_COLOR_LUT_ENTRIES &&
++	    size != MAX_COLOR_LEGACY_LUT_ENTRIES) {
++		DRM_DEBUG_DRIVER(
++			"Invalid Gamma LUT size. Should be %u (or %u for legacy) but got %u.\n",
++			MAX_COLOR_LUT_ENTRIES, MAX_COLOR_LEGACY_LUT_ENTRIES,
++			size);
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ /**
+  * amdgpu_dm_update_crtc_color_mgmt: Maps DRM color management to DC stream.
+  * @crtc: amdgpu_dm crtc state
+@@ -317,14 +348,12 @@ int amdgpu_dm_update_crtc_color_mgmt(struct dm_crtc_state *crtc)
+ 	bool is_legacy;
+ 	int r;
+ 
+-	degamma_lut = __extract_blob_lut(crtc->base.degamma_lut, &degamma_size);
+-	if (degamma_lut && degamma_size != MAX_COLOR_LUT_ENTRIES)
+-		return -EINVAL;
++	r = amdgpu_dm_verify_lut_sizes(&crtc->base);
++	if (r)
++		return r;
+ 
++	degamma_lut = __extract_blob_lut(crtc->base.degamma_lut, &degamma_size);
+ 	regamma_lut = __extract_blob_lut(crtc->base.gamma_lut, &regamma_size);
+-	if (regamma_lut && regamma_size != MAX_COLOR_LUT_ENTRIES &&
+-	    regamma_size != MAX_COLOR_LEGACY_LUT_ENTRIES)
+-		return -EINVAL;
+ 
+ 	has_degamma =
+ 		degamma_lut && !__is_lut_linear(degamma_lut, degamma_size);
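Factoring the checks into amdgpu_dm_verify_lut_sizes() lets the same validation
run both at atomic-check time and before color management is programmed, so a
mis-sized LUT is rejected up front rather than discovered mid-commit. A minimal
caller sketch; 4096 is the non-legacy MAX_COLOR_LUT_ENTRIES value defined
alongside the 256-entry legacy one in amdgpu_dm.h:

	/* degamma_lut: exactly MAX_COLOR_LUT_ENTRIES (4096)
	 * gamma_lut:   MAX_COLOR_LUT_ENTRIES (4096) or
	 *              MAX_COLOR_LEGACY_LUT_ENTRIES (256)
	 */
	r = amdgpu_dm_verify_lut_sizes(&crtc->base);
	if (r)
		return r;	/* -EINVAL on a mis-sized LUT */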
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+index d79f4fe06c47e..4812a72f8aadc 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+@@ -2131,7 +2131,7 @@ static enum bp_result get_integrated_info_v2_1(
+ 		info_v2_1->edp1_info.edp_pwr_down_bloff_to_vary_bloff;
+ 	info->edp1_info.edp_panel_bpc =
+ 		info_v2_1->edp1_info.edp_panel_bpc;
+-	info->edp1_info.edp_bootup_bl_level =
++	info->edp1_info.edp_bootup_bl_level = info_v2_1->edp1_info.edp_bootup_bl_level;
+ 
+ 	info->edp2_info.edp_backlight_pwm_hz =
+ 	le16_to_cpu(info_v2_1->edp2_info.edp_backlight_pwm_hz);
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+index a06e86853bb96..75ba86f951f84 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+@@ -128,7 +128,7 @@ void rn_update_clocks(struct clk_mgr *clk_mgr_base,
+ 	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+ 	struct dc_clocks *new_clocks = &context->bw_ctx.bw.dcn.clk;
+ 	struct dc *dc = clk_mgr_base->ctx->dc;
+-	int display_count, i;
++	int display_count;
+ 	bool update_dppclk = false;
+ 	bool update_dispclk = false;
+ 	bool dpp_clock_lowered = false;
+@@ -210,14 +210,6 @@ void rn_update_clocks(struct clk_mgr *clk_mgr_base,
+ 				clk_mgr_base->clks.dppclk_khz,
+ 				safe_to_lower);
+ 
+-		for (i = 0; i < context->stream_count; i++) {
+-			if (context->streams[i]->signal == SIGNAL_TYPE_EDP &&
+-				context->streams[i]->apply_seamless_boot_optimization) {
+-				dc_wait_for_vblank(dc, context->streams[i]);
+-				break;
+-			}
+-		}
+-
+ 		clk_mgr_base->clks.actual_dppclk_khz =
+ 				rn_vbios_smu_set_dppclk(clk_mgr, clk_mgr_base->clks.dppclk_khz);
+ 
+@@ -842,6 +834,7 @@ static struct wm_table lpddr4_wm_table_rn = {
+ 		},
+ 	}
+ };
++
+ static unsigned int find_socclk_for_voltage(struct dpm_clocks *clock_table, unsigned int voltage)
+ {
+ 	int i;
+@@ -854,6 +847,7 @@ static unsigned int find_socclk_for_voltage(struct dpm_clocks *clock_table, unsi
+ 	ASSERT(0);
+ 	return 0;
+ }
++
+ static unsigned int find_dcfclk_for_voltage(struct dpm_clocks *clock_table, unsigned int voltage)
+ {
+ 	int i;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 4713f09bcbf18..a869702d77af5 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -3219,19 +3219,6 @@ void dc_link_remove_remote_sink(struct dc_link *link, struct dc_sink *sink)
+ 	}
+ }
+ 
+-void dc_wait_for_vblank(struct dc *dc, struct dc_stream_state *stream)
+-{
+-	int i;
+-
+-	for (i = 0; i < dc->res_pool->pipe_count; i++)
+-		if (dc->current_state->res_ctx.pipe_ctx[i].stream == stream) {
+-			struct timing_generator *tg =
+-				dc->current_state->res_ctx.pipe_ctx[i].stream_res.tg;
+-			tg->funcs->wait_for_state(tg, CRTC_STATE_VBLANK);
+-			break;
+-		}
+-}
+-
+ void get_clock_requirements_for_state(struct dc_state *state, struct AsicStateEx *info)
+ {
+ 	info->displayClock				= (unsigned int)state->bw_ctx.bw.dcn.clk.dispclk_khz;
+@@ -3287,7 +3274,7 @@ void dc_allow_idle_optimizations(struct dc *dc, bool allow)
+ 	if (dc->debug.disable_idle_power_optimizations)
+ 		return;
+ 
+-	if (dc->clk_mgr->funcs->is_smu_present)
++	if (dc->clk_mgr != NULL && dc->clk_mgr->funcs->is_smu_present)
+ 		if (!dc->clk_mgr->funcs->is_smu_present(dc->clk_mgr))
+ 			return;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 72bd7bc681a81..ae6830ff1cf7d 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -1784,6 +1784,8 @@ static void set_dp_mst_mode(struct dc_link *link, bool mst_enable)
+ 		link->type = dc_connection_single;
+ 		link->local_sink = link->remote_sinks[0];
+ 		link->local_sink->sink_signal = SIGNAL_TYPE_DISPLAY_PORT;
++		dc_sink_retain(link->local_sink);
++		dm_helpers_dp_mst_stop_top_mgr(link->ctx, link);
+ 	} else if (mst_enable == true &&
+ 			link->type == dc_connection_single &&
+ 			link->remote_sinks[0] != NULL) {
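The added dc_sink_retain() pins the sink that local_sink now aliases from
remote_sinks[0], presumably because each pointer is released on its own later;
sharing the object without taking a reference would drop the refcount to zero
twice. A sketch of the refcounting rule (illustrative, not from the patch):

	/* Every pointer that will be dc_sink_release()'d independently
	 * must own its own reference.
	 */
	link->local_sink = link->remote_sinks[0];
	dc_sink_retain(link->local_sink);	/* local_sink's own ref */
	/* ... later, on teardown ... */
	dc_sink_release(link->local_sink);	/* drops only that ref  */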
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 8cb937c046aa0..3b1068a090951 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -695,124 +695,23 @@ static void calculate_split_count_and_index(struct pipe_ctx *pipe_ctx, int *spli
+ 	}
+ }
+ 
+-static void calculate_viewport(struct pipe_ctx *pipe_ctx)
++/*
++ * This is a preliminary vp size calculation to allow us to check taps support.
++ * The result is completely overridden afterwards.
++ */
++static void calculate_viewport_size(struct pipe_ctx *pipe_ctx)
+ {
+-	const struct dc_plane_state *plane_state = pipe_ctx->plane_state;
+-	const struct dc_stream_state *stream = pipe_ctx->stream;
+ 	struct scaler_data *data = &pipe_ctx->plane_res.scl_data;
+-	struct rect surf_src = plane_state->src_rect;
+-	struct rect clip, dest;
+-	int vpc_div = (data->format == PIXEL_FORMAT_420BPP8
+-			|| data->format == PIXEL_FORMAT_420BPP10) ? 2 : 1;
+-	int split_count = 0;
+-	int split_idx = 0;
+-	bool orthogonal_rotation, flip_y_start, flip_x_start;
+-
+-	calculate_split_count_and_index(pipe_ctx, &split_count, &split_idx);
+ 
+-	if (stream->view_format == VIEW_3D_FORMAT_SIDE_BY_SIDE ||
+-		stream->view_format == VIEW_3D_FORMAT_TOP_AND_BOTTOM) {
+-		split_count = 0;
+-		split_idx = 0;
+-	}
+-
+-	/* The actual clip is an intersection between stream
+-	 * source and surface clip
+-	 */
+-	dest = plane_state->dst_rect;
+-	clip.x = stream->src.x > plane_state->clip_rect.x ?
+-			stream->src.x : plane_state->clip_rect.x;
+-
+-	clip.width = stream->src.x + stream->src.width <
+-			plane_state->clip_rect.x + plane_state->clip_rect.width ?
+-			stream->src.x + stream->src.width - clip.x :
+-			plane_state->clip_rect.x + plane_state->clip_rect.width - clip.x ;
+-
+-	clip.y = stream->src.y > plane_state->clip_rect.y ?
+-			stream->src.y : plane_state->clip_rect.y;
+-
+-	clip.height = stream->src.y + stream->src.height <
+-			plane_state->clip_rect.y + plane_state->clip_rect.height ?
+-			stream->src.y + stream->src.height - clip.y :
+-			plane_state->clip_rect.y + plane_state->clip_rect.height - clip.y ;
+-
+-	/*
+-	 * Need to calculate how scan origin is shifted in vp space
+-	 * to correctly rotate clip and dst
+-	 */
+-	get_vp_scan_direction(
+-			plane_state->rotation,
+-			plane_state->horizontal_mirror,
+-			&orthogonal_rotation,
+-			&flip_y_start,
+-			&flip_x_start);
+-
+-	if (orthogonal_rotation) {
+-		swap(clip.x, clip.y);
+-		swap(clip.width, clip.height);
+-		swap(dest.x, dest.y);
+-		swap(dest.width, dest.height);
+-	}
+-	if (flip_x_start) {
+-		clip.x = dest.x + dest.width - clip.x - clip.width;
+-		dest.x = 0;
+-	}
+-	if (flip_y_start) {
+-		clip.y = dest.y + dest.height - clip.y - clip.height;
+-		dest.y = 0;
+-	}
+-
+-	/* offset = surf_src.ofs + (clip.ofs - surface->dst_rect.ofs) * scl_ratio
+-	 * num_pixels = clip.num_pix * scl_ratio
+-	 */
+-	data->viewport.x = surf_src.x + (clip.x - dest.x) * surf_src.width / dest.width;
+-	data->viewport.width = clip.width * surf_src.width / dest.width;
+-
+-	data->viewport.y = surf_src.y + (clip.y - dest.y) * surf_src.height / dest.height;
+-	data->viewport.height = clip.height * surf_src.height / dest.height;
+-
+-	/* Handle split */
+-	if (split_count) {
+-		/* extra pixels in the division remainder need to go to pipes after
+-		 * the extra pixel index minus one(epimo) defined here as:
+-		 */
+-		int epimo = 0;
+-
+-		if (orthogonal_rotation) {
+-			if (flip_y_start)
+-				split_idx = split_count - split_idx;
+-
+-			epimo = split_count - data->viewport.height % (split_count + 1);
+-
+-			data->viewport.y += (data->viewport.height / (split_count + 1)) * split_idx;
+-			if (split_idx > epimo)
+-				data->viewport.y += split_idx - epimo - 1;
+-			data->viewport.height = data->viewport.height / (split_count + 1) + (split_idx > epimo ? 1 : 0);
+-		} else {
+-			if (flip_x_start)
+-				split_idx = split_count - split_idx;
+-
+-			epimo = split_count - data->viewport.width % (split_count + 1);
+-
+-			data->viewport.x += (data->viewport.width / (split_count + 1)) * split_idx;
+-			if (split_idx > epimo)
+-				data->viewport.x += split_idx - epimo - 1;
+-			data->viewport.width = data->viewport.width / (split_count + 1) + (split_idx > epimo ? 1 : 0);
+-		}
++	data->viewport.width = dc_fixpt_ceil(dc_fixpt_mul_int(data->ratios.horz, data->recout.width));
++	data->viewport.height = dc_fixpt_ceil(dc_fixpt_mul_int(data->ratios.vert, data->recout.height));
++	data->viewport_c.width = dc_fixpt_ceil(dc_fixpt_mul_int(data->ratios.horz_c, data->recout.width));
++	data->viewport_c.height = dc_fixpt_ceil(dc_fixpt_mul_int(data->ratios.vert_c, data->recout.height));
++	if (pipe_ctx->plane_state->rotation == ROTATION_ANGLE_90 ||
++			pipe_ctx->plane_state->rotation == ROTATION_ANGLE_270) {
++		swap(data->viewport.width, data->viewport.height);
++		swap(data->viewport_c.width, data->viewport_c.height);
+ 	}
+-
+-	/* Round down, compensate in init */
+-	data->viewport_c.x = data->viewport.x / vpc_div;
+-	data->viewport_c.y = data->viewport.y / vpc_div;
+-	data->inits.h_c = (data->viewport.x % vpc_div) != 0 ? dc_fixpt_half : dc_fixpt_zero;
+-	data->inits.v_c = (data->viewport.y % vpc_div) != 0 ? dc_fixpt_half : dc_fixpt_zero;
+-
+-	/* Round up, assume original video size always even dimensions */
+-	data->viewport_c.width = (data->viewport.width + vpc_div - 1) / vpc_div;
+-	data->viewport_c.height = (data->viewport.height + vpc_div - 1) / vpc_div;
+-
+-	data->viewport_unadjusted = data->viewport;
+-	data->viewport_c_unadjusted = data->viewport_c;
+ }
+ 
+ static void calculate_recout(struct pipe_ctx *pipe_ctx)
+@@ -821,26 +720,21 @@ static void calculate_recout(struct pipe_ctx *pipe_ctx)
+ 	const struct dc_stream_state *stream = pipe_ctx->stream;
+ 	struct scaler_data *data = &pipe_ctx->plane_res.scl_data;
+ 	struct rect surf_clip = plane_state->clip_rect;
+-	bool pri_split_tb = pipe_ctx->bottom_pipe &&
+-			pipe_ctx->bottom_pipe->plane_state == pipe_ctx->plane_state &&
+-			stream->view_format == VIEW_3D_FORMAT_TOP_AND_BOTTOM;
+-	bool sec_split_tb = pipe_ctx->top_pipe &&
+-			pipe_ctx->top_pipe->plane_state == pipe_ctx->plane_state &&
+-			stream->view_format == VIEW_3D_FORMAT_TOP_AND_BOTTOM;
+-	int split_count = 0;
+-	int split_idx = 0;
++	bool split_tb = stream->view_format == VIEW_3D_FORMAT_TOP_AND_BOTTOM;
++	int split_count, split_idx;
+ 
+ 	calculate_split_count_and_index(pipe_ctx, &split_count, &split_idx);
++	if (stream->view_format == VIEW_3D_FORMAT_SIDE_BY_SIDE)
++		split_idx = 0;
+ 
+ 	/*
+ 	 * Only the leftmost ODM pipe should be offset by a nonzero distance
+ 	 */
+-	if (!pipe_ctx->prev_odm_pipe) {
++	if (!pipe_ctx->prev_odm_pipe || split_idx == split_count) {
+ 		data->recout.x = stream->dst.x;
+ 		if (stream->src.x < surf_clip.x)
+ 			data->recout.x += (surf_clip.x - stream->src.x) * stream->dst.width
+ 						/ stream->src.width;
+-
+ 	} else
+ 		data->recout.x = 0;
+ 
+@@ -861,26 +755,36 @@ static void calculate_recout(struct pipe_ctx *pipe_ctx)
+ 	if (data->recout.height + data->recout.y > stream->dst.y + stream->dst.height)
+ 		data->recout.height = stream->dst.y + stream->dst.height - data->recout.y;
+ 
+-	/* Handle h & v split, handle rotation using viewport */
+-	if (sec_split_tb) {
+-		data->recout.y += data->recout.height / 2;
+-		/* Floor primary pipe, ceil 2ndary pipe */
+-		data->recout.height = (data->recout.height + 1) / 2;
+-	} else if (pri_split_tb)
++	/* Handle h & v split */
++	if (split_tb) {
++		ASSERT(data->recout.height % 2 == 0);
+ 		data->recout.height /= 2;
+-	else if (split_count) {
+-		/* extra pixels in the division remainder need to go to pipes after
+-		 * the extra pixel index minus one(epimo) defined here as:
+-		 */
+-		int epimo = split_count - data->recout.width % (split_count + 1);
+-
+-		/*no recout offset due to odm */
++	} else if (split_count) {
+ 		if (!pipe_ctx->next_odm_pipe && !pipe_ctx->prev_odm_pipe) {
++			/* extra pixels in the division remainder need to go to pipes after
++			 * the extra pixel index minus one(epimo) defined here as:
++			 */
++			int epimo = split_count - data->recout.width % (split_count + 1);
++
+ 			data->recout.x += (data->recout.width / (split_count + 1)) * split_idx;
+ 			if (split_idx > epimo)
+ 				data->recout.x += split_idx - epimo - 1;
++			ASSERT(stream->view_format != VIEW_3D_FORMAT_SIDE_BY_SIDE || data->recout.width % 2 == 0);
++			data->recout.width = data->recout.width / (split_count + 1) + (split_idx > epimo ? 1 : 0);
++		} else {
++			/* odm */
++			if (split_idx == split_count) {
++				/* rightmost pipe is the remainder recout */
++				data->recout.width -= data->h_active * split_count - data->recout.x;
++
++				/* In ODM combine cases with MPO we can get negative widths */
++				if (data->recout.width < 0)
++					data->recout.width = 0;
++
++				data->recout.x = 0;
++			} else
++				data->recout.width = data->h_active - data->recout.x;
+ 		}
+-		data->recout.width = data->recout.width / (split_count + 1) + (split_idx > epimo ? 1 : 0);
+ 	}
+ }
+ 
+@@ -934,9 +838,15 @@ static void calculate_scaling_ratios(struct pipe_ctx *pipe_ctx)
+ 			pipe_ctx->plane_res.scl_data.ratios.vert_c, 19);
+ }
+ 
+-static inline void adjust_vp_and_init_for_seamless_clip(
++
++/*
++ * We completely calculate vp offset, size and inits here based entirely on scaling
++ * ratios and recout for pixel perfect pipe combine.
++ */
++static void calculate_init_and_vp(
+ 		bool flip_scan_dir,
+-		int recout_skip,
++		int recout_offset_within_recout_full,
++		int recout_size,
+ 		int src_size,
+ 		int taps,
+ 		struct fixed31_32 ratio,
+@@ -944,91 +854,87 @@ static inline void adjust_vp_and_init_for_seamless_clip(
+ 		int *vp_offset,
+ 		int *vp_size)
+ {
+-	if (!flip_scan_dir) {
+-		/* Adjust for viewport end clip-off */
+-		if ((*vp_offset + *vp_size) < src_size) {
+-			int vp_clip = src_size - *vp_size - *vp_offset;
+-			int int_part = dc_fixpt_floor(dc_fixpt_sub(*init, ratio));
+-
+-			int_part = int_part > 0 ? int_part : 0;
+-			*vp_size += int_part < vp_clip ? int_part : vp_clip;
+-		}
+-
+-		/* Adjust for non-0 viewport offset */
+-		if (*vp_offset) {
+-			int int_part;
+-
+-			*init = dc_fixpt_add(*init, dc_fixpt_mul_int(ratio, recout_skip));
+-			int_part = dc_fixpt_floor(*init) - *vp_offset;
+-			if (int_part < taps) {
+-				int int_adj = *vp_offset >= (taps - int_part) ?
+-							(taps - int_part) : *vp_offset;
+-				*vp_offset -= int_adj;
+-				*vp_size += int_adj;
+-				int_part += int_adj;
+-			} else if (int_part > taps) {
+-				*vp_offset += int_part - taps;
+-				*vp_size -= int_part - taps;
+-				int_part = taps;
+-			}
+-			init->value &= 0xffffffff;
+-			*init = dc_fixpt_add_int(*init, int_part);
+-		}
+-	} else {
+-		/* Adjust for non-0 viewport offset */
+-		if (*vp_offset) {
+-			int int_part = dc_fixpt_floor(dc_fixpt_sub(*init, ratio));
+-
+-			int_part = int_part > 0 ? int_part : 0;
+-			*vp_size += int_part < *vp_offset ? int_part : *vp_offset;
+-			*vp_offset -= int_part < *vp_offset ? int_part : *vp_offset;
+-		}
++	struct fixed31_32 temp;
++	int int_part;
+ 
+-		/* Adjust for viewport end clip-off */
+-		if ((*vp_offset + *vp_size) < src_size) {
+-			int int_part;
+-			int end_offset = src_size - *vp_offset - *vp_size;
+-
+-			/*
+-			 * this is init if vp had no offset, keep in mind this is from the
+-			 * right side of vp due to scan direction
+-			 */
+-			*init = dc_fixpt_add(*init, dc_fixpt_mul_int(ratio, recout_skip));
+-			/*
+-			 * this is the difference between first pixel of viewport available to read
+-			 * and init position, takning into account scan direction
+-			 */
+-			int_part = dc_fixpt_floor(*init) - end_offset;
+-			if (int_part < taps) {
+-				int int_adj = end_offset >= (taps - int_part) ?
+-							(taps - int_part) : end_offset;
+-				*vp_size += int_adj;
+-				int_part += int_adj;
+-			} else if (int_part > taps) {
+-				*vp_size += int_part - taps;
+-				int_part = taps;
+-			}
+-			init->value &= 0xffffffff;
+-			*init = dc_fixpt_add_int(*init, int_part);
+-		}
++	/*
++	 * First of the taps starts sampling pixel number <init_int_part> corresponding to recout
++	 * pixel 1. Next recout pixel samples int part of <init + scaling ratio> and so on.
++	 * All following calculations are based on this logic.
++	 *
++	 * Init calculated according to formula:
++	 * 	init = (scaling_ratio + number_of_taps + 1) / 2
++	 * 	init_bot = init + scaling_ratio
++	 * 	to get pixel perfect combine add the fraction from calculating vp offset
++	 */
++	temp = dc_fixpt_mul_int(ratio, recout_offset_within_recout_full);
++	*vp_offset = dc_fixpt_floor(temp);
++	temp.value &= 0xffffffff;
++	*init = dc_fixpt_truncate(dc_fixpt_add(dc_fixpt_div_int(
++			dc_fixpt_add_int(ratio, taps + 1), 2), temp), 19);
++	/*
++	 * If the viewport has a non-zero offset and there are more taps than init covers,
++	 * we should decrease the offset and increase init so we are never sampling
++	 * outside of the viewport.
++	 */
++	int_part = dc_fixpt_floor(*init);
++	if (int_part < taps) {
++		int_part = taps - int_part;
++		if (int_part > *vp_offset)
++			int_part = *vp_offset;
++		*vp_offset -= int_part;
++		*init = dc_fixpt_add_int(*init, int_part);
+ 	}
++	/*
++	 * If taps are sampling outside of the viewport at the end of recout and there are
++	 * more pixels available in the surface, we should increase the viewport size;
++	 * regardless, set vp to only what is used.
++	 */
++	temp = dc_fixpt_add(*init, dc_fixpt_mul_int(ratio, recout_size - 1));
++	*vp_size = dc_fixpt_floor(temp);
++	if (*vp_size + *vp_offset > src_size)
++		*vp_size = src_size - *vp_offset;
++
++	/* We did all the math assuming we scan in the same direction as the display does;
++	 * however, mirror/rotation changes how the vp scans vs how it is offset. If the
++	 * scan direction is flipped we simply need to calculate the offset from the other
++	 * side of the plane. Note that outside of the viewport all scaling hardware works
++	 * in recout space.
++	 */
++	if (flip_scan_dir)
++		*vp_offset = src_size - *vp_offset - *vp_size;
+ }
+ 
+-static void calculate_inits_and_adj_vp(struct pipe_ctx *pipe_ctx)
++static void calculate_inits_and_viewports(struct pipe_ctx *pipe_ctx)
+ {
+ 	const struct dc_plane_state *plane_state = pipe_ctx->plane_state;
+ 	const struct dc_stream_state *stream = pipe_ctx->stream;
+-	struct pipe_ctx *odm_pipe = pipe_ctx;
+ 	struct scaler_data *data = &pipe_ctx->plane_res.scl_data;
+-	struct rect src = pipe_ctx->plane_state->src_rect;
+-	int recout_skip_h, recout_skip_v, surf_size_h, surf_size_v;
++	struct rect src = plane_state->src_rect;
+ 	int vpc_div = (data->format == PIXEL_FORMAT_420BPP8
+-			|| data->format == PIXEL_FORMAT_420BPP10) ? 2 : 1;
++				|| data->format == PIXEL_FORMAT_420BPP10) ? 2 : 1;
++	int split_count, split_idx, ro_lb, ro_tb, recout_full_x, recout_full_y;
+ 	bool orthogonal_rotation, flip_vert_scan_dir, flip_horz_scan_dir;
+-	int odm_idx = 0;
+ 
++	calculate_split_count_and_index(pipe_ctx, &split_count, &split_idx);
+ 	/*
+-	 * Need to calculate the scan direction for viewport to make adjustments
++	 * recout full is what the recout would have been if we didn't clip
++	 * the source plane at all. We only care about left(ro_lb) and top(ro_tb)
++	 * offsets of recout within recout full because those are the directions
++	 * we scan from and therefore the only ones that affect inits.
++	 */
++	recout_full_x = stream->dst.x + (plane_state->dst_rect.x - stream->src.x)
++			* stream->dst.width / stream->src.width;
++	recout_full_y = stream->dst.y + (plane_state->dst_rect.y - stream->src.y)
++			* stream->dst.height / stream->src.height;
++	if (pipe_ctx->prev_odm_pipe && split_idx)
++		ro_lb = data->h_active * split_idx - recout_full_x;
++	else
++		ro_lb = data->recout.x - recout_full_x;
++	ro_tb = data->recout.y - recout_full_y;
++	ASSERT(ro_lb >= 0 && ro_tb >= 0);
++
++	/*
++	 * Work in recout rotation since that requires fewer transformations
+ 	 */
+ 	get_vp_scan_direction(
+ 			plane_state->rotation,
+@@ -1037,145 +943,62 @@ static void calculate_inits_and_adj_vp(struct pipe_ctx *pipe_ctx)
+ 			&flip_vert_scan_dir,
+ 			&flip_horz_scan_dir);
+ 
+-	/* Calculate src rect rotation adjusted to recout space */
+-	surf_size_h = src.x + src.width;
+-	surf_size_v = src.y + src.height;
+-	if (flip_horz_scan_dir)
+-		src.x = 0;
+-	if (flip_vert_scan_dir)
+-		src.y = 0;
+ 	if (orthogonal_rotation) {
+-		swap(src.x, src.y);
+ 		swap(src.width, src.height);
++		swap(flip_vert_scan_dir, flip_horz_scan_dir);
+ 	}
+ 
+-	/*modified recout_skip_h calculation due to odm having no recout offset*/
+-	while (odm_pipe->prev_odm_pipe) {
+-		odm_idx++;
+-		odm_pipe = odm_pipe->prev_odm_pipe;
+-	}
+-	/*odm_pipe is the leftmost pipe in the ODM group*/
+-	recout_skip_h = odm_idx * data->recout.width;
+-
+-	/* Recout matching initial vp offset = recout_offset - (stream dst offset +
+-	 *			((surf dst offset - stream src offset) * 1/ stream scaling ratio)
+-	 *			- (surf surf_src offset * 1/ full scl ratio))
+-	 */
+-	recout_skip_h += odm_pipe->plane_res.scl_data.recout.x
+-				- (stream->dst.x + (plane_state->dst_rect.x - stream->src.x)
+-					* stream->dst.width / stream->src.width -
+-					src.x * plane_state->dst_rect.width / src.width
+-					* stream->dst.width / stream->src.width);
+-
+-
+-	recout_skip_v = data->recout.y - (stream->dst.y + (plane_state->dst_rect.y - stream->src.y)
+-					* stream->dst.height / stream->src.height -
+-					src.y * plane_state->dst_rect.height / src.height
+-					* stream->dst.height / stream->src.height);
+-	if (orthogonal_rotation)
+-		swap(recout_skip_h, recout_skip_v);
+-	/*
+-	 * Init calculated according to formula:
+-	 * 	init = (scaling_ratio + number_of_taps + 1) / 2
+-	 * 	init_bot = init + scaling_ratio
+-	 * 	init_c = init + truncated_vp_c_offset(from calculate viewport)
+-	 */
+-	data->inits.h = dc_fixpt_truncate(dc_fixpt_div_int(
+-			dc_fixpt_add_int(data->ratios.horz, data->taps.h_taps + 1), 2), 19);
+-
+-	data->inits.h_c = dc_fixpt_truncate(dc_fixpt_add(data->inits.h_c, dc_fixpt_div_int(
+-			dc_fixpt_add_int(data->ratios.horz_c, data->taps.h_taps_c + 1), 2)), 19);
+-
+-	data->inits.v = dc_fixpt_truncate(dc_fixpt_div_int(
+-			dc_fixpt_add_int(data->ratios.vert, data->taps.v_taps + 1), 2), 19);
+-
+-	data->inits.v_c = dc_fixpt_truncate(dc_fixpt_add(data->inits.v_c, dc_fixpt_div_int(
+-			dc_fixpt_add_int(data->ratios.vert_c, data->taps.v_taps_c + 1), 2)), 19);
+-
+-	/*
+-	 * Taps, inits and scaling ratios are in recout space need to rotate
+-	 * to viewport rotation before adjustment
+-	 */
+-	adjust_vp_and_init_for_seamless_clip(
++	calculate_init_and_vp(
+ 			flip_horz_scan_dir,
+-			recout_skip_h,
+-			surf_size_h,
+-			orthogonal_rotation ? data->taps.v_taps : data->taps.h_taps,
+-			orthogonal_rotation ? data->ratios.vert : data->ratios.horz,
+-			orthogonal_rotation ? &data->inits.v : &data->inits.h,
++			ro_lb,
++			data->recout.width,
++			src.width,
++			data->taps.h_taps,
++			data->ratios.horz,
++			&data->inits.h,
+ 			&data->viewport.x,
+ 			&data->viewport.width);
+-	adjust_vp_and_init_for_seamless_clip(
++	calculate_init_and_vp(
+ 			flip_horz_scan_dir,
+-			recout_skip_h,
+-			surf_size_h / vpc_div,
+-			orthogonal_rotation ? data->taps.v_taps_c : data->taps.h_taps_c,
+-			orthogonal_rotation ? data->ratios.vert_c : data->ratios.horz_c,
+-			orthogonal_rotation ? &data->inits.v_c : &data->inits.h_c,
++			ro_lb,
++			data->recout.width,
++			src.width / vpc_div,
++			data->taps.h_taps_c,
++			data->ratios.horz_c,
++			&data->inits.h_c,
+ 			&data->viewport_c.x,
+ 			&data->viewport_c.width);
+-	adjust_vp_and_init_for_seamless_clip(
++	calculate_init_and_vp(
+ 			flip_vert_scan_dir,
+-			recout_skip_v,
+-			surf_size_v,
+-			orthogonal_rotation ? data->taps.h_taps : data->taps.v_taps,
+-			orthogonal_rotation ? data->ratios.horz : data->ratios.vert,
+-			orthogonal_rotation ? &data->inits.h : &data->inits.v,
++			ro_tb,
++			data->recout.height,
++			src.height,
++			data->taps.v_taps,
++			data->ratios.vert,
++			&data->inits.v,
+ 			&data->viewport.y,
+ 			&data->viewport.height);
+-	adjust_vp_and_init_for_seamless_clip(
++	calculate_init_and_vp(
+ 			flip_vert_scan_dir,
+-			recout_skip_v,
+-			surf_size_v / vpc_div,
+-			orthogonal_rotation ? data->taps.h_taps_c : data->taps.v_taps_c,
+-			orthogonal_rotation ? data->ratios.horz_c : data->ratios.vert_c,
+-			orthogonal_rotation ? &data->inits.h_c : &data->inits.v_c,
++			ro_tb,
++			data->recout.height,
++			src.height / vpc_div,
++			data->taps.v_taps_c,
++			data->ratios.vert_c,
++			&data->inits.v_c,
+ 			&data->viewport_c.y,
+ 			&data->viewport_c.height);
+-
+-	/* Interlaced inits based on final vert inits */
+-	data->inits.v_bot = dc_fixpt_add(data->inits.v, data->ratios.vert);
+-	data->inits.v_c_bot = dc_fixpt_add(data->inits.v_c, data->ratios.vert_c);
+-
+-}
+-
+-/*
+- * When handling 270 rotation in mixed SLS mode, we have
+- * stream->timing.h_border_left that is non zero.  If we are doing
+- * pipe-splitting, this h_border_left value gets added to recout.x and when it
+- * calls calculate_inits_and_adj_vp() and
+- * adjust_vp_and_init_for_seamless_clip(), it can cause viewport.height for a
+- * pipe to be incorrect.
+- *
+- * To fix this, instead of using stream->timing.h_border_left, we can use
+- * stream->dst.x to represent the border instead.  So we will set h_border_left
+- * to 0 and shift the appropriate amount in stream->dst.x.  We will then
+- * perform all calculations in resource_build_scaling_params() based on this
+- * and then restore the h_border_left and stream->dst.x to their original
+- * values.
+- *
+- * shift_border_left_to_dst() will shift the amount of h_border_left to
+- * stream->dst.x and set h_border_left to 0.  restore_border_left_from_dst()
+- * will restore h_border_left and stream->dst.x back to their original values
+- * We also need to make sure pipe_ctx->plane_res.scl_data.h_active uses the
+- * original h_border_left value in its calculation.
+- */
+-static int shift_border_left_to_dst(struct pipe_ctx *pipe_ctx)
+-{
+-	int store_h_border_left = pipe_ctx->stream->timing.h_border_left;
+-
+-	if (store_h_border_left) {
+-		pipe_ctx->stream->timing.h_border_left = 0;
+-		pipe_ctx->stream->dst.x += store_h_border_left;
++	if (orthogonal_rotation) {
++		swap(data->viewport.x, data->viewport.y);
++		swap(data->viewport.width, data->viewport.height);
++		swap(data->viewport_c.x, data->viewport_c.y);
++		swap(data->viewport_c.width, data->viewport_c.height);
+ 	}
+-	return store_h_border_left;
+-}
+-
+-static void restore_border_left_from_dst(struct pipe_ctx *pipe_ctx,
+-					 int store_h_border_left)
+-{
+-	pipe_ctx->stream->dst.x -= store_h_border_left;
+-	pipe_ctx->stream->timing.h_border_left = store_h_border_left;
++	data->viewport.x += src.x;
++	data->viewport.y += src.y;
++	ASSERT(src.x % vpc_div == 0 && src.y % vpc_div == 0);
++	data->viewport_c.x += src.x / vpc_div;
++	data->viewport_c.y += src.y / vpc_div;
+ }
+ 
+ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+@@ -1183,48 +1006,42 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+ 	const struct dc_plane_state *plane_state = pipe_ctx->plane_state;
+ 	struct dc_crtc_timing *timing = &pipe_ctx->stream->timing;
+ 	bool res = false;
+-	int store_h_border_left = shift_border_left_to_dst(pipe_ctx);
+ 	DC_LOGGER_INIT(pipe_ctx->stream->ctx->logger);
+-	/* Important: scaling ratio calculation requires pixel format,
+-	 * lb depth calculation requires recout and taps require scaling ratios.
+-	 * Inits require viewport, taps, ratios and recout of split pipe
+-	 */
++
+ 	pipe_ctx->plane_res.scl_data.format = convert_pixel_format_to_dalsurface(
+ 			pipe_ctx->plane_state->format);
+ 
+-	calculate_scaling_ratios(pipe_ctx);
+-
+-	calculate_viewport(pipe_ctx);
++	/* Timing borders are part of vactive that we are also supposed to skip in addition
++	 * to any stream dst offset. Since dm logic assumes dst is in addressable
++	 * space we need to add the left and top borders to dst offsets temporarily.
++	 * TODO: fix in DM, stream dst is supposed to be in vactive
++	 */
++	pipe_ctx->stream->dst.x += timing->h_border_left;
++	pipe_ctx->stream->dst.y += timing->v_border_top;
+ 
+-	if (pipe_ctx->plane_res.scl_data.viewport.height < MIN_VIEWPORT_SIZE ||
+-		pipe_ctx->plane_res.scl_data.viewport.width < MIN_VIEWPORT_SIZE) {
+-		if (store_h_border_left) {
+-			restore_border_left_from_dst(pipe_ctx,
+-				store_h_border_left);
+-		}
+-		return false;
+-	}
++	/* Calculate H and V active size */
++	pipe_ctx->plane_res.scl_data.h_active = timing->h_addressable +
++			timing->h_border_left + timing->h_border_right;
++	pipe_ctx->plane_res.scl_data.v_active = timing->v_addressable +
++		timing->v_border_top + timing->v_border_bottom;
++	if (pipe_ctx->next_odm_pipe || pipe_ctx->prev_odm_pipe)
++		pipe_ctx->plane_res.scl_data.h_active /= get_num_odm_splits(pipe_ctx) + 1;
+ 
++	/* depends on h_active */
+ 	calculate_recout(pipe_ctx);
++	/* depends on pixel format */
++	calculate_scaling_ratios(pipe_ctx);
++	/* depends on scaling ratios and recout, does not calculate offset yet */
++	calculate_viewport_size(pipe_ctx);
+ 
+-	/**
++	/*
++	 * LB calculations depend on vp size, h/v_active and scaling ratios
+ 	 * Setting line buffer pixel depth to 24bpp yields banding
+ 	 * on certain displays, such as the Sharp 4k
+ 	 */
+ 	pipe_ctx->plane_res.scl_data.lb_params.depth = LB_PIXEL_DEPTH_30BPP;
+ 	pipe_ctx->plane_res.scl_data.lb_params.alpha_en = plane_state->per_pixel_alpha;
+ 
+-	pipe_ctx->plane_res.scl_data.recout.x += timing->h_border_left;
+-	pipe_ctx->plane_res.scl_data.recout.y += timing->v_border_top;
+-
+-	pipe_ctx->plane_res.scl_data.h_active = timing->h_addressable +
+-		store_h_border_left + timing->h_border_right;
+-	pipe_ctx->plane_res.scl_data.v_active = timing->v_addressable +
+-		timing->v_border_top + timing->v_border_bottom;
+-	if (pipe_ctx->next_odm_pipe || pipe_ctx->prev_odm_pipe)
+-		pipe_ctx->plane_res.scl_data.h_active /= get_num_odm_splits(pipe_ctx) + 1;
+-
+-	/* Taps calculations */
+ 	if (pipe_ctx->plane_res.xfm != NULL)
+ 		res = pipe_ctx->plane_res.xfm->funcs->transform_get_optimal_number_of_taps(
+ 				pipe_ctx->plane_res.xfm, &pipe_ctx->plane_res.scl_data, &plane_state->scaling_quality);
+@@ -1251,9 +1068,31 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+ 					&plane_state->scaling_quality);
+ 	}
+ 
++	/*
++	 * Depends on recout, scaling ratios, h_active and taps
++	 * May need to re-check lb size after this in some obscure scenario
++	 */
+ 	if (res)
+-		/* May need to re-check lb size after this in some obscure scenario */
+-		calculate_inits_and_adj_vp(pipe_ctx);
++		calculate_inits_and_viewports(pipe_ctx);
++
++	/*
++	 * Handle side by side and top bottom 3d recout offsets after vp calculation
++	 * since 3d is special and needs to calculate vp as if there is no recout offset.
++	 * This may break with rotation; good thing we aren't mixing hw rotation and 3d.
++	 */
++	if (pipe_ctx->top_pipe && pipe_ctx->top_pipe->plane_state == plane_state) {
++		ASSERT(plane_state->rotation == ROTATION_ANGLE_0 ||
++			(pipe_ctx->stream->view_format != VIEW_3D_FORMAT_TOP_AND_BOTTOM &&
++				pipe_ctx->stream->view_format != VIEW_3D_FORMAT_SIDE_BY_SIDE));
++		if (pipe_ctx->stream->view_format == VIEW_3D_FORMAT_TOP_AND_BOTTOM)
++			pipe_ctx->plane_res.scl_data.recout.y += pipe_ctx->plane_res.scl_data.recout.height;
++		else if (pipe_ctx->stream->view_format == VIEW_3D_FORMAT_SIDE_BY_SIDE)
++			pipe_ctx->plane_res.scl_data.recout.x += pipe_ctx->plane_res.scl_data.recout.width;
++	}
++
++	if (pipe_ctx->plane_res.scl_data.viewport.height < MIN_VIEWPORT_SIZE ||
++			pipe_ctx->plane_res.scl_data.viewport.width < MIN_VIEWPORT_SIZE)
++		res = false;
+ 
+ 	DC_LOG_SCALER("%s pipe %d:\nViewport: height:%d width:%d x:%d y:%d  Recout: height:%d width:%d x:%d y:%d  HACTIVE:%d VACTIVE:%d\n"
+ 			"src_rect: height:%d width:%d x:%d y:%d  dst_rect: height:%d width:%d x:%d y:%d  clip_rect: height:%d width:%d x:%d y:%d\n",
+@@ -1282,8 +1121,8 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+ 			plane_state->clip_rect.x,
+ 			plane_state->clip_rect.y);
+ 
+-	if (store_h_border_left)
+-		restore_border_left_from_dst(pipe_ctx, store_h_border_left);
++	pipe_ctx->stream->dst.x -= timing->h_border_left;
++	pipe_ctx->stream->dst.y -= timing->v_border_top;
+ 
+ 	return res;
+ }
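For reference, the rewritten calculate_init_and_vp() above reduces to a few
lines of arithmetic. A floating-point restatement (illustrative only; the
driver itself uses dc_fixpt fixed31_32 arithmetic, and the parameter names here
mirror the patch):

	#include <math.h>

	static void init_and_vp(double ratio, int recout_offset, int recout_size,
				int src_size, int taps,
				double *init, int *vp_offset, int *vp_size)
	{
		double t = ratio * recout_offset;

		*vp_offset = (int)floor(t);
		/* init = (ratio + taps + 1) / 2, plus the fraction left over
		 * from the vp offset so adjacent pipes combine pixel-perfectly.
		 */
		*init = (ratio + taps + 1) / 2 + (t - floor(t));

		/* Never sample left of the viewport: trade offset for init. */
		if ((int)floor(*init) < taps) {
			int adj = taps - (int)floor(*init);

			if (adj > *vp_offset)
				adj = *vp_offset;
			*vp_offset -= adj;
			*init += adj;
		}

		/* The last recout pixel samples init + ratio*(recout_size - 1);
		 * clamp the viewport to what the surface actually has.
		 */
		*vp_size = (int)floor(*init + ratio * (recout_size - 1));
		if (*vp_size + *vp_offset > src_size)
			*vp_size = src_size - *vp_offset;
	}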
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 100d434f7a038..65f801b506863 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -719,7 +719,6 @@ void dc_init_callbacks(struct dc *dc,
+ void dc_deinit_callbacks(struct dc *dc);
+ void dc_destroy(struct dc **dc);
+ 
+-void dc_wait_for_vblank(struct dc *dc, struct dc_stream_state *stream);
+ /*******************************************************************************
+  * Surface Interfaces
+  ******************************************************************************/
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h b/drivers/gpu/drm/amd/display/dc/dc_types.h
+index 432754eaf10b8..a6f21f9de6e4e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_types.h
+@@ -271,11 +271,6 @@ struct dc_edid_caps {
+ 	struct dc_panel_patch panel_patch;
+ };
+ 
+-struct view {
+-	uint32_t width;
+-	uint32_t height;
+-};
+-
+ struct dc_mode_flags {
+ 	/* note: part of refresh rate flag*/
+ 	uint32_t INTERLACE :1;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
+index efa86d5c68470..a33f522a2648a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
+@@ -496,10 +496,13 @@ static enum lb_memory_config dpp1_dscl_find_lb_memory_config(struct dcn10_dpp *d
+ 	int vtaps_c = scl_data->taps.v_taps_c;
+ 	int ceil_vratio = dc_fixpt_ceil(scl_data->ratios.vert);
+ 	int ceil_vratio_c = dc_fixpt_ceil(scl_data->ratios.vert_c);
+-	enum lb_memory_config mem_cfg = LB_MEMORY_CONFIG_0;
+ 
+-	if (dpp->base.ctx->dc->debug.use_max_lb)
+-		return mem_cfg;
++	if (dpp->base.ctx->dc->debug.use_max_lb) {
++		if (scl_data->format == PIXEL_FORMAT_420BPP8
++				|| scl_data->format == PIXEL_FORMAT_420BPP10)
++			return LB_MEMORY_CONFIG_3;
++		return LB_MEMORY_CONFIG_0;
++	}
+ 
+ 	dpp->base.caps->dscl_calc_lb_num_partitions(
+ 			scl_data, LB_MEMORY_CONFIG_1, &num_part_y, &num_part_c);
+@@ -628,8 +631,10 @@ static void dpp1_dscl_set_manual_ratio_init(
+ 		SCL_V_INIT_INT, init_int);
+ 
+ 	if (REG(SCL_VERT_FILTER_INIT_BOT)) {
+-		init_frac = dc_fixpt_u0d19(data->inits.v_bot) << 5;
+-		init_int = dc_fixpt_floor(data->inits.v_bot);
++		struct fixed31_32 bot = dc_fixpt_add(data->inits.v, data->ratios.vert);
++
++		init_frac = dc_fixpt_u0d19(bot) << 5;
++		init_int = dc_fixpt_floor(bot);
+ 		REG_SET_2(SCL_VERT_FILTER_INIT_BOT, 0,
+ 			SCL_V_INIT_FRAC_BOT, init_frac,
+ 			SCL_V_INIT_INT_BOT, init_int);
+@@ -642,8 +647,10 @@ static void dpp1_dscl_set_manual_ratio_init(
+ 		SCL_V_INIT_INT_C, init_int);
+ 
+ 	if (REG(SCL_VERT_FILTER_INIT_BOT_C)) {
+-		init_frac = dc_fixpt_u0d19(data->inits.v_c_bot) << 5;
+-		init_int = dc_fixpt_floor(data->inits.v_c_bot);
++		struct fixed31_32 bot = dc_fixpt_add(data->inits.v_c, data->ratios.vert_c);
++
++		init_frac = dc_fixpt_u0d19(bot) << 5;
++		init_int = dc_fixpt_floor(bot);
+ 		REG_SET_2(SCL_VERT_FILTER_INIT_BOT_C, 0,
+ 			SCL_V_INIT_FRAC_BOT_C, init_frac,
+ 			SCL_V_INIT_INT_BOT_C, init_int);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index 6a10daec15ccd..793554e61c520 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -243,7 +243,7 @@ void dcn20_dccg_init(struct dce_hwseq *hws)
+ 	REG_WRITE(MILLISECOND_TIME_BASE_DIV, 0x1186a0);
+ 
+ 	/* This value is dependent on the hardware pipeline delay so set once per SOC */
+-	REG_WRITE(DISPCLK_FREQ_CHANGE_CNTL, 0x801003c);
++	REG_WRITE(DISPCLK_FREQ_CHANGE_CNTL, 0xe01003c);
+ }
+ 
+ void dcn20_disable_vga(
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 8357aa3c41d5a..d7d70b9bb3874 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -2289,12 +2289,14 @@ int dcn20_populate_dml_pipes_from_context(
+ 
+ 			pipes[pipe_cnt].pipe.src.source_scan = pln->rotation == ROTATION_ANGLE_90
+ 					|| pln->rotation == ROTATION_ANGLE_270 ? dm_vert : dm_horz;
+-			pipes[pipe_cnt].pipe.src.viewport_y_y = scl->viewport_unadjusted.y;
+-			pipes[pipe_cnt].pipe.src.viewport_y_c = scl->viewport_c_unadjusted.y;
+-			pipes[pipe_cnt].pipe.src.viewport_width = scl->viewport_unadjusted.width;
+-			pipes[pipe_cnt].pipe.src.viewport_width_c = scl->viewport_c_unadjusted.width;
+-			pipes[pipe_cnt].pipe.src.viewport_height = scl->viewport_unadjusted.height;
+-			pipes[pipe_cnt].pipe.src.viewport_height_c = scl->viewport_c_unadjusted.height;
++			pipes[pipe_cnt].pipe.src.viewport_y_y = scl->viewport.y;
++			pipes[pipe_cnt].pipe.src.viewport_y_c = scl->viewport_c.y;
++			pipes[pipe_cnt].pipe.src.viewport_width = scl->viewport.width;
++			pipes[pipe_cnt].pipe.src.viewport_width_c = scl->viewport_c.width;
++			pipes[pipe_cnt].pipe.src.viewport_height = scl->viewport.height;
++			pipes[pipe_cnt].pipe.src.viewport_height_c = scl->viewport_c.height;
++			pipes[pipe_cnt].pipe.src.viewport_width_max = pln->src_rect.width;
++			pipes[pipe_cnt].pipe.src.viewport_height_max = pln->src_rect.height;
+ 			pipes[pipe_cnt].pipe.src.surface_width_y = pln->plane_size.surface_size.width;
+ 			pipes[pipe_cnt].pipe.src.surface_height_y = pln->plane_size.surface_size.height;
+ 			pipes[pipe_cnt].pipe.src.surface_width_c = pln->plane_size.chroma_size.width;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+index cb3f70a71b512..db6bb7ea53169 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+@@ -64,6 +64,7 @@ typedef struct {
+ #define BPP_INVALID 0
+ #define BPP_BLENDED_PIPE 0xffffffff
+ #define DCN30_MAX_DSC_IMAGE_WIDTH 5184
++#define DCN30_MAX_FMT_420_BUFFER_WIDTH 4096
+ 
+ static void DisplayPipeConfiguration(struct display_mode_lib *mode_lib);
+ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation(
+@@ -2052,7 +2053,7 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
+ 			v->DISPCLKWithoutRamping,
+ 			v->DISPCLKDPPCLKVCOSpeed);
+ 	v->MaxDispclkRoundedToDFSGranularity = RoundToDFSGranularityDown(
+-			v->soc.clock_limits[mode_lib->soc.num_states].dispclk_mhz,
++			v->soc.clock_limits[mode_lib->soc.num_states - 1].dispclk_mhz,
+ 			v->DISPCLKDPPCLKVCOSpeed);
+ 	if (v->DISPCLKWithoutRampingRoundedToDFSGranularity
+ 			> v->MaxDispclkRoundedToDFSGranularity) {
+@@ -3957,20 +3958,20 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 			for (k = 0; k <= v->NumberOfActivePlanes - 1; k++) {
+ 				v->PlaneRequiredDISPCLKWithoutODMCombine = v->PixelClock[k] * (1.0 + v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0)
+ 						* (1.0 + v->DISPCLKRampingMargin / 100.0);
+-				if ((v->PlaneRequiredDISPCLKWithoutODMCombine >= v->MaxDispclk[i] && v->MaxDispclk[i] == v->MaxDispclk[mode_lib->soc.num_states]
+-						&& v->MaxDppclk[i] == v->MaxDppclk[mode_lib->soc.num_states])) {
++				if ((v->PlaneRequiredDISPCLKWithoutODMCombine >= v->MaxDispclk[i] && v->MaxDispclk[i] == v->MaxDispclk[mode_lib->soc.num_states - 1]
++						&& v->MaxDppclk[i] == v->MaxDppclk[mode_lib->soc.num_states - 1])) {
+ 					v->PlaneRequiredDISPCLKWithoutODMCombine = v->PixelClock[k] * (1 + v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
+ 				}
+ 				v->PlaneRequiredDISPCLKWithODMCombine2To1 = v->PixelClock[k] / 2 * (1 + v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0)
+ 						* (1 + v->DISPCLKRampingMargin / 100.0);
+-				if ((v->PlaneRequiredDISPCLKWithODMCombine2To1 >= v->MaxDispclk[i] && v->MaxDispclk[i] == v->MaxDispclk[mode_lib->soc.num_states]
+-						&& v->MaxDppclk[i] == v->MaxDppclk[mode_lib->soc.num_states])) {
++				if ((v->PlaneRequiredDISPCLKWithODMCombine2To1 >= v->MaxDispclk[i] && v->MaxDispclk[i] == v->MaxDispclk[mode_lib->soc.num_states - 1]
++						&& v->MaxDppclk[i] == v->MaxDppclk[mode_lib->soc.num_states - 1])) {
+ 					v->PlaneRequiredDISPCLKWithODMCombine2To1 = v->PixelClock[k] / 2 * (1 + v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
+ 				}
+ 				v->PlaneRequiredDISPCLKWithODMCombine4To1 = v->PixelClock[k] / 4 * (1 + v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0)
+ 						* (1 + v->DISPCLKRampingMargin / 100.0);
+-				if ((v->PlaneRequiredDISPCLKWithODMCombine4To1 >= v->MaxDispclk[i] && v->MaxDispclk[i] == v->MaxDispclk[mode_lib->soc.num_states]
+-						&& v->MaxDppclk[i] == v->MaxDppclk[mode_lib->soc.num_states])) {
++				if ((v->PlaneRequiredDISPCLKWithODMCombine4To1 >= v->MaxDispclk[i] && v->MaxDispclk[i] == v->MaxDispclk[mode_lib->soc.num_states - 1]
++						&& v->MaxDppclk[i] == v->MaxDppclk[mode_lib->soc.num_states - 1])) {
+ 					v->PlaneRequiredDISPCLKWithODMCombine4To1 = v->PixelClock[k] / 4 * (1 + v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
+ 				}
+ 
+@@ -3987,19 +3988,30 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 				} else if (v->PlaneRequiredDISPCLKWithoutODMCombine > v->MaxDispclkRoundedDownToDFSGranularity) {
+ 					v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ 					v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine2To1;
+-				} else if (v->DSCEnabled[k] && (v->HActive[k] > DCN30_MAX_DSC_IMAGE_WIDTH)) {
+-					v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+-					v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine2To1;
+ 				} else {
+ 					v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
+ 					v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithoutODMCombine;
+-					/*420 format workaround*/
+-					if (v->HActive[k] > 4096 && v->OutputFormat[k] == dm_420) {
++				}
++				if (v->DSCEnabled[k] && v->HActive[k] > DCN30_MAX_DSC_IMAGE_WIDTH
++						&& v->ODMCombineEnablePerState[i][k] != dm_odm_combine_mode_4to1) {
++					if (v->HActive[k] / 2 > DCN30_MAX_DSC_IMAGE_WIDTH) {
++						v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_4to1;
++						v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine4To1;
++					} else {
++						v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
++						v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine2To1;
++					}
++				}
++				if (v->OutputFormat[k] == dm_420 && v->HActive[k] > DCN30_MAX_FMT_420_BUFFER_WIDTH
++						&& v->ODMCombineEnablePerState[i][k] != dm_odm_combine_mode_4to1) {
++					if (v->HActive[k] / 2 > DCN30_MAX_FMT_420_BUFFER_WIDTH) {
++						v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_4to1;
++						v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine4To1;
++					} else {
+ 						v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
+ 						v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine2To1;
+ 					}
+ 				}
+-
+ 				if (v->ODMCombineEnablePerState[i][k] == dm_odm_combine_mode_4to1) {
+ 					v->MPCCombine[i][j][k] = false;
+ 					v->NoOfDPP[i][j][k] = 4;
+@@ -4281,42 +4293,8 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 		}
+ 	}
+ 
+-	for (i = 0; i < v->soc.num_states; i++) {
+-		v->DSCCLKRequiredMoreThanSupported[i] = false;
+-		for (k = 0; k <= v->NumberOfActivePlanes - 1; k++) {
+-			if (v->BlendingAndTiming[k] == k) {
+-				if (v->Output[k] == dm_dp || v->Output[k] == dm_edp) {
+-					if (v->OutputFormat[k] == dm_420) {
+-						v->DSCFormatFactor = 2;
+-					} else if (v->OutputFormat[k] == dm_444) {
+-						v->DSCFormatFactor = 1;
+-					} else if (v->OutputFormat[k] == dm_n422) {
+-						v->DSCFormatFactor = 2;
+-					} else {
+-						v->DSCFormatFactor = 1;
+-					}
+-					if (v->RequiresDSC[i][k] == true) {
+-						if (v->ODMCombineEnablePerState[i][k] == dm_odm_combine_mode_4to1) {
+-							if (v->PixelClockBackEnd[k] / 12.0 / v->DSCFormatFactor
+-									> (1.0 - v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0) * v->MaxDSCCLK[i]) {
+-								v->DSCCLKRequiredMoreThanSupported[i] = true;
+-							}
+-						} else if (v->ODMCombineEnablePerState[i][k] == dm_odm_combine_mode_2to1) {
+-							if (v->PixelClockBackEnd[k] / 6.0 / v->DSCFormatFactor
+-									> (1.0 - v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0) * v->MaxDSCCLK[i]) {
+-								v->DSCCLKRequiredMoreThanSupported[i] = true;
+-							}
+-						} else {
+-							if (v->PixelClockBackEnd[k] / 3.0 / v->DSCFormatFactor
+-									> (1.0 - v->DISPCLKDPPCLKDSCCLKDownSpreading / 100.0) * v->MaxDSCCLK[i]) {
+-								v->DSCCLKRequiredMoreThanSupported[i] = true;
+-							}
+-						}
+-					}
+-				}
+-			}
+-		}
+-	}
++	/* Skip dscclk validation: as long as dispclk is supported, dscclk is also implicitly supported */
++
+ 	for (i = 0; i < v->soc.num_states; i++) {
+ 		v->NotEnoughDSCUnits[i] = false;
+ 		v->TotalDSCUnitsRequired = 0.0;
+@@ -5319,7 +5297,7 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 		for (j = 0; j < 2; j++) {
+ 			if (v->ScaleRatioAndTapsSupport == 1 && v->SourceFormatPixelAndScanSupport == 1 && v->ViewportSizeSupport[i][j] == 1
+ 					&& v->DIOSupport[i] == 1 && v->ODMCombine4To1SupportCheckOK[i] == 1
+-					&& v->NotEnoughDSCUnits[i] == 0 && v->DSCCLKRequiredMoreThanSupported[i] == 0
++					&& v->NotEnoughDSCUnits[i] == 0
+ 					&& v->DTBCLKRequiredMoreThanSupported[i] == 0
+ 					&& v->ROBSupport[i][j] == 1 && v->DISPCLK_DPPCLK_Support[i][j] == 1 && v->TotalAvailablePipesSupport[i][j] == 1
+ 					&& EnoughWritebackUnits == 1 && WritebackModeSupport == 1
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
+index 2ece3690bfa31..a0f0c54c863bc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
++++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
+@@ -253,6 +253,8 @@ struct _vcs_dpi_display_pipe_source_params_st {
+ 	unsigned int viewport_y_c;
+ 	unsigned int viewport_width_c;
+ 	unsigned int viewport_height_c;
++	unsigned int viewport_width_max;
++	unsigned int viewport_height_max;
+ 	unsigned int data_pitch;
+ 	unsigned int data_pitch_c;
+ 	unsigned int meta_pitch;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
+index 2a967458065be..8e5c9d22b3648 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
+@@ -630,6 +630,19 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib)
+ 				}
+ 			}
+ 		}
++		if (src->viewport_width_max) {
++			int hdiv_c = src->source_format >= dm_420_8 && src->source_format <= dm_422_10 ? 2 : 1;
++			int vdiv_c = src->source_format >= dm_420_8 && src->source_format <= dm_420_12 ? 2 : 1;
++
++			if (mode_lib->vba.ViewportWidth[mode_lib->vba.NumberOfActivePlanes] > src->viewport_width_max)
++				mode_lib->vba.ViewportWidth[mode_lib->vba.NumberOfActivePlanes] = src->viewport_width_max;
++			if (mode_lib->vba.ViewportHeight[mode_lib->vba.NumberOfActivePlanes] > src->viewport_height_max)
++				mode_lib->vba.ViewportHeight[mode_lib->vba.NumberOfActivePlanes] = src->viewport_height_max;
++			if (mode_lib->vba.ViewportWidthChroma[mode_lib->vba.NumberOfActivePlanes] > src->viewport_width_max / hdiv_c)
++				mode_lib->vba.ViewportWidthChroma[mode_lib->vba.NumberOfActivePlanes] = src->viewport_width_max / hdiv_c;
++			if (mode_lib->vba.ViewportHeightChroma[mode_lib->vba.NumberOfActivePlanes] > src->viewport_height_max / vdiv_c)
++				mode_lib->vba.ViewportHeightChroma[mode_lib->vba.NumberOfActivePlanes] = src->viewport_height_max / vdiv_c;
++		}
+ 
+ 		if (pipes[k].pipe.src.immediate_flip) {
+ 			mode_lib->vba.ImmediateFlipSupport = true;
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/transform.h b/drivers/gpu/drm/amd/display/dc/inc/hw/transform.h
+index 2947d1b155129..2a0db2b03047e 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/transform.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/transform.h
+@@ -162,9 +162,7 @@ struct scl_inits {
+ 	struct fixed31_32 h;
+ 	struct fixed31_32 h_c;
+ 	struct fixed31_32 v;
+-	struct fixed31_32 v_bot;
+ 	struct fixed31_32 v_c;
+-	struct fixed31_32 v_c_bot;
+ };
+ 
+ struct scaler_data {
+@@ -173,8 +171,6 @@ struct scaler_data {
+ 	struct scaling_taps taps;
+ 	struct rect viewport;
+ 	struct rect viewport_c;
+-	struct rect viewport_unadjusted;
+-	struct rect viewport_c_unadjusted;
+ 	struct rect recout;
+ 	struct scaling_ratios ratios;
+ 	struct scl_inits inits;
+diff --git a/drivers/gpu/drm/amd/display/dc/irq_types.h b/drivers/gpu/drm/amd/display/dc/irq_types.h
+index ae8f47ec0f8c3..1e4602bffb08d 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq_types.h
++++ b/drivers/gpu/drm/amd/display/dc/irq_types.h
+@@ -165,7 +165,7 @@ enum irq_type
+ };
+ 
+ #define DAL_VALID_IRQ_SRC_NUM(src) \
+-	((src) <= DAL_IRQ_SOURCES_NUMBER && (src) > DC_IRQ_SOURCE_INVALID)
++	((src) < DAL_IRQ_SOURCES_NUMBER && (src) > DC_IRQ_SOURCE_INVALID)
+ 
+ /* Number of Page Flip IRQ Sources. */
+ #define DAL_PFLIP_IRQ_SRC_NUM \
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c
+index 68a6481d7f8f3..b963226e8af43 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp.c
+@@ -260,7 +260,6 @@ enum mod_hdcp_status mod_hdcp_setup(struct mod_hdcp *hdcp,
+ 	struct mod_hdcp_output output;
+ 	enum mod_hdcp_status status = MOD_HDCP_STATUS_SUCCESS;
+ 
+-	memset(hdcp, 0, sizeof(struct mod_hdcp));
+ 	memset(&output, 0, sizeof(output));
+ 	hdcp->config = *config;
+ 	HDCP_TOP_INTERFACE_TRACE(hdcp);
+diff --git a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_execution.c b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_execution.c
+index 2cbd931363bdd..6d26d9c63ab2f 100644
+--- a/drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_execution.c
++++ b/drivers/gpu/drm/amd/display/modules/hdcp/hdcp1_execution.c
+@@ -29,8 +29,10 @@ static inline enum mod_hdcp_status validate_bksv(struct mod_hdcp *hdcp)
+ {
+ 	uint64_t n = 0;
+ 	uint8_t count = 0;
++	u8 bksv[sizeof(n)] = { };
+ 
+-	memcpy(&n, hdcp->auth.msg.hdcp1.bksv, sizeof(uint64_t));
++	memcpy(bksv, hdcp->auth.msg.hdcp1.bksv, sizeof(hdcp->auth.msg.hdcp1.bksv));
++	n = *(uint64_t *)bksv;
+ 
+ 	while (n) {
+ 		count++;
+diff --git a/drivers/gpu/drm/amd/include/navi10_enum.h b/drivers/gpu/drm/amd/include/navi10_enum.h
+index d5ead9680c6ed..84bcb96f76ea4 100644
+--- a/drivers/gpu/drm/amd/include/navi10_enum.h
++++ b/drivers/gpu/drm/amd/include/navi10_enum.h
+@@ -430,7 +430,7 @@ ARRAY_2D_DEPTH                           = 0x00000001,
+  */
+ 
+ typedef enum ENUM_NUM_SIMD_PER_CU {
+-NUM_SIMD_PER_CU                          = 0x00000004,
++NUM_SIMD_PER_CU                          = 0x00000002,
+ } ENUM_NUM_SIMD_PER_CU;
+ 
+ /*
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+index dcbe3a72da093..0d2f61f56f45c 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+@@ -1290,10 +1290,13 @@ static int aldebaran_usr_edit_dpm_table(struct smu_context *smu, enum PP_OD_DPM_
+ 
+ static bool aldebaran_is_dpm_running(struct smu_context *smu)
+ {
+-	int ret = 0;
++	int ret;
+ 	uint32_t feature_mask[2];
+ 	unsigned long feature_enabled;
++
+ 	ret = smu_cmn_get_enabled_mask(smu, feature_mask, 2);
++	if (ret)
++		return false;
+ 	feature_enabled = (unsigned long)((uint64_t)feature_mask[0] |
+ 					  ((uint64_t)feature_mask[1] << 32));
+ 	return !!(feature_enabled & SMC_DPM_FEATURE);
+@@ -1779,10 +1782,8 @@ static int aldebaran_set_mp1_state(struct smu_context *smu,
+ 	case PP_MP1_STATE_UNLOAD:
+ 		return smu_cmn_set_mp1_state(smu, mp1_state);
+ 	default:
+-		return -EINVAL;
++		return 0;
+ 	}
+-
+-	return 0;
+ }
+ 
+ static const struct pptable_funcs aldebaran_ppt_funcs = {
+diff --git a/drivers/gpu/drm/arm/malidp_planes.c b/drivers/gpu/drm/arm/malidp_planes.c
+index ddbba67f0283a..8c2ab3d653b70 100644
+--- a/drivers/gpu/drm/arm/malidp_planes.c
++++ b/drivers/gpu/drm/arm/malidp_planes.c
+@@ -927,6 +927,11 @@ static const struct drm_plane_helper_funcs malidp_de_plane_helper_funcs = {
+ 	.atomic_disable = malidp_de_plane_disable,
+ };
+ 
++static const uint64_t linear_only_modifiers[] = {
++	DRM_FORMAT_MOD_LINEAR,
++	DRM_FORMAT_MOD_INVALID
++};
++
+ int malidp_de_planes_init(struct drm_device *drm)
+ {
+ 	struct malidp_drm *malidp = drm->dev_private;
+@@ -990,8 +995,8 @@ int malidp_de_planes_init(struct drm_device *drm)
+ 		 */
+ 		ret = drm_universal_plane_init(drm, &plane->base, crtcs,
+ 				&malidp_de_plane_funcs, formats, n,
+-				(id == DE_SMART) ? NULL : modifiers, plane_type,
+-				NULL);
++				(id == DE_SMART) ? linear_only_modifiers : modifiers,
++				plane_type, NULL);
+ 
+ 		if (ret < 0)
+ 			goto cleanup;
+diff --git a/drivers/gpu/drm/ast/ast_dp501.c b/drivers/gpu/drm/ast/ast_dp501.c
+index 88121c0e0d05c..cd93c44f26627 100644
+--- a/drivers/gpu/drm/ast/ast_dp501.c
++++ b/drivers/gpu/drm/ast/ast_dp501.c
+@@ -189,6 +189,9 @@ bool ast_backup_fw(struct drm_device *dev, u8 *addr, u32 size)
+ 	u32 i, data;
+ 	u32 boot_address;
+ 
++	if (ast->config_mode != ast_use_p2a)
++		return false;
++
+ 	data = ast_mindwm(ast, 0x1e6e2100) & 0x01;
+ 	if (data) {
+ 		boot_address = get_fw_base(ast);
+@@ -207,6 +210,9 @@ static bool ast_launch_m68k(struct drm_device *dev)
+ 	u8 *fw_addr = NULL;
+ 	u8 jreg;
+ 
++	if (ast->config_mode != ast_use_p2a)
++		return false;
++
+ 	data = ast_mindwm(ast, 0x1e6e2100) & 0x01;
+ 	if (!data) {
+ 
+@@ -271,25 +277,55 @@ u8 ast_get_dp501_max_clk(struct drm_device *dev)
+ 	struct ast_private *ast = to_ast_private(dev);
+ 	u32 boot_address, offset, data;
+ 	u8 linkcap[4], linkrate, linklanes, maxclk = 0xff;
++	u32 *plinkcap;
+ 
+-	boot_address = get_fw_base(ast);
+-
+-	/* validate FW version */
+-	offset = 0xf000;
+-	data = ast_mindwm(ast, boot_address + offset);
+-	if ((data & 0xf0) != 0x10) /* version: 1x */
+-		return maxclk;
+-
+-	/* Read Link Capability */
+-	offset  = 0xf014;
+-	*(u32 *)linkcap = ast_mindwm(ast, boot_address + offset);
+-	if (linkcap[2] == 0) {
+-		linkrate = linkcap[0];
+-		linklanes = linkcap[1];
+-		data = (linkrate == 0x0a) ? (90 * linklanes) : (54 * linklanes);
+-		if (data > 0xff)
+-			data = 0xff;
+-		maxclk = (u8)data;
++	if (ast->config_mode == ast_use_p2a) {
++		boot_address = get_fw_base(ast);
++
++		/* validate FW version */
++		offset = AST_DP501_GBL_VERSION;
++		data = ast_mindwm(ast, boot_address + offset);
++		if ((data & AST_DP501_FW_VERSION_MASK) != AST_DP501_FW_VERSION_1) /* version: 1x */
++			return maxclk;
++
++		/* Read Link Capability */
++		offset  = AST_DP501_LINKRATE;
++		plinkcap = (u32 *)linkcap;
++		*plinkcap  = ast_mindwm(ast, boot_address + offset);
++		if (linkcap[2] == 0) {
++			linkrate = linkcap[0];
++			linklanes = linkcap[1];
++			data = (linkrate == 0x0a) ? (90 * linklanes) : (54 * linklanes);
++			if (data > 0xff)
++				data = 0xff;
++			maxclk = (u8)data;
++		}
++	} else {
++		if (!ast->dp501_fw_buf)
++			return AST_DP501_DEFAULT_DCLK;	/* 1024x768 as default */
++
++		/* dummy read */
++		offset = 0x0000;
++		data = readl(ast->dp501_fw_buf + offset);
++
++		/* validate FW version */
++		offset = AST_DP501_GBL_VERSION;
++		data = readl(ast->dp501_fw_buf + offset);
++		if ((data & AST_DP501_FW_VERSION_MASK) != AST_DP501_FW_VERSION_1) /* version: 1x */
++			return maxclk;
++
++		/* Read Link Capability */
++		offset = AST_DP501_LINKRATE;
++		plinkcap = (u32 *)linkcap;
++		*plinkcap = readl(ast->dp501_fw_buf + offset);
++		if (linkcap[2] == 0) {
++			linkrate = linkcap[0];
++			linklanes = linkcap[1];
++			data = (linkrate == 0x0a) ? (90 * linklanes) : (54 * linklanes);
++			if (data > 0xff)
++				data = 0xff;
++			maxclk = (u8)data;
++		}
+ 	}
+ 	return maxclk;
+ }
+@@ -298,26 +334,57 @@ bool ast_dp501_read_edid(struct drm_device *dev, u8 *ediddata)
+ {
+ 	struct ast_private *ast = to_ast_private(dev);
+ 	u32 i, boot_address, offset, data;
++	u32 *pEDIDidx;
+ 
+-	boot_address = get_fw_base(ast);
+-
+-	/* validate FW version */
+-	offset = 0xf000;
+-	data = ast_mindwm(ast, boot_address + offset);
+-	if ((data & 0xf0) != 0x10)
+-		return false;
+-
+-	/* validate PnP Monitor */
+-	offset = 0xf010;
+-	data = ast_mindwm(ast, boot_address + offset);
+-	if (!(data & 0x01))
+-		return false;
++	if (ast->config_mode == ast_use_p2a) {
++		boot_address = get_fw_base(ast);
+ 
+-	/* Read EDID */
+-	offset = 0xf020;
+-	for (i = 0; i < 128; i += 4) {
+-		data = ast_mindwm(ast, boot_address + offset + i);
+-		*(u32 *)(ediddata + i) = data;
++		/* validate FW version */
++		offset = AST_DP501_GBL_VERSION;
++		data = ast_mindwm(ast, boot_address + offset);
++		if ((data & AST_DP501_FW_VERSION_MASK) != AST_DP501_FW_VERSION_1)
++			return false;
++
++		/* validate PnP Monitor */
++		offset = AST_DP501_PNPMONITOR;
++		data = ast_mindwm(ast, boot_address + offset);
++		if (!(data & AST_DP501_PNP_CONNECTED))
++			return false;
++
++		/* Read EDID */
++		offset = AST_DP501_EDID_DATA;
++		for (i = 0; i < 128; i += 4) {
++			data = ast_mindwm(ast, boot_address + offset + i);
++			pEDIDidx = (u32 *)(ediddata + i);
++			*pEDIDidx = data;
++		}
++	} else {
++		if (!ast->dp501_fw_buf)
++			return false;
++
++		/* dummy read */
++		offset = 0x0000;
++		data = readl(ast->dp501_fw_buf + offset);
++
++		/* validate FW version */
++		offset = AST_DP501_GBL_VERSION;
++		data = readl(ast->dp501_fw_buf + offset);
++		if ((data & AST_DP501_FW_VERSION_MASK) != AST_DP501_FW_VERSION_1)
++			return false;
++
++		/* validate PnP Monitor */
++		offset = AST_DP501_PNPMONITOR;
++		data = readl(ast->dp501_fw_buf + offset);
++		if (!(data & AST_DP501_PNP_CONNECTED))
++			return false;
++
++		/* Read EDID */
++		offset = AST_DP501_EDID_DATA;
++		for (i = 0; i < 128; i += 4) {
++			data = readl(ast->dp501_fw_buf + offset + i);
++			pEDIDidx = (u32 *)(ediddata + i);
++			*pEDIDidx = data;
++		}
+ 	}
+ 
+ 	return true;
+diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
+index e82ab86287703..911f9f4147741 100644
+--- a/drivers/gpu/drm/ast/ast_drv.h
++++ b/drivers/gpu/drm/ast/ast_drv.h
+@@ -150,6 +150,7 @@ struct ast_private {
+ 
+ 	void __iomem *regs;
+ 	void __iomem *ioregs;
++	void __iomem *dp501_fw_buf;
+ 
+ 	enum ast_chip chip;
+ 	bool vga2_clone;
+@@ -325,6 +326,17 @@ int ast_mode_config_init(struct ast_private *ast);
+ #define AST_MM_ALIGN_SHIFT 4
+ #define AST_MM_ALIGN_MASK ((1 << AST_MM_ALIGN_SHIFT) - 1)
+ 
++#define AST_DP501_FW_VERSION_MASK	GENMASK(7, 4)
++#define AST_DP501_FW_VERSION_1		BIT(4)
++#define AST_DP501_PNP_CONNECTED		BIT(1)
++
++#define AST_DP501_DEFAULT_DCLK	65
++
++#define AST_DP501_GBL_VERSION	0xf000
++#define AST_DP501_PNPMONITOR	0xf010
++#define AST_DP501_LINKRATE	0xf014
++#define AST_DP501_EDID_DATA	0xf020
++
+ int ast_mm_init(struct ast_private *ast);
+ 
+ /* ast post */
+diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
+index c29cc7f19863a..2aff2e6cf450c 100644
+--- a/drivers/gpu/drm/ast/ast_main.c
++++ b/drivers/gpu/drm/ast/ast_main.c
+@@ -99,7 +99,7 @@ static void ast_detect_config_mode(struct drm_device *dev, u32 *scu_rev)
+ 	if (!(jregd0 & 0x80) || !(jregd1 & 0x10)) {
+ 		/* Double check it's actually working */
+ 		data = ast_read32(ast, 0xf004);
+-		if (data != 0xFFFFFFFF) {
++		if ((data != 0xFFFFFFFF) && (data != 0x00)) {
+ 			/* P2A works, grab silicon revision */
+ 			ast->config_mode = ast_use_p2a;
+ 
+@@ -450,6 +450,14 @@ struct ast_private *ast_device_create(const struct drm_driver *drv,
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
++	/* map reserved buffer */
++	ast->dp501_fw_buf = NULL;
++	if (dev->vram_mm->vram_size < pci_resource_len(pdev, 0)) {
++		ast->dp501_fw_buf = pci_iomap_range(pdev, 0, dev->vram_mm->vram_size, 0);
++		if (!ast->dp501_fw_buf)
++			drm_info(dev, "failed to map reserved buffer!\n");
++	}
++
+ 	ret = ast_mode_config_init(ast);
+ 	if (ret)
+ 		return ERR_PTR(ret);
+diff --git a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+index 989a05bc81979..ddcd5b6ad37aa 100644
+--- a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
++++ b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+@@ -2369,9 +2369,9 @@ static int cdns_mhdp_probe(struct platform_device *pdev)
+ 	clk_prepare_enable(clk);
+ 
+ 	pm_runtime_enable(dev);
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0) {
+-		dev_err(dev, "pm_runtime_get_sync failed\n");
++		dev_err(dev, "pm_runtime_resume_and_get failed\n");
+ 		pm_runtime_disable(dev);
+ 		goto clk_disable;
+ 	}
+diff --git a/drivers/gpu/drm/bridge/cdns-dsi.c b/drivers/gpu/drm/bridge/cdns-dsi.c
+index 76373e31df92d..b31281f76117c 100644
+--- a/drivers/gpu/drm/bridge/cdns-dsi.c
++++ b/drivers/gpu/drm/bridge/cdns-dsi.c
+@@ -1028,7 +1028,7 @@ static ssize_t cdns_dsi_transfer(struct mipi_dsi_host *host,
+ 	struct mipi_dsi_packet packet;
+ 	int ret, i, tx_len, rx_len;
+ 
+-	ret = pm_runtime_get_sync(host->dev);
++	ret = pm_runtime_resume_and_get(host->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611.c b/drivers/gpu/drm/bridge/lontium-lt9611.c
+index e8eb8deb444bb..29b1ce2140abc 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611.c
+@@ -1215,6 +1215,7 @@ static struct i2c_device_id lt9611_id[] = {
+ 	{ "lontium,lt9611", 0 },
+ 	{}
+ };
++MODULE_DEVICE_TABLE(i2c, lt9611_id);
+ 
+ static const struct of_device_id lt9611_match_table[] = {
+ 	{ .compatible = "lontium,lt9611" },
+diff --git a/drivers/gpu/drm/bridge/nwl-dsi.c b/drivers/gpu/drm/bridge/nwl-dsi.c
+index 66b67402f1acd..c65ca860712d2 100644
+--- a/drivers/gpu/drm/bridge/nwl-dsi.c
++++ b/drivers/gpu/drm/bridge/nwl-dsi.c
+@@ -21,6 +21,7 @@
+ #include <linux/sys_soc.h>
+ #include <linux/time64.h>
+ 
++#include <drm/drm_atomic_state_helper.h>
+ #include <drm/drm_bridge.h>
+ #include <drm/drm_mipi_dsi.h>
+ #include <drm/drm_of.h>
+@@ -742,7 +743,9 @@ static int nwl_dsi_disable(struct nwl_dsi *dsi)
+ 	return 0;
+ }
+ 
+-static void nwl_dsi_bridge_disable(struct drm_bridge *bridge)
++static void
++nwl_dsi_bridge_atomic_disable(struct drm_bridge *bridge,
++			      struct drm_bridge_state *old_bridge_state)
+ {
+ 	struct nwl_dsi *dsi = bridge_to_dsi(bridge);
+ 	int ret;
+@@ -803,17 +806,6 @@ static int nwl_dsi_get_dphy_params(struct nwl_dsi *dsi,
+ 	return 0;
+ }
+ 
+-static bool nwl_dsi_bridge_mode_fixup(struct drm_bridge *bridge,
+-				      const struct drm_display_mode *mode,
+-				      struct drm_display_mode *adjusted_mode)
+-{
+-	/* At least LCDIF + NWL needs active high sync */
+-	adjusted_mode->flags |= (DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC);
+-	adjusted_mode->flags &= ~(DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC);
+-
+-	return true;
+-}
+-
+ static enum drm_mode_status
+ nwl_dsi_bridge_mode_valid(struct drm_bridge *bridge,
+ 			  const struct drm_display_info *info,
+@@ -831,6 +823,24 @@ nwl_dsi_bridge_mode_valid(struct drm_bridge *bridge,
+ 	return MODE_OK;
+ }
+ 
++static int nwl_dsi_bridge_atomic_check(struct drm_bridge *bridge,
++				       struct drm_bridge_state *bridge_state,
++				       struct drm_crtc_state *crtc_state,
++				       struct drm_connector_state *conn_state)
++{
++	struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode;
++
++	/* At least LCDIF + NWL needs active high sync */
++	adjusted_mode->flags |= (DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC);
++	adjusted_mode->flags &= ~(DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC);
++
++	/* Do a full modeset if crtc_state->active is changed to be true. */
++	if (crtc_state->active_changed && crtc_state->active)
++		crtc_state->mode_changed = true;
++
++	return 0;
++}
++
+ static void
+ nwl_dsi_bridge_mode_set(struct drm_bridge *bridge,
+ 			const struct drm_display_mode *mode,
+@@ -862,7 +872,9 @@ nwl_dsi_bridge_mode_set(struct drm_bridge *bridge,
+ 	drm_mode_debug_printmodeline(adjusted_mode);
+ }
+ 
+-static void nwl_dsi_bridge_pre_enable(struct drm_bridge *bridge)
++static void
++nwl_dsi_bridge_atomic_pre_enable(struct drm_bridge *bridge,
++				 struct drm_bridge_state *old_bridge_state)
+ {
+ 	struct nwl_dsi *dsi = bridge_to_dsi(bridge);
+ 	int ret;
+@@ -897,7 +909,9 @@ static void nwl_dsi_bridge_pre_enable(struct drm_bridge *bridge)
+ 	}
+ }
+ 
+-static void nwl_dsi_bridge_enable(struct drm_bridge *bridge)
++static void
++nwl_dsi_bridge_atomic_enable(struct drm_bridge *bridge,
++			     struct drm_bridge_state *old_bridge_state)
+ {
+ 	struct nwl_dsi *dsi = bridge_to_dsi(bridge);
+ 	int ret;
+@@ -942,14 +956,17 @@ static void nwl_dsi_bridge_detach(struct drm_bridge *bridge)
+ }
+ 
+ static const struct drm_bridge_funcs nwl_dsi_bridge_funcs = {
+-	.pre_enable = nwl_dsi_bridge_pre_enable,
+-	.enable     = nwl_dsi_bridge_enable,
+-	.disable    = nwl_dsi_bridge_disable,
+-	.mode_fixup = nwl_dsi_bridge_mode_fixup,
+-	.mode_set   = nwl_dsi_bridge_mode_set,
+-	.mode_valid = nwl_dsi_bridge_mode_valid,
+-	.attach	    = nwl_dsi_bridge_attach,
+-	.detach	    = nwl_dsi_bridge_detach,
++	.atomic_duplicate_state	= drm_atomic_helper_bridge_duplicate_state,
++	.atomic_destroy_state	= drm_atomic_helper_bridge_destroy_state,
++	.atomic_reset		= drm_atomic_helper_bridge_reset,
++	.atomic_check		= nwl_dsi_bridge_atomic_check,
++	.atomic_pre_enable	= nwl_dsi_bridge_atomic_pre_enable,
++	.atomic_enable		= nwl_dsi_bridge_atomic_enable,
++	.atomic_disable		= nwl_dsi_bridge_atomic_disable,
++	.mode_set		= nwl_dsi_bridge_mode_set,
++	.mode_valid		= nwl_dsi_bridge_mode_valid,
++	.attach			= nwl_dsi_bridge_attach,
++	.detach			= nwl_dsi_bridge_detach,
+ };
+ 
+ static int nwl_dsi_parse_dt(struct nwl_dsi *dsi)
+diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
+index cb2f53e566858..cd9ece0680bf1 100644
+--- a/drivers/gpu/drm/drm_dp_helper.c
++++ b/drivers/gpu/drm/drm_dp_helper.c
+@@ -679,7 +679,14 @@ int drm_dp_read_downstream_info(struct drm_dp_aux *aux,
+ 	    !(dpcd[DP_DOWNSTREAMPORT_PRESENT] & DP_DWN_STRM_PORT_PRESENT))
+ 		return 0;
+ 
++	/* Some branches advertise having 0 downstream ports, despite also advertising they have a
++	 * downstream port present. The DP spec isn't clear on whether this is
++	 * allowed, but since some branches do it we need to handle it regardless.
++	 */
+ 	len = drm_dp_downstream_port_count(dpcd);
++	if (!len)
++		return 0;
++
+ 	if (dpcd[DP_DOWNSTREAMPORT_PRESENT] & DP_DETAILED_CAP_INFO_AVAILABLE)
+ 		len *= 4;
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 642c60f3d9b18..005510f309cf7 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -2850,7 +2850,7 @@ static int intel_dp_vsc_sdp_unpack(struct drm_dp_vsc_sdp *vsc,
+ 	if (size < sizeof(struct dp_sdp))
+ 		return -EINVAL;
+ 
+-	memset(vsc, 0, size);
++	memset(vsc, 0, sizeof(*vsc));
+ 
+ 	if (sdp->sdp_header.HB0 != 0)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/imx/imx-drm-core.c b/drivers/gpu/drm/imx/imx-drm-core.c
+index e6a88c8cbd691..8457b9788cdab 100644
+--- a/drivers/gpu/drm/imx/imx-drm-core.c
++++ b/drivers/gpu/drm/imx/imx-drm-core.c
+@@ -145,9 +145,26 @@ static const struct drm_ioctl_desc imx_drm_ioctls[] = {
+ 	/* none so far */
+ };
+ 
++static int imx_drm_dumb_create(struct drm_file *file_priv,
++			       struct drm_device *drm,
++			       struct drm_mode_create_dumb *args)
++{
++	u32 width = args->width;
++	int ret;
++
++	args->width = ALIGN(width, 8);
++
++	ret = drm_gem_cma_dumb_create(file_priv, drm, args);
++	if (ret)
++		return ret;
++
++	args->width = width;
++	return ret;
++}
++
+ static const struct drm_driver imx_drm_driver = {
+ 	.driver_features	= DRIVER_MODESET | DRIVER_GEM | DRIVER_ATOMIC,
+-	DRM_GEM_CMA_DRIVER_OPS,
++	DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(imx_drm_dumb_create),
+ 	.ioctls			= imx_drm_ioctls,
+ 	.num_ioctls		= ARRAY_SIZE(imx_drm_ioctls),
+ 	.fops			= &imx_drm_driver_fops,
+diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
+index ffdc492c5bc51..53132ddf9587b 100644
+--- a/drivers/gpu/drm/imx/imx-ldb.c
++++ b/drivers/gpu/drm/imx/imx-ldb.c
+@@ -274,6 +274,11 @@ imx_ldb_encoder_atomic_mode_set(struct drm_encoder *encoder,
+ 			 "%s: mode exceeds 85 MHz pixel clock\n", __func__);
+ 	}
+ 
++	if (!IS_ALIGNED(mode->hdisplay, 8)) {
++		dev_warn(ldb->dev,
++			 "%s: hdisplay does not align to 8 byte\n", __func__);
++	}
++
+ 	if (dual) {
+ 		serial_clk = 3500UL * mode->clock;
+ 		imx_ldb_set_clock(ldb, mux, 0, serial_clk, di_clk);
+diff --git a/drivers/gpu/drm/imx/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3-crtc.c
+index e6431a227feb1..9c8829f945b23 100644
+--- a/drivers/gpu/drm/imx/ipuv3-crtc.c
++++ b/drivers/gpu/drm/imx/ipuv3-crtc.c
+@@ -305,10 +305,19 @@ static void ipu_crtc_mode_set_nofb(struct drm_crtc *crtc)
+ 	sig_cfg.vsync_pin = imx_crtc_state->di_vsync_pin;
+ 
+ 	drm_display_mode_to_videomode(mode, &sig_cfg.mode);
++	if (!IS_ALIGNED(sig_cfg.mode.hactive, 8)) {
++		unsigned int new_hactive = ALIGN(sig_cfg.mode.hactive, 8);
++
++		dev_warn(ipu_crtc->dev, "8-pixel align hactive %d -> %d\n",
++			 sig_cfg.mode.hactive, new_hactive);
++
++		sig_cfg.mode.hfront_porch = new_hactive - sig_cfg.mode.hactive;
++		sig_cfg.mode.hactive = new_hactive;
++	}
+ 
+ 	ipu_dc_init_sync(ipu_crtc->dc, ipu_crtc->di,
+ 			 mode->flags & DRM_MODE_FLAG_INTERLACE,
+-			 imx_crtc_state->bus_format, mode->hdisplay);
++			 imx_crtc_state->bus_format, sig_cfg.mode.hactive);
+ 	ipu_di_init_sync_panel(ipu_crtc->di, &sig_cfg);
+ }
+ 
+diff --git a/drivers/gpu/drm/imx/ipuv3-plane.c b/drivers/gpu/drm/imx/ipuv3-plane.c
+index 233310712deb0..886de0f80b4ed 100644
+--- a/drivers/gpu/drm/imx/ipuv3-plane.c
++++ b/drivers/gpu/drm/imx/ipuv3-plane.c
+@@ -30,6 +30,11 @@ to_ipu_plane_state(struct drm_plane_state *p)
+ 	return container_of(p, struct ipu_plane_state, base);
+ }
+ 
++static unsigned int ipu_src_rect_width(const struct drm_plane_state *state)
++{
++	return ALIGN(drm_rect_width(&state->src) >> 16, 8);
++}
++
+ static inline struct ipu_plane *to_ipu_plane(struct drm_plane *p)
+ {
+ 	return container_of(p, struct ipu_plane, base);
+@@ -441,6 +446,12 @@ static int ipu_plane_atomic_check(struct drm_plane *plane,
+ 	if (old_fb && fb->pitches[0] != old_fb->pitches[0])
+ 		crtc_state->mode_changed = true;
+ 
++	if (ALIGN(fb->width, 8) * fb->format->cpp[0] >
++	    fb->pitches[0] + fb->offsets[0]) {
++		dev_warn(dev, "pitch is not big enough for 8 pixels alignment");
++		return -EINVAL;
++	}
++
+ 	switch (fb->format->format) {
+ 	case DRM_FORMAT_YUV420:
+ 	case DRM_FORMAT_YVU420:
+@@ -616,7 +627,7 @@ static void ipu_plane_atomic_update(struct drm_plane *plane,
+ 	if (ipu_state->use_pre) {
+ 		axi_id = ipu_chan_assign_axi_id(ipu_plane->dma);
+ 		ipu_prg_channel_configure(ipu_plane->ipu_ch, axi_id,
+-					  drm_rect_width(&new_state->src) >> 16,
++					  ipu_src_rect_width(new_state),
+ 					  drm_rect_height(&new_state->src) >> 16,
+ 					  fb->pitches[0], fb->format->format,
+ 					  fb->modifier, &eba);
+@@ -649,9 +660,9 @@ static void ipu_plane_atomic_update(struct drm_plane *plane,
+ 		break;
+ 	}
+ 
+-	ipu_dmfc_config_wait4eot(ipu_plane->dmfc, drm_rect_width(dst));
++	ipu_dmfc_config_wait4eot(ipu_plane->dmfc, ALIGN(drm_rect_width(dst), 8));
+ 
+-	width = drm_rect_width(&new_state->src) >> 16;
++	width = ipu_src_rect_width(new_state);
+ 	height = drm_rect_height(&new_state->src) >> 16;
+ 	info = drm_format_info(fb->format->format);
+ 	ipu_calculate_bursts(width, info->cpp[0], fb->pitches[0],
+@@ -716,7 +727,7 @@ static void ipu_plane_atomic_update(struct drm_plane *plane,
+ 
+ 		ipu_cpmem_zero(ipu_plane->alpha_ch);
+ 		ipu_cpmem_set_resolution(ipu_plane->alpha_ch,
+-					 drm_rect_width(&new_state->src) >> 16,
++					 ipu_src_rect_width(new_state),
+ 					 drm_rect_height(&new_state->src) >> 16);
+ 		ipu_cpmem_set_format_passthrough(ipu_plane->alpha_ch, 8);
+ 		ipu_cpmem_set_high_priority(ipu_plane->alpha_ch);
+diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+index 29742ec5ab952..389cad59e0909 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
++++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+@@ -342,7 +342,7 @@ static void ingenic_drm_crtc_atomic_flush(struct drm_crtc *crtc,
+ 	if (priv->update_clk_rate) {
+ 		mutex_lock(&priv->clk_mutex);
+ 		clk_set_rate(priv->pix_clk,
+-			     crtc_state->adjusted_mode.clock * 1000);
++			     crtc_state->adjusted_mode.crtc_clock * 1000);
+ 		priv->update_clk_rate = false;
+ 		mutex_unlock(&priv->clk_mutex);
+ 	}
+@@ -419,7 +419,7 @@ static void ingenic_drm_plane_enable(struct ingenic_drm *priv,
+ 	unsigned int en_bit;
+ 
+ 	if (priv->soc_info->has_osd) {
+-		if (plane->type == DRM_PLANE_TYPE_PRIMARY)
++		if (plane != &priv->f0)
+ 			en_bit = JZ_LCD_OSDC_F1EN;
+ 		else
+ 			en_bit = JZ_LCD_OSDC_F0EN;
+@@ -434,7 +434,7 @@ void ingenic_drm_plane_disable(struct device *dev, struct drm_plane *plane)
+ 	unsigned int en_bit;
+ 
+ 	if (priv->soc_info->has_osd) {
+-		if (plane->type == DRM_PLANE_TYPE_PRIMARY)
++		if (plane != &priv->f0)
+ 			en_bit = JZ_LCD_OSDC_F1EN;
+ 		else
+ 			en_bit = JZ_LCD_OSDC_F0EN;
+@@ -461,8 +461,7 @@ void ingenic_drm_plane_config(struct device *dev,
+ 
+ 	ingenic_drm_plane_enable(priv, plane);
+ 
+-	if (priv->soc_info->has_osd &&
+-	    plane->type == DRM_PLANE_TYPE_PRIMARY) {
++	if (priv->soc_info->has_osd && plane != &priv->f0) {
+ 		switch (fourcc) {
+ 		case DRM_FORMAT_XRGB1555:
+ 			ctrl |= JZ_LCD_OSDCTRL_RGB555;
+@@ -510,7 +509,7 @@ void ingenic_drm_plane_config(struct device *dev,
+ 	}
+ 
+ 	if (priv->soc_info->has_osd) {
+-		if (plane->type == DRM_PLANE_TYPE_PRIMARY) {
++		if (plane != &priv->f0) {
+ 			xy_reg = JZ_REG_LCD_XYP1;
+ 			size_reg = JZ_REG_LCD_SIZE1;
+ 		} else {
+@@ -561,7 +560,7 @@ static void ingenic_drm_plane_atomic_update(struct drm_plane *plane,
+ 		height = newstate->src_h >> 16;
+ 		cpp = newstate->fb->format->cpp[0];
+ 
+-		if (!priv->soc_info->has_osd || plane->type == DRM_PLANE_TYPE_OVERLAY)
++		if (!priv->soc_info->has_osd || plane == &priv->f0)
+ 			hwdesc = &priv->dma_hwdescs->hwdesc_f0;
+ 		else
+ 			hwdesc = &priv->dma_hwdescs->hwdesc_f1;
+diff --git a/drivers/gpu/drm/ingenic/ingenic-ipu.c b/drivers/gpu/drm/ingenic/ingenic-ipu.c
+index 5ae6adab8306c..3b1091e7c0cde 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-ipu.c
++++ b/drivers/gpu/drm/ingenic/ingenic-ipu.c
+@@ -767,7 +767,7 @@ static int ingenic_ipu_bind(struct device *dev, struct device *master, void *d)
+ 
+ 	err = drm_universal_plane_init(drm, plane, 1, &ingenic_ipu_plane_funcs,
+ 				       soc_info->formats, soc_info->num_formats,
+-				       NULL, DRM_PLANE_TYPE_PRIMARY, NULL);
++				       NULL, DRM_PLANE_TYPE_OVERLAY, NULL);
+ 	if (err) {
+ 		dev_err(dev, "Failed to init plane: %i\n", err);
+ 		return err;
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index 40df2c8231877..474efb8442493 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -260,7 +260,7 @@ static int mtk_crtc_ddp_hw_init(struct mtk_drm_crtc *mtk_crtc)
+ 		drm_connector_list_iter_end(&conn_iter);
+ 	}
+ 
+-	ret = pm_runtime_get_sync(crtc->dev->dev);
++	ret = pm_runtime_resume_and_get(crtc->dev->dev);
+ 	if (ret < 0) {
+ 		DRM_ERROR("Failed to enable power domain: %d\n", ret);
+ 		return ret;
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+index 3d729270bde15..4a5b518288b06 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+@@ -88,8 +88,6 @@ static int mdp4_hw_init(struct msm_kms *kms)
+ 	if (mdp4_kms->rev > 1)
+ 		mdp4_write(mdp4_kms, REG_MDP4_RESET_STATUS, 1);
+ 
+-	dev->mode_config.allow_fb_modifiers = true;
+-
+ out:
+ 	pm_runtime_put_sync(dev->dev);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
+index 9aecca919f24b..49bdabea8ed59 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
+@@ -349,6 +349,12 @@ enum mdp4_pipe mdp4_plane_pipe(struct drm_plane *plane)
+ 	return mdp4_plane->pipe;
+ }
+ 
++static const uint64_t supported_format_modifiers[] = {
++	DRM_FORMAT_MOD_SAMSUNG_64_32_TILE,
++	DRM_FORMAT_MOD_LINEAR,
++	DRM_FORMAT_MOD_INVALID
++};
++
+ /* initialize plane */
+ struct drm_plane *mdp4_plane_init(struct drm_device *dev,
+ 		enum mdp4_pipe pipe_id, bool private_plane)
+@@ -377,7 +383,7 @@ struct drm_plane *mdp4_plane_init(struct drm_device *dev,
+ 	type = private_plane ? DRM_PLANE_TYPE_PRIMARY : DRM_PLANE_TYPE_OVERLAY;
+ 	ret = drm_universal_plane_init(dev, plane, 0xff, &mdp4_plane_funcs,
+ 				 mdp4_plane->formats, mdp4_plane->nformats,
+-				 NULL, type, NULL);
++				 supported_format_modifiers, type, NULL);
+ 	if (ret)
+ 		goto fail;
+ 
+diff --git a/drivers/gpu/drm/mxsfb/Kconfig b/drivers/gpu/drm/mxsfb/Kconfig
+index 0143d539f8f82..ee22cd25d3e3d 100644
+--- a/drivers/gpu/drm/mxsfb/Kconfig
++++ b/drivers/gpu/drm/mxsfb/Kconfig
+@@ -10,7 +10,6 @@ config DRM_MXSFB
+ 	depends on COMMON_CLK
+ 	select DRM_MXS
+ 	select DRM_KMS_HELPER
+-	select DRM_KMS_FB_HELPER
+ 	select DRM_KMS_CMA_HELPER
+ 	select DRM_PANEL
+ 	select DRM_PANEL_BRIDGE
+diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
+index dac02c7be54dc..ac6cb8c6fc1a1 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_display.c
++++ b/drivers/gpu/drm/nouveau/nouveau_display.c
+@@ -697,7 +697,6 @@ nouveau_display_create(struct drm_device *dev)
+ 
+ 	dev->mode_config.preferred_depth = 24;
+ 	dev->mode_config.prefer_shadow = 1;
+-	dev->mode_config.allow_fb_modifiers = true;
+ 
+ 	if (drm->client.device.info.chipset < 0x11)
+ 		dev->mode_config.async_page_flip = false;
+diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.c b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+index 47d27e54a34f2..3644652f726f7 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_devfreq.c
++++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+@@ -92,6 +92,15 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
+ 	struct thermal_cooling_device *cooling;
+ 	struct panfrost_devfreq *pfdevfreq = &pfdev->pfdevfreq;
+ 
++	if (pfdev->comp->num_supplies > 1) {
++		/*
++		 * GPUs with more than 1 supply require platform-specific handling:
++		 * continue without devfreq
++		 */
++		DRM_DEV_INFO(dev, "More than 1 supply is not supported yet\n");
++		return 0;
++	}
++
+ 	ret = devm_pm_opp_set_regulators(dev, pfdev->comp->supply_names,
+ 					 pfdev->comp->num_supplies);
+ 	if (ret) {
+diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
+index 652af7a134bd0..1d03ec7636047 100644
+--- a/drivers/gpu/drm/radeon/radeon_display.c
++++ b/drivers/gpu/drm/radeon/radeon_display.c
+@@ -1325,6 +1325,7 @@ radeon_user_framebuffer_create(struct drm_device *dev,
+ 	/* Handle is imported dma-buf, so cannot be migrated to VRAM for scanout */
+ 	if (obj->import_attach) {
+ 		DRM_DEBUG_KMS("Cannot create framebuffer from imported dma_buf\n");
++		drm_gem_object_put(obj);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
+index efeb115ae70ec..40aece37e0b4e 100644
+--- a/drivers/gpu/drm/radeon/radeon_drv.c
++++ b/drivers/gpu/drm/radeon/radeon_drv.c
+@@ -386,13 +386,13 @@ radeon_pci_shutdown(struct pci_dev *pdev)
+ 	if (radeon_device_is_virtual())
+ 		radeon_pci_remove(pdev);
+ 
+-#ifdef CONFIG_PPC64
++#if defined(CONFIG_PPC64) || defined(CONFIG_MACH_LOONGSON64)
+ 	/*
+ 	 * Some adapters need to be suspended before a
+ 	 * shutdown occurs in order to prevent an error
+-	 * during kexec.
+-	 * Make this power specific becauase it breaks
+-	 * some non-power boards.
++	 * during kexec, shutdown or reboot.
++	 * Make this power and Loongson specific because
++	 * it breaks some other boards.
+ 	 */
+ 	radeon_suspend_kms(pci_get_drvdata(pdev), true, true, false);
+ #endif
+diff --git a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+index d8c47ee3cad37..520a0a0cd2b56 100644
+--- a/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
++++ b/drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
+@@ -243,7 +243,6 @@ struct dw_mipi_dsi_rockchip {
+ 	struct dw_mipi_dsi *dmd;
+ 	const struct rockchip_dw_dsi_chip_data *cdata;
+ 	struct dw_mipi_dsi_plat_data pdata;
+-	int devcnt;
+ };
+ 
+ struct dphy_pll_parameter_map {
+@@ -1141,9 +1140,6 @@ static int dw_mipi_dsi_rockchip_remove(struct platform_device *pdev)
+ {
+ 	struct dw_mipi_dsi_rockchip *dsi = platform_get_drvdata(pdev);
+ 
+-	if (dsi->devcnt == 0)
+-		component_del(dsi->dev, &dw_mipi_dsi_rockchip_ops);
+-
+ 	dw_mipi_dsi_remove(dsi->dmd);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/rockchip/rockchip_vop_reg.c b/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
+index 80053d91a301f..a6fe03c3748aa 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
++++ b/drivers/gpu/drm/rockchip/rockchip_vop_reg.c
+@@ -349,8 +349,8 @@ static const struct vop_win_phy rk3066_win0_data = {
+ 	.nformats = ARRAY_SIZE(formats_win_full),
+ 	.format_modifiers = format_modifiers_win_full,
+ 	.enable = VOP_REG(RK3066_SYS_CTRL1, 0x1, 0),
+-	.format = VOP_REG(RK3066_SYS_CTRL0, 0x7, 4),
+-	.rb_swap = VOP_REG(RK3066_SYS_CTRL0, 0x1, 19),
++	.format = VOP_REG(RK3066_SYS_CTRL1, 0x7, 4),
++	.rb_swap = VOP_REG(RK3066_SYS_CTRL1, 0x1, 19),
+ 	.act_info = VOP_REG(RK3066_WIN0_ACT_INFO, 0x1fff1fff, 0),
+ 	.dsp_info = VOP_REG(RK3066_WIN0_DSP_INFO, 0x0fff0fff, 0),
+ 	.dsp_st = VOP_REG(RK3066_WIN0_DSP_ST, 0x1fff1fff, 0),
+@@ -361,13 +361,12 @@ static const struct vop_win_phy rk3066_win0_data = {
+ };
+ 
+ static const struct vop_win_phy rk3066_win1_data = {
+-	.scl = &rk3066_win_scl,
+ 	.data_formats = formats_win_full,
+ 	.nformats = ARRAY_SIZE(formats_win_full),
+ 	.format_modifiers = format_modifiers_win_full,
+ 	.enable = VOP_REG(RK3066_SYS_CTRL1, 0x1, 1),
+-	.format = VOP_REG(RK3066_SYS_CTRL0, 0x7, 7),
+-	.rb_swap = VOP_REG(RK3066_SYS_CTRL0, 0x1, 23),
++	.format = VOP_REG(RK3066_SYS_CTRL1, 0x7, 7),
++	.rb_swap = VOP_REG(RK3066_SYS_CTRL1, 0x1, 23),
+ 	.act_info = VOP_REG(RK3066_WIN1_ACT_INFO, 0x1fff1fff, 0),
+ 	.dsp_info = VOP_REG(RK3066_WIN1_DSP_INFO, 0x0fff0fff, 0),
+ 	.dsp_st = VOP_REG(RK3066_WIN1_DSP_ST, 0x1fff1fff, 0),
+@@ -382,8 +381,8 @@ static const struct vop_win_phy rk3066_win2_data = {
+ 	.nformats = ARRAY_SIZE(formats_win_lite),
+ 	.format_modifiers = format_modifiers_win_lite,
+ 	.enable = VOP_REG(RK3066_SYS_CTRL1, 0x1, 2),
+-	.format = VOP_REG(RK3066_SYS_CTRL0, 0x7, 10),
+-	.rb_swap = VOP_REG(RK3066_SYS_CTRL0, 0x1, 27),
++	.format = VOP_REG(RK3066_SYS_CTRL1, 0x7, 10),
++	.rb_swap = VOP_REG(RK3066_SYS_CTRL1, 0x1, 27),
+ 	.dsp_info = VOP_REG(RK3066_WIN2_DSP_INFO, 0x0fff0fff, 0),
+ 	.dsp_st = VOP_REG(RK3066_WIN2_DSP_ST, 0x1fff1fff, 0),
+ 	.yrgb_mst = VOP_REG(RK3066_WIN2_MST, 0xffffffff, 0),
+@@ -408,6 +407,9 @@ static const struct vop_common rk3066_common = {
+ 	.dither_down_en = VOP_REG(RK3066_DSP_CTRL0, 0x1, 11),
+ 	.dither_down_mode = VOP_REG(RK3066_DSP_CTRL0, 0x1, 10),
+ 	.dsp_blank = VOP_REG(RK3066_DSP_CTRL1, 0x1, 24),
++	.dither_up = VOP_REG(RK3066_DSP_CTRL0, 0x1, 9),
++	.dsp_lut_en = VOP_REG(RK3066_SYS_CTRL1, 0x1, 31),
++	.data_blank = VOP_REG(RK3066_DSP_CTRL1, 0x1, 25),
+ };
+ 
+ static const struct vop_win_data rk3066_vop_win_data[] = {
+@@ -505,7 +507,10 @@ static const struct vop_common rk3188_common = {
+ 	.dither_down_sel = VOP_REG(RK3188_DSP_CTRL0, 0x1, 27),
+ 	.dither_down_en = VOP_REG(RK3188_DSP_CTRL0, 0x1, 11),
+ 	.dither_down_mode = VOP_REG(RK3188_DSP_CTRL0, 0x1, 10),
+-	.dsp_blank = VOP_REG(RK3188_DSP_CTRL1, 0x3, 24),
++	.dsp_blank = VOP_REG(RK3188_DSP_CTRL1, 0x1, 24),
++	.dither_up = VOP_REG(RK3188_DSP_CTRL0, 0x1, 9),
++	.dsp_lut_en = VOP_REG(RK3188_SYS_CTRL, 0x1, 28),
++	.data_blank = VOP_REG(RK3188_DSP_CTRL1, 0x1, 25),
+ };
+ 
+ static const struct vop_win_data rk3188_vop_win_data[] = {
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index f0790e9471d1a..86a4209d8c770 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -116,7 +116,8 @@ static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity)
+ 	rmb(); /* for list_empty to work without lock */
+ 
+ 	if (list_empty(&entity->list) ||
+-	    spsc_queue_count(&entity->job_queue) == 0)
++	    spsc_queue_count(&entity->job_queue) == 0 ||
++	    entity->stopped)
+ 		return true;
+ 
+ 	return false;
+@@ -221,11 +222,16 @@ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
+ static void drm_sched_entity_kill_jobs(struct drm_sched_entity *entity)
+ {
+ 	struct drm_sched_job *job;
++	struct dma_fence *f;
+ 	int r;
+ 
+ 	while ((job = to_drm_sched_job(spsc_queue_pop(&entity->job_queue)))) {
+ 		struct drm_sched_fence *s_fence = job->s_fence;
+ 
++		/* Wait for all dependencies to avoid data corruption */
++		while ((f = job->sched->ops->dependency(job, entity)))
++			dma_fence_wait(f, false);
++
+ 		drm_sched_fence_scheduled(s_fence);
+ 		dma_fence_set_error(&s_fence->finished, -ESRCH);
+ 
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index 92d8de24d0a17..c105c807d7e53 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -888,9 +888,33 @@ EXPORT_SYMBOL(drm_sched_init);
+  */
+ void drm_sched_fini(struct drm_gpu_scheduler *sched)
+ {
++	struct drm_sched_entity *s_entity;
++	int i;
++
+ 	if (sched->thread)
+ 		kthread_stop(sched->thread);
+ 
++	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
++		struct drm_sched_rq *rq = &sched->sched_rq[i];
++
++		if (!rq)
++			continue;
++
++		spin_lock(&rq->lock);
++		list_for_each_entry(s_entity, &rq->entities, list)
++			/*
++			 * Prevents reinsertion and marks job_queue as idle;
++			 * it will be removed from rq in drm_sched_entity_fini
++			 * eventually
++			 */
++			s_entity->stopped = true;
++		spin_unlock(&rq->lock);
++
++	}
++
++	/* Wake up everyone stuck in drm_sched_entity_flush for this scheduler */
++	wake_up_all(&sched->job_scheduled);
++
+ 	/* Confirm no work left behind accessing device structures */
+ 	cancel_delayed_work_sync(&sched->work_tdr);
+ 
+diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
+index f9120dc246828..51bbbc42a144f 100644
+--- a/drivers/gpu/drm/tegra/dc.c
++++ b/drivers/gpu/drm/tegra/dc.c
+@@ -348,7 +348,7 @@ static void tegra_dc_setup_window(struct tegra_plane *plane,
+ 	 * For YUV planar modes, the number of bytes per pixel takes into
+ 	 * account only the luma component and therefore is 1.
+ 	 */
+-	yuv = tegra_plane_format_is_yuv(window->format, &planar);
++	yuv = tegra_plane_format_is_yuv(window->format, &planar, NULL);
+ 	if (!yuv)
+ 		bpp = window->bits_per_pixel / 8;
+ 	else
+@@ -999,6 +999,11 @@ static const struct drm_plane_helper_funcs tegra_cursor_plane_helper_funcs = {
+ 	.atomic_disable = tegra_cursor_atomic_disable,
+ };
+ 
++static const uint64_t linear_modifiers[] = {
++	DRM_FORMAT_MOD_LINEAR,
++	DRM_FORMAT_MOD_INVALID
++};
++
+ static struct drm_plane *tegra_dc_cursor_plane_create(struct drm_device *drm,
+ 						      struct tegra_dc *dc)
+ {
+@@ -1032,7 +1037,7 @@ static struct drm_plane *tegra_dc_cursor_plane_create(struct drm_device *drm,
+ 
+ 	err = drm_universal_plane_init(drm, &plane->base, possible_crtcs,
+ 				       &tegra_plane_funcs, formats,
+-				       num_formats, NULL,
++				       num_formats, linear_modifiers,
+ 				       DRM_PLANE_TYPE_CURSOR, NULL);
+ 	if (err < 0) {
+ 		kfree(plane);
+@@ -1151,7 +1156,8 @@ static struct drm_plane *tegra_dc_overlay_plane_create(struct drm_device *drm,
+ 
+ 	err = drm_universal_plane_init(drm, &plane->base, possible_crtcs,
+ 				       &tegra_plane_funcs, formats,
+-				       num_formats, NULL, type, NULL);
++				       num_formats, linear_modifiers,
++				       type, NULL);
+ 	if (err < 0) {
+ 		kfree(plane);
+ 		return ERR_PTR(err);
+diff --git a/drivers/gpu/drm/tegra/dc.h b/drivers/gpu/drm/tegra/dc.h
+index 29f19c3c61493..455c3fdef8dc9 100644
+--- a/drivers/gpu/drm/tegra/dc.h
++++ b/drivers/gpu/drm/tegra/dc.h
+@@ -696,6 +696,9 @@ int tegra_dc_rgb_exit(struct tegra_dc *dc);
+ 
+ #define DC_WINBUF_START_ADDR_HI			0x80d
+ 
++#define DC_WINBUF_START_ADDR_HI_U		0x80f
++#define DC_WINBUF_START_ADDR_HI_V		0x811
++
+ #define DC_WINBUF_CDE_CONTROL			0x82f
+ #define  ENABLE_SURFACE (1 << 0)
+ 
+@@ -720,6 +723,10 @@ int tegra_dc_rgb_exit(struct tegra_dc *dc);
+ #define DC_WIN_PLANAR_STORAGE			0x709
+ #define PITCH(x) (((x) >> 6) & 0x1fff)
+ 
++#define DC_WIN_PLANAR_STORAGE_UV		0x70a
++#define  PITCH_U(x) ((((x) >> 6) & 0x1fff) <<  0)
++#define  PITCH_V(x) ((((x) >> 6) & 0x1fff) << 16)
++
+ #define DC_WIN_SET_PARAMS			0x70d
+ #define  CLAMP_BEFORE_BLEND (1 << 15)
+ #define  DEGAMMA_NONE (0 << 13)
+diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
+index 0c350b0daab47..fbd00fbfe862e 100644
+--- a/drivers/gpu/drm/tegra/drm.c
++++ b/drivers/gpu/drm/tegra/drm.c
+@@ -1124,8 +1124,6 @@ static int host1x_drm_probe(struct host1x_device *dev)
+ 	drm->mode_config.max_width = 0;
+ 	drm->mode_config.max_height = 0;
+ 
+-	drm->mode_config.allow_fb_modifiers = true;
+-
+ 	drm->mode_config.normalize_zpos = true;
+ 
+ 	drm->mode_config.funcs = &tegra_drm_mode_config_funcs;
+diff --git a/drivers/gpu/drm/tegra/hub.c b/drivers/gpu/drm/tegra/hub.c
+index bfae8a02f55b8..94e1ccfb6235a 100644
+--- a/drivers/gpu/drm/tegra/hub.c
++++ b/drivers/gpu/drm/tegra/hub.c
+@@ -454,7 +454,9 @@ static void tegra_shared_plane_atomic_update(struct drm_plane *plane,
+ 	unsigned int zpos = new_state->normalized_zpos;
+ 	struct drm_framebuffer *fb = new_state->fb;
+ 	struct tegra_plane *p = to_tegra_plane(plane);
+-	dma_addr_t base;
++	dma_addr_t base, addr_flag = 0;
++	unsigned int bpc;
++	bool yuv, planar;
+ 	u32 value;
+ 	int err;
+ 
+@@ -473,6 +475,8 @@ static void tegra_shared_plane_atomic_update(struct drm_plane *plane,
+ 		return;
+ 	}
+ 
++	yuv = tegra_plane_format_is_yuv(tegra_plane_state->format, &planar, &bpc);
++
+ 	tegra_dc_assign_shared_plane(dc, p);
+ 
+ 	tegra_plane_writel(p, VCOUNTER, DC_WIN_CORE_ACT_CONTROL);
+@@ -501,8 +505,6 @@ static void tegra_shared_plane_atomic_update(struct drm_plane *plane,
+ 	/* disable compression */
+ 	tegra_plane_writel(p, 0, DC_WINBUF_CDE_CONTROL);
+ 
+-	base = tegra_plane_state->iova[0] + fb->offsets[0];
+-
+ #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+ 	/*
+ 	 * Physical address bit 39 in Tegra194 is used as a switch for special
+@@ -510,9 +512,12 @@ static void tegra_shared_plane_atomic_update(struct drm_plane *plane,
+ 	 * dGPU sector layout.
+ 	 */
+ 	if (tegra_plane_state->tiling.sector_layout == TEGRA_BO_SECTOR_LAYOUT_GPU)
+-		base |= BIT_ULL(39);
++		addr_flag = BIT_ULL(39);
+ #endif
+ 
++	base = tegra_plane_state->iova[0] + fb->offsets[0];
++	base |= addr_flag;
++
+ 	tegra_plane_writel(p, tegra_plane_state->format, DC_WIN_COLOR_DEPTH);
+ 	tegra_plane_writel(p, 0, DC_WIN_PRECOMP_WGRP_PARAMS);
+ 
+@@ -535,7 +540,44 @@ static void tegra_shared_plane_atomic_update(struct drm_plane *plane,
+ 	value = PITCH(fb->pitches[0]);
+ 	tegra_plane_writel(p, value, DC_WIN_PLANAR_STORAGE);
+ 
+-	value = CLAMP_BEFORE_BLEND | DEGAMMA_SRGB | INPUT_RANGE_FULL;
++	if (yuv && planar) {
++		base = tegra_plane_state->iova[1] + fb->offsets[1];
++		base |= addr_flag;
++
++		tegra_plane_writel(p, upper_32_bits(base), DC_WINBUF_START_ADDR_HI_U);
++		tegra_plane_writel(p, lower_32_bits(base), DC_WINBUF_START_ADDR_U);
++
++		base = tegra_plane_state->iova[2] + fb->offsets[2];
++		base |= addr_flag;
++
++		tegra_plane_writel(p, upper_32_bits(base), DC_WINBUF_START_ADDR_HI_V);
++		tegra_plane_writel(p, lower_32_bits(base), DC_WINBUF_START_ADDR_V);
++
++		value = PITCH_U(fb->pitches[2]) | PITCH_V(fb->pitches[2]);
++		tegra_plane_writel(p, value, DC_WIN_PLANAR_STORAGE_UV);
++	} else {
++		tegra_plane_writel(p, 0, DC_WINBUF_START_ADDR_U);
++		tegra_plane_writel(p, 0, DC_WINBUF_START_ADDR_HI_U);
++		tegra_plane_writel(p, 0, DC_WINBUF_START_ADDR_V);
++		tegra_plane_writel(p, 0, DC_WINBUF_START_ADDR_HI_V);
++		tegra_plane_writel(p, 0, DC_WIN_PLANAR_STORAGE_UV);
++	}
++
++	value = CLAMP_BEFORE_BLEND | INPUT_RANGE_FULL;
++
++	if (yuv) {
++		if (bpc < 12)
++			value |= DEGAMMA_YUV8_10;
++		else
++			value |= DEGAMMA_YUV12;
++
++		/* XXX parameterize */
++		value |= COLOR_SPACE_YUV_2020;
++	} else {
++		if (!tegra_plane_format_is_indexed(tegra_plane_state->format))
++			value |= DEGAMMA_SRGB;
++	}
++
+ 	tegra_plane_writel(p, value, DC_WIN_SET_PARAMS);
+ 
+ 	value = OFFSET_X(new_state->src_y >> 16) |
+diff --git a/drivers/gpu/drm/tegra/plane.c b/drivers/gpu/drm/tegra/plane.c
+index 2e11b4b1f7025..2e65b4075ce6c 100644
+--- a/drivers/gpu/drm/tegra/plane.c
++++ b/drivers/gpu/drm/tegra/plane.c
+@@ -375,7 +375,20 @@ int tegra_plane_format(u32 fourcc, u32 *format, u32 *swap)
+ 	return 0;
+ }
+ 
+-bool tegra_plane_format_is_yuv(unsigned int format, bool *planar)
++bool tegra_plane_format_is_indexed(unsigned int format)
++{
++	switch (format) {
++	case WIN_COLOR_DEPTH_P1:
++	case WIN_COLOR_DEPTH_P2:
++	case WIN_COLOR_DEPTH_P4:
++	case WIN_COLOR_DEPTH_P8:
++		return true;
++	}
++
++	return false;
++}
++
++bool tegra_plane_format_is_yuv(unsigned int format, bool *planar, unsigned int *bpc)
+ {
+ 	switch (format) {
+ 	case WIN_COLOR_DEPTH_YCbCr422:
+@@ -383,6 +396,9 @@ bool tegra_plane_format_is_yuv(unsigned int format, bool *planar)
+ 		if (planar)
+ 			*planar = false;
+ 
++		if (bpc)
++			*bpc = 8;
++
+ 		return true;
+ 
+ 	case WIN_COLOR_DEPTH_YCbCr420P:
+@@ -396,6 +412,9 @@ bool tegra_plane_format_is_yuv(unsigned int format, bool *planar)
+ 		if (planar)
+ 			*planar = true;
+ 
++		if (bpc)
++			*bpc = 8;
++
+ 		return true;
+ 	}
+ 
+@@ -421,7 +440,7 @@ static bool __drm_format_has_alpha(u32 format)
+ static int tegra_plane_format_get_alpha(unsigned int opaque,
+ 					unsigned int *alpha)
+ {
+-	if (tegra_plane_format_is_yuv(opaque, NULL)) {
++	if (tegra_plane_format_is_yuv(opaque, NULL, NULL)) {
+ 		*alpha = opaque;
+ 		return 0;
+ 	}
+diff --git a/drivers/gpu/drm/tegra/plane.h b/drivers/gpu/drm/tegra/plane.h
+index c691dd79b27b1..1785c1559c0ce 100644
+--- a/drivers/gpu/drm/tegra/plane.h
++++ b/drivers/gpu/drm/tegra/plane.h
+@@ -74,7 +74,8 @@ int tegra_plane_state_add(struct tegra_plane *plane,
+ 			  struct drm_plane_state *state);
+ 
+ int tegra_plane_format(u32 fourcc, u32 *format, u32 *swap);
+-bool tegra_plane_format_is_yuv(unsigned int format, bool *planar);
++bool tegra_plane_format_is_indexed(unsigned int format);
++bool tegra_plane_format_is_yuv(unsigned int format, bool *planar, unsigned int *bpc);
+ int tegra_plane_setup_legacy_state(struct tegra_plane *tegra,
+ 				   struct tegra_plane_state *state);
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 1f36b67cd6ce9..18f5009ce90e3 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -1035,7 +1035,7 @@ static const struct vc4_pv_data bcm2711_pv3_data = {
+ 	.fifo_depth = 64,
+ 	.pixels_per_clock = 1,
+ 	.encoder_types = {
+-		[0] = VC4_ENCODER_TYPE_VEC,
++		[PV_CONTROL_CLK_SELECT_VEC] = VC4_ENCODER_TYPE_VEC,
+ 	},
+ };
+ 
+@@ -1076,6 +1076,9 @@ static void vc4_set_crtc_possible_masks(struct drm_device *drm,
+ 		struct vc4_encoder *vc4_encoder;
+ 		int i;
+ 
++		if (encoder->encoder_type == DRM_MODE_ENCODER_VIRTUAL)
++			continue;
++
+ 		vc4_encoder = to_vc4_encoder(encoder);
+ 		for (i = 0; i < ARRAY_SIZE(pv_data->encoder_types); i++) {
+ 			if (vc4_encoder->type == encoder_types[i]) {
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
+index a7500716cf3f1..5dceadc61600e 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -825,7 +825,7 @@ void vc4_crtc_destroy_state(struct drm_crtc *crtc,
+ void vc4_crtc_reset(struct drm_crtc *crtc);
+ void vc4_crtc_handle_vblank(struct vc4_crtc *crtc);
+ void vc4_crtc_get_margins(struct drm_crtc_state *state,
+-			  unsigned int *right, unsigned int *left,
++			  unsigned int *left, unsigned int *right,
+ 			  unsigned int *top, unsigned int *bottom);
+ 
+ /* vc4_debugfs.c */
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index e94730beb15b7..188b74c9e9fff 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -745,7 +745,7 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder,
+ 	unsigned long pixel_rate, hsm_rate;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(&vc4_hdmi->pdev->dev);
++	ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
+ 	if (ret < 0) {
+ 		DRM_ERROR("Failed to retain power domain: %d\n", ret);
+ 		return;
+@@ -2012,6 +2012,14 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data)
+ 	if (vc4_hdmi->variant->reset)
+ 		vc4_hdmi->variant->reset(vc4_hdmi);
+ 
++	if ((of_device_is_compatible(dev->of_node, "brcm,bcm2711-hdmi0") ||
++	     of_device_is_compatible(dev->of_node, "brcm,bcm2711-hdmi1")) &&
++	    HDMI_READ(HDMI_VID_CTL) & VC4_HD_VID_CTL_ENABLE) {
++		clk_prepare_enable(vc4_hdmi->pixel_clock);
++		clk_prepare_enable(vc4_hdmi->hsm_clock);
++		clk_prepare_enable(vc4_hdmi->pixel_bvb_clock);
++	}
++
+ 	pm_runtime_enable(dev);
+ 
+ 	drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS);
+diff --git a/drivers/gpu/drm/vc4/vc4_txp.c b/drivers/gpu/drm/vc4/vc4_txp.c
+index c0122d83b6511..2fc7f4b5fa098 100644
+--- a/drivers/gpu/drm/vc4/vc4_txp.c
++++ b/drivers/gpu/drm/vc4/vc4_txp.c
+@@ -507,7 +507,7 @@ static int vc4_txp_bind(struct device *dev, struct device *master, void *data)
+ 		return ret;
+ 
+ 	encoder = &txp->connector.encoder;
+-	encoder->possible_crtcs |= drm_crtc_mask(crtc);
++	encoder->possible_crtcs = drm_crtc_mask(crtc);
+ 
+ 	ret = devm_request_irq(dev, irq, vc4_txp_interrupt, 0,
+ 			       dev_name(dev), txp);
+diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
+index b375394193be8..37a21a88d674c 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
++++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
+@@ -234,6 +234,7 @@ err_scanouts:
+ err_vbufs:
+ 	vgdev->vdev->config->del_vqs(vgdev->vdev);
+ err_vqs:
++	dev->dev_private = NULL;
+ 	kfree(vgdev);
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/zte/Kconfig b/drivers/gpu/drm/zte/Kconfig
+index 90ebaedc11fdf..aa8594190b509 100644
+--- a/drivers/gpu/drm/zte/Kconfig
++++ b/drivers/gpu/drm/zte/Kconfig
+@@ -3,7 +3,6 @@ config DRM_ZTE
+ 	tristate "DRM Support for ZTE SoCs"
+ 	depends on DRM && ARCH_ZX
+ 	select DRM_KMS_CMA_HELPER
+-	select DRM_KMS_FB_HELPER
+ 	select DRM_KMS_HELPER
+ 	select SND_SOC_HDMI_CODEC if SND_SOC
+ 	select VIDEOMODE_HELPERS
+diff --git a/drivers/gpu/ipu-v3/ipu-dc.c b/drivers/gpu/ipu-v3/ipu-dc.c
+index 34b4075a6a8e5..ca96b235491a9 100644
+--- a/drivers/gpu/ipu-v3/ipu-dc.c
++++ b/drivers/gpu/ipu-v3/ipu-dc.c
+@@ -167,6 +167,11 @@ int ipu_dc_init_sync(struct ipu_dc *dc, struct ipu_di *di, bool interlaced,
+ 
+ 	dc->di = ipu_di_get_num(di);
+ 
++	if (!IS_ALIGNED(width, 8)) {
++		dev_warn(priv->dev,
++			 "%s: hactive does not align to 8 byte\n", __func__);
++	}
++
+ 	map = ipu_bus_format_to_map(bus_format);
+ 
+ 	/*
+diff --git a/drivers/gpu/ipu-v3/ipu-di.c b/drivers/gpu/ipu-v3/ipu-di.c
+index e617f60afeea3..666223c6bec4d 100644
+--- a/drivers/gpu/ipu-v3/ipu-di.c
++++ b/drivers/gpu/ipu-v3/ipu-di.c
+@@ -506,6 +506,13 @@ int ipu_di_adjust_videomode(struct ipu_di *di, struct videomode *mode)
+ {
+ 	u32 diff;
+ 
++	if (!IS_ALIGNED(mode->hactive, 8) &&
++	    mode->hfront_porch < ALIGN(mode->hactive, 8) - mode->hactive) {
++		dev_err(di->ipu->dev, "hactive %d is not aligned to 8 and front porch is too small to compensate\n",
++			mode->hactive);
++		return -EINVAL;
++	}
++
+ 	if (mode->vfront_porch >= 2)
+ 		return 0;
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index 4ddf3d2338443..3cfefcdb0dfd1 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -1392,7 +1392,7 @@ static int coresight_fixup_device_conns(struct coresight_device *csdev)
+ 		}
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int coresight_remove_match(struct device *dev, void *data)
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+index 45b85edfc6900..cd0fb7bfba684 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+@@ -530,7 +530,7 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
+ 		buf_ptr = buf->data_pages[cur] + offset;
+ 		*buf_ptr = readl_relaxed(drvdata->base + TMC_RRD);
+ 
+-		if (lost && *barrier) {
++		if (lost && i < CORESIGHT_BARRIER_PKT_SIZE) {
+ 			*buf_ptr = *barrier;
+ 			barrier++;
+ 		}
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index ad9a9ba5f00d1..5d3b8b8d163d6 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -2829,7 +2829,8 @@ static int cma_resolve_ib_route(struct rdma_id_private *id_priv,
+ 
+ 	cma_init_resolve_route_work(work, id_priv);
+ 
+-	route->path_rec = kmalloc(sizeof *route->path_rec, GFP_KERNEL);
++	if (!route->path_rec)
++		route->path_rec = kmalloc(sizeof *route->path_rec, GFP_KERNEL);
+ 	if (!route->path_rec) {
+ 		ret = -ENOMEM;
+ 		goto err1;
+diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
+index d109bb3822a5f..c9403743346e1 100644
+--- a/drivers/infiniband/hw/cxgb4/qp.c
++++ b/drivers/infiniband/hw/cxgb4/qp.c
+@@ -295,6 +295,7 @@ static int create_qp(struct c4iw_rdev *rdev, struct t4_wq *wq,
+ 	if (user && (!wq->sq.bar2_pa || (need_rq && !wq->rq.bar2_pa))) {
+ 		pr_warn("%s: sqid %u or rqid %u not in BAR2 range\n",
+ 			pci_name(rdev->lldi.pdev), wq->sq.qid, wq->rq.qid);
++		ret = -EINVAL;
+ 		goto free_dma;
+ 	}
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
+index 9f63947bab123..fe2b7d223183f 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mr.c
++++ b/drivers/infiniband/sw/rxe/rxe_mr.c
+@@ -135,7 +135,7 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+ 	if (IS_ERR(umem)) {
+ 		pr_warn("err %d from rxe_umem_get\n",
+ 			(int)PTR_ERR(umem));
+-		err = -EINVAL;
++		err = PTR_ERR(umem);
+ 		goto err1;
+ 	}
+ 
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 18266f07c58d9..de3fc05fd2e8a 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -35,10 +35,10 @@ static const struct kernel_param_ops sg_tablesize_ops = {
+ 	.get = param_get_int,
+ };
+ 
+-static int isert_sg_tablesize = ISCSI_ISER_DEF_SG_TABLESIZE;
++static int isert_sg_tablesize = ISCSI_ISER_MIN_SG_TABLESIZE;
+ module_param_cb(sg_tablesize, &sg_tablesize_ops, &isert_sg_tablesize, 0644);
+ MODULE_PARM_DESC(sg_tablesize,
+-		 "Number of gather/scatter entries in a single scsi command, should >= 128 (default: 256, max: 4096)");
++		 "Number of gather/scatter entries in a single scsi command, should >= 128 (default: 128, max: 4096)");
+ 
+ static DEFINE_MUTEX(device_list_mutex);
+ static LIST_HEAD(device_list);
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.h b/drivers/infiniband/ulp/isert/ib_isert.h
+index 6c5af13db4e0d..ca8cfebe26ca7 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.h
++++ b/drivers/infiniband/ulp/isert/ib_isert.h
+@@ -65,9 +65,6 @@
+  */
+ #define ISER_RX_SIZE		(ISCSI_DEF_MAX_RECV_SEG_LEN + 1024)
+ 
+-/* Default I/O size is 1MB */
+-#define ISCSI_ISER_DEF_SG_TABLESIZE 256
+-
+ /* Minimum I/O size is 512KB */
+ #define ISCSI_ISER_MIN_SG_TABLESIZE 128
+ 
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-pri.h b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+index 86e65cf30cab0..d957bbf1ddd36 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-pri.h
++++ b/drivers/infiniband/ulp/rtrs/rtrs-pri.h
+@@ -47,12 +47,15 @@ enum {
+ 	MAX_PATHS_NUM = 128,
+ 
+ 	/*
+-	 * With the size of struct rtrs_permit allocated on the client, 4K
+-	 * is the maximum number of rtrs_permits we can allocate. This number is
+-	 * also used on the client to allocate the IU for the user connection
+-	 * to receive the RDMA addresses from the server.
++	 * Max IB immediate data size is 2^28 (MAX_IMM_PAYL_BITS)
++	 * and the minimum chunk size is 4096 (2^12).
++	 * So the maximum sess_queue_depth is 65536 (2^16) in theory.
++	 * But mempool_create, create_qp and ib_post_send fail with
++	 * "cannot allocate memory" error if sess_queue_depth is too big.
++	 * Therefore the practical max value of sess_queue_depth is
++	 * somewhere between 1 and 65536, and it depends on the system.
+ 	 */
+-	MAX_SESS_QUEUE_DEPTH = 4096,
++	MAX_SESS_QUEUE_DEPTH = 65536,
+ 
+ 	RTRS_HB_INTERVAL_MS = 5000,
+ 	RTRS_HB_MISSED_MAX = 5,
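+
+A minimal userspace sketch of the arithmetic behind the new
+MAX_SESS_QUEUE_DEPTH, using only the two constants quoted in the comment
+above (illustrative only, not part of the applied patch):
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		unsigned long max_imm_payload = 1UL << 28; /* 2^28 immediate data */
+		unsigned long min_chunk_size  = 1UL << 12; /* 4096-byte chunks */
+
+		/* 2^28 / 2^12 = 2^16 = 65536, the new theoretical maximum */
+		printf("max sess_queue_depth = %lu\n",
+		       max_imm_payload / min_chunk_size);
+		return 0;
+	}
+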
+diff --git a/drivers/ipack/carriers/tpci200.c b/drivers/ipack/carriers/tpci200.c
+index ec71063fff76a..e1822e87ec3d2 100644
+--- a/drivers/ipack/carriers/tpci200.c
++++ b/drivers/ipack/carriers/tpci200.c
+@@ -596,8 +596,11 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 
+ out_err_bus_register:
+ 	tpci200_uninstall(tpci200);
++	/* tpci200->info->cfg_regs is unmapped in tpci200_uninstall */
++	tpci200->info->cfg_regs = NULL;
+ out_err_install:
+-	iounmap(tpci200->info->cfg_regs);
++	if (tpci200->info->cfg_regs)
++		iounmap(tpci200->info->cfg_regs);
+ out_err_ioremap:
+ 	pci_release_region(pdev, TPCI200_CFG_MEM_BAR);
+ out_err_pci_request:
+diff --git a/drivers/isdn/hardware/mISDN/hfcpci.c b/drivers/isdn/hardware/mISDN/hfcpci.c
+index 56bd2e9db6ed6..e501cb03f211d 100644
+--- a/drivers/isdn/hardware/mISDN/hfcpci.c
++++ b/drivers/isdn/hardware/mISDN/hfcpci.c
+@@ -2342,7 +2342,7 @@ static void __exit
+ HFC_cleanup(void)
+ {
+ 	if (timer_pending(&hfc_tl))
+-		del_timer(&hfc_tl);
++		del_timer_sync(&hfc_tl);
+ 
+ 	pci_unregister_driver(&hfc_driver);
+ }
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index aecc246ade263..8f891d838cc68 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -532,7 +532,7 @@ static void ssd_commit_superblock(struct dm_writecache *wc)
+ 
+ 	region.bdev = wc->ssd_dev->bdev;
+ 	region.sector = 0;
+-	region.count = PAGE_SIZE >> SECTOR_SHIFT;
++	region.count = max(4096U, wc->block_size) >> SECTOR_SHIFT;
+ 
+ 	if (unlikely(region.sector + region.count > wc->metadata_sectors))
+ 		region.count = wc->metadata_sectors - region.sector;
+@@ -1301,8 +1301,12 @@ static int writecache_map(struct dm_target *ti, struct bio *bio)
+ 			writecache_flush(wc);
+ 			if (writecache_has_error(wc))
+ 				goto unlock_error;
++			if (unlikely(wc->cleaner))
++				goto unlock_remap_origin;
+ 			goto unlock_submit;
+ 		} else {
++			if (dm_bio_get_target_bio_nr(bio))
++				goto unlock_remap_origin;
+ 			writecache_offload_bio(wc, bio);
+ 			goto unlock_return;
+ 		}
+@@ -1360,14 +1364,18 @@ read_next_block:
+ 	} else {
+ 		do {
+ 			bool found_entry = false;
++			bool search_used = false;
+ 			if (writecache_has_error(wc))
+ 				goto unlock_error;
+ 			e = writecache_find_entry(wc, bio->bi_iter.bi_sector, 0);
+ 			if (e) {
+-				if (!writecache_entry_is_committed(wc, e))
++				if (!writecache_entry_is_committed(wc, e)) {
++					search_used = true;
+ 					goto bio_copy;
++				}
+ 				if (!WC_MODE_PMEM(wc) && !e->write_in_progress) {
+ 					wc->overwrote_committed = true;
++					search_used = true;
+ 					goto bio_copy;
+ 				}
+ 				found_entry = true;
+@@ -1377,7 +1385,7 @@ read_next_block:
+ 			}
+ 			e = writecache_pop_from_freelist(wc, (sector_t)-1);
+ 			if (unlikely(!e)) {
+-				if (!found_entry) {
++				if (!WC_MODE_PMEM(wc) && !found_entry) {
+ direct_write:
+ 					e = writecache_find_entry(wc, bio->bi_iter.bi_sector, WFE_RETURN_FOLLOWING);
+ 					if (e) {
+@@ -1404,13 +1412,31 @@ bio_copy:
+ 				sector_t current_cache_sec = start_cache_sec + (bio_size >> SECTOR_SHIFT);
+ 
+ 				while (bio_size < bio->bi_iter.bi_size) {
+-					struct wc_entry *f = writecache_pop_from_freelist(wc, current_cache_sec);
+-					if (!f)
+-						break;
+-					write_original_sector_seq_count(wc, f, bio->bi_iter.bi_sector +
+-									(bio_size >> SECTOR_SHIFT), wc->seq_count);
+-					writecache_insert_entry(wc, f);
+-					wc->uncommitted_blocks++;
++					if (!search_used) {
++						struct wc_entry *f = writecache_pop_from_freelist(wc, current_cache_sec);
++						if (!f)
++							break;
++						write_original_sector_seq_count(wc, f, bio->bi_iter.bi_sector +
++										(bio_size >> SECTOR_SHIFT), wc->seq_count);
++						writecache_insert_entry(wc, f);
++						wc->uncommitted_blocks++;
++					} else {
++						struct wc_entry *f;
++						struct rb_node *next = rb_next(&e->rb_node);
++						if (!next)
++							break;
++						f = container_of(next, struct wc_entry, rb_node);
++						if (f != e + 1)
++							break;
++						if (read_original_sector(wc, f) !=
++						    read_original_sector(wc, e) + (wc->block_size >> SECTOR_SHIFT))
++							break;
++						if (unlikely(f->write_in_progress))
++							break;
++						if (writecache_entry_is_committed(wc, f))
++							wc->overwrote_committed = true;
++						e = f;
++					}
+ 					bio_size += wc->block_size;
+ 					current_cache_sec += wc->block_size >> SECTOR_SHIFT;
+ 				}
+@@ -2463,7 +2489,7 @@ overflow:
+ 		goto bad;
+ 	}
+ 
+-	ti->num_flush_bios = 1;
++	ti->num_flush_bios = WC_MODE_PMEM(wc) ? 1 : 2;
+ 	ti->flush_supported = true;
+ 	ti->num_discard_bios = 1;
+ 
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index 039d17b289384..ee4626d085574 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -1390,6 +1390,13 @@ static int dmz_init_zone(struct blk_zone *blkz, unsigned int num, void *data)
+ 		return -ENXIO;
+ 	}
+ 
++	/*
++	 * Devices that have zones with a capacity smaller than the zone size
++	 * (e.g. NVMe zoned namespaces) are not supported.
++	 */
++	if (blkz->capacity != blkz->len)
++		return -ENXIO;
++
+ 	switch (blkz->type) {
+ 	case BLK_ZONE_TYPE_CONVENTIONAL:
+ 		set_bit(DMZ_RND, &zone->flags);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index ca2aedd8ee7d1..11af200806391 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1237,8 +1237,8 @@ static int dm_dax_zero_page_range(struct dax_device *dax_dev, pgoff_t pgoff,
+ 
+ /*
+  * A target may call dm_accept_partial_bio only from the map routine.  It is
+- * allowed for all bio types except REQ_PREFLUSH, REQ_OP_ZONE_RESET,
+- * REQ_OP_ZONE_OPEN, REQ_OP_ZONE_CLOSE and REQ_OP_ZONE_FINISH.
++ * allowed for all bio types except REQ_PREFLUSH, REQ_OP_ZONE_* zone management
++ * operations and REQ_OP_ZONE_APPEND (zone append writes).
+  *
+  * dm_accept_partial_bio informs the dm that the target only wants to process
+  * additional n_sectors sectors of the bio and the rest of the data should be
+@@ -1268,9 +1268,13 @@ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors)
+ {
+ 	struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
+ 	unsigned bi_size = bio->bi_iter.bi_size >> SECTOR_SHIFT;
++
+ 	BUG_ON(bio->bi_opf & REQ_PREFLUSH);
++	BUG_ON(op_is_zone_mgmt(bio_op(bio)));
++	BUG_ON(bio_op(bio) == REQ_OP_ZONE_APPEND);
+ 	BUG_ON(bi_size > *tio->len_ptr);
+ 	BUG_ON(n_sectors > bi_size);
++
+ 	*tio->len_ptr -= bi_size - n_sectors;
+ 	bio->bi_iter.bi_size = n_sectors << SECTOR_SHIFT;
+ }
+diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c
+index eff04fa23dfad..9e4d1212f4c16 100644
+--- a/drivers/md/persistent-data/dm-btree-remove.c
++++ b/drivers/md/persistent-data/dm-btree-remove.c
+@@ -549,7 +549,8 @@ int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
+ 		delete_at(n, index);
+ 	}
+ 
+-	*new_root = shadow_root(&spine);
++	if (!r)
++		*new_root = shadow_root(&spine);
+ 	exit_shadow_spine(&spine);
+ 
+ 	return r;
+diff --git a/drivers/md/persistent-data/dm-space-map-disk.c b/drivers/md/persistent-data/dm-space-map-disk.c
+index 61f56909e00be..4f8069bb04816 100644
+--- a/drivers/md/persistent-data/dm-space-map-disk.c
++++ b/drivers/md/persistent-data/dm-space-map-disk.c
+@@ -171,6 +171,14 @@ static int sm_disk_new_block(struct dm_space_map *sm, dm_block_t *b)
+ 	 * Any block we allocate has to be free in both the old and current ll.
+ 	 */
+ 	r = sm_ll_find_common_free_block(&smd->old_ll, &smd->ll, smd->begin, smd->ll.nr_blocks, b);
++	if (r == -ENOSPC) {
++		/*
++		 * There's no free block between smd->begin and the end of the metadata device.
++		 * We search before smd->begin in case something has been freed.
++		 */
++		r = sm_ll_find_common_free_block(&smd->old_ll, &smd->ll, 0, smd->begin, b);
++	}
++
+ 	if (r)
+ 		return r;
+ 
+@@ -194,7 +202,6 @@ static int sm_disk_commit(struct dm_space_map *sm)
+ 		return r;
+ 
+ 	memcpy(&smd->old_ll, &smd->ll, sizeof(smd->old_ll));
+-	smd->begin = 0;
+ 	smd->nr_allocated_this_transaction = 0;
+ 
+ 	return 0;
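+
+The allocation strategy in the sm_disk_new_block() hunk above reduces to a
+two-pass cursor search with wrap-around; a toy model (not the dm code, and
+using a plain array instead of the on-disk bitmaps):
+
+	#include <stdio.h>
+
+	/* Scan [begin, nr) first, then fall back to [0, begin) in case a
+	 * block below the cursor was freed; only then report no space. */
+	static int find_free(const int *used, int nr, int begin, int *out)
+	{
+		int i;
+
+		for (i = begin; i < nr; i++)
+			if (!used[i]) { *out = i; return 0; }
+		for (i = 0; i < begin; i++) /* the -ENOSPC fallback */
+			if (!used[i]) { *out = i; return 0; }
+		return -1; /* genuinely out of space */
+	}
+
+	int main(void)
+	{
+		int used[4] = { 0, 1, 1, 1 }, b;
+
+		/* cursor at 2: the old code failed; wrapping finds block 0 */
+		printf("%d\n", find_free(used, 4, 2, &b) ? -1 : b);
+		return 0;
+	}
+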
+diff --git a/drivers/md/persistent-data/dm-space-map-metadata.c b/drivers/md/persistent-data/dm-space-map-metadata.c
+index 9e3c64ec2026f..da439ac857963 100644
+--- a/drivers/md/persistent-data/dm-space-map-metadata.c
++++ b/drivers/md/persistent-data/dm-space-map-metadata.c
+@@ -452,6 +452,14 @@ static int sm_metadata_new_block_(struct dm_space_map *sm, dm_block_t *b)
+ 	 * Any block we allocate has to be free in both the old and current ll.
+ 	 */
+ 	r = sm_ll_find_common_free_block(&smm->old_ll, &smm->ll, smm->begin, smm->ll.nr_blocks, b);
++	if (r == -ENOSPC) {
++		/*
++		 * There's no free block between smm->begin and the end of the metadata device.
++		 * We search before smm->begin in case something has been freed.
++		 */
++		r = sm_ll_find_common_free_block(&smm->old_ll, &smm->ll, 0, smm->begin, b);
++	}
++
+ 	if (r)
+ 		return r;
+ 
+@@ -503,7 +511,6 @@ static int sm_metadata_commit(struct dm_space_map *sm)
+ 		return r;
+ 
+ 	memcpy(&smm->old_ll, &smm->ll, sizeof(smm->old_ll));
+-	smm->begin = 0;
+ 	smm->allocated_this_transaction = 0;
+ 
+ 	return 0;
+diff --git a/drivers/media/i2c/ccs/ccs-core.c b/drivers/media/i2c/ccs/ccs-core.c
+index b05f409014b2f..4a848ac2d2cd2 100644
+--- a/drivers/media/i2c/ccs/ccs-core.c
++++ b/drivers/media/i2c/ccs/ccs-core.c
+@@ -1880,21 +1880,33 @@ static int ccs_pm_get_init(struct ccs_sensor *sensor)
+ 	struct i2c_client *client = v4l2_get_subdevdata(&sensor->src->sd);
+ 	int rval;
+ 
++	/*
++	 * We can't use pm_runtime_resume_and_get() here, as the driver
++	 * relies on the returned value to detect whether the device was
++	 * already active or not.
++	 */
+ 	rval = pm_runtime_get_sync(&client->dev);
+-	if (rval < 0) {
+-		pm_runtime_put_noidle(&client->dev);
++	if (rval < 0)
++		goto error;
+ 
+-		return rval;
+-	} else if (!rval) {
+-		rval = v4l2_ctrl_handler_setup(&sensor->pixel_array->
+-					       ctrl_handler);
+-		if (rval)
+-			return rval;
++	/* Device was already active, so don't set controls */
++	if (rval == 1)
++		return 0;
+ 
+-		return v4l2_ctrl_handler_setup(&sensor->src->ctrl_handler);
+-	}
++	/* Restore V4L2 controls to the previously suspended device */
++	rval = v4l2_ctrl_handler_setup(&sensor->pixel_array->ctrl_handler);
++	if (rval)
++		goto error;
+ 
++	rval = v4l2_ctrl_handler_setup(&sensor->src->ctrl_handler);
++	if (rval)
++		goto error;
++
++	/* Keep PM runtime usage_count incremented on success */
+ 	return 0;
++error:
++	pm_runtime_put(&client->dev);
++	return rval;
+ }
+ 
+ static int ccs_set_stream(struct v4l2_subdev *subdev, int enable)
+diff --git a/drivers/media/i2c/ccs/ccs-limits.c b/drivers/media/i2c/ccs/ccs-limits.c
+index f5511789ac837..4969fa425317d 100644
+--- a/drivers/media/i2c/ccs/ccs-limits.c
++++ b/drivers/media/i2c/ccs/ccs-limits.c
+@@ -1,5 +1,9 @@
+ // SPDX-License-Identifier: GPL-2.0-only OR BSD-3-Clause
+ /* Copyright (C) 2019--2020 Intel Corporation */
++/*
++ * Generated by Documentation/driver-api/media/drivers/ccs/mk-ccs-regs;
++ * do not modify.
++ */
+ 
+ #include "ccs-limits.h"
+ #include "ccs-regs.h"
+diff --git a/drivers/media/i2c/ccs/ccs-limits.h b/drivers/media/i2c/ccs/ccs-limits.h
+index 1efa43c23a2eb..551d3ee9d04e1 100644
+--- a/drivers/media/i2c/ccs/ccs-limits.h
++++ b/drivers/media/i2c/ccs/ccs-limits.h
+@@ -1,5 +1,9 @@
+ /* SPDX-License-Identifier: GPL-2.0-only OR BSD-3-Clause */
+ /* Copyright (C) 2019--2020 Intel Corporation */
++/*
++ * Generated by Documentation/driver-api/media/drivers/ccs/mk-ccs-regs;
++ * do not modify.
++ */
+ 
+ #ifndef __CCS_LIMITS_H__
+ #define __CCS_LIMITS_H__
+diff --git a/drivers/media/i2c/ccs/ccs-regs.h b/drivers/media/i2c/ccs/ccs-regs.h
+index 4b3e5df2121f8..6ce84c5ecf207 100644
+--- a/drivers/media/i2c/ccs/ccs-regs.h
++++ b/drivers/media/i2c/ccs/ccs-regs.h
+@@ -1,5 +1,9 @@
+ /* SPDX-License-Identifier: GPL-2.0-only OR BSD-3-Clause */
+ /* Copyright (C) 2019--2020 Intel Corporation */
++/*
++ * Generated by Documentation/driver-api/media/drivers/ccs/mk-ccs-regs;
++ * do not modify.
++ */
+ 
+ #ifndef __CCS_REGS_H__
+ #define __CCS_REGS_H__
+@@ -202,7 +206,7 @@
+ #define CCS_R_OP_PIX_CLK_DIV					(0x0308 | CCS_FL_16BIT)
+ #define CCS_R_OP_SYS_CLK_DIV					(0x030a | CCS_FL_16BIT)
+ #define CCS_R_OP_PRE_PLL_CLK_DIV				(0x030c | CCS_FL_16BIT)
+-#define CCS_R_OP_PLL_MULTIPLIER					(0x031e | CCS_FL_16BIT)
++#define CCS_R_OP_PLL_MULTIPLIER					(0x030e | CCS_FL_16BIT)
+ #define CCS_R_PLL_MODE						0x0310
+ #define CCS_PLL_MODE_SHIFT					0U
+ #define CCS_PLL_MODE_MASK					0x1
+diff --git a/drivers/media/i2c/saa6588.c b/drivers/media/i2c/saa6588.c
+index ecb491d5f2ab8..d1e0716bdfffd 100644
+--- a/drivers/media/i2c/saa6588.c
++++ b/drivers/media/i2c/saa6588.c
+@@ -380,7 +380,7 @@ static void saa6588_configure(struct saa6588 *s)
+ 
+ /* ---------------------------------------------------------------------- */
+ 
+-static long saa6588_ioctl(struct v4l2_subdev *sd, unsigned int cmd, void *arg)
++static long saa6588_command(struct v4l2_subdev *sd, unsigned int cmd, void *arg)
+ {
+ 	struct saa6588 *s = to_saa6588(sd);
+ 	struct saa6588_command *a = arg;
+@@ -433,7 +433,7 @@ static int saa6588_s_tuner(struct v4l2_subdev *sd, const struct v4l2_tuner *vt)
+ /* ----------------------------------------------------------------------- */
+ 
+ static const struct v4l2_subdev_core_ops saa6588_core_ops = {
+-	.ioctl = saa6588_ioctl,
++	.command = saa6588_command,
+ };
+ 
+ static const struct v4l2_subdev_tuner_ops saa6588_tuner_ops = {
+diff --git a/drivers/media/pci/bt8xx/bttv-driver.c b/drivers/media/pci/bt8xx/bttv-driver.c
+index 1f62a9d8ea1d3..0e9df8b35ac66 100644
+--- a/drivers/media/pci/bt8xx/bttv-driver.c
++++ b/drivers/media/pci/bt8xx/bttv-driver.c
+@@ -3179,7 +3179,7 @@ static int radio_release(struct file *file)
+ 
+ 	btv->radio_user--;
+ 
+-	bttv_call_all(btv, core, ioctl, SAA6588_CMD_CLOSE, &cmd);
++	bttv_call_all(btv, core, command, SAA6588_CMD_CLOSE, &cmd);
+ 
+ 	if (btv->radio_user == 0)
+ 		btv->has_radio_tuner = 0;
+@@ -3260,7 +3260,7 @@ static ssize_t radio_read(struct file *file, char __user *data,
+ 	cmd.result = -ENODEV;
+ 	radio_enable(btv);
+ 
+-	bttv_call_all(btv, core, ioctl, SAA6588_CMD_READ, &cmd);
++	bttv_call_all(btv, core, command, SAA6588_CMD_READ, &cmd);
+ 
+ 	return cmd.result;
+ }
+@@ -3281,7 +3281,7 @@ static __poll_t radio_poll(struct file *file, poll_table *wait)
+ 	cmd.instance = file;
+ 	cmd.event_list = wait;
+ 	cmd.poll_mask = res;
+-	bttv_call_all(btv, core, ioctl, SAA6588_CMD_POLL, &cmd);
++	bttv_call_all(btv, core, command, SAA6588_CMD_POLL, &cmd);
+ 
+ 	return cmd.poll_mask;
+ }
+diff --git a/drivers/media/pci/saa7134/saa7134-video.c b/drivers/media/pci/saa7134/saa7134-video.c
+index 0f9d6b9edb90a..374c8e1087de1 100644
+--- a/drivers/media/pci/saa7134/saa7134-video.c
++++ b/drivers/media/pci/saa7134/saa7134-video.c
+@@ -1181,7 +1181,7 @@ static int video_release(struct file *file)
+ 
+ 	saa_call_all(dev, tuner, standby);
+ 	if (vdev->vfl_type == VFL_TYPE_RADIO)
+-		saa_call_all(dev, core, ioctl, SAA6588_CMD_CLOSE, &cmd);
++		saa_call_all(dev, core, command, SAA6588_CMD_CLOSE, &cmd);
+ 	mutex_unlock(&dev->lock);
+ 
+ 	return 0;
+@@ -1200,7 +1200,7 @@ static ssize_t radio_read(struct file *file, char __user *data,
+ 	cmd.result = -ENODEV;
+ 
+ 	mutex_lock(&dev->lock);
+-	saa_call_all(dev, core, ioctl, SAA6588_CMD_READ, &cmd);
++	saa_call_all(dev, core, command, SAA6588_CMD_READ, &cmd);
+ 	mutex_unlock(&dev->lock);
+ 
+ 	return cmd.result;
+@@ -1216,7 +1216,7 @@ static __poll_t radio_poll(struct file *file, poll_table *wait)
+ 	cmd.event_list = wait;
+ 	cmd.poll_mask = 0;
+ 	mutex_lock(&dev->lock);
+-	saa_call_all(dev, core, ioctl, SAA6588_CMD_POLL, &cmd);
++	saa_call_all(dev, core, command, SAA6588_CMD_POLL, &cmd);
+ 	mutex_unlock(&dev->lock);
+ 
+ 	return rc | cmd.poll_mask;
+diff --git a/drivers/media/platform/davinci/vpbe_display.c b/drivers/media/platform/davinci/vpbe_display.c
+index d19bad997f30c..bf3c3e76b9213 100644
+--- a/drivers/media/platform/davinci/vpbe_display.c
++++ b/drivers/media/platform/davinci/vpbe_display.c
+@@ -47,7 +47,7 @@ static int venc_is_second_field(struct vpbe_display *disp_dev)
+ 
+ 	ret = v4l2_subdev_call(vpbe_dev->venc,
+ 			       core,
+-			       ioctl,
++			       command,
+ 			       VENC_GET_FLD,
+ 			       &val);
+ 	if (ret < 0) {
+diff --git a/drivers/media/platform/davinci/vpbe_venc.c b/drivers/media/platform/davinci/vpbe_venc.c
+index 8caa084e57046..bde241c26d795 100644
+--- a/drivers/media/platform/davinci/vpbe_venc.c
++++ b/drivers/media/platform/davinci/vpbe_venc.c
+@@ -521,9 +521,7 @@ static int venc_s_routing(struct v4l2_subdev *sd, u32 input, u32 output,
+ 	return ret;
+ }
+ 
+-static long venc_ioctl(struct v4l2_subdev *sd,
+-			unsigned int cmd,
+-			void *arg)
++static long venc_command(struct v4l2_subdev *sd, unsigned int cmd, void *arg)
+ {
+ 	u32 val;
+ 
+@@ -542,7 +540,7 @@ static long venc_ioctl(struct v4l2_subdev *sd,
+ }
+ 
+ static const struct v4l2_subdev_core_ops venc_core_ops = {
+-	.ioctl      = venc_ioctl,
++	.command      = venc_command,
+ };
+ 
+ static const struct v4l2_subdev_video_ops venc_video_ops = {
+diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c
+index 3fe3edd808765..afae0afe3f810 100644
+--- a/drivers/media/rc/bpf-lirc.c
++++ b/drivers/media/rc/bpf-lirc.c
+@@ -326,7 +326,8 @@ int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
+ 	}
+ 
+ 	if (attr->query.prog_cnt != 0 && prog_ids && cnt)
+-		ret = bpf_prog_array_copy_to_user(progs, prog_ids, cnt);
++		ret = bpf_prog_array_copy_to_user(progs, prog_ids,
++						  attr->query.prog_cnt);
+ 
+ unlock:
+ 	mutex_unlock(&ir_raw_handler_lock);
+diff --git a/drivers/media/usb/dvb-usb/dtv5100.c b/drivers/media/usb/dvb-usb/dtv5100.c
+index fba06932a9e0e..1c13e493322cc 100644
+--- a/drivers/media/usb/dvb-usb/dtv5100.c
++++ b/drivers/media/usb/dvb-usb/dtv5100.c
+@@ -26,6 +26,7 @@ static int dtv5100_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 			   u8 *wbuf, u16 wlen, u8 *rbuf, u16 rlen)
+ {
+ 	struct dtv5100_state *st = d->priv;
++	unsigned int pipe;
+ 	u8 request;
+ 	u8 type;
+ 	u16 value;
+@@ -34,6 +35,7 @@ static int dtv5100_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 	switch (wlen) {
+ 	case 1:
+ 		/* write { reg }, read { value } */
++		pipe = usb_rcvctrlpipe(d->udev, 0);
+ 		request = (addr == DTV5100_DEMOD_ADDR ? DTV5100_DEMOD_READ :
+ 							DTV5100_TUNER_READ);
+ 		type = USB_TYPE_VENDOR | USB_DIR_IN;
+@@ -41,6 +43,7 @@ static int dtv5100_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 		break;
+ 	case 2:
+ 		/* write { reg, value } */
++		pipe = usb_sndctrlpipe(d->udev, 0);
+ 		request = (addr == DTV5100_DEMOD_ADDR ? DTV5100_DEMOD_WRITE :
+ 							DTV5100_TUNER_WRITE);
+ 		type = USB_TYPE_VENDOR | USB_DIR_OUT;
+@@ -54,7 +57,7 @@ static int dtv5100_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 
+ 	memcpy(st->data, rbuf, rlen);
+ 	msleep(1); /* avoid I2C errors */
+-	return usb_control_msg(d->udev, usb_rcvctrlpipe(d->udev, 0), request,
++	return usb_control_msg(d->udev, pipe, request,
+ 			       type, value, index, st->data, rlen,
+ 			       DTV5100_USB_TIMEOUT);
+ }
+@@ -141,7 +144,7 @@ static int dtv5100_probe(struct usb_interface *intf,
+ 
+ 	/* initialize non qt1010/zl10353 part? */
+ 	for (i = 0; dtv5100_init[i].request; i++) {
+-		ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
++		ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ 				      dtv5100_init[i].request,
+ 				      USB_TYPE_VENDOR | USB_DIR_OUT,
+ 				      dtv5100_init[i].value,
+diff --git a/drivers/media/usb/gspca/sq905.c b/drivers/media/usb/gspca/sq905.c
+index 9491110709718..32504ebcfd4de 100644
+--- a/drivers/media/usb/gspca/sq905.c
++++ b/drivers/media/usb/gspca/sq905.c
+@@ -116,7 +116,7 @@ static int sq905_command(struct gspca_dev *gspca_dev, u16 index)
+ 	}
+ 
+ 	ret = usb_control_msg(gspca_dev->dev,
+-			      usb_sndctrlpipe(gspca_dev->dev, 0),
++			      usb_rcvctrlpipe(gspca_dev->dev, 0),
+ 			      USB_REQ_SYNCH_FRAME,                /* request */
+ 			      USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 			      SQ905_PING, 0, gspca_dev->usb_buf, 1,
+diff --git a/drivers/media/usb/gspca/sunplus.c b/drivers/media/usb/gspca/sunplus.c
+index ace3da40006e7..971dee0a56dae 100644
+--- a/drivers/media/usb/gspca/sunplus.c
++++ b/drivers/media/usb/gspca/sunplus.c
+@@ -242,6 +242,10 @@ static void reg_r(struct gspca_dev *gspca_dev,
+ 		gspca_err(gspca_dev, "reg_r: buffer overflow\n");
+ 		return;
+ 	}
++	if (len == 0) {
++		gspca_err(gspca_dev, "reg_r: zero-length read\n");
++		return;
++	}
+ 	if (gspca_dev->usb_err < 0)
+ 		return;
+ 	ret = usb_control_msg(gspca_dev->dev,
+@@ -250,7 +254,7 @@ static void reg_r(struct gspca_dev *gspca_dev,
+ 			USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ 			0,		/* value */
+ 			index,
+-			len ? gspca_dev->usb_buf : NULL, len,
++			gspca_dev->usb_buf, len,
+ 			500);
+ 	if (ret < 0) {
+ 		pr_err("reg_r err %d\n", ret);
+@@ -727,7 +731,7 @@ static int sd_start(struct gspca_dev *gspca_dev)
+ 		case MegaImageVI:
+ 			reg_w_riv(gspca_dev, 0xf0, 0, 0);
+ 			spca504B_WaitCmdStatus(gspca_dev);
+-			reg_r(gspca_dev, 0xf0, 4, 0);
++			reg_w_riv(gspca_dev, 0xf0, 4, 0);
+ 			spca504B_WaitCmdStatus(gspca_dev);
+ 			break;
+ 		default:
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index a777b389a66ec..e16464606b140 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -127,10 +127,37 @@ int uvc_query_ctrl(struct uvc_device *dev, u8 query, u8 unit,
+ static void uvc_fixup_video_ctrl(struct uvc_streaming *stream,
+ 	struct uvc_streaming_control *ctrl)
+ {
++	static const struct usb_device_id elgato_cam_link_4k = {
++		USB_DEVICE(0x0fd9, 0x0066)
++	};
+ 	struct uvc_format *format = NULL;
+ 	struct uvc_frame *frame = NULL;
+ 	unsigned int i;
+ 
++	/*
++	 * The response of the Elgato Cam Link 4K is incorrect: The second byte
++	 * contains bFormatIndex (instead of being the second byte of bmHint).
++	 * The first byte is always zero. The third byte is always 1.
++	 *
++	 * The UVC 1.5 class specification defines the first five bits in the
++	 * bmHint bitfield. The remaining bits are reserved and should be zero.
++	 * Therefore a valid bmHint will be less than 32.
++	 *
++	 * Latest Elgato Cam Link 4K firmware as of 2021-03-23 needs this fix.
++	 * MCU: 20.02.19, FPGA: 67
++	 */
++	if (usb_match_one_id(stream->dev->intf, &elgato_cam_link_4k) &&
++	    ctrl->bmHint > 255) {
++		u8 corrected_format_index = ctrl->bmHint >> 8;
++
++		uvc_dbg(stream->dev, VIDEO,
++			"Correct USB video probe response from {bmHint: 0x%04x, bFormatIndex: %u} to {bmHint: 0x%04x, bFormatIndex: %u}\n",
++			ctrl->bmHint, ctrl->bFormatIndex,
++			1, corrected_format_index);
++		ctrl->bmHint = 1;
++		ctrl->bFormatIndex = corrected_format_index;
++	}
++
+ 	for (i = 0; i < stream->nformats; ++i) {
+ 		if (stream->format[i].index == ctrl->bFormatIndex) {
+ 			format = &stream->format[i];
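+
+The Cam Link 4K heuristic above in isolation: a valid bmHint only uses the
+low five bits, so any value above 255 must be a response whose second byte
+actually carries bFormatIndex. A toy model (not the uvcvideo code; the
+little-endian field is assumed already converted to host order):
+
+	#include <stdio.h>
+	#include <stdint.h>
+
+	static void fixup_probe(uint16_t *bm_hint, uint8_t *format_index)
+	{
+		if (*bm_hint > 255) {
+			*format_index = *bm_hint >> 8; /* recover the index */
+			*bm_hint = 1;                  /* plausible hint value */
+		}
+	}
+
+	int main(void)
+	{
+		uint16_t hint = 0x0200; /* broken reply: bFormatIndex=2 in byte 1 */
+		uint8_t fmt = 0;
+
+		fixup_probe(&hint, &fmt);
+		printf("bmHint=0x%04x bFormatIndex=%u\n", hint, fmt);
+		return 0;
+	}
+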
+diff --git a/drivers/media/usb/zr364xx/zr364xx.c b/drivers/media/usb/zr364xx/zr364xx.c
+index 1ef611e083237..538a330046ec9 100644
+--- a/drivers/media/usb/zr364xx/zr364xx.c
++++ b/drivers/media/usb/zr364xx/zr364xx.c
+@@ -1032,6 +1032,7 @@ static int zr364xx_start_readpipe(struct zr364xx_camera *cam)
+ 	DBG("submitting URB %p\n", pipe_info->stream_urb);
+ 	retval = usb_submit_urb(pipe_info->stream_urb, GFP_KERNEL);
+ 	if (retval) {
++		usb_free_urb(pipe_info->stream_urb);
+ 		printk(KERN_ERR KBUILD_MODNAME ": start read pipe failed\n");
+ 		return retval;
+ 	}
+diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
+index 07d823656ee65..cf50c60bbb5d3 100644
+--- a/drivers/media/v4l2-core/v4l2-ioctl.c
++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
+@@ -3124,8 +3124,10 @@ static int video_get_user(void __user *arg, void *parg,
+ 		if (copy_from_user(parg, (void __user *)arg, n))
+ 			err = -EFAULT;
+ 	} else if (in_compat_syscall()) {
++		memset(parg, 0, n);
+ 		err = v4l2_compat_get_user(arg, parg, cmd);
+ 	} else {
++		memset(parg, 0, n);
+ #if !defined(CONFIG_64BIT) && defined(CONFIG_COMPAT_32BIT_TIME)
+ 		switch (cmd) {
+ 		case VIDIOC_QUERYBUF_TIME32:
+diff --git a/drivers/mfd/syscon.c b/drivers/mfd/syscon.c
+index c6f139b2e0c09..765c0210cb528 100644
+--- a/drivers/mfd/syscon.c
++++ b/drivers/mfd/syscon.c
+@@ -108,6 +108,7 @@ static struct syscon *of_syscon_register(struct device_node *np, bool check_clk)
+ 	syscon_config.max_register = resource_size(&res) - reg_io_width;
+ 
+ 	regmap = regmap_init_mmio(NULL, base, &syscon_config);
++	kfree(syscon_config.name);
+ 	if (IS_ERR(regmap)) {
+ 		pr_err("regmap init failed\n");
+ 		ret = PTR_ERR(regmap);
+@@ -144,7 +145,6 @@ err_clk:
+ 	regmap_exit(regmap);
+ err_regmap:
+ 	iounmap(base);
+-	kfree(syscon_config.name);
+ err_map:
+ 	kfree(syscon);
+ 	return ERR_PTR(ret);
+diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
+index 0e8254d0cf0ba..9ff02bdf31530 100644
+--- a/drivers/misc/lkdtm/bugs.c
++++ b/drivers/misc/lkdtm/bugs.c
+@@ -161,6 +161,9 @@ void lkdtm_UNALIGNED_LOAD_STORE_WRITE(void)
+ 	if (*p == 0)
+ 		val = 0x87654321;
+ 	*p = val;
++
++	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
++		pr_err("XFAIL: arch has CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS\n");
+ }
+ 
+ void lkdtm_SOFTLOCKUP(void)
+diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
+index 8024b6a5cc7fc..cd833011f2850 100644
+--- a/drivers/misc/lkdtm/core.c
++++ b/drivers/misc/lkdtm/core.c
+@@ -177,9 +177,7 @@ static const struct crashtype crashtypes[] = {
+ 	CRASHTYPE(STACKLEAK_ERASING),
+ 	CRASHTYPE(CFI_FORWARD_PROTO),
+ 	CRASHTYPE(FORTIFIED_STRSCPY),
+-#ifdef CONFIG_X86_32
+ 	CRASHTYPE(DOUBLE_FAULT),
+-#endif
+ #ifdef CONFIG_PPC_BOOK3S_64
+ 	CRASHTYPE(PPC_SLB_MULTIHIT),
+ #endif
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index f194940c59746..cc403da868708 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -937,11 +937,14 @@ int mmc_execute_tuning(struct mmc_card *card)
+ 
+ 	err = host->ops->execute_tuning(host, opcode);
+ 
+-	if (err)
++	if (err) {
+ 		pr_err("%s: tuning execution failed: %d\n",
+ 			mmc_hostname(host), err);
+-	else
++	} else {
++		host->retune_now = 0;
++		host->need_retune = 0;
+ 		mmc_retune_enable(host);
++	}
+ 
+ 	return err;
+ }
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 2c48d65041013..e277ef4fa0995 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -847,11 +847,13 @@ try_again:
+ 		return err;
+ 
+ 	/*
+-	 * In case CCS and S18A in the response is set, start Signal Voltage
+-	 * Switch procedure. SPI mode doesn't support CMD11.
++	 * In case the S18A bit is set in the response, let's start the signal
++	 * voltage switch procedure. SPI mode doesn't support CMD11.
++	 * Note that, according to the spec, the S18A bit is not valid unless
++	 * the CCS bit is set as well. We deliberately deviate from the spec
++	 * in this regard, which allows UHS-I to be supported for SDSC cards.
+ 	 */
+-	if (!mmc_host_is_spi(host) && rocr &&
+-	   ((*rocr & 0x41000000) == 0x41000000)) {
++	if (!mmc_host_is_spi(host) && rocr && (*rocr & 0x01000000)) {
+ 		err = mmc_set_uhs_voltage(host, pocr);
+ 		if (err == -EAGAIN) {
+ 			retries--;
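+
+The OCR bit test being relaxed above, spelled out (bit values taken from
+the old 0x41000000 mask; illustrative only, not part of the applied patch):
+
+	#include <stdio.h>
+	#include <stdint.h>
+
+	#define OCR_CCS  0x40000000u /* bit 30: card capacity status */
+	#define OCR_S18A 0x01000000u /* bit 24: 1.8V switching accepted */
+
+	int main(void)
+	{
+		uint32_t rocr = OCR_S18A; /* SDSC card: S18A set, CCS clear */
+
+		/* old test required both bits, so SDSC never tried UHS-I */
+		printf("old: %d\n",
+		       (rocr & (OCR_CCS | OCR_S18A)) == (OCR_CCS | OCR_S18A));
+		/* new test: S18A alone starts the voltage switch */
+		printf("new: %d\n", !!(rocr & OCR_S18A));
+		return 0;
+	}
+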
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index c3fbf8c825c4f..8fe65f172a611 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -822,6 +822,17 @@ static const struct dmi_system_id sdhci_acpi_quirks[] = {
+ 		},
+ 		.driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT,
+ 	},
++	{
++		/*
++		 * The Toshiba WT8-B's microSD slot always reports the card
++		 * as being write-protected.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TOSHIBA ENCORE 2 WT8-B"),
++		},
++		.driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT,
++	},
+ 	{} /* Terminating entry */
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index bf238ade16021..6b39126fbf06c 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1812,6 +1812,10 @@ static u16 sdhci_get_preset_value(struct sdhci_host *host)
+ 	u16 preset = 0;
+ 
+ 	switch (host->timing) {
++	case MMC_TIMING_MMC_HS:
++	case MMC_TIMING_SD_HS:
++		preset = sdhci_readw(host, SDHCI_PRESET_FOR_HIGH_SPEED);
++		break;
+ 	case MMC_TIMING_UHS_SDR12:
+ 		preset = sdhci_readw(host, SDHCI_PRESET_FOR_SDR12);
+ 		break;
+diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
+index 0770c036e2ff5..960fed78529e1 100644
+--- a/drivers/mmc/host/sdhci.h
++++ b/drivers/mmc/host/sdhci.h
+@@ -253,6 +253,7 @@
+ 
+ /* 60-FB reserved */
+ 
++#define SDHCI_PRESET_FOR_HIGH_SPEED	0x64
+ #define SDHCI_PRESET_FOR_SDR12 0x66
+ #define SDHCI_PRESET_FOR_SDR25 0x68
+ #define SDHCI_PRESET_FOR_SDR50 0x6A
+diff --git a/drivers/net/dsa/ocelot/seville_vsc9953.c b/drivers/net/dsa/ocelot/seville_vsc9953.c
+index 84f93a874d502..deae923c8b7a8 100644
+--- a/drivers/net/dsa/ocelot/seville_vsc9953.c
++++ b/drivers/net/dsa/ocelot/seville_vsc9953.c
+@@ -1206,6 +1206,11 @@ static int seville_probe(struct platform_device *pdev)
+ 	felix->info = &seville_info_vsc9953;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res) {
++		err = -EINVAL;
++		dev_err(&pdev->dev, "Invalid resource\n");
++		goto err_alloc_felix;
++	}
+ 	felix->switch_base = res->start;
+ 
+ 	ds = kzalloc(sizeof(struct dsa_switch), GFP_KERNEL);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 5335244e4577a..89d16c587bb7d 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -423,6 +423,10 @@ static int bcmgenet_mii_register(struct bcmgenet_priv *priv)
+ 	int id, ret;
+ 
+ 	pres = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!pres) {
++		dev_err(&pdev->dev, "Invalid resource\n");
++		return -EINVAL;
++	}
+ 	memset(&res, 0, sizeof(res));
+ 	memset(&ppd, 0, sizeof(ppd));
+ 
+diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
+index 0602d5d5d2eee..2e002e4b4b4aa 100644
+--- a/drivers/net/ethernet/freescale/fec.h
++++ b/drivers/net/ethernet/freescale/fec.h
+@@ -467,6 +467,11 @@ struct bufdesc_ex {
+  */
+ #define FEC_QUIRK_NO_HARD_RESET		(1 << 18)
+ 
++/* i.MX6SX ENET IP supports multiple queues (3 queues); use this quirk to
++ * represent this ENET IP.
++ */
++#define FEC_QUIRK_HAS_MULTI_QUEUES	(1 << 19)
++
+ struct bufdesc_prop {
+ 	int qid;
+ 	/* Address of Rx and Tx buffers */
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index ad82cffc6f3f5..8aea707a65a77 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -76,6 +76,8 @@ static void fec_enet_itr_coal_init(struct net_device *ndev);
+ 
+ #define DRIVER_NAME	"fec"
+ 
++static const u16 fec_enet_vlan_pri_to_queue[8] = {0, 0, 1, 1, 1, 2, 2, 2};
++
+ /* Pause frame field and FIFO threshold */
+ #define FEC_ENET_FCE	(1 << 5)
+ #define FEC_ENET_RSEM_V	0x84
+@@ -122,7 +124,7 @@ static const struct fec_devinfo fec_imx6x_info = {
+ 		  FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
+ 		  FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE |
+ 		  FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE |
+-		  FEC_QUIRK_CLEAR_SETUP_MII,
++		  FEC_QUIRK_CLEAR_SETUP_MII | FEC_QUIRK_HAS_MULTI_QUEUES,
+ };
+ 
+ static const struct fec_devinfo fec_imx6ul_info = {
+@@ -421,6 +423,7 @@ fec_enet_txq_submit_frag_skb(struct fec_enet_priv_tx_q *txq,
+ 				estatus |= FEC_TX_BD_FTYPE(txq->bd.qid);
+ 			if (skb->ip_summed == CHECKSUM_PARTIAL)
+ 				estatus |= BD_ENET_TX_PINS | BD_ENET_TX_IINS;
++
+ 			ebdp->cbd_bdu = 0;
+ 			ebdp->cbd_esc = cpu_to_fec32(estatus);
+ 		}
+@@ -954,7 +957,7 @@ fec_restart(struct net_device *ndev)
+ 	 * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
+ 	 * instead of reset MAC itself.
+ 	 */
+-	if (fep->quirks & FEC_QUIRK_HAS_AVB ||
++	if (fep->quirks & FEC_QUIRK_HAS_MULTI_QUEUES ||
+ 	    ((fep->quirks & FEC_QUIRK_NO_HARD_RESET) && fep->link)) {
+ 		writel(0, fep->hwp + FEC_ECNTRL);
+ 	} else {
+@@ -1165,7 +1168,7 @@ fec_stop(struct net_device *ndev)
+ 	 * instead of reset MAC itself.
+ 	 */
+ 	if (!(fep->wol_flag & FEC_WOL_FLAG_SLEEP_ON)) {
+-		if (fep->quirks & FEC_QUIRK_HAS_AVB) {
++		if (fep->quirks & FEC_QUIRK_HAS_MULTI_QUEUES) {
+ 			writel(0, fep->hwp + FEC_ECNTRL);
+ 		} else {
+ 			writel(1, fep->hwp + FEC_ECNTRL);
+@@ -2570,7 +2573,7 @@ static void fec_enet_itr_coal_set(struct net_device *ndev)
+ 
+ 	writel(tx_itr, fep->hwp + FEC_TXIC0);
+ 	writel(rx_itr, fep->hwp + FEC_RXIC0);
+-	if (fep->quirks & FEC_QUIRK_HAS_AVB) {
++	if (fep->quirks & FEC_QUIRK_HAS_MULTI_QUEUES) {
+ 		writel(tx_itr, fep->hwp + FEC_TXIC1);
+ 		writel(rx_itr, fep->hwp + FEC_RXIC1);
+ 		writel(tx_itr, fep->hwp + FEC_TXIC2);
+@@ -3239,10 +3242,40 @@ static int fec_set_features(struct net_device *netdev,
+ 	return 0;
+ }
+ 
++static u16 fec_enet_get_raw_vlan_tci(struct sk_buff *skb)
++{
++	struct vlan_ethhdr *vhdr;
++	unsigned short vlan_TCI = 0;
++
++	if (skb->protocol == htons(ETH_P_ALL)) {
++		vhdr = (struct vlan_ethhdr *)(skb->data);
++		vlan_TCI = ntohs(vhdr->h_vlan_TCI);
++	}
++
++	return vlan_TCI;
++}
++
++static u16 fec_enet_select_queue(struct net_device *ndev, struct sk_buff *skb,
++				 struct net_device *sb_dev)
++{
++	struct fec_enet_private *fep = netdev_priv(ndev);
++	u16 vlan_tag;
++
++	if (!(fep->quirks & FEC_QUIRK_HAS_AVB))
++		return netdev_pick_tx(ndev, skb, NULL);
++
++	vlan_tag = fec_enet_get_raw_vlan_tci(skb);
++	if (!vlan_tag)
++		return vlan_tag;
++
++	return fec_enet_vlan_pri_to_queue[vlan_tag >> 13];
++}
++
+ static const struct net_device_ops fec_netdev_ops = {
+ 	.ndo_open		= fec_enet_open,
+ 	.ndo_stop		= fec_enet_close,
+ 	.ndo_start_xmit		= fec_enet_start_xmit,
++	.ndo_select_queue       = fec_enet_select_queue,
+ 	.ndo_set_rx_mode	= set_multicast_list,
+ 	.ndo_validate_addr	= eth_validate_addr,
+ 	.ndo_tx_timeout		= fec_timeout,
+@@ -3371,7 +3404,7 @@ static int fec_enet_init(struct net_device *ndev)
+ 		fep->csum_flags |= FLAG_RX_CSUM_ENABLED;
+ 	}
+ 
+-	if (fep->quirks & FEC_QUIRK_HAS_AVB) {
++	if (fep->quirks & FEC_QUIRK_HAS_MULTI_QUEUES) {
+ 		fep->tx_align = 0;
+ 		fep->rx_align = 0x3f;
+ 	}
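+
+The new fec_enet_select_queue() maps the 3-bit VLAN PCP (the top bits of
+the TCI) through the fec_enet_vlan_pri_to_queue table; the same logic as a
+standalone sketch (not the driver code):
+
+	#include <stdio.h>
+	#include <stdint.h>
+
+	/* priorities 0-1 -> queue 0, 2-4 -> queue 1, 5-7 -> queue 2 */
+	static const uint16_t pri_to_queue[8] = { 0, 0, 1, 1, 1, 2, 2, 2 };
+
+	static uint16_t select_queue(uint16_t vlan_tci)
+	{
+		if (!vlan_tci) /* untagged: default queue 0 */
+			return 0;
+		return pri_to_queue[vlan_tci >> 13]; /* PCP = top 3 bits */
+	}
+
+	int main(void)
+	{
+		/* PCP 6 -> queue 2, PCP 3 -> queue 1 */
+		printf("%u %u\n", select_queue(0xC064), select_queue(0x6064));
+		return 0;
+	}
+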
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index ede65b32f8212..efc98903c0b72 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1554,7 +1554,8 @@ static int create_hdr_descs(u8 hdr_field, u8 *hdr_data, int len, int *hdr_len,
+ 
+ /**
+  * build_hdr_descs_arr - build a header descriptor array
+- * @txbuff: tx buffer
++ * @skb: tx socket buffer
++ * @indir_arr: indirect array
+  * @num_entries: number of descriptors to be sent
+  * @hdr_field: bit field determining which headers will be sent
+  *
+diff --git a/drivers/net/ethernet/intel/e100.c b/drivers/net/ethernet/intel/e100.c
+index f8d78af76d7de..1b0958bd24f6c 100644
+--- a/drivers/net/ethernet/intel/e100.c
++++ b/drivers/net/ethernet/intel/e100.c
+@@ -1395,7 +1395,7 @@ static int e100_phy_check_without_mii(struct nic *nic)
+ 	u8 phy_type;
+ 	int without_mii;
+ 
+-	phy_type = (nic->eeprom[eeprom_phy_iface] >> 8) & 0x0f;
++	phy_type = (le16_to_cpu(nic->eeprom[eeprom_phy_iface]) >> 8) & 0x0f;
+ 
+ 	switch (phy_type) {
+ 	case NoSuchPhy: /* Non-MII PHY; UNTESTED! */
+@@ -1515,7 +1515,7 @@ static int e100_phy_init(struct nic *nic)
+ 		mdio_write(netdev, nic->mii.phy_id, MII_BMCR, bmcr);
+ 	} else if ((nic->mac >= mac_82550_D102) || ((nic->flags & ich) &&
+ 	   (mdio_read(netdev, nic->mii.phy_id, MII_TPISTATUS) & 0x8000) &&
+-		(nic->eeprom[eeprom_cnfg_mdix] & eeprom_mdix_enabled))) {
++	   (le16_to_cpu(nic->eeprom[eeprom_cnfg_mdix]) & eeprom_mdix_enabled))) {
+ 		/* enable/disable MDI/MDI-X auto-switching. */
+ 		mdio_write(netdev, nic->mii.phy_id, MII_NCONFIG,
+ 				nic->mii.force_media ? 0 : NCONFIG_AUTO_SWITCH);
+@@ -2269,9 +2269,9 @@ static int e100_asf(struct nic *nic)
+ {
+ 	/* ASF can be enabled from eeprom */
+ 	return (nic->pdev->device >= 0x1050) && (nic->pdev->device <= 0x1057) &&
+-	   (nic->eeprom[eeprom_config_asf] & eeprom_asf) &&
+-	   !(nic->eeprom[eeprom_config_asf] & eeprom_gcl) &&
+-	   ((nic->eeprom[eeprom_smbus_addr] & 0xFF) != 0xFE);
++	   (le16_to_cpu(nic->eeprom[eeprom_config_asf]) & eeprom_asf) &&
++	   !(le16_to_cpu(nic->eeprom[eeprom_config_asf]) & eeprom_gcl) &&
++	   ((le16_to_cpu(nic->eeprom[eeprom_smbus_addr]) & 0xFF) != 0xFE);
+ }
+ 
+ static int e100_up(struct nic *nic)
+@@ -2926,7 +2926,7 @@ static int e100_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	/* Wol magic packet can be enabled from eeprom */
+ 	if ((nic->mac >= mac_82558_D101_A4) &&
+-	   (nic->eeprom[eeprom_id] & eeprom_id_wol)) {
++	   (le16_to_cpu(nic->eeprom[eeprom_id]) & eeprom_id_wol)) {
+ 		nic->flags |= wol_magic;
+ 		device_set_wakeup_enable(&pdev->dev, true);
+ 	}
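+
+Why the le16_to_cpu() wrappers above matter: the e100 EEPROM words are
+stored little-endian, so masks like (x >> 8) & 0x0f only hit the intended
+byte after a conversion on big-endian hosts. A portable stand-in
+(illustrative only, not the kernel helper):
+
+	#include <stdio.h>
+	#include <stdint.h>
+
+	static uint16_t le16_to_host(const uint8_t *p)
+	{
+		return (uint16_t)p[0] | ((uint16_t)p[1] << 8);
+	}
+
+	int main(void)
+	{
+		uint8_t word[2] = { 0x34, 0x12 }; /* 0x1234 stored LE */
+
+		/* phy_type lives in the high byte: 0x12 & 0x0f = 0x2 */
+		printf("phy_type = 0x%x\n", (le16_to_host(word) >> 8) & 0x0f);
+		return 0;
+	}
+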
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ptp.c b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+index f1f6fc3744e9c..7b971b205d36a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ptp.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+@@ -11,13 +11,14 @@
+  * operate with the nanosecond field directly without fear of overflow.
+  *
+  * Much like the 82599, the update period is dependent upon the link speed:
+- * At 40Gb link or no link, the period is 1.6ns.
+- * At 10Gb link, the period is multiplied by 2. (3.2ns)
++ * At 40Gb, 25Gb, or no link, the period is 1.6ns.
++ * At 10Gb or 5Gb link, the period is multiplied by 2. (3.2ns)
+  * At 1Gb link, the period is multiplied by 20. (32ns)
+  * 1588 functionality is not supported at 100Mbps.
+  */
+ #define I40E_PTP_40GB_INCVAL		0x0199999999ULL
+ #define I40E_PTP_10GB_INCVAL_MULT	2
++#define I40E_PTP_5GB_INCVAL_MULT	2
+ #define I40E_PTP_1GB_INCVAL_MULT	20
+ 
+ #define I40E_PRTTSYN_CTL1_TSYNTYPE_V1  BIT(I40E_PRTTSYN_CTL1_TSYNTYPE_SHIFT)
+@@ -465,6 +466,9 @@ void i40e_ptp_set_increment(struct i40e_pf *pf)
+ 	case I40E_LINK_SPEED_10GB:
+ 		mult = I40E_PTP_10GB_INCVAL_MULT;
+ 		break;
++	case I40E_LINK_SPEED_5GB:
++		mult = I40E_PTP_5GB_INCVAL_MULT;
++		break;
+ 	case I40E_LINK_SPEED_1GB:
+ 		mult = I40E_PTP_1GB_INCVAL_MULT;
+ 		break;
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 99301ad95290d..1f30f24648d8e 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -3462,13 +3462,9 @@ static int
+ ice_get_rc_coalesce(struct ethtool_coalesce *ec, enum ice_container_type c_type,
+ 		    struct ice_ring_container *rc)
+ {
+-	struct ice_pf *pf;
+-
+ 	if (!rc->ring)
+ 		return -EINVAL;
+ 
+-	pf = rc->ring->vsi->back;
+-
+ 	switch (c_type) {
+ 	case ICE_RX_CONTAINER:
+ 		ec->use_adaptive_rx_coalesce = ITR_IS_DYNAMIC(rc);
+@@ -3480,7 +3476,7 @@ ice_get_rc_coalesce(struct ethtool_coalesce *ec, enum ice_container_type c_type,
+ 		ec->tx_coalesce_usecs = rc->itr_setting;
+ 		break;
+ 	default:
+-		dev_dbg(ice_pf_to_dev(pf), "Invalid c_type %d\n", c_type);
++		dev_dbg(ice_pf_to_dev(rc->ring->vsi->back), "Invalid c_type %d\n", c_type);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+index 21329ed3087e1..4238ab0433eef 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
++++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+@@ -630,7 +630,7 @@ static const struct ice_rx_ptype_decoded ice_ptype_lkup[] = {
+ 	/* L2 Packet types */
+ 	ICE_PTT_UNUSED_ENTRY(0),
+ 	ICE_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2),
+-	ICE_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE),
++	ICE_PTT_UNUSED_ENTRY(2),
+ 	ICE_PTT_UNUSED_ENTRY(3),
+ 	ICE_PTT_UNUSED_ENTRY(4),
+ 	ICE_PTT_UNUSED_ENTRY(5),
+@@ -744,7 +744,7 @@ static const struct ice_rx_ptype_decoded ice_ptype_lkup[] = {
+ 	/* Non Tunneled IPv6 */
+ 	ICE_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+ 	ICE_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+-	ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY3),
++	ICE_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP,  PAY4),
+ 	ICE_PTT_UNUSED_ENTRY(91),
+ 	ICE_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP,  PAY4),
+ 	ICE_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
+index 4474dd6a7ba14..a925273c2cca6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_type.h
++++ b/drivers/net/ethernet/intel/ice/ice_type.h
+@@ -63,7 +63,7 @@ enum ice_aq_res_ids {
+ /* FW update timeout definitions are in milliseconds */
+ #define ICE_NVM_TIMEOUT			180000
+ #define ICE_CHANGE_LOCK_TIMEOUT		1000
+-#define ICE_GLOBAL_CFG_LOCK_TIMEOUT	3000
++#define ICE_GLOBAL_CFG_LOCK_TIMEOUT	5000
+ 
+ enum ice_aq_res_access_type {
+ 	ICE_RES_READ = 1,
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index b2a042f825ff5..7b1885f9ce039 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -2643,7 +2643,8 @@ static int igb_parse_cls_flower(struct igb_adapter *adapter,
+ 			}
+ 
+ 			input->filter.match_flags |= IGB_FILTER_FLAG_VLAN_TCI;
+-			input->filter.vlan_tci = match.key->vlan_priority;
++			input->filter.vlan_tci =
++				(__force __be16)match.key->vlan_priority;
+ 		}
+ 	}
+ 
+@@ -6275,12 +6276,12 @@ int igb_xmit_xdp_ring(struct igb_adapter *adapter,
+ 	cmd_type |= len | IGB_TXD_DCMD;
+ 	tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
+ 
+-	olinfo_status = cpu_to_le32(len << E1000_ADVTXD_PAYLEN_SHIFT);
++	olinfo_status = len << E1000_ADVTXD_PAYLEN_SHIFT;
+ 	/* 82575 requires a unique index per ring */
+ 	if (test_bit(IGB_RING_FLAG_TX_CTX_IDX, &tx_ring->flags))
+ 		olinfo_status |= tx_ring->reg_idx << 4;
+ 
+-	tx_desc->read.olinfo_status = olinfo_status;
++	tx_desc->read.olinfo_status = cpu_to_le32(olinfo_status);
+ 
+ 	netdev_tx_sent_queue(txring_txq(tx_ring), tx_buffer->bytecount);
+ 
+@@ -8592,7 +8593,7 @@ static void igb_process_skb_fields(struct igb_ring *rx_ring,
+ 
+ 		if (igb_test_staterr(rx_desc, E1000_RXDEXT_STATERR_LB) &&
+ 		    test_bit(IGB_RING_FLAG_RX_LB_VLAN_BSWAP, &rx_ring->flags))
+-			vid = be16_to_cpu(rx_desc->wb.upper.vlan);
++			vid = be16_to_cpu((__force __be16)rx_desc->wb.upper.vlan);
+ 		else
+ 			vid = le16_to_cpu(rx_desc->wb.upper.vlan);
+ 
+diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c
+index fb3fbcb133310..630c1155f1965 100644
+--- a/drivers/net/ethernet/intel/igbvf/netdev.c
++++ b/drivers/net/ethernet/intel/igbvf/netdev.c
+@@ -83,14 +83,14 @@ static int igbvf_desc_unused(struct igbvf_ring *ring)
+ static void igbvf_receive_skb(struct igbvf_adapter *adapter,
+ 			      struct net_device *netdev,
+ 			      struct sk_buff *skb,
+-			      u32 status, u16 vlan)
++			      u32 status, __le16 vlan)
+ {
+ 	u16 vid;
+ 
+ 	if (status & E1000_RXD_STAT_VP) {
+ 		if ((adapter->flags & IGBVF_FLAG_RX_LB_VLAN_BSWAP) &&
+ 		    (status & E1000_RXDEXT_STATERR_LB))
+-			vid = be16_to_cpu(vlan) & E1000_RXD_SPC_VLAN_MASK;
++			vid = be16_to_cpu((__force __be16)vlan) & E1000_RXD_SPC_VLAN_MASK;
+ 		else
+ 			vid = le16_to_cpu(vlan) & E1000_RXD_SPC_VLAN_MASK;
+ 		if (test_bit(vid, adapter->active_vlans))
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index b3041fe6c0aed..42cc27ef73785 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -7387,6 +7387,10 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 			return PTR_ERR(priv->lms_base);
+ 	} else {
+ 		res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
++		if (!res) {
++			dev_err(&pdev->dev, "Invalid resource\n");
++			return -EINVAL;
++		}
+ 		if (has_acpi_companion(&pdev->dev)) {
+ 			/* In case the MDIO memory region is declared in
+ 			 * the ACPI, it can already appear as 'in-use'
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index f90894eea9e04..5346271974f55 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1310,7 +1310,8 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+ 	if (rep->vlan && skb_vlan_tag_present(skb))
+ 		skb_vlan_pop(skb);
+ 
+-	if (!mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv)) {
++	if (unlikely(!mlx5_ipsec_is_rx_flow(cqe) &&
++		     !mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv))) {
+ 		dev_kfree_skb_any(skb);
+ 		goto free_wqe;
+ 	}
+@@ -1367,7 +1368,8 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
+ 
+ 	mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+ 
+-	if (!mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv)) {
++	if (unlikely(!mlx5_ipsec_is_rx_flow(cqe) &&
++		     !mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv))) {
+ 		dev_kfree_skb_any(skb);
+ 		goto mpwrq_cqe_out;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+index b8748390335fd..9ce144ef83261 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+@@ -118,17 +118,24 @@ static bool __mlx5_lag_is_sriov(struct mlx5_lag *ldev)
+ static void mlx5_infer_tx_affinity_mapping(struct lag_tracker *tracker,
+ 					   u8 *port1, u8 *port2)
+ {
++	bool p1en;
++	bool p2en;
++
++	p1en = tracker->netdev_state[MLX5_LAG_P1].tx_enabled &&
++	       tracker->netdev_state[MLX5_LAG_P1].link_up;
++
++	p2en = tracker->netdev_state[MLX5_LAG_P2].tx_enabled &&
++	       tracker->netdev_state[MLX5_LAG_P2].link_up;
++
+ 	*port1 = 1;
+ 	*port2 = 2;
+-	if (!tracker->netdev_state[MLX5_LAG_P1].tx_enabled ||
+-	    !tracker->netdev_state[MLX5_LAG_P1].link_up) {
+-		*port1 = 2;
++	if ((!p1en && !p2en) || (p1en && p2en))
+ 		return;
+-	}
+ 
+-	if (!tracker->netdev_state[MLX5_LAG_P2].tx_enabled ||
+-	    !tracker->netdev_state[MLX5_LAG_P2].link_up)
++	if (p1en)
+ 		*port2 = 1;
++	else
++		*port1 = 2;
+ }
+ 
+ void mlx5_modify_lag(struct mlx5_lag *ldev,
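+
+The rewritten mlx5_infer_tx_affinity_mapping() is easier to read as a
+truth table: both ports usable (or both down) keeps the 1:1 mapping, and a
+single usable port absorbs all traffic. A toy model (not the mlx5 code):
+
+	#include <stdio.h>
+	#include <stdbool.h>
+
+	static void map_ports(bool p1en, bool p2en, int *port1, int *port2)
+	{
+		*port1 = 1;
+		*port2 = 2;
+		if (p1en == p2en) /* both up or both down: leave 1:1 */
+			return;
+		if (p1en)
+			*port2 = 1; /* steer port-2 traffic to port 1 */
+		else
+			*port1 = 2; /* steer port-1 traffic to port 2 */
+	}
+
+	int main(void)
+	{
+		int p1, p2;
+
+		map_ports(true, false, &p1, &p2); /* only port 1 usable */
+		printf("p1=%d p2=%d\n", p1, p2);  /* prints: p1=1 p2=1 */
+		return 0;
+	}
+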
+diff --git a/drivers/net/ethernet/micrel/ks8842.c b/drivers/net/ethernet/micrel/ks8842.c
+index caa251d0e3815..b27713906d3a6 100644
+--- a/drivers/net/ethernet/micrel/ks8842.c
++++ b/drivers/net/ethernet/micrel/ks8842.c
+@@ -1135,6 +1135,10 @@ static int ks8842_probe(struct platform_device *pdev)
+ 	unsigned i;
+ 
+ 	iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!iomem) {
++		dev_err(&pdev->dev, "Invalid resource\n");
++		return -EINVAL;
++	}
+ 	if (!request_mem_region(iomem->start, resource_size(iomem), DRV_NAME))
+ 		goto err_mem_region;
+ 
+diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
+index b85733942053f..5249b64f4fc54 100644
+--- a/drivers/net/ethernet/moxa/moxart_ether.c
++++ b/drivers/net/ethernet/moxa/moxart_ether.c
+@@ -481,13 +481,12 @@ static int moxart_mac_probe(struct platform_device *pdev)
+ 	priv->ndev = ndev;
+ 	priv->pdev = pdev;
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	ndev->base_addr = res->start;
+-	priv->base = devm_ioremap_resource(p_dev, res);
++	priv->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ 	if (IS_ERR(priv->base)) {
+ 		ret = PTR_ERR(priv->base);
+ 		goto init_fail;
+ 	}
++	ndev->base_addr = res->start;
+ 
+ 	spin_lock_init(&priv->txlock);
+ 
+diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+index 3dc29b282a884..45b9ba1ec7607 100644
+--- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
++++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+@@ -108,7 +108,7 @@ static int pch_ptp_match(struct sk_buff *skb, u16 uid_hi, u32 uid_lo, u16 seqid)
+ {
+ 	u8 *data = skb->data;
+ 	unsigned int offset;
+-	u16 *hi, *id;
++	u16 hi, id;
+ 	u32 lo;
+ 
+ 	if (ptp_classify_raw(skb) == PTP_CLASS_NONE)
+@@ -119,14 +119,11 @@ static int pch_ptp_match(struct sk_buff *skb, u16 uid_hi, u32 uid_lo, u16 seqid)
+ 	if (skb->len < offset + OFF_PTP_SEQUENCE_ID + sizeof(seqid))
+ 		return 0;
+ 
+-	hi = (u16 *)(data + offset + OFF_PTP_SOURCE_UUID);
+-	id = (u16 *)(data + offset + OFF_PTP_SEQUENCE_ID);
++	hi = get_unaligned_be16(data + offset + OFF_PTP_SOURCE_UUID + 0);
++	lo = get_unaligned_be32(data + offset + OFF_PTP_SOURCE_UUID + 2);
++	id = get_unaligned_be16(data + offset + OFF_PTP_SEQUENCE_ID);
+ 
+-	memcpy(&lo, &hi[1], sizeof(lo));
+-
+-	return (uid_hi == *hi &&
+-		uid_lo == lo &&
+-		seqid  == *id);
++	return (uid_hi == hi && uid_lo == lo && seqid == id);
+ }
+ 
+ static void
+@@ -136,7 +133,6 @@ pch_rx_timestamp(struct pch_gbe_adapter *adapter, struct sk_buff *skb)
+ 	struct pci_dev *pdev;
+ 	u64 ns;
+ 	u32 hi, lo, val;
+-	u16 uid, seq;
+ 
+ 	if (!adapter->hwts_rx_en)
+ 		return;
+@@ -152,10 +148,7 @@ pch_rx_timestamp(struct pch_gbe_adapter *adapter, struct sk_buff *skb)
+ 	lo = pch_src_uuid_lo_read(pdev);
+ 	hi = pch_src_uuid_hi_read(pdev);
+ 
+-	uid = hi & 0xffff;
+-	seq = (hi >> 16) & 0xffff;
+-
+-	if (!pch_ptp_match(skb, htons(uid), htonl(lo), htons(seq)))
++	if (!pch_ptp_match(skb, hi, lo, hi >> 16))
+ 		goto out;
+ 
+ 	ns = pch_rx_snap_read(pdev);
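+
+The get_unaligned_be16/be32 conversions above fix two bugs at once: the
+old u16 casts dereferenced potentially unaligned packet bytes, and the raw
+loads ignored the on-wire big-endian byte order. Portable stand-ins
+(illustrative only, not the kernel helpers):
+
+	#include <stdio.h>
+	#include <stdint.h>
+
+	static uint16_t get_be16(const uint8_t *p)
+	{
+		return ((uint16_t)p[0] << 8) | p[1]; /* no unaligned load */
+	}
+
+	static uint32_t get_be32(const uint8_t *p)
+	{
+		return ((uint32_t)get_be16(p) << 16) | get_be16(p + 2);
+	}
+
+	int main(void)
+	{
+		/* six source-UUID bytes at an odd offset in a PTP header */
+		const uint8_t uuid[6] = { 0x00, 0x1b, 0x19, 0x00, 0x01, 0x02 };
+
+		printf("hi=0x%04x lo=0x%08x\n", get_be16(uuid),
+		       get_be32(uuid + 2));
+		return 0;
+	}
+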
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 2ee72dc431cd5..a0d4e052a79ed 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -3510,7 +3510,6 @@ static void rtl_hw_start_8106(struct rtl8169_private *tp)
+ 	rtl_eri_write(tp, 0x1b0, ERIAR_MASK_0011, 0x0000);
+ 
+ 	rtl_pcie_state_l2l3_disable(tp);
+-	rtl_hw_aspm_clkreq_enable(tp, true);
+ }
+ 
+ DECLARE_RTL_COND(rtl_mac_ocp_e00e_cond)
+diff --git a/drivers/net/ethernet/sfc/ef10_sriov.c b/drivers/net/ethernet/sfc/ef10_sriov.c
+index 21fa6c0e88734..84041cd587d78 100644
+--- a/drivers/net/ethernet/sfc/ef10_sriov.c
++++ b/drivers/net/ethernet/sfc/ef10_sriov.c
+@@ -402,12 +402,17 @@ fail1:
+ 	return rc;
+ }
+ 
++/* Disable SRIOV and remove VFs
++ * If some VFs are attached to a guest (using Xen only), nothing is
++ * done if force=false; if force=true, the vports are freed (for the
++ * non-attached ones only), but SRIOV is not disabled and the VFs are
++ * not removed in either case.
++ */
+ static int efx_ef10_pci_sriov_disable(struct efx_nic *efx, bool force)
+ {
+ 	struct pci_dev *dev = efx->pci_dev;
+-	unsigned int vfs_assigned = 0;
+-
+-	vfs_assigned = pci_vfs_assigned(dev);
++	unsigned int vfs_assigned = pci_vfs_assigned(dev);
++	int rc = 0;
+ 
+ 	if (vfs_assigned && !force) {
+ 		netif_info(efx, drv, efx->net_dev, "VFs are assigned to guests; "
+@@ -417,10 +422,12 @@ static int efx_ef10_pci_sriov_disable(struct efx_nic *efx, bool force)
+ 
+ 	if (!vfs_assigned)
+ 		pci_disable_sriov(dev);
++	else
++		rc = -EBUSY;
+ 
+ 	efx_ef10_sriov_free_vf_vswitching(efx);
+ 	efx->vf_count = 0;
+-	return 0;
++	return rc;
+ }
+ 
+ int efx_ef10_sriov_configure(struct efx_nic *efx, int num_vfs)
+@@ -439,7 +446,6 @@ int efx_ef10_sriov_init(struct efx_nic *efx)
+ void efx_ef10_sriov_fini(struct efx_nic *efx)
+ {
+ 	struct efx_ef10_nic_data *nic_data = efx->nic_data;
+-	unsigned int i;
+ 	int rc;
+ 
+ 	if (!nic_data->vf) {
+@@ -449,14 +455,7 @@ void efx_ef10_sriov_fini(struct efx_nic *efx)
+ 		return;
+ 	}
+ 
+-	/* Remove any VFs in the host */
+-	for (i = 0; i < efx->vf_count; ++i) {
+-		struct efx_nic *vf_efx = nic_data->vf[i].efx;
+-
+-		if (vf_efx)
+-			vf_efx->pci_dev->driver->remove(vf_efx->pci_dev);
+-	}
+-
++	/* Disable SRIOV and remove any VFs in the host */
+ 	rc = efx_ef10_pci_sriov_disable(efx, true);
+ 	if (rc)
+ 		netif_dbg(efx, drv, efx->net_dev,
+diff --git a/drivers/net/ethernet/sgi/ioc3-eth.c b/drivers/net/ethernet/sgi/ioc3-eth.c
+index 6eef0f45b133b..2b29fd4cbdf44 100644
+--- a/drivers/net/ethernet/sgi/ioc3-eth.c
++++ b/drivers/net/ethernet/sgi/ioc3-eth.c
+@@ -835,6 +835,10 @@ static int ioc3eth_probe(struct platform_device *pdev)
+ 	int err;
+ 
+ 	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!regs) {
++		dev_err(&pdev->dev, "Invalid resource\n");
++		return -EINVAL;
++	}
+ 	/* get mac addr from one wire prom */
+ 	if (ioc3eth_get_mac_addr(regs, mac_addr))
+ 		return -EPROBE_DEFER; /* not available yet */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+index b750074f8f9cf..e293bf1ce9f37 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+@@ -503,6 +503,12 @@ int stmmac_mdio_register(struct net_device *ndev)
+ 		found = 1;
+ 	}
+ 
++	if (!found && !mdio_node) {
++		dev_warn(dev, "No PHY found\n");
++		err = -ENODEV;
++		goto no_phy_found;
++	}
++
+ 	/* Try to probe the XPCS by scanning all addresses. */
+ 	if (priv->hw->xpcs) {
+ 		struct mdio_xpcs_args *xpcs = &priv->hw->xpcs_args;
+@@ -511,6 +517,7 @@ int stmmac_mdio_register(struct net_device *ndev)
+ 
+ 		xpcs->bus = new_bus;
+ 
++		found = 0;
+ 		for (addr = 0; addr < max_addr; addr++) {
+ 			xpcs->addr = addr;
+ 
+@@ -520,13 +527,12 @@ int stmmac_mdio_register(struct net_device *ndev)
+ 				break;
+ 			}
+ 		}
+-	}
+ 
+-	if (!found && !mdio_node) {
+-		dev_warn(dev, "No PHY found\n");
+-		mdiobus_unregister(new_bus);
+-		mdiobus_free(new_bus);
+-		return -ENODEV;
++		if (!found && !mdio_node) {
++			dev_warn(dev, "No XPCS found\n");
++			err = -ENODEV;
++			goto no_xpcs_found;
++		}
+ 	}
+ 
+ bus_register_done:
+@@ -534,6 +540,9 @@ bus_register_done:
+ 
+ 	return 0;
+ 
++no_xpcs_found:
++no_phy_found:
++	mdiobus_unregister(new_bus);
+ bus_register_fail:
+ 	mdiobus_free(new_bus);
+ 	return err;
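
A minimal sketch of the goto-unwind ordering the stmmac hunk above converges on: the later failure labels (no PHY / no XPCS found) fall through a single cleanup chain that unregisters before freeing, so each step is undone exactly once. register_bus()/unregister_bus() are hypothetical stand-ins for the mdiobus calls:

#include <stdio.h>
#include <stdlib.h>

struct bus { int registered; };

static struct bus *alloc_bus(void) { return calloc(1, sizeof(struct bus)); }
static int register_bus(struct bus *b) { b->registered = 1; return 0; }
static void unregister_bus(struct bus *b) { b->registered = 0; }

static int probe(int have_phy)
{
	struct bus *b = alloc_bus();
	int err;

	if (!b)
		return -1;

	err = register_bus(b);
	if (err)
		goto bus_register_fail;

	if (!have_phy) {
		err = -2;		/* -ENODEV analogue */
		goto no_phy_found;
	}

	return 0;

no_phy_found:
	unregister_bus(b);	/* undo registration first ... */
bus_register_fail:
	free(b);		/* ... then release the memory */
	return err;
}

int main(void)
{
	printf("ok=%d, no-phy=%d\n", probe(1), probe(0));
	return 0;
}
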
+diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+index d9d58a7dabee8..b06377fe72938 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+@@ -1189,9 +1189,8 @@ static int xemaclite_of_probe(struct platform_device *ofdev)
+ 	}
+ 
+ 	dev_info(dev,
+-		 "Xilinx EmacLite at 0x%08lX mapped to 0x%08lX, irq=%d\n",
+-		 (unsigned long __force)ndev->mem_start,
+-		 (unsigned long __force)lp->base_addr, ndev->irq);
++		 "Xilinx EmacLite at 0x%08lX mapped to 0x%p, irq=%d\n",
++		 (unsigned long __force)ndev->mem_start, lp->base_addr, ndev->irq);
+ 	return 0;
+ 
+ error:
+diff --git a/drivers/net/ethernet/xscale/ixp4xx_eth.c b/drivers/net/ethernet/xscale/ixp4xx_eth.c
+index cb89323855d84..1ecceeb9700db 100644
+--- a/drivers/net/ethernet/xscale/ixp4xx_eth.c
++++ b/drivers/net/ethernet/xscale/ixp4xx_eth.c
+@@ -1531,8 +1531,8 @@ static int ixp4xx_eth_probe(struct platform_device *pdev)
+ 		phydev = of_phy_get_and_connect(ndev, np, ixp4xx_adjust_link);
+ 	} else {
+ 		phydev = mdiobus_get_phy(mdio_bus, plat->phy);
+-		if (IS_ERR(phydev)) {
+-			err = PTR_ERR(phydev);
++		if (!phydev) {
++			err = -ENODEV;
+ 			dev_err(dev, "could not connect phydev (%d)\n", err);
+ 			goto err_free_mem;
+ 		}
+diff --git a/drivers/net/fjes/fjes_main.c b/drivers/net/fjes/fjes_main.c
+index 466622664424d..e449d94661225 100644
+--- a/drivers/net/fjes/fjes_main.c
++++ b/drivers/net/fjes/fjes_main.c
+@@ -1262,6 +1262,10 @@ static int fjes_probe(struct platform_device *plat_dev)
+ 	adapter->interrupt_watch_enable = false;
+ 
+ 	res = platform_get_resource(plat_dev, IORESOURCE_MEM, 0);
++	if (!res) {
++		err = -EINVAL;
++		goto err_free_control_wq;
++	}
+ 	hw->hw_res.start = res->start;
+ 	hw->hw_res.size = resource_size(res);
+ 	hw->hw_res.irq = platform_get_irq(plat_dev, 0);
+diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
+index 9915603ed10ba..c4ad5f20496ed 100644
+--- a/drivers/net/ipa/ipa_main.c
++++ b/drivers/net/ipa/ipa_main.c
+@@ -529,6 +529,7 @@ static int ipa_firmware_load(struct device *dev)
+ 	}
+ 
+ 	ret = of_address_to_resource(node, 0, &res);
++	of_node_put(node);
+ 	if (ret) {
+ 		dev_err(dev, "error %d getting \"memory-region\" resource\n",
+ 			ret);
+diff --git a/drivers/net/mdio/mdio-ipq8064.c b/drivers/net/mdio/mdio-ipq8064.c
+index 8fe8f0119fc19..8de11f35ac1e3 100644
+--- a/drivers/net/mdio/mdio-ipq8064.c
++++ b/drivers/net/mdio/mdio-ipq8064.c
+@@ -7,10 +7,9 @@
+ 
+ #include <linux/delay.h>
+ #include <linux/kernel.h>
+-#include <linux/mfd/syscon.h>
+ #include <linux/module.h>
+ #include <linux/of_mdio.h>
+-#include <linux/phy.h>
++#include <linux/of_address.h>
+ #include <linux/platform_device.h>
+ #include <linux/regmap.h>
+ 
+@@ -96,14 +95,34 @@ ipq8064_mdio_write(struct mii_bus *bus, int phy_addr, int reg_offset, u16 data)
+ 	return ipq8064_mdio_wait_busy(priv);
+ }
+ 
++static const struct regmap_config ipq8064_mdio_regmap_config = {
++	.reg_bits = 32,
++	.reg_stride = 4,
++	.val_bits = 32,
++	.can_multi_write = false,
++	/* the mdio lock is used by any user of this mdio driver */
++	.disable_locking = true,
++
++	.cache_type = REGCACHE_NONE,
++};
++
+ static int
+ ipq8064_mdio_probe(struct platform_device *pdev)
+ {
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct ipq8064_mdio *priv;
++	struct resource res;
+ 	struct mii_bus *bus;
++	void __iomem *base;
+ 	int ret;
+ 
++	if (of_address_to_resource(np, 0, &res))
++		return -ENOMEM;
++
++	base = ioremap(res.start, resource_size(&res));
++	if (!base)
++		return -ENOMEM;
++
+ 	bus = devm_mdiobus_alloc_size(&pdev->dev, sizeof(*priv));
+ 	if (!bus)
+ 		return -ENOMEM;
+@@ -115,15 +134,10 @@ ipq8064_mdio_probe(struct platform_device *pdev)
+ 	bus->parent = &pdev->dev;
+ 
+ 	priv = bus->priv;
+-	priv->base = device_node_to_regmap(np);
+-	if (IS_ERR(priv->base)) {
+-		if (priv->base == ERR_PTR(-EPROBE_DEFER))
+-			return -EPROBE_DEFER;
+-
+-		dev_err(&pdev->dev, "error getting device regmap, error=%pe\n",
+-			priv->base);
++	priv->base = devm_regmap_init_mmio(&pdev->dev, base,
++					   &ipq8064_mdio_regmap_config);
++	if (IS_ERR(priv->base))
+ 		return PTR_ERR(priv->base);
+-	}
+ 
+ 	ret = of_mdiobus_register(bus, np);
+ 	if (ret)
+diff --git a/drivers/net/mdio/mdio-mux-bcm-iproc.c b/drivers/net/mdio/mdio-mux-bcm-iproc.c
+index 03261e6b9ceb5..aa29d6bdbdf25 100644
+--- a/drivers/net/mdio/mdio-mux-bcm-iproc.c
++++ b/drivers/net/mdio/mdio-mux-bcm-iproc.c
+@@ -187,7 +187,9 @@ static int mdio_mux_iproc_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 	md->dev = &pdev->dev;
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	md->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
++	if (IS_ERR(md->base))
++		return PTR_ERR(md->base);
+ 	if (res->start & 0xfff) {
+ 		/* For backward compatibility in case the
+ 		 * base address is specified with an offset.
+@@ -196,9 +198,6 @@ static int mdio_mux_iproc_probe(struct platform_device *pdev)
+ 		res->start &= ~0xfff;
+ 		res->end = res->start + MDIO_REG_ADDR_SPACE_SIZE - 1;
+ 	}
+-	md->base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(md->base))
+-		return PTR_ERR(md->base);
+ 
+ 	md->mii_bus = devm_mdiobus_alloc(&pdev->dev);
+ 	if (!md->mii_bus) {
+diff --git a/drivers/net/phy/nxp-c45-tja11xx.c b/drivers/net/phy/nxp-c45-tja11xx.c
+index 26b9c0d7cb9d5..b7ce0e737333c 100644
+--- a/drivers/net/phy/nxp-c45-tja11xx.c
++++ b/drivers/net/phy/nxp-c45-tja11xx.c
+@@ -546,6 +546,12 @@ static int nxp_c45_config_init(struct phy_device *phydev)
+ 		return ret;
+ 	}
+ 
++	/* Bug workaround for SJA1110 rev B: enable write access
++	 * to MDIO_MMD_PMAPMD
++	 */
++	phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 1);
++	phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 2);
++
+ 	phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, VEND1_PHY_CONFIG,
+ 			 PHY_CONFIG_AUTO);
+ 
+diff --git a/drivers/net/phy/realtek.c b/drivers/net/phy/realtek.c
+index 821e85a973679..7b99a3234c652 100644
+--- a/drivers/net/phy/realtek.c
++++ b/drivers/net/phy/realtek.c
+@@ -357,6 +357,19 @@ static int rtl8211f_config_init(struct phy_device *phydev)
+ 	return 0;
+ }
+ 
++static int rtl821x_resume(struct phy_device *phydev)
++{
++	int ret;
++
++	ret = genphy_resume(phydev);
++	if (ret < 0)
++		return ret;
++
++	msleep(20);
++
++	return 0;
++}
++
+ static int rtl8211e_config_init(struct phy_device *phydev)
+ {
+ 	int ret = 0, oldpage;
+@@ -852,7 +865,7 @@ static struct phy_driver realtek_drvs[] = {
+ 		.config_intr	= &rtl8211f_config_intr,
+ 		.handle_interrupt = rtl8211f_handle_interrupt,
+ 		.suspend	= genphy_suspend,
+-		.resume		= genphy_resume,
++		.resume		= rtl821x_resume,
+ 		.read_page	= rtl821x_read_page,
+ 		.write_page	= rtl821x_write_page,
+ 	}, {
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 78a01c71a17cf..2debb32a1813a 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -721,6 +721,12 @@ static struct sk_buff *receive_small(struct net_device *dev,
+ 	len -= vi->hdr_len;
+ 	stats->bytes += len;
+ 
++	if (unlikely(len > GOOD_PACKET_LEN)) {
++		pr_debug("%s: rx error: len %u exceeds max size %d\n",
++			 dev->name, len, GOOD_PACKET_LEN);
++		dev->stats.rx_length_errors++;
++		goto err_len;
++	}
+ 	rcu_read_lock();
+ 	xdp_prog = rcu_dereference(rq->xdp_prog);
+ 	if (xdp_prog) {
+@@ -824,6 +830,7 @@ err:
+ err_xdp:
+ 	rcu_read_unlock();
+ 	stats->xdp_drops++;
++err_len:
+ 	stats->drops++;
+ 	put_page(page);
+ xdp_xmit:
+@@ -877,6 +884,12 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ 	head_skb = NULL;
+ 	stats->bytes += len - vi->hdr_len;
+ 
++	if (unlikely(len > truesize)) {
++		pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
++			 dev->name, len, (unsigned long)ctx);
++		dev->stats.rx_length_errors++;
++		goto err_skb;
++	}
+ 	rcu_read_lock();
+ 	xdp_prog = rcu_dereference(rq->xdp_prog);
+ 	if (xdp_prog) {
+@@ -1004,13 +1017,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ 	}
+ 	rcu_read_unlock();
+ 
+-	if (unlikely(len > truesize)) {
+-		pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
+-			 dev->name, len, (unsigned long)ctx);
+-		dev->stats.rx_length_errors++;
+-		goto err_skb;
+-	}
+-
+ 	head_skb = page_to_skb(vi, rq, page, offset, len, truesize, !xdp_prog,
+ 			       metasize, headroom);
+ 	curr_skb = head_skb;
+@@ -1619,7 +1625,7 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
+ 	if (virtio_net_hdr_from_skb(skb, &hdr->hdr,
+ 				    virtio_is_little_endian(vi->vdev), false,
+ 				    0))
+-		BUG();
++		return -EPROTO;
+ 
+ 	if (vi->mergeable_rx_bufs)
+ 		hdr->num_buffers = 0;
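
A minimal sketch of the validate-before-process ordering the virtio_net hunks above introduce: the frame length is checked against the limit before any XDP or skb handling, so later stages never see an over-long frame. GOOD_PACKET_LEN and the stats struct here are illustrative, not the driver's:

#include <stdio.h>

#define GOOD_PACKET_LEN 1514

struct rx_stats { unsigned long rx_length_errors, drops; };

static int receive_frame(struct rx_stats *st, unsigned int len)
{
	/* Reject before any further processing, mirroring the err_len path. */
	if (len > GOOD_PACKET_LEN) {
		st->rx_length_errors++;
		st->drops++;
		return -1;
	}
	/* ... hand the frame to the normal receive path ... */
	return 0;
}

int main(void)
{
	struct rx_stats st = { 0, 0 };

	receive_frame(&st, 9000);
	printf("len_errs=%lu drops=%lu\n", st.rx_length_errors, st.drops);
	return 0;
}
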
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 607d5d564928d..141d9fc299b01 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -3800,6 +3800,7 @@ static int iwl_mvm_roc(struct ieee80211_hw *hw,
+ 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ 	struct cfg80211_chan_def chandef;
+ 	struct iwl_mvm_phy_ctxt *phy_ctxt;
++	bool band_change_removal;
+ 	int ret, i;
+ 
+ 	IWL_DEBUG_MAC80211(mvm, "enter (%d, %d, %d)\n", channel->hw_value,
+@@ -3880,19 +3881,30 @@ static int iwl_mvm_roc(struct ieee80211_hw *hw,
+ 	cfg80211_chandef_create(&chandef, channel, NL80211_CHAN_NO_HT);
+ 
+ 	/*
+-	 * Change the PHY context configuration as it is currently referenced
+-	 * only by the P2P Device MAC
++	 * Check if the remain-on-channel is on a different band and that
++	 * requires context removal, see iwl_mvm_phy_ctxt_changed(). If
++	 * so, we'll need to release and then re-configure here, since we
++	 * must not remove a PHY context that's part of a binding.
+ 	 */
+-	if (mvmvif->phy_ctxt->ref == 1) {
++	band_change_removal =
++		fw_has_capa(&mvm->fw->ucode_capa,
++			    IWL_UCODE_TLV_CAPA_BINDING_CDB_SUPPORT) &&
++		mvmvif->phy_ctxt->channel->band != chandef.chan->band;
++
++	if (mvmvif->phy_ctxt->ref == 1 && !band_change_removal) {
++		/*
++		 * Change the PHY context configuration as it is currently
++		 * referenced only by the P2P Device MAC (and we can modify it)
++		 */
+ 		ret = iwl_mvm_phy_ctxt_changed(mvm, mvmvif->phy_ctxt,
+ 					       &chandef, 1, 1);
+ 		if (ret)
+ 			goto out_unlock;
+ 	} else {
+ 		/*
+-		 * The PHY context is shared with other MACs. Need to remove the
+-		 * P2P Device from the binding, allocate an new PHY context and
+-		 * create a new binding
++		 * The PHY context is shared with other MACs (or we're trying to
++		 * switch bands), so remove the P2P Device from the binding,
++		 * allocate a new PHY context and create a new binding.
+ 		 */
+ 		phy_ctxt = iwl_mvm_get_free_phy_ctxt(mvm);
+ 		if (!phy_ctxt) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+index 4d9d4d6892fc7..02cf521338578 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+@@ -1827,7 +1827,8 @@ int iwl_mvm_disable_beacon_filter(struct iwl_mvm *mvm,
+ void iwl_mvm_update_smps(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ 				enum iwl_mvm_smps_type_request req_type,
+ 				enum ieee80211_smps_mode smps_request);
+-bool iwl_mvm_rx_diversity_allowed(struct iwl_mvm *mvm);
++bool iwl_mvm_rx_diversity_allowed(struct iwl_mvm *mvm,
++				  struct iwl_mvm_phy_ctxt *ctxt);
+ 
+ /* Low latency */
+ int iwl_mvm_update_low_latency(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
+index 0fd51f6aa2061..4ed2338027d13 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2012-2014, 2018-2020 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2021 Intel Corporation
+  * Copyright (C) 2013-2014 Intel Mobile Communications GmbH
+  * Copyright (C) 2017 Intel Deutschland GmbH
+  */
+@@ -76,6 +76,7 @@ static void iwl_mvm_phy_ctxt_cmd_hdr(struct iwl_mvm_phy_ctxt *ctxt,
+ }
+ 
+ static void iwl_mvm_phy_ctxt_set_rxchain(struct iwl_mvm *mvm,
++					 struct iwl_mvm_phy_ctxt *ctxt,
+ 					 __le32 *rxchain_info,
+ 					 u8 chains_static,
+ 					 u8 chains_dynamic)
+@@ -93,7 +94,7 @@ static void iwl_mvm_phy_ctxt_set_rxchain(struct iwl_mvm *mvm,
+ 	 * between the two antennas is sufficiently different to impact
+ 	 * performance.
+ 	 */
+-	if (active_cnt == 1 && iwl_mvm_rx_diversity_allowed(mvm)) {
++	if (active_cnt == 1 && iwl_mvm_rx_diversity_allowed(mvm, ctxt)) {
+ 		idle_cnt = 2;
+ 		active_cnt = 2;
+ 	}
+@@ -113,6 +114,7 @@ static void iwl_mvm_phy_ctxt_set_rxchain(struct iwl_mvm *mvm,
+  * Add the phy configuration to the PHY context command
+  */
+ static void iwl_mvm_phy_ctxt_cmd_data_v1(struct iwl_mvm *mvm,
++					 struct iwl_mvm_phy_ctxt *ctxt,
+ 					 struct iwl_phy_context_cmd_v1 *cmd,
+ 					 struct cfg80211_chan_def *chandef,
+ 					 u8 chains_static, u8 chains_dynamic)
+@@ -123,7 +125,7 @@ static void iwl_mvm_phy_ctxt_cmd_data_v1(struct iwl_mvm *mvm,
+ 	/* Set the channel info data */
+ 	iwl_mvm_set_chan_info_chandef(mvm, &cmd->ci, chandef);
+ 
+-	iwl_mvm_phy_ctxt_set_rxchain(mvm, &tail->rxchain_info,
++	iwl_mvm_phy_ctxt_set_rxchain(mvm, ctxt, &tail->rxchain_info,
+ 				     chains_static, chains_dynamic);
+ 
+ 	tail->txchain_info = cpu_to_le32(iwl_mvm_get_valid_tx_ant(mvm));
+@@ -133,6 +135,7 @@ static void iwl_mvm_phy_ctxt_cmd_data_v1(struct iwl_mvm *mvm,
+  * Add the phy configuration to the PHY context command
+  */
+ static void iwl_mvm_phy_ctxt_cmd_data(struct iwl_mvm *mvm,
++				      struct iwl_mvm_phy_ctxt *ctxt,
+ 				      struct iwl_phy_context_cmd *cmd,
+ 				      struct cfg80211_chan_def *chandef,
+ 				      u8 chains_static, u8 chains_dynamic)
+@@ -143,7 +146,7 @@ static void iwl_mvm_phy_ctxt_cmd_data(struct iwl_mvm *mvm,
+ 	/* Set the channel info data */
+ 	iwl_mvm_set_chan_info_chandef(mvm, &cmd->ci, chandef);
+ 
+-	iwl_mvm_phy_ctxt_set_rxchain(mvm, &cmd->rxchain_info,
++	iwl_mvm_phy_ctxt_set_rxchain(mvm, ctxt, &cmd->rxchain_info,
+ 				     chains_static, chains_dynamic);
+ }
+ 
+@@ -170,7 +173,7 @@ static int iwl_mvm_phy_ctxt_apply(struct iwl_mvm *mvm,
+ 		iwl_mvm_phy_ctxt_cmd_hdr(ctxt, &cmd, action);
+ 
+ 		/* Set the command data */
+-		iwl_mvm_phy_ctxt_cmd_data(mvm, &cmd, chandef,
++		iwl_mvm_phy_ctxt_cmd_data(mvm, ctxt, &cmd, chandef,
+ 					  chains_static,
+ 					  chains_dynamic);
+ 
+@@ -186,7 +189,7 @@ static int iwl_mvm_phy_ctxt_apply(struct iwl_mvm *mvm,
+ 					 action);
+ 
+ 		/* Set the command data */
+-		iwl_mvm_phy_ctxt_cmd_data_v1(mvm, &cmd, chandef,
++		iwl_mvm_phy_ctxt_cmd_data_v1(mvm, ctxt, &cmd, chandef,
+ 					     chains_static,
+ 					     chains_dynamic);
+ 		ret = iwl_mvm_send_cmd_pdu(mvm, PHY_CONTEXT_CMD,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index 83342a6a6d5b5..f19081a6f0460 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -310,6 +310,8 @@ static void iwl_mvm_te_handle_notif(struct iwl_mvm *mvm,
+ 			 * and know the dtim period.
+ 			 */
+ 			iwl_mvm_te_check_disconnect(mvm, te_data->vif,
++				!te_data->vif->bss_conf.assoc ?
++				"Not associated and the time event is over already..." :
+ 				"No beacon heard and the time event is over already...");
+ 			break;
+ 		default:
+@@ -808,6 +810,8 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
+ 			 * and know the dtim period.
+ 			 */
+ 			iwl_mvm_te_check_disconnect(mvm, vif,
++						    !vif->bss_conf.assoc ?
++						    "Not associated and the session protection is over already..." :
+ 						    "No beacon heard and the session protection is over already...");
+ 			spin_lock_bh(&mvm->time_event_lock);
+ 			iwl_mvm_te_clear_data(mvm, te_data);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
+index c566be99a4c74..a89eb7c40ee7c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
+@@ -683,23 +683,37 @@ void iwl_mvm_accu_radio_stats(struct iwl_mvm *mvm)
+ 	mvm->accu_radio_stats.on_time_scan += mvm->radio_stats.on_time_scan;
+ }
+ 
++struct iwl_mvm_diversity_iter_data {
++	struct iwl_mvm_phy_ctxt *ctxt;
++	bool result;
++};
++
+ static void iwl_mvm_diversity_iter(void *_data, u8 *mac,
+ 				   struct ieee80211_vif *vif)
+ {
+ 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+-	bool *result = _data;
++	struct iwl_mvm_diversity_iter_data *data = _data;
+ 	int i;
+ 
++	if (mvmvif->phy_ctxt != data->ctxt)
++		return;
++
+ 	for (i = 0; i < NUM_IWL_MVM_SMPS_REQ; i++) {
+ 		if (mvmvif->smps_requests[i] == IEEE80211_SMPS_STATIC ||
+-		    mvmvif->smps_requests[i] == IEEE80211_SMPS_DYNAMIC)
+-			*result = false;
++		    mvmvif->smps_requests[i] == IEEE80211_SMPS_DYNAMIC) {
++			data->result = false;
++			break;
++		}
+ 	}
+ }
+ 
+-bool iwl_mvm_rx_diversity_allowed(struct iwl_mvm *mvm)
++bool iwl_mvm_rx_diversity_allowed(struct iwl_mvm *mvm,
++				  struct iwl_mvm_phy_ctxt *ctxt)
+ {
+-	bool result = true;
++	struct iwl_mvm_diversity_iter_data data = {
++		.ctxt = ctxt,
++		.result = true,
++	};
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+@@ -711,9 +725,9 @@ bool iwl_mvm_rx_diversity_allowed(struct iwl_mvm *mvm)
+ 
+ 	ieee80211_iterate_active_interfaces_atomic(
+ 			mvm->hw, IEEE80211_IFACE_ITER_NORMAL,
+-			iwl_mvm_diversity_iter, &result);
++			iwl_mvm_diversity_iter, &data);
+ 
+-	return result;
++	return data.result;
+ }
+ 
+ void iwl_mvm_send_low_latency_cmd(struct iwl_mvm *mvm,
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+index cecc32e7dbe8a..2dbc51daa2f8a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+@@ -79,7 +79,6 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 	struct iwl_prph_scratch *prph_scratch;
+ 	struct iwl_prph_scratch_ctrl_cfg *prph_sc_ctrl;
+ 	struct iwl_prph_info *prph_info;
+-	void *iml_img;
+ 	u32 control_flags = 0;
+ 	int ret;
+ 	int cmdq_size = max_t(u32, IWL_CMD_QUEUE_SIZE,
+@@ -187,14 +186,15 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 	trans_pcie->prph_scratch = prph_scratch;
+ 
+ 	/* Allocate IML */
+-	iml_img = dma_alloc_coherent(trans->dev, trans->iml_len,
+-				     &trans_pcie->iml_dma_addr, GFP_KERNEL);
+-	if (!iml_img) {
++	trans_pcie->iml = dma_alloc_coherent(trans->dev, trans->iml_len,
++					     &trans_pcie->iml_dma_addr,
++					     GFP_KERNEL);
++	if (!trans_pcie->iml) {
+ 		ret = -ENOMEM;
+ 		goto err_free_ctxt_info;
+ 	}
+ 
+-	memcpy(iml_img, trans->iml, trans->iml_len);
++	memcpy(trans_pcie->iml, trans->iml, trans->iml_len);
+ 
+ 	iwl_enable_fw_load_int_ctx_info(trans);
+ 
+@@ -243,6 +243,11 @@ void iwl_pcie_ctxt_info_gen3_free(struct iwl_trans *trans)
+ 	trans_pcie->ctxt_info_dma_addr = 0;
+ 	trans_pcie->ctxt_info_gen3 = NULL;
+ 
++	dma_free_coherent(trans->dev, trans->iml_len, trans_pcie->iml,
++			  trans_pcie->iml_dma_addr);
++	trans_pcie->iml_dma_addr = 0;
++	trans_pcie->iml = NULL;
++
+ 	iwl_pcie_ctxt_info_free_fw_img(trans);
+ 
+ 	dma_free_coherent(trans->dev, sizeof(*trans_pcie->prph_scratch),
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+index 76a512cd2e5c8..3f7cfbf707fdc 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+@@ -279,6 +279,8 @@ struct cont_rec {
+  *	Context information addresses will be taken from here.
+  *	This is driver's local copy for keeping track of size and
+  *	count for allocating and freeing the memory.
++ * @iml: image loader image virtual address
++ * @iml_dma_addr: image loader image DMA address
+  * @trans: pointer to the generic transport area
+  * @scd_base_addr: scheduler sram base address in SRAM
+  * @kw: keep warm address
+@@ -329,6 +331,7 @@ struct iwl_trans_pcie {
+ 	};
+ 	struct iwl_prph_info *prph_info;
+ 	struct iwl_prph_scratch *prph_scratch;
++	void *iml;
+ 	dma_addr_t ctxt_info_dma_addr;
+ 	dma_addr_t prph_info_dma_addr;
+ 	dma_addr_t prph_scratch_dma_addr;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+index 1bcd36e9e0086..9ce195d80c512 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+@@ -254,7 +254,8 @@ void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans, u32 scd_addr)
+ 	/* now that we got alive we can free the fw image & the context info.
+ 	 * paging memory cannot be freed included since FW will still use it
+ 	 */
+-	iwl_pcie_ctxt_info_free(trans);
++	if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
++		iwl_pcie_ctxt_info_free(trans);
+ 
+ 	/*
+ 	 * Re-enable all the interrupts, including the RF-Kill one, now that
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 7a6fd46d0c6e8..ceb089e4a136a 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -626,6 +626,7 @@ struct mac80211_hwsim_data {
+ 	u32 ciphers[ARRAY_SIZE(hwsim_ciphers)];
+ 
+ 	struct mac_address addresses[2];
++	struct ieee80211_chanctx_conf *chanctx;
+ 	int channels, idx;
+ 	bool use_chanctx;
+ 	bool destroy_on_close;
+@@ -1257,7 +1258,8 @@ static inline u16 trans_tx_rate_flags_ieee2hwsim(struct ieee80211_tx_rate *rate)
+ 
+ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw,
+ 				       struct sk_buff *my_skb,
+-				       int dst_portid)
++				       int dst_portid,
++				       struct ieee80211_channel *channel)
+ {
+ 	struct sk_buff *skb;
+ 	struct mac80211_hwsim_data *data = hw->priv;
+@@ -1312,7 +1314,7 @@ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw,
+ 	if (nla_put_u32(skb, HWSIM_ATTR_FLAGS, hwsim_flags))
+ 		goto nla_put_failure;
+ 
+-	if (nla_put_u32(skb, HWSIM_ATTR_FREQ, data->channel->center_freq))
++	if (nla_put_u32(skb, HWSIM_ATTR_FREQ, channel->center_freq))
+ 		goto nla_put_failure;
+ 
+ 	/* We get the tx control (rate and retries) info*/
+@@ -1659,7 +1661,7 @@ static void mac80211_hwsim_tx(struct ieee80211_hw *hw,
+ 	_portid = READ_ONCE(data->wmediumd);
+ 
+ 	if (_portid || hwsim_virtio_enabled)
+-		return mac80211_hwsim_tx_frame_nl(hw, skb, _portid);
++		return mac80211_hwsim_tx_frame_nl(hw, skb, _portid, channel);
+ 
+ 	/* NO wmediumd detected, perfect medium simulation */
+ 	data->tx_pkts++;
+@@ -1775,7 +1777,7 @@ static void mac80211_hwsim_tx_frame(struct ieee80211_hw *hw,
+ 	mac80211_hwsim_monitor_rx(hw, skb, chan);
+ 
+ 	if (_pid || hwsim_virtio_enabled)
+-		return mac80211_hwsim_tx_frame_nl(hw, skb, _pid);
++		return mac80211_hwsim_tx_frame_nl(hw, skb, _pid, chan);
+ 
+ 	mac80211_hwsim_tx_frame_no_nl(hw, skb, chan);
+ 	dev_kfree_skb(skb);
+@@ -2514,6 +2516,11 @@ static int mac80211_hwsim_croc(struct ieee80211_hw *hw,
+ static int mac80211_hwsim_add_chanctx(struct ieee80211_hw *hw,
+ 				      struct ieee80211_chanctx_conf *ctx)
+ {
++	struct mac80211_hwsim_data *hwsim = hw->priv;
++
++	mutex_lock(&hwsim->mutex);
++	hwsim->chanctx = ctx;
++	mutex_unlock(&hwsim->mutex);
+ 	hwsim_set_chanctx_magic(ctx);
+ 	wiphy_dbg(hw->wiphy,
+ 		  "add channel context control: %d MHz/width: %d/cfreqs:%d/%d MHz\n",
+@@ -2525,6 +2532,11 @@ static int mac80211_hwsim_add_chanctx(struct ieee80211_hw *hw,
+ static void mac80211_hwsim_remove_chanctx(struct ieee80211_hw *hw,
+ 					  struct ieee80211_chanctx_conf *ctx)
+ {
++	struct mac80211_hwsim_data *hwsim = hw->priv;
++
++	mutex_lock(&hwsim->mutex);
++	hwsim->chanctx = NULL;
++	mutex_unlock(&hwsim->mutex);
+ 	wiphy_dbg(hw->wiphy,
+ 		  "remove channel context control: %d MHz/width: %d/cfreqs:%d/%d MHz\n",
+ 		  ctx->def.chan->center_freq, ctx->def.width,
+@@ -2537,6 +2549,11 @@ static void mac80211_hwsim_change_chanctx(struct ieee80211_hw *hw,
+ 					  struct ieee80211_chanctx_conf *ctx,
+ 					  u32 changed)
+ {
++	struct mac80211_hwsim_data *hwsim = hw->priv;
++
++	mutex_lock(&hwsim->mutex);
++	hwsim->chanctx = ctx;
++	mutex_unlock(&hwsim->mutex);
+ 	hwsim_check_chanctx_magic(ctx);
+ 	wiphy_dbg(hw->wiphy,
+ 		  "change channel context control: %d MHz/width: %d/cfreqs:%d/%d MHz\n",
+@@ -3129,6 +3146,7 @@ static int mac80211_hwsim_new_radio(struct genl_info *info,
+ 		hw->wiphy->max_remain_on_channel_duration = 1000;
+ 		data->if_combination.radar_detect_widths = 0;
+ 		data->if_combination.num_different_channels = data->channels;
++		data->chanctx = NULL;
+ 	} else {
+ 		data->if_combination.num_different_channels = 1;
+ 		data->if_combination.radar_detect_widths =
+@@ -3638,6 +3656,7 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
+ 	int frame_data_len;
+ 	void *frame_data;
+ 	struct sk_buff *skb = NULL;
++	struct ieee80211_channel *channel = NULL;
+ 
+ 	if (!info->attrs[HWSIM_ATTR_ADDR_RECEIVER] ||
+ 	    !info->attrs[HWSIM_ATTR_FRAME] ||
+@@ -3664,6 +3683,17 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
+ 	if (!data2)
+ 		goto out;
+ 
++	if (data2->use_chanctx) {
++		if (data2->tmp_chan)
++			channel = data2->tmp_chan;
++		else if (data2->chanctx)
++			channel = data2->chanctx->def.chan;
++	} else {
++		channel = data2->channel;
++	}
++	if (!channel)
++		goto out;
++
+ 	if (!hwsim_virtio_enabled) {
+ 		if (hwsim_net_get_netgroup(genl_info_net(info)) !=
+ 		    data2->netgroup)
+@@ -3675,7 +3705,7 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
+ 
+ 	/* check if radio is configured properly */
+ 
+-	if (data2->idle || !data2->started)
++	if ((data2->idle && !data2->tmp_chan) || !data2->started)
+ 		goto out;
+ 
+ 	/* A frame is received from user space */
+@@ -3688,18 +3718,16 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
+ 		mutex_lock(&data2->mutex);
+ 		rx_status.freq = nla_get_u32(info->attrs[HWSIM_ATTR_FREQ]);
+ 
+-		if (rx_status.freq != data2->channel->center_freq &&
+-		    (!data2->tmp_chan ||
+-		     rx_status.freq != data2->tmp_chan->center_freq)) {
++		if (rx_status.freq != channel->center_freq) {
+ 			mutex_unlock(&data2->mutex);
+ 			goto out;
+ 		}
+ 		mutex_unlock(&data2->mutex);
+ 	} else {
+-		rx_status.freq = data2->channel->center_freq;
++		rx_status.freq = channel->center_freq;
+ 	}
+ 
+-	rx_status.band = data2->channel->band;
++	rx_status.band = channel->band;
+ 	rx_status.rate_idx = nla_get_u32(info->attrs[HWSIM_ATTR_RX_RATE]);
+ 	rx_status.signal = nla_get_u32(info->attrs[HWSIM_ATTR_SIGNAL]);
+ 
+diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c
+index 529dfd8b7ae85..17399d4aa1290 100644
+--- a/drivers/net/wireless/marvell/mwifiex/main.c
++++ b/drivers/net/wireless/marvell/mwifiex/main.c
+@@ -1445,11 +1445,18 @@ static void mwifiex_uninit_sw(struct mwifiex_adapter *adapter)
+ 		if (!priv)
+ 			continue;
+ 		rtnl_lock();
+-		wiphy_lock(adapter->wiphy);
+ 		if (priv->netdev &&
+-		    priv->wdev.iftype != NL80211_IFTYPE_UNSPECIFIED)
++		    priv->wdev.iftype != NL80211_IFTYPE_UNSPECIFIED) {
++			/*
++			 * Close the netdev now, because if we do it later, the
++			 * netdev notifiers will need to acquire the wiphy lock
++			 * again --> deadlock.
++			 */
++			dev_close(priv->wdev.netdev);
++			wiphy_lock(adapter->wiphy);
+ 			mwifiex_del_virtual_intf(adapter->wiphy, &priv->wdev);
+-		wiphy_unlock(adapter->wiphy);
++			wiphy_unlock(adapter->wiphy);
++		}
+ 		rtnl_unlock();
+ 	}
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 72b1cc0ecfda3..e5c324dd24f93 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -349,6 +349,9 @@ mt76_dma_tx_queue_skb(struct mt76_dev *dev, struct mt76_queue *q,
+ 		      struct sk_buff *skb, struct mt76_wcid *wcid,
+ 		      struct ieee80211_sta *sta)
+ {
++	struct ieee80211_tx_status status = {
++		.sta = sta,
++	};
+ 	struct mt76_tx_info tx_info = {
+ 		.skb = skb,
+ 	};
+@@ -360,11 +363,9 @@ mt76_dma_tx_queue_skb(struct mt76_dev *dev, struct mt76_queue *q,
+ 	u8 *txwi;
+ 
+ 	t = mt76_get_txwi(dev);
+-	if (!t) {
+-		hw = mt76_tx_status_get_hw(dev, skb);
+-		ieee80211_free_txskb(hw, skb);
+-		return -ENOMEM;
+-	}
++	if (!t)
++		goto free_skb;
++
+ 	txwi = mt76_get_txwi_ptr(dev, t);
+ 
+ 	skb->prev = skb->next = NULL;
+@@ -427,8 +428,13 @@ free:
+ 	}
+ #endif
+ 
+-	dev_kfree_skb(tx_info.skb);
+ 	mt76_put_txwi(dev, t);
++
++free_skb:
++	status.skb = tx_info.skb;
++	hw = mt76_tx_status_get_hw(dev, tx_info.skb);
++	ieee80211_tx_status_ext(hw, &status);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 36ede65919f8e..0c23edbfbdbbf 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -87,6 +87,22 @@ enum mt76_rxq_id {
+ 	__MT_RXQ_MAX
+ };
+ 
++enum mt76_cipher_type {
++	MT_CIPHER_NONE,
++	MT_CIPHER_WEP40,
++	MT_CIPHER_TKIP,
++	MT_CIPHER_TKIP_NO_MIC,
++	MT_CIPHER_AES_CCMP,
++	MT_CIPHER_WEP104,
++	MT_CIPHER_BIP_CMAC_128,
++	MT_CIPHER_WEP128,
++	MT_CIPHER_WAPI,
++	MT_CIPHER_CCMP_CCX,
++	MT_CIPHER_CCMP_256,
++	MT_CIPHER_GCMP,
++	MT_CIPHER_GCMP_256,
++};
++
+ struct mt76_queue_buf {
+ 	dma_addr_t addr;
+ 	u16 len;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/mac.c b/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
+index fbceb07c5f378..3aa7483e929f1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
+@@ -550,14 +550,27 @@ mt7603_mac_fill_rx(struct mt7603_dev *dev, struct sk_buff *skb)
+ 		u8 *data = (u8 *)rxd;
+ 
+ 		if (status->flag & RX_FLAG_DECRYPTED) {
+-			status->iv[0] = data[5];
+-			status->iv[1] = data[4];
+-			status->iv[2] = data[3];
+-			status->iv[3] = data[2];
+-			status->iv[4] = data[1];
+-			status->iv[5] = data[0];
+-
+-			insert_ccmp_hdr = FIELD_GET(MT_RXD2_NORMAL_FRAG, rxd2);
++			switch (FIELD_GET(MT_RXD2_NORMAL_SEC_MODE, rxd2)) {
++			case MT_CIPHER_AES_CCMP:
++			case MT_CIPHER_CCMP_CCX:
++			case MT_CIPHER_CCMP_256:
++				insert_ccmp_hdr =
++					FIELD_GET(MT_RXD2_NORMAL_FRAG, rxd2);
++				fallthrough;
++			case MT_CIPHER_TKIP:
++			case MT_CIPHER_TKIP_NO_MIC:
++			case MT_CIPHER_GCMP:
++			case MT_CIPHER_GCMP_256:
++				status->iv[0] = data[5];
++				status->iv[1] = data[4];
++				status->iv[2] = data[3];
++				status->iv[3] = data[2];
++				status->iv[4] = data[1];
++				status->iv[5] = data[0];
++				break;
++			default:
++				break;
++			}
+ 		}
+ 
+ 		rxd += 4;
+@@ -831,7 +844,7 @@ void mt7603_wtbl_set_rates(struct mt7603_dev *dev, struct mt7603_sta *sta,
+ 	sta->wcid.tx_info |= MT_WCID_TX_INFO_SET;
+ }
+ 
+-static enum mt7603_cipher_type
++static enum mt76_cipher_type
+ mt7603_mac_get_key_info(struct ieee80211_key_conf *key, u8 *key_data)
+ {
+ 	memset(key_data, 0, 32);
+@@ -863,7 +876,7 @@ mt7603_mac_get_key_info(struct ieee80211_key_conf *key, u8 *key_data)
+ int mt7603_wtbl_set_key(struct mt7603_dev *dev, int wcid,
+ 			struct ieee80211_key_conf *key)
+ {
+-	enum mt7603_cipher_type cipher;
++	enum mt76_cipher_type cipher;
+ 	u32 addr = mt7603_wtbl3_addr(wcid);
+ 	u8 key_data[32];
+ 	int key_len = sizeof(key_data);
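
A minimal sketch of the per-cipher RX switch with fallthrough that the mt7603/mt7615/mt7915/mt7921 hunks share: CCMP-family ciphers additionally request a reconstructed CCMP header, then fall through to the IV extraction common with TKIP/GCMP. The enum values here are illustrative, not the hardware's SEC_MODE encoding:

#include <stdbool.h>
#include <stdio.h>

enum cipher { CIPHER_NONE, CIPHER_TKIP, CIPHER_AES_CCMP, CIPHER_GCMP };

static bool fill_rx(enum cipher c, const unsigned char *data,
		    unsigned char iv[6])
{
	bool insert_ccmp_hdr = false;
	int i;

	switch (c) {
	case CIPHER_AES_CCMP:
		insert_ccmp_hdr = true;
		/* fall through to the common IV extraction */
	case CIPHER_TKIP:
	case CIPHER_GCMP:
		for (i = 0; i < 6; i++)
			iv[i] = data[5 - i];	/* bytes arrive reversed */
		break;
	default:
		break;			/* unknown cipher: leave IV untouched */
	}
	return insert_ccmp_hdr;
}

int main(void)
{
	const unsigned char data[6] = { 0, 1, 2, 3, 4, 5 };
	unsigned char iv[6] = { 0 };

	printf("ccmp hdr: %d, iv[0]=%u\n",
	       fill_rx(CIPHER_AES_CCMP, data, iv), iv[0]);
	return 0;
}
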
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/regs.h b/drivers/net/wireless/mediatek/mt76/mt7603/regs.h
+index 6741e69071940..3b901090b29c6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/regs.h
+@@ -765,16 +765,4 @@ enum {
+ #define MT_WTBL1_OR			(MT_WTBL1_BASE + 0x2300)
+ #define MT_WTBL1_OR_PSM_WRITE		BIT(31)
+ 
+-enum mt7603_cipher_type {
+-	MT_CIPHER_NONE,
+-	MT_CIPHER_WEP40,
+-	MT_CIPHER_TKIP,
+-	MT_CIPHER_TKIP_NO_MIC,
+-	MT_CIPHER_AES_CCMP,
+-	MT_CIPHER_WEP104,
+-	MT_CIPHER_BIP_CMAC_128,
+-	MT_CIPHER_WEP128,
+-	MT_CIPHER_WAPI,
+-};
+-
+ #endif
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/init.c b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
+index d20f05a7717d0..0d01fd3c77b5c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
+@@ -362,7 +362,7 @@ mt7615_init_wiphy(struct ieee80211_hw *hw)
+ 	wiphy->reg_notifier = mt7615_regd_notifier;
+ 
+ 	wiphy->max_sched_scan_plan_interval =
+-		MT76_CONNAC_MAX_SCHED_SCAN_INTERVAL;
++		MT76_CONNAC_MAX_TIME_SCHED_SCAN_INTERVAL;
+ 	wiphy->max_sched_scan_ie_len = IEEE80211_MAX_DATA_LEN;
+ 	wiphy->max_scan_ie_len = MT76_CONNAC_SCAN_IE_LEN;
+ 	wiphy->max_sched_scan_ssids = MT76_CONNAC_MAX_SCHED_SCAN_SSID;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index e2dcfee6be81e..4873154d082ec 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -57,6 +57,33 @@ static const struct mt7615_dfs_radar_spec jp_radar_specs = {
+ 	},
+ };
+ 
++static enum mt76_cipher_type
++mt7615_mac_get_cipher(int cipher)
++{
++	switch (cipher) {
++	case WLAN_CIPHER_SUITE_WEP40:
++		return MT_CIPHER_WEP40;
++	case WLAN_CIPHER_SUITE_WEP104:
++		return MT_CIPHER_WEP104;
++	case WLAN_CIPHER_SUITE_TKIP:
++		return MT_CIPHER_TKIP;
++	case WLAN_CIPHER_SUITE_AES_CMAC:
++		return MT_CIPHER_BIP_CMAC_128;
++	case WLAN_CIPHER_SUITE_CCMP:
++		return MT_CIPHER_AES_CCMP;
++	case WLAN_CIPHER_SUITE_CCMP_256:
++		return MT_CIPHER_CCMP_256;
++	case WLAN_CIPHER_SUITE_GCMP:
++		return MT_CIPHER_GCMP;
++	case WLAN_CIPHER_SUITE_GCMP_256:
++		return MT_CIPHER_GCMP_256;
++	case WLAN_CIPHER_SUITE_SMS4:
++		return MT_CIPHER_WAPI;
++	default:
++		return MT_CIPHER_NONE;
++	}
++}
++
+ static struct mt76_wcid *mt7615_rx_get_wcid(struct mt7615_dev *dev,
+ 					    u8 idx, bool unicast)
+ {
+@@ -313,14 +340,27 @@ static int mt7615_mac_fill_rx(struct mt7615_dev *dev, struct sk_buff *skb)
+ 		u8 *data = (u8 *)rxd;
+ 
+ 		if (status->flag & RX_FLAG_DECRYPTED) {
+-			status->iv[0] = data[5];
+-			status->iv[1] = data[4];
+-			status->iv[2] = data[3];
+-			status->iv[3] = data[2];
+-			status->iv[4] = data[1];
+-			status->iv[5] = data[0];
+-
+-			insert_ccmp_hdr = FIELD_GET(MT_RXD2_NORMAL_FRAG, rxd2);
++			switch (FIELD_GET(MT_RXD2_NORMAL_SEC_MODE, rxd2)) {
++			case MT_CIPHER_AES_CCMP:
++			case MT_CIPHER_CCMP_CCX:
++			case MT_CIPHER_CCMP_256:
++				insert_ccmp_hdr =
++					FIELD_GET(MT_RXD2_NORMAL_FRAG, rxd2);
++				fallthrough;
++			case MT_CIPHER_TKIP:
++			case MT_CIPHER_TKIP_NO_MIC:
++			case MT_CIPHER_GCMP:
++			case MT_CIPHER_GCMP_256:
++				status->iv[0] = data[5];
++				status->iv[1] = data[4];
++				status->iv[2] = data[3];
++				status->iv[3] = data[2];
++				status->iv[4] = data[1];
++				status->iv[5] = data[0];
++				break;
++			default:
++				break;
++			}
+ 		}
+ 		rxd += 4;
+ 		if ((u8 *)rxd - skb->data >= skb->len)
+@@ -1078,7 +1118,7 @@ EXPORT_SYMBOL_GPL(mt7615_mac_set_rates);
+ static int
+ mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
+ 			   struct ieee80211_key_conf *key,
+-			   enum mt7615_cipher_type cipher, u16 cipher_mask,
++			   enum mt76_cipher_type cipher, u16 cipher_mask,
+ 			   enum set_key_cmd cmd)
+ {
+ 	u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx) + 30 * 4;
+@@ -1118,7 +1158,7 @@ mt7615_mac_wtbl_update_key(struct mt7615_dev *dev, struct mt76_wcid *wcid,
+ 
+ static int
+ mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid,
+-			  enum mt7615_cipher_type cipher, u16 cipher_mask,
++			  enum mt76_cipher_type cipher, u16 cipher_mask,
+ 			  int keyidx, enum set_key_cmd cmd)
+ {
+ 	u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx), w0, w1;
+@@ -1157,7 +1197,7 @@ mt7615_mac_wtbl_update_pk(struct mt7615_dev *dev, struct mt76_wcid *wcid,
+ 
+ static void
+ mt7615_mac_wtbl_update_cipher(struct mt7615_dev *dev, struct mt76_wcid *wcid,
+-			      enum mt7615_cipher_type cipher, u16 cipher_mask,
++			      enum mt76_cipher_type cipher, u16 cipher_mask,
+ 			      enum set_key_cmd cmd)
+ {
+ 	u32 addr = mt7615_mac_wtbl_addr(dev, wcid->idx);
+@@ -1183,7 +1223,7 @@ int __mt7615_mac_wtbl_set_key(struct mt7615_dev *dev,
+ 			      struct ieee80211_key_conf *key,
+ 			      enum set_key_cmd cmd)
+ {
+-	enum mt7615_cipher_type cipher;
++	enum mt76_cipher_type cipher;
+ 	u16 cipher_mask = wcid->cipher;
+ 	int err;
+ 
+@@ -1235,22 +1275,20 @@ static bool mt7615_fill_txs(struct mt7615_dev *dev, struct mt7615_sta *sta,
+ 	int first_idx = 0, last_idx;
+ 	int i, idx, count;
+ 	bool fixed_rate, ack_timeout;
+-	bool probe, ampdu, cck = false;
++	bool ampdu, cck = false;
+ 	bool rs_idx;
+ 	u32 rate_set_tsf;
+ 	u32 final_rate, final_rate_flags, final_nss, txs;
+ 
+-	fixed_rate = info->status.rates[0].count;
+-	probe = !!(info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE);
+-
+ 	txs = le32_to_cpu(txs_data[1]);
+-	ampdu = !fixed_rate && (txs & MT_TXS1_AMPDU);
++	ampdu = txs & MT_TXS1_AMPDU;
+ 
+ 	txs = le32_to_cpu(txs_data[3]);
+ 	count = FIELD_GET(MT_TXS3_TX_COUNT, txs);
+ 	last_idx = FIELD_GET(MT_TXS3_LAST_TX_RATE, txs);
+ 
+ 	txs = le32_to_cpu(txs_data[0]);
++	fixed_rate = txs & MT_TXS0_FIXED_RATE;
+ 	final_rate = FIELD_GET(MT_TXS0_TX_RATE, txs);
+ 	ack_timeout = txs & MT_TXS0_ACK_TIMEOUT;
+ 
+@@ -1272,7 +1310,7 @@ static bool mt7615_fill_txs(struct mt7615_dev *dev, struct mt7615_sta *sta,
+ 
+ 	first_idx = max_t(int, 0, last_idx - (count - 1) / MT7615_RATE_RETRY);
+ 
+-	if (fixed_rate && !probe) {
++	if (fixed_rate) {
+ 		info->status.rates[0].count = count;
+ 		i = 0;
+ 		goto out;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.h b/drivers/net/wireless/mediatek/mt76/mt7615/mac.h
+index 6bf9da0401966..46f283eb8d0f4 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.h
+@@ -383,48 +383,6 @@ struct mt7615_dfs_radar_spec {
+ 	struct mt7615_dfs_pattern radar_pattern[16];
+ };
+ 
+-enum mt7615_cipher_type {
+-	MT_CIPHER_NONE,
+-	MT_CIPHER_WEP40,
+-	MT_CIPHER_TKIP,
+-	MT_CIPHER_TKIP_NO_MIC,
+-	MT_CIPHER_AES_CCMP,
+-	MT_CIPHER_WEP104,
+-	MT_CIPHER_BIP_CMAC_128,
+-	MT_CIPHER_WEP128,
+-	MT_CIPHER_WAPI,
+-	MT_CIPHER_CCMP_256 = 10,
+-	MT_CIPHER_GCMP,
+-	MT_CIPHER_GCMP_256,
+-};
+-
+-static inline enum mt7615_cipher_type
+-mt7615_mac_get_cipher(int cipher)
+-{
+-	switch (cipher) {
+-	case WLAN_CIPHER_SUITE_WEP40:
+-		return MT_CIPHER_WEP40;
+-	case WLAN_CIPHER_SUITE_WEP104:
+-		return MT_CIPHER_WEP104;
+-	case WLAN_CIPHER_SUITE_TKIP:
+-		return MT_CIPHER_TKIP;
+-	case WLAN_CIPHER_SUITE_AES_CMAC:
+-		return MT_CIPHER_BIP_CMAC_128;
+-	case WLAN_CIPHER_SUITE_CCMP:
+-		return MT_CIPHER_AES_CCMP;
+-	case WLAN_CIPHER_SUITE_CCMP_256:
+-		return MT_CIPHER_CCMP_256;
+-	case WLAN_CIPHER_SUITE_GCMP:
+-		return MT_CIPHER_GCMP;
+-	case WLAN_CIPHER_SUITE_GCMP_256:
+-		return MT_CIPHER_GCMP_256;
+-	case WLAN_CIPHER_SUITE_SMS4:
+-		return MT_CIPHER_WAPI;
+-	default:
+-		return MT_CIPHER_NONE;
+-	}
+-}
+-
+ static inline struct mt7615_txp_common *
+ mt7615_txwi_to_txp(struct mt76_dev *dev, struct mt76_txwi_cache *t)
+ {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index ae2191371f511..257a2c4ddf367 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -1123,12 +1123,14 @@ mt7615_mcu_sta_rx_ba(struct mt7615_dev *dev,
+ 
+ static int
+ __mt7615_mcu_add_sta(struct mt76_phy *phy, struct ieee80211_vif *vif,
+-		     struct ieee80211_sta *sta, bool enable, int cmd)
++		     struct ieee80211_sta *sta, bool enable, int cmd,
++		     bool offload_fw)
+ {
+ 	struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
+ 	struct mt76_sta_cmd_info info = {
+ 		.sta = sta,
+ 		.vif = vif,
++		.offload_fw = offload_fw,
+ 		.enable = enable,
+ 		.cmd = cmd,
+ 	};
+@@ -1142,7 +1144,7 @@ mt7615_mcu_add_sta(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+ 		   struct ieee80211_sta *sta, bool enable)
+ {
+ 	return __mt7615_mcu_add_sta(phy->mt76, vif, sta, enable,
+-				    MCU_EXT_CMD_STA_REC_UPDATE);
++				    MCU_EXT_CMD_STA_REC_UPDATE, false);
+ }
+ 
+ static const struct mt7615_mcu_ops sta_update_ops = {
+@@ -1283,7 +1285,7 @@ mt7615_mcu_uni_add_sta(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+ 		       struct ieee80211_sta *sta, bool enable)
+ {
+ 	return __mt7615_mcu_add_sta(phy->mt76, vif, sta, enable,
+-				    MCU_UNI_CMD_STA_REC_UPDATE);
++				    MCU_UNI_CMD_STA_REC_UPDATE, true);
+ }
+ 
+ static int
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac.h b/drivers/net/wireless/mediatek/mt76/mt76_connac.h
+index c26cfef425ed8..75223b6e1c874 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac.h
+@@ -7,7 +7,8 @@
+ #include "mt76.h"
+ 
+ #define MT76_CONNAC_SCAN_IE_LEN			600
+-#define MT76_CONNAC_MAX_SCHED_SCAN_INTERVAL	10
++#define MT76_CONNAC_MAX_NUM_SCHED_SCAN_INTERVAL	 10
++#define MT76_CONNAC_MAX_TIME_SCHED_SCAN_INTERVAL U16_MAX
+ #define MT76_CONNAC_MAX_SCHED_SCAN_SSID		10
+ #define MT76_CONNAC_MAX_SCAN_MATCH		16
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index eb19721f9d79a..e5721603586f8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -841,10 +841,12 @@ int mt76_connac_mcu_add_sta_cmd(struct mt76_phy *phy,
+ 	if (IS_ERR(skb))
+ 		return PTR_ERR(skb);
+ 
+-	mt76_connac_mcu_sta_basic_tlv(skb, info->vif, info->sta, info->enable);
+-	if (info->enable && info->sta)
+-		mt76_connac_mcu_sta_tlv(phy, skb, info->sta, info->vif,
+-					info->rcpi);
++	if (info->sta || !info->offload_fw)
++		mt76_connac_mcu_sta_basic_tlv(skb, info->vif, info->sta,
++					      info->enable);
++	if (info->sta && info->enable)
++		mt76_connac_mcu_sta_tlv(phy, skb, info->sta,
++					info->vif, info->rcpi);
+ 
+ 	sta_wtbl = mt76_connac_mcu_add_tlv(skb, STA_REC_WTBL,
+ 					   sizeof(struct tlv));
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index 3bcae732872ed..facebed1e3017 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -770,7 +770,7 @@ struct mt76_connac_sched_scan_req {
+ 	u8 intervals_num;
+ 	u8 scan_func; /* MT7663: BIT(0) eable random mac address */
+ 	struct mt76_connac_mcu_scan_channel channels[64];
+-	__le16 intervals[MT76_CONNAC_MAX_SCHED_SCAN_INTERVAL];
++	__le16 intervals[MT76_CONNAC_MAX_NUM_SCHED_SCAN_INTERVAL];
+ 	union {
+ 		struct {
+ 			u8 random_mac[ETH_ALEN];
+@@ -899,6 +899,7 @@ struct mt76_sta_cmd_info {
+ 
+ 	struct ieee80211_vif *vif;
+ 
++	bool offload_fw;
+ 	bool enable;
+ 	int cmd;
+ 	u8 rcpi;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+index 0da37867cb64c..10d66775c3917 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+@@ -34,24 +34,24 @@ mt76x02_mac_get_key_info(struct ieee80211_key_conf *key, u8 *key_data)
+ {
+ 	memset(key_data, 0, 32);
+ 	if (!key)
+-		return MT_CIPHER_NONE;
++		return MT76X02_CIPHER_NONE;
+ 
+ 	if (key->keylen > 32)
+-		return MT_CIPHER_NONE;
++		return MT76X02_CIPHER_NONE;
+ 
+ 	memcpy(key_data, key->key, key->keylen);
+ 
+ 	switch (key->cipher) {
+ 	case WLAN_CIPHER_SUITE_WEP40:
+-		return MT_CIPHER_WEP40;
++		return MT76X02_CIPHER_WEP40;
+ 	case WLAN_CIPHER_SUITE_WEP104:
+-		return MT_CIPHER_WEP104;
++		return MT76X02_CIPHER_WEP104;
+ 	case WLAN_CIPHER_SUITE_TKIP:
+-		return MT_CIPHER_TKIP;
++		return MT76X02_CIPHER_TKIP;
+ 	case WLAN_CIPHER_SUITE_CCMP:
+-		return MT_CIPHER_AES_CCMP;
++		return MT76X02_CIPHER_AES_CCMP;
+ 	default:
+-		return MT_CIPHER_NONE;
++		return MT76X02_CIPHER_NONE;
+ 	}
+ }
+ 
+@@ -63,7 +63,7 @@ int mt76x02_mac_shared_key_setup(struct mt76x02_dev *dev, u8 vif_idx,
+ 	u32 val;
+ 
+ 	cipher = mt76x02_mac_get_key_info(key, key_data);
+-	if (cipher == MT_CIPHER_NONE && key)
++	if (cipher == MT76X02_CIPHER_NONE && key)
+ 		return -EOPNOTSUPP;
+ 
+ 	val = mt76_rr(dev, MT_SKEY_MODE(vif_idx));
+@@ -91,10 +91,10 @@ void mt76x02_mac_wcid_sync_pn(struct mt76x02_dev *dev, u8 idx,
+ 	eiv = mt76_rr(dev, MT_WCID_IV(idx) + 4);
+ 
+ 	pn = (u64)eiv << 16;
+-	if (cipher == MT_CIPHER_TKIP) {
++	if (cipher == MT76X02_CIPHER_TKIP) {
+ 		pn |= (iv >> 16) & 0xff;
+ 		pn |= (iv & 0xff) << 8;
+-	} else if (cipher >= MT_CIPHER_AES_CCMP) {
++	} else if (cipher >= MT76X02_CIPHER_AES_CCMP) {
+ 		pn |= iv & 0xffff;
+ 	} else {
+ 		return;
+@@ -112,7 +112,7 @@ int mt76x02_mac_wcid_set_key(struct mt76x02_dev *dev, u8 idx,
+ 	u64 pn;
+ 
+ 	cipher = mt76x02_mac_get_key_info(key, key_data);
+-	if (cipher == MT_CIPHER_NONE && key)
++	if (cipher == MT76X02_CIPHER_NONE && key)
+ 		return -EOPNOTSUPP;
+ 
+ 	mt76_wr_copy(dev, MT_WCID_KEY(idx), key_data, sizeof(key_data));
+@@ -126,16 +126,16 @@ int mt76x02_mac_wcid_set_key(struct mt76x02_dev *dev, u8 idx,
+ 		pn = atomic64_read(&key->tx_pn);
+ 
+ 		iv_data[3] = key->keyidx << 6;
+-		if (cipher >= MT_CIPHER_TKIP) {
++		if (cipher >= MT76X02_CIPHER_TKIP) {
+ 			iv_data[3] |= 0x20;
+ 			put_unaligned_le32(pn >> 16, &iv_data[4]);
+ 		}
+ 
+-		if (cipher == MT_CIPHER_TKIP) {
++		if (cipher == MT76X02_CIPHER_TKIP) {
+ 			iv_data[0] = (pn >> 8) & 0xff;
+ 			iv_data[1] = (iv_data[0] | 0x20) & 0x7f;
+ 			iv_data[2] = pn & 0xff;
+-		} else if (cipher >= MT_CIPHER_AES_CCMP) {
++		} else if (cipher >= MT76X02_CIPHER_AES_CCMP) {
+ 			put_unaligned_le16((pn & 0xffff), &iv_data[0]);
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_regs.h b/drivers/net/wireless/mediatek/mt76/mt76x02_regs.h
+index 3e722276b5c2f..fa7872ac22bf8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_regs.h
+@@ -692,15 +692,15 @@ struct mt76_wcid_key {
+ } __packed __aligned(4);
+ 
+ enum mt76x02_cipher_type {
+-	MT_CIPHER_NONE,
+-	MT_CIPHER_WEP40,
+-	MT_CIPHER_WEP104,
+-	MT_CIPHER_TKIP,
+-	MT_CIPHER_AES_CCMP,
+-	MT_CIPHER_CKIP40,
+-	MT_CIPHER_CKIP104,
+-	MT_CIPHER_CKIP128,
+-	MT_CIPHER_WAPI,
++	MT76X02_CIPHER_NONE,
++	MT76X02_CIPHER_WEP40,
++	MT76X02_CIPHER_WEP104,
++	MT76X02_CIPHER_TKIP,
++	MT76X02_CIPHER_AES_CCMP,
++	MT76X02_CIPHER_CKIP40,
++	MT76X02_CIPHER_CKIP104,
++	MT76X02_CIPHER_CKIP128,
++	MT76X02_CIPHER_WAPI,
+ };
+ 
+ #endif
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h
+index 30bf41b8ed152..a43389a418006 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.h
+@@ -99,12 +99,15 @@ static inline bool
+ mt7915_tssi_enabled(struct mt7915_dev *dev, enum nl80211_band band)
+ {
+ 	u8 *eep = dev->mt76.eeprom.data;
++	u8 val = eep[MT_EE_WIFI_CONF + 7];
+ 
+-	/* TODO: DBDC */
+-	if (band == NL80211_BAND_5GHZ)
+-		return eep[MT_EE_WIFI_CONF + 7] & MT_EE_WIFI_CONF7_TSSI0_5G;
++	if (band == NL80211_BAND_2GHZ)
++		return val & MT_EE_WIFI_CONF7_TSSI0_2G;
++
++	if (dev->dbdc_support)
++		return val & MT_EE_WIFI_CONF7_TSSI1_5G;
+ 	else
+-		return eep[MT_EE_WIFI_CONF + 7] & MT_EE_WIFI_CONF7_TSSI0_2G;
++		return val & MT_EE_WIFI_CONF7_TSSI0_5G;
+ }
+ 
+ extern const u8 mt7915_sku_group_len[MAX_SKU_RATE_GROUP_NUM];
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index 822f3aa6bb8b5..feb2aa57ef221 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -480,6 +480,9 @@ mt7915_set_stream_he_txbf_caps(struct ieee80211_sta_he_cap *he_cap,
+ 	if (nss < 2)
+ 		return;
+ 
++	/* the maximum cap is 4 x 3, (Nr, Nc) = (3, 2) */
++	elem->phy_cap_info[7] |= min_t(int, nss - 1, 2) << 3;
++
+ 	if (vif != NL80211_IFTYPE_AP)
+ 		return;
+ 
+@@ -493,9 +496,6 @@ mt7915_set_stream_he_txbf_caps(struct ieee80211_sta_he_cap *he_cap,
+ 	c = IEEE80211_HE_PHY_CAP6_TRIG_SU_BEAMFORMING_FB |
+ 	    IEEE80211_HE_PHY_CAP6_TRIG_MU_BEAMFORMING_PARTIAL_BW_FB;
+ 	elem->phy_cap_info[6] |= c;
+-
+-	/* the maximum cap is 4 x 3, (Nr, Nc) = (3, 2) */
+-	elem->phy_cap_info[7] |= min_t(int, nss - 1, 2) << 3;
+ }
+ 
+ static void
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index 7a9759fb79d89..f4544c46c1731 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -412,14 +412,27 @@ int mt7915_mac_fill_rx(struct mt7915_dev *dev, struct sk_buff *skb)
+ 		u8 *data = (u8 *)rxd;
+ 
+ 		if (status->flag & RX_FLAG_DECRYPTED) {
+-			status->iv[0] = data[5];
+-			status->iv[1] = data[4];
+-			status->iv[2] = data[3];
+-			status->iv[3] = data[2];
+-			status->iv[4] = data[1];
+-			status->iv[5] = data[0];
+-
+-			insert_ccmp_hdr = FIELD_GET(MT_RXD2_NORMAL_FRAG, rxd2);
++			switch (FIELD_GET(MT_RXD1_NORMAL_SEC_MODE, rxd1)) {
++			case MT_CIPHER_AES_CCMP:
++			case MT_CIPHER_CCMP_CCX:
++			case MT_CIPHER_CCMP_256:
++				insert_ccmp_hdr =
++					FIELD_GET(MT_RXD2_NORMAL_FRAG, rxd2);
++				fallthrough;
++			case MT_CIPHER_TKIP:
++			case MT_CIPHER_TKIP_NO_MIC:
++			case MT_CIPHER_GCMP:
++			case MT_CIPHER_GCMP_256:
++				status->iv[0] = data[5];
++				status->iv[1] = data[4];
++				status->iv[2] = data[3];
++				status->iv[3] = data[2];
++				status->iv[4] = data[1];
++				status->iv[5] = data[0];
++				break;
++			default:
++				break;
++			}
+ 		}
+ 		rxd += 4;
+ 		if ((u8 *)rxd - skb->data >= skb->len)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 764f25a828fa2..607980321d275 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -88,28 +88,28 @@ struct mt7915_fw_region {
+ #define HE_PHY(p, c)			u8_get_bits(c, IEEE80211_HE_PHY_##p)
+ #define HE_MAC(m, c)			u8_get_bits(c, IEEE80211_HE_MAC_##m)
+ 
+-static enum mt7915_cipher_type
++static enum mcu_cipher_type
+ mt7915_mcu_get_cipher(int cipher)
+ {
+ 	switch (cipher) {
+ 	case WLAN_CIPHER_SUITE_WEP40:
+-		return MT_CIPHER_WEP40;
++		return MCU_CIPHER_WEP40;
+ 	case WLAN_CIPHER_SUITE_WEP104:
+-		return MT_CIPHER_WEP104;
++		return MCU_CIPHER_WEP104;
+ 	case WLAN_CIPHER_SUITE_TKIP:
+-		return MT_CIPHER_TKIP;
++		return MCU_CIPHER_TKIP;
+ 	case WLAN_CIPHER_SUITE_AES_CMAC:
+-		return MT_CIPHER_BIP_CMAC_128;
++		return MCU_CIPHER_BIP_CMAC_128;
+ 	case WLAN_CIPHER_SUITE_CCMP:
+-		return MT_CIPHER_AES_CCMP;
++		return MCU_CIPHER_AES_CCMP;
+ 	case WLAN_CIPHER_SUITE_CCMP_256:
+-		return MT_CIPHER_CCMP_256;
++		return MCU_CIPHER_CCMP_256;
+ 	case WLAN_CIPHER_SUITE_GCMP:
+-		return MT_CIPHER_GCMP;
++		return MCU_CIPHER_GCMP;
+ 	case WLAN_CIPHER_SUITE_GCMP_256:
+-		return MT_CIPHER_GCMP_256;
++		return MCU_CIPHER_GCMP_256;
+ 	case WLAN_CIPHER_SUITE_SMS4:
+-		return MT_CIPHER_WAPI;
++		return MCU_CIPHER_WAPI;
+ 	default:
+ 		return MT_CIPHER_NONE;
+ 	}
+@@ -1072,14 +1072,14 @@ mt7915_mcu_sta_key_tlv(struct mt7915_sta *msta, struct sk_buff *skb,
+ 		sec_key = &sec->key[0];
+ 		sec_key->cipher_len = sizeof(*sec_key);
+ 
+-		if (cipher == MT_CIPHER_BIP_CMAC_128) {
+-			sec_key->cipher_id = MT_CIPHER_AES_CCMP;
++		if (cipher == MCU_CIPHER_BIP_CMAC_128) {
++			sec_key->cipher_id = MCU_CIPHER_AES_CCMP;
+ 			sec_key->key_id = bip->keyidx;
+ 			sec_key->key_len = 16;
+ 			memcpy(sec_key->key, bip->key, 16);
+ 
+ 			sec_key = &sec->key[1];
+-			sec_key->cipher_id = MT_CIPHER_BIP_CMAC_128;
++			sec_key->cipher_id = MCU_CIPHER_BIP_CMAC_128;
+ 			sec_key->cipher_len = sizeof(*sec_key);
+ 			sec_key->key_len = 16;
+ 			memcpy(sec_key->key, key->key, 16);
+@@ -1091,14 +1091,14 @@ mt7915_mcu_sta_key_tlv(struct mt7915_sta *msta, struct sk_buff *skb,
+ 			sec_key->key_len = key->keylen;
+ 			memcpy(sec_key->key, key->key, key->keylen);
+ 
+-			if (cipher == MT_CIPHER_TKIP) {
++			if (cipher == MCU_CIPHER_TKIP) {
+ 				/* Rx/Tx MIC keys are swapped */
+ 				memcpy(sec_key->key + 16, key->key + 24, 8);
+ 				memcpy(sec_key->key + 24, key->key + 16, 8);
+ 			}
+ 
+ 			/* store key_conf for BIP batch update */
+-			if (cipher == MT_CIPHER_AES_CCMP) {
++			if (cipher == MCU_CIPHER_AES_CCMP) {
+ 				memcpy(bip->key, key->key, key->keylen);
+ 				bip->keyidx = key->keyidx;
+ 			}
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
+index 42582a66e42dd..517621044d9e9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
+@@ -1034,18 +1034,17 @@ enum {
+ 	STA_REC_MAX_NUM
+ };
+ 
+-enum mt7915_cipher_type {
+-	MT_CIPHER_NONE,
+-	MT_CIPHER_WEP40,
+-	MT_CIPHER_WEP104,
+-	MT_CIPHER_WEP128,
+-	MT_CIPHER_TKIP,
+-	MT_CIPHER_AES_CCMP,
+-	MT_CIPHER_CCMP_256,
+-	MT_CIPHER_GCMP,
+-	MT_CIPHER_GCMP_256,
+-	MT_CIPHER_WAPI,
+-	MT_CIPHER_BIP_CMAC_128,
++enum mcu_cipher_type {
++	MCU_CIPHER_WEP40 = 1,
++	MCU_CIPHER_WEP104,
++	MCU_CIPHER_WEP128,
++	MCU_CIPHER_TKIP,
++	MCU_CIPHER_AES_CCMP,
++	MCU_CIPHER_CCMP_256,
++	MCU_CIPHER_GCMP,
++	MCU_CIPHER_GCMP_256,
++	MCU_CIPHER_WAPI,
++	MCU_CIPHER_BIP_CMAC_128,
+ };
+ 
+ enum {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+index bd9143dc865f9..7fca7dc466b85 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+@@ -402,6 +402,10 @@ int mt7921_dma_init(struct mt7921_dev *dev)
+ 	if (ret)
+ 		return ret;
+ 
++	ret = mt7921_wfsys_reset(dev);
++	if (ret)
++		return ret;
++
+ 	/* init tx queue */
+ 	ret = mt7921_init_tx_queues(&dev->phy, MT7921_TXQ_BAND0,
+ 				    MT7921_TX_RING_SIZE);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+index 2cb0252e63b21..db7e436076b37 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+@@ -93,7 +93,7 @@ mt7921_init_wiphy(struct ieee80211_hw *hw)
+ 	wiphy->max_scan_ie_len = MT76_CONNAC_SCAN_IE_LEN;
+ 	wiphy->max_scan_ssids = 4;
+ 	wiphy->max_sched_scan_plan_interval =
+-		MT76_CONNAC_MAX_SCHED_SCAN_INTERVAL;
++		MT76_CONNAC_MAX_TIME_SCHED_SCAN_INTERVAL;
+ 	wiphy->max_sched_scan_ie_len = IEEE80211_MAX_DATA_LEN;
+ 	wiphy->max_sched_scan_ssids = MT76_CONNAC_MAX_SCHED_SCAN_SSID;
+ 	wiphy->max_match_sets = MT76_CONNAC_MAX_SCAN_MATCH;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index 493c2aba2f791..d7d8c909acdff 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -386,14 +386,27 @@ int mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
+ 		u8 *data = (u8 *)rxd;
+ 
+ 		if (status->flag & RX_FLAG_DECRYPTED) {
+-			status->iv[0] = data[5];
+-			status->iv[1] = data[4];
+-			status->iv[2] = data[3];
+-			status->iv[3] = data[2];
+-			status->iv[4] = data[1];
+-			status->iv[5] = data[0];
+-
+-			insert_ccmp_hdr = FIELD_GET(MT_RXD2_NORMAL_FRAG, rxd2);
++			switch (FIELD_GET(MT_RXD1_NORMAL_SEC_MODE, rxd1)) {
++			case MT_CIPHER_AES_CCMP:
++			case MT_CIPHER_CCMP_CCX:
++			case MT_CIPHER_CCMP_256:
++				insert_ccmp_hdr =
++					FIELD_GET(MT_RXD2_NORMAL_FRAG, rxd2);
++				fallthrough;
++			case MT_CIPHER_TKIP:
++			case MT_CIPHER_TKIP_NO_MIC:
++			case MT_CIPHER_GCMP:
++			case MT_CIPHER_GCMP_256:
++				status->iv[0] = data[5];
++				status->iv[1] = data[4];
++				status->iv[2] = data[3];
++				status->iv[3] = data[2];
++				status->iv[4] = data[1];
++				status->iv[5] = data[0];
++				break;
++			default:
++				break;
++			}
+ 		}
+ 		rxd += 4;
+ 		if ((u8 *)rxd - skb->data >= skb->len)
+@@ -1245,9 +1258,10 @@ mt7921_mac_reset(struct mt7921_dev *dev)
+ 	mt76_worker_enable(&dev->mt76.tx_worker);
+ 
+ 	clear_bit(MT76_MCU_RESET, &dev->mphy.state);
+-	clear_bit(MT76_STATE_PM, &dev->mphy.state);
+ 
+-	mt76_wr(dev, MT_WFDMA0_HOST_INT_ENA, 0);
++	mt76_wr(dev, MT_WFDMA0_HOST_INT_ENA,
++		MT_INT_RX_DONE_ALL | MT_INT_TX_DONE_ALL |
++		MT_INT_MCU_CMD);
+ 	mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);
+ 
+ 	err = mt7921_run_firmware(dev);
+@@ -1265,23 +1279,24 @@ mt7921_mac_reset(struct mt7921_dev *dev)
+ /* system error recovery */
+ void mt7921_mac_reset_work(struct work_struct *work)
+ {
+-	struct ieee80211_hw *hw;
+-	struct mt7921_dev *dev;
++	struct mt7921_dev *dev = container_of(work, struct mt7921_dev,
++					      reset_work);
++	struct ieee80211_hw *hw = mt76_hw(dev);
++	struct mt76_connac_pm *pm = &dev->pm;
+ 	int i;
+ 
+-	dev = container_of(work, struct mt7921_dev, reset_work);
+-	hw = mt76_hw(dev);
+-
+ 	dev_err(dev->mt76.dev, "chip reset\n");
+ 	dev->hw_full_reset = true;
+ 	ieee80211_stop_queues(hw);
+ 
+ 	cancel_delayed_work_sync(&dev->mphy.mac_work);
+-	cancel_delayed_work_sync(&dev->pm.ps_work);
+-	cancel_work_sync(&dev->pm.wake_work);
++	cancel_delayed_work_sync(&pm->ps_work);
++	cancel_work_sync(&pm->wake_work);
+ 
+ 	mutex_lock(&dev->mt76.mutex);
+ 	for (i = 0; i < 10; i++) {
++		__mt7921_mcu_drv_pmctrl(dev);
++
+ 		if (!mt7921_mac_reset(dev))
+ 			break;
+ 	}
+@@ -1303,6 +1318,7 @@ void mt7921_mac_reset_work(struct work_struct *work)
+ 	ieee80211_iterate_active_interfaces(hw,
+ 					    IEEE80211_IFACE_ITER_RESUME_ALL,
+ 					    mt7921_vif_connect_iter, NULL);
++	mt76_connac_power_save_sched(&dev->mt76.phy, pm);
+ }
+ 
+ void mt7921_reset(struct mt76_dev *mdev)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index bd77a04a15fb2..992a74e122e50 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -376,6 +376,10 @@ static int mt7921_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 		key->flags |= IEEE80211_KEY_FLAG_GENERATE_MMIE;
+ 		wcid_keyidx = &wcid->hw_key_idx2;
+ 		break;
++	case WLAN_CIPHER_SUITE_WEP40:
++	case WLAN_CIPHER_SUITE_WEP104:
++		if (!mvif->wep_sta)
++			return -EOPNOTSUPP;
+ 	case WLAN_CIPHER_SUITE_TKIP:
+ 	case WLAN_CIPHER_SUITE_CCMP:
+ 	case WLAN_CIPHER_SUITE_CCMP_256:
+@@ -383,8 +387,6 @@ static int mt7921_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 	case WLAN_CIPHER_SUITE_GCMP_256:
+ 	case WLAN_CIPHER_SUITE_SMS4:
+ 		break;
+-	case WLAN_CIPHER_SUITE_WEP40:
+-	case WLAN_CIPHER_SUITE_WEP104:
+ 	default:
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -402,6 +404,12 @@ static int mt7921_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 			    cmd == SET_KEY ? key : NULL);
+ 
+ 	err = mt7921_mcu_add_key(dev, vif, msta, key, cmd);
++	if (err)
++		goto out;
++
++	if (key->cipher == WLAN_CIPHER_SUITE_WEP104 ||
++	    key->cipher == WLAN_CIPHER_SUITE_WEP40)
++		err = mt7921_mcu_add_key(dev, vif, mvif->wep_sta, key, cmd);
+ out:
+ 	mt7921_mutex_release(dev);
+ 
+@@ -608,9 +616,12 @@ int mt7921_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
+-		mt76_connac_mcu_uni_add_bss(&dev->mphy, vif, &mvif->sta.wcid,
+-					    true);
++	if (vif->type == NL80211_IFTYPE_STATION) {
++		mvif->wep_sta = msta;
++		if (!sta->tdls)
++			mt76_connac_mcu_uni_add_bss(&dev->mphy, vif,
++						    &mvif->sta.wcid, true);
++	}
+ 
+ 	mt7921_mac_wtbl_update(dev, idx,
+ 			       MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
+@@ -640,6 +651,7 @@ void mt7921_mac_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ 	if (vif->type == NL80211_IFTYPE_STATION) {
+ 		struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
+ 
++		mvif->wep_sta = NULL;
+ 		ewma_rssi_init(&mvif->rssi);
+ 		if (!sta->tdls)
+ 			mt76_connac_mcu_uni_add_bss(&dev->mphy, vif,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index 7c68182cad552..ee6cf189103f7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -88,28 +88,28 @@ struct mt7921_fw_region {
+ #define to_wcid_lo(id)			FIELD_GET(GENMASK(7, 0), (u16)id)
+ #define to_wcid_hi(id)			FIELD_GET(GENMASK(9, 8), (u16)id)
+ 
+-static enum mt7921_cipher_type
++static enum mcu_cipher_type
+ mt7921_mcu_get_cipher(int cipher)
+ {
+ 	switch (cipher) {
+ 	case WLAN_CIPHER_SUITE_WEP40:
+-		return MT_CIPHER_WEP40;
++		return MCU_CIPHER_WEP40;
+ 	case WLAN_CIPHER_SUITE_WEP104:
+-		return MT_CIPHER_WEP104;
++		return MCU_CIPHER_WEP104;
+ 	case WLAN_CIPHER_SUITE_TKIP:
+-		return MT_CIPHER_TKIP;
++		return MCU_CIPHER_TKIP;
+ 	case WLAN_CIPHER_SUITE_AES_CMAC:
+-		return MT_CIPHER_BIP_CMAC_128;
++		return MCU_CIPHER_BIP_CMAC_128;
+ 	case WLAN_CIPHER_SUITE_CCMP:
+-		return MT_CIPHER_AES_CCMP;
++		return MCU_CIPHER_AES_CCMP;
+ 	case WLAN_CIPHER_SUITE_CCMP_256:
+-		return MT_CIPHER_CCMP_256;
++		return MCU_CIPHER_CCMP_256;
+ 	case WLAN_CIPHER_SUITE_GCMP:
+-		return MT_CIPHER_GCMP;
++		return MCU_CIPHER_GCMP;
+ 	case WLAN_CIPHER_SUITE_GCMP_256:
+-		return MT_CIPHER_GCMP_256;
++		return MCU_CIPHER_GCMP_256;
+ 	case WLAN_CIPHER_SUITE_SMS4:
+-		return MT_CIPHER_WAPI;
++		return MCU_CIPHER_WAPI;
+ 	default:
+ 		return MT_CIPHER_NONE;
+ 	}
+@@ -615,14 +615,14 @@ mt7921_mcu_sta_key_tlv(struct mt7921_sta *msta, struct sk_buff *skb,
+ 		sec_key = &sec->key[0];
+ 		sec_key->cipher_len = sizeof(*sec_key);
+ 
+-		if (cipher == MT_CIPHER_BIP_CMAC_128) {
+-			sec_key->cipher_id = MT_CIPHER_AES_CCMP;
++		if (cipher == MCU_CIPHER_BIP_CMAC_128) {
++			sec_key->cipher_id = MCU_CIPHER_AES_CCMP;
+ 			sec_key->key_id = bip->keyidx;
+ 			sec_key->key_len = 16;
+ 			memcpy(sec_key->key, bip->key, 16);
+ 
+ 			sec_key = &sec->key[1];
+-			sec_key->cipher_id = MT_CIPHER_BIP_CMAC_128;
++			sec_key->cipher_id = MCU_CIPHER_BIP_CMAC_128;
+ 			sec_key->cipher_len = sizeof(*sec_key);
+ 			sec_key->key_len = 16;
+ 			memcpy(sec_key->key, key->key, 16);
+@@ -634,14 +634,14 @@ mt7921_mcu_sta_key_tlv(struct mt7921_sta *msta, struct sk_buff *skb,
+ 			sec_key->key_len = key->keylen;
+ 			memcpy(sec_key->key, key->key, key->keylen);
+ 
+-			if (cipher == MT_CIPHER_TKIP) {
++			if (cipher == MCU_CIPHER_TKIP) {
+ 				/* Rx/Tx MIC keys are swapped */
+ 				memcpy(sec_key->key + 16, key->key + 24, 8);
+ 				memcpy(sec_key->key + 24, key->key + 16, 8);
+ 			}
+ 
+ 			/* store key_conf for BIP batch update */
+-			if (cipher == MT_CIPHER_AES_CCMP) {
++			if (cipher == MCU_CIPHER_AES_CCMP) {
+ 				memcpy(bip->key, key->key, key->keylen);
+ 				bip->keyidx = key->keyidx;
+ 			}
+@@ -1289,6 +1289,7 @@ int mt7921_mcu_sta_add(struct mt7921_dev *dev, struct ieee80211_sta *sta,
+ 		.vif = vif,
+ 		.enable = enable,
+ 		.cmd = MCU_UNI_CMD_STA_REC_UPDATE,
++		.offload_fw = true,
+ 		.rcpi = to_rcpi(rssi),
+ 	};
+ 	struct mt7921_sta *msta;
+@@ -1299,17 +1300,12 @@ int mt7921_mcu_sta_add(struct mt7921_dev *dev, struct ieee80211_sta *sta,
+ 	return mt76_connac_mcu_add_sta_cmd(&dev->mphy, &info);
+ }
+ 
+-int mt7921_mcu_drv_pmctrl(struct mt7921_dev *dev)
++int __mt7921_mcu_drv_pmctrl(struct mt7921_dev *dev)
+ {
+ 	struct mt76_phy *mphy = &dev->mt76.phy;
+ 	struct mt76_connac_pm *pm = &dev->pm;
+ 	int i, err = 0;
+ 
+-	mutex_lock(&pm->mutex);
+-
+-	if (!test_bit(MT76_STATE_PM, &mphy->state))
+-		goto out;
+-
+ 	for (i = 0; i < MT7921_DRV_OWN_RETRY_COUNT; i++) {
+ 		mt76_wr(dev, MT_CONN_ON_LPCTL, PCIE_LPCR_HOST_CLR_OWN);
+ 		if (mt76_poll_msec(dev, MT_CONN_ON_LPCTL,
+@@ -1329,6 +1325,22 @@ int mt7921_mcu_drv_pmctrl(struct mt7921_dev *dev)
+ 	pm->stats.last_wake_event = jiffies;
+ 	pm->stats.doze_time += pm->stats.last_wake_event -
+ 			       pm->stats.last_doze_event;
++out:
++	return err;
++}
++
++int mt7921_mcu_drv_pmctrl(struct mt7921_dev *dev)
++{
++	struct mt76_phy *mphy = &dev->mt76.phy;
++	struct mt76_connac_pm *pm = &dev->pm;
++	int err = 0;
++
++	mutex_lock(&pm->mutex);
++
++	if (!test_bit(MT76_STATE_PM, &mphy->state))
++		goto out;
++
++	err = __mt7921_mcu_drv_pmctrl(dev);
+ out:
+ 	mutex_unlock(&pm->mutex);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
+index 49823d0a3d0ab..07abe86f07a94 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
+@@ -197,18 +197,17 @@ struct sta_rec_sec {
+ 	struct sec_key key[2];
+ } __packed;
+ 
+-enum mt7921_cipher_type {
+-	MT_CIPHER_NONE,
+-	MT_CIPHER_WEP40,
+-	MT_CIPHER_WEP104,
+-	MT_CIPHER_WEP128,
+-	MT_CIPHER_TKIP,
+-	MT_CIPHER_AES_CCMP,
+-	MT_CIPHER_CCMP_256,
+-	MT_CIPHER_GCMP,
+-	MT_CIPHER_GCMP_256,
+-	MT_CIPHER_WAPI,
+-	MT_CIPHER_BIP_CMAC_128,
++enum mcu_cipher_type {
++	MCU_CIPHER_WEP40 = 1,
++	MCU_CIPHER_WEP104,
++	MCU_CIPHER_WEP128,
++	MCU_CIPHER_TKIP,
++	MCU_CIPHER_AES_CCMP,
++	MCU_CIPHER_CCMP_256,
++	MCU_CIPHER_GCMP,
++	MCU_CIPHER_GCMP_256,
++	MCU_CIPHER_WAPI,
++	MCU_CIPHER_BIP_CMAC_128,
+ };
+ 
+ enum {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index 4cc8a372b2772..957084c3ca432 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -100,6 +100,8 @@ struct mt7921_vif {
+ 	struct mt76_vif mt76; /* must be first */
+ 
+ 	struct mt7921_sta sta;
++	struct mt7921_sta *wep_sta;
++
+ 	struct mt7921_phy *phy;
+ 
+ 	struct ewma_rssi rssi;
+@@ -369,6 +371,7 @@ int mt7921_mcu_uni_bss_bcnft(struct mt7921_dev *dev, struct ieee80211_vif *vif,
+ 			     bool enable);
+ int mt7921_mcu_set_bss_pm(struct mt7921_dev *dev, struct ieee80211_vif *vif,
+ 			  bool enable);
++int __mt7921_mcu_drv_pmctrl(struct mt7921_dev *dev);
+ int mt7921_mcu_drv_pmctrl(struct mt7921_dev *dev);
+ int mt7921_mcu_fw_pmctrl(struct mt7921_dev *dev);
+ void mt7921_pm_wake_work(struct work_struct *work);
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+index d1a566cc0c9e0..01735776345a9 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+@@ -853,15 +853,10 @@ struct rtl8192eu_efuse {
+ 	u8 usb_optional_function;
+ 	u8 res9[2];
+ 	u8 mac_addr[ETH_ALEN];		/* 0xd7 */
+-	u8 res10[2];
+-	u8 vendor_name[7];
+-	u8 res11[2];
+-	u8 device_name[0x0b];		/* 0xe8 */
+-	u8 res12[2];
+-	u8 serial[0x0b];		/* 0xf5 */
+-	u8 res13[0x30];
++	u8 device_info[80];
++	u8 res11[3];
+ 	u8 unknown[0x0d];		/* 0x130 */
+-	u8 res14[0xc3];
++	u8 res12[0xc3];
+ };
+ 
+ struct rtl8xxxu_reg8val {
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+index cfe2dfdae928f..b06508d0cdf8f 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+@@ -554,9 +554,43 @@ rtl8192e_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40)
+ 	}
+ }
+ 
++static void rtl8192eu_log_next_device_info(struct rtl8xxxu_priv *priv,
++					   char *record_name,
++					   char *device_info,
++					   unsigned int *record_offset)
++{
++	char *record = device_info + *record_offset;
++
++	/* A record is [ total length | 0x03 | value ] */
++	unsigned char l = record[0];
++
++	/*
++	 * The whole device info section seems to be 80 characters; make sure
++	 * we don't read further.
++	 */
++	if (*record_offset + l > 80) {
++		dev_warn(&priv->udev->dev,
++			 "invalid record length %d while parsing \"%s\" at offset %u.\n",
++			 l, record_name, *record_offset);
++		return;
++	}
++
++	if (l >= 2) {
++		char value[80];
++
++		memcpy(value, &record[2], l - 2);
++		value[l - 2] = '\0';
++		dev_info(&priv->udev->dev, "%s: %s\n", record_name, value);
++		*record_offset = *record_offset + l;
++	} else {
++		dev_info(&priv->udev->dev, "%s not available.\n", record_name);
++	}
++}
++
+ static int rtl8192eu_parse_efuse(struct rtl8xxxu_priv *priv)
+ {
+ 	struct rtl8192eu_efuse *efuse = &priv->efuse_wifi.efuse8192eu;
++	unsigned int record_offset;
+ 	int i;
+ 
+ 	if (efuse->rtl_id != cpu_to_le16(0x8129))
+@@ -604,12 +638,25 @@ static int rtl8192eu_parse_efuse(struct rtl8xxxu_priv *priv)
+ 	priv->has_xtalk = 1;
+ 	priv->xtalk = priv->efuse_wifi.efuse8192eu.xtal_k & 0x3f;
+ 
+-	dev_info(&priv->udev->dev, "Vendor: %.7s\n", efuse->vendor_name);
+-	dev_info(&priv->udev->dev, "Product: %.11s\n", efuse->device_name);
+-	if (memchr_inv(efuse->serial, 0xff, 11))
+-		dev_info(&priv->udev->dev, "Serial: %.11s\n", efuse->serial);
+-	else
+-		dev_info(&priv->udev->dev, "Serial not available.\n");
++	/*
++	 * device_info section seems to be laid out as records
++	 * [ total length | 0x03 | value ] so:
++	 * - vendor length + 2
++	 * - 0x03
++	 * - vendor string (not null terminated)
++	 * - product length + 2
++	 * - 0x03
++	 * - product string (not null terminated)
++	 * Then there are one or two 0x00 bytes on all four devices I own
++	 * or found dumped online.
++	 * As the previous version of the code handled an optional serial
++	 * string, I now assume there may be a third record if the
++	 * length is not 0.
++	 */
++	record_offset = 0;
++	rtl8192eu_log_next_device_info(priv, "Vendor", efuse->device_info, &record_offset);
++	rtl8192eu_log_next_device_info(priv, "Product", efuse->device_info, &record_offset);
++	rtl8192eu_log_next_device_info(priv, "Serial", efuse->device_info, &record_offset);
+ 
+ 	if (rtl8xxxu_debug & RTL8XXXU_DEBUG_EFUSE) {
+ 		unsigned char *raw = priv->efuse_wifi.raw;
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
+index f59a4c462e3bc..e7d17ab8f113b 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.c
++++ b/drivers/net/wireless/realtek/rtw88/pci.c
+@@ -2,6 +2,7 @@
+ /* Copyright(c) 2018-2019  Realtek Corporation
+  */
+ 
++#include <linux/dmi.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include "main.h"
+@@ -1673,6 +1674,36 @@ static void rtw_pci_napi_deinit(struct rtw_dev *rtwdev)
+ 	netif_napi_del(&rtwpci->napi);
+ }
+ 
++enum rtw88_quirk_dis_pci_caps {
++	QUIRK_DIS_PCI_CAP_MSI,
++	QUIRK_DIS_PCI_CAP_ASPM,
++};
++
++static int disable_pci_caps(const struct dmi_system_id *dmi)
++{
++	uintptr_t dis_caps = (uintptr_t)dmi->driver_data;
++
++	if (dis_caps & BIT(QUIRK_DIS_PCI_CAP_MSI))
++		rtw_disable_msi = true;
++	if (dis_caps & BIT(QUIRK_DIS_PCI_CAP_ASPM))
++		rtw_pci_disable_aspm = true;
++
++	return 1;
++}
++
++static const struct dmi_system_id rtw88_pci_quirks[] = {
++	{
++		.callback = disable_pci_caps,
++		.ident = "Protempo Ltd L116HTN6SPW",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Protempo Ltd"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "L116HTN6SPW"),
++		},
++		.driver_data = (void *)BIT(QUIRK_DIS_PCI_CAP_ASPM),
++	},
++	{}
++};
++
+ int rtw_pci_probe(struct pci_dev *pdev,
+ 		  const struct pci_device_id *id)
+ {
+@@ -1723,6 +1754,7 @@ int rtw_pci_probe(struct pci_dev *pdev,
+ 		goto err_destroy_pci;
+ 	}
+ 
++	dmi_check_system(rtw88_pci_quirks);
+ 	rtw_pci_phy_cfg(rtwdev);
+ 
+ 	ret = rtw_register_hw(rtwdev, hw);
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c_table.c b/drivers/net/wireless/realtek/rtw88/rtw8822c_table.c
+index 822f3da91f1be..f9e3d0779c597 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c_table.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c_table.c
+@@ -16812,53 +16812,53 @@ static const u32 rtw8822c_rf_a[] = {
+ 	0x92000002,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03F, 0x00010E46,
+ 	0x93000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x93000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x93000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x93000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x93000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x93000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x93000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x93000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x94000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x94000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x94000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x94000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x94000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x94000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x94000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x94000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x95000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x95000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x95000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x95000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x95000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x95000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x95000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0x95000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00030246,
++		0x03F, 0x0003D646,
+ 	0xA0000000,	0x00000000,
+ 		0x03F, 0x00002A46,
+ 	0xB0000000,	0x00000000,
+@@ -18762,53 +18762,53 @@ static const u32 rtw8822c_rf_a[] = {
+ 	0x92000002,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03F, 0x0000EA46,
+ 	0x93000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0xA0000000,	0x00000000,
+ 		0x03F, 0x00002A46,
+ 	0xB0000000,	0x00000000,
+@@ -18957,53 +18957,53 @@ static const u32 rtw8822c_rf_a[] = {
+ 	0x92000002,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03F, 0x0000EA46,
+ 	0x93000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0xA0000000,	0x00000000,
+ 		0x03F, 0x00002A46,
+ 	0xB0000000,	0x00000000,
+@@ -19152,53 +19152,53 @@ static const u32 rtw8822c_rf_a[] = {
+ 	0x92000002,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03F, 0x0000EA46,
+ 	0x93000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0xA0000000,	0x00000000,
+ 		0x03F, 0x00002A46,
+ 	0xB0000000,	0x00000000,
+@@ -19347,53 +19347,53 @@ static const u32 rtw8822c_rf_a[] = {
+ 	0x92000002,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03F, 0x0000EA46,
+ 	0x93000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x93000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x94000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000001,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000002,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000003,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000004,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000005,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000006,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000015,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0x95000016,	0x00000000,	0x40000000,	0x00000000,
+-		0x03F, 0x00031E46,
++		0x03F, 0x0003D646,
+ 	0xA0000000,	0x00000000,
+ 		0x03F, 0x00002A46,
+ 	0xB0000000,	0x00000000,
+@@ -19610,21 +19610,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000002,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19633,21 +19633,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000003,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19656,21 +19656,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000004,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19679,21 +19679,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000005,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19702,21 +19702,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000006,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19725,21 +19725,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000015,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19748,21 +19748,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000016,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19771,21 +19771,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000001,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19794,21 +19794,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000002,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19817,21 +19817,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000003,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19840,21 +19840,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000004,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19863,21 +19863,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000005,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19886,21 +19886,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000006,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19909,21 +19909,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000015,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19932,21 +19932,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000016,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19955,21 +19955,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000001,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -19978,21 +19978,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000002,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -20001,21 +20001,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000003,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -20024,21 +20024,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000004,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -20047,21 +20047,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000005,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -20070,21 +20070,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000006,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -20093,21 +20093,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000015,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -20116,21 +20116,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000016,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -20139,21 +20139,21 @@ static const u32 rtw8822c_rf_a[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x000008C8,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x000008CB,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x000008CE,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x000008D1,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x000008D4,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000DD1,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0xA0000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000487,
+@@ -38484,21 +38484,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000002,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38507,21 +38507,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000003,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38530,21 +38530,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000004,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38553,21 +38553,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000005,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38576,21 +38576,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000006,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38599,21 +38599,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000015,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38622,21 +38622,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x93000016,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38645,21 +38645,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000001,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38668,21 +38668,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000002,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38691,21 +38691,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000003,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38714,21 +38714,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000004,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38737,21 +38737,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000005,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38760,21 +38760,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000006,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38783,21 +38783,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000015,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38806,21 +38806,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x94000016,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38829,21 +38829,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000001,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38852,21 +38852,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000002,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38875,21 +38875,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000003,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38898,21 +38898,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000004,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38921,21 +38921,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000005,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38944,21 +38944,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000006,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38967,21 +38967,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000015,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -38990,21 +38990,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0x95000016,	0x00000000,	0x40000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000467,
+@@ -39013,21 +39013,21 @@ static const u32 rtw8822c_rf_b[] = {
+ 		0x033, 0x00000062,
+ 		0x03F, 0x00000908,
+ 		0x033, 0x00000063,
+-		0x03F, 0x00000D09,
++		0x03F, 0x00000CC6,
+ 		0x033, 0x00000064,
+-		0x03F, 0x00000D49,
++		0x03F, 0x00000CC9,
+ 		0x033, 0x00000065,
+-		0x03F, 0x00000D8A,
++		0x03F, 0x00000CCC,
+ 		0x033, 0x00000066,
+-		0x03F, 0x00000DEB,
++		0x03F, 0x00000CCF,
+ 		0x033, 0x00000067,
+-		0x03F, 0x00000DEE,
++		0x03F, 0x00000CD2,
+ 		0x033, 0x00000068,
+-		0x03F, 0x00000DF1,
++		0x03F, 0x00000CD5,
+ 		0x033, 0x00000069,
+-		0x03F, 0x00000DF4,
++		0x03F, 0x00000DD4,
+ 		0x033, 0x0000006A,
+-		0x03F, 0x00000DF7,
++		0x03F, 0x00000DD7,
+ 	0xA0000000,	0x00000000,
+ 		0x033, 0x00000060,
+ 		0x03F, 0x00000487,
+diff --git a/drivers/net/wireless/st/cw1200/cw1200_sdio.c b/drivers/net/wireless/st/cw1200/cw1200_sdio.c
+index b65ec14136c7e..4c30b5772ce0f 100644
+--- a/drivers/net/wireless/st/cw1200/cw1200_sdio.c
++++ b/drivers/net/wireless/st/cw1200/cw1200_sdio.c
+@@ -53,6 +53,7 @@ static const struct sdio_device_id cw1200_sdio_ids[] = {
+ 	{ SDIO_DEVICE(SDIO_VENDOR_ID_STE, SDIO_DEVICE_ID_STE_CW1200) },
+ 	{ /* end: all zeroes */			},
+ };
++MODULE_DEVICE_TABLE(sdio, cw1200_sdio_ids);
+ 
+ /* hwbus_ops implementation */
+ 
+diff --git a/drivers/net/wireless/ti/wl1251/cmd.c b/drivers/net/wireless/ti/wl1251/cmd.c
+index 498c8db2eb48b..d7a869106782f 100644
+--- a/drivers/net/wireless/ti/wl1251/cmd.c
++++ b/drivers/net/wireless/ti/wl1251/cmd.c
+@@ -454,9 +454,12 @@ int wl1251_cmd_scan(struct wl1251 *wl, u8 *ssid, size_t ssid_len,
+ 		cmd->channels[i].channel = channels[i]->hw_value;
+ 	}
+ 
+-	cmd->params.ssid_len = ssid_len;
+-	if (ssid)
+-		memcpy(cmd->params.ssid, ssid, ssid_len);
++	if (ssid) {
++		int len = clamp_val(ssid_len, 0, IEEE80211_MAX_SSID_LEN);
++
++		cmd->params.ssid_len = len;
++		memcpy(cmd->params.ssid, ssid, len);
++	}
+ 
+ 	ret = wl1251_cmd_send(wl, CMD_SCAN, cmd, sizeof(*cmd));
+ 	if (ret < 0) {
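
The wl1251 hunk above clamps the caller-supplied SSID length to IEEE80211_MAX_SSID_LEN before copying into the fixed-size command buffer, and skips the copy entirely when no SSID is given. A minimal user-space sketch of the same defensive-copy pattern (the buffer size and helper names are illustrative, not the driver's):

#include <stdio.h>
#include <string.h>

#define MAX_SSID_LEN 32	/* stand-in for IEEE80211_MAX_SSID_LEN */

struct scan_params {
	unsigned char ssid_len;
	char ssid[MAX_SSID_LEN];
};

/* Copy at most MAX_SSID_LEN bytes, mirroring the clamp_val() fix. */
static void set_ssid(struct scan_params *p, const char *ssid, size_t len)
{
	if (!ssid)
		return;			/* leave params zeroed, as the patch does */
	if (len > MAX_SSID_LEN)
		len = MAX_SSID_LEN;	/* clamp instead of trusting the caller */
	p->ssid_len = (unsigned char)len;
	memcpy(p->ssid, ssid, len);
}

int main(void)
{
	struct scan_params p = { 0 };
	char oversized[64];

	memset(oversized, 'A', sizeof(oversized));
	set_ssid(&p, oversized, sizeof(oversized));	/* would overflow without the clamp */
	printf("stored %u bytes\n", (unsigned)p.ssid_len);	/* prints 32 */
	return 0;
}
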
+diff --git a/drivers/net/wireless/ti/wl12xx/main.c b/drivers/net/wireless/ti/wl12xx/main.c
+index 9d7dbfe7fe0c3..c6da0cfb4afbe 100644
+--- a/drivers/net/wireless/ti/wl12xx/main.c
++++ b/drivers/net/wireless/ti/wl12xx/main.c
+@@ -1503,6 +1503,13 @@ static int wl12xx_get_fuse_mac(struct wl1271 *wl)
+ 	u32 mac1, mac2;
+ 	int ret;
+ 
++	/* Device may be in ELP from the bootloader or kexec */
++	ret = wlcore_write32(wl, WL12XX_WELP_ARM_COMMAND, WELP_ARM_COMMAND_VAL);
++	if (ret < 0)
++		goto out;
++
++	usleep_range(500000, 700000);
++
+ 	ret = wlcore_set_partition(wl, &wl->ptable[PART_DRPW]);
+ 	if (ret < 0)
+ 		goto out;
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index bca671ff4e546..f9c9c98599197 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -686,15 +686,17 @@ static int nvmem_add_cells_from_of(struct nvmem_device *nvmem)
+ 			continue;
+ 		if (len < 2 * sizeof(u32)) {
+ 			dev_err(dev, "nvmem: invalid reg on %pOF\n", child);
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+ 		cell = kzalloc(sizeof(*cell), GFP_KERNEL);
+-		if (!cell)
++		if (!cell) {
++			of_node_put(child);
+ 			return -ENOMEM;
++		}
+ 
+ 		cell->nvmem = nvmem;
+-		cell->np = of_node_get(child);
+ 		cell->offset = be32_to_cpup(addr++);
+ 		cell->bytes = be32_to_cpup(addr);
+ 		cell->name = kasprintf(GFP_KERNEL, "%pOFn", child);
+@@ -715,11 +717,12 @@ static int nvmem_add_cells_from_of(struct nvmem_device *nvmem)
+ 				cell->name, nvmem->stride);
+ 			/* Cells already added will be freed later. */
+ 			kfree_const(cell->name);
+-			of_node_put(cell->np);
+ 			kfree(cell);
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
++		cell->np = of_node_get(child);
+ 		nvmem_cell_add(cell);
+ 	}
+ 
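
The nvmem fix above has two parts: every early return inside the loop now drops the child node reference (of_node_put()), and the long-lived reference is only taken once the cell has passed validation, so the error paths neither leak the node nor double-manage its count. A rough sketch of that reference discipline with a hypothetical get/put pair (not the real OF API):

#include <stdio.h>
#include <stdlib.h>

struct node { int refs; };

static struct node *node_get(struct node *n) { n->refs++; return n; }
static void node_put(struct node *n) { n->refs--; }

/* Iterate over children; on any early exit, drop the iterator's reference first. */
static int add_cells(struct node *children, int count)
{
	for (int i = 0; i < count; i++) {
		struct node *child = node_get(&children[i]);	/* iterator ref */

		void *cell = malloc(16);
		if (!cell) {
			node_put(child);	/* mirrors of_node_put(child) before return */
			return -1;
		}
		/* ... validate cell; on failure: free(cell); node_put(child); return -1; */

		node_get(child);	/* long-lived ref, taken only on success */
		free(cell);		/* placeholder for nvmem_cell_add() */
		node_put(child);	/* iterator ref dropped at end of each pass */
	}
	return 0;
}

int main(void)
{
	struct node kids[2] = { {0}, {0} };

	add_cells(kids, 2);
	printf("refs: %d %d\n", kids[0].refs, kids[1].refs);	/* long-lived refs remain */
	return 0;
}
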
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index 504669e3afe07..fa6b12cfc0430 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -2214,6 +2214,8 @@ static int tegra_pcie_dw_resume_noirq(struct device *dev)
+ 		goto fail_host_init;
+ 	}
+ 
++	dw_pcie_setup_rc(&pcie->pci.pp);
++
+ 	ret = tegra_pcie_dw_start_link(&pcie->pci);
+ 	if (ret < 0)
+ 		goto fail_host_init;
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index e3f5e7ab76063..c95ebe808f92b 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -57,7 +57,7 @@
+ #define   PIO_COMPLETION_STATUS_UR		1
+ #define   PIO_COMPLETION_STATUS_CRS		2
+ #define   PIO_COMPLETION_STATUS_CA		4
+-#define   PIO_NON_POSTED_REQ			BIT(0)
++#define   PIO_NON_POSTED_REQ			BIT(10)
+ #define PIO_ADDR_LS				(PIO_BASE_ADDR + 0x8)
+ #define PIO_ADDR_MS				(PIO_BASE_ADDR + 0xc)
+ #define PIO_WR_DATA				(PIO_BASE_ADDR + 0x10)
+@@ -125,6 +125,7 @@
+ #define     LTSSM_MASK				0x3f
+ #define     LTSSM_L0				0x10
+ #define     RC_BAR_CONFIG			0x300
++#define VENDOR_ID_REG				(LMI_BASE_ADDR + 0x44)
+ 
+ /* PCIe core controller registers */
+ #define CTRL_CORE_BASE_ADDR			0x18000
+@@ -385,6 +386,16 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	reg |= (IS_RC_MSK << IS_RC_SHIFT);
+ 	advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+ 
++	/*
++	 * Replace incorrect PCI vendor id value 0x1b4b by correct value 0x11ab.
++	 * VENDOR_ID_REG contains vendor id in low 16 bits and subsystem vendor
++	 * id in high 16 bits. Updating this register changes readback value of
++	 * read-only vendor id bits in PCIE_CORE_DEV_ID_REG register. Workaround
++	 * for erratum 4.1: "The value of device and vendor ID is incorrect".
++	 */
++	reg = (PCI_VENDOR_ID_MARVELL << 16) | PCI_VENDOR_ID_MARVELL;
++	advk_writel(pcie, reg, VENDOR_ID_REG);
++
+ 	/* Set Advanced Error Capabilities and Control PF0 register */
+ 	reg = PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX |
+ 		PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN |
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 22b2bb1109c9e..6d74386eadc2c 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -27,6 +27,7 @@
+ #include <linux/nvme.h>
+ #include <linux/platform_data/x86/apple.h>
+ #include <linux/pm_runtime.h>
++#include <linux/suspend.h>
+ #include <linux/switchtec.h>
+ #include <asm/dma.h>	/* isa_dma_bridge_buggy */
+ #include "pci.h"
+@@ -3656,6 +3657,16 @@ static void quirk_apple_poweroff_thunderbolt(struct pci_dev *dev)
+ 		return;
+ 	if (pci_pcie_type(dev) != PCI_EXP_TYPE_UPSTREAM)
+ 		return;
++
++	/*
++	 * SXIO/SXFP/SXLF turns off power to the Thunderbolt controller.
++	 * We don't know how to turn it back on again, but firmware does,
++	 * so we can only use SXIO/SXFP/SXLF if we're suspending via
++	 * firmware.
++	 */
++	if (!pm_suspend_via_firmware())
++		return;
++
+ 	bridge = ACPI_HANDLE(&dev->dev);
+ 	if (!bridge)
+ 		return;
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 2d4acf21117cc..c5950a3b4e4ce 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -991,6 +991,7 @@ static int amd_gpio_remove(struct platform_device *pdev)
+ static const struct acpi_device_id amd_gpio_acpi_match[] = {
+ 	{ "AMD0030", 0 },
+ 	{ "AMDI0030", 0},
++	{ "AMDI0031", 0},
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(acpi, amd_gpio_acpi_match);
+diff --git a/drivers/pinctrl/pinctrl-equilibrium.c b/drivers/pinctrl/pinctrl-equilibrium.c
+index a194d8089b6f4..38cc20fa9d5af 100644
+--- a/drivers/pinctrl/pinctrl-equilibrium.c
++++ b/drivers/pinctrl/pinctrl-equilibrium.c
+@@ -939,6 +939,7 @@ static const struct of_device_id eqbr_pinctrl_dt_match[] = {
+ 	{ .compatible = "intel,lgm-io" },
+ 	{}
+ };
++MODULE_DEVICE_TABLE(of, eqbr_pinctrl_dt_match);
+ 
+ static struct platform_driver eqbr_pinctrl_driver = {
+ 	.probe	= eqbr_pinctrl_probe,
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c
+index ce2d8014b7e0b..d0259577934e9 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08.c
+@@ -351,6 +351,11 @@ static irqreturn_t mcp23s08_irq(int irq, void *data)
+ 	if (mcp_read(mcp, MCP_INTF, &intf))
+ 		goto unlock;
+ 
++	if (intf == 0) {
++		/* There is no interrupt pending */
++		goto unlock;
++	}
++
+ 	if (mcp_read(mcp, MCP_INTCAP, &intcap))
+ 		goto unlock;
+ 
+@@ -368,11 +373,6 @@ static irqreturn_t mcp23s08_irq(int irq, void *data)
+ 	mcp->cached_gpio = gpio;
+ 	mutex_unlock(&mcp->lock);
+ 
+-	if (intf == 0) {
+-		/* There is no interrupt pending */
+-		return IRQ_HANDLED;
+-	}
+-
+ 	dev_dbg(mcp->chip.parent,
+ 		"intcap 0x%04X intf 0x%04X gpio_orig 0x%04X gpio 0x%04X\n",
+ 		intcap, intf, gpio_orig, gpio);
+diff --git a/drivers/power/supply/ab8500-chargalg.h b/drivers/power/supply/ab8500-chargalg.h
+index 94a6f9068bc56..07e6ff50084f0 100644
+--- a/drivers/power/supply/ab8500-chargalg.h
++++ b/drivers/power/supply/ab8500-chargalg.h
+@@ -15,7 +15,7 @@
+  * - POWER_SUPPLY_TYPE_USB,
+  * because only them store as drv_data pointer to struct ux500_charger.
+  */
+-#define psy_to_ux500_charger(x) power_supply_get_drvdata(psy)
++#define psy_to_ux500_charger(x) power_supply_get_drvdata(x)
+ 
+ /* Forward declaration */
+ struct ux500_charger;
+diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+index 9e6f2a895a234..5b1355fae9b41 100644
+--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
++++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_device.c
+@@ -100,24 +100,27 @@ static ssize_t tcc_offset_degree_celsius_show(struct device *dev,
+ 	if (err)
+ 		return err;
+ 
+-	val = (val >> 24) & 0xff;
++	val = (val >> 24) & 0x3f;
+ 	return sprintf(buf, "%d\n", (int)val);
+ }
+ 
+-static int tcc_offset_update(int tcc)
++static int tcc_offset_update(unsigned int tcc)
+ {
+ 	u64 val;
+ 	int err;
+ 
+-	if (!tcc)
++	if (tcc > 63)
+ 		return -EINVAL;
+ 
+ 	err = rdmsrl_safe(MSR_IA32_TEMPERATURE_TARGET, &val);
+ 	if (err)
+ 		return err;
+ 
+-	val &= ~GENMASK_ULL(31, 24);
+-	val |= (tcc & 0xff) << 24;
++	if (val & BIT(31))
++		return -EPERM;
++
++	val &= ~GENMASK_ULL(29, 24);
++	val |= (tcc & 0x3f) << 24;
+ 
+ 	err = wrmsrl_safe(MSR_IA32_TEMPERATURE_TARGET, val);
+ 	if (err)
+@@ -126,14 +129,15 @@ static int tcc_offset_update(int tcc)
+ 	return 0;
+ }
+ 
+-static int tcc_offset_save;
++static unsigned int tcc_offset_save;
+ 
+ static ssize_t tcc_offset_degree_celsius_store(struct device *dev,
+ 				struct device_attribute *attr, const char *buf,
+ 				size_t count)
+ {
++	unsigned int tcc;
+ 	u64 val;
+-	int tcc, err;
++	int err;
+ 
+ 	err = rdmsrl_safe(MSR_PLATFORM_INFO, &val);
+ 	if (err)
+@@ -142,7 +146,7 @@ static ssize_t tcc_offset_degree_celsius_store(struct device *dev,
+ 	if (!(val & BIT(30)))
+ 		return -EACCES;
+ 
+-	if (kstrtoint(buf, 0, &tcc))
++	if (kstrtouint(buf, 0, &tcc))
+ 		return -EINVAL;
+ 
+ 	err = tcc_offset_update(tcc);
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 37002663d5212..a179c0bbc12ef 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1488,6 +1488,7 @@ struct ext4_sb_info {
+ 	struct kobject s_kobj;
+ 	struct completion s_kobj_unregister;
+ 	struct super_block *s_sb;
++	struct buffer_head *s_mmp_bh;
+ 
+ 	/* Journaling */
+ 	struct journal_s *s_journal;
+@@ -3720,6 +3721,9 @@ extern struct ext4_io_end_vec *ext4_last_io_end_vec(ext4_io_end_t *io_end);
+ /* mmp.c */
+ extern int ext4_multi_mount_protect(struct super_block *, ext4_fsblk_t);
+ 
++/* mmp.c */
++extern void ext4_stop_mmpd(struct ext4_sb_info *sbi);
++
+ /* verity.c */
+ extern const struct fsverity_operations ext4_verityops;
+ 
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index 68fbeedd627bc..bc364c119af6a 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -127,9 +127,9 @@ void __dump_mmp_msg(struct super_block *sb, struct mmp_struct *mmp,
+  */
+ static int kmmpd(void *data)
+ {
+-	struct super_block *sb = ((struct mmpd_data *) data)->sb;
+-	struct buffer_head *bh = ((struct mmpd_data *) data)->bh;
++	struct super_block *sb = (struct super_block *) data;
+ 	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
++	struct buffer_head *bh = EXT4_SB(sb)->s_mmp_bh;
+ 	struct mmp_struct *mmp;
+ 	ext4_fsblk_t mmp_block;
+ 	u32 seq = 0;
+@@ -156,7 +156,12 @@ static int kmmpd(void *data)
+ 	memcpy(mmp->mmp_nodename, init_utsname()->nodename,
+ 	       sizeof(mmp->mmp_nodename));
+ 
+-	while (!kthread_should_stop()) {
++	while (!kthread_should_stop() && !sb_rdonly(sb)) {
++		if (!ext4_has_feature_mmp(sb)) {
++			ext4_warning(sb, "kmmpd being stopped since MMP feature"
++				     " has been disabled.");
++			goto wait_to_exit;
++		}
+ 		if (++seq > EXT4_MMP_SEQ_MAX)
+ 			seq = 1;
+ 
+@@ -177,16 +182,6 @@ static int kmmpd(void *data)
+ 			failed_writes++;
+ 		}
+ 
+-		if (!(le32_to_cpu(es->s_feature_incompat) &
+-		    EXT4_FEATURE_INCOMPAT_MMP)) {
+-			ext4_warning(sb, "kmmpd being stopped since MMP feature"
+-				     " has been disabled.");
+-			goto exit_thread;
+-		}
+-
+-		if (sb_rdonly(sb))
+-			break;
+-
+ 		diff = jiffies - last_update_time;
+ 		if (diff < mmp_update_interval * HZ)
+ 			schedule_timeout_interruptible(mmp_update_interval *
+@@ -207,7 +202,7 @@ static int kmmpd(void *data)
+ 				ext4_error_err(sb, -retval,
+ 					       "error reading MMP data: %d",
+ 					       retval);
+-				goto exit_thread;
++				goto wait_to_exit;
+ 			}
+ 
+ 			mmp_check = (struct mmp_struct *)(bh_check->b_data);
+@@ -221,7 +216,7 @@ static int kmmpd(void *data)
+ 				ext4_error_err(sb, EBUSY, "abort");
+ 				put_bh(bh_check);
+ 				retval = -EBUSY;
+-				goto exit_thread;
++				goto wait_to_exit;
+ 			}
+ 			put_bh(bh_check);
+ 		}
+@@ -244,13 +239,25 @@ static int kmmpd(void *data)
+ 
+ 	retval = write_mmp_block(sb, bh);
+ 
+-exit_thread:
+-	EXT4_SB(sb)->s_mmp_tsk = NULL;
+-	kfree(data);
+-	brelse(bh);
++wait_to_exit:
++	while (!kthread_should_stop()) {
++		set_current_state(TASK_INTERRUPTIBLE);
++		if (!kthread_should_stop())
++			schedule();
++	}
++	set_current_state(TASK_RUNNING);
+ 	return retval;
+ }
+ 
++void ext4_stop_mmpd(struct ext4_sb_info *sbi)
++{
++	if (sbi->s_mmp_tsk) {
++		kthread_stop(sbi->s_mmp_tsk);
++		brelse(sbi->s_mmp_bh);
++		sbi->s_mmp_tsk = NULL;
++	}
++}
++
+ /*
+  * Get a random new sequence number but make sure it is not greater than
+  * EXT4_MMP_SEQ_MAX.
+@@ -275,7 +282,6 @@ int ext4_multi_mount_protect(struct super_block *sb,
+ 	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
+ 	struct buffer_head *bh = NULL;
+ 	struct mmp_struct *mmp = NULL;
+-	struct mmpd_data *mmpd_data;
+ 	u32 seq;
+ 	unsigned int mmp_check_interval = le16_to_cpu(es->s_mmp_update_interval);
+ 	unsigned int wait_time = 0;
+@@ -364,24 +370,17 @@ skip:
+ 		goto failed;
+ 	}
+ 
+-	mmpd_data = kmalloc(sizeof(*mmpd_data), GFP_KERNEL);
+-	if (!mmpd_data) {
+-		ext4_warning(sb, "not enough memory for mmpd_data");
+-		goto failed;
+-	}
+-	mmpd_data->sb = sb;
+-	mmpd_data->bh = bh;
++	EXT4_SB(sb)->s_mmp_bh = bh;
+ 
+ 	/*
+ 	 * Start a kernel thread to update the MMP block periodically.
+ 	 */
+-	EXT4_SB(sb)->s_mmp_tsk = kthread_run(kmmpd, mmpd_data, "kmmpd-%.*s",
++	EXT4_SB(sb)->s_mmp_tsk = kthread_run(kmmpd, sb, "kmmpd-%.*s",
+ 					     (int)sizeof(mmp->mmp_bdevname),
+ 					     bdevname(bh->b_bdev,
+ 						      mmp->mmp_bdevname));
+ 	if (IS_ERR(EXT4_SB(sb)->s_mmp_tsk)) {
+ 		EXT4_SB(sb)->s_mmp_tsk = NULL;
+-		kfree(mmpd_data);
+ 		ext4_warning(sb, "Unable to create kmmpd thread for %s.",
+ 			     sb->s_id);
+ 		goto failed;
+@@ -393,5 +392,3 @@ failed:
+ 	brelse(bh);
+ 	return 1;
+ }
+-
+-
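
The kmmpd rework is mostly about ownership: the thread no longer frees the buffer head or clears s_mmp_tsk itself; on error it parks in the wait_to_exit loop until kthread_stop() is called, and the new ext4_stop_mmpd() helper performs teardown exactly once from the stopping side. A rough pthread analog of that handshake (stop_requested stands in for kthread_should_stop(); a sketch, not the ext4 code):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool stop_requested;	/* stand-in for kthread_should_stop() */

static void *worker(void *arg)
{
	int err = -1;	/* pretend the main loop hit a fatal error */

	(void)arg;
	/* ... periodic work would run here until error or stop ... */

	/* wait_to_exit: tear nothing down; just park until told to stop */
	while (!atomic_load(&stop_requested))
		usleep(1000);
	return (void *)(long)err;	/* the stopper collects the return value */
}

int main(void)
{
	pthread_t t;
	void *ret;

	pthread_create(&t, NULL, worker, NULL);
	sleep(1);
	atomic_store(&stop_requested, true);	/* the kthread_stop() side */
	pthread_join(t, &ret);			/* teardown happens here, once */
	printf("worker exited with %ld\n", (long)ret);
	return 0;
}
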
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 736724ce86d73..09f1f02e1d6d6 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1245,8 +1245,8 @@ static void ext4_put_super(struct super_block *sb)
+ 	ext4_xattr_destroy_cache(sbi->s_ea_block_cache);
+ 	sbi->s_ea_block_cache = NULL;
+ 
+-	if (sbi->s_mmp_tsk)
+-		kthread_stop(sbi->s_mmp_tsk);
++	ext4_stop_mmpd(sbi);
++
+ 	brelse(sbi->s_sbh);
+ 	sb->s_fs_info = NULL;
+ 	/*
+@@ -5194,8 +5194,7 @@ failed_mount3a:
+ failed_mount3:
+ 	flush_work(&sbi->s_error_work);
+ 	del_timer_sync(&sbi->s_err_report);
+-	if (sbi->s_mmp_tsk)
+-		kthread_stop(sbi->s_mmp_tsk);
++	ext4_stop_mmpd(sbi);
+ failed_mount2:
+ 	rcu_read_lock();
+ 	group_desc = rcu_dereference(sbi->s_group_desc);
+@@ -5997,8 +5996,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 				 */
+ 				ext4_mark_recovery_complete(sb, es);
+ 			}
+-			if (sbi->s_mmp_tsk)
+-				kthread_stop(sbi->s_mmp_tsk);
+ 		} else {
+ 			/* Make sure we can mount this feature set readwrite */
+ 			if (ext4_has_feature_readonly(sb) ||
+@@ -6112,6 +6109,9 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks)
+ 		ext4_release_system_zone(sb);
+ 
++	if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
++		ext4_stop_mmpd(sbi);
++
+ 	/*
+ 	 * Some options can be enabled by ext4 and/or by VFS mount flag
+ 	 * either way we need to make sure it matches in both *flags and
+@@ -6145,6 +6145,8 @@ restore_opts:
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+ 		kfree(to_free[i]);
+ #endif
++	if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
++		ext4_stop_mmpd(sbi);
+ 	kfree(orig_data);
+ 	return err;
+ }
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index c83d90125ebd9..a5de48e768d7b 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3566,6 +3566,8 @@ void f2fs_destroy_garbage_collection_cache(void);
+  */
+ int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only);
+ bool f2fs_space_for_roll_forward(struct f2fs_sb_info *sbi);
++int __init f2fs_create_recovery_cache(void);
++void f2fs_destroy_recovery_cache(void);
+ 
+ /*
+  * debug.c
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index 422146c6d866d..4b2f7d1d5bf4b 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -788,13 +788,6 @@ int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
+ 	quota_enabled = f2fs_enable_quota_files(sbi, s_flags & SB_RDONLY);
+ #endif
+ 
+-	fsync_entry_slab = f2fs_kmem_cache_create("f2fs_fsync_inode_entry",
+-			sizeof(struct fsync_inode_entry));
+-	if (!fsync_entry_slab) {
+-		err = -ENOMEM;
+-		goto out;
+-	}
+-
+ 	INIT_LIST_HEAD(&inode_list);
+ 	INIT_LIST_HEAD(&tmp_inode_list);
+ 	INIT_LIST_HEAD(&dir_list);
+@@ -867,8 +860,6 @@ skip:
+ 		}
+ 	}
+ 
+-	kmem_cache_destroy(fsync_entry_slab);
+-out:
+ #ifdef CONFIG_QUOTA
+ 	/* Turn quotas off */
+ 	if (quota_enabled)
+@@ -878,3 +869,17 @@ out:
+ 
+ 	return ret ? ret : err;
+ }
++
++int __init f2fs_create_recovery_cache(void)
++{
++	fsync_entry_slab = f2fs_kmem_cache_create("f2fs_fsync_inode_entry",
++					sizeof(struct fsync_inode_entry));
++	if (!fsync_entry_slab)
++		return -ENOMEM;
++	return 0;
++}
++
++void f2fs_destroy_recovery_cache(void)
++{
++	kmem_cache_destroy(fsync_entry_slab);
++}
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 7d325bfaf65ae..096492caaa6bf 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -4227,9 +4227,12 @@ static int __init init_f2fs_fs(void)
+ 	err = f2fs_create_checkpoint_caches();
+ 	if (err)
+ 		goto free_segment_manager_caches;
+-	err = f2fs_create_extent_cache();
++	err = f2fs_create_recovery_cache();
+ 	if (err)
+ 		goto free_checkpoint_caches;
++	err = f2fs_create_extent_cache();
++	if (err)
++		goto free_recovery_cache;
+ 	err = f2fs_create_garbage_collection_cache();
+ 	if (err)
+ 		goto free_extent_cache;
+@@ -4278,6 +4281,8 @@ free_garbage_collection_cache:
+ 	f2fs_destroy_garbage_collection_cache();
+ free_extent_cache:
+ 	f2fs_destroy_extent_cache();
++free_recovery_cache:
++	f2fs_destroy_recovery_cache();
+ free_checkpoint_caches:
+ 	f2fs_destroy_checkpoint_caches();
+ free_segment_manager_caches:
+@@ -4303,6 +4308,7 @@ static void __exit exit_f2fs_fs(void)
+ 	f2fs_exit_sysfs();
+ 	f2fs_destroy_garbage_collection_cache();
+ 	f2fs_destroy_extent_cache();
++	f2fs_destroy_recovery_cache();
+ 	f2fs_destroy_checkpoint_caches();
+ 	f2fs_destroy_segment_manager_caches();
+ 	f2fs_destroy_node_manager_caches();
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index b3e8624a37d09..60f58efdb5f48 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -241,7 +241,8 @@ static void io_wqe_wake_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
+ 	 * Most likely an attempt to queue unbounded work on an io_wq that
+ 	 * wasn't setup with any unbounded workers.
+ 	 */
+-	WARN_ON_ONCE(!acct->max_workers);
++	if (unlikely(!acct->max_workers))
++		pr_warn_once("io-wq is not configured for unbound workers");
+ 
+ 	rcu_read_lock();
+ 	ret = io_wqe_activate_free_worker(wqe);
+@@ -906,6 +907,8 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
+ 
+ 	if (WARN_ON_ONCE(!data->free_work || !data->do_work))
+ 		return ERR_PTR(-EINVAL);
++	if (WARN_ON_ONCE(!bounded))
++		return ERR_PTR(-EINVAL);
+ 
+ 	wq = kzalloc(sizeof(*wq), GFP_KERNEL);
+ 	if (!wq)
+diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c
+index 6f65bfa9f18d5..b0eb9c85eea0c 100644
+--- a/fs/jfs/inode.c
++++ b/fs/jfs/inode.c
+@@ -151,7 +151,8 @@ void jfs_evict_inode(struct inode *inode)
+ 			if (test_cflag(COMMIT_Freewmap, inode))
+ 				jfs_free_zero_link(inode);
+ 
+-			diFree(inode);
++			if (JFS_SBI(inode->i_sb)->ipimap)
++				diFree(inode);
+ 
+ 			/*
+ 			 * Free the inode from the quota allocation.
+diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
+index 9edc8e2b154e0..0834b101c316d 100644
+--- a/fs/reiserfs/journal.c
++++ b/fs/reiserfs/journal.c
+@@ -2758,6 +2758,20 @@ int journal_init(struct super_block *sb, const char *j_dev_name,
+ 		goto free_and_return;
+ 	}
+ 
++	/*
++	 * Sanity check to see if journal first block is correct.
++	 * If journal first block is invalid it can cause
++	 * zeroing important superblock members.
++	 */
++	if (!SB_ONDISK_JOURNAL_DEVICE(sb) &&
++	    SB_ONDISK_JOURNAL_1st_BLOCK(sb) < SB_JOURNAL_1st_RESERVED_BLOCK(sb)) {
++		reiserfs_warning(sb, "journal-1393",
++				 "journal 1st super block is invalid: 1st reserved block %d, but actual 1st block is %d",
++				 SB_JOURNAL_1st_RESERVED_BLOCK(sb),
++				 SB_ONDISK_JOURNAL_1st_BLOCK(sb));
++		goto free_and_return;
++	}
++
+ 	if (journal_init_dev(sb, journal, j_dev_name) != 0) {
+ 		reiserfs_warning(sb, "sh-462",
+ 				 "unable to initialize journal device");
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index 7b572e1414ba2..e279a069a26b0 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -275,6 +275,7 @@ static struct inode *ubifs_alloc_inode(struct super_block *sb)
+ 	memset((void *)ui + sizeof(struct inode), 0,
+ 	       sizeof(struct ubifs_inode) - sizeof(struct inode));
+ 	mutex_init(&ui->ui_mutex);
++	init_rwsem(&ui->xattr_sem);
+ 	spin_lock_init(&ui->ui_lock);
+ 	return &ui->vfs_inode;
+ };
+diff --git a/fs/ubifs/ubifs.h b/fs/ubifs/ubifs.h
+index b65c599a386a4..7e978f421430a 100644
+--- a/fs/ubifs/ubifs.h
++++ b/fs/ubifs/ubifs.h
+@@ -356,6 +356,7 @@ struct ubifs_gced_idx_leb {
+  * @ui_mutex: serializes inode write-back with the rest of VFS operations,
+  *            serializes "clean <-> dirty" state changes, serializes bulk-read,
+  *            protects @dirty, @bulk_read, @ui_size, and @xattr_size
++ * @xattr_sem: serializes write operations (remove|set|create) on xattr
+  * @ui_lock: protects @synced_i_size
+  * @synced_i_size: synchronized size of inode, i.e. the value of inode size
+  *                 currently stored on the flash; used only for regular file
+@@ -409,6 +410,7 @@ struct ubifs_inode {
+ 	unsigned int bulk_read:1;
+ 	unsigned int compr_type:2;
+ 	struct mutex ui_mutex;
++	struct rw_semaphore xattr_sem;
+ 	spinlock_t ui_lock;
+ 	loff_t synced_i_size;
+ 	loff_t ui_size;
+diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
+index 6b1e9830b2745..1fce27e9b7697 100644
+--- a/fs/ubifs/xattr.c
++++ b/fs/ubifs/xattr.c
+@@ -285,6 +285,7 @@ int ubifs_xattr_set(struct inode *host, const char *name, const void *value,
+ 	if (!xent)
+ 		return -ENOMEM;
+ 
++	down_write(&ubifs_inode(host)->xattr_sem);
+ 	/*
+ 	 * The extended attribute entries are stored in LNC, so multiple
+ 	 * look-ups do not involve reading the flash.
+@@ -319,6 +320,7 @@ int ubifs_xattr_set(struct inode *host, const char *name, const void *value,
+ 	iput(inode);
+ 
+ out_free:
++	up_write(&ubifs_inode(host)->xattr_sem);
+ 	kfree(xent);
+ 	return err;
+ }
+@@ -341,18 +343,19 @@ ssize_t ubifs_xattr_get(struct inode *host, const char *name, void *buf,
+ 	if (!xent)
+ 		return -ENOMEM;
+ 
++	down_read(&ubifs_inode(host)->xattr_sem);
+ 	xent_key_init(c, &key, host->i_ino, &nm);
+ 	err = ubifs_tnc_lookup_nm(c, &key, xent, &nm);
+ 	if (err) {
+ 		if (err == -ENOENT)
+ 			err = -ENODATA;
+-		goto out_unlock;
++		goto out_cleanup;
+ 	}
+ 
+ 	inode = iget_xattr(c, le64_to_cpu(xent->inum));
+ 	if (IS_ERR(inode)) {
+ 		err = PTR_ERR(inode);
+-		goto out_unlock;
++		goto out_cleanup;
+ 	}
+ 
+ 	ui = ubifs_inode(inode);
+@@ -374,7 +377,8 @@ ssize_t ubifs_xattr_get(struct inode *host, const char *name, void *buf,
+ out_iput:
+ 	mutex_unlock(&ui->ui_mutex);
+ 	iput(inode);
+-out_unlock:
++out_cleanup:
++	up_read(&ubifs_inode(host)->xattr_sem);
+ 	kfree(xent);
+ 	return err;
+ }
+@@ -406,16 +410,21 @@ ssize_t ubifs_listxattr(struct dentry *dentry, char *buffer, size_t size)
+ 	dbg_gen("ino %lu ('%pd'), buffer size %zd", host->i_ino,
+ 		dentry, size);
+ 
++	down_read(&host_ui->xattr_sem);
+ 	len = host_ui->xattr_names + host_ui->xattr_cnt;
+-	if (!buffer)
++	if (!buffer) {
+ 		/*
+ 		 * We should return the minimum buffer size which will fit a
+ 		 * null-terminated list of all the extended attribute names.
+ 		 */
+-		return len;
++		err = len;
++		goto out_err;
++	}
+ 
+-	if (len > size)
+-		return -ERANGE;
++	if (len > size) {
++		err = -ERANGE;
++		goto out_err;
++	}
+ 
+ 	lowest_xent_key(c, &key, host->i_ino);
+ 	while (1) {
+@@ -437,8 +446,9 @@ ssize_t ubifs_listxattr(struct dentry *dentry, char *buffer, size_t size)
+ 		pxent = xent;
+ 		key_read(c, &xent->key, &key);
+ 	}
+-
+ 	kfree(pxent);
++	up_read(&host_ui->xattr_sem);
++
+ 	if (err != -ENOENT) {
+ 		ubifs_err(c, "cannot find next direntry, error %d", err);
+ 		return err;
+@@ -446,6 +456,10 @@ ssize_t ubifs_listxattr(struct dentry *dentry, char *buffer, size_t size)
+ 
+ 	ubifs_assert(c, written <= size);
+ 	return written;
++
++out_err:
++	up_read(&host_ui->xattr_sem);
++	return err;
+ }
+ 
+ static int remove_xattr(struct ubifs_info *c, struct inode *host,
+@@ -504,6 +518,7 @@ int ubifs_purge_xattrs(struct inode *host)
+ 	ubifs_warn(c, "inode %lu has too many xattrs, doing a non-atomic deletion",
+ 		   host->i_ino);
+ 
++	down_write(&ubifs_inode(host)->xattr_sem);
+ 	lowest_xent_key(c, &key, host->i_ino);
+ 	while (1) {
+ 		xent = ubifs_tnc_next_ent(c, &key, &nm);
+@@ -523,7 +538,7 @@ int ubifs_purge_xattrs(struct inode *host)
+ 			ubifs_ro_mode(c, err);
+ 			kfree(pxent);
+ 			kfree(xent);
+-			return err;
++			goto out_err;
+ 		}
+ 
+ 		ubifs_assert(c, ubifs_inode(xino)->xattr);
+@@ -535,7 +550,7 @@ int ubifs_purge_xattrs(struct inode *host)
+ 			kfree(xent);
+ 			iput(xino);
+ 			ubifs_err(c, "cannot remove xattr, error %d", err);
+-			return err;
++			goto out_err;
+ 		}
+ 
+ 		iput(xino);
+@@ -544,14 +559,19 @@ int ubifs_purge_xattrs(struct inode *host)
+ 		pxent = xent;
+ 		key_read(c, &xent->key, &key);
+ 	}
+-
+ 	kfree(pxent);
++	up_write(&ubifs_inode(host)->xattr_sem);
++
+ 	if (err != -ENOENT) {
+ 		ubifs_err(c, "cannot find next direntry, error %d", err);
+ 		return err;
+ 	}
+ 
+ 	return 0;
++
++out_err:
++	up_write(&ubifs_inode(host)->xattr_sem);
++	return err;
+ }
+ 
+ /**
+@@ -594,6 +614,7 @@ static int ubifs_xattr_remove(struct inode *host, const char *name)
+ 	if (!xent)
+ 		return -ENOMEM;
+ 
++	down_write(&ubifs_inode(host)->xattr_sem);
+ 	xent_key_init(c, &key, host->i_ino, &nm);
+ 	err = ubifs_tnc_lookup_nm(c, &key, xent, &nm);
+ 	if (err) {
+@@ -618,6 +639,7 @@ static int ubifs_xattr_remove(struct inode *host, const char *name)
+ 	iput(inode);
+ 
+ out_free:
++	up_write(&ubifs_inode(host)->xattr_sem);
+ 	kfree(xent);
+ 	return err;
+ }
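
The UBIFS hunks above add a per-inode rw_semaphore: set, remove, and purge take it for writing, while get and list take it for reading, so concurrent lookups stay cheap and mutating operations are serialized against them. A compact illustration of that reader/writer split using POSIX rwlocks (the attribute store is hypothetical, not the UBIFS structures):

#include <pthread.h>
#include <stdio.h>
#include <string.h>

struct host {
	pthread_rwlock_t xattr_sem;	/* plays the role of ui->xattr_sem */
	char value[64];
};

/* Writers (set/remove) are exclusive, like down_write() in the patch. */
static void xattr_set(struct host *h, const char *v)
{
	pthread_rwlock_wrlock(&h->xattr_sem);
	strncpy(h->value, v, sizeof(h->value) - 1);
	pthread_rwlock_unlock(&h->xattr_sem);
}

/* Readers (get/list) may run concurrently, like down_read(). */
static void xattr_get(struct host *h, char *out, size_t len)
{
	pthread_rwlock_rdlock(&h->xattr_sem);
	strncpy(out, h->value, len - 1);
	out[len - 1] = '\0';
	pthread_rwlock_unlock(&h->xattr_sem);
}

int main(void)
{
	struct host h = { .xattr_sem = PTHREAD_RWLOCK_INITIALIZER, .value = "" };
	char buf[64];

	xattr_set(&h, "user.test=1");
	xattr_get(&h, buf, sizeof(buf));
	printf("%s\n", buf);
	return 0;
}
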
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index 3ae9f1e919846..7c7c9bbbfa571 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -934,6 +934,10 @@ static int udf_symlink(struct user_namespace *mnt_userns, struct inode *dir,
+ 				iinfo->i_location.partitionReferenceNum,
+ 				0);
+ 		epos.bh = udf_tgetblk(sb, block);
++		if (unlikely(!epos.bh)) {
++			err = -ENOMEM;
++			goto out_no_entry;
++		}
+ 		lock_buffer(epos.bh);
+ 		memset(epos.bh->b_data, 0x00, bsize);
+ 		set_buffer_uptodate(epos.bh);
+diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
+index db026b6ec15ab..e5cf12f102a21 100644
+--- a/include/linux/blk_types.h
++++ b/include/linux/blk_types.h
+@@ -304,6 +304,7 @@ enum {
+ 	BIO_CGROUP_ACCT,	/* has been accounted to a cgroup */
+ 	BIO_TRACKED,		/* set if bio goes through the rq_qos path */
+ 	BIO_REMAPPED,
++	BIO_ZONE_WRITE_LOCKED,	/* Owns a zoned device zone write lock */
+ 	BIO_FLAG_LAST
+ };
+ 
+diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h
+index 3de38d6a0aeac..2c6b9e4162254 100644
+--- a/include/linux/netdev_features.h
++++ b/include/linux/netdev_features.h
+@@ -93,7 +93,7 @@ enum {
+ 
+ 	/*
+ 	 * Add your fresh new feature above and remember to update
+-	 * netdev_features_strings[] in net/core/ethtool.c and maybe
++	 * netdev_features_strings[] in net/ethtool/common.c and maybe
+ 	 * some feature mask #defines below. Please also describe it
+ 	 * in Documentation/networking/netdev-features.rst.
+ 	 */
+diff --git a/include/linux/of_mdio.h b/include/linux/of_mdio.h
+index 2b05e7f7c2385..da633d34ab866 100644
+--- a/include/linux/of_mdio.h
++++ b/include/linux/of_mdio.h
+@@ -72,6 +72,13 @@ static inline int of_mdiobus_register(struct mii_bus *mdio, struct device_node *
+ 	return mdiobus_register(mdio);
+ }
+ 
++static inline int devm_of_mdiobus_register(struct device *dev,
++					   struct mii_bus *mdio,
++					   struct device_node *np)
++{
++	return devm_mdiobus_register(dev, mdio);
++}
++
+ static inline struct mdio_device *of_mdio_find_device(struct device_node *np)
+ {
+ 	return NULL;
+diff --git a/include/linux/wait.h b/include/linux/wait.h
+index fe10e8570a522..6598ae35e1b5a 100644
+--- a/include/linux/wait.h
++++ b/include/linux/wait.h
+@@ -1136,7 +1136,7 @@ do {										\
+  * Waitqueues which are removed from the waitqueue_head at wakeup time
+  */
+ void prepare_to_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
+-void prepare_to_wait_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
++bool prepare_to_wait_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
+ long prepare_to_wait_event(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
+ void finish_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
+ long wait_woken(struct wait_queue_entry *wq_entry, unsigned mode, long timeout);
+diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
+index d0e9a5bdb08bb..6078dd29f3e77 100644
+--- a/include/media/v4l2-subdev.h
++++ b/include/media/v4l2-subdev.h
+@@ -162,6 +162,9 @@ struct v4l2_subdev_io_pin_config {
+  * @s_gpio: set GPIO pins. Very simple right now, might need to be extended with
+  *	a direction argument if needed.
+  *
++ * @command: called by in-kernel drivers in order to call functions internal
++ *	   to subdev drivers that have a separate callback.
++ *
+  * @ioctl: called at the end of ioctl() syscall handler at the V4L2 core.
+  *	   used to provide support for private ioctls used on the driver.
+  *
+@@ -193,6 +196,7 @@ struct v4l2_subdev_core_ops {
+ 	int (*load_fw)(struct v4l2_subdev *sd);
+ 	int (*reset)(struct v4l2_subdev *sd, u32 val);
+ 	int (*s_gpio)(struct v4l2_subdev *sd, u32 val);
++	long (*command)(struct v4l2_subdev *sd, unsigned int cmd, void *arg);
+ 	long (*ioctl)(struct v4l2_subdev *sd, unsigned int cmd, void *arg);
+ #ifdef CONFIG_COMPAT
+ 	long (*compat_ioctl32)(struct v4l2_subdev *sd, unsigned int cmd,
+diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
+index dc5c1e69cd9f2..69c9eabf83252 100644
+--- a/include/net/flow_offload.h
++++ b/include/net/flow_offload.h
+@@ -319,12 +319,14 @@ flow_action_mixed_hw_stats_check(const struct flow_action *action,
+ 	if (flow_offload_has_one_action(action))
+ 		return true;
+ 
+-	flow_action_for_each(i, action_entry, action) {
+-		if (i && action_entry->hw_stats != last_hw_stats) {
+-			NL_SET_ERR_MSG_MOD(extack, "Mixing HW stats types for actions is not supported");
+-			return false;
++	if (action) {
++		flow_action_for_each(i, action_entry, action) {
++			if (i && action_entry->hw_stats != last_hw_stats) {
++				NL_SET_ERR_MSG_MOD(extack, "Mixing HW stats types for actions is not supported");
++				return false;
++			}
++			last_hw_stats = action_entry->hw_stats;
+ 		}
+-		last_hw_stats = action_entry->hw_stats;
+ 	}
+ 	return true;
+ }
+diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
+index 1aa585216f34b..d49593c72a555 100644
+--- a/include/net/sctp/structs.h
++++ b/include/net/sctp/structs.h
+@@ -461,7 +461,7 @@ struct sctp_af {
+ 					 int saddr);
+ 	void		(*from_sk)	(union sctp_addr *,
+ 					 struct sock *sk);
+-	void		(*from_addr_param) (union sctp_addr *,
++	bool		(*from_addr_param) (union sctp_addr *,
+ 					    union sctp_addr_param *,
+ 					    __be16 port, int iif);
+ 	int		(*to_addr_param) (const union sctp_addr *,
+diff --git a/include/uapi/asm-generic/socket.h b/include/uapi/asm-generic/socket.h
+index 4dcd13d097a9e..d588c244ec2fa 100644
+--- a/include/uapi/asm-generic/socket.h
++++ b/include/uapi/asm-generic/socket.h
+@@ -122,6 +122,8 @@
+ #define SO_PREFER_BUSY_POLL	69
+ #define SO_BUSY_POLL_BUDGET	70
+ 
++#define SO_NETNS_COOKIE		71
++
+ #if !defined(__KERNEL__)
+ 
+ #if __BITS_PER_LONG == 64 || (defined(__x86_64__) && defined(__ILP32__))
+diff --git a/include/uapi/linux/ethtool.h b/include/uapi/linux/ethtool.h
+index cfef6b08169a1..67aa7134b3019 100644
+--- a/include/uapi/linux/ethtool.h
++++ b/include/uapi/linux/ethtool.h
+@@ -233,7 +233,7 @@ enum tunable_id {
+ 	ETHTOOL_PFC_PREVENTION_TOUT, /* timeout in msecs */
+ 	/*
+ 	 * Add your fresh new tunable attribute above and remember to update
+-	 * tunable_strings[] in net/core/ethtool.c
++	 * tunable_strings[] in net/ethtool/common.c
+ 	 */
+ 	__ETHTOOL_TUNABLE_COUNT,
+ };
+@@ -297,7 +297,7 @@ enum phy_tunable_id {
+ 	ETHTOOL_PHY_EDPD,
+ 	/*
+ 	 * Add your fresh new phy tunable attribute above and remember to update
+-	 * phy_tunable_strings[] in net/core/ethtool.c
++	 * phy_tunable_strings[] in net/ethtool/common.c
+ 	 */
+ 	__ETHTOOL_PHY_TUNABLE_COUNT,
+ };
+diff --git a/include/uapi/linux/icmp.h b/include/uapi/linux/icmp.h
+index c1da8244c5e18..163c0998aec9e 100644
+--- a/include/uapi/linux/icmp.h
++++ b/include/uapi/linux/icmp.h
+@@ -20,7 +20,6 @@
+ 
+ #include <linux/types.h>
+ #include <asm/byteorder.h>
+-#include <linux/in.h>
+ #include <linux/if.h>
+ #include <linux/in6.h>
+ 
+@@ -154,7 +153,7 @@ struct icmp_ext_echo_iio {
+ 		struct {
+ 			struct icmp_ext_echo_ctype3_hdr ctype3_hdr;
+ 			union {
+-				struct in_addr	ipv4_addr;
++				__be32		ipv4_addr;
+ 				struct in6_addr	ipv6_addr;
+ 			} ip_addr;
+ 		} addr;
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 5e31ee9f75129..034ad93a1ad71 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -1392,29 +1392,54 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
+ select_insn:
+ 	goto *jumptable[insn->code];
+ 
+-	/* ALU */
+-#define ALU(OPCODE, OP)			\
+-	ALU64_##OPCODE##_X:		\
+-		DST = DST OP SRC;	\
+-		CONT;			\
+-	ALU_##OPCODE##_X:		\
+-		DST = (u32) DST OP (u32) SRC;	\
+-		CONT;			\
+-	ALU64_##OPCODE##_K:		\
+-		DST = DST OP IMM;		\
+-		CONT;			\
+-	ALU_##OPCODE##_K:		\
+-		DST = (u32) DST OP (u32) IMM;	\
++	/* Explicitly mask the register-based shift amounts with 63 or 31
++	 * to avoid undefined behavior. Normally this won't affect the
++	 * generated code, for example, in case of native 64 bit archs such
++	 * as x86-64 or arm64, the compiler is optimizing the AND away for
++	 * the interpreter. In case of JITs, each of the JIT backends compiles
++	 * the BPF shift operations to machine instructions which produce
++	 * implementation-defined results in such a case; the resulting
++	 * contents of the register may be arbitrary, but program behaviour
++	 * as a whole remains defined. In other words, in case of JIT backends,
++	 * the AND must /not/ be added to the emitted LSH/RSH/ARSH translation.
++	 */
++	/* ALU (shifts) */
++#define SHT(OPCODE, OP)					\
++	ALU64_##OPCODE##_X:				\
++		DST = DST OP (SRC & 63);		\
++		CONT;					\
++	ALU_##OPCODE##_X:				\
++		DST = (u32) DST OP ((u32) SRC & 31);	\
++		CONT;					\
++	ALU64_##OPCODE##_K:				\
++		DST = DST OP IMM;			\
++		CONT;					\
++	ALU_##OPCODE##_K:				\
++		DST = (u32) DST OP (u32) IMM;		\
++		CONT;
++	/* ALU (rest) */
++#define ALU(OPCODE, OP)					\
++	ALU64_##OPCODE##_X:				\
++		DST = DST OP SRC;			\
++		CONT;					\
++	ALU_##OPCODE##_X:				\
++		DST = (u32) DST OP (u32) SRC;		\
++		CONT;					\
++	ALU64_##OPCODE##_K:				\
++		DST = DST OP IMM;			\
++		CONT;					\
++	ALU_##OPCODE##_K:				\
++		DST = (u32) DST OP (u32) IMM;		\
+ 		CONT;
+-
+ 	ALU(ADD,  +)
+ 	ALU(SUB,  -)
+ 	ALU(AND,  &)
+ 	ALU(OR,   |)
+-	ALU(LSH, <<)
+-	ALU(RSH, >>)
+ 	ALU(XOR,  ^)
+ 	ALU(MUL,  *)
++	SHT(LSH, <<)
++	SHT(RSH, >>)
++#undef SHT
+ #undef ALU
+ 	ALU_NEG:
+ 		DST = (u32) -DST;
+@@ -1439,13 +1464,13 @@ select_insn:
+ 		insn++;
+ 		CONT;
+ 	ALU_ARSH_X:
+-		DST = (u64) (u32) (((s32) DST) >> SRC);
++		DST = (u64) (u32) (((s32) DST) >> (SRC & 31));
+ 		CONT;
+ 	ALU_ARSH_K:
+ 		DST = (u64) (u32) (((s32) DST) >> IMM);
+ 		CONT;
+ 	ALU64_ARSH_X:
+-		(*(s64 *) &DST) >>= SRC;
++		(*(s64 *) &DST) >>= (SRC & 63);
+ 		CONT;
+ 	ALU64_ARSH_K:
+ 		(*(s64 *) &DST) >>= IMM;
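
The interpreter change masks register-based shift amounts because in C, shifting a 64-bit value by 64 or more (or a 32-bit value by 32 or more) is undefined behavior; the immediate forms are left alone since the verifier already bounds them. A stand-alone demonstration of the masked form (not the BPF interpreter itself):

#include <stdint.h>
#include <stdio.h>

/* dst << src is UB when src >= 64; masking keeps the result defined. */
static uint64_t lsh64(uint64_t dst, uint64_t src)
{
	return dst << (src & 63);
}

static uint32_t lsh32(uint32_t dst, uint32_t src)
{
	return dst << (src & 31);
}

int main(void)
{
	/* An out-of-range shift amount now wraps instead of invoking UB. */
	printf("%llu\n", (unsigned long long)lsh64(1, 70));	/* 1 << 6 = 64 */
	printf("%u\n", lsh32(1, 33));				/* 1 << 1 = 2 */
	return 0;
}
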
+diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
+index 84b3b35fc0d05..9e0c10c6892ad 100644
+--- a/kernel/bpf/ringbuf.c
++++ b/kernel/bpf/ringbuf.c
+@@ -8,6 +8,7 @@
+ #include <linux/vmalloc.h>
+ #include <linux/wait.h>
+ #include <linux/poll.h>
++#include <linux/kmemleak.h>
+ #include <uapi/linux/btf.h>
+ 
+ #define RINGBUF_CREATE_FLAG_MASK (BPF_F_NUMA_NODE)
+@@ -105,6 +106,7 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
+ 	rb = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
+ 		  VM_ALLOC | VM_USERMAP, PAGE_KERNEL);
+ 	if (rb) {
++		kmemleak_not_leak(pages);
+ 		rb->pages = pages;
+ 		rb->nr_pages = nr_pages;
+ 		return rb;
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index e538518556f47..d2e1692d7bdf8 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -32,6 +32,7 @@
+ #include <linux/relay.h>
+ #include <linux/slab.h>
+ #include <linux/percpu-rwsem.h>
++#include <linux/cpuset.h>
+ 
+ #include <trace/events/power.h>
+ #define CREATE_TRACE_POINTS
+@@ -873,6 +874,52 @@ void __init cpuhp_threads_init(void)
+ 	kthread_unpark(this_cpu_read(cpuhp_state.thread));
+ }
+ 
++/*
++ *
++ * Serialize hotplug trainwrecks outside of the cpu_hotplug_lock
++ * protected region.
++ *
++ * The operation is still serialized against concurrent CPU hotplug via
++ * cpu_add_remove_lock, i.e. CPU map protection.  But it is _not_
++ * serialized against other hotplug related activity like adding or
++ * removing of state callbacks and state instances, which invoke either the
++ * startup or the teardown callback of the affected state.
++ *
++ * This is required for subsystems which are unfixable vs. CPU hotplug and
++ * evade lock inversion problems by scheduling work which has to be
++ * completed _before_ cpu_up()/_cpu_down() returns.
++ *
++ * Don't even think about adding anything to this for any new code or even
++ * drivers. Its only purpose is to keep existing lock order trainwrecks
++ * working.
++ *
++ * For cpu_down() there might be valid reasons to finish cleanups which are
++ * not required to be done under cpu_hotplug_lock, but that's a different
++ * story and would be not invoked via this.
++ */
++static void cpu_up_down_serialize_trainwrecks(bool tasks_frozen)
++{
++	/*
++	 * cpusets delegate hotplug operations to a worker to "solve" the
++	 * lock order problems. Wait for the worker, but only if tasks are
++	 * _not_ frozen (suspend, hibernate) as that would wait forever.
++	 *
++	 * The wait is required because otherwise the hotplug operation
++	 * returns with inconsistent state, which could even be observed in
++	 * user space when a new CPU is brought up. The CPU plug uevent
++	 * would be delivered and user space reacting on it would fail to
++	 * move tasks to the newly plugged CPU up to the point where the
++	 * work has finished because up to that point the newly plugged CPU
++	 * is not assignable in cpusets/cgroups. On unplug that's not
++	 * necessarily a visible issue, but it is still inconsistent state,
++	 * which is the real problem which needs to be "fixed". This can't
++	 * prevent the transient state between scheduling the work and
++	 * returning from waiting for it.
++	 */
++	if (!tasks_frozen)
++		cpuset_wait_for_hotplug();
++}
++
+ #ifdef CONFIG_HOTPLUG_CPU
+ #ifndef arch_clear_mm_cpumask_cpu
+ #define arch_clear_mm_cpumask_cpu(cpu, mm) cpumask_clear_cpu(cpu, mm_cpumask(mm))
+@@ -1108,6 +1155,7 @@ out:
+ 	 */
+ 	lockup_detector_cleanup();
+ 	arch_smt_update();
++	cpu_up_down_serialize_trainwrecks(tasks_frozen);
+ 	return ret;
+ }
+ 
+@@ -1302,6 +1350,7 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen, enum cpuhp_state target)
+ out:
+ 	cpus_write_unlock();
+ 	arch_smt_update();
++	cpu_up_down_serialize_trainwrecks(tasks_frozen);
+ 	return ret;
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index e807b743353d1..7dd0d859d95bb 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3716,15 +3716,15 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ 
+ 		r = removed_load;
+ 		sub_positive(&sa->load_avg, r);
+-		sub_positive(&sa->load_sum, r * divider);
++		sa->load_sum = sa->load_avg * divider;
+ 
+ 		r = removed_util;
+ 		sub_positive(&sa->util_avg, r);
+-		sub_positive(&sa->util_sum, r * divider);
++		sa->util_sum = sa->util_avg * divider;
+ 
+ 		r = removed_runnable;
+ 		sub_positive(&sa->runnable_avg, r);
+-		sub_positive(&sa->runnable_sum, r * divider);
++		sa->runnable_sum = sa->runnable_avg * divider;
+ 
+ 		/*
+ 		 * removed_runnable is the unweighted version of removed_load so we
+diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
+index 183cc6ae68a68..76577d1642a5d 100644
+--- a/kernel/sched/wait.c
++++ b/kernel/sched/wait.c
+@@ -264,17 +264,22 @@ prepare_to_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_ent
+ }
+ EXPORT_SYMBOL(prepare_to_wait);
+ 
+-void
++/* Returns true if we are the first waiter in the queue, false otherwise. */
++bool
+ prepare_to_wait_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state)
+ {
+ 	unsigned long flags;
++	bool was_empty = false;
+ 
+ 	wq_entry->flags |= WQ_FLAG_EXCLUSIVE;
+ 	spin_lock_irqsave(&wq_head->lock, flags);
+-	if (list_empty(&wq_entry->entry))
++	if (list_empty(&wq_entry->entry)) {
++		was_empty = list_empty(&wq_head->head);
+ 		__add_wait_queue_entry_tail(wq_head, wq_entry);
++	}
+ 	set_current_state(state);
+ 	spin_unlock_irqrestore(&wq_head->lock, flags);
++	return was_empty;
+ }
+ EXPORT_SYMBOL(prepare_to_wait_exclusive);
+ 
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index d23a09d3eb37b..1afd3b57ae517 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2184,8 +2184,15 @@ void tracing_reset_all_online_cpus(void)
+ 	}
+ }
+ 
++/*
++ * The tgid_map array maps from pid to tgid; i.e. the value stored at index i
++ * is the tgid last observed corresponding to pid=i.
++ */
+ static int *tgid_map;
+ 
++/* The maximum valid index into tgid_map. */
++static size_t tgid_map_max;
++
+ #define SAVED_CMDLINES_DEFAULT 128
+ #define NO_CMDLINE_MAP UINT_MAX
+ static arch_spinlock_t trace_cmdline_lock = __ARCH_SPIN_LOCK_UNLOCKED;
+@@ -2458,24 +2465,41 @@ void trace_find_cmdline(int pid, char comm[])
+ 	preempt_enable();
+ }
+ 
++static int *trace_find_tgid_ptr(int pid)
++{
++	/*
++	 * Pairs with the smp_store_release in set_tracer_flag() to ensure that
++	 * if we observe a non-NULL tgid_map then we also observe the correct
++	 * tgid_map_max.
++	 */
++	int *map = smp_load_acquire(&tgid_map);
++
++	if (unlikely(!map || pid > tgid_map_max))
++		return NULL;
++
++	return &map[pid];
++}
++
+ int trace_find_tgid(int pid)
+ {
+-	if (unlikely(!tgid_map || !pid || pid > PID_MAX_DEFAULT))
+-		return 0;
++	int *ptr = trace_find_tgid_ptr(pid);
+ 
+-	return tgid_map[pid];
++	return ptr ? *ptr : 0;
+ }
+ 
+ static int trace_save_tgid(struct task_struct *tsk)
+ {
++	int *ptr;
++
+ 	/* treat recording of idle task as a success */
+ 	if (!tsk->pid)
+ 		return 1;
+ 
+-	if (unlikely(!tgid_map || tsk->pid > PID_MAX_DEFAULT))
++	ptr = trace_find_tgid_ptr(tsk->pid);
++	if (!ptr)
+ 		return 0;
+ 
+-	tgid_map[tsk->pid] = tsk->tgid;
++	*ptr = tsk->tgid;
+ 	return 1;
+ }
+ 
+@@ -5171,6 +5195,8 @@ int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set)
+ 
+ int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled)
+ {
++	int *map;
++
+ 	if ((mask == TRACE_ITER_RECORD_TGID) ||
+ 	    (mask == TRACE_ITER_RECORD_CMD))
+ 		lockdep_assert_held(&event_mutex);
+@@ -5193,10 +5219,19 @@ int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled)
+ 		trace_event_enable_cmd_record(enabled);
+ 
+ 	if (mask == TRACE_ITER_RECORD_TGID) {
+-		if (!tgid_map)
+-			tgid_map = kvcalloc(PID_MAX_DEFAULT + 1,
+-					   sizeof(*tgid_map),
+-					   GFP_KERNEL);
++		if (!tgid_map) {
++			tgid_map_max = pid_max;
++			map = kvcalloc(tgid_map_max + 1, sizeof(*tgid_map),
++				       GFP_KERNEL);
++
++			/*
++			 * Pairs with smp_load_acquire() in
++			 * trace_find_tgid_ptr() to ensure that if it observes
++			 * the tgid_map we just allocated then it also observes
++			 * the corresponding tgid_map_max value.
++			 */
++			smp_store_release(&tgid_map, map);
++		}
+ 		if (!tgid_map) {
+ 			tr->trace_flags &= ~TRACE_ITER_RECORD_TGID;
+ 			return -ENOMEM;
+@@ -5608,37 +5643,16 @@ static const struct file_operations tracing_readme_fops = {
+ 
+ static void *saved_tgids_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+-	int *ptr = v;
++	int pid = ++(*pos);
+ 
+-	if (*pos || m->count)
+-		ptr++;
+-
+-	(*pos)++;
+-
+-	for (; ptr <= &tgid_map[PID_MAX_DEFAULT]; ptr++) {
+-		if (trace_find_tgid(*ptr))
+-			return ptr;
+-	}
+-
+-	return NULL;
++	return trace_find_tgid_ptr(pid);
+ }
+ 
+ static void *saved_tgids_start(struct seq_file *m, loff_t *pos)
+ {
+-	void *v;
+-	loff_t l = 0;
++	int pid = *pos;
+ 
+-	if (!tgid_map)
+-		return NULL;
+-
+-	v = &tgid_map[0];
+-	while (l <= *pos) {
+-		v = saved_tgids_next(m, v, &l);
+-		if (!v)
+-			return NULL;
+-	}
+-
+-	return v;
++	return trace_find_tgid_ptr(pid);
+ }
+ 
+ static void saved_tgids_stop(struct seq_file *m, void *v)
+@@ -5647,9 +5661,14 @@ static void saved_tgids_stop(struct seq_file *m, void *v)
+ 
+ static int saved_tgids_show(struct seq_file *m, void *v)
+ {
+-	int pid = (int *)v - tgid_map;
++	int *entry = (int *)v;
++	int pid = entry - tgid_map;
++	int tgid = *entry;
++
++	if (tgid == 0)
++		return SEQ_SKIP;
+ 
+-	seq_printf(m, "%d %d\n", pid, trace_find_tgid(pid));
++	seq_printf(m, "%d %d\n", pid, tgid);
+ 	return 0;
+ }
+ 
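
The tgid_map rework sizes the map from pid_max and publishes it with smp_store_release(), paired with smp_load_acquire() in trace_find_tgid_ptr(), so any reader that observes the non-NULL pointer also observes the matching tgid_map_max. The same publish/consume pairing in portable C11 atomics (a single-threaded sketch, not the tracing code):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

static size_t map_max;			/* written before the release store */
static _Atomic(int *) map;		/* published pointer */

static int *find_ptr(size_t idx)
{
	/* Pairs with the release store: a non-NULL map implies map_max is visible. */
	int *m = atomic_load_explicit(&map, memory_order_acquire);

	if (!m || idx > map_max)
		return NULL;
	return &m[idx];
}

static void publish(size_t max)
{
	int *m = calloc(max + 1, sizeof(*m));

	if (!m)
		return;
	map_max = max;	/* plain store, ordered by the release below */
	atomic_store_explicit(&map, m, memory_order_release);
}

int main(void)
{
	publish(1024);
	printf("slot 7 %s\n", find_ptr(7) ? "found" : "missing");
	printf("slot 4096 %s\n", find_ptr(4096) ? "found" : "missing");
	return 0;
}
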
+diff --git a/lib/seq_buf.c b/lib/seq_buf.c
+index 89c26c393bdba..6dafde8513337 100644
+--- a/lib/seq_buf.c
++++ b/lib/seq_buf.c
+@@ -229,8 +229,10 @@ int seq_buf_putmem_hex(struct seq_buf *s, const void *mem,
+ 
+ 	WARN_ON(s->size == 0);
+ 
++	BUILD_BUG_ON(MAX_MEMHEX_BYTES * 2 >= HEX_CHARS);
++
+ 	while (len) {
+-		start_len = min(len, HEX_CHARS - 1);
++		start_len = min(len, MAX_MEMHEX_BYTES);
+ #ifdef __BIG_ENDIAN
+ 		for (i = 0, j = 0; i < start_len; i++) {
+ #else
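
The seq_buf fix caps each chunk at MAX_MEMHEX_BYTES and adds a BUILD_BUG_ON() so the hex expansion can never outgrow the scratch buffer; the invariant is enforced at compile time rather than silently truncated at run time. Plain C11 offers the same guard via _Static_assert, with the sense inverted (BUILD_BUG_ON fires when its condition is true, _Static_assert when its condition is false). Illustrative sizes, not the kernel's:

#include <stdio.h>

#define MAX_MEMHEX_BYTES	8
#define HEX_CHARS		(MAX_MEMHEX_BYTES * 2 + 1)

/* Compile-time check, like BUILD_BUG_ON(): the build fails if this breaks. */
_Static_assert(MAX_MEMHEX_BYTES * 2 < HEX_CHARS,
	       "hex expansion of a chunk must fit in the scratch buffer");

int main(void)
{
	printf("each %d-byte chunk expands to at most %d hex chars\n",
	       MAX_MEMHEX_BYTES, HEX_CHARS - 1);
	return 0;
}
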
+diff --git a/mm/mremap.c b/mm/mremap.c
+index 47c255b60150b..c0d683fd00ad2 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -439,7 +439,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ 			if (!new_pud)
+ 				break;
+ 			if (move_pgt_entry(NORMAL_PUD, vma, old_addr, new_addr,
+-					   old_pud, new_pud, need_rmap_locks))
++					   old_pud, new_pud, true))
+ 				continue;
+ 		}
+ 
+@@ -466,7 +466,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ 			 * moving at the PMD level if possible.
+ 			 */
+ 			if (move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr,
+-					   old_pmd, new_pmd, need_rmap_locks))
++					   old_pmd, new_pmd, true))
+ 				continue;
+ 		}
+ 
+diff --git a/net/bluetooth/cmtp/core.c b/net/bluetooth/cmtp/core.c
+index 07cfa3249f83a..0a2d78e811cf5 100644
+--- a/net/bluetooth/cmtp/core.c
++++ b/net/bluetooth/cmtp/core.c
+@@ -392,6 +392,11 @@ int cmtp_add_connection(struct cmtp_connadd_req *req, struct socket *sock)
+ 	if (!(session->flags & BIT(CMTP_LOOPBACK))) {
+ 		err = cmtp_attach_device(session);
+ 		if (err < 0) {
++			/* Caller will call fput in case of failure, and so
++			 * will cmtp_session kthread.
++			 */
++			get_file(session->sock->file);
++
+ 			atomic_inc(&session->terminate);
+ 			wake_up_interruptible(sk_sleep(session->sock->sk));
+ 			up_write(&cmtp_session_sem);
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 7d71d104fdfda..ded55f54d9c8e 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1721,14 +1721,6 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 
+ 	BT_DBG("%s %p", hdev->name, hdev);
+ 
+-	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
+-	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
+-	    test_bit(HCI_UP, &hdev->flags)) {
+-		/* Execute vendor specific shutdown routine */
+-		if (hdev->shutdown)
+-			hdev->shutdown(hdev);
+-	}
+-
+ 	cancel_delayed_work(&hdev->power_off);
+ 
+ 	hci_request_cancel_all(hdev);
+@@ -1805,6 +1797,14 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 		clear_bit(HCI_INIT, &hdev->flags);
+ 	}
+ 
++	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
++	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
++	    test_bit(HCI_UP, &hdev->flags)) {
++		/* Execute vendor specific shutdown routine */
++		if (hdev->shutdown)
++			hdev->shutdown(hdev);
++	}
++
+ 	/* flush cmd  work */
+ 	flush_work(&hdev->cmd_work);
+ 
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index b077d150ac529..62c99e015609d 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -4404,12 +4404,12 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev,
+ 
+ 	bt_dev_dbg(hdev, "SCO connected with air mode: %02x", ev->air_mode);
+ 
+-	switch (conn->setting & SCO_AIRMODE_MASK) {
+-	case SCO_AIRMODE_CVSD:
++	switch (ev->air_mode) {
++	case 0x02:
+ 		if (hdev->notify)
+ 			hdev->notify(hdev, HCI_NOTIFY_ENABLE_SCO_CVSD);
+ 		break;
+-	case SCO_AIRMODE_TRANSP:
++	case 0x03:
+ 		if (hdev->notify)
+ 			hdev->notify(hdev, HCI_NOTIFY_ENABLE_SCO_TRANSP);
+ 		break;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index b6a88b8256c77..9908aa53a682f 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -6066,7 +6066,7 @@ static inline int l2cap_ecred_conn_rsp(struct l2cap_conn *conn,
+ 	struct l2cap_ecred_conn_rsp *rsp = (void *) data;
+ 	struct hci_conn *hcon = conn->hcon;
+ 	u16 mtu, mps, credits, result;
+-	struct l2cap_chan *chan;
++	struct l2cap_chan *chan, *tmp;
+ 	int err = 0, sec_level;
+ 	int i = 0;
+ 
+@@ -6085,7 +6085,7 @@ static inline int l2cap_ecred_conn_rsp(struct l2cap_conn *conn,
+ 
+ 	cmd_len -= sizeof(*rsp);
+ 
+-	list_for_each_entry(chan, &conn->chan_l, list) {
++	list_for_each_entry_safe(chan, tmp, &conn->chan_l, list) {
+ 		u16 dcid;
+ 
+ 		if (chan->ident != cmd->ident ||
+@@ -6248,7 +6248,7 @@ static inline int l2cap_ecred_reconf_rsp(struct l2cap_conn *conn,
+ 					 struct l2cap_cmd_hdr *cmd, u16 cmd_len,
+ 					 u8 *data)
+ {
+-	struct l2cap_chan *chan;
++	struct l2cap_chan *chan, *tmp;
+ 	struct l2cap_ecred_conn_rsp *rsp = (void *) data;
+ 	u16 result;
+ 
+@@ -6262,7 +6262,7 @@ static inline int l2cap_ecred_reconf_rsp(struct l2cap_conn *conn,
+ 	if (!result)
+ 		return 0;
+ 
+-	list_for_each_entry(chan, &conn->chan_l, list) {
++	list_for_each_entry_safe(chan, tmp, &conn->chan_l, list) {
+ 		if (chan->ident != cmd->ident)
+ 			continue;
+ 
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 023a98f7c9922..470eaabb021f9 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -252,12 +252,15 @@ static const u8 mgmt_status_table[] = {
+ 	MGMT_STATUS_TIMEOUT,		/* Instant Passed */
+ 	MGMT_STATUS_NOT_SUPPORTED,	/* Pairing Not Supported */
+ 	MGMT_STATUS_FAILED,		/* Transaction Collision */
++	MGMT_STATUS_FAILED,		/* Reserved for future use */
+ 	MGMT_STATUS_INVALID_PARAMS,	/* Unacceptable Parameter */
+ 	MGMT_STATUS_REJECTED,		/* QoS Rejected */
+ 	MGMT_STATUS_NOT_SUPPORTED,	/* Classification Not Supported */
+ 	MGMT_STATUS_REJECTED,		/* Insufficient Security */
+ 	MGMT_STATUS_INVALID_PARAMS,	/* Parameter Out Of Range */
++	MGMT_STATUS_FAILED,		/* Reserved for future use */
+ 	MGMT_STATUS_BUSY,		/* Role Switch Pending */
++	MGMT_STATUS_FAILED,		/* Reserved for future use */
+ 	MGMT_STATUS_FAILED,		/* Slot Violation */
+ 	MGMT_STATUS_FAILED,		/* Role Switch Failed */
+ 	MGMT_STATUS_INVALID_PARAMS,	/* EIR Too Large */
+@@ -4058,6 +4061,8 @@ static int get_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
++	memset(&rp, 0, sizeof(rp));
++
+ 	if (cp->addr.type == BDADDR_BREDR) {
+ 		br_params = hci_bdaddr_list_lookup_with_flags(&hdev->whitelist,
+ 							      &cp->addr.bdaddr,
+diff --git a/net/bridge/br_mrp.c b/net/bridge/br_mrp.c
+index cd2b1e424e54e..f7012b7d7ce40 100644
+--- a/net/bridge/br_mrp.c
++++ b/net/bridge/br_mrp.c
+@@ -627,8 +627,7 @@ int br_mrp_set_ring_state(struct net_bridge *br,
+ 	if (!mrp)
+ 		return -EINVAL;
+ 
+-	if (mrp->ring_state == BR_MRP_RING_STATE_CLOSED &&
+-	    state->ring_state != BR_MRP_RING_STATE_CLOSED)
++	if (mrp->ring_state != state->ring_state)
+ 		mrp->ring_transitions++;
+ 
+ 	mrp->ring_state = state->ring_state;
+@@ -715,8 +714,7 @@ int br_mrp_set_in_state(struct net_bridge *br, struct br_mrp_in_state *state)
+ 	if (!mrp)
+ 		return -EINVAL;
+ 
+-	if (mrp->in_state == BR_MRP_IN_STATE_CLOSED &&
+-	    state->in_state != BR_MRP_IN_STATE_CLOSED)
++	if (mrp->in_state != state->in_state)
+ 		mrp->in_transitions++;
+ 
+ 	mrp->in_state = state->in_state;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index ef8cf7619bafa..50531a2d0b20d 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -6520,11 +6520,18 @@ EXPORT_SYMBOL(napi_schedule_prep);
+  * __napi_schedule_irqoff - schedule for receive
+  * @n: entry to schedule
+  *
+- * Variant of __napi_schedule() assuming hard irqs are masked
++ * Variant of __napi_schedule() assuming hard irqs are masked.
++ *
++ * On PREEMPT_RT enabled kernels this maps to __napi_schedule()
++ * because the interrupt disabled assumption might not be true
++ * due to force-threaded interrupts and spinlock substitution.
+  */
+ void __napi_schedule_irqoff(struct napi_struct *n)
+ {
+-	____napi_schedule(this_cpu_ptr(&softnet_data), n);
++	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
++		____napi_schedule(this_cpu_ptr(&softnet_data), n);
++	else
++		__napi_schedule(n);
+ }
+ EXPORT_SYMBOL(__napi_schedule_irqoff);
+ 
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 946888afef880..2003c5ebb4c2e 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1622,6 +1622,13 @@ int sock_getsockopt(struct socket *sock, int level, int optname,
+ 		v.val = sk->sk_bound_dev_if;
+ 		break;
+ 
++	case SO_NETNS_COOKIE:
++		lv = sizeof(u64);
++		if (len != lv)
++			return -EINVAL;
++		v.val64 = sock_net(sk)->net_cookie;
++		break;
++
+ 	default:
+ 		/* We implement the SO_SNDLOWAT etc to not be settable
+ 		 * (1003.1g 7).
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index bb1351c38397f..e31949479305e 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -397,7 +397,8 @@ void hsr_register_frame_in(struct hsr_node *node, struct hsr_port *port,
+ 	 * ensures entries of restarted nodes gets pruned so that they can
+ 	 * re-register and resume communications.
+ 	 */
+-	if (seq_nr_before(sequence_nr, node->seq_out[port->type]))
++	if (!(port->dev->features & NETIF_F_HW_HSR_TAG_RM) &&
++	    seq_nr_before(sequence_nr, node->seq_out[port->type]))
+ 		return;
+ 
+ 	node->time_in[port->type] = jiffies;
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index 752e392083e64..0a57f1892e7e0 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -1066,7 +1066,7 @@ static bool icmp_echo(struct sk_buff *skb)
+ 			if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
+ 					 sizeof(struct in_addr))
+ 				goto send_mal_query;
+-			dev = ip_dev_find(net, iio->ident.addr.ip_addr.ipv4_addr.s_addr);
++			dev = ip_dev_find(net, iio->ident.addr.ip_addr.ipv4_addr);
+ 			break;
+ #if IS_ENABLED(CONFIG_IPV6)
+ 		case ICMP_AFI_IP6:
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index c3efc7d658f6c..8d8a8da3ae7e0 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -1054,7 +1054,7 @@ static int __ip_append_data(struct sock *sk,
+ 			unsigned int datalen;
+ 			unsigned int fraglen;
+ 			unsigned int fraggap;
+-			unsigned int alloclen;
++			unsigned int alloclen, alloc_extra;
+ 			unsigned int pagedlen;
+ 			struct sk_buff *skb_prev;
+ alloc_new_skb:
+@@ -1074,35 +1074,39 @@ alloc_new_skb:
+ 			fraglen = datalen + fragheaderlen;
+ 			pagedlen = 0;
+ 
++			alloc_extra = hh_len + 15;
++			alloc_extra += exthdrlen;
++
++			/* The last fragment gets additional space at tail.
++			 * Note, with MSG_MORE we overallocate on fragments,
++			 * because we have no idea what fragment will be
++			 * the last.
++			 */
++			if (datalen == length + fraggap)
++				alloc_extra += rt->dst.trailer_len;
++
+ 			if ((flags & MSG_MORE) &&
+ 			    !(rt->dst.dev->features&NETIF_F_SG))
+ 				alloclen = mtu;
+-			else if (!paged)
++			else if (!paged &&
++				 (fraglen + alloc_extra < SKB_MAX_ALLOC ||
++				  !(rt->dst.dev->features & NETIF_F_SG)))
+ 				alloclen = fraglen;
+ 			else {
+ 				alloclen = min_t(int, fraglen, MAX_HEADER);
+ 				pagedlen = fraglen - alloclen;
+ 			}
+ 
+-			alloclen += exthdrlen;
+-
+-			/* The last fragment gets additional space at tail.
+-			 * Note, with MSG_MORE we overallocate on fragments,
+-			 * because we have no idea what fragment will be
+-			 * the last.
+-			 */
+-			if (datalen == length + fraggap)
+-				alloclen += rt->dst.trailer_len;
++			alloclen += alloc_extra;
+ 
+ 			if (transhdrlen) {
+-				skb = sock_alloc_send_skb(sk,
+-						alloclen + hh_len + 15,
++				skb = sock_alloc_send_skb(sk, alloclen,
+ 						(flags & MSG_DONTWAIT), &err);
+ 			} else {
+ 				skb = NULL;
+ 				if (refcount_read(&sk->sk_wmem_alloc) + wmem_alloc_delta <=
+ 				    2 * sk->sk_sndbuf)
+-					skb = alloc_skb(alloclen + hh_len + 15,
++					skb = alloc_skb(alloclen,
+ 							sk->sk_allocation);
+ 				if (unlikely(!skb))
+ 					err = -ENOBUFS;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 4cf4dd532d1c6..bc266514ce589 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2816,8 +2816,17 @@ static void tcp_process_loss(struct sock *sk, int flag, int num_dupack,
+ 	*rexmit = REXMIT_LOST;
+ }
+ 
++static bool tcp_force_fast_retransmit(struct sock *sk)
++{
++	struct tcp_sock *tp = tcp_sk(sk);
++
++	return after(tcp_highest_sack_seq(tp),
++		     tp->snd_una + tp->reordering * tp->mss_cache);
++}
++
+ /* Undo during fast recovery after partial ACK. */
+-static bool tcp_try_undo_partial(struct sock *sk, u32 prior_snd_una)
++static bool tcp_try_undo_partial(struct sock *sk, u32 prior_snd_una,
++				 bool *do_lost)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
+ 
+@@ -2842,7 +2851,9 @@ static bool tcp_try_undo_partial(struct sock *sk, u32 prior_snd_una)
+ 		tcp_undo_cwnd_reduction(sk, true);
+ 		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPPARTIALUNDO);
+ 		tcp_try_keep_open(sk);
+-		return true;
++	} else {
++		/* Partial ACK arrived. Force fast retransmit. */
++		*do_lost = tcp_force_fast_retransmit(sk);
+ 	}
+ 	return false;
+ }
+@@ -2866,14 +2877,6 @@ static void tcp_identify_packet_loss(struct sock *sk, int *ack_flag)
+ 	}
+ }
+ 
+-static bool tcp_force_fast_retransmit(struct sock *sk)
+-{
+-	struct tcp_sock *tp = tcp_sk(sk);
+-
+-	return after(tcp_highest_sack_seq(tp),
+-		     tp->snd_una + tp->reordering * tp->mss_cache);
+-}
+-
+ /* Process an event, which can update packets-in-flight not trivially.
+  * Main goal of this function is to calculate new estimate for left_out,
+  * taking into account both packets sitting in receiver's buffer and
+@@ -2943,17 +2946,21 @@ static void tcp_fastretrans_alert(struct sock *sk, const u32 prior_snd_una,
+ 		if (!(flag & FLAG_SND_UNA_ADVANCED)) {
+ 			if (tcp_is_reno(tp))
+ 				tcp_add_reno_sack(sk, num_dupack, ece_ack);
+-		} else {
+-			if (tcp_try_undo_partial(sk, prior_snd_una))
+-				return;
+-			/* Partial ACK arrived. Force fast retransmit. */
+-			do_lost = tcp_force_fast_retransmit(sk);
+-		}
+-		if (tcp_try_undo_dsack(sk)) {
+-			tcp_try_keep_open(sk);
++		} else if (tcp_try_undo_partial(sk, prior_snd_una, &do_lost))
+ 			return;
+-		}
++
++		if (tcp_try_undo_dsack(sk))
++			tcp_try_keep_open(sk);
++
+ 		tcp_identify_packet_loss(sk, ack_flag);
++		if (icsk->icsk_ca_state != TCP_CA_Recovery) {
++			if (!tcp_time_to_recover(sk, flag))
++				return;
++			/* Undo reverts the recovery state. If loss is evident,
++			 * starts a new recovery (e.g. reordering then loss);
++			 */
++			tcp_enter_recovery(sk, ece_ack);
++		}
+ 		break;
+ 	case TCP_CA_Loss:
+ 		tcp_process_loss(sk, flag, num_dupack, rexmit);
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index ff4f9ebcf7f65..497974b4372a7 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1555,7 +1555,7 @@ emsgsize:
+ 			unsigned int datalen;
+ 			unsigned int fraglen;
+ 			unsigned int fraggap;
+-			unsigned int alloclen;
++			unsigned int alloclen, alloc_extra;
+ 			unsigned int pagedlen;
+ alloc_new_skb:
+ 			/* There's no room in the current skb */
+@@ -1582,17 +1582,28 @@ alloc_new_skb:
+ 			fraglen = datalen + fragheaderlen;
+ 			pagedlen = 0;
+ 
++			alloc_extra = hh_len;
++			alloc_extra += dst_exthdrlen;
++			alloc_extra += rt->dst.trailer_len;
++
++			/* We just reserve space for fragment header.
++			 * Note: this may be overallocation if the message
++			 * (without MSG_MORE) fits into the MTU.
++			 */
++			alloc_extra += sizeof(struct frag_hdr);
++
+ 			if ((flags & MSG_MORE) &&
+ 			    !(rt->dst.dev->features&NETIF_F_SG))
+ 				alloclen = mtu;
+-			else if (!paged)
++			else if (!paged &&
++				 (fraglen + alloc_extra < SKB_MAX_ALLOC ||
++				  !(rt->dst.dev->features & NETIF_F_SG)))
+ 				alloclen = fraglen;
+ 			else {
+ 				alloclen = min_t(int, fraglen, MAX_HEADER);
+ 				pagedlen = fraglen - alloclen;
+ 			}
+-
+-			alloclen += dst_exthdrlen;
++			alloclen += alloc_extra;
+ 
+ 			if (datalen != length + fraggap) {
+ 				/*
+@@ -1602,30 +1613,21 @@ alloc_new_skb:
+ 				datalen += rt->dst.trailer_len;
+ 			}
+ 
+-			alloclen += rt->dst.trailer_len;
+ 			fraglen = datalen + fragheaderlen;
+ 
+-			/*
+-			 * We just reserve space for fragment header.
+-			 * Note: this may be overallocation if the message
+-			 * (without MSG_MORE) fits into the MTU.
+-			 */
+-			alloclen += sizeof(struct frag_hdr);
+-
+ 			copy = datalen - transhdrlen - fraggap - pagedlen;
+ 			if (copy < 0) {
+ 				err = -EINVAL;
+ 				goto error;
+ 			}
+ 			if (transhdrlen) {
+-				skb = sock_alloc_send_skb(sk,
+-						alloclen + hh_len,
++				skb = sock_alloc_send_skb(sk, alloclen,
+ 						(flags & MSG_DONTWAIT), &err);
+ 			} else {
+ 				skb = NULL;
+ 				if (refcount_read(&sk->sk_wmem_alloc) + wmem_alloc_delta <=
+ 				    2 * sk->sk_sndbuf)
+-					skb = alloc_skb(alloclen + hh_len,
++					skb = alloc_skb(alloclen,
+ 							sk->sk_allocation);
+ 				if (unlikely(!skb))
+ 					err = -ENOBUFS;
+diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c
+index af36acc1a6448..2880dc7d9a491 100644
+--- a/net/ipv6/output_core.c
++++ b/net/ipv6/output_core.c
+@@ -15,29 +15,11 @@ static u32 __ipv6_select_ident(struct net *net,
+ 			       const struct in6_addr *dst,
+ 			       const struct in6_addr *src)
+ {
+-	const struct {
+-		struct in6_addr dst;
+-		struct in6_addr src;
+-	} __aligned(SIPHASH_ALIGNMENT) combined = {
+-		.dst = *dst,
+-		.src = *src,
+-	};
+-	u32 hash, id;
+-
+-	/* Note the following code is not safe, but this is okay. */
+-	if (unlikely(siphash_key_is_zero(&net->ipv4.ip_id_key)))
+-		get_random_bytes(&net->ipv4.ip_id_key,
+-				 sizeof(net->ipv4.ip_id_key));
+-
+-	hash = siphash(&combined, sizeof(combined), &net->ipv4.ip_id_key);
+-
+-	/* Treat id of 0 as unset and if we get 0 back from ip_idents_reserve,
+-	 * set the hight order instead thus minimizing possible future
+-	 * collisions.
+-	 */
+-	id = ip_idents_reserve(hash, 1);
+-	if (unlikely(!id))
+-		id = 1 << 31;
++	u32 id;
++
++	do {
++		id = prandom_u32();
++	} while (!id);
+ 
+ 	return id;
+ }
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index f33a3acd7f969..2481bfdfafd09 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -257,14 +257,13 @@ static void ieee80211_restart_work(struct work_struct *work)
+ 	/* wait for scan work complete */
+ 	flush_workqueue(local->workqueue);
+ 	flush_work(&local->sched_scan_stopped_work);
++	flush_work(&local->radar_detected_work);
++
++	rtnl_lock();
+ 
+ 	WARN(test_bit(SCAN_HW_SCANNING, &local->scanning),
+ 	     "%s called with hardware scan in progress\n", __func__);
+ 
+-	flush_work(&local->radar_detected_work);
+-	/* we might do interface manipulations, so need both */
+-	rtnl_lock();
+-	wiphy_lock(local->hw.wiphy);
+ 	list_for_each_entry(sdata, &local->interfaces, list) {
+ 		/*
+ 		 * XXX: there may be more work for other vif types and even
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index 13250cadb4202..e18c3855f6161 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -2088,10 +2088,9 @@ static struct ieee80211_sta_rx_stats *
+ sta_get_last_rx_stats(struct sta_info *sta)
+ {
+ 	struct ieee80211_sta_rx_stats *stats = &sta->rx_stats;
+-	struct ieee80211_local *local = sta->local;
+ 	int cpu;
+ 
+-	if (!ieee80211_hw_check(&local->hw, USES_RSS))
++	if (!sta->pcpu_rx_stats)
+ 		return stats;
+ 
+ 	for_each_possible_cpu(cpu) {
+@@ -2191,9 +2190,7 @@ static void sta_set_tidstats(struct sta_info *sta,
+ 	int cpu;
+ 
+ 	if (!(tidstats->filled & BIT(NL80211_TID_STATS_RX_MSDU))) {
+-		if (!ieee80211_hw_check(&local->hw, USES_RSS))
+-			tidstats->rx_msdu +=
+-				sta_get_tidstats_msdu(&sta->rx_stats, tid);
++		tidstats->rx_msdu += sta_get_tidstats_msdu(&sta->rx_stats, tid);
+ 
+ 		if (sta->pcpu_rx_stats) {
+ 			for_each_possible_cpu(cpu) {
+@@ -2272,7 +2269,6 @@ void sta_set_sinfo(struct sta_info *sta, struct station_info *sinfo,
+ 		sinfo->rx_beacon = sdata->u.mgd.count_beacon_signal;
+ 
+ 	drv_sta_statistics(local, sdata, &sta->sta, sinfo);
+-
+ 	sinfo->filled |= BIT_ULL(NL80211_STA_INFO_INACTIVE_TIME) |
+ 			 BIT_ULL(NL80211_STA_INFO_STA_FLAGS) |
+ 			 BIT_ULL(NL80211_STA_INFO_BSS_PARAM) |
+@@ -2307,8 +2303,7 @@ void sta_set_sinfo(struct sta_info *sta, struct station_info *sinfo,
+ 
+ 	if (!(sinfo->filled & (BIT_ULL(NL80211_STA_INFO_RX_BYTES64) |
+ 			       BIT_ULL(NL80211_STA_INFO_RX_BYTES)))) {
+-		if (!ieee80211_hw_check(&local->hw, USES_RSS))
+-			sinfo->rx_bytes += sta_get_stats_bytes(&sta->rx_stats);
++		sinfo->rx_bytes += sta_get_stats_bytes(&sta->rx_stats);
+ 
+ 		if (sta->pcpu_rx_stats) {
+ 			for_each_possible_cpu(cpu) {
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index f6d5755d669eb..d17a66aab8eea 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -381,7 +381,8 @@ static int tcf_del_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb,
+ 	}
+ 	mutex_unlock(&idrinfo->lock);
+ 
+-	if (nla_put_u32(skb, TCA_FCNT, n_i))
++	ret = nla_put_u32(skb, TCA_FCNT, n_i);
++	if (ret)
+ 		goto nla_put_failure;
+ 	nla_nest_end(skb, nest);
+ 
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 279f9e2a2319a..d73b5c5514a9f 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1531,7 +1531,7 @@ static inline int __tcf_classify(struct sk_buff *skb,
+ 				 u32 *last_executed_chain)
+ {
+ #ifdef CONFIG_NET_CLS_ACT
+-	const int max_reclassify_loop = 4;
++	const int max_reclassify_loop = 16;
+ 	const struct tcf_proto *first_tp;
+ 	int limit = 0;
+ 
+diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
+index 53e5ed79f63f3..59e653b528b1f 100644
+--- a/net/sctp/bind_addr.c
++++ b/net/sctp/bind_addr.c
+@@ -270,22 +270,19 @@ int sctp_raw_to_bind_addrs(struct sctp_bind_addr *bp, __u8 *raw_addr_list,
+ 		rawaddr = (union sctp_addr_param *)raw_addr_list;
+ 
+ 		af = sctp_get_af_specific(param_type2af(param->type));
+-		if (unlikely(!af)) {
++		if (unlikely(!af) ||
++		    !af->from_addr_param(&addr, rawaddr, htons(port), 0)) {
+ 			retval = -EINVAL;
+-			sctp_bind_addr_clean(bp);
+-			break;
++			goto out_err;
+ 		}
+ 
+-		af->from_addr_param(&addr, rawaddr, htons(port), 0);
+ 		if (sctp_bind_addr_state(bp, &addr) != -1)
+ 			goto next;
+ 		retval = sctp_add_bind_addr(bp, &addr, sizeof(addr),
+ 					    SCTP_ADDR_SRC, gfp);
+-		if (retval) {
++		if (retval)
+ 			/* Can't finish building the list, clean up. */
+-			sctp_bind_addr_clean(bp);
+-			break;
+-		}
++			goto out_err;
+ 
+ next:
+ 		len = ntohs(param->length);
+@@ -294,6 +291,12 @@ next:
+ 	}
+ 
+ 	return retval;
++
++out_err:
++	if (retval)
++		sctp_bind_addr_clean(bp);
++
++	return retval;
+ }
+ 
+ /********************************************************************
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index d508f6f3dd08a..f72bff93745c4 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -1131,7 +1131,8 @@ static struct sctp_association *__sctp_rcv_init_lookup(struct net *net,
+ 		if (!af)
+ 			continue;
+ 
+-		af->from_addr_param(paddr, params.addr, sh->source, 0);
++		if (!af->from_addr_param(paddr, params.addr, sh->source, 0))
++			continue;
+ 
+ 		asoc = __sctp_lookup_association(net, laddr, paddr, transportp);
+ 		if (asoc)
+@@ -1174,7 +1175,8 @@ static struct sctp_association *__sctp_rcv_asconf_lookup(
+ 	if (unlikely(!af))
+ 		return NULL;
+ 
+-	af->from_addr_param(&paddr, param, peer_port, 0);
++	if (af->from_addr_param(&paddr, param, peer_port, 0))
++		return NULL;
+ 
+ 	return __sctp_lookup_association(net, laddr, &paddr, transportp);
+ }
+@@ -1245,7 +1247,7 @@ static struct sctp_association *__sctp_rcv_walk_lookup(struct net *net,
+ 
+ 		ch = (struct sctp_chunkhdr *)ch_end;
+ 		chunk_num++;
+-	} while (ch_end < skb_tail_pointer(skb));
++	} while (ch_end + sizeof(*ch) < skb_tail_pointer(skb));
+ 
+ 	return asoc;
+ }
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index bd08807c9e447..5c6f5ced9cfa6 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -551,15 +551,20 @@ static void sctp_v6_to_sk_daddr(union sctp_addr *addr, struct sock *sk)
+ }
+ 
+ /* Initialize a sctp_addr from an address parameter. */
+-static void sctp_v6_from_addr_param(union sctp_addr *addr,
++static bool sctp_v6_from_addr_param(union sctp_addr *addr,
+ 				    union sctp_addr_param *param,
+ 				    __be16 port, int iif)
+ {
++	if (ntohs(param->v6.param_hdr.length) < sizeof(struct sctp_ipv6addr_param))
++		return false;
++
+ 	addr->v6.sin6_family = AF_INET6;
+ 	addr->v6.sin6_port = port;
+ 	addr->v6.sin6_flowinfo = 0; /* BUG */
+ 	addr->v6.sin6_addr = param->v6.addr;
+ 	addr->v6.sin6_scope_id = iif;
++
++	return true;
+ }
+ 
+ /* Initialize an address parameter from a sctp_addr and return the length
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 6f2bbfeec3a4c..25192b378e2ec 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -254,14 +254,19 @@ static void sctp_v4_to_sk_daddr(union sctp_addr *addr, struct sock *sk)
+ }
+ 
+ /* Initialize a sctp_addr from an address parameter. */
+-static void sctp_v4_from_addr_param(union sctp_addr *addr,
++static bool sctp_v4_from_addr_param(union sctp_addr *addr,
+ 				    union sctp_addr_param *param,
+ 				    __be16 port, int iif)
+ {
++	if (ntohs(param->v4.param_hdr.length) < sizeof(struct sctp_ipv4addr_param))
++		return false;
++
+ 	addr->v4.sin_family = AF_INET;
+ 	addr->v4.sin_port = port;
+ 	addr->v4.sin_addr.s_addr = param->v4.addr.s_addr;
+ 	memset(addr->v4.sin_zero, 0, sizeof(addr->v4.sin_zero));
++
++	return true;
+ }
+ 
+ /* Initialize an address parameter from a sctp_addr and return the length
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index 5b44d228b6cac..f33a870b483da 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -2346,11 +2346,13 @@ int sctp_process_init(struct sctp_association *asoc, struct sctp_chunk *chunk,
+ 
+ 	/* Process the initialization parameters.  */
+ 	sctp_walk_params(param, peer_init, init_hdr.params) {
+-		if (!src_match && (param.p->type == SCTP_PARAM_IPV4_ADDRESS ||
+-		    param.p->type == SCTP_PARAM_IPV6_ADDRESS)) {
++		if (!src_match &&
++		    (param.p->type == SCTP_PARAM_IPV4_ADDRESS ||
++		     param.p->type == SCTP_PARAM_IPV6_ADDRESS)) {
+ 			af = sctp_get_af_specific(param_type2af(param.p->type));
+-			af->from_addr_param(&addr, param.addr,
+-					    chunk->sctp_hdr->source, 0);
++			if (!af->from_addr_param(&addr, param.addr,
++						 chunk->sctp_hdr->source, 0))
++				continue;
+ 			if (sctp_cmp_addr_exact(sctp_source(chunk), &addr))
+ 				src_match = 1;
+ 		}
+@@ -2531,7 +2533,8 @@ static int sctp_process_param(struct sctp_association *asoc,
+ 			break;
+ do_addr_param:
+ 		af = sctp_get_af_specific(param_type2af(param.p->type));
+-		af->from_addr_param(&addr, param.addr, htons(asoc->peer.port), 0);
++		if (!af->from_addr_param(&addr, param.addr, htons(asoc->peer.port), 0))
++			break;
+ 		scope = sctp_scope(peer_addr);
+ 		if (sctp_in_scope(net, &addr, scope))
+ 			if (!sctp_assoc_add_peer(asoc, &addr, gfp, SCTP_UNCONFIRMED))
+@@ -2632,15 +2635,13 @@ do_addr_param:
+ 		addr_param = param.v + sizeof(struct sctp_addip_param);
+ 
+ 		af = sctp_get_af_specific(param_type2af(addr_param->p.type));
+-		if (af == NULL)
++		if (!af)
+ 			break;
+ 
+-		af->from_addr_param(&addr, addr_param,
+-				    htons(asoc->peer.port), 0);
++		if (!af->from_addr_param(&addr, addr_param,
++					 htons(asoc->peer.port), 0))
++			break;
+ 
+-		/* if the address is invalid, we can't process it.
+-		 * XXX: see spec for what to do.
+-		 */
+ 		if (!af->addr_valid(&addr, NULL, NULL))
+ 			break;
+ 
+@@ -3054,7 +3055,8 @@ static __be16 sctp_process_asconf_param(struct sctp_association *asoc,
+ 	if (unlikely(!af))
+ 		return SCTP_ERROR_DNS_FAILED;
+ 
+-	af->from_addr_param(&addr, addr_param, htons(asoc->peer.port), 0);
++	if (!af->from_addr_param(&addr, addr_param, htons(asoc->peer.port), 0))
++		return SCTP_ERROR_DNS_FAILED;
+ 
+ 	/* ADDIP 4.2.1  This parameter MUST NOT contain a broadcast
+ 	 * or multicast address.
+@@ -3331,7 +3333,8 @@ static void sctp_asconf_param_success(struct sctp_association *asoc,
+ 
+ 	/* We have checked the packet before, so we do not check again.	*/
+ 	af = sctp_get_af_specific(param_type2af(addr_param->p.type));
+-	af->from_addr_param(&addr, addr_param, htons(bp->port), 0);
++	if (!af->from_addr_param(&addr, addr_param, htons(bp->port), 0))
++		return;
+ 
+ 	switch (asconf_param->param_hdr.type) {
+ 	case SCTP_PARAM_ADD_IP:
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 92a72f0e0d942..ae11311807fdc 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1369,7 +1369,7 @@ static int vsock_stream_connect(struct socket *sock, struct sockaddr *addr,
+ 
+ 		if (signal_pending(current)) {
+ 			err = sock_intr_errno(timeout);
+-			sk->sk_state = TCP_CLOSE;
++			sk->sk_state = sk->sk_state == TCP_ESTABLISHED ? TCP_CLOSING : TCP_CLOSE;
+ 			sock->state = SS_UNCONNECTED;
+ 			vsock_transport_cancel_pkt(vsk);
+ 			goto out_wait;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index fc9286afe3c99..912977bf3ec80 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -4781,11 +4781,10 @@ static int nl80211_parse_tx_bitrate_mask(struct genl_info *info,
+ 		       sband->ht_cap.mcs.rx_mask,
+ 		       sizeof(mask->control[i].ht_mcs));
+ 
+-		if (!sband->vht_cap.vht_supported)
+-			continue;
+-
+-		vht_tx_mcs_map = le16_to_cpu(sband->vht_cap.vht_mcs.tx_mcs_map);
+-		vht_build_mcs_mask(vht_tx_mcs_map, mask->control[i].vht_mcs);
++		if (sband->vht_cap.vht_supported) {
++			vht_tx_mcs_map = le16_to_cpu(sband->vht_cap.vht_mcs.tx_mcs_map);
++			vht_build_mcs_mask(vht_tx_mcs_map, mask->control[i].vht_mcs);
++		}
+ 
+ 		he_cap = ieee80211_get_he_iftype_cap(sband, wdev->iftype);
+ 		if (!he_cap)
+diff --git a/net/wireless/wext-spy.c b/net/wireless/wext-spy.c
+index 33bef22e44e95..b379a03716539 100644
+--- a/net/wireless/wext-spy.c
++++ b/net/wireless/wext-spy.c
+@@ -120,8 +120,8 @@ int iw_handler_set_thrspy(struct net_device *	dev,
+ 		return -EOPNOTSUPP;
+ 
+ 	/* Just do it */
+-	memcpy(&(spydata->spy_thr_low), &(threshold->low),
+-	       2 * sizeof(struct iw_quality));
++	spydata->spy_thr_low = threshold->low;
++	spydata->spy_thr_high = threshold->high;
+ 
+ 	/* Clear flag */
+ 	memset(spydata->spy_thr_under, '\0', sizeof(spydata->spy_thr_under));
+@@ -147,8 +147,8 @@ int iw_handler_get_thrspy(struct net_device *	dev,
+ 		return -EOPNOTSUPP;
+ 
+ 	/* Just do it */
+-	memcpy(&(threshold->low), &(spydata->spy_thr_low),
+-	       2 * sizeof(struct iw_quality));
++	threshold->low = spydata->spy_thr_low;
++	threshold->high = spydata->spy_thr_high;
+ 
+ 	return 0;
+ }
+@@ -173,10 +173,10 @@ static void iw_send_thrspy_event(struct net_device *	dev,
+ 	memcpy(threshold.addr.sa_data, address, ETH_ALEN);
+ 	threshold.addr.sa_family = ARPHRD_ETHER;
+ 	/* Copy stats */
+-	memcpy(&(threshold.qual), wstats, sizeof(struct iw_quality));
++	threshold.qual = *wstats;
+ 	/* Copy also thresholds */
+-	memcpy(&(threshold.low), &(spydata->spy_thr_low),
+-	       2 * sizeof(struct iw_quality));
++	threshold.low = spydata->spy_thr_low;
++	threshold.high = spydata->spy_thr_high;
+ 
+ 	/* Send event to user space */
+ 	wireless_send_event(dev, SIOCGIWTHRSPY, &wrqu, (char *) &threshold);
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index f0aecee4d5392..b47d613409b70 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -580,6 +580,20 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
+ 
+ 	copy_from_user_state(x, p);
+ 
++	if (attrs[XFRMA_ENCAP]) {
++		x->encap = kmemdup(nla_data(attrs[XFRMA_ENCAP]),
++				   sizeof(*x->encap), GFP_KERNEL);
++		if (x->encap == NULL)
++			goto error;
++	}
++
++	if (attrs[XFRMA_COADDR]) {
++		x->coaddr = kmemdup(nla_data(attrs[XFRMA_COADDR]),
++				    sizeof(*x->coaddr), GFP_KERNEL);
++		if (x->coaddr == NULL)
++			goto error;
++	}
++
+ 	if (attrs[XFRMA_SA_EXTRA_FLAGS])
+ 		x->props.extra_flags = nla_get_u32(attrs[XFRMA_SA_EXTRA_FLAGS]);
+ 
+@@ -600,23 +614,9 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
+ 				   attrs[XFRMA_ALG_COMP])))
+ 		goto error;
+ 
+-	if (attrs[XFRMA_ENCAP]) {
+-		x->encap = kmemdup(nla_data(attrs[XFRMA_ENCAP]),
+-				   sizeof(*x->encap), GFP_KERNEL);
+-		if (x->encap == NULL)
+-			goto error;
+-	}
+-
+ 	if (attrs[XFRMA_TFCPAD])
+ 		x->tfcpad = nla_get_u32(attrs[XFRMA_TFCPAD]);
+ 
+-	if (attrs[XFRMA_COADDR]) {
+-		x->coaddr = kmemdup(nla_data(attrs[XFRMA_COADDR]),
+-				    sizeof(*x->coaddr), GFP_KERNEL);
+-		if (x->coaddr == NULL)
+-			goto error;
+-	}
+-
+ 	xfrm_mark_get(attrs, &x->mark);
+ 
+ 	xfrm_smark_init(attrs, &x->props.smark);
+diff --git a/security/selinux/avc.c b/security/selinux/avc.c
+index ad451cf9375e4..a2dc83228daf4 100644
+--- a/security/selinux/avc.c
++++ b/security/selinux/avc.c
+@@ -297,26 +297,27 @@ static struct avc_xperms_decision_node
+ 	struct avc_xperms_decision_node *xpd_node;
+ 	struct extended_perms_decision *xpd;
+ 
+-	xpd_node = kmem_cache_zalloc(avc_xperms_decision_cachep, GFP_NOWAIT);
++	xpd_node = kmem_cache_zalloc(avc_xperms_decision_cachep,
++				     GFP_NOWAIT | __GFP_NOWARN);
+ 	if (!xpd_node)
+ 		return NULL;
+ 
+ 	xpd = &xpd_node->xpd;
+ 	if (which & XPERMS_ALLOWED) {
+ 		xpd->allowed = kmem_cache_zalloc(avc_xperms_data_cachep,
+-						GFP_NOWAIT);
++						GFP_NOWAIT | __GFP_NOWARN);
+ 		if (!xpd->allowed)
+ 			goto error;
+ 	}
+ 	if (which & XPERMS_AUDITALLOW) {
+ 		xpd->auditallow = kmem_cache_zalloc(avc_xperms_data_cachep,
+-						GFP_NOWAIT);
++						GFP_NOWAIT | __GFP_NOWARN);
+ 		if (!xpd->auditallow)
+ 			goto error;
+ 	}
+ 	if (which & XPERMS_DONTAUDIT) {
+ 		xpd->dontaudit = kmem_cache_zalloc(avc_xperms_data_cachep,
+-						GFP_NOWAIT);
++						GFP_NOWAIT | __GFP_NOWARN);
+ 		if (!xpd->dontaudit)
+ 			goto error;
+ 	}
+@@ -344,7 +345,7 @@ static struct avc_xperms_node *avc_xperms_alloc(void)
+ {
+ 	struct avc_xperms_node *xp_node;
+ 
+-	xp_node = kmem_cache_zalloc(avc_xperms_cachep, GFP_NOWAIT);
++	xp_node = kmem_cache_zalloc(avc_xperms_cachep, GFP_NOWAIT | __GFP_NOWARN);
+ 	if (!xp_node)
+ 		return xp_node;
+ 	INIT_LIST_HEAD(&xp_node->xpd_head);
+@@ -500,7 +501,7 @@ static struct avc_node *avc_alloc_node(struct selinux_avc *avc)
+ {
+ 	struct avc_node *node;
+ 
+-	node = kmem_cache_zalloc(avc_node_cachep, GFP_NOWAIT);
++	node = kmem_cache_zalloc(avc_node_cachep, GFP_NOWAIT | __GFP_NOWARN);
+ 	if (!node)
+ 		goto out;
+ 
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index 22ded2c26089c..1ad7d0d1ea62e 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -855,6 +855,8 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ 	if (format == SMK_FIXED24_FMT &&
+ 	    (count < SMK_CIPSOMIN || count > SMK_CIPSOMAX))
+ 		return -EINVAL;
++	if (count > PAGE_SIZE)
++		return -EINVAL;
+ 
+ 	data = memdup_user_nul(buf, count);
+ 	if (IS_ERR(data))
+diff --git a/sound/soc/tegra/tegra_alc5632.c b/sound/soc/tegra/tegra_alc5632.c
+index 0a0efd24e4b0f..81ea6ceba689a 100644
+--- a/sound/soc/tegra/tegra_alc5632.c
++++ b/sound/soc/tegra/tegra_alc5632.c
+@@ -139,6 +139,7 @@ static struct snd_soc_dai_link tegra_alc5632_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_alc5632 = {
+ 	.name = "tegra-alc5632",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_alc5632_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_max98090.c b/sound/soc/tegra/tegra_max98090.c
+index 00c19704057b6..5a649810c0c87 100644
+--- a/sound/soc/tegra/tegra_max98090.c
++++ b/sound/soc/tegra/tegra_max98090.c
+@@ -182,6 +182,7 @@ static struct snd_soc_dai_link tegra_max98090_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_max98090 = {
+ 	.name = "tegra-max98090",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_max98090_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_rt5640.c b/sound/soc/tegra/tegra_rt5640.c
+index 9afba37a3b086..3344f16258bec 100644
+--- a/sound/soc/tegra/tegra_rt5640.c
++++ b/sound/soc/tegra/tegra_rt5640.c
+@@ -132,6 +132,7 @@ static struct snd_soc_dai_link tegra_rt5640_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_rt5640 = {
+ 	.name = "tegra-rt5640",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_rt5640_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_rt5677.c b/sound/soc/tegra/tegra_rt5677.c
+index d30f8b6deda4b..0f03e97d93558 100644
+--- a/sound/soc/tegra/tegra_rt5677.c
++++ b/sound/soc/tegra/tegra_rt5677.c
+@@ -175,6 +175,7 @@ static struct snd_soc_dai_link tegra_rt5677_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_rt5677 = {
+ 	.name = "tegra-rt5677",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_rt5677_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_sgtl5000.c b/sound/soc/tegra/tegra_sgtl5000.c
+index 885332170c77b..ef6a553e0b7da 100644
+--- a/sound/soc/tegra/tegra_sgtl5000.c
++++ b/sound/soc/tegra/tegra_sgtl5000.c
+@@ -97,6 +97,7 @@ static struct snd_soc_dai_link tegra_sgtl5000_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_sgtl5000 = {
+ 	.name = "tegra-sgtl5000",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_sgtl5000_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_wm8753.c b/sound/soc/tegra/tegra_wm8753.c
+index efd7938866895..27089077f2ea4 100644
+--- a/sound/soc/tegra/tegra_wm8753.c
++++ b/sound/soc/tegra/tegra_wm8753.c
+@@ -101,6 +101,7 @@ static struct snd_soc_dai_link tegra_wm8753_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_wm8753 = {
+ 	.name = "tegra-wm8753",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_wm8753_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_wm8903.c b/sound/soc/tegra/tegra_wm8903.c
+index e4863fa37b0c6..f219c26d66a31 100644
+--- a/sound/soc/tegra/tegra_wm8903.c
++++ b/sound/soc/tegra/tegra_wm8903.c
+@@ -235,6 +235,7 @@ static struct snd_soc_dai_link tegra_wm8903_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_wm8903 = {
+ 	.name = "tegra-wm8903",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_wm8903_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/tegra_wm9712.c b/sound/soc/tegra/tegra_wm9712.c
+index 4f09a178049d3..c66da161c85aa 100644
+--- a/sound/soc/tegra/tegra_wm9712.c
++++ b/sound/soc/tegra/tegra_wm9712.c
+@@ -54,6 +54,7 @@ static struct snd_soc_dai_link tegra_wm9712_dai = {
+ 
+ static struct snd_soc_card snd_soc_tegra_wm9712 = {
+ 	.name = "tegra-wm9712",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &tegra_wm9712_dai,
+ 	.num_links = 1,
+diff --git a/sound/soc/tegra/trimslice.c b/sound/soc/tegra/trimslice.c
+index 6c1cc3d0ac336..cb4c8f72e4e67 100644
+--- a/sound/soc/tegra/trimslice.c
++++ b/sound/soc/tegra/trimslice.c
+@@ -94,6 +94,7 @@ static struct snd_soc_dai_link trimslice_tlv320aic23_dai = {
+ 
+ static struct snd_soc_card snd_soc_trimslice = {
+ 	.name = "tegra-trimslice",
++	.driver_name = "tegra",
+ 	.owner = THIS_MODULE,
+ 	.dai_link = &trimslice_tlv320aic23_dai,
+ 	.num_links = 1,
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_drops.sh b/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_drops.sh
+index 4029833f7e271..160891dcb4bcb 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_drops.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_drops.sh
+@@ -109,6 +109,9 @@ router_destroy()
+ 	__addr_add_del $rp1 del 192.0.2.2/24 2001:db8:1::2/64
+ 
+ 	tc qdisc del dev $rp2 clsact
++
++	ip link set dev $rp2 down
++	ip link set dev $rp1 down
+ }
+ 
+ setup_prepare()
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_exceptions.sh b/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_exceptions.sh
+index 42d44e27802cd..190c1b6b5365a 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_exceptions.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_exceptions.sh
+@@ -111,6 +111,9 @@ router_destroy()
+ 	__addr_add_del $rp1 del 192.0.2.2/24 2001:db8:1::2/64
+ 
+ 	tc qdisc del dev $rp2 clsact
++
++	ip link set dev $rp2 down
++	ip link set dev $rp1 down
+ }
+ 
+ setup_prepare()
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_bridge.sh b/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_bridge.sh
+index 5cbff8038f84c..28a570006d4d9 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_bridge.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_bridge.sh
+@@ -93,7 +93,9 @@ switch_destroy()
+ 	lldptool -T -i $swp1 -V APP -d $(dscp_map 10) >/dev/null
+ 	lldpad_app_wait_del
+ 
++	ip link set dev $swp2 down
+ 	ip link set dev $swp2 nomaster
++	ip link set dev $swp1 down
+ 	ip link set dev $swp1 nomaster
+ 	ip link del dev br1
+ }
+diff --git a/tools/testing/selftests/lkdtm/tests.txt b/tools/testing/selftests/lkdtm/tests.txt
+index 11ef159be0fd4..a5fce7fd4520d 100644
+--- a/tools/testing/selftests/lkdtm/tests.txt
++++ b/tools/testing/selftests/lkdtm/tests.txt
+@@ -11,7 +11,7 @@ CORRUPT_LIST_ADD list_add corruption
+ CORRUPT_LIST_DEL list_del corruption
+ STACK_GUARD_PAGE_LEADING
+ STACK_GUARD_PAGE_TRAILING
+-UNSET_SMEP CR4 bits went missing
++UNSET_SMEP pinned CR4 bits changed:
+ DOUBLE_FAULT
+ CORRUPT_PAC
+ UNALIGNED_LOAD_STORE_WRITE
+diff --git a/tools/testing/selftests/net/forwarding/pedit_dsfield.sh b/tools/testing/selftests/net/forwarding/pedit_dsfield.sh
+index 55eeacf592411..64fbd211d907b 100755
+--- a/tools/testing/selftests/net/forwarding/pedit_dsfield.sh
++++ b/tools/testing/selftests/net/forwarding/pedit_dsfield.sh
+@@ -75,7 +75,9 @@ switch_destroy()
+ 	tc qdisc del dev $swp2 clsact
+ 	tc qdisc del dev $swp1 clsact
+ 
++	ip link set dev $swp2 down
+ 	ip link set dev $swp2 nomaster
++	ip link set dev $swp1 down
+ 	ip link set dev $swp1 nomaster
+ 	ip link del dev br1
+ }
+diff --git a/tools/testing/selftests/net/forwarding/pedit_l4port.sh b/tools/testing/selftests/net/forwarding/pedit_l4port.sh
+index 5f20d289ee43c..10e594c551175 100755
+--- a/tools/testing/selftests/net/forwarding/pedit_l4port.sh
++++ b/tools/testing/selftests/net/forwarding/pedit_l4port.sh
+@@ -71,7 +71,9 @@ switch_destroy()
+ 	tc qdisc del dev $swp2 clsact
+ 	tc qdisc del dev $swp1 clsact
+ 
++	ip link set dev $swp2 down
+ 	ip link set dev $swp2 nomaster
++	ip link set dev $swp1 down
+ 	ip link set dev $swp1 nomaster
+ 	ip link del dev br1
+ }
+diff --git a/tools/testing/selftests/net/forwarding/skbedit_priority.sh b/tools/testing/selftests/net/forwarding/skbedit_priority.sh
+index e3bd8a6bb8b40..bde11dc27873c 100755
+--- a/tools/testing/selftests/net/forwarding/skbedit_priority.sh
++++ b/tools/testing/selftests/net/forwarding/skbedit_priority.sh
+@@ -72,7 +72,9 @@ switch_destroy()
+ 	tc qdisc del dev $swp2 clsact
+ 	tc qdisc del dev $swp1 clsact
+ 
++	ip link set dev $swp2 down
+ 	ip link set dev $swp2 nomaster
++	ip link set dev $swp1 down
+ 	ip link set dev $swp1 nomaster
+ 	ip link del dev br1
+ }



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-20 15:51 Alice Ferrazzi
  0 siblings, 0 replies; 42+ messages in thread
From: Alice Ferrazzi @ 2021-07-20 15:51 UTC (permalink / raw
  To: gentoo-commits

commit:     fe3dea5f817d7c072d6d7031b47dc6edfe273af0
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 20 15:50:24 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Tue Jul 20 15:50:43 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fe3dea5f

Linux patch 5.13.4

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README             |     4 +
 1003_linux-5.13.4.patch | 18727 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 18731 insertions(+)

diff --git a/0000_README b/0000_README
index 13b88ab..1873ca7 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-5.13.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.3
 
+Patch:  1003_linux-5.13.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-5.13.4.patch b/1003_linux-5.13.4.patch
new file mode 100644
index 0000000..7db7f26
--- /dev/null
+++ b/1003_linux-5.13.4.patch
@@ -0,0 +1,18727 @@
+diff --git a/Documentation/devicetree/bindings/i2c/i2c-at91.txt b/Documentation/devicetree/bindings/i2c/i2c-at91.txt
+index 96c914e048f59..2015f50aed0f9 100644
+--- a/Documentation/devicetree/bindings/i2c/i2c-at91.txt
++++ b/Documentation/devicetree/bindings/i2c/i2c-at91.txt
+@@ -73,7 +73,7 @@ i2c0: i2c@f8034600 {
+ 	pinctrl-0 = <&pinctrl_i2c0>;
+ 	pinctrl-1 = <&pinctrl_i2c0_gpio>;
+ 	sda-gpios = <&pioA 30 GPIO_ACTIVE_HIGH>;
+-	scl-gpios = <&pioA 31 GPIO_ACTIVE_HIGH>;
++	scl-gpios = <&pioA 31 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 
+ 	wm8731: wm8731@1a {
+ 		compatible = "wm8731";
+diff --git a/Documentation/devicetree/bindings/power/supply/maxim,max17040.yaml b/Documentation/devicetree/bindings/power/supply/maxim,max17040.yaml
+index de91cf3f058cc..f792d06db413f 100644
+--- a/Documentation/devicetree/bindings/power/supply/maxim,max17040.yaml
++++ b/Documentation/devicetree/bindings/power/supply/maxim,max17040.yaml
+@@ -89,7 +89,7 @@ examples:
+         reg = <0x36>;
+         maxim,alert-low-soc-level = <10>;
+         interrupt-parent = <&gpio7>;
+-        interrupts = <2 IRQ_TYPE_EDGE_FALLING>;
++        interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+         wakeup-source;
+       };
+     };
+diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
+index 992bf91eeec82..53d396650afbe 100644
+--- a/Documentation/filesystems/f2fs.rst
++++ b/Documentation/filesystems/f2fs.rst
+@@ -717,10 +717,10 @@ users.
+ ===================== ======================== ===================
+ User                  F2FS                     Block
+ ===================== ======================== ===================
+-                      META                     WRITE_LIFE_NOT_SET
+-                      HOT_NODE                 "
+-                      WARM_NODE                "
+-                      COLD_NODE                "
++N/A                   META                     WRITE_LIFE_NOT_SET
++N/A                   HOT_NODE                 "
++N/A                   WARM_NODE                "
++N/A                   COLD_NODE                "
+ ioctl(COLD)           COLD_DATA                WRITE_LIFE_EXTREME
+ extension list        "                        "
+ 
+@@ -746,10 +746,10 @@ WRITE_LIFE_LONG       "                        WRITE_LIFE_LONG
+ ===================== ======================== ===================
+ User                  F2FS                     Block
+ ===================== ======================== ===================
+-                      META                     WRITE_LIFE_MEDIUM;
+-                      HOT_NODE                 WRITE_LIFE_NOT_SET
+-                      WARM_NODE                "
+-                      COLD_NODE                WRITE_LIFE_NONE
++N/A                   META                     WRITE_LIFE_MEDIUM;
++N/A                   HOT_NODE                 WRITE_LIFE_NOT_SET
++N/A                   WARM_NODE                "
++N/A                   COLD_NODE                WRITE_LIFE_NONE
+ ioctl(COLD)           COLD_DATA                WRITE_LIFE_EXTREME
+ extension list        "                        "
+ 
+diff --git a/Makefile b/Makefile
+index 83f4212e004f1..975acb16046d2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+@@ -129,6 +129,11 @@ endif
+ $(if $(word 2, $(KBUILD_EXTMOD)), \
+ 	$(error building multiple external modules is not supported))
+ 
++# Remove trailing slashes
++ifneq ($(filter %/, $(KBUILD_EXTMOD)),)
++KBUILD_EXTMOD := $(shell dirname $(KBUILD_EXTMOD).)
++endif
++
+ export KBUILD_EXTMOD
+ 
+ # Kbuild will save output files in the current working directory.
+diff --git a/arch/arm/boot/dts/am335x-cm-t335.dts b/arch/arm/boot/dts/am335x-cm-t335.dts
+index 36d963db40266..ec4730c82d398 100644
+--- a/arch/arm/boot/dts/am335x-cm-t335.dts
++++ b/arch/arm/boot/dts/am335x-cm-t335.dts
+@@ -496,7 +496,7 @@ status = "okay";
+ 	status = "okay";
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&spi0_pins>;
+-	ti,pindir-d0-out-d1-in = <1>;
++	ti,pindir-d0-out-d1-in;
+ 	/* WLS1271 WiFi */
+ 	wlcore: wlcore@1 {
+ 		compatible = "ti,wl1271";
+diff --git a/arch/arm/boot/dts/am43x-epos-evm.dts b/arch/arm/boot/dts/am43x-epos-evm.dts
+index f517d1e843cf4..8b696107eef8c 100644
+--- a/arch/arm/boot/dts/am43x-epos-evm.dts
++++ b/arch/arm/boot/dts/am43x-epos-evm.dts
+@@ -860,7 +860,7 @@
+ 	pinctrl-names = "default", "sleep";
+ 	pinctrl-0 = <&spi0_pins_default>;
+ 	pinctrl-1 = <&spi0_pins_sleep>;
+-	ti,pindir-d0-out-d1-in = <1>;
++	ti,pindir-d0-out-d1-in;
+ };
+ 
+ &spi1 {
+@@ -868,7 +868,7 @@
+ 	pinctrl-names = "default", "sleep";
+ 	pinctrl-0 = <&spi1_pins_default>;
+ 	pinctrl-1 = <&spi1_pins_sleep>;
+-	ti,pindir-d0-out-d1-in = <1>;
++	ti,pindir-d0-out-d1-in;
+ };
+ 
+ &usb2_phy1 {
+diff --git a/arch/arm/boot/dts/am5718.dtsi b/arch/arm/boot/dts/am5718.dtsi
+index ebf4d3cc1cfbe..6d7530a48c73f 100644
+--- a/arch/arm/boot/dts/am5718.dtsi
++++ b/arch/arm/boot/dts/am5718.dtsi
+@@ -17,17 +17,13 @@
+  * VCP1, VCP2
+  * MLB
+  * ISS
+- * USB3, USB4
++ * USB3
+  */
+ 
+ &usb3_tm {
+ 	status = "disabled";
+ };
+ 
+-&usb4_tm {
+-	status = "disabled";
+-};
+-
+ &atl_tm {
+ 	status = "disabled";
+ };
+diff --git a/arch/arm/boot/dts/bcm283x-rpi-usb-otg.dtsi b/arch/arm/boot/dts/bcm283x-rpi-usb-otg.dtsi
+index 20322de2f8bf1..e2fd9610e1252 100644
+--- a/arch/arm/boot/dts/bcm283x-rpi-usb-otg.dtsi
++++ b/arch/arm/boot/dts/bcm283x-rpi-usb-otg.dtsi
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ &usb {
+ 	dr_mode = "otg";
+-	g-rx-fifo-size = <558>;
++	g-rx-fifo-size = <256>;
+ 	g-np-tx-fifo-size = <32>;
+ 	/*
+ 	 * According to dwc2 the sum of all device EP
+diff --git a/arch/arm/boot/dts/bcm283x-rpi-usb-peripheral.dtsi b/arch/arm/boot/dts/bcm283x-rpi-usb-peripheral.dtsi
+index 1409d1b559c18..0ff0e9e253272 100644
+--- a/arch/arm/boot/dts/bcm283x-rpi-usb-peripheral.dtsi
++++ b/arch/arm/boot/dts/bcm283x-rpi-usb-peripheral.dtsi
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ &usb {
+ 	dr_mode = "peripheral";
+-	g-rx-fifo-size = <558>;
++	g-rx-fifo-size = <256>;
+ 	g-np-tx-fifo-size = <32>;
+ 	g-tx-fifo-size = <256 256 512 512 512 768 768>;
+ };
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index 7db72a2f1020d..86872e12c355e 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -520,27 +520,27 @@
+ 		      <0x1811b408 0x004>,
+ 		      <0x180293a0 0x01c>;
+ 		reg-names = "mspi", "bspi", "intr_regs", "intr_status_reg";
+-		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,
++		interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>,
+-			     <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>,
+-			     <GIC_SPI 77 IRQ_TYPE_LEVEL_HIGH>,
+-			     <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>;
+-		interrupt-names = "spi_lr_fullness_reached",
++			     <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>;
++		interrupt-names = "mspi_done",
++				  "mspi_halted",
++				  "spi_lr_fullness_reached",
+ 				  "spi_lr_session_aborted",
+ 				  "spi_lr_impatient",
+ 				  "spi_lr_session_done",
+-				  "spi_lr_overhead",
+-				  "mspi_done",
+-				  "mspi_halted";
++				  "spi_lr_overread";
+ 		clocks = <&iprocmed>;
+ 		clock-names = "iprocmed";
+ 		num-cs = <2>;
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		spi_nor: spi-nor@0 {
++		spi_nor: flash@0 {
+ 			compatible = "jedec,spi-nor";
+ 			reg = <0>;
+ 			spi-max-frequency = <20000000>;
+diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
+index 149144cdff35a..648d23f7f7481 100644
+--- a/arch/arm/boot/dts/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/dra7-l4.dtsi
+@@ -4129,28 +4129,6 @@
+ 			};
+ 		};
+ 
+-		usb4_tm: target-module@140000 {		/* 0x48940000, ap 75 3c.0 */
+-			compatible = "ti,sysc-omap4", "ti,sysc";
+-			reg = <0x140000 0x4>,
+-			      <0x140010 0x4>;
+-			reg-names = "rev", "sysc";
+-			ti,sysc-mask = <SYSC_OMAP4_DMADISABLE>;
+-			ti,sysc-midle = <SYSC_IDLE_FORCE>,
+-					<SYSC_IDLE_NO>,
+-					<SYSC_IDLE_SMART>,
+-					<SYSC_IDLE_SMART_WKUP>;
+-			ti,sysc-sidle = <SYSC_IDLE_FORCE>,
+-					<SYSC_IDLE_NO>,
+-					<SYSC_IDLE_SMART>,
+-					<SYSC_IDLE_SMART_WKUP>;
+-			/* Domains (P, C): l3init_pwrdm, l3init_clkdm */
+-			clocks = <&l3init_clkctrl DRA7_L3INIT_USB_OTG_SS4_CLKCTRL 0>;
+-			clock-names = "fck";
+-			#address-cells = <1>;
+-			#size-cells = <1>;
+-			ranges = <0x0 0x140000 0x20000>;
+-		};
+-
+ 		target-module@170000 {			/* 0x48970000, ap 21 0a.0 */
+ 			compatible = "ti,sysc-omap4", "ti,sysc";
+ 			reg = <0x170010 0x4>;
+diff --git a/arch/arm/boot/dts/dra71x.dtsi b/arch/arm/boot/dts/dra71x.dtsi
+index cad0e4a2bd8df..9c270d8f75d5b 100644
+--- a/arch/arm/boot/dts/dra71x.dtsi
++++ b/arch/arm/boot/dts/dra71x.dtsi
+@@ -11,7 +11,3 @@
+ &rtctarget {
+ 	status = "disabled";
+ };
+-
+-&usb4_tm {
+-	status = "disabled";
+-};
+diff --git a/arch/arm/boot/dts/dra72x.dtsi b/arch/arm/boot/dts/dra72x.dtsi
+index d403acc754b68..f3e934ef7d3e2 100644
+--- a/arch/arm/boot/dts/dra72x.dtsi
++++ b/arch/arm/boot/dts/dra72x.dtsi
+@@ -108,7 +108,3 @@
+ &pcie2_rc {
+ 	compatible = "ti,dra726-pcie-rc", "ti,dra7-pcie";
+ };
+-
+-&usb4_tm {
+-	status = "disabled";
+-};
+diff --git a/arch/arm/boot/dts/dra74x.dtsi b/arch/arm/boot/dts/dra74x.dtsi
+index e1850d6c841a7..b4e07d99ffde1 100644
+--- a/arch/arm/boot/dts/dra74x.dtsi
++++ b/arch/arm/boot/dts/dra74x.dtsi
+@@ -49,49 +49,6 @@
+ 			reg = <0x41500000 0x100>;
+ 		};
+ 
+-		target-module@48940000 {
+-			compatible = "ti,sysc-omap4", "ti,sysc";
+-			reg = <0x48940000 0x4>,
+-			      <0x48940010 0x4>;
+-			reg-names = "rev", "sysc";
+-			ti,sysc-mask = <SYSC_OMAP4_DMADISABLE>;
+-			ti,sysc-midle = <SYSC_IDLE_FORCE>,
+-					<SYSC_IDLE_NO>,
+-					<SYSC_IDLE_SMART>,
+-					<SYSC_IDLE_SMART_WKUP>;
+-			ti,sysc-sidle = <SYSC_IDLE_FORCE>,
+-					<SYSC_IDLE_NO>,
+-					<SYSC_IDLE_SMART>,
+-					<SYSC_IDLE_SMART_WKUP>;
+-			clocks = <&l3init_clkctrl DRA7_L3INIT_USB_OTG_SS4_CLKCTRL 0>;
+-			clock-names = "fck";
+-			#address-cells = <1>;
+-			#size-cells = <1>;
+-			ranges = <0x0 0x48940000 0x20000>;
+-
+-			omap_dwc3_4: omap_dwc3_4@0 {
+-				compatible = "ti,dwc3";
+-				reg = <0 0x10000>;
+-				interrupts = <GIC_SPI 346 IRQ_TYPE_LEVEL_HIGH>;
+-				#address-cells = <1>;
+-				#size-cells = <1>;
+-				utmi-mode = <2>;
+-				ranges;
+-				status = "disabled";
+-				usb4: usb@10000 {
+-					compatible = "snps,dwc3";
+-					reg = <0x10000 0x17000>;
+-					interrupts = <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
+-						     <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
+-						     <GIC_SPI 346 IRQ_TYPE_LEVEL_HIGH>;
+-					interrupt-names = "peripheral",
+-							  "host",
+-							  "otg";
+-					maximum-speed = "high-speed";
+-					dr_mode = "otg";
+-				};
+-			};
+-		};
+ 
+ 		target-module@41501000 {
+ 			compatible = "ti,sysc-omap2", "ti,sysc";
+@@ -224,3 +181,52 @@
+ &pcie2_rc {
+ 	compatible = "ti,dra746-pcie-rc", "ti,dra7-pcie";
+ };
++
++&l4_per3 {
++	segment@0 {
++		usb4_tm: target-module@140000 {         /* 0x48940000, ap 75 3c.0 */
++			compatible = "ti,sysc-omap4", "ti,sysc";
++			reg = <0x140000 0x4>,
++			      <0x140010 0x4>;
++			reg-names = "rev", "sysc";
++			ti,sysc-mask = <SYSC_OMAP4_DMADISABLE>;
++			ti,sysc-midle = <SYSC_IDLE_FORCE>,
++					<SYSC_IDLE_NO>,
++					<SYSC_IDLE_SMART>,
++					<SYSC_IDLE_SMART_WKUP>;
++			ti,sysc-sidle = <SYSC_IDLE_FORCE>,
++					<SYSC_IDLE_NO>,
++					<SYSC_IDLE_SMART>,
++					<SYSC_IDLE_SMART_WKUP>;
++			/* Domains (P, C): l3init_pwrdm, l3init_clkdm */
++			clocks = <&l3init_clkctrl DRA7_L3INIT_USB_OTG_SS4_CLKCTRL 0>;
++			clock-names = "fck";
++			#address-cells = <1>;
++			#size-cells = <1>;
++			ranges = <0x0 0x140000 0x20000>;
++
++			omap_dwc3_4: omap_dwc3_4@0 {
++				compatible = "ti,dwc3";
++				reg = <0 0x10000>;
++				interrupts = <GIC_SPI 346 IRQ_TYPE_LEVEL_HIGH>;
++				#address-cells = <1>;
++				#size-cells = <1>;
++				utmi-mode = <2>;
++				ranges;
++				status = "disabled";
++				usb4: usb@10000 {
++					compatible = "snps,dwc3";
++					reg = <0x10000 0x17000>;
++					interrupts = <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
++						     <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
++						     <GIC_SPI 346 IRQ_TYPE_LEVEL_HIGH>;
++					interrupt-names = "peripheral",
++							  "host",
++							  "otg";
++					maximum-speed = "high-speed";
++					dr_mode = "otg";
++				};
++			};
++		};
++	};
++};
+diff --git a/arch/arm/boot/dts/exynos5422-odroidhc1.dts b/arch/arm/boot/dts/exynos5422-odroidhc1.dts
+index 20c222b33f98e..d91f7fa2cf808 100644
+--- a/arch/arm/boot/dts/exynos5422-odroidhc1.dts
++++ b/arch/arm/boot/dts/exynos5422-odroidhc1.dts
+@@ -22,7 +22,7 @@
+ 			label = "blue:heartbeat";
+ 			pwms = <&pwm 2 2000000 0>;
+ 			pwm-names = "pwm2";
+-			max_brightness = <255>;
++			max-brightness = <255>;
+ 			linux,default-trigger = "heartbeat";
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/exynos5422-odroidxu4.dts b/arch/arm/boot/dts/exynos5422-odroidxu4.dts
+index ede7822576436..1c24f9b35973f 100644
+--- a/arch/arm/boot/dts/exynos5422-odroidxu4.dts
++++ b/arch/arm/boot/dts/exynos5422-odroidxu4.dts
+@@ -24,7 +24,7 @@
+ 			label = "blue:heartbeat";
+ 			pwms = <&pwm 2 2000000 0>;
+ 			pwm-names = "pwm2";
+-			max_brightness = <255>;
++			max-brightness = <255>;
+ 			linux,default-trigger = "heartbeat";
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/exynos54xx-odroidxu-leds.dtsi b/arch/arm/boot/dts/exynos54xx-odroidxu-leds.dtsi
+index 2fc3e86dc5f70..982752e1df24a 100644
+--- a/arch/arm/boot/dts/exynos54xx-odroidxu-leds.dtsi
++++ b/arch/arm/boot/dts/exynos54xx-odroidxu-leds.dtsi
+@@ -22,7 +22,7 @@
+ 			 * Green LED is much brighter than the others
+ 			 * so limit its max brightness
+ 			 */
+-			max_brightness = <127>;
++			max-brightness = <127>;
+ 			linux,default-trigger = "mmc0";
+ 		};
+ 
+@@ -30,7 +30,7 @@
+ 			label = "blue:heartbeat";
+ 			pwms = <&pwm 2 2000000 0>;
+ 			pwm-names = "pwm2";
+-			max_brightness = <255>;
++			max-brightness = <255>;
+ 			linux,default-trigger = "heartbeat";
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/gemini-rut1xx.dts b/arch/arm/boot/dts/gemini-rut1xx.dts
+index 9611ddf067927..08091d2a64e15 100644
+--- a/arch/arm/boot/dts/gemini-rut1xx.dts
++++ b/arch/arm/boot/dts/gemini-rut1xx.dts
+@@ -125,18 +125,6 @@
+ 			};
+ 		};
+ 
+-		ethernet@60000000 {
+-			status = "okay";
+-
+-			ethernet-port@0 {
+-				phy-mode = "rgmii";
+-				phy-handle = <&phy0>;
+-			};
+-			ethernet-port@1 {
+-				/* Not used in this platform */
+-			};
+-		};
+-
+ 		usb@68000000 {
+ 			status = "okay";
+ 		};
+diff --git a/arch/arm/boot/dts/imx6q-dhcom-som.dtsi b/arch/arm/boot/dts/imx6q-dhcom-som.dtsi
+index d0768ae429faa..e3de2b487cf49 100644
+--- a/arch/arm/boot/dts/imx6q-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/imx6q-dhcom-som.dtsi
+@@ -96,30 +96,40 @@
+ 			reg = <0>;
+ 			max-speed = <100>;
+ 			reset-gpios = <&gpio5 0 GPIO_ACTIVE_LOW>;
+-			reset-delay-us = <1000>;
+-			reset-post-delay-us = <1000>;
++			reset-assert-us = <1000>;
++			reset-deassert-us = <1000>;
++			smsc,disable-energy-detect; /* Make plugin detection reliable */
+ 		};
+ 	};
+ };
+ 
+ &i2c1 {
+ 	clock-frequency = <100000>;
+-	pinctrl-names = "default";
++	pinctrl-names = "default", "gpio";
+ 	pinctrl-0 = <&pinctrl_i2c1>;
++	pinctrl-1 = <&pinctrl_i2c1_gpio>;
++	scl-gpios = <&gpio3 21 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
++	sda-gpios = <&gpio3 28 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 	status = "okay";
+ };
+ 
+ &i2c2 {
+ 	clock-frequency = <100000>;
+-	pinctrl-names = "default";
++	pinctrl-names = "default", "gpio";
+ 	pinctrl-0 = <&pinctrl_i2c2>;
++	pinctrl-1 = <&pinctrl_i2c2_gpio>;
++	scl-gpios = <&gpio4 12 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
++	sda-gpios = <&gpio4 13 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 	status = "okay";
+ };
+ 
+ &i2c3 {
+ 	clock-frequency = <100000>;
+-	pinctrl-names = "default";
++	pinctrl-names = "default", "gpio";
+ 	pinctrl-0 = <&pinctrl_i2c3>;
++	pinctrl-1 = <&pinctrl_i2c3_gpio>;
++	scl-gpios = <&gpio1 3 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
++	sda-gpios = <&gpio1 6 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 	status = "okay";
+ 
+ 	ltc3676: pmic@3c {
+@@ -285,6 +295,13 @@
+ 		>;
+ 	};
+ 
++	pinctrl_i2c1_gpio: i2c1-gpio-grp {
++		fsl,pins = <
++			MX6QDL_PAD_EIM_D21__GPIO3_IO21		0x4001b8b1
++			MX6QDL_PAD_EIM_D28__GPIO3_IO28		0x4001b8b1
++		>;
++	};
++
+ 	pinctrl_i2c2: i2c2-grp {
+ 		fsl,pins = <
+ 			MX6QDL_PAD_KEY_COL3__I2C2_SCL		0x4001b8b1
+@@ -292,6 +309,13 @@
+ 		>;
+ 	};
+ 
++	pinctrl_i2c2_gpio: i2c2-gpio-grp {
++		fsl,pins = <
++			MX6QDL_PAD_KEY_COL3__GPIO4_IO12		0x4001b8b1
++			MX6QDL_PAD_KEY_ROW3__GPIO4_IO13		0x4001b8b1
++		>;
++	};
++
+ 	pinctrl_i2c3: i2c3-grp {
+ 		fsl,pins = <
+ 			MX6QDL_PAD_GPIO_3__I2C3_SCL		0x4001b8b1
+@@ -299,6 +323,13 @@
+ 		>;
+ 	};
+ 
++	pinctrl_i2c3_gpio: i2c3-gpio-grp {
++		fsl,pins = <
++			MX6QDL_PAD_GPIO_3__GPIO1_IO03		0x4001b8b1
++			MX6QDL_PAD_GPIO_6__GPIO1_IO06		0x4001b8b1
++		>;
++	};
++
+ 	pinctrl_pmic_hw300: pmic-hw300-grp {
+ 		fsl,pins = <
+ 			MX6QDL_PAD_EIM_A25__GPIO5_IO02		0x1B0B0
+diff --git a/arch/arm/boot/dts/qcom-sdx55-t55.dts b/arch/arm/boot/dts/qcom-sdx55-t55.dts
+index ddcd53aa533db..2ffcd085904db 100644
+--- a/arch/arm/boot/dts/qcom-sdx55-t55.dts
++++ b/arch/arm/boot/dts/qcom-sdx55-t55.dts
+@@ -250,7 +250,7 @@
+ 		nand-ecc-step-size = <512>;
+ 		nand-bus-width = <8>;
+ 		/* efs2 partition is secured */
+-		secure-regions = <0x500000 0xb00000>;
++		secure-regions = /bits/ 64 <0x500000 0xb00000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/qcom-sdx55-telit-fn980-tlb.dts b/arch/arm/boot/dts/qcom-sdx55-telit-fn980-tlb.dts
+index 3065f84634b80..80c40da79604f 100644
+--- a/arch/arm/boot/dts/qcom-sdx55-telit-fn980-tlb.dts
++++ b/arch/arm/boot/dts/qcom-sdx55-telit-fn980-tlb.dts
+@@ -250,8 +250,8 @@
+ 		nand-ecc-step-size = <512>;
+ 		nand-bus-width = <8>;
+ 		/* ico and efs2 partitions are secured */
+-		secure-regions = <0x500000 0x500000
+-				  0xa00000 0xb00000>;
++		secure-regions = /bits/ 64 <0x500000 0x500000
++					    0xa00000 0xb00000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/r8a7779-marzen.dts b/arch/arm/boot/dts/r8a7779-marzen.dts
+index d2240b89ee529..4658453234959 100644
+--- a/arch/arm/boot/dts/r8a7779-marzen.dts
++++ b/arch/arm/boot/dts/r8a7779-marzen.dts
+@@ -145,7 +145,7 @@
+ 	status = "okay";
+ 
+ 	clocks = <&mstp1_clks R8A7779_CLK_DU>, <&x3_clk>;
+-	clock-names = "du", "dclkin.0";
++	clock-names = "du.0", "dclkin.0";
+ 
+ 	ports {
+ 		port@0 {
+diff --git a/arch/arm/boot/dts/r8a7779.dtsi b/arch/arm/boot/dts/r8a7779.dtsi
+index 74d7e9084eabe..3c5fcdfe16b87 100644
+--- a/arch/arm/boot/dts/r8a7779.dtsi
++++ b/arch/arm/boot/dts/r8a7779.dtsi
+@@ -463,6 +463,7 @@
+ 		reg = <0xfff80000 0x40000>;
+ 		interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&mstp1_clks R8A7779_CLK_DU>;
++		clock-names = "du.0";
+ 		power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
+ 		status = "disabled";
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+index 272a1a67a9adc..31d08423a32f0 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+@@ -123,7 +123,6 @@
+ 	max-speed = <100>;
+ 	phy-handle = <&phy0>;
+ 	st,eth-ref-clk-sel;
+-	phy-reset-gpios = <&gpioh 3 GPIO_ACTIVE_LOW>;
+ 
+ 	mdio0 {
+ 		#address-cells = <1>;
+@@ -132,6 +131,13 @@
+ 
+ 		phy0: ethernet-phy@1 {
+ 			reg = <1>;
++			/* LAN8710Ai */
++			compatible = "ethernet-phy-id0007.c0f0",
++				     "ethernet-phy-ieee802.3-c22";
++			clocks = <&rcc ETHCK_K>;
++			reset-gpios = <&gpioh 3 GPIO_ACTIVE_LOW>;
++			reset-assert-us = <500>;
++			reset-deassert-us = <500>;
+ 			interrupt-parent = <&gpioi>;
+ 			interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
+ 		};
+diff --git a/arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts b/arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts
+index 97f497854e05d..d05fa679dcd30 100644
+--- a/arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts
++++ b/arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts
+@@ -85,7 +85,7 @@
+ 	pinctrl-0 = <&emac_rgmii_pins>;
+ 	phy-supply = <&reg_gmac_3v3>;
+ 	phy-handle = <&ext_rgmii_phy>;
+-	phy-mode = "rgmii";
++	phy-mode = "rgmii-id";
+ 
+ 	status = "okay";
+ };
+diff --git a/arch/arm/mach-exynos/exynos.c b/arch/arm/mach-exynos/exynos.c
+index 25b01da4771b3..8b48326be9fd5 100644
+--- a/arch/arm/mach-exynos/exynos.c
++++ b/arch/arm/mach-exynos/exynos.c
+@@ -55,6 +55,7 @@ void __init exynos_sysram_init(void)
+ 		sysram_base_addr = of_iomap(node, 0);
+ 		sysram_base_phys = of_translate_address(node,
+ 					   of_get_address(node, 0, NULL, NULL));
++		of_node_put(node);
+ 		break;
+ 	}
+ 
+@@ -62,6 +63,7 @@ void __init exynos_sysram_init(void)
+ 		if (!of_device_is_available(node))
+ 			continue;
+ 		sysram_ns_base_addr = of_iomap(node, 0);
++		of_node_put(node);
+ 		break;
+ 	}
+ }
+diff --git a/arch/arm/probes/kprobes/test-thumb.c b/arch/arm/probes/kprobes/test-thumb.c
+index 456c181a7bfe8..4e11f0b760f89 100644
+--- a/arch/arm/probes/kprobes/test-thumb.c
++++ b/arch/arm/probes/kprobes/test-thumb.c
+@@ -441,21 +441,21 @@ void kprobe_thumb32_test_cases(void)
+ 		"3:	mvn	r0, r0	\n\t"
+ 		"2:	nop		\n\t")
+ 
+-	TEST_RX("tbh	[pc, r",7, (9f-(1f+4))>>1,"]",
++	TEST_RX("tbh	[pc, r",7, (9f-(1f+4))>>1,", lsl #1]",
+ 		"9:			\n\t"
+ 		".short	(2f-1b-4)>>1	\n\t"
+ 		".short	(3f-1b-4)>>1	\n\t"
+ 		"3:	mvn	r0, r0	\n\t"
+ 		"2:	nop		\n\t")
+ 
+-	TEST_RX("tbh	[pc, r",12, ((9f-(1f+4))>>1)+1,"]",
++	TEST_RX("tbh	[pc, r",12, ((9f-(1f+4))>>1)+1,", lsl #1]",
+ 		"9:			\n\t"
+ 		".short	(2f-1b-4)>>1	\n\t"
+ 		".short	(3f-1b-4)>>1	\n\t"
+ 		"3:	mvn	r0, r0	\n\t"
+ 		"2:	nop		\n\t")
+ 
+-	TEST_RRX("tbh	[r",1,9f, ", r",14,1,"]",
++	TEST_RRX("tbh	[r",1,9f, ", r",14,1,", lsl #1]",
+ 		"9:			\n\t"
+ 		".short	(2f-1b-4)>>1	\n\t"
+ 		".short	(3f-1b-4)>>1	\n\t"
+@@ -468,10 +468,10 @@ void kprobe_thumb32_test_cases(void)
+ 
+ 	TEST_UNSUPPORTED("strexb	r0, r1, [r2]")
+ 	TEST_UNSUPPORTED("strexh	r0, r1, [r2]")
+-	TEST_UNSUPPORTED("strexd	r0, r1, [r2]")
++	TEST_UNSUPPORTED("strexd	r0, r1, r2, [r2]")
+ 	TEST_UNSUPPORTED("ldrexb	r0, [r1]")
+ 	TEST_UNSUPPORTED("ldrexh	r0, [r1]")
+-	TEST_UNSUPPORTED("ldrexd	r0, [r1]")
++	TEST_UNSUPPORTED("ldrexd	r0, r1, [r1]")
+ 
+ 	TEST_GROUP("Data-processing (shifted register) and (modified immediate)")
+ 
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
+index e22b94c83647f..5e66ce1a334fb 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
+@@ -79,7 +79,7 @@
+ &emac {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&rgmii_pins>;
+-	phy-mode = "rgmii-id";
++	phy-mode = "rgmii-txid";
+ 	phy-handle = <&ext_rgmii_phy>;
+ 	phy-supply = <&reg_dc1sw>;
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+index 24d293ef56d7b..74d15789ce9a0 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+@@ -655,6 +655,8 @@ edp_brij_i2c: &i2c2 {
+ 		clocks = <&rpmhcc RPMH_LN_BB_CLK3>;
+ 		clock-names = "refclk";
+ 
++		no-hpd;
++
+ 		ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index 6228ba2d85132..b82014eac56b7 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -2754,8 +2754,8 @@
+ 		usb_1_qmpphy: phy-wrapper@88e9000 {
+ 			compatible = "qcom,sc7180-qmp-usb3-dp-phy";
+ 			reg = <0 0x088e9000 0 0x18c>,
+-			      <0 0x088e8000 0 0x38>,
+-			      <0 0x088ea000 0 0x40>;
++			      <0 0x088e8000 0 0x3c>,
++			      <0 0x088ea000 0 0x18c>;
+ 			status = "disabled";
+ 			#address-cells = <2>;
+ 			#size-cells = <2>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi b/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi
+index 8f617f7b6d34e..f712771df0c77 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi
+@@ -46,6 +46,14 @@
+ 	};
+ 
+ 	reserved-memory {
++		/* The rmtfs_mem needs to be guarded due to "XPU limitations"
++		 * it is otherwise possible for an allocation adjacent to the
++		 * rmtfs_mem region to trigger an XPU violation, causing a crash.
++		 */
++		rmtfs_lower_guard: memory@f5b00000 {
++			no-map;
++			reg = <0 0xf5b00000 0 0x1000>;
++		};
+ 		/*
+ 		 * The rmtfs memory region in downstream is 'dynamically allocated'
+ 		 * but given the same address every time. Hard code it as this address is
+@@ -59,6 +67,10 @@
+ 			qcom,client-id = <1>;
+ 			qcom,vmid = <15>;
+ 		};
++		rmtfs_upper_guard: memory@f5d01000 {
++			no-map;
++			reg = <0 0xf5d01000 0 0x2000>;
++		};
+ 
+ 		/*
+ 		 * It seems like reserving the old rmtfs_mem region is also needed to prevent
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index 140db2d5ba317..c2a709a384e9e 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -376,6 +376,8 @@
+ 		clocks = <&sn65dsi86_refclk>;
+ 		clock-names = "refclk";
+ 
++		no-hpd;
++
+ 		ports {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+index 46f8dbf689048..95d9262d4f8a6 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a774a1.dtsi
+@@ -76,6 +76,7 @@
+ 			opp-hz = /bits/ 64 <1500000000>;
+ 			opp-microvolt = <820000>;
+ 			clock-latency-ns = <300000>;
++			opp-suspend;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/renesas/r8a77960.dtsi b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+index 12476e354d746..a91a42c57484e 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77960.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77960.dtsi
+@@ -63,18 +63,19 @@
+ 
+ 		opp-500000000 {
+ 			opp-hz = /bits/ 64 <500000000>;
+-			opp-microvolt = <820000>;
++			opp-microvolt = <830000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+-			opp-microvolt = <820000>;
++			opp-microvolt = <830000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1500000000 {
+ 			opp-hz = /bits/ 64 <1500000000>;
+-			opp-microvolt = <820000>;
++			opp-microvolt = <830000>;
+ 			clock-latency-ns = <300000>;
++			opp-suspend;
+ 		};
+ 		opp-1600000000 {
+ 			opp-hz = /bits/ 64 <1600000000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77961.dtsi b/arch/arm64/boot/dts/renesas/r8a77961.dtsi
+index d9804768425a7..7f9d5c3605222 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77961.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77961.dtsi
+@@ -52,18 +52,19 @@
+ 
+ 		opp-500000000 {
+ 			opp-hz = /bits/ 64 <500000000>;
+-			opp-microvolt = <820000>;
++			opp-microvolt = <830000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1000000000 {
+ 			opp-hz = /bits/ 64 <1000000000>;
+-			opp-microvolt = <820000>;
++			opp-microvolt = <830000>;
+ 			clock-latency-ns = <300000>;
+ 		};
+ 		opp-1500000000 {
+ 			opp-hz = /bits/ 64 <1500000000>;
+-			opp-microvolt = <820000>;
++			opp-microvolt = <830000>;
+ 			clock-latency-ns = <300000>;
++			opp-suspend;
+ 		};
+ 		opp-1600000000 {
+ 			opp-hz = /bits/ 64 <1600000000>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts b/arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts
+index 7417cf5fea0f0..2426e533128ce 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts
++++ b/arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts
+@@ -59,7 +59,7 @@
+ 	memory@48000000 {
+ 		device_type = "memory";
+ 		/* first 128MB is reserved for secure area. */
+-		reg = <0x0 0x48000000 0x0 0x38000000>;
++		reg = <0x0 0x48000000 0x0 0x78000000>;
+ 	};
+ 
+ 	osc5_clk: osc5-clock {
+diff --git a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+index 70b3604e56cd7..9a7d1aca49984 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+@@ -1096,7 +1096,6 @@
+ 			      <0x0 0xf1060000 0 0x110000>;
+ 			interrupts = <GIC_PPI 9
+ 				      (GIC_CPU_MASK_SIMPLE(1) | IRQ_TYPE_LEVEL_HIGH)>;
+-			power-domains = <&sysc R8A779A0_PD_ALWAYS_ON>;
+ 		};
+ 
+ 		fcpvd0: fcp@fea10000 {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-rock-pi-e.dts b/arch/arm64/boot/dts/rockchip/rk3328-rock-pi-e.dts
+index c7e31efdd2e11..c02059c0a9547 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-rock-pi-e.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-rock-pi-e.dts
+@@ -177,8 +177,6 @@
+ };
+ 
+ &gmac2phy {
+-	pinctrl-names = "default";
+-	pinctrl-0 = <&fephyled_linkm1>, <&fephyled_rxm1>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts b/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts
+index fa58098876438..cef4d18b599dd 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts
+@@ -33,7 +33,7 @@
+ 
+ 		sys_led: led-sys {
+ 			gpios = <&gpio0 RK_PB5 GPIO_ACTIVE_HIGH>;
+-			label = "red:sys";
++			label = "red:power";
+ 			default-state = "on";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-roc-pc.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-roc-pc.dtsi
+index c172f5a803e7b..8b27ee4be7556 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-roc-pc.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-roc-pc.dtsi
+@@ -389,6 +389,7 @@
+ 
+ 			vcc_sdio: LDO_REG4 {
+ 				regulator-name = "vcc_sdio";
++				regulator-always-on;
+ 				regulator-boot-on;
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <3000000>;
+@@ -493,6 +494,8 @@
+ 		regulator-min-microvolt = <712500>;
+ 		regulator-max-microvolt = <1500000>;
+ 		regulator-ramp-delay = <1000>;
++		regulator-always-on;
++		regulator-boot-on;
+ 		vin-supply = <&vcc3v3_sys>;
+ 
+ 		regulator-state-mem {
+diff --git a/arch/arm64/boot/dts/ti/k3-am64-main.dtsi b/arch/arm64/boot/dts/ti/k3-am64-main.dtsi
+index ca59d1f711f8a..bcbf436a96b55 100644
+--- a/arch/arm64/boot/dts/ti/k3-am64-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am64-main.dtsi
+@@ -489,7 +489,8 @@
+ 				ti,mac-only;
+ 				label = "port1";
+ 				phys = <&phy_gmii_sel 1>;
+-				mac-address = [00 00 de ad be ef];
++				mac-address = [00 00 00 00 00 00];
++				ti,syscon-efuse = <&main_conf 0x200>;
+ 			};
+ 
+ 			cpsw_port2: port@2 {
+@@ -497,7 +498,7 @@
+ 				ti,mac-only;
+ 				label = "port2";
+ 				phys = <&phy_gmii_sel 2>;
+-				mac-address = [00 01 de ad be ef];
++				mac-address = [00 00 00 00 00 00];
+ 			};
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi b/arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi
+index deb19ae5e168a..eaf7edb2ef4dd 100644
+--- a/arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi
+@@ -87,7 +87,7 @@
+ 	};
+ 
+ 	mcu_gpio0: gpio@4201000 {
+-		compatible = "ti,am64-gpio", "keystone-gpio";
++		compatible = "ti,am64-gpio", "ti,keystone-gpio";
+ 		reg = <0x0 0x4201000 0x0 0x100>;
+ 		gpio-controller;
+ 		#gpio-cells = <2>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-evm.dts b/arch/arm64/boot/dts/ti/k3-am642-evm.dts
+index dad0efa961ed1..2fd0de905e618 100644
+--- a/arch/arm64/boot/dts/ti/k3-am642-evm.dts
++++ b/arch/arm64/boot/dts/ti/k3-am642-evm.dts
+@@ -334,7 +334,7 @@
+ &main_spi0 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&main_spi0_pins_default>;
+-	ti,pindir-d0-out-d1-in = <1>;
++	ti,pindir-d0-out-d1-in;
+ 	eeprom@0 {
+ 		compatible = "microchip,93lc46b";
+ 		reg = <0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-iot2050-common.dtsi b/arch/arm64/boot/dts/ti/k3-am65-iot2050-common.dtsi
+index de763ca9251c7..d9a7e2689b93b 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-iot2050-common.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-iot2050-common.dtsi
+@@ -575,7 +575,7 @@
+ 
+ 	#address-cells = <1>;
+ 	#size-cells= <0>;
+-	ti,pindir-d0-out-d1-in = <1>;
++	ti,pindir-d0-out-d1-in;
+ };
+ 
+ &tscadc0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-am654-base-board.dts b/arch/arm64/boot/dts/ti/k3-am654-base-board.dts
+index eddb2ffb93ca6..1b947e2c2e74f 100644
+--- a/arch/arm64/boot/dts/ti/k3-am654-base-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-am654-base-board.dts
+@@ -299,7 +299,7 @@
+ 	pinctrl-0 = <&main_spi0_pins_default>;
+ 	#address-cells = <1>;
+ 	#size-cells= <0>;
+-	ti,pindir-d0-out-d1-in = <1>;
++	ti,pindir-d0-out-d1-in;
+ 
+ 	flash@0{
+ 		compatible = "jedec,spi-nor";
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index 19fea8adbcff4..a4b4b17a6ad74 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -683,6 +683,7 @@
+ 					  "otg";
+ 			maximum-speed = "super-speed";
+ 			dr_mode = "otg";
++			cdns,phyrst-a-enable;
+ 		};
+ 	};
+ 
+@@ -696,7 +697,6 @@
+ 			     <149>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+-		#address-cells = <0>;
+ 		ti,ngpio = <69>;
+ 		ti,davinci-gpio-unbanked = <0>;
+ 		power-domains = <&k3_pds 105 TI_SCI_PD_EXCLUSIVE>;
+@@ -714,7 +714,6 @@
+ 			     <158>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+-		#address-cells = <0>;
+ 		ti,ngpio = <69>;
+ 		ti,davinci-gpio-unbanked = <0>;
+ 		power-domains = <&k3_pds 107 TI_SCI_PD_EXCLUSIVE>;
+@@ -732,7 +731,6 @@
+ 			     <167>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+-		#address-cells = <0>;
+ 		ti,ngpio = <69>;
+ 		ti,davinci-gpio-unbanked = <0>;
+ 		power-domains = <&k3_pds 109 TI_SCI_PD_EXCLUSIVE>;
+@@ -750,7 +748,6 @@
+ 			     <176>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+-		#address-cells = <0>;
+ 		ti,ngpio = <69>;
+ 		ti,davinci-gpio-unbanked = <0>;
+ 		power-domains = <&k3_pds 111 TI_SCI_PD_EXCLUSIVE>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+index 5663fe3ea4660..343449af53fb9 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+@@ -117,7 +117,6 @@
+ 		interrupts = <103>, <104>, <105>, <106>, <107>, <108>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+-		#address-cells = <0>;
+ 		ti,ngpio = <85>;
+ 		ti,davinci-gpio-unbanked = <0>;
+ 		power-domains = <&k3_pds 113 TI_SCI_PD_EXCLUSIVE>;
+@@ -134,7 +133,6 @@
+ 		interrupts = <112>, <113>, <114>, <115>, <116>, <117>;
+ 		interrupt-controller;
+ 		#interrupt-cells = <2>;
+-		#address-cells = <0>;
+ 		ti,ngpio = <85>;
+ 		ti,davinci-gpio-unbanked = <0>;
+ 		power-domains = <&k3_pds 114 TI_SCI_PD_EXCLUSIVE>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
+index 60764366e22bd..ffccbc53f1e75 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
+@@ -9,6 +9,7 @@
+ #include <dt-bindings/gpio/gpio.h>
+ #include <dt-bindings/input/input.h>
+ #include <dt-bindings/net/ti-dp83867.h>
++#include <dt-bindings/phy/phy-cadence.h>
+ 
+ / {
+ 	chosen {
+@@ -358,7 +359,7 @@
+ };
+ 
+ &serdes3 {
+-	serdes3_usb_link: link@0 {
++	serdes3_usb_link: phy@0 {
+ 		reg = <0>;
+ 		cdns,num-lanes = <2>;
+ 		#phy-cells = <0>;
+@@ -635,8 +636,45 @@
+ 	status = "disabled";
+ };
+ 
++&cmn_refclk1 {
++	clock-frequency = <100000000>;
++};
++
++&wiz0_pll1_refclk {
++	assigned-clocks = <&wiz0_pll1_refclk>;
++	assigned-clock-parents = <&cmn_refclk1>;
++};
++
++&wiz0_refclk_dig {
++	assigned-clocks = <&wiz0_refclk_dig>;
++	assigned-clock-parents = <&cmn_refclk1>;
++};
++
++&wiz1_pll1_refclk {
++	assigned-clocks = <&wiz1_pll1_refclk>;
++	assigned-clock-parents = <&cmn_refclk1>;
++};
++
++&wiz1_refclk_dig {
++	assigned-clocks = <&wiz1_refclk_dig>;
++	assigned-clock-parents = <&cmn_refclk1>;
++};
++
++&wiz2_pll1_refclk {
++	assigned-clocks = <&wiz2_pll1_refclk>;
++	assigned-clock-parents = <&cmn_refclk1>;
++};
++
++&wiz2_refclk_dig {
++	assigned-clocks = <&wiz2_refclk_dig>;
++	assigned-clock-parents = <&cmn_refclk1>;
++};
++
+ &serdes0 {
+-	serdes0_pcie_link: link@0 {
++	assigned-clocks = <&serdes0 CDNS_SIERRA_PLL_CMNLC>;
++	assigned-clock-parents = <&wiz0_pll1_refclk>;
++
++	serdes0_pcie_link: phy@0 {
+ 		reg = <0>;
+ 		cdns,num-lanes = <1>;
+ 		#phy-cells = <0>;
+@@ -646,7 +684,10 @@
+ };
+ 
+ &serdes1 {
+-	serdes1_pcie_link: link@0 {
++	assigned-clocks = <&serdes1 CDNS_SIERRA_PLL_CMNLC>;
++	assigned-clock-parents = <&wiz1_pll1_refclk>;
++
++	serdes1_pcie_link: phy@0 {
+ 		reg = <0>;
+ 		cdns,num-lanes = <2>;
+ 		#phy-cells = <0>;
+@@ -656,7 +697,10 @@
+ };
+ 
+ &serdes2 {
+-	serdes2_pcie_link: link@0 {
++	assigned-clocks = <&serdes2 CDNS_SIERRA_PLL_CMNLC>;
++	assigned-clock-parents = <&wiz2_pll1_refclk>;
++
++	serdes2_pcie_link: phy@0 {
+ 		reg = <0>;
+ 		cdns,num-lanes = <2>;
+ 		#phy-cells = <0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+index 3bcafe4c1742e..9ce4afd6da62f 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+@@ -8,6 +8,20 @@
+ #include <dt-bindings/mux/mux.h>
+ #include <dt-bindings/mux/ti-serdes.h>
+ 
++/ {
++	cmn_refclk: clock-cmnrefclk {
++		#clock-cells = <0>;
++		compatible = "fixed-clock";
++		clock-frequency = <0>;
++	};
++
++	cmn_refclk1: clock-cmnrefclk1 {
++		#clock-cells = <0>;
++		compatible = "fixed-clock";
++		clock-frequency = <0>;
++	};
++};
++
+ &cbass_main {
+ 	msmc_ram: sram@70000000 {
+ 		compatible = "mmio-sram";
+@@ -338,24 +352,12 @@
+ 		pinctrl-single,function-mask = <0xffffffff>;
+ 	};
+ 
+-	dummy_cmn_refclk: dummy-cmn-refclk {
+-		#clock-cells = <0>;
+-		compatible = "fixed-clock";
+-		clock-frequency = <100000000>;
+-	};
+-
+-	dummy_cmn_refclk1: dummy-cmn-refclk1 {
+-		#clock-cells = <0>;
+-		compatible = "fixed-clock";
+-		clock-frequency = <100000000>;
+-	};
+-
+ 	serdes_wiz0: wiz@5000000 {
+ 		compatible = "ti,j721e-wiz-16g";
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		power-domains = <&k3_pds 292 TI_SCI_PD_EXCLUSIVE>;
+-		clocks = <&k3_clks 292 5>, <&k3_clks 292 11>, <&dummy_cmn_refclk>;
++		clocks = <&k3_clks 292 5>, <&k3_clks 292 11>, <&cmn_refclk>;
+ 		clock-names = "fck", "core_ref_clk", "ext_ref_clk";
+ 		assigned-clocks = <&k3_clks 292 11>, <&k3_clks 292 0>;
+ 		assigned-clock-parents = <&k3_clks 292 15>, <&k3_clks 292 4>;
+@@ -364,21 +366,21 @@
+ 		ranges = <0x5000000 0x0 0x5000000 0x10000>;
+ 
+ 		wiz0_pll0_refclk: pll0-refclk {
+-			clocks = <&k3_clks 292 11>, <&dummy_cmn_refclk>;
++			clocks = <&k3_clks 292 11>, <&cmn_refclk>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz0_pll0_refclk>;
+ 			assigned-clock-parents = <&k3_clks 292 11>;
+ 		};
+ 
+ 		wiz0_pll1_refclk: pll1-refclk {
+-			clocks = <&k3_clks 292 0>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 292 0>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz0_pll1_refclk>;
+ 			assigned-clock-parents = <&k3_clks 292 0>;
+ 		};
+ 
+ 		wiz0_refclk_dig: refclk-dig {
+-			clocks = <&k3_clks 292 11>, <&k3_clks 292 0>, <&dummy_cmn_refclk>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 292 11>, <&k3_clks 292 0>, <&cmn_refclk>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz0_refclk_dig>;
+ 			assigned-clock-parents = <&k3_clks 292 11>;
+@@ -412,7 +414,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		power-domains = <&k3_pds 293 TI_SCI_PD_EXCLUSIVE>;
+-		clocks = <&k3_clks 293 5>, <&k3_clks 293 13>, <&dummy_cmn_refclk>;
++		clocks = <&k3_clks 293 5>, <&k3_clks 293 13>, <&cmn_refclk>;
+ 		clock-names = "fck", "core_ref_clk", "ext_ref_clk";
+ 		assigned-clocks = <&k3_clks 293 13>, <&k3_clks 293 0>;
+ 		assigned-clock-parents = <&k3_clks 293 17>, <&k3_clks 293 4>;
+@@ -421,21 +423,21 @@
+ 		ranges = <0x5010000 0x0 0x5010000 0x10000>;
+ 
+ 		wiz1_pll0_refclk: pll0-refclk {
+-			clocks = <&k3_clks 293 13>, <&dummy_cmn_refclk>;
++			clocks = <&k3_clks 293 13>, <&cmn_refclk>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz1_pll0_refclk>;
+ 			assigned-clock-parents = <&k3_clks 293 13>;
+ 		};
+ 
+ 		wiz1_pll1_refclk: pll1-refclk {
+-			clocks = <&k3_clks 293 0>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 293 0>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz1_pll1_refclk>;
+ 			assigned-clock-parents = <&k3_clks 293 0>;
+ 		};
+ 
+ 		wiz1_refclk_dig: refclk-dig {
+-			clocks = <&k3_clks 293 13>, <&k3_clks 293 0>, <&dummy_cmn_refclk>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 293 13>, <&k3_clks 293 0>, <&cmn_refclk>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz1_refclk_dig>;
+ 			assigned-clock-parents = <&k3_clks 293 13>;
+@@ -469,7 +471,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		power-domains = <&k3_pds 294 TI_SCI_PD_EXCLUSIVE>;
+-		clocks = <&k3_clks 294 5>, <&k3_clks 294 11>, <&dummy_cmn_refclk>;
++		clocks = <&k3_clks 294 5>, <&k3_clks 294 11>, <&cmn_refclk>;
+ 		clock-names = "fck", "core_ref_clk", "ext_ref_clk";
+ 		assigned-clocks = <&k3_clks 294 11>, <&k3_clks 294 0>;
+ 		assigned-clock-parents = <&k3_clks 294 15>, <&k3_clks 294 4>;
+@@ -478,21 +480,21 @@
+ 		ranges = <0x5020000 0x0 0x5020000 0x10000>;
+ 
+ 		wiz2_pll0_refclk: pll0-refclk {
+-			clocks = <&k3_clks 294 11>, <&dummy_cmn_refclk>;
++			clocks = <&k3_clks 294 11>, <&cmn_refclk>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz2_pll0_refclk>;
+ 			assigned-clock-parents = <&k3_clks 294 11>;
+ 		};
+ 
+ 		wiz2_pll1_refclk: pll1-refclk {
+-			clocks = <&k3_clks 294 0>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 294 0>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz2_pll1_refclk>;
+ 			assigned-clock-parents = <&k3_clks 294 0>;
+ 		};
+ 
+ 		wiz2_refclk_dig: refclk-dig {
+-			clocks = <&k3_clks 294 11>, <&k3_clks 294 0>, <&dummy_cmn_refclk>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 294 11>, <&k3_clks 294 0>, <&cmn_refclk>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz2_refclk_dig>;
+ 			assigned-clock-parents = <&k3_clks 294 11>;
+@@ -526,7 +528,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <1>;
+ 		power-domains = <&k3_pds 295 TI_SCI_PD_EXCLUSIVE>;
+-		clocks = <&k3_clks 295 5>, <&k3_clks 295 9>, <&dummy_cmn_refclk>;
++		clocks = <&k3_clks 295 5>, <&k3_clks 295 9>, <&cmn_refclk>;
+ 		clock-names = "fck", "core_ref_clk", "ext_ref_clk";
+ 		assigned-clocks = <&k3_clks 295 9>, <&k3_clks 295 0>;
+ 		assigned-clock-parents = <&k3_clks 295 13>, <&k3_clks 295 4>;
+@@ -535,21 +537,21 @@
+ 		ranges = <0x5030000 0x0 0x5030000 0x10000>;
+ 
+ 		wiz3_pll0_refclk: pll0-refclk {
+-			clocks = <&k3_clks 295 9>, <&dummy_cmn_refclk>;
++			clocks = <&k3_clks 295 9>, <&cmn_refclk>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz3_pll0_refclk>;
+ 			assigned-clock-parents = <&k3_clks 295 9>;
+ 		};
+ 
+ 		wiz3_pll1_refclk: pll1-refclk {
+-			clocks = <&k3_clks 295 0>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 295 0>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz3_pll1_refclk>;
+ 			assigned-clock-parents = <&k3_clks 295 0>;
+ 		};
+ 
+ 		wiz3_refclk_dig: refclk-dig {
+-			clocks = <&k3_clks 295 9>, <&k3_clks 295 0>, <&dummy_cmn_refclk>, <&dummy_cmn_refclk1>;
++			clocks = <&k3_clks 295 9>, <&k3_clks 295 0>, <&cmn_refclk>, <&cmn_refclk1>;
+ 			#clock-cells = <0>;
+ 			assigned-clocks = <&wiz3_refclk_dig>;
+ 			assigned-clock-parents = <&k3_clks 295 9>;
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index 08c6f769df9ae..9907a431db0d0 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -491,7 +491,6 @@ CONFIG_SPI_S3C64XX=y
+ CONFIG_SPI_SH_MSIOF=m
+ CONFIG_SPI_SUN6I=y
+ CONFIG_SPI_SPIDEV=m
+-CONFIG_MTK_PMIC_WRAP=m
+ CONFIG_SPMI=y
+ CONFIG_PINCTRL_SINGLE=y
+ CONFIG_PINCTRL_MAX77620=y
+diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
+index 95cd62d673711..2cf999e41d30e 100644
+--- a/arch/arm64/lib/copy_from_user.S
++++ b/arch/arm64/lib/copy_from_user.S
+@@ -29,7 +29,7 @@
+ 	.endm
+ 
+ 	.macro ldrh1 reg, ptr, val
+-	user_ldst 9998f, ldtrh, \reg, \ptr, \val
++	user_ldst 9997f, ldtrh, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro strh1 reg, ptr, val
+@@ -37,7 +37,7 @@
+ 	.endm
+ 
+ 	.macro ldr1 reg, ptr, val
+-	user_ldst 9998f, ldtr, \reg, \ptr, \val
++	user_ldst 9997f, ldtr, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro str1 reg, ptr, val
+@@ -45,7 +45,7 @@
+ 	.endm
+ 
+ 	.macro ldp1 reg1, reg2, ptr, val
+-	user_ldp 9998f, \reg1, \reg2, \ptr, \val
++	user_ldp 9997f, \reg1, \reg2, \ptr, \val
+ 	.endm
+ 
+ 	.macro stp1 reg1, reg2, ptr, val
+@@ -53,8 +53,10 @@
+ 	.endm
+ 
+ end	.req	x5
++srcin	.req	x15
+ SYM_FUNC_START(__arch_copy_from_user)
+ 	add	end, x0, x2
++	mov	srcin, x1
+ #include "copy_template.S"
+ 	mov	x0, #0				// Nothing to copy
+ 	ret
+@@ -63,6 +65,11 @@ EXPORT_SYMBOL(__arch_copy_from_user)
+ 
+ 	.section .fixup,"ax"
+ 	.align	2
++9997:	cmp	dst, dstin
++	b.ne	9998f
++	// Before being absolutely sure we couldn't copy anything, try harder
++USER(9998f, ldtrb tmp1w, [srcin])
++	strb	tmp1w, [dst], #1
+ 9998:	sub	x0, end, dst			// bytes not copied
+ 	ret
+ 	.previous
+diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
+index 1f61cd0df0627..dbea3799c3efb 100644
+--- a/arch/arm64/lib/copy_in_user.S
++++ b/arch/arm64/lib/copy_in_user.S
+@@ -30,33 +30,34 @@
+ 	.endm
+ 
+ 	.macro ldrh1 reg, ptr, val
+-	user_ldst 9998f, ldtrh, \reg, \ptr, \val
++	user_ldst 9997f, ldtrh, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro strh1 reg, ptr, val
+-	user_ldst 9998f, sttrh, \reg, \ptr, \val
++	user_ldst 9997f, sttrh, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro ldr1 reg, ptr, val
+-	user_ldst 9998f, ldtr, \reg, \ptr, \val
++	user_ldst 9997f, ldtr, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro str1 reg, ptr, val
+-	user_ldst 9998f, sttr, \reg, \ptr, \val
++	user_ldst 9997f, sttr, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro ldp1 reg1, reg2, ptr, val
+-	user_ldp 9998f, \reg1, \reg2, \ptr, \val
++	user_ldp 9997f, \reg1, \reg2, \ptr, \val
+ 	.endm
+ 
+ 	.macro stp1 reg1, reg2, ptr, val
+-	user_stp 9998f, \reg1, \reg2, \ptr, \val
++	user_stp 9997f, \reg1, \reg2, \ptr, \val
+ 	.endm
+ 
+ end	.req	x5
+-
++srcin	.req	x15
+ SYM_FUNC_START(__arch_copy_in_user)
+ 	add	end, x0, x2
++	mov	srcin, x1
+ #include "copy_template.S"
+ 	mov	x0, #0
+ 	ret
+@@ -65,6 +66,12 @@ EXPORT_SYMBOL(__arch_copy_in_user)
+ 
+ 	.section .fixup,"ax"
+ 	.align	2
++9997:	cmp	dst, dstin
++	b.ne	9998f
++	// Before being absolutely sure we couldn't copy anything, try harder
++USER(9998f, ldtrb tmp1w, [srcin])
++USER(9998f, sttrb tmp1w, [dst])
++	add	dst, dst, #1
+ 9998:	sub	x0, end, dst			// bytes not copied
+ 	ret
+ 	.previous
+diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
+index 043da90f5dd7d..9f380eecf6531 100644
+--- a/arch/arm64/lib/copy_to_user.S
++++ b/arch/arm64/lib/copy_to_user.S
+@@ -32,7 +32,7 @@
+ 	.endm
+ 
+ 	.macro strh1 reg, ptr, val
+-	user_ldst 9998f, sttrh, \reg, \ptr, \val
++	user_ldst 9997f, sttrh, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro ldr1 reg, ptr, val
+@@ -40,7 +40,7 @@
+ 	.endm
+ 
+ 	.macro str1 reg, ptr, val
+-	user_ldst 9998f, sttr, \reg, \ptr, \val
++	user_ldst 9997f, sttr, \reg, \ptr, \val
+ 	.endm
+ 
+ 	.macro ldp1 reg1, reg2, ptr, val
+@@ -48,12 +48,14 @@
+ 	.endm
+ 
+ 	.macro stp1 reg1, reg2, ptr, val
+-	user_stp 9998f, \reg1, \reg2, \ptr, \val
++	user_stp 9997f, \reg1, \reg2, \ptr, \val
+ 	.endm
+ 
+ end	.req	x5
++srcin	.req	x15
+ SYM_FUNC_START(__arch_copy_to_user)
+ 	add	end, x0, x2
++	mov	srcin, x1
+ #include "copy_template.S"
+ 	mov	x0, #0
+ 	ret
+@@ -62,6 +64,12 @@ EXPORT_SYMBOL(__arch_copy_to_user)
+ 
+ 	.section .fixup,"ax"
+ 	.align	2
++9997:	cmp	dst, dstin
++	b.ne	9998f
++	// Before being absolutely sure we couldn't copy anything, try harder
++	ldrb	tmp1w, [srcin]
++USER(9998f, sttrb tmp1w, [dst])
++	add	dst, dst, #1
+ 9998:	sub	x0, end, dst			// bytes not copied
+ 	ret
+ 	.previous
+diff --git a/arch/hexagon/kernel/vmlinux.lds.S b/arch/hexagon/kernel/vmlinux.lds.S
+index 35b18e55eae80..57465bff1fe49 100644
+--- a/arch/hexagon/kernel/vmlinux.lds.S
++++ b/arch/hexagon/kernel/vmlinux.lds.S
+@@ -38,6 +38,8 @@ SECTIONS
+ 	.text : AT(ADDR(.text)) {
+ 		_text = .;
+ 		TEXT_TEXT
++		IRQENTRY_TEXT
++		SOFTIRQENTRY_TEXT
+ 		SCHED_TEXT
+ 		CPUIDLE_TEXT
+ 		LOCK_TEXT
+@@ -59,14 +61,9 @@ SECTIONS
+ 
+ 	_end = .;
+ 
+-	/DISCARD/ : {
+-		EXIT_TEXT
+-		EXIT_DATA
+-		EXIT_CALL
+-	}
+-
+ 	STABS_DEBUG
+ 	DWARF_DEBUG
+ 	ELF_DETAILS
+ 
++	DISCARDS
+ }
+diff --git a/arch/m68k/68000/dragen2.c b/arch/m68k/68000/dragen2.c
+index 584893c57c373..62f10a9e1ab7f 100644
+--- a/arch/m68k/68000/dragen2.c
++++ b/arch/m68k/68000/dragen2.c
+@@ -11,6 +11,7 @@
+ #include <linux/init.h>
+ #include <asm/machdep.h>
+ #include <asm/MC68VZ328.h>
++#include "screen.h"
+ 
+ /***************************************************************************/
+ /*                        Init Drangon Engine hardware                     */
+diff --git a/arch/m68k/68000/screen.h b/arch/m68k/68000/screen.h
+new file mode 100644
+index 0000000000000..2089bdf02688f
+--- /dev/null
++++ b/arch/m68k/68000/screen.h
+@@ -0,0 +1,804 @@
++/* Created with The GIMP */
++#define screen_width 320
++#define screen_height 240
++static unsigned char screen_bits[] = {
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd3, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x56,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x0d, 0x55, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x29, 0x5a, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x6b, 0x55,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x70, 0xe0, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x70, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x38, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x03, 0x95, 0x6a, 0x00, 0x00, 0x00, 0x03, 0xf8, 0x70, 0xe0, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x70, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x38,
++   0x01, 0xf8, 0x00, 0x1f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x0c, 0x5b, 0x55, 0x00, 0x00, 0x00, 0x0f,
++   0xfc, 0x70, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x70, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x38, 0x03, 0xfc, 0x00, 0x7f, 0x80, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x31, 0xb5, 0x56,
++   0x00, 0x00, 0x00, 0x1e, 0x0c, 0x70, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x70, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x38, 0x03, 0x0e, 0x00, 0x61,
++   0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0xd7, 0x55, 0x55, 0x00, 0x03, 0x8e, 0x1c, 0x00, 0x70, 0xe1, 0x9e,
++   0x0e, 0x38, 0xe1, 0x80, 0x70, 0xc0, 0xf0, 0x73, 0x33, 0xc0, 0x78, 0x38,
++   0x00, 0x0e, 0x00, 0xe0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x02, 0xa9, 0xaa, 0xad, 0x00, 0x03, 0x8e, 0x38,
++   0x00, 0x70, 0xe1, 0xff, 0x0e, 0x38, 0xe3, 0x80, 0x71, 0xc3, 0xf8, 0x77,
++   0x3f, 0xe1, 0xfc, 0x38, 0x00, 0x0e, 0x00, 0xef, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x2b, 0x55, 0x6a,
++   0x00, 0x03, 0x8e, 0x38, 0x00, 0x70, 0xe1, 0xe7, 0x0e, 0x38, 0x73, 0x00,
++   0x73, 0x83, 0x9c, 0x7f, 0x3c, 0xe1, 0xce, 0x38, 0x00, 0x1c, 0x00, 0xff,
++   0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x15, 0x56, 0xab, 0x55, 0x00, 0x03, 0x8e, 0x38, 0x00, 0x70, 0xe1, 0xc7,
++   0x0e, 0x38, 0x76, 0x00, 0x77, 0x07, 0x1c, 0x78, 0x38, 0xe3, 0x8e, 0x38,
++   0x00, 0x78, 0x00, 0xf3, 0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x69, 0x55, 0x5a, 0xb7, 0x00, 0x03, 0x8e, 0x38,
++   0x00, 0x70, 0xe1, 0xc7, 0x0e, 0x38, 0x3c, 0x00, 0x7e, 0x07, 0xfc, 0x70,
++   0x38, 0xe3, 0xfe, 0x38, 0x00, 0xf0, 0x00, 0xe1, 0xc0, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xa5, 0x9a, 0xab, 0x6d,
++   0x00, 0x03, 0x8e, 0x3c, 0x00, 0x70, 0xe1, 0xc7, 0x0e, 0x38, 0x3e, 0x00,
++   0x7f, 0x07, 0xfc, 0x70, 0x38, 0xe3, 0xfe, 0x38, 0x01, 0xc0, 0x00, 0xe1,
++   0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05,
++   0x4d, 0x75, 0xb6, 0xd5, 0x00, 0x03, 0x8e, 0x1c, 0x00, 0x70, 0xe1, 0xc7,
++   0x0e, 0x38, 0x77, 0x00, 0x77, 0x87, 0x00, 0x70, 0x38, 0xe3, 0x80, 0x38,
++   0x03, 0x80, 0x00, 0xe1, 0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x14, 0xaa, 0xca, 0xd5, 0x5b, 0x00, 0x03, 0x9e, 0x1f,
++   0x0c, 0x70, 0xe1, 0xc7, 0x0e, 0x78, 0x67, 0x00, 0x73, 0xc7, 0x8c, 0x70,
++   0x38, 0xe3, 0xc6, 0x38, 0x03, 0xfe, 0x38, 0x71, 0xc0, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x2a, 0xb5, 0xbb, 0x5b, 0x6a,
++   0x00, 0x03, 0xfe, 0x0f, 0xfc, 0x70, 0xe1, 0xc7, 0x0f, 0xf8, 0xe3, 0x80,
++   0x71, 0xe3, 0xfc, 0x70, 0x38, 0xe1, 0xfe, 0x38, 0x03, 0xfe, 0x38, 0x7f,
++   0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xa9,
++   0x6b, 0x56, 0xd6, 0xad, 0x00, 0x01, 0xe6, 0x03, 0xf0, 0x70, 0xe1, 0xc7,
++   0x07, 0x98, 0xc3, 0x80, 0x70, 0xe0, 0xf8, 0x70, 0x38, 0xe0, 0x7c, 0x38,
++   0x03, 0xfe, 0x38, 0x1f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x03, 0x55, 0x55, 0x6a, 0xba, 0xeb, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x15, 0x56, 0xd5, 0xd5, 0xd5, 0xac,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x2a, 0x5a,
++   0xb7, 0x3d, 0x57, 0x5b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x75, 0x6d, 0xaa, 0xd3, 0xac, 0xaa, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x8b, 0x6a, 0xb6, 0xde, 0x6b, 0xb6,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xba, 0xad,
++   0xeb, 0x32, 0xda, 0xad, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0c,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x1e, 0xaa, 0xb5, 0xad, 0x6e, 0x96, 0xaa, 0x00, 0x00, 0x00, 0x00,
++   0x01, 0x86, 0x00, 0x00, 0x07, 0xf0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0xfc, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0xc3, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x15, 0x6b, 0x6f, 0x5a, 0xb5, 0x75, 0x69,
++   0x00, 0x00, 0x00, 0x00, 0x01, 0x86, 0x00, 0x00, 0x06, 0x18, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xc3,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf5, 0xda, 0xd9,
++   0x53, 0xeb, 0x6b, 0x4b, 0x00, 0x00, 0xf0, 0xde, 0x03, 0xe6, 0xf0, 0x78,
++   0x06, 0x0c, 0x6c, 0x7c, 0x1f, 0x87, 0x86, 0xf0, 0xc0, 0xde, 0x0f, 0xcc,
++   0xde, 0x0f, 0x00, 0xc3, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x03, 0xd6, 0xab, 0x57, 0x6e, 0x8a, 0xd6, 0xba, 0x00, 0x01, 0x98, 0xe7,
++   0x01, 0x87, 0x38, 0xcc, 0x06, 0x0c, 0x6c, 0x06, 0x31, 0x8c, 0xc7, 0x38,
++   0xc0, 0xe7, 0x18, 0xcc, 0xe7, 0x19, 0x80, 0xc3, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x0e, 0x2d, 0x5e, 0xda, 0x55, 0xbb, 0x59, 0x42,
++   0x00, 0x03, 0x0c, 0xc3, 0x01, 0x86, 0x19, 0x8c, 0x06, 0x0c, 0x70, 0x06,
++   0x61, 0x98, 0x66, 0x18, 0xf8, 0xc3, 0x30, 0xcc, 0xc3, 0x31, 0x80, 0xc3,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1a, 0xf7, 0x6b, 0x6a,
++   0xab, 0x56, 0xd6, 0xbf, 0x00, 0x03, 0x0c, 0xc3, 0x01, 0x86, 0x19, 0xfc,
++   0x06, 0x0c, 0x60, 0x7e, 0x61, 0x98, 0x66, 0x18, 0xc0, 0xc3, 0x30, 0xcc,
++   0xc3, 0x3f, 0x80, 0xc3, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x75, 0x94, 0xdb, 0x75, 0x6e, 0xda, 0xaa, 0xa2, 0x00, 0x03, 0x0c, 0xc3,
++   0x01, 0x86, 0x19, 0x80, 0x06, 0x0c, 0x60, 0xc6, 0x61, 0x98, 0x66, 0x18,
++   0xc0, 0xc3, 0x30, 0xcc, 0xc3, 0x30, 0x00, 0xc3, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x01, 0xdb, 0x6b, 0xad, 0x9b, 0x27, 0x55, 0x55, 0x55,
++   0x00, 0x03, 0x0c, 0xc3, 0x01, 0x86, 0x19, 0x80, 0x06, 0x0c, 0x60, 0xc6,
++   0x33, 0x98, 0x66, 0x18, 0xc0, 0xc3, 0x19, 0xcc, 0xc3, 0x30, 0x00, 0xc3,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x07, 0x36, 0xde, 0xb5, 0x65,
++   0x75, 0x5a, 0xd5, 0x56, 0x00, 0x01, 0x98, 0xc3, 0x01, 0x86, 0x18, 0xc4,
++   0x06, 0x18, 0x60, 0xce, 0x1d, 0x8c, 0xc6, 0x18, 0xc0, 0xc3, 0x0e, 0xcc,
++   0xc3, 0x18, 0x80, 0xc3, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0a,
++   0xed, 0xb5, 0x6d, 0x56, 0x55, 0x55, 0x55, 0xae, 0x00, 0x00, 0xf0, 0xc3,
++   0x00, 0xe6, 0x18, 0x78, 0x07, 0xf0, 0x60, 0x77, 0x01, 0x87, 0x86, 0x18,
++   0xfc, 0xc3, 0x00, 0xcc, 0xc3, 0x0f, 0x00, 0xc3, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x7a, 0xab, 0x6d, 0xda, 0xaa, 0xca, 0xd5, 0x6d, 0x58,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x63, 0x00, 0x00, 0x00, 0x00, 0x00, 0x31, 0x80, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x56, 0xda, 0xaa, 0xea, 0xae,
++   0x9b, 0x5a, 0xa9, 0x4b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x3e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1f, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0xd5,
++   0xbb, 0xfd, 0xad, 0xad, 0x69, 0xea, 0xcb, 0x55, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x05, 0xaf, 0xb6, 0x8a, 0x6a, 0xb9, 0x5a, 0x2d, 0xba, 0xb6,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1d, 0x7a, 0x75, 0x6e, 0xcd, 0x52,
++   0x9b, 0xdb, 0x55, 0xaa, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x5b, 0xad,
++   0xaf, 0x95, 0x55, 0xed, 0x55, 0x55, 0x6b, 0x55, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0xea, 0xb5, 0xec, 0xfd, 0x59, 0x5a, 0xb5, 0x56, 0xaa, 0xb6,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0xb7, 0x6b, 0x36, 0x4a, 0xeb, 0xab,
++   0x2d, 0x6a, 0x9b, 0x6b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0xad, 0x6f,
++   0x6b, 0xeb, 0xdd, 0x7b, 0x6a, 0x55, 0xb5, 0x56, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x17, 0x76, 0xda, 0xd6, 0x5d, 0x62, 0xc6, 0xd5, 0x36, 0xaa, 0xb5,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x7c, 0xdb, 0x56, 0xbc, 0xf2, 0xa5, 0x5d,
++   0x96, 0xaa, 0xb6, 0xd6, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xab, 0xee, 0xfd,
++   0xd5, 0x2c, 0x6d, 0x9a, 0x75, 0x5b, 0x6a, 0xb5, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x03, 0xee, 0xad, 0x55, 0x15, 0xef, 0x54, 0xf6, 0xc5, 0x6a, 0xa9, 0xa6,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x05, 0x5b, 0xd6, 0xad, 0xbe, 0xb0, 0xa6, 0x35,
++   0x5b, 0xd5, 0x4a, 0x9b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1b, 0xeb, 0x5d, 0xf5,
++   0xaa, 0xd7, 0xf4, 0x75, 0xba, 0x55, 0xaf, 0x6e, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x37, 0x5e, 0xf7, 0x55, 0x61, 0xbc, 0x08, 0x5b, 0x55, 0x5a, 0xa9, 0xb5,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x01, 0xf6, 0xba, 0xaa, 0xaa, 0x9f, 0x69, 0xec, 0xd5,
++   0x4b, 0xa9, 0xaa, 0x2a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x2d, 0xd5, 0xed, 0x5a,
++   0xf2, 0xd6, 0xae, 0xdb, 0x9e, 0x27, 0x5f, 0xb5, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0f,
++   0x6d, 0x5b, 0xaa, 0xda, 0xae, 0x95, 0x58, 0xd5, 0x34, 0x6d, 0x68, 0xad,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x15, 0xdb, 0xd7, 0x56, 0xb5, 0xd5, 0x6d, 0x29, 0x5b,
++   0x4b, 0xdb, 0x57, 0xb6, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x36, 0xda, 0xac, 0xd5, 0x4b,
++   0x57, 0x6a, 0x9b, 0x76, 0x5c, 0x95, 0x54, 0x2a, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xfa,
++   0xb7, 0xfb, 0xab, 0xb6, 0xea, 0xad, 0x62, 0x95, 0xa1, 0xf5, 0xa9, 0xad,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x03, 0x57, 0xaf, 0x6b, 0x72, 0x54, 0x99, 0xd5, 0x0e, 0xf3,
++   0x5f, 0x15, 0x2a, 0xaa, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xea, 0xd4, 0xad, 0x5d, 0x35,
++   0xb5, 0x34, 0xb2, 0xaa, 0x54, 0xba, 0xad, 0x55, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1f, 0x5e,
++   0xaf, 0xed, 0xab, 0xea, 0xb3, 0xaa, 0x6a, 0xad, 0xd5, 0xd5, 0xaa, 0xab,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x2a, 0xb9, 0xf5, 0xbb, 0xb5, 0x55, 0x64, 0x57, 0x55, 0x6b,
++   0x5d, 0xd0, 0x52, 0xac, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xab, 0x5b, 0xb6, 0x6d, 0x37,
++   0x6f, 0xaa, 0x5b, 0xa2, 0xb5, 0x1f, 0xed, 0x73, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x65, 0xaa,
++   0x6e, 0xfb, 0xd3, 0x5a, 0xd4, 0x54, 0xaa, 0x5e, 0xc3, 0xb5, 0x15, 0x56,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x07, 0xaf, 0x7f, 0xea, 0xb6, 0xaa, 0xd6, 0xad, 0xf1, 0xd2, 0xd5,
++   0x56, 0x55, 0x6a, 0xd5, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x0c, 0xbd, 0xda, 0xaf, 0x75, 0x6f, 0x5a,
++   0x45, 0x26, 0xb7, 0x5b, 0x20, 0xdd, 0x55, 0x2a, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x3a, 0xeb, 0x7f,
++   0xf5, 0x4d, 0x95, 0x76, 0xd9, 0x56, 0xb5, 0x52, 0x6d, 0x12, 0xad, 0xaa,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x57, 0x3f, 0xa4, 0x8f, 0xb9, 0x6e, 0xa2, 0x6f, 0x1d, 0x6a, 0xef,
++   0xb5, 0x6d, 0xaa, 0xb5, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x03, 0xfd, 0x75, 0x2f, 0x7c, 0x57, 0x88, 0xf6,
++   0x80, 0xb5, 0x57, 0x5c, 0xd7, 0x55, 0x54, 0xae, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x39, 0xff, 0xf6,
++   0xab, 0x7a, 0x57, 0x94, 0xef, 0x0d, 0xe4, 0xea, 0xa8, 0xaa, 0xab, 0xff,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x0c, 0xf7, 0xff, 0x6d, 0xb7, 0x4b, 0x39, 0x17, 0x2a, 0xfa, 0xb7, 0x56,
++   0xb7, 0xaa, 0xed, 0xd9, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x18, 0xbe, 0xfd, 0xda, 0x9e, 0xec, 0xe6, 0xd5,
++   0x52, 0x2a, 0x58, 0xa9, 0x54, 0x5a, 0x97, 0xe7, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x72, 0xe9, 0xf6, 0xbe,
++   0xd4, 0x5b, 0xad, 0x63, 0x61, 0xf7, 0xb7, 0xaf, 0x55, 0x52, 0x9f, 0xee,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02,
++   0xc9, 0xbd, 0xfb, 0x6b, 0xeb, 0xf5, 0x6b, 0x2d, 0x57, 0x52, 0x94, 0xaa,
++   0xb1, 0xab, 0x4a, 0x14, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x02, 0xdb, 0xcb, 0xff, 0xf5, 0x3b, 0x55, 0xa0, 0x00,
++   0x45, 0x6e, 0xb5, 0xb5, 0x65, 0x52, 0x00, 0x4c, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0d, 0x2e, 0x7d, 0xdb, 0xa5,
++   0xca, 0xbe, 0x80, 0x00, 0x0a, 0x54, 0xaa, 0xa5, 0x45, 0x08, 0x09, 0x15,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x38,
++   0xd8, 0xd3, 0x1b, 0xae, 0xb7, 0xea, 0x00, 0x00, 0x00, 0x95, 0xaa, 0x56,
++   0xdc, 0xe1, 0x21, 0x35, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x53, 0x41, 0xff, 0x77, 0xbd, 0xaa, 0x58, 0x00, 0x00,
++   0x00, 0xdb, 0x75, 0xd4, 0xb2, 0xa4, 0x07, 0xea, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0xef, 0x07, 0xbd, 0x65, 0xeb,
++   0xa2, 0xd0, 0x00, 0x00, 0x00, 0x25, 0x4b, 0x35, 0x56, 0x80, 0x77, 0x6a,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x28,
++   0x01, 0x28, 0x9a, 0xb6, 0xd7, 0x60, 0x00, 0x00, 0x00, 0x0c, 0xaa, 0xaa,
++   0xaa, 0x02, 0x07, 0x55, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x17, 0xe0, 0x17, 0xb1, 0xbf, 0xee, 0xb4, 0xc0, 0x00, 0x00,
++   0x00, 0x08, 0xaa, 0xad, 0x68, 0x54, 0x04, 0x5a, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x09, 0x74, 0x87, 0x77, 0x72, 0x5a,
++   0xab, 0x80, 0x00, 0x00, 0x00, 0x02, 0xd4, 0xb5, 0x52, 0x08, 0x5b, 0xd4,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x75, 0x00,
++   0x1f, 0xd6, 0xef, 0xda, 0xd5, 0x00, 0x00, 0x00, 0x0c, 0x01, 0x52, 0xd5,
++   0x40, 0xf1, 0x55, 0x15, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x9b, 0x00, 0x17, 0x4b, 0x92, 0xb7, 0xaf, 0x00, 0x00, 0x00,
++   0x0e, 0x01, 0x4e, 0xaa, 0xae, 0x95, 0x55, 0x6d, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xb8, 0x11, 0x2b, 0x13, 0x76, 0xef,
++   0x54, 0x00, 0x00, 0x00, 0x0f, 0x00, 0x54, 0xaa, 0xaa, 0xb5, 0xad, 0x55,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x40, 0x00,
++   0x7e, 0x7c, 0x65, 0xf4, 0x78, 0x00, 0x00, 0x00, 0x1f, 0x00, 0xab, 0x56,
++   0xd5, 0x55, 0x59, 0x5a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x0a, 0x40, 0x42, 0x6d, 0xce, 0xe5, 0xae, 0x54, 0x00, 0x00, 0x00,
++   0x18, 0x00, 0x6d, 0x75, 0x5d, 0x55, 0x4d, 0x52, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x2a, 0x82, 0x03, 0xdc, 0x54, 0xbf, 0x61,
++   0xa8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x12, 0xad, 0xa2, 0xb5, 0x60, 0xad,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x5a, 0x00, 0x16,
++   0xd9, 0xb5, 0xa4, 0xc7, 0x6c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x5a, 0xea,
++   0xae, 0xd5, 0x57, 0xaa, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x54, 0x04, 0x0d, 0x76, 0xbb, 0x4b, 0xbc, 0x58, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x36, 0x95, 0xa9, 0x55, 0x54, 0xaa, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x01, 0x50, 0x08, 0x5b, 0xc5, 0x3d, 0x97, 0x0a,
++   0xd0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0xb6, 0xab, 0x2b, 0x55, 0xab,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x24, 0x20, 0x3d,
++   0x59, 0x7b, 0x76, 0x37, 0x58, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1d, 0x6d,
++   0x75, 0xb5, 0x55, 0x54, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x12, 0xf0, 0x01, 0xff, 0x21, 0xa8, 0xc3, 0x74, 0xa8, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x1b, 0xa9, 0x4b, 0x55, 0x55, 0x2a, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x25, 0xc0, 0x41, 0xca, 0x9c, 0x77, 0x58, 0x9d,
++   0x68, 0x00, 0x00, 0x00, 0x00, 0x00, 0x15, 0x56, 0xb4, 0xad, 0xb2, 0x55,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4b, 0x80, 0x0f, 0xbe,
++   0xc0, 0xf6, 0xd5, 0xb3, 0x50, 0x00, 0x00, 0x00, 0x00, 0x00, 0x15, 0x54,
++   0xa5, 0xaa, 0x55, 0x54, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x17, 0x04, 0x87, 0xbc, 0x6a, 0x3b, 0xac, 0x9d, 0x58, 0x00, 0x00, 0x03,
++   0xe0, 0x00, 0x16, 0xab, 0x55, 0x4a, 0xd6, 0xa5, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x01, 0x5c, 0x00, 0x1f, 0x54, 0xc9, 0x2a, 0xb7, 0xa6,
++   0xd8, 0x0f, 0x00, 0x07, 0xf8, 0x00, 0x15, 0x6a, 0x55, 0x5a, 0xa4, 0xaa,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x7a, 0x08, 0x74, 0xca,
++   0x9c, 0x5a, 0xa8, 0xc5, 0x30, 0x1f, 0x80, 0x0f, 0xfc, 0x00, 0x0d, 0x55,
++   0xaa, 0xa5, 0x55, 0x49, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0a,
++   0xf0, 0x20, 0xea, 0x5a, 0xb0, 0xe7, 0x95, 0x7d, 0x10, 0x3f, 0xc0, 0x1f,
++   0xfe, 0x00, 0x0a, 0xaa, 0xaa, 0xad, 0x4a, 0x95, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x03, 0xf4, 0x02, 0x7d, 0xb5, 0x8f, 0x9c, 0xaa, 0xe9,
++   0xa0, 0x3f, 0xc0, 0x1f, 0xfe, 0x00, 0x06, 0xb5, 0x54, 0xa9, 0x2a, 0x54,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x17, 0x40, 0x47, 0xeb, 0xab,
++   0x75, 0x51, 0x55, 0x4d, 0xd8, 0x3f, 0xe0, 0x3f, 0x3f, 0x00, 0x0a, 0xaa,
++   0xa9, 0x4a, 0xaa, 0xad, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x23,
++   0xc0, 0x06, 0xff, 0xe5, 0xb6, 0xbb, 0xa9, 0x34, 0x48, 0x30, 0xe0, 0x3c,
++   0x0f, 0x00, 0x0a, 0xaa, 0x8a, 0xaa, 0xaa, 0xa4, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0xaf, 0x80, 0x9f, 0xa1, 0x8d, 0xb5, 0xd6, 0x93, 0xcd,
++   0x90, 0x62, 0x60, 0x3c, 0x27, 0x00, 0x06, 0xaa, 0xb5, 0xaa, 0xaa, 0xa9,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x9f, 0x92, 0x3f, 0xb5, 0x36,
++   0x56, 0xb5, 0x9d, 0x55, 0x90, 0x62, 0x60, 0x38, 0x17, 0x80, 0x0d, 0x54,
++   0xaa, 0x54, 0xaa, 0x9a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x7a,
++   0x00, 0xba, 0xab, 0x73, 0xe7, 0xa2, 0xb6, 0x55, 0x2c, 0x61, 0x60, 0x38,
++   0x17, 0x80, 0x09, 0x6b, 0x4a, 0xd5, 0x4a, 0x92, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x0d, 0xfe, 0x03, 0x6d, 0xea, 0x08, 0x51, 0x1d, 0x2b, 0x35,
++   0x08, 0x61, 0x60, 0x38, 0x07, 0x80, 0x06, 0xda, 0xaa, 0xa9, 0x55, 0x55,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x27, 0xf8, 0x4e, 0xfb, 0x89, 0xde,
++   0x35, 0xea, 0x6b, 0x72, 0x28, 0x60, 0x70, 0x38, 0x07, 0x80, 0x05, 0x52,
++   0xaa, 0xaa, 0x95, 0x55, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0f, 0xf8,
++   0x0d, 0x63, 0xbd, 0xe3, 0x57, 0x55, 0xb6, 0x49, 0xcc, 0x20, 0x7f, 0xf8,
++   0x07, 0x80, 0x0a, 0xaa, 0xaa, 0xaa, 0xaa, 0xad, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x5d, 0xe1, 0x34, 0xc6, 0xab, 0xa7, 0x91, 0x77, 0x37, 0xcc,
++   0x48, 0x30, 0x9f, 0x3c, 0x07, 0x80, 0x0a, 0xaa, 0xaa, 0xaa, 0xaa, 0x69,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x3b, 0xc0, 0x34, 0xdb, 0x4e, 0xf9,
++   0xae, 0xd5, 0x54, 0xe1, 0xb4, 0x19, 0x3f, 0xfe, 0x0f, 0x00, 0x05, 0x55,
++   0x55, 0x55, 0x52, 0x92, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x7f, 0x89,
++   0xf1, 0xf7, 0xe5, 0x57, 0xea, 0x8d, 0x4c, 0xda, 0xac, 0x13, 0xff, 0xff,
++   0x80, 0x00, 0x0a, 0xd5, 0xaa, 0xa4, 0x95, 0x54, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0xf7, 0x02, 0x53, 0x6c, 0x72, 0x10, 0xa3, 0x6a, 0x74, 0xea,
++   0x34, 0x07, 0xff, 0xff, 0xe0, 0x00, 0x05, 0x4e, 0x54, 0x95, 0x55, 0x55,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x13, 0xfc, 0x97, 0xd6, 0x55, 0xb6, 0xab,
++   0xb7, 0x45, 0x46, 0xad, 0xf4, 0x1f, 0xff, 0xff, 0xfe, 0x00, 0x05, 0x2a,
++   0xd5, 0x54, 0x95, 0x51, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x07, 0xcf, 0x0e,
++   0xfb, 0x2d, 0x12, 0xd4, 0xb8, 0xb6, 0xeb, 0x6a, 0x10, 0x3f, 0xff, 0xff,
++   0xff, 0x00, 0x02, 0xda, 0x8a, 0xab, 0x6a, 0xaa, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x57, 0xbc, 0x3a, 0xc6, 0x0d, 0x76, 0xb1, 0x77, 0x15, 0x2a, 0x56,
++   0x34, 0x7f, 0xff, 0xff, 0xff, 0x80, 0x02, 0xa5, 0x75, 0x2a, 0x52, 0xaa,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x02, 0x0f, 0xfc, 0xdf, 0x75, 0x9d, 0x5a, 0x67,
++   0x15, 0x59, 0xb3, 0x6c, 0x4c, 0x7f, 0xff, 0xff, 0xff, 0x80, 0x05, 0x55,
++   0x8a, 0xaa, 0xaa, 0xa4, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1f, 0xf9, 0xdb,
++   0xed, 0x36, 0x5a, 0xeb, 0xad, 0x6b, 0x57, 0x5d, 0xa4, 0x7f, 0xff, 0xff,
++   0xf3, 0x80, 0x05, 0x54, 0xb2, 0xaa, 0xaa, 0xa9, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x7f, 0xf4, 0x5a, 0xaa, 0xdb, 0x40, 0x20, 0xea, 0xaa, 0xa8, 0x19,
++   0xe8, 0x1f, 0xff, 0xff, 0xe7, 0x00, 0x02, 0x55, 0x4b, 0xd5, 0x55, 0x4a,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x05, 0xff, 0xf2, 0xb5, 0x7d, 0x56, 0xaa, 0x76,
++   0xa5, 0xce, 0xb4, 0xd1, 0x2c, 0x0f, 0xff, 0xff, 0x0f, 0x00, 0x02, 0xa5,
++   0x5a, 0x2a, 0x55, 0x59, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xff, 0xc1, 0x31,
++   0x9a, 0x8d, 0x61, 0x2e, 0xb2, 0xb5, 0x57, 0x59, 0x88, 0x07, 0xff, 0xf8,
++   0x7e, 0x06, 0x00, 0xaa, 0x65, 0xd2, 0xaa, 0xaa, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x1f, 0x7e, 0x02, 0x55, 0x67, 0xb5, 0x5b, 0x05, 0xfa, 0xab, 0x0a, 0x8d,
++   0xe4, 0x03, 0xff, 0xc3, 0xfe, 0x07, 0x01, 0xaa, 0xaa, 0x26, 0xaa, 0xa9,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x16, 0x7d, 0x10, 0x57, 0x9d, 0x4a, 0x8a, 0xd7,
++   0x95, 0x5a, 0xd5, 0x4d, 0x58, 0x00, 0xfc, 0x1f, 0xe6, 0x03, 0xc1, 0x55,
++   0x52, 0xd5, 0x55, 0x2a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x26, 0xe1, 0x42, 0xa4,
++   0x7d, 0xd7, 0xba, 0xb0, 0x24, 0xd5, 0x97, 0xc5, 0xd2, 0x00, 0x00, 0x7f,
++   0x87, 0x03, 0xe0, 0x42, 0xaa, 0x95, 0x55, 0x4b, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x0d, 0x42, 0x17, 0xfb, 0x8c, 0xac, 0x46, 0xae, 0xdd, 0xb3, 0x68, 0x92,
++   0x6c, 0x03, 0x03, 0xfe, 0x0f, 0x01, 0xe0, 0x5a, 0x55, 0x62, 0xaa, 0x54,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x01, 0x3d, 0x0a, 0x32, 0xd5, 0x7b, 0xc9, 0xde, 0xad,
++   0x91, 0x4a, 0xaa, 0xc6, 0x68, 0x21, 0xff, 0xf8, 0x7f, 0x00, 0x60, 0x2a,
++   0xa9, 0x2e, 0xa9, 0x55, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x22, 0x64, 0xb3, 0x18,
++   0x8e, 0xae, 0xb4, 0x46, 0x9a, 0xbb, 0x54, 0xd5, 0xb4, 0x30, 0xff, 0xe0,
++   0xff, 0x80, 0x00, 0x52, 0xa5, 0x51, 0x55, 0x4a, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05,
++   0x82, 0xc1, 0x52, 0xb3, 0xcb, 0xc5, 0xd4, 0xda, 0xd3, 0x4a, 0x68, 0x87,
++   0x68, 0x30, 0x3f, 0x03, 0xff, 0x80, 0x00, 0x2a, 0xaa, 0x55, 0x45, 0x55,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x01, 0x5b, 0xc6, 0xa9, 0x19, 0x74, 0x3d, 0xa9, 0x92,
++   0xab, 0x5b, 0x95, 0xb0, 0x28, 0x78, 0x00, 0x07, 0xff, 0xc0, 0x00, 0x15,
++   0x2a, 0x95, 0x2a, 0x2a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x10, 0x0f, 0x6b, 0xa2,
++   0xb7, 0x22, 0x61, 0x2b, 0x52, 0xaa, 0x33, 0x1d, 0xa0, 0x7c, 0x00, 0x0f,
++   0xff, 0xe0, 0x00, 0x14, 0xa4, 0xaa, 0xa9, 0x52, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x09,
++   0x00, 0x23, 0x27, 0x64, 0x8d, 0xfb, 0x64, 0xa7, 0x5a, 0xaa, 0xd3, 0x5a,
++   0xa0, 0x7c, 0x00, 0x3f, 0xff, 0xe0, 0x00, 0x05, 0x29, 0x52, 0x45, 0x55,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x02, 0x4a, 0x5f, 0x66, 0x6a, 0xab, 0x22, 0xaf, 0xec,
++   0xa9, 0x55, 0x17, 0x17, 0xa0, 0xfe, 0x00, 0x7f, 0xff, 0xf0, 0x00, 0x02,
++   0xaa, 0x4a, 0xaa, 0x94, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x11, 0x8d, 0x83, 0x49, 0xad,
++   0x9a, 0x57, 0x50, 0x6b, 0x6b, 0x5a, 0xd1, 0x1a, 0x80, 0xff, 0x00, 0xff,
++   0xff, 0xf0, 0x00, 0x05, 0x52, 0x95, 0x29, 0x55, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x8c,
++   0x1b, 0x4b, 0xcb, 0x45, 0x7b, 0x4e, 0x4b, 0x5d, 0x2a, 0xd5, 0x16, 0x29,
++   0x41, 0xff, 0x87, 0xff, 0xff, 0xf8, 0x00, 0x01, 0x2a, 0xa5, 0x4a, 0x52,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x20, 0x52, 0x98, 0x9d, 0x0e, 0x95, 0x65, 0x38, 0x95,
++   0x55, 0x6a, 0xab, 0x35, 0x81, 0xff, 0xff, 0xff, 0xff, 0xf8, 0x00, 0x01,
++   0xa9, 0x14, 0xa9, 0x4a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x80, 0xe3, 0x2b, 0x74, 0x4d,
++   0xb5, 0x55, 0xd5, 0x95, 0x69, 0x4a, 0xac, 0x6e, 0x03, 0xff, 0xff, 0xff,
++   0xff, 0xfc, 0x00, 0x01, 0x25, 0x65, 0x2a, 0x95, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
++   0x9a, 0xb7, 0x0b, 0xb0, 0x6f, 0xfc, 0xfd, 0x6a, 0x8b, 0xd5, 0x4e, 0xa8,
++   0x03, 0xff, 0xff, 0xff, 0xff, 0xfc, 0x00, 0x01, 0x49, 0x14, 0xa5, 0x6a,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x08, 0x02, 0x94, 0xab, 0x3d, 0xc9, 0x6b, 0xb5, 0x43, 0xee,
++   0xb6, 0x95, 0x54, 0x9a, 0x07, 0xff, 0xff, 0xff, 0xff, 0xfc, 0x00, 0x00,
++   0xaa, 0x54, 0xaa, 0x4a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x42, 0x13, 0x9f, 0x68, 0x5b,
++   0xff, 0xfa, 0xd7, 0xfa, 0xa5, 0x6a, 0xac, 0xd4, 0x07, 0xff, 0xff, 0xff,
++   0xff, 0xfe, 0x00, 0x00, 0x55, 0x53, 0x2a, 0xaa, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x04,
++   0x65, 0x3c, 0x8b, 0x93, 0x87, 0x76, 0x75, 0xf5, 0x57, 0x55, 0x5c, 0xa8,
++   0x0f, 0xff, 0xff, 0xff, 0xff, 0xfe, 0x00, 0x00, 0x0b, 0x54, 0x95, 0x54,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x2a, 0x7b, 0xfd, 0x75, 0xba, 0xae, 0xfe, 0x44, 0x5a,
++   0x8a, 0xca, 0x9c, 0xf0, 0x0f, 0xff, 0xff, 0xff, 0xff, 0xfe, 0x00, 0x00,
++   0x14, 0x4a, 0xaa, 0xaa, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0xd8, 0xa2, 0xbd, 0x59, 0x52,
++   0x8e, 0xec, 0x48, 0xf1, 0x2d, 0x52, 0x9c, 0xa0, 0x0f, 0xff, 0xff, 0xff,
++   0xff, 0xfe, 0x02, 0x00, 0x02, 0x95, 0x54, 0x92, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0b, 0x35,
++   0x5f, 0xed, 0x6a, 0xf2, 0x95, 0x19, 0x09, 0xaa, 0x6a, 0x5d, 0x48, 0xc0,
++   0x0f, 0xff, 0xff, 0xff, 0xff, 0xff, 0x01, 0x00, 0x02, 0xaa, 0x52, 0xaa,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x04, 0x22, 0xa6, 0xc3, 0xdf, 0xd0, 0x66, 0x29, 0xf0, 0x01, 0x6a,
++   0xca, 0xd5, 0x59, 0x80, 0x07, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x80,
++   0x05, 0x2a, 0xaa, 0x52, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0d, 0xa4, 0x03, 0xf2, 0x61, 0xed,
++   0x17, 0xe2, 0x02, 0xaa, 0x99, 0x2a, 0x98, 0x80, 0x07, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0x80, 0x40, 0x02, 0x55, 0x55, 0x2a, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x09, 0x05,
++   0x88, 0xa7, 0x47, 0xdc, 0xc0, 0x00, 0x2a, 0x6b, 0x4a, 0x6a, 0xb0, 0x00,
++   0x07, 0xff, 0xff, 0xff, 0xff, 0xff, 0xc0, 0x20, 0x01, 0x55, 0x55, 0x55,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x80, 0x2e, 0x89, 0x11, 0xfb, 0x87, 0x9c, 0x90, 0x64, 0xaa, 0x9a,
++   0x4a, 0xa9, 0x1a, 0x00, 0x0f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xc0, 0x30,
++   0x00, 0xa8, 0x42, 0xaa, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x2b, 0x16, 0x2b, 0x85, 0x09, 0xf8,
++   0x83, 0x45, 0x5d, 0xad, 0x95, 0x49, 0x5c, 0x08, 0x1f, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xe0, 0x10, 0x00, 0xae, 0xea, 0x4a, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x49, 0x28,
++   0x3b, 0xf5, 0xbd, 0x30, 0x14, 0x96, 0x95, 0x6a, 0xd2, 0xb5, 0xaa, 0x10,
++   0x1f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xe0, 0x18, 0x00, 0xa9, 0x15, 0x55,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x02, 0x19, 0x26, 0x58, 0x76, 0x4a, 0x1e, 0xe1, 0x7a, 0xba, 0xb1, 0x5b,
++   0x4a, 0xab, 0x2c, 0x10, 0x3f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xf0, 0x08,
++   0x00, 0x2b, 0x6a, 0xa8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x90, 0x11, 0x21, 0x4d, 0x78, 0x6d, 0x20,
++   0x0a, 0x84, 0xda, 0xaf, 0x55, 0x56, 0x52, 0x20, 0x3f, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xf0, 0x0c, 0x00, 0x54, 0xaa, 0xa5, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x22, 0xe8, 0x62,
++   0xcd, 0x49, 0xd8, 0x42, 0xb4, 0xb8, 0x96, 0x3a, 0xa5, 0x54, 0xd4, 0x00,
++   0x7f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xf8, 0x04, 0x00, 0x14, 0xaa, 0x4a,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x05, 0x2d, 0x18, 0x4e, 0x0e, 0x61, 0xb0, 0x0a, 0xab, 0x6b, 0xaa, 0x37,
++   0x0d, 0x5a, 0xa8, 0x00, 0x7f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xf8, 0x06,
++   0x00, 0x2b, 0x4a, 0xaa, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x10, 0x40, 0xb1, 0xd8, 0x5f, 0xc1, 0x90, 0x55,
++   0x55, 0x4a, 0x32, 0xed, 0x79, 0x49, 0x50, 0x00, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xf8, 0x06, 0x00, 0x12, 0xaa, 0xa9, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x16, 0x00, 0xd1,
++   0xac, 0x89, 0x91, 0x4a, 0xaa, 0xaa, 0xa9, 0xb2, 0xc5, 0x65, 0x68, 0x00,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfc, 0x03, 0x00, 0x15, 0x2a, 0x4a,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02,
++   0x64, 0x54, 0x81, 0xcf, 0x89, 0x00, 0x1f, 0xf2, 0xaa, 0xd5, 0x6b, 0x35,
++   0x5a, 0xad, 0xa8, 0x01, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfc, 0x03,
++   0x00, 0x05, 0x6a, 0x92, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x89, 0x23, 0xf6, 0x52, 0x28, 0x19, 0xd5,
++   0x45, 0x55, 0x49, 0x54, 0xa5, 0x6a, 0xa0, 0x01, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xfc, 0x03, 0x00, 0x15, 0x24, 0xaa, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x09, 0x18, 0x04, 0x6c,
++   0x64, 0x61, 0x21, 0x95, 0xad, 0x56, 0xaa, 0xd7, 0xab, 0x55, 0x50, 0x03,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfc, 0x01, 0x80, 0x04, 0xaa, 0x94,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02,
++   0x12, 0x46, 0x20, 0xdc, 0xc4, 0x01, 0x01, 0xaa, 0xaa, 0xa9, 0x5b, 0x49,
++   0x2a, 0xad, 0xa0, 0x03, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfe, 0x01,
++   0x80, 0x02, 0xa4, 0xa5, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x10, 0x44, 0x08, 0x00, 0xd1, 0x84, 0x20, 0x05, 0xaa,
++   0xaa, 0x95, 0x66, 0xaa, 0xaa, 0xaa, 0xa0, 0x03, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xfe, 0x01, 0x80, 0x05, 0x55, 0x29, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x0c, 0xa8, 0x89, 0x99,
++   0x00, 0x22, 0x03, 0xd5, 0x4a, 0xaa, 0xa7, 0x4d, 0x5a, 0xd5, 0x40, 0x07,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfe, 0x01, 0x80, 0x02, 0x49, 0x4a,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x48,
++   0x18, 0x02, 0x03, 0x24, 0x10, 0xa8, 0x2f, 0x6a, 0xaa, 0xaa, 0x45, 0x51,
++   0x4a, 0xb5, 0x40, 0x07, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfe, 0x00,
++   0x80, 0x04, 0x95, 0x2a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x02, 0x01, 0x05, 0x48, 0x07, 0xae, 0x83, 0x02, 0x8c, 0x2a,
++   0x95, 0x52, 0xcd, 0x56, 0xad, 0x56, 0xc0, 0x07, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xfe, 0x00, 0xc0, 0x01, 0x52, 0x52, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x12, 0x58, 0x98, 0x2f, 0x30,
++   0x02, 0x82, 0x01, 0x6a, 0xaa, 0xaa, 0x8d, 0x29, 0x35, 0xb5, 0x04, 0x07,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfe, 0x00, 0xc0, 0x02, 0x94, 0x95,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08,
++   0x45, 0x30, 0x1f, 0xa0, 0x25, 0x1d, 0x15, 0x4a, 0xa5, 0x51, 0xad, 0x55,
++   0x6d, 0x6a, 0x84, 0x0f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfe, 0x00,
++   0xc0, 0x01, 0x29, 0x52, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x08, 0xa1, 0x2d, 0x21, 0x7f, 0x40, 0x4d, 0xb0, 0x35, 0x55,
++   0x15, 0x15, 0x79, 0x25, 0x4a, 0xca, 0x04, 0x0f, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xff, 0x00, 0xc0, 0x02, 0x52, 0x96, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x80, 0x40, 0x7f, 0x24,
++   0x8a, 0x58, 0xad, 0x55, 0x55, 0x52, 0x0a, 0x5a, 0xaa, 0xbb, 0x04, 0x0f,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0xc0, 0x00, 0xa5, 0x20,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x41, 0x09,
++   0x50, 0x42, 0xfd, 0xc0, 0x59, 0x53, 0x53, 0x4c, 0xaa, 0x54, 0x5a, 0xa2,
++   0xad, 0xaa, 0x04, 0x0f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00,
++   0xc0, 0x01, 0x4d, 0x55, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x08, 0x02, 0x15, 0x90, 0x7d, 0x15, 0x76, 0xac, 0xad, 0x52,
++   0x95, 0x55, 0xaa, 0x4c, 0xa5, 0x54, 0x04, 0x0f, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xff, 0x00, 0xc0, 0x00, 0xa5, 0x12, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x0c, 0x83, 0x01, 0xff, 0xf9,
++   0x15, 0x29, 0x51, 0x55, 0x52, 0x2c, 0xa2, 0x54, 0xab, 0x6c, 0x04, 0x0f,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x80, 0x01, 0x28, 0x54,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x98,
++   0x06, 0x01, 0xff, 0xe9, 0x55, 0xd5, 0x56, 0xa5, 0x55, 0xaa, 0xad, 0x51,
++   0x35, 0x54, 0x04, 0x0f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00,
++   0x80, 0x00, 0x4a, 0xa9, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x01, 0x08, 0x31, 0x1c, 0x12, 0xf4, 0xc5, 0x44, 0x5a, 0xaa, 0x49,
++   0xaa, 0xa9, 0xa8, 0x95, 0x55, 0x50, 0x04, 0x0f, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xff, 0x01, 0x80, 0x01, 0x52, 0x82, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x40, 0x48, 0x1c, 0x41, 0x3e, 0x2e,
++   0x8b, 0x54, 0xaa, 0xab, 0x52, 0xa5, 0x45, 0x24, 0x95, 0x50, 0x06, 0x0f,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x01, 0x00, 0x01, 0x24, 0x5a,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xc1, 0x00,
++   0x38, 0x01, 0x1d, 0x64, 0x9a, 0xa9, 0x2a, 0x94, 0xaa, 0xab, 0x55, 0x55,
++   0xaa, 0xd8, 0x02, 0x0f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x03,
++   0x00, 0x01, 0x4a, 0x92, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x13, 0xc8, 0x00, 0xb0, 0x02, 0x3e, 0x96, 0x35, 0x6a, 0xaa, 0x55,
++   0x4a, 0x95, 0x4a, 0x54, 0x55, 0x28, 0x03, 0x0f, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xff, 0x8f, 0xc0, 0x01, 0x29, 0x4a, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x83, 0x49, 0x20, 0x8c, 0x78, 0xb5,
++   0x65, 0x4a, 0xaa, 0xa6, 0xaa, 0xa5, 0x29, 0x45, 0xaa, 0xa8, 0x03, 0x0f,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xf8, 0x70, 0x01, 0x45, 0x28,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x25, 0x03, 0x20,
++   0x00, 0x30, 0x75, 0x4c, 0x95, 0x55, 0x54, 0xa9, 0x52, 0x95, 0x52, 0x95,
++   0x2a, 0xa8, 0x01, 0x8f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x80,
++   0x18, 0x00, 0x94, 0xa5, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x0d, 0x0c, 0x0c, 0x44, 0x18, 0xe5, 0xb9, 0xb5, 0x54, 0x92, 0x96,
++   0xaa, 0x55, 0x4a, 0x69, 0x55, 0x20, 0x01, 0xff, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xff, 0x00, 0x00, 0x01, 0x29, 0x29, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x01, 0x2e, 0x08, 0x1c, 0x01, 0x7b, 0xdc, 0x96,
++   0x4a, 0xaa, 0xaa, 0xa9, 0x4a, 0xaa, 0x95, 0x0a, 0x73, 0x50, 0x00, 0xff,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfe, 0x00, 0x00, 0x01, 0x52, 0x52,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1c, 0x20, 0x30,
++   0x44, 0x79, 0x88, 0xd4, 0x95, 0x49, 0x54, 0xab, 0x52, 0x94, 0xa4, 0xaa,
++   0xc6, 0x51, 0xe0, 0x7f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xfe, 0x00,
++   0x00, 0x00, 0x8a, 0x8a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x38, 0x2a, 0x21, 0x01, 0xfb, 0xa0, 0x7d, 0x54, 0xb5, 0x55, 0x54,
++   0x95, 0x55, 0x29, 0x55, 0x29, 0x57, 0xf8, 0x3f, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xfe, 0x00, 0x00, 0x02, 0x50, 0x52, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x02, 0x78, 0x90, 0x00, 0x0d, 0xfb, 0xca, 0xd2,
++   0xaa, 0x92, 0x49, 0x55, 0x55, 0x2a, 0x4a, 0x4a, 0xd3, 0x57, 0xfc, 0x1f,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xe0, 0x00, 0x00, 0x02, 0x95, 0x54,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x39, 0x20, 0xa4,
++   0x13, 0xfd, 0xab, 0x2a, 0x52, 0xa5, 0x52, 0x55, 0x52, 0xa9, 0x55, 0x55,
++   0x2a, 0x57, 0xff, 0x0f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xc6, 0x00,
++   0x00, 0x0d, 0x24, 0x89, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x08, 0x30, 0xc0, 0x00, 0x77, 0xb5, 0x55, 0x69, 0x4a, 0xaa, 0x96, 0xaa,
++   0x94, 0xaa, 0x91, 0x45, 0x55, 0x4f, 0xff, 0x03, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0xcf, 0x00, 0x00, 0x1e, 0x55, 0x2a, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x01, 0x30, 0x81, 0x00, 0x67, 0xb4, 0x74, 0x52,
++   0x55, 0x55, 0x2a, 0xaa, 0xaa, 0x52, 0x55, 0x35, 0x52, 0xaf, 0xff, 0x81,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xcf, 0x00, 0x00, 0x1e, 0xa0, 0x48,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0xe3, 0x00, 0x08,
++   0x86, 0x4a, 0xd4, 0xd5, 0x54, 0x91, 0x49, 0x55, 0x51, 0x54, 0x95, 0x49,
++   0x56, 0x4f, 0xff, 0xc0, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xdf, 0x80,
++   0x00, 0x3e, 0x0b, 0x53, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x05, 0xc4, 0x02, 0x00, 0x7f, 0xbd, 0x8a, 0x25, 0x49, 0x55, 0x2a, 0xb5,
++   0x55, 0x52, 0xa4, 0x96, 0xac, 0x8f, 0xff, 0xe0, 0x7f, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0x9f, 0x80, 0x00, 0x7e, 0x54, 0xa4, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x43, 0x80, 0x20, 0x21, 0x4d, 0x3f, 0x52, 0xb4,
++   0x55, 0x2a, 0x55, 0x55, 0x4a, 0xa4, 0x95, 0x32, 0xa5, 0x4f, 0xff, 0xe0,
++   0x3f, 0xff, 0xff, 0xff, 0xff, 0xff, 0x9f, 0xc0, 0x00, 0xfe, 0x49, 0x14,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0xe8, 0x04, 0x08,
++   0x1a, 0x46, 0xd4, 0x49, 0xaa, 0xa4, 0xaa, 0xaa, 0x54, 0x95, 0x2a, 0xc4,
++   0x94, 0x8f, 0xff, 0xf0, 0x0f, 0xff, 0xff, 0xff, 0xff, 0xff, 0x9f, 0xe0,
++   0x01, 0xfe, 0x92, 0x49, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0x8e, 0x40, 0x80, 0x00, 0x74, 0x44, 0x51, 0x55, 0x64, 0xa9, 0x49, 0x55,
++   0x49, 0x51, 0x52, 0x5a, 0x2a, 0x1f, 0xff, 0xf8, 0x07, 0xff, 0xff, 0xff,
++   0xff, 0xff, 0x9f, 0xf0, 0x07, 0xfe, 0x24, 0x52, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x00, 0x0d, 0x90, 0x00, 0x41, 0xeb, 0x12, 0xab, 0x52,
++   0xa9, 0x4a, 0xaa, 0xa5, 0x52, 0x95, 0x2a, 0xa2, 0xa8, 0x1f, 0xff, 0xf8,
++   0x03, 0xff, 0xff, 0xff, 0xff, 0xff, 0x9f, 0xf8, 0x1f, 0xfe, 0x52, 0x8a,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x3b, 0x81, 0x08, 0x05,
++   0xe3, 0x15, 0xa5, 0x36, 0x95, 0x4a, 0x4a, 0xaa, 0x45, 0x2a, 0xa2, 0xac,
++   0x00, 0x3f, 0xff, 0xfc, 0x01, 0xff, 0xff, 0xff, 0xff, 0xff, 0x9f, 0xff,
++   0xff, 0xfe, 0x04, 0xa8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++   0xb2, 0x00, 0x00, 0x80, 0xe8, 0x52, 0xac, 0xa8, 0xa4, 0x92, 0xa9, 0x4a,
++   0x95, 0x51, 0x2d, 0x50, 0xe1, 0xff, 0xff, 0xfe, 0x00, 0x7f, 0xff, 0xff,
++   0xff, 0xff, 0x9f, 0xff, 0xff, 0xfe, 0x29, 0x05, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x04, 0x00, 0x35, 0x10, 0x09, 0xa9, 0x4a, 0xaa, 0xaa,
++   0xa9, 0x54, 0x95, 0x2a, 0xa4, 0x8a, 0xa9, 0x43, 0xff, 0xff, 0xff, 0xfe,
++   0x00, 0x3f, 0xff, 0xff, 0xff, 0xff, 0x9f, 0xff, 0xff, 0xfe, 0x12, 0xa9,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x2e, 0x01, 0x03,
++   0x34, 0xa5, 0x19, 0x25, 0x55, 0x55, 0x55, 0x55, 0x15, 0x55, 0x45, 0x53,
++   0xff, 0xff, 0xff, 0xff, 0x00, 0x1f, 0xff, 0xff, 0xff, 0xff, 0x9f, 0xff,
++   0xff, 0xfe, 0x04, 0x92, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10,
++   0x49, 0x3c, 0x00, 0x34, 0x75, 0x4a, 0xa2, 0xaa, 0x92, 0x92, 0x92, 0x54,
++   0x52, 0x54, 0x99, 0x53, 0xff, 0xff, 0xff, 0xff, 0x80, 0x0f, 0xff, 0xff,
++   0xff, 0xff, 0x9f, 0xff, 0xff, 0xff, 0x14, 0x94, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0x02, 0x00, 0x78, 0x12, 0xfc, 0xaa, 0x92, 0x2d, 0x54,
++   0xa4, 0xa5, 0x2a, 0x92, 0x94, 0x92, 0xa2, 0xa3, 0xff, 0xff, 0xff, 0xff,
++   0x80, 0x0f, 0xff, 0xff, 0xff, 0xff, 0x9f, 0xff, 0xff, 0xff, 0x01, 0x51,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x16, 0xf0, 0x80, 0x71,
++   0xc5, 0x2a, 0x65, 0x25, 0x55, 0x4a, 0x52, 0xaa, 0x51, 0x52, 0x95, 0x21,
++   0xff, 0xff, 0xff, 0xff, 0xc0, 0x07, 0xff, 0xff, 0xff, 0xff, 0x8f, 0xff,
++   0xff, 0xff, 0x81, 0x12, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04,
++   0x8e, 0xa0, 0x00, 0xb3, 0x94, 0xa4, 0xa9, 0x55, 0x45, 0x54, 0xa5, 0x25,
++   0x25, 0x2a, 0x65, 0x41, 0xff, 0xff, 0xff, 0xff, 0xe0, 0x07, 0xff, 0xff,
++   0xff, 0xff, 0x8f, 0xff, 0xff, 0xff, 0x80, 0x4a, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x00, 0xa0, 0x0f, 0x42, 0x13, 0x46, 0x94, 0x94, 0xa5, 0x4a,
++   0x54, 0x92, 0x88, 0xa8, 0x49, 0x55, 0x4a, 0x51, 0xff, 0xff, 0xff, 0xff,
++   0xe0, 0x07, 0xff, 0xff, 0xff, 0xff, 0x8f, 0xff, 0xff, 0xff, 0xe0, 0x52,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4c, 0x2d, 0x00, 0x87, 0x56,
++   0x4b, 0x2a, 0x99, 0x55, 0x4a, 0xaa, 0x55, 0x4a, 0x92, 0xa2, 0x54, 0xa8,
++   0xff, 0xff, 0xff, 0xff, 0xf0, 0x07, 0xff, 0xff, 0xff, 0xfd, 0x8f, 0xff,
++   0xff, 0xff, 0xfe, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x88,
++   0x18, 0x04, 0x1f, 0x11, 0x51, 0x51, 0x0b, 0x54, 0x94, 0xa5, 0x52, 0x92,
++   0x55, 0x54, 0x95, 0x20, 0xff, 0xff, 0xff, 0xff, 0xf8, 0x0f, 0xff, 0xff,
++   0xff, 0xf9, 0x8f, 0xff, 0xff, 0xff, 0xff, 0x89, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x11, 0xe0, 0x38, 0x01, 0x3e, 0x4d, 0x0a, 0x55, 0x5a, 0x55,
++   0x55, 0x4a, 0x25, 0x24, 0x81, 0x25, 0x29, 0x48, 0xff, 0xff, 0xff, 0xff,
++   0xf8, 0x1f, 0xff, 0xff, 0xff, 0xf0, 0x8f, 0xff, 0xff, 0xff, 0xff, 0xc2,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x80, 0xf0, 0x90, 0x7e, 0xd4,
++   0x68, 0x92, 0xaa, 0x54, 0x92, 0x92, 0xa9, 0x55, 0x2e, 0xaa, 0xa2, 0x58,
++   0x7f, 0xff, 0xff, 0xff, 0xfc, 0x3f, 0xff, 0xff, 0xff, 0xe0, 0x8f, 0xff,
++   0xff, 0xff, 0xff, 0xe4, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0f, 0x04,
++   0x40, 0x00, 0x49, 0xe2, 0x8a, 0xa9, 0x54, 0x95, 0x2a, 0x54, 0x4a, 0x48,
++   0x91, 0x51, 0x2a, 0xc8, 0x7f, 0xff, 0xff, 0xff, 0xfe, 0xff, 0xff, 0xff,
++   0xff, 0xc0, 0x8f, 0xff, 0xff, 0xff, 0xff, 0xe2, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x2e, 0x00, 0x00, 0x03, 0x79, 0x4a, 0xa9, 0x45, 0x49, 0x54,
++   0xa4, 0x89, 0x49, 0x52, 0x8a, 0x85, 0x4a, 0x98, 0x7f, 0xff, 0xff, 0xff,
++   0xfe, 0x7f, 0xff, 0xff, 0xff, 0x80, 0x8f, 0xff, 0xff, 0xff, 0xff, 0xc4,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1c, 0x48, 0x44, 0x91, 0x65, 0x55,
++   0x4a, 0x54, 0x96, 0x85, 0x55, 0x52, 0xaa, 0x8a, 0x1a, 0xaa, 0x59, 0x30,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xfe, 0x01, 0x0f, 0xff,
++   0xff, 0xff, 0xff, 0x82, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x20, 0x00,
++   0x00, 0x02, 0xc3, 0x50, 0x29, 0x49, 0x2d, 0x35, 0x24, 0x54, 0x89, 0x32,
++   0x52, 0x91, 0x49, 0x40, 0xff, 0xff, 0xff, 0xff, 0xff, 0x30, 0xff, 0xff,
++   0xfc, 0x01, 0x0f, 0xff, 0xff, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x00, 0x20, 0x08, 0x80, 0x00, 0x87, 0x4b, 0x42, 0x95, 0x2a, 0x4a,
++   0x4a, 0x89, 0x52, 0x44, 0x45, 0x25, 0x55, 0x10, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0x98, 0x7f, 0xff, 0xf0, 0x01, 0x0f, 0xff, 0xff, 0xff, 0xfc, 0x09,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x40, 0x40, 0x11, 0x15, 0x6a, 0x92,
++   0x4d, 0x29, 0x4a, 0xaa, 0xa4, 0xaa, 0x55, 0x54, 0x9a, 0x74, 0x44, 0xb1,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0x8c, 0x1f, 0xff, 0x00, 0x01, 0x0f, 0xff,
++   0xff, 0xff, 0xf0, 0x29, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00,
++   0x80, 0x01, 0x48, 0x54, 0x94, 0xa6, 0xa9, 0x52, 0x45, 0x24, 0x84, 0x92,
++   0xa4, 0x85, 0x2a, 0x43, 0xff, 0xff, 0xff, 0xff, 0xff, 0x84, 0x00, 0x00,
++   0x00, 0x01, 0x1f, 0xff, 0xff, 0xff, 0xc0, 0x92, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x11, 0x00, 0x09, 0x02, 0x03, 0x53, 0x15, 0x2a, 0x49, 0x4a, 0xa5,
++   0x49, 0x29, 0x29, 0x24, 0x2a, 0xd5, 0x54, 0xa3, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xc6, 0x00, 0x00, 0x00, 0x03, 0x1f, 0xff, 0xff, 0xfe, 0x01, 0x24,
++   0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x24, 0x02, 0x20, 0x26, 0xd4, 0xa2,
++   0x49, 0xb5, 0x15, 0x4a, 0x52, 0xa9, 0x4a, 0x92, 0xa5, 0x12, 0x52, 0xa3,
++   0xff, 0xff, 0xff, 0xff, 0xff, 0xc6, 0x00, 0x00, 0x00, 0x03, 0x1f, 0xff,
++   0xff, 0xf0, 0x14, 0x92, 0x00, 0x00, 0x00, 0x00, 0x00, 0x84, 0x00, 0x44,
++   0x00, 0x0c, 0xc4, 0xa4, 0xa8, 0x25, 0x2a, 0xaa, 0x84, 0x45, 0x28, 0xa4,
++   0xa8, 0xa5, 0x4a, 0x43, 0xff, 0xff, 0xff, 0xff, 0xff, 0xc2, 0x00, 0x00,
++   0x00, 0x06, 0x1f, 0xff, 0xff, 0xc0, 0x25, 0x24, 0x00, 0x00, 0x00, 0x00,
++   0x00, 0x10, 0x80, 0x12, 0x01, 0x11, 0x12, 0x8a, 0x95, 0x49, 0x55, 0x49,
++   0x2a, 0xaa, 0x45, 0x0a, 0x4a, 0x94, 0xaa, 0x90, 0xff, 0xff, 0xff, 0xff,
++   0xff, 0xc2, 0x00, 0x00, 0x00, 0x06, 0x1f, 0xff, 0xff, 0x02, 0x48, 0x92,
++   0x00, 0x00, 0x00, 0x00, 0x02, 0x01, 0x08, 0x00, 0x22, 0x01, 0x24, 0x95,
++   0x21, 0x55, 0x55, 0x2a, 0x54, 0x92, 0x94, 0xa8, 0xa9, 0x2a, 0x92, 0xa8,
++   0x1f, 0xff, 0xff, 0xff, 0xff, 0x82, 0x00, 0x00, 0x00, 0x06, 0x0f, 0xff,
++   0xfe, 0x02, 0x49, 0x24, 0x00, 0x00, 0x00, 0x00, 0x00, 0x60, 0x01, 0x40,
++   0x00, 0xc4, 0x89, 0x11, 0x54, 0x95, 0x29, 0x49, 0x49, 0x24, 0x29, 0x12,
++   0x92, 0xa5, 0x55, 0x28, 0x01, 0xff, 0xff, 0xff, 0xff, 0x82, 0x00, 0x00,
++   0x00, 0x02, 0x0f, 0xff, 0xfc, 0x14, 0x92, 0x49, 0x00, 0x00, 0x00, 0x00,
++   0x08, 0x42, 0x06, 0x44, 0x45, 0x08, 0x92, 0x24, 0x95, 0xaa, 0xa5, 0x52,
++   0x92, 0x55, 0x42, 0x52, 0xaa, 0x4a, 0x49, 0x4a, 0x80, 0x0f, 0xff, 0xff,
++   0xff, 0x84, 0x00, 0x00, 0x00, 0x00, 0x0f, 0xff, 0xf0, 0x04, 0x92, 0x49,
++   0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x2a, 0x80, 0x01, 0x95, 0x2a, 0x8a,
++   0xa0, 0x95, 0x2a, 0x95, 0x54, 0xa4, 0x54, 0x84, 0x44, 0x95, 0x55, 0x29,
++   0x54, 0x00, 0x7f, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x07, 0xff,
++   0xe0, 0x29, 0x24, 0x92, 0x00, 0x00, 0x00, 0x00, 0x22, 0x04, 0x03, 0x03,
++   0x42, 0x12, 0x52, 0x2a, 0x15, 0x55, 0x54, 0xa8, 0x92, 0xa9, 0x49, 0x29,
++   0x54, 0xaa, 0x92, 0x52, 0xa5, 0x40, 0x07, 0xff, 0xff, 0x00, 0x15, 0x50,
++   0x92, 0x44, 0x07, 0xff, 0xc0, 0x52, 0x54, 0xaa, 0x00, 0x00, 0x00, 0x00,
++   0x84, 0x02, 0x86, 0x03, 0x08, 0x41, 0x48, 0x94, 0xa5, 0x52, 0xaa, 0x8a,
++   0x49, 0x52, 0x92, 0x45, 0x49, 0x55, 0x2a, 0xaa, 0x94, 0x96, 0x00, 0x7f,
++   0xfe, 0x01, 0x55, 0x2b, 0x25, 0x50, 0x03, 0xff, 0x00, 0xa4, 0x91, 0x08,
++   0x00, 0x00, 0x00, 0x00, 0x88, 0x2c, 0x18, 0x14, 0x0a, 0xaa, 0x92, 0x55,
++   0x49, 0x55, 0x4a, 0xa9, 0x52, 0x44, 0x92, 0x94, 0x92, 0x92, 0xca, 0x4a,
++   0x55, 0x27, 0xa0, 0x0f, 0xf8, 0x05, 0x35, 0x49, 0x68, 0x91, 0x00, 0xf8,
++   0x04, 0x95, 0x25, 0x29, 0x00, 0x00, 0x00, 0x01, 0x08, 0x04, 0x18, 0x00,
++   0x08, 0x04, 0x49, 0x08, 0x55, 0x4a, 0xa9, 0x12, 0x95, 0x55, 0x49, 0x22,
++   0x65, 0x2a, 0x5a, 0x92, 0xa9, 0x49, 0x54, 0x00, 0xe0, 0x24, 0xb2, 0x55,
++   0x15, 0x4c, 0x00, 0x00, 0x02, 0x48, 0x52, 0x52, 0x00, 0x00, 0x00, 0x02,
++   0x21, 0x09, 0x50, 0x81, 0x51, 0x69, 0x48, 0x52, 0x92, 0x94, 0xaa, 0x54,
++   0x25, 0x54, 0x92, 0x4a, 0x89, 0x54, 0x82, 0xaa, 0x4a, 0x53, 0x55, 0x00,
++   0x00, 0x49, 0x65, 0x52, 0x51, 0x12, 0x80, 0x00, 0x14, 0x92, 0x92, 0x42,
++   0x00, 0x00, 0x00, 0x01, 0x00, 0x20, 0x50, 0x24, 0x2a, 0x95, 0x25, 0x29,
++   0x4a, 0xb5, 0x48, 0xa5, 0x52, 0x49, 0x24, 0x94, 0xaa, 0xab, 0x15, 0x24,
++   0x92, 0xa5, 0x54, 0x40, 0x00, 0xaa, 0xb5, 0x25, 0x4a, 0xa4, 0x80, 0x00,
++   0x21, 0x24, 0xa8, 0x94, 0x00, 0x00, 0x00, 0x0a, 0x44, 0x02, 0xb1, 0x01,
++   0x24, 0x80, 0x48, 0xa2, 0x55, 0x55, 0x52, 0x8a, 0x94, 0xaa, 0x49, 0x22,
++   0x8a, 0x49, 0x49, 0x55, 0x4a, 0x51, 0x25, 0x48, 0x02, 0xa8, 0x6a, 0x69,
++   0x12, 0x29, 0x50, 0x00, 0x55, 0x24, 0x81, 0x24, 0x00, 0x00, 0x00, 0x23,
++   0x80, 0x90, 0x70, 0x14, 0x82, 0x55, 0x22, 0x8a, 0xa9, 0x14, 0xaa, 0xa9,
++   0x25, 0x51, 0x2a, 0x55, 0x21, 0x55, 0x55, 0x12, 0x55, 0x32, 0xaa, 0x94,
++   0xa5, 0x55, 0x5a, 0x92, 0x54, 0xa5, 0x52, 0x24, 0xa2, 0x49, 0x2a, 0x92,
++   0x00, 0x00, 0x00, 0x0e, 0x08, 0x00, 0xe2, 0xa0, 0x04, 0x54, 0x84, 0x52,
++   0x4a, 0xa4, 0xaa, 0x2a, 0xa9, 0x25, 0x45, 0x42, 0xae, 0x92, 0x44, 0xa4,
++   0x91, 0x24, 0xaa, 0xaa, 0xaa, 0x91, 0x55, 0x55, 0x51, 0x55, 0x54, 0xa9,
++   0x14, 0x92, 0x49, 0x24, 0x00, 0x00, 0x00, 0x24, 0x01, 0x82, 0x70, 0x09,
++   0x2a, 0x41, 0x2a, 0x89, 0x14, 0x49, 0x29, 0x52, 0x4a, 0xaa, 0x2a, 0x15,
++   0x12, 0xaa, 0xa9, 0x2a, 0xaa, 0xba, 0xa4, 0x94, 0x95, 0x55, 0x5a, 0x49,
++   0x25, 0x20, 0x49, 0x22, 0xa5, 0x24, 0x92, 0x92, 0x00, 0x00, 0x01, 0x0c,
++   0x93, 0x10, 0xe2, 0x11, 0x00, 0x94, 0x22, 0x52, 0x69, 0x53, 0x52, 0x45,
++   0x49, 0x22, 0xa4, 0x4a, 0x55, 0x29, 0x2a, 0xa4, 0x52, 0x42, 0xaa, 0xa5,
++   0x52, 0xa8, 0xaa, 0x55, 0x4a, 0xab, 0xa9, 0x4a, 0x54, 0x49, 0x32, 0x24 };
+diff --git a/arch/mips/boot/compressed/Makefile b/arch/mips/boot/compressed/Makefile
+index e4b7839293e16..3548b3b452699 100644
+--- a/arch/mips/boot/compressed/Makefile
++++ b/arch/mips/boot/compressed/Makefile
+@@ -40,7 +40,7 @@ GCOV_PROFILE := n
+ UBSAN_SANITIZE := n
+ 
+ # decompressor objects (linked with vmlinuz)
+-vmlinuzobjs-y := $(obj)/head.o $(obj)/decompress.o $(obj)/string.o
++vmlinuzobjs-y := $(obj)/head.o $(obj)/decompress.o $(obj)/string.o $(obj)/bswapsi.o
+ 
+ ifdef CONFIG_DEBUG_ZBOOT
+ vmlinuzobjs-$(CONFIG_DEBUG_ZBOOT)		   += $(obj)/dbg.o
+@@ -54,7 +54,7 @@ extra-y += uart-ath79.c
+ $(obj)/uart-ath79.c: $(srctree)/arch/mips/ath79/early_printk.c
+ 	$(call cmd,shipped)
+ 
+-vmlinuzobjs-$(CONFIG_KERNEL_XZ) += $(obj)/ashldi3.o $(obj)/bswapsi.o
++vmlinuzobjs-$(CONFIG_KERNEL_XZ) += $(obj)/ashldi3.o
+ 
+ extra-y += ashldi3.c
+ $(obj)/ashldi3.c: $(obj)/%.c: $(srctree)/lib/%.c FORCE
+diff --git a/arch/mips/boot/compressed/decompress.c b/arch/mips/boot/compressed/decompress.c
+index 3d70d15ada286..aae1346a509a9 100644
+--- a/arch/mips/boot/compressed/decompress.c
++++ b/arch/mips/boot/compressed/decompress.c
+@@ -7,6 +7,8 @@
+  * Author: Wu Zhangjin <wuzhangjin@gmail.com>
+  */
+ 
++#define DISABLE_BRANCH_PROFILING
++
+ #include <linux/types.h>
+ #include <linux/kernel.h>
+ #include <linux/string.h>
+diff --git a/arch/mips/include/asm/vdso/vdso.h b/arch/mips/include/asm/vdso/vdso.h
+index 737ddfc3411cb..a327ca21270ec 100644
+--- a/arch/mips/include/asm/vdso/vdso.h
++++ b/arch/mips/include/asm/vdso/vdso.h
+@@ -67,7 +67,7 @@ static inline const struct vdso_data *get_vdso_data(void)
+ 
+ static inline void __iomem *get_gic(const struct vdso_data *data)
+ {
+-	return (void __iomem *)data - PAGE_SIZE;
++	return (void __iomem *)((unsigned long)data & PAGE_MASK) - PAGE_SIZE;
+ }
+ 
+ #endif /* CONFIG_CLKSRC_MIPS_GIC */
+diff --git a/arch/powerpc/boot/devtree.c b/arch/powerpc/boot/devtree.c
+index 5d91036ad626d..58fbcfcc98c9e 100644
+--- a/arch/powerpc/boot/devtree.c
++++ b/arch/powerpc/boot/devtree.c
+@@ -13,6 +13,7 @@
+ #include "string.h"
+ #include "stdio.h"
+ #include "ops.h"
++#include "of.h"
+ 
+ void dt_fixup_memory(u64 start, u64 size)
+ {
+@@ -23,21 +24,25 @@ void dt_fixup_memory(u64 start, u64 size)
+ 	root = finddevice("/");
+ 	if (getprop(root, "#address-cells", &naddr, sizeof(naddr)) < 0)
+ 		naddr = 2;
++	else
++		naddr = be32_to_cpu(naddr);
+ 	if (naddr < 1 || naddr > 2)
+ 		fatal("Can't cope with #address-cells == %d in /\n\r", naddr);
+ 
+ 	if (getprop(root, "#size-cells", &nsize, sizeof(nsize)) < 0)
+ 		nsize = 1;
++	else
++		nsize = be32_to_cpu(nsize);
+ 	if (nsize < 1 || nsize > 2)
+ 		fatal("Can't cope with #size-cells == %d in /\n\r", nsize);
+ 
+ 	i = 0;
+ 	if (naddr == 2)
+-		memreg[i++] = start >> 32;
+-	memreg[i++] = start & 0xffffffff;
++		memreg[i++] = cpu_to_be32(start >> 32);
++	memreg[i++] = cpu_to_be32(start & 0xffffffff);
+ 	if (nsize == 2)
+-		memreg[i++] = size >> 32;
+-	memreg[i++] = size & 0xffffffff;
++		memreg[i++] = cpu_to_be32(size >> 32);
++	memreg[i++] = cpu_to_be32(size & 0xffffffff);
+ 
+ 	memory = finddevice("/memory");
+ 	if (! memory) {
+@@ -45,9 +50,9 @@ void dt_fixup_memory(u64 start, u64 size)
+ 		setprop_str(memory, "device_type", "memory");
+ 	}
+ 
+-	printf("Memory <- <0x%x", memreg[0]);
++	printf("Memory <- <0x%x", be32_to_cpu(memreg[0]));
+ 	for (i = 1; i < (naddr + nsize); i++)
+-		printf(" 0x%x", memreg[i]);
++		printf(" 0x%x", be32_to_cpu(memreg[i]));
+ 	printf("> (%ldMB)\n\r", (unsigned long)(size >> 20));
+ 
+ 	setprop(memory, "reg", memreg, (naddr + nsize)*sizeof(u32));
+@@ -65,10 +70,10 @@ void dt_fixup_cpu_clocks(u32 cpu, u32 tb, u32 bus)
+ 		printf("CPU bus-frequency <- 0x%x (%dMHz)\n\r", bus, MHZ(bus));
+ 
+ 	while ((devp = find_node_by_devtype(devp, "cpu"))) {
+-		setprop_val(devp, "clock-frequency", cpu);
+-		setprop_val(devp, "timebase-frequency", tb);
++		setprop_val(devp, "clock-frequency", cpu_to_be32(cpu));
++		setprop_val(devp, "timebase-frequency", cpu_to_be32(tb));
+ 		if (bus > 0)
+-			setprop_val(devp, "bus-frequency", bus);
++			setprop_val(devp, "bus-frequency", cpu_to_be32(bus));
+ 	}
+ 
+ 	timebase_period_ns = 1000000000 / tb;
+@@ -80,7 +85,7 @@ void dt_fixup_clock(const char *path, u32 freq)
+ 
+ 	if (devp) {
+ 		printf("%s: clock-frequency <- %x (%dMHz)\n\r", path, freq, MHZ(freq));
+-		setprop_val(devp, "clock-frequency", freq);
++		setprop_val(devp, "clock-frequency", cpu_to_be32(freq));
+ 	}
+ }
+ 
+@@ -133,8 +138,12 @@ void dt_get_reg_format(void *node, u32 *naddr, u32 *nsize)
+ {
+ 	if (getprop(node, "#address-cells", naddr, 4) != 4)
+ 		*naddr = 2;
++	else
++		*naddr = be32_to_cpu(*naddr);
+ 	if (getprop(node, "#size-cells", nsize, 4) != 4)
+ 		*nsize = 1;
++	else
++		*nsize = be32_to_cpu(*nsize);
+ }
+ 
+ static void copy_val(u32 *dest, u32 *src, int naddr)
+@@ -163,9 +172,9 @@ static int add_reg(u32 *reg, u32 *add, int naddr)
+ 	int i, carry = 0;
+ 
+ 	for (i = MAX_ADDR_CELLS - 1; i >= MAX_ADDR_CELLS - naddr; i--) {
+-		u64 tmp = (u64)reg[i] + add[i] + carry;
++		u64 tmp = (u64)be32_to_cpu(reg[i]) + be32_to_cpu(add[i]) + carry;
+ 		carry = tmp >> 32;
+-		reg[i] = (u32)tmp;
++		reg[i] = cpu_to_be32((u32)tmp);
+ 	}
+ 
+ 	return !carry;
+@@ -180,18 +189,18 @@ static int compare_reg(u32 *reg, u32 *range, u32 *rangesize)
+ 	u32 end;
+ 
+ 	for (i = 0; i < MAX_ADDR_CELLS; i++) {
+-		if (reg[i] < range[i])
++		if (be32_to_cpu(reg[i]) < be32_to_cpu(range[i]))
+ 			return 0;
+-		if (reg[i] > range[i])
++		if (be32_to_cpu(reg[i]) > be32_to_cpu(range[i]))
+ 			break;
+ 	}
+ 
+ 	for (i = 0; i < MAX_ADDR_CELLS; i++) {
+-		end = range[i] + rangesize[i];
++		end = be32_to_cpu(range[i]) + be32_to_cpu(rangesize[i]);
+ 
+-		if (reg[i] < end)
++		if (be32_to_cpu(reg[i]) < end)
+ 			break;
+-		if (reg[i] > end)
++		if (be32_to_cpu(reg[i]) > end)
+ 			return 0;
+ 	}
+ 
+@@ -240,7 +249,6 @@ static int dt_xlate(void *node, int res, int reglen, unsigned long *addr,
+ 		return 0;
+ 
+ 	dt_get_reg_format(parent, &naddr, &nsize);
+-
+ 	if (nsize > 2)
+ 		return 0;
+ 
+@@ -252,10 +260,10 @@ static int dt_xlate(void *node, int res, int reglen, unsigned long *addr,
+ 
+ 	copy_val(last_addr, prop_buf + offset, naddr);
+ 
+-	ret_size = prop_buf[offset + naddr];
++	ret_size = be32_to_cpu(prop_buf[offset + naddr]);
+ 	if (nsize == 2) {
+ 		ret_size <<= 32;
+-		ret_size |= prop_buf[offset + naddr + 1];
++		ret_size |= be32_to_cpu(prop_buf[offset + naddr + 1]);
+ 	}
+ 
+ 	for (;;) {
+@@ -278,7 +286,6 @@ static int dt_xlate(void *node, int res, int reglen, unsigned long *addr,
+ 
+ 		offset = find_range(last_addr, prop_buf, prev_naddr,
+ 		                    naddr, prev_nsize, buflen / 4);
+-
+ 		if (offset < 0)
+ 			return 0;
+ 
+@@ -296,8 +303,7 @@ static int dt_xlate(void *node, int res, int reglen, unsigned long *addr,
+ 	if (naddr > 2)
+ 		return 0;
+ 
+-	ret_addr = ((u64)last_addr[2] << 32) | last_addr[3];
+-
++	ret_addr = ((u64)be32_to_cpu(last_addr[2]) << 32) | be32_to_cpu(last_addr[3]);
+ 	if (sizeof(void *) == 4 &&
+ 	    (ret_addr >= 0x100000000ULL || ret_size > 0x100000000ULL ||
+ 	     ret_addr + ret_size > 0x100000000ULL))
+@@ -350,11 +356,14 @@ int dt_is_compatible(void *node, const char *compat)
+ int dt_get_virtual_reg(void *node, void **addr, int nres)
+ {
+ 	unsigned long xaddr;
+-	int n;
++	int n, i;
+ 
+ 	n = getprop(node, "virtual-reg", addr, nres * 4);
+-	if (n > 0)
++	if (n > 0) {
++		for (i = 0; i < n/4; i ++)
++			((u32 *)addr)[i] = be32_to_cpu(((u32 *)addr)[i]);
+ 		return n / 4;
++	}
+ 
+ 	for (n = 0; n < nres; n++) {
+ 		if (!dt_xlate_reg(node, n, &xaddr, NULL))
+diff --git a/arch/powerpc/boot/ns16550.c b/arch/powerpc/boot/ns16550.c
+index b0da4466d4198..f16d2be1d0f31 100644
+--- a/arch/powerpc/boot/ns16550.c
++++ b/arch/powerpc/boot/ns16550.c
+@@ -15,6 +15,7 @@
+ #include "stdio.h"
+ #include "io.h"
+ #include "ops.h"
++#include "of.h"
+ 
+ #define UART_DLL	0	/* Out: Divisor Latch Low */
+ #define UART_DLM	1	/* Out: Divisor Latch High */
+@@ -58,16 +59,20 @@ int ns16550_console_init(void *devp, struct serial_console_data *scdp)
+ 	int n;
+ 	u32 reg_offset;
+ 
+-	if (dt_get_virtual_reg(devp, (void **)&reg_base, 1) < 1)
++	if (dt_get_virtual_reg(devp, (void **)&reg_base, 1) < 1) {
++		printf("virt reg parse fail...\r\n");
+ 		return -1;
++	}
+ 
+ 	n = getprop(devp, "reg-offset", &reg_offset, sizeof(reg_offset));
+ 	if (n == sizeof(reg_offset))
+-		reg_base += reg_offset;
++		reg_base += be32_to_cpu(reg_offset);
+ 
+ 	n = getprop(devp, "reg-shift", &reg_shift, sizeof(reg_shift));
+ 	if (n != sizeof(reg_shift))
+ 		reg_shift = 0;
++	else
++		reg_shift = be32_to_cpu(reg_shift);
+ 
+ 	scdp->open = ns16550_open;
+ 	scdp->putc = ns16550_putc;
+diff --git a/arch/powerpc/include/asm/inst.h b/arch/powerpc/include/asm/inst.h
+index 268d3bd073c8a..887ef150fdda7 100644
+--- a/arch/powerpc/include/asm/inst.h
++++ b/arch/powerpc/include/asm/inst.h
+@@ -12,6 +12,8 @@
+ 	unsigned long __gui_ptr = (unsigned long)ptr;			\
+ 	struct ppc_inst __gui_inst;					\
+ 	unsigned int __prefix, __suffix;				\
++									\
++	__chk_user_ptr(ptr);						\
+ 	__gui_ret = gu_op(__prefix, (unsigned int __user *)__gui_ptr);	\
+ 	if (__gui_ret == 0) {						\
+ 		if ((__prefix >> 26) == OP_PREFIX) {			\
+@@ -29,7 +31,10 @@
+ })
+ #else /* !CONFIG_PPC64 */
+ #define ___get_user_instr(gu_op, dest, ptr)				\
+-	gu_op((dest).val, (u32 __user *)(ptr))
++({									\
++	__chk_user_ptr(ptr);						\
++	gu_op((dest).val, (u32 __user *)(ptr));				\
++})
+ #endif /* CONFIG_PPC64 */
+ 
+ #define get_user_instr(x, ptr) \
+diff --git a/arch/powerpc/include/asm/ps3.h b/arch/powerpc/include/asm/ps3.h
+index e646c7f218bc8..12b6b76e8d0f7 100644
+--- a/arch/powerpc/include/asm/ps3.h
++++ b/arch/powerpc/include/asm/ps3.h
+@@ -71,6 +71,7 @@ struct ps3_dma_region_ops;
+  * @bus_addr: The 'translated' bus address of the region.
+  * @len: The length in bytes of the region.
+  * @offset: The offset from the start of memory of the region.
++ * @dma_mask: Device dma_mask.
+  * @ioid: The IOID of the device who owns this region
+  * @chunk_list: Opaque variable used by the ioc page manager.
+  * @region_ops: struct ps3_dma_region_ops - dma region operations
+@@ -85,6 +86,7 @@ struct ps3_dma_region {
+ 	enum ps3_dma_region_type region_type;
+ 	unsigned long len;
+ 	unsigned long offset;
++	u64 dma_mask;
+ 
+ 	/* driver variables  (set by ps3_dma_region_create) */
+ 	unsigned long bus_addr;
+diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
+index 409e612107892..817a02ef60320 100644
+--- a/arch/powerpc/mm/book3s64/radix_tlb.c
++++ b/arch/powerpc/mm/book3s64/radix_tlb.c
+@@ -291,22 +291,30 @@ static inline void fixup_tlbie_lpid(unsigned long lpid)
+ /*
+  * We use 128 set in radix mode and 256 set in hpt mode.
+  */
+-static __always_inline void _tlbiel_pid(unsigned long pid, unsigned long ric)
++static inline void _tlbiel_pid(unsigned long pid, unsigned long ric)
+ {
+ 	int set;
+ 
+ 	asm volatile("ptesync": : :"memory");
+ 
+-	/*
+-	 * Flush the first set of the TLB, and if we're doing a RIC_FLUSH_ALL,
+-	 * also flush the entire Page Walk Cache.
+-	 */
+-	__tlbiel_pid(pid, 0, ric);
++	switch (ric) {
++	case RIC_FLUSH_PWC:
+ 
+-	/* For PWC, only one flush is needed */
+-	if (ric == RIC_FLUSH_PWC) {
++		/* For PWC, only one flush is needed */
++		__tlbiel_pid(pid, 0, RIC_FLUSH_PWC);
+ 		ppc_after_tlbiel_barrier();
+ 		return;
++	case RIC_FLUSH_TLB:
++		__tlbiel_pid(pid, 0, RIC_FLUSH_TLB);
++		break;
++	case RIC_FLUSH_ALL:
++	default:
++		/*
++		 * Flush the first set of the TLB, and if
++		 * we're doing a RIC_FLUSH_ALL, also flush
++		 * the entire Page Walk Cache.
++		 */
++		__tlbiel_pid(pid, 0, RIC_FLUSH_ALL);
+ 	}
+ 
+ 	if (!cpu_has_feature(CPU_FTR_ARCH_31)) {
+@@ -1176,7 +1184,7 @@ void radix__tlb_flush(struct mmu_gather *tlb)
+ 	}
+ }
+ 
+-static __always_inline void __radix__flush_tlb_range_psize(struct mm_struct *mm,
++static void __radix__flush_tlb_range_psize(struct mm_struct *mm,
+ 				unsigned long start, unsigned long end,
+ 				int psize, bool also_pwc)
+ {
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 57a8c1153851a..94411af24013f 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -667,7 +667,7 @@ emit_clear:
+ 		 * BPF_STX ATOMIC (atomic ops)
+ 		 */
+ 		case BPF_STX | BPF_ATOMIC | BPF_W:
+-			if (insn->imm != BPF_ADD) {
++			if (imm != BPF_ADD) {
+ 				pr_err_ratelimited(
+ 					"eBPF filter atomic op code %02x (@%d) unsupported\n",
+ 					code, i);
+@@ -689,7 +689,7 @@ emit_clear:
+ 			PPC_BCC_SHORT(COND_NE, tmp_idx);
+ 			break;
+ 		case BPF_STX | BPF_ATOMIC | BPF_DW:
+-			if (insn->imm != BPF_ADD) {
++			if (imm != BPF_ADD) {
+ 				pr_err_ratelimited(
+ 					"eBPF filter atomic op code %02x (@%d) unsupported\n",
+ 					code, i);
+diff --git a/arch/powerpc/platforms/ps3/mm.c b/arch/powerpc/platforms/ps3/mm.c
+index d094321964fb0..a81eac35d9009 100644
+--- a/arch/powerpc/platforms/ps3/mm.c
++++ b/arch/powerpc/platforms/ps3/mm.c
+@@ -6,6 +6,7 @@
+  *  Copyright 2006 Sony Corp.
+  */
+ 
++#include <linux/dma-mapping.h>
+ #include <linux/kernel.h>
+ #include <linux/export.h>
+ #include <linux/memblock.h>
+@@ -1118,6 +1119,7 @@ int ps3_dma_region_init(struct ps3_system_bus_device *dev,
+ 	enum ps3_dma_region_type region_type, void *addr, unsigned long len)
+ {
+ 	unsigned long lpar_addr;
++	int result;
+ 
+ 	lpar_addr = addr ? ps3_mm_phys_to_lpar(__pa(addr)) : 0;
+ 
+@@ -1129,6 +1131,16 @@ int ps3_dma_region_init(struct ps3_system_bus_device *dev,
+ 		r->offset -= map.r1.offset;
+ 	r->len = len ? len : ALIGN(map.total, 1 << r->page_size);
+ 
++	dev->core.dma_mask = &r->dma_mask;
++
++	result = dma_set_mask_and_coherent(&dev->core, DMA_BIT_MASK(32));
++
++	if (result < 0) {
++		dev_err(&dev->core, "%s:%d: dma_set_mask_and_coherent failed: %d\n",
++			__func__, __LINE__, result);
++		return result;
++	}
++
+ 	switch (dev->dev_type) {
+ 	case PS3_DEVICE_TYPE_SB:
+ 		r->region_ops =  (USE_DYNAMIC_DMA)
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index 93488bbf491b9..90e1639697b4d 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -165,7 +165,6 @@ config S390
+ 	select HAVE_GCC_PLUGINS
+ 	select HAVE_GENERIC_VDSO
+ 	select HAVE_IOREMAP_PROT if PCI
+-	select HAVE_IRQ_EXIT_ON_IRQ_STACK
+ 	select HAVE_KERNEL_BZIP2
+ 	select HAVE_KERNEL_GZIP
+ 	select HAVE_KERNEL_LZ4
+diff --git a/arch/s390/Makefile b/arch/s390/Makefile
+index e443ed9947bd6..098abe3a56f37 100644
+--- a/arch/s390/Makefile
++++ b/arch/s390/Makefile
+@@ -28,6 +28,7 @@ KBUILD_CFLAGS_DECOMPRESSOR += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY
+ KBUILD_CFLAGS_DECOMPRESSOR += -fno-delete-null-pointer-checks -msoft-float -mbackchain
+ KBUILD_CFLAGS_DECOMPRESSOR += -fno-asynchronous-unwind-tables
+ KBUILD_CFLAGS_DECOMPRESSOR += -ffreestanding
++KBUILD_CFLAGS_DECOMPRESSOR += -fno-stack-protector
+ KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-disable-warning, address-of-packed-member)
+ KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),-g)
+ KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO_DWARF4), $(call cc-option, -gdwarf-4,))
+diff --git a/arch/s390/boot/ipl_parm.c b/arch/s390/boot/ipl_parm.c
+index d372a45fe10e7..dd92092e3eec3 100644
+--- a/arch/s390/boot/ipl_parm.c
++++ b/arch/s390/boot/ipl_parm.c
+@@ -28,22 +28,25 @@ static inline int __diag308(unsigned long subcode, void *addr)
+ 	register unsigned long _addr asm("0") = (unsigned long)addr;
+ 	register unsigned long _rc asm("1") = 0;
+ 	unsigned long reg1, reg2;
+-	psw_t old = S390_lowcore.program_new_psw;
++	psw_t old;
+ 
+ 	asm volatile(
++		"	mvc	0(16,%[psw_old]),0(%[psw_pgm])\n"
+ 		"	epsw	%0,%1\n"
+-		"	st	%0,%[psw_pgm]\n"
+-		"	st	%1,%[psw_pgm]+4\n"
++		"	st	%0,0(%[psw_pgm])\n"
++		"	st	%1,4(%[psw_pgm])\n"
+ 		"	larl	%0,1f\n"
+-		"	stg	%0,%[psw_pgm]+8\n"
++		"	stg	%0,8(%[psw_pgm])\n"
+ 		"	diag	%[addr],%[subcode],0x308\n"
+-		"1:	nopr	%%r7\n"
++		"1:	mvc	0(16,%[psw_pgm]),0(%[psw_old])\n"
+ 		: "=&d" (reg1), "=&a" (reg2),
+-		  [psw_pgm] "=Q" (S390_lowcore.program_new_psw),
++		  "+Q" (S390_lowcore.program_new_psw),
++		  "=Q" (old),
+ 		  [addr] "+d" (_addr), "+d" (_rc)
+-		: [subcode] "d" (subcode)
++		: [subcode] "d" (subcode),
++		  [psw_old] "a" (&old),
++		  [psw_pgm] "a" (&S390_lowcore.program_new_psw)
+ 		: "cc", "memory");
+-	S390_lowcore.program_new_psw = old;
+ 	return _rc;
+ }
+ 
+diff --git a/arch/s390/boot/mem_detect.c b/arch/s390/boot/mem_detect.c
+index 40168e59abd38..a0e980f57c02c 100644
+--- a/arch/s390/boot/mem_detect.c
++++ b/arch/s390/boot/mem_detect.c
+@@ -69,24 +69,27 @@ static int __diag260(unsigned long rx1, unsigned long rx2)
+ 	register unsigned long _ry asm("4") = 0x10; /* storage configuration */
+ 	int rc = -1;				    /* fail */
+ 	unsigned long reg1, reg2;
+-	psw_t old = S390_lowcore.program_new_psw;
++	psw_t old;
+ 
+ 	asm volatile(
++		"	mvc	0(16,%[psw_old]),0(%[psw_pgm])\n"
+ 		"	epsw	%0,%1\n"
+-		"	st	%0,%[psw_pgm]\n"
+-		"	st	%1,%[psw_pgm]+4\n"
++		"	st	%0,0(%[psw_pgm])\n"
++		"	st	%1,4(%[psw_pgm])\n"
+ 		"	larl	%0,1f\n"
+-		"	stg	%0,%[psw_pgm]+8\n"
++		"	stg	%0,8(%[psw_pgm])\n"
+ 		"	diag	%[rx],%[ry],0x260\n"
+ 		"	ipm	%[rc]\n"
+ 		"	srl	%[rc],28\n"
+-		"1:\n"
++		"1:	mvc	0(16,%[psw_pgm]),0(%[psw_old])\n"
+ 		: "=&d" (reg1), "=&a" (reg2),
+-		  [psw_pgm] "=Q" (S390_lowcore.program_new_psw),
++		  "+Q" (S390_lowcore.program_new_psw),
++		  "=Q" (old),
+ 		  [rc] "+&d" (rc), [ry] "+d" (_ry)
+-		: [rx] "d" (_rx1), "d" (_rx2)
++		: [rx] "d" (_rx1), "d" (_rx2),
++		  [psw_old] "a" (&old),
++		  [psw_pgm] "a" (&S390_lowcore.program_new_psw)
+ 		: "cc", "memory");
+-	S390_lowcore.program_new_psw = old;
+ 	return rc == 0 ? _ry : -1;
+ }
+ 
+@@ -111,24 +114,30 @@ static int diag260(void)
+ 
+ static int tprot(unsigned long addr)
+ {
+-	unsigned long pgm_addr;
++	unsigned long reg1, reg2;
+ 	int rc = -EFAULT;
+-	psw_t old = S390_lowcore.program_new_psw;
++	psw_t old;
+ 
+-	S390_lowcore.program_new_psw.mask = __extract_psw();
+ 	asm volatile(
+-		"	larl	%[pgm_addr],1f\n"
+-		"	stg	%[pgm_addr],%[psw_pgm_addr]\n"
++		"	mvc	0(16,%[psw_old]),0(%[psw_pgm])\n"
++		"	epsw	%[reg1],%[reg2]\n"
++		"	st	%[reg1],0(%[psw_pgm])\n"
++		"	st	%[reg2],4(%[psw_pgm])\n"
++		"	larl	%[reg1],1f\n"
++		"	stg	%[reg1],8(%[psw_pgm])\n"
+ 		"	tprot	0(%[addr]),0\n"
+ 		"	ipm	%[rc]\n"
+ 		"	srl	%[rc],28\n"
+-		"1:\n"
+-		: [pgm_addr] "=&d"(pgm_addr),
+-		  [psw_pgm_addr] "=Q"(S390_lowcore.program_new_psw.addr),
+-		  [rc] "+&d"(rc)
+-		: [addr] "a"(addr)
++		"1:	mvc	0(16,%[psw_pgm]),0(%[psw_old])\n"
++		: [reg1] "=&d" (reg1),
++		  [reg2] "=&a" (reg2),
++		  [rc] "+&d" (rc),
++		  "=Q" (S390_lowcore.program_new_psw.addr),
++		  "=Q" (old)
++		: [psw_old] "a" (&old),
++		  [psw_pgm] "a" (&S390_lowcore.program_new_psw),
++		  [addr] "a" (addr)
+ 		: "cc", "memory");
+-	S390_lowcore.program_new_psw = old;
+ 	return rc;
+ }
+ 
+diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
+index 023a15dc25a33..dbd380d811335 100644
+--- a/arch/s390/include/asm/processor.h
++++ b/arch/s390/include/asm/processor.h
+@@ -207,7 +207,7 @@ static __always_inline unsigned long current_stack_pointer(void)
+ 	return sp;
+ }
+ 
+-static __no_kasan_or_inline unsigned short stap(void)
++static __always_inline unsigned short stap(void)
+ {
+ 	unsigned short cpu_address;
+ 
+@@ -246,7 +246,7 @@ static inline void __load_psw(psw_t psw)
+  * Set PSW mask to specified value, while leaving the
+  * PSW addr pointing to the next instruction.
+  */
+-static __no_kasan_or_inline void __load_psw_mask(unsigned long mask)
++static __always_inline void __load_psw_mask(unsigned long mask)
+ {
+ 	unsigned long addr;
+ 	psw_t psw;
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 382d73da134cf..93538e63fa03f 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -165,7 +165,7 @@ static void __init set_preferred_console(void)
+ 	else if (CONSOLE_IS_3270)
+ 		add_preferred_console("tty3270", 0, NULL);
+ 	else if (CONSOLE_IS_VT220)
+-		add_preferred_console("ttyS", 1, NULL);
++		add_preferred_console("ttysclp", 0, NULL);
+ 	else if (CONSOLE_IS_HVC)
+ 		add_preferred_console("hvc", 0, NULL);
+ }
+diff --git a/arch/s390/purgatory/Makefile b/arch/s390/purgatory/Makefile
+index c57f8c40e9926..21c4ebe29b9a2 100644
+--- a/arch/s390/purgatory/Makefile
++++ b/arch/s390/purgatory/Makefile
+@@ -24,6 +24,7 @@ KBUILD_CFLAGS := -fno-strict-aliasing -Wall -Wstrict-prototypes
+ KBUILD_CFLAGS += -Wno-pointer-sign -Wno-sign-compare
+ KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding
+ KBUILD_CFLAGS += -c -MD -Os -m64 -msoft-float -fno-common
++KBUILD_CFLAGS += -fno-stack-protector
+ KBUILD_CFLAGS += $(CLANG_FLAGS)
+ KBUILD_CFLAGS += $(call cc-option,-fno-PIE)
+ KBUILD_AFLAGS := $(filter-out -DCC_USING_EXPOLINE,$(KBUILD_AFLAGS))
+diff --git a/arch/um/drivers/chan_user.c b/arch/um/drivers/chan_user.c
+index d8845d4aac6a7..6040817c036f3 100644
+--- a/arch/um/drivers/chan_user.c
++++ b/arch/um/drivers/chan_user.c
+@@ -256,7 +256,8 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
+ 		goto out_close;
+ 	}
+ 
+-	if (os_set_fd_block(*fd_out, 0)) {
++	err = os_set_fd_block(*fd_out, 0);
++	if (err) {
+ 		printk(UM_KERN_ERR "winch_tramp: failed to set thread_fd "
+ 		       "non-blocking.\n");
+ 		goto out_close;
+diff --git a/arch/um/drivers/slip_user.c b/arch/um/drivers/slip_user.c
+index 482a19c5105c5..7334019c9e60a 100644
+--- a/arch/um/drivers/slip_user.c
++++ b/arch/um/drivers/slip_user.c
+@@ -145,7 +145,8 @@ static int slip_open(void *data)
+ 	}
+ 	sfd = err;
+ 
+-	if (set_up_tty(sfd))
++	err = set_up_tty(sfd);
++	if (err)
+ 		goto out_close2;
+ 
+ 	pri->slave = sfd;
+diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
+index 8e0b43cf089f4..cbd4f00fe77ee 100644
+--- a/arch/um/drivers/ubd_kern.c
++++ b/arch/um/drivers/ubd_kern.c
+@@ -1242,8 +1242,7 @@ static int __init ubd_driver_init(void){
+ 		 * enough. So use anyway the io thread. */
+ 	}
+ 	stack = alloc_stack(0, 0);
+-	io_pid = start_io_thread(stack + PAGE_SIZE - sizeof(void *),
+-				 &thread_fd);
++	io_pid = start_io_thread(stack + PAGE_SIZE, &thread_fd);
+ 	if(io_pid < 0){
+ 		printk(KERN_ERR
+ 		       "ubd : Failed to start I/O thread (errno = %d) - "
+diff --git a/arch/um/kernel/skas/clone.c b/arch/um/kernel/skas/clone.c
+index 592cdb1384415..5afac0fef24ea 100644
+--- a/arch/um/kernel/skas/clone.c
++++ b/arch/um/kernel/skas/clone.c
+@@ -29,7 +29,7 @@ stub_clone_handler(void)
+ 	long err;
+ 
+ 	err = stub_syscall2(__NR_clone, CLONE_PARENT | CLONE_FILES | SIGCHLD,
+-			    (unsigned long)data + UM_KERN_PAGE_SIZE / 2 - sizeof(void *));
++			    (unsigned long)data + UM_KERN_PAGE_SIZE / 2);
+ 	if (err) {
+ 		data->parent_err = err;
+ 		goto done;
+diff --git a/arch/um/os-Linux/helper.c b/arch/um/os-Linux/helper.c
+index 9fa6e4187d4fb..32e88baf18dd4 100644
+--- a/arch/um/os-Linux/helper.c
++++ b/arch/um/os-Linux/helper.c
+@@ -64,7 +64,7 @@ int run_helper(void (*pre_exec)(void *), void *pre_data, char **argv)
+ 		goto out_close;
+ 	}
+ 
+-	sp = stack + UM_KERN_PAGE_SIZE - sizeof(void *);
++	sp = stack + UM_KERN_PAGE_SIZE;
+ 	data.pre_exec = pre_exec;
+ 	data.pre_data = pre_data;
+ 	data.argv = argv;
+@@ -120,7 +120,7 @@ int run_helper_thread(int (*proc)(void *), void *arg, unsigned int flags,
+ 	if (stack == 0)
+ 		return -ENOMEM;
+ 
+-	sp = stack + UM_KERN_PAGE_SIZE - sizeof(void *);
++	sp = stack + UM_KERN_PAGE_SIZE;
+ 	pid = clone(proc, (void *) sp, flags, arg);
+ 	if (pid < 0) {
+ 		err = -errno;
+diff --git a/arch/um/os-Linux/signal.c b/arch/um/os-Linux/signal.c
+index 96f511d1aabe6..e283f130aadc5 100644
+--- a/arch/um/os-Linux/signal.c
++++ b/arch/um/os-Linux/signal.c
+@@ -129,7 +129,7 @@ void set_sigstack(void *sig_stack, int size)
+ 	stack_t stack = {
+ 		.ss_flags = 0,
+ 		.ss_sp = sig_stack,
+-		.ss_size = size - sizeof(void *)
++		.ss_size = size
+ 	};
+ 
+ 	if (sigaltstack(&stack, NULL) != 0)
+diff --git a/arch/um/os-Linux/skas/process.c b/arch/um/os-Linux/skas/process.c
+index fba674fac8b73..87d3129e7362e 100644
+--- a/arch/um/os-Linux/skas/process.c
++++ b/arch/um/os-Linux/skas/process.c
+@@ -327,7 +327,7 @@ int start_userspace(unsigned long stub_stack)
+ 	}
+ 
+ 	/* set stack pointer to the end of the stack page, so it can grow downwards */
+-	sp = (unsigned long) stack + UM_KERN_PAGE_SIZE - sizeof(void *);
++	sp = (unsigned long)stack + UM_KERN_PAGE_SIZE;
+ 
+ 	flags = CLONE_FILES | SIGCHLD;
+ 
+diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
+index 16bf4d4a8159e..4e5af2b00d89b 100644
+--- a/arch/x86/include/asm/fpu/internal.h
++++ b/arch/x86/include/asm/fpu/internal.h
+@@ -103,6 +103,7 @@ static inline void fpstate_init_fxstate(struct fxregs_state *fx)
+ }
+ extern void fpstate_sanitize_xstate(struct fpu *fpu);
+ 
++/* Returns 0 or the negated trap number, which results in -EFAULT for #PF */
+ #define user_insn(insn, output, input...)				\
+ ({									\
+ 	int err;							\
+@@ -110,14 +111,14 @@ extern void fpstate_sanitize_xstate(struct fpu *fpu);
+ 	might_fault();							\
+ 									\
+ 	asm volatile(ASM_STAC "\n"					\
+-		     "1:" #insn "\n\t"					\
++		     "1: " #insn "\n"					\
+ 		     "2: " ASM_CLAC "\n"				\
+ 		     ".section .fixup,\"ax\"\n"				\
+-		     "3:  movl $-1,%[err]\n"				\
++		     "3:  negl %%eax\n"					\
+ 		     "    jmp  2b\n"					\
+ 		     ".previous\n"					\
+-		     _ASM_EXTABLE(1b, 3b)				\
+-		     : [err] "=r" (err), output				\
++		     _ASM_EXTABLE_FAULT(1b, 3b)				\
++		     : [err] "=a" (err), output				\
+ 		     : "0"(0), input);					\
+ 	err;								\
+ })
+@@ -219,16 +220,20 @@ static inline void fxsave(struct fxregs_state *fx)
+ #define XRSTOR		".byte " REX_PREFIX "0x0f,0xae,0x2f"
+ #define XRSTORS		".byte " REX_PREFIX "0x0f,0xc7,0x1f"
+ 
++/*
++ * After this @err contains 0 on success or the negated trap number when
++ * the operation raises an exception. For faults this results in -EFAULT.
++ */
+ #define XSTATE_OP(op, st, lmask, hmask, err)				\
+ 	asm volatile("1:" op "\n\t"					\
+ 		     "xor %[err], %[err]\n"				\
+ 		     "2:\n\t"						\
+ 		     ".pushsection .fixup,\"ax\"\n\t"			\
+-		     "3: movl $-2,%[err]\n\t"				\
++		     "3: negl %%eax\n\t"				\
+ 		     "jmp 2b\n\t"					\
+ 		     ".popsection\n\t"					\
+-		     _ASM_EXTABLE(1b, 3b)				\
+-		     : [err] "=r" (err)					\
++		     _ASM_EXTABLE_FAULT(1b, 3b)				\
++		     : [err] "=a" (err)					\
+ 		     : "D" (st), "m" (*st), "a" (lmask), "d" (hmask)	\
+ 		     : "memory")
+ 
+diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c
+index c413756ba89fa..6bb874441de8b 100644
+--- a/arch/x86/kernel/fpu/regset.c
++++ b/arch/x86/kernel/fpu/regset.c
+@@ -117,7 +117,7 @@ int xstateregs_set(struct task_struct *target, const struct user_regset *regset,
+ 	/*
+ 	 * A whole standard-format XSAVE buffer is needed:
+ 	 */
+-	if ((pos != 0) || (count < fpu_user_xstate_size))
++	if (pos != 0 || count != fpu_user_xstate_size)
+ 		return -EFAULT;
+ 
+ 	xsave = &fpu->state.xsave;
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index 1cadb2faf7405..4819251ffe7cc 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -1084,20 +1084,10 @@ static inline bool xfeatures_mxcsr_quirk(u64 xfeatures)
+ 	return true;
+ }
+ 
+-static void fill_gap(struct membuf *to, unsigned *last, unsigned offset)
++static void copy_feature(bool from_xstate, struct membuf *to, void *xstate,
++			 void *init_xstate, unsigned int size)
+ {
+-	if (*last >= offset)
+-		return;
+-	membuf_write(to, (void *)&init_fpstate.xsave + *last, offset - *last);
+-	*last = offset;
+-}
+-
+-static void copy_part(struct membuf *to, unsigned *last, unsigned offset,
+-		      unsigned size, void *from)
+-{
+-	fill_gap(to, last, offset);
+-	membuf_write(to, from, size);
+-	*last = offset + size;
++	membuf_write(to, from_xstate ? xstate : init_xstate, size);
+ }
+ 
+ /*
+@@ -1109,10 +1099,10 @@ static void copy_part(struct membuf *to, unsigned *last, unsigned offset,
+  */
+ void copy_xstate_to_kernel(struct membuf to, struct xregs_state *xsave)
+ {
++	const unsigned int off_mxcsr = offsetof(struct fxregs_state, mxcsr);
++	struct xregs_state *xinit = &init_fpstate.xsave;
+ 	struct xstate_header header;
+-	const unsigned off_mxcsr = offsetof(struct fxregs_state, mxcsr);
+-	unsigned size = to.left;
+-	unsigned last = 0;
++	unsigned int zerofrom;
+ 	int i;
+ 
+ 	/*
+@@ -1122,41 +1112,68 @@ void copy_xstate_to_kernel(struct membuf to, struct xregs_state *xsave)
+ 	header.xfeatures = xsave->header.xfeatures;
+ 	header.xfeatures &= xfeatures_mask_user();
+ 
+-	if (header.xfeatures & XFEATURE_MASK_FP)
+-		copy_part(&to, &last, 0, off_mxcsr, &xsave->i387);
+-	if (header.xfeatures & (XFEATURE_MASK_SSE | XFEATURE_MASK_YMM))
+-		copy_part(&to, &last, off_mxcsr,
+-			  MXCSR_AND_FLAGS_SIZE, &xsave->i387.mxcsr);
+-	if (header.xfeatures & XFEATURE_MASK_FP)
+-		copy_part(&to, &last, offsetof(struct fxregs_state, st_space),
+-			  128, &xsave->i387.st_space);
+-	if (header.xfeatures & XFEATURE_MASK_SSE)
+-		copy_part(&to, &last, xstate_offsets[XFEATURE_SSE],
+-			  256, &xsave->i387.xmm_space);
+-	/*
+-	 * Fill xsave->i387.sw_reserved value for ptrace frame:
+-	 */
+-	copy_part(&to, &last, offsetof(struct fxregs_state, sw_reserved),
+-		  48, xstate_fx_sw_bytes);
+-	/*
+-	 * Copy xregs_state->header:
+-	 */
+-	copy_part(&to, &last, offsetof(struct xregs_state, header),
+-		  sizeof(header), &header);
++	/* Copy FP state up to MXCSR */
++	copy_feature(header.xfeatures & XFEATURE_MASK_FP, &to, &xsave->i387,
++		     &xinit->i387, off_mxcsr);
++
++	/* Copy MXCSR when SSE or YMM are set in the feature mask */
++	copy_feature(header.xfeatures & (XFEATURE_MASK_SSE | XFEATURE_MASK_YMM),
++		     &to, &xsave->i387.mxcsr, &xinit->i387.mxcsr,
++		     MXCSR_AND_FLAGS_SIZE);
++
++	/* Copy the remaining FP state */
++	copy_feature(header.xfeatures & XFEATURE_MASK_FP,
++		     &to, &xsave->i387.st_space, &xinit->i387.st_space,
++		     sizeof(xsave->i387.st_space));
++
++	/* Copy the SSE state - shared with YMM, but independently managed */
++	copy_feature(header.xfeatures & XFEATURE_MASK_SSE,
++		     &to, &xsave->i387.xmm_space, &xinit->i387.xmm_space,
++		     sizeof(xsave->i387.xmm_space));
++
++	/* Zero the padding area */
++	membuf_zero(&to, sizeof(xsave->i387.padding));
++
++	/* Copy xsave->i387.sw_reserved */
++	membuf_write(&to, xstate_fx_sw_bytes, sizeof(xsave->i387.sw_reserved));
++
++	/* Copy the user space relevant state of @xsave->header */
++	membuf_write(&to, &header, sizeof(header));
++
++	zerofrom = offsetof(struct xregs_state, extended_state_area);
+ 
+ 	for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) {
+ 		/*
+-		 * Copy only in-use xstates:
++		 * The ptrace buffer is in non-compacted XSAVE format.
++		 * In non-compacted format disabled features still occupy
++		 * state space, but there is no state to copy from in the
++		 * compacted init_fpstate. The gap tracking will zero this
++		 * later.
+ 		 */
+-		if ((header.xfeatures >> i) & 1) {
+-			void *src = __raw_xsave_addr(xsave, i);
++		if (!(xfeatures_mask_user() & BIT_ULL(i)))
++			continue;
+ 
+-			copy_part(&to, &last, xstate_offsets[i],
+-				  xstate_sizes[i], src);
+-		}
++		/*
++		 * If there was a feature or alignment gap, zero the space
++		 * in the destination buffer.
++		 */
++		if (zerofrom < xstate_offsets[i])
++			membuf_zero(&to, xstate_offsets[i] - zerofrom);
++
++		copy_feature(header.xfeatures & BIT_ULL(i), &to,
++			     __raw_xsave_addr(xsave, i),
++			     __raw_xsave_addr(xinit, i),
++			     xstate_sizes[i]);
+ 
++		/*
++		 * Keep track of the last copied state in the non-compacted
++		 * target buffer for gap zeroing.
++		 */
++		zerofrom = xstate_offsets[i] + xstate_sizes[i];
+ 	}
+-	fill_gap(&to, &last, size);
++
++	if (to.left)
++		membuf_zero(&to, to.left);
+ }
+ 
+ /*
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index a06cb107c0e88..ca3ab89be5ccb 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -234,10 +234,11 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size,
+ 	     void __user **fpstate)
+ {
+ 	/* Default to using normal stack */
++	bool nested_altstack = on_sig_stack(regs->sp);
++	bool entering_altstack = false;
+ 	unsigned long math_size = 0;
+ 	unsigned long sp = regs->sp;
+ 	unsigned long buf_fx = 0;
+-	int onsigstack = on_sig_stack(sp);
+ 	int ret;
+ 
+ 	/* redzone */
+@@ -246,15 +247,23 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size,
+ 
+ 	/* This is the X/Open sanctioned signal stack switching.  */
+ 	if (ka->sa.sa_flags & SA_ONSTACK) {
+-		if (sas_ss_flags(sp) == 0)
++		/*
++		 * This checks nested_altstack via sas_ss_flags(). Sensible
++		 * programs use SS_AUTODISARM, which disables that check, and
++		 * programs that don't use SS_AUTODISARM get compatible.
++		 */
++		if (sas_ss_flags(sp) == 0) {
+ 			sp = current->sas_ss_sp + current->sas_ss_size;
++			entering_altstack = true;
++		}
+ 	} else if (IS_ENABLED(CONFIG_X86_32) &&
+-		   !onsigstack &&
++		   !nested_altstack &&
+ 		   regs->ss != __USER_DS &&
+ 		   !(ka->sa.sa_flags & SA_RESTORER) &&
+ 		   ka->sa.sa_restorer) {
+ 		/* This is the legacy signal stack switching. */
+ 		sp = (unsigned long) ka->sa.sa_restorer;
++		entering_altstack = true;
+ 	}
+ 
+ 	sp = fpu__alloc_mathframe(sp, IS_ENABLED(CONFIG_X86_32),
+@@ -267,8 +276,15 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size,
+ 	 * If we are on the alternate signal stack and would overflow it, don't.
+ 	 * Return an always-bogus address instead so we will die with SIGSEGV.
+ 	 */
+-	if (onsigstack && !likely(on_sig_stack(sp)))
++	if (unlikely((nested_altstack || entering_altstack) &&
++		     !__on_sig_stack(sp))) {
++
++		if (show_unhandled_signals && printk_ratelimit())
++			pr_info("%s[%d] overflowed sigaltstack\n",
++				current->comm, task_pid_nr(current));
++
+ 		return (void __user *)-1L;
++	}
+ 
+ 	/* save i387 and extended state */
+ 	ret = copy_fpstate_to_sigframe(*fpstate, (void __user *)buf_fx, math_size);
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index c42613cfb5ba6..ca7866d63e982 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -940,8 +940,21 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 		unsigned virt_as = max((entry->eax >> 8) & 0xff, 48U);
+ 		unsigned phys_as = entry->eax & 0xff;
+ 
+-		if (!g_phys_as)
++		/*
++		 * If TDP (NPT) is disabled use the adjusted host MAXPHYADDR as
++		 * the guest operates in the same PA space as the host, i.e.
++		 * reductions in MAXPHYADDR for memory encryption affect shadow
++		 * paging, too.
++		 *
++		 * If TDP is enabled but an explicit guest MAXPHYADDR is not
++		 * provided, use the raw bare metal MAXPHYADDR as reductions to
++		 * the HPAs do not affect GPAs.
++		 */
++		if (!tdp_enabled)
++			g_phys_as = boot_cpu_data.x86_phys_bits;
++		else if (!g_phys_as)
+ 			g_phys_as = phys_as;
++
+ 		entry->eax = g_phys_as | (virt_as << 8);
+ 		entry->edx = 0;
+ 		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
+@@ -964,12 +977,18 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 	case 0x8000001a:
+ 	case 0x8000001e:
+ 		break;
+-	/* Support memory encryption cpuid if host supports it */
+ 	case 0x8000001F:
+-		if (!kvm_cpu_cap_has(X86_FEATURE_SEV))
++		if (!kvm_cpu_cap_has(X86_FEATURE_SEV)) {
+ 			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
+-		else
++		} else {
+ 			cpuid_entry_override(entry, CPUID_8000_001F_EAX);
++
++			/*
++			 * Enumerate '0' for "PA bits reduction", the adjusted
++			 * MAXPHYADDR is enumerated directly (see 0x80000008).
++			 */
++			entry->ebx &= ~GENMASK(11, 6);
++		}
+ 		break;
+ 	/*Add support for Centaur's CPUID instruction*/
+ 	case 0xC0000000:
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 99afc6f1eed02..716266ab177f4 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -53,6 +53,8 @@
+ #include <asm/kvm_page_track.h>
+ #include "trace.h"
+ 
++#include "paging.h"
++
+ extern bool itlb_multihit_kvm_mitigation;
+ 
+ static int __read_mostly nx_huge_pages = -1;
+diff --git a/arch/x86/kvm/mmu/paging.h b/arch/x86/kvm/mmu/paging.h
+new file mode 100644
+index 0000000000000..de8ab323bb707
+--- /dev/null
++++ b/arch/x86/kvm/mmu/paging.h
+@@ -0,0 +1,14 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/* Shadow paging constants/helpers that don't need to be #undef'd. */
++#ifndef __KVM_X86_PAGING_H
++#define __KVM_X86_PAGING_H
++
++#define GUEST_PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
++#define PT64_LVL_ADDR_MASK(level) \
++	(GUEST_PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + (((level) - 1) \
++						* PT64_LEVEL_BITS))) - 1))
++#define PT64_LVL_OFFSET_MASK(level) \
++	(GUEST_PT64_BASE_ADDR_MASK & ((1ULL << (PAGE_SHIFT + (((level) - 1) \
++						* PT64_LEVEL_BITS))) - 1))
++#endif /* __KVM_X86_PAGING_H */
++
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index 52fffd68b5229..45160fe800983 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -24,7 +24,7 @@
+ 	#define pt_element_t u64
+ 	#define guest_walker guest_walker64
+ 	#define FNAME(name) paging##64_##name
+-	#define PT_BASE_ADDR_MASK PT64_BASE_ADDR_MASK
++	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
+ 	#define PT_LVL_ADDR_MASK(lvl) PT64_LVL_ADDR_MASK(lvl)
+ 	#define PT_LVL_OFFSET_MASK(lvl) PT64_LVL_OFFSET_MASK(lvl)
+ 	#define PT_INDEX(addr, level) PT64_INDEX(addr, level)
+@@ -57,7 +57,7 @@
+ 	#define pt_element_t u64
+ 	#define guest_walker guest_walkerEPT
+ 	#define FNAME(name) ept_##name
+-	#define PT_BASE_ADDR_MASK PT64_BASE_ADDR_MASK
++	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
+ 	#define PT_LVL_ADDR_MASK(lvl) PT64_LVL_ADDR_MASK(lvl)
+ 	#define PT_LVL_OFFSET_MASK(lvl) PT64_LVL_OFFSET_MASK(lvl)
+ 	#define PT_INDEX(addr, level) PT64_INDEX(addr, level)
+diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
+index bca0ba11cccf3..6925dfc389817 100644
+--- a/arch/x86/kvm/mmu/spte.h
++++ b/arch/x86/kvm/mmu/spte.h
+@@ -38,12 +38,6 @@ static_assert(SPTE_TDP_AD_ENABLED_MASK == 0);
+ #else
+ #define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
+ #endif
+-#define PT64_LVL_ADDR_MASK(level) \
+-	(PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + (((level) - 1) \
+-						* PT64_LEVEL_BITS))) - 1))
+-#define PT64_LVL_OFFSET_MASK(level) \
+-	(PT64_BASE_ADDR_MASK & ((1ULL << (PAGE_SHIFT + (((level) - 1) \
+-						* PT64_LEVEL_BITS))) - 1))
+ 
+ #define PT64_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | shadow_user_mask \
+ 			| shadow_x_mask | shadow_nx_mask | shadow_me_mask)
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index e088086f3de6a..62f1f06fe1970 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1897,7 +1897,7 @@ static int npf_interception(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+ 
+-	u64 fault_address = __sme_clr(svm->vmcb->control.exit_info_2);
++	u64 fault_address = svm->vmcb->control.exit_info_2;
+ 	u64 error_code = svm->vmcb->control.exit_info_1;
+ 
+ 	trace_kvm_page_fault(fault_address, error_code);
+@@ -2080,6 +2080,11 @@ static int nmi_interception(struct kvm_vcpu *vcpu)
+ 	return 1;
+ }
+ 
++static int smi_interception(struct kvm_vcpu *vcpu)
++{
++	return 1;
++}
++
+ static int intr_interception(struct kvm_vcpu *vcpu)
+ {
+ 	++vcpu->stat.irq_exits;
+@@ -2915,7 +2920,16 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ 			svm_disable_lbrv(vcpu);
+ 		break;
+ 	case MSR_VM_HSAVE_PA:
+-		svm->nested.hsave_msr = data;
++		/*
++		 * Old kernels did not validate the value written to
++		 * MSR_VM_HSAVE_PA.  Allow KVM_SET_MSR to set an invalid
++		 * value to allow live migrating buggy or malicious guests
++		 * originating from those kernels.
++		 */
++		if (!msr->host_initiated && !page_address_valid(vcpu, data))
++			return 1;
++
++		svm->nested.hsave_msr = data & PAGE_MASK;
+ 		break;
+ 	case MSR_VM_CR:
+ 		return svm_set_vm_cr(vcpu, data);
+@@ -3054,8 +3068,7 @@ static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
+ 	[SVM_EXIT_EXCP_BASE + GP_VECTOR]	= gp_interception,
+ 	[SVM_EXIT_INTR]				= intr_interception,
+ 	[SVM_EXIT_NMI]				= nmi_interception,
+-	[SVM_EXIT_SMI]				= kvm_emulate_as_nop,
+-	[SVM_EXIT_INIT]				= kvm_emulate_as_nop,
++	[SVM_EXIT_SMI]				= smi_interception,
+ 	[SVM_EXIT_VINTR]			= interrupt_window_interception,
+ 	[SVM_EXIT_RDPMC]			= kvm_emulate_rdpmc,
+ 	[SVM_EXIT_CPUID]			= kvm_emulate_cpuid,
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index dad282fe0dac2..b5a3de788b5fc 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -9347,6 +9347,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		set_debugreg(vcpu->arch.eff_db[3], 3);
+ 		set_debugreg(vcpu->arch.dr6, 6);
+ 		vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_RELOAD;
++	} else if (unlikely(hw_breakpoint_active())) {
++		set_debugreg(0, 7);
+ 	}
+ 
+ 	for (;;) {
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 9bcdae93f6d4f..ce0125efbaa76 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -1253,7 +1253,7 @@ static void update_io_ticks(struct block_device *part, unsigned long now,
+ 	unsigned long stamp;
+ again:
+ 	stamp = READ_ONCE(part->bd_stamp);
+-	if (unlikely(stamp != now)) {
++	if (unlikely(time_after(now, stamp))) {
+ 		if (likely(cmpxchg(&part->bd_stamp, stamp, now) == stamp))
+ 			__part_stat_add(part, io_ticks, end ? now - stamp : 1);
+ 	}
+diff --git a/block/genhd.c b/block/genhd.c
+index 9f8cb7beaad11..ad7436bd60c1b 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -402,12 +402,12 @@ void disk_uevent(struct gendisk *disk, enum kobject_action action)
+ 	xa_for_each(&disk->part_tbl, idx, part) {
+ 		if (bdev_is_partition(part) && !bdev_nr_sectors(part))
+ 			continue;
+-		if (!bdgrab(part))
++		if (!kobject_get_unless_zero(&part->bd_device.kobj))
+ 			continue;
+ 
+ 		rcu_read_unlock();
+ 		kobject_uevent(bdev_kobj(part), action);
+-		bdput(part);
++		put_device(&part->bd_device);
+ 		rcu_read_lock();
+ 	}
+ 	rcu_read_unlock();
+diff --git a/block/partitions/ldm.c b/block/partitions/ldm.c
+index d333786b5c7eb..cc86534c80ad9 100644
+--- a/block/partitions/ldm.c
++++ b/block/partitions/ldm.c
+@@ -510,7 +510,7 @@ static bool ldm_validate_partition_table(struct parsed_partitions *state)
+ 
+ 	p = (struct msdos_partition *)(data + 0x01BE);
+ 	for (i = 0; i < 4; i++, p++)
+-		if (SYS_IND (p) == LDM_PARTITION) {
++		if (p->sys_ind == LDM_PARTITION) {
+ 			result = true;
+ 			break;
+ 		}
+diff --git a/block/partitions/ldm.h b/block/partitions/ldm.h
+index d8d6beaa72c4d..8693704dcf5e9 100644
+--- a/block/partitions/ldm.h
++++ b/block/partitions/ldm.h
+@@ -84,9 +84,6 @@ struct parsed_partitions;
+ #define TOC_BITMAP1		"config"	/* Names of the two defined */
+ #define TOC_BITMAP2		"log"		/* bitmaps in the TOCBLOCK. */
+ 
+-/* Borrowed from msdos.c */
+-#define SYS_IND(p)		(get_unaligned(&(p)->sys_ind))
+-
+ struct frag {				/* VBLK Fragment handling */
+ 	struct list_head list;
+ 	u32		group;
+diff --git a/block/partitions/msdos.c b/block/partitions/msdos.c
+index 8f2fcc0802642..c94de377c5025 100644
+--- a/block/partitions/msdos.c
++++ b/block/partitions/msdos.c
+@@ -38,8 +38,6 @@
+  */
+ #include <asm/unaligned.h>
+ 
+-#define SYS_IND(p)	get_unaligned(&p->sys_ind)
+-
+ static inline sector_t nr_sects(struct msdos_partition *p)
+ {
+ 	return (sector_t)get_unaligned_le32(&p->nr_sects);
+@@ -52,9 +50,9 @@ static inline sector_t start_sect(struct msdos_partition *p)
+ 
+ static inline int is_extended_partition(struct msdos_partition *p)
+ {
+-	return (SYS_IND(p) == DOS_EXTENDED_PARTITION ||
+-		SYS_IND(p) == WIN98_EXTENDED_PARTITION ||
+-		SYS_IND(p) == LINUX_EXTENDED_PARTITION);
++	return (p->sys_ind == DOS_EXTENDED_PARTITION ||
++		p->sys_ind == WIN98_EXTENDED_PARTITION ||
++		p->sys_ind == LINUX_EXTENDED_PARTITION);
+ }
+ 
+ #define MSDOS_LABEL_MAGIC1	0x55
+@@ -193,7 +191,7 @@ static void parse_extended(struct parsed_partitions *state,
+ 
+ 			put_partition(state, state->next, next, size);
+ 			set_info(state, state->next, disksig);
+-			if (SYS_IND(p) == LINUX_RAID_PARTITION)
++			if (p->sys_ind == LINUX_RAID_PARTITION)
+ 				state->parts[state->next].flags = ADDPART_FLAG_RAID;
+ 			loopct = 0;
+ 			if (++state->next == state->limit)
+@@ -546,7 +544,7 @@ static void parse_minix(struct parsed_partitions *state,
+ 	 * a secondary MBR describing its subpartitions, or
+ 	 * the normal boot sector. */
+ 	if (msdos_magic_present(data + 510) &&
+-	    SYS_IND(p) == MINIX_PARTITION) { /* subpartition table present */
++	    p->sys_ind == MINIX_PARTITION) { /* subpartition table present */
+ 		char tmp[1 + BDEVNAME_SIZE + 10 + 9 + 1];
+ 
+ 		snprintf(tmp, sizeof(tmp), " %s%d: <minix:", state->name, origin);
+@@ -555,7 +553,7 @@ static void parse_minix(struct parsed_partitions *state,
+ 			if (state->next == state->limit)
+ 				break;
+ 			/* add each partition in use */
+-			if (SYS_IND(p) == MINIX_PARTITION)
++			if (p->sys_ind == MINIX_PARTITION)
+ 				put_partition(state, state->next++,
+ 					      start_sect(p), nr_sects(p));
+ 		}
+@@ -643,7 +641,7 @@ int msdos_partition(struct parsed_partitions *state)
+ 	p = (struct msdos_partition *) (data + 0x1be);
+ 	for (slot = 1 ; slot <= 4 ; slot++, p++) {
+ 		/* If this is an EFI GPT disk, msdos should ignore it. */
+-		if (SYS_IND(p) == EFI_PMBR_OSTYPE_EFI_GPT) {
++		if (p->sys_ind == EFI_PMBR_OSTYPE_EFI_GPT) {
+ 			put_dev_sector(sect);
+ 			return 0;
+ 		}
+@@ -685,11 +683,11 @@ int msdos_partition(struct parsed_partitions *state)
+ 		}
+ 		put_partition(state, slot, start, size);
+ 		set_info(state, slot, disksig);
+-		if (SYS_IND(p) == LINUX_RAID_PARTITION)
++		if (p->sys_ind == LINUX_RAID_PARTITION)
+ 			state->parts[slot].flags = ADDPART_FLAG_RAID;
+-		if (SYS_IND(p) == DM6_PARTITION)
++		if (p->sys_ind == DM6_PARTITION)
+ 			strlcat(state->pp_buf, "[DM]", PAGE_SIZE);
+-		if (SYS_IND(p) == EZD_PARTITION)
++		if (p->sys_ind == EZD_PARTITION)
+ 			strlcat(state->pp_buf, "[EZD]", PAGE_SIZE);
+ 	}
+ 
+@@ -698,7 +696,7 @@ int msdos_partition(struct parsed_partitions *state)
+ 	/* second pass - output for each on a separate line */
+ 	p = (struct msdos_partition *) (0x1be + data);
+ 	for (slot = 1 ; slot <= 4 ; slot++, p++) {
+-		unsigned char id = SYS_IND(p);
++		unsigned char id = p->sys_ind;
+ 		int n;
+ 
+ 		if (!nr_sects(p))
+diff --git a/drivers/acpi/acpi_amba.c b/drivers/acpi/acpi_amba.c
+index 49b781a9cd979..ab8a4e0191b19 100644
+--- a/drivers/acpi/acpi_amba.c
++++ b/drivers/acpi/acpi_amba.c
+@@ -76,6 +76,7 @@ static int amba_handler_attach(struct acpi_device *adev,
+ 		case IORESOURCE_MEM:
+ 			if (!address_found) {
+ 				dev->res = *rentry->res;
++				dev->res.name = dev_name(&dev->dev);
+ 				address_found = true;
+ 			}
+ 			break;
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index 0c884020f74b4..08a51dd285c43 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -540,6 +540,15 @@ static const struct dmi_system_id video_dmi_table[] = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V131"),
+ 		},
+ 	},
++	{
++	 .callback = video_set_report_key_events,
++	 .driver_data = (void *)((uintptr_t)REPORT_BRIGHTNESS_KEY_EVENTS),
++	 .ident = "Dell Vostro 3350",
++	 .matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3350"),
++		},
++	},
+ 	/*
+ 	 * Some machines change the brightness themselves when a brightness
+ 	 * hotkey gets pressed, despite us telling them not to. In this case
+diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
+index c1179edc0f3b8..921312a8d9576 100644
+--- a/drivers/base/arch_topology.c
++++ b/drivers/base/arch_topology.c
+@@ -18,10 +18,11 @@
+ #include <linux/cpumask.h>
+ #include <linux/init.h>
+ #include <linux/percpu.h>
++#include <linux/rcupdate.h>
+ #include <linux/sched.h>
+ #include <linux/smp.h>
+ 
+-static DEFINE_PER_CPU(struct scale_freq_data *, sft_data);
++static DEFINE_PER_CPU(struct scale_freq_data __rcu *, sft_data);
+ static struct cpumask scale_freq_counters_mask;
+ static bool scale_freq_invariant;
+ 
+@@ -66,16 +67,20 @@ void topology_set_scale_freq_source(struct scale_freq_data *data,
+ 	if (cpumask_empty(&scale_freq_counters_mask))
+ 		scale_freq_invariant = topology_scale_freq_invariant();
+ 
++	rcu_read_lock();
++
+ 	for_each_cpu(cpu, cpus) {
+-		sfd = per_cpu(sft_data, cpu);
++		sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu));
+ 
+ 		/* Use ARCH provided counters whenever possible */
+ 		if (!sfd || sfd->source != SCALE_FREQ_SOURCE_ARCH) {
+-			per_cpu(sft_data, cpu) = data;
++			rcu_assign_pointer(per_cpu(sft_data, cpu), data);
+ 			cpumask_set_cpu(cpu, &scale_freq_counters_mask);
+ 		}
+ 	}
+ 
++	rcu_read_unlock();
++
+ 	update_scale_freq_invariant(true);
+ }
+ EXPORT_SYMBOL_GPL(topology_set_scale_freq_source);
+@@ -86,22 +91,32 @@ void topology_clear_scale_freq_source(enum scale_freq_source source,
+ 	struct scale_freq_data *sfd;
+ 	int cpu;
+ 
++	rcu_read_lock();
++
+ 	for_each_cpu(cpu, cpus) {
+-		sfd = per_cpu(sft_data, cpu);
++		sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu));
+ 
+ 		if (sfd && sfd->source == source) {
+-			per_cpu(sft_data, cpu) = NULL;
++			rcu_assign_pointer(per_cpu(sft_data, cpu), NULL);
+ 			cpumask_clear_cpu(cpu, &scale_freq_counters_mask);
+ 		}
+ 	}
+ 
++	rcu_read_unlock();
++
++	/*
++	 * Make sure all references to previous sft_data are dropped to avoid
++	 * use-after-free races.
++	 */
++	synchronize_rcu();
++
+ 	update_scale_freq_invariant(false);
+ }
+ EXPORT_SYMBOL_GPL(topology_clear_scale_freq_source);
+ 
+ void topology_scale_freq_tick(void)
+ {
+-	struct scale_freq_data *sfd = *this_cpu_ptr(&sft_data);
++	struct scale_freq_data *sfd = rcu_dereference_sched(*this_cpu_ptr(&sft_data));
+ 
+ 	if (sfd)
+ 		sfd->set_freq_scale();
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index b9fa3ef5b57c0..425bae6181319 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -948,6 +948,8 @@ static int virtblk_freeze(struct virtio_device *vdev)
+ 	blk_mq_quiesce_queue(vblk->disk->queue);
+ 
+ 	vdev->config->del_vqs(vdev);
++	kfree(vblk->vqs);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index 59dfd9c421a1e..7eaf303a7a86f 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -475,7 +475,7 @@ static struct port_buffer *get_inbuf(struct port *port)
+ 
+ 	buf = virtqueue_get_buf(port->in_vq, &len);
+ 	if (buf) {
+-		buf->len = len;
++		buf->len = min_t(size_t, len, buf->size);
+ 		buf->offset = 0;
+ 		port->stats.bytes_received += len;
+ 	}
+@@ -1709,7 +1709,7 @@ static void control_work_handler(struct work_struct *work)
+ 	while ((buf = virtqueue_get_buf(vq, &len))) {
+ 		spin_unlock(&portdev->c_ivq_lock);
+ 
+-		buf->len = len;
++		buf->len = min_t(size_t, len, buf->size);
+ 		buf->offset = 0;
+ 
+ 		handle_control_message(vq->vdev, portdev, buf);
+diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
+index 2f769b1630c57..a27445f40d2c2 100644
+--- a/drivers/cpufreq/cppc_cpufreq.c
++++ b/drivers/cpufreq/cppc_cpufreq.c
+@@ -182,6 +182,16 @@ static int cppc_verify_policy(struct cpufreq_policy_data *policy)
+ 	return 0;
+ }
+ 
++static void cppc_cpufreq_put_cpu_data(struct cpufreq_policy *policy)
++{
++	struct cppc_cpudata *cpu_data = policy->driver_data;
++
++	list_del(&cpu_data->node);
++	free_cpumask_var(cpu_data->shared_cpu_map);
++	kfree(cpu_data);
++	policy->driver_data = NULL;
++}
++
+ static void cppc_cpufreq_stop_cpu(struct cpufreq_policy *policy)
+ {
+ 	struct cppc_cpudata *cpu_data = policy->driver_data;
+@@ -196,11 +206,7 @@ static void cppc_cpufreq_stop_cpu(struct cpufreq_policy *policy)
+ 		pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
+ 			 caps->lowest_perf, cpu, ret);
+ 
+-	/* Remove CPU node from list and free driver data for policy */
+-	free_cpumask_var(cpu_data->shared_cpu_map);
+-	list_del(&cpu_data->node);
+-	kfree(policy->driver_data);
+-	policy->driver_data = NULL;
++	cppc_cpufreq_put_cpu_data(policy);
+ }
+ 
+ /*
+@@ -330,7 +336,8 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ 	default:
+ 		pr_debug("Unsupported CPU co-ord type: %d\n",
+ 			 policy->shared_type);
+-		return -EFAULT;
++		ret = -EFAULT;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -345,10 +352,16 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ 	cpu_data->perf_ctrls.desired_perf =  caps->highest_perf;
+ 
+ 	ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
+-	if (ret)
++	if (ret) {
+ 		pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
+ 			 caps->highest_perf, cpu, ret);
++		goto out;
++	}
+ 
++	return 0;
++
++out:
++	cppc_cpufreq_put_cpu_data(policy);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
+index c8a4364ad3c26..ec9a87ca2dbb8 100644
+--- a/drivers/cpufreq/scmi-cpufreq.c
++++ b/drivers/cpufreq/scmi-cpufreq.c
+@@ -174,7 +174,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
+ 		nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
+ 		if (nr_opp <= 0) {
+ 			dev_err(cpu_dev, "%s: No OPPs for this device: %d\n",
+-				__func__, ret);
++				__func__, nr_opp);
+ 
+ 			ret = -ENODEV;
+ 			goto out_free_opp;
+diff --git a/drivers/dma/fsl-qdma.c b/drivers/dma/fsl-qdma.c
+index ed2ab46b15e7f..045ead46ec8fc 100644
+--- a/drivers/dma/fsl-qdma.c
++++ b/drivers/dma/fsl-qdma.c
+@@ -1235,7 +1235,11 @@ static int fsl_qdma_probe(struct platform_device *pdev)
+ 	fsl_qdma->dma_dev.device_synchronize = fsl_qdma_synchronize;
+ 	fsl_qdma->dma_dev.device_terminate_all = fsl_qdma_terminate_all;
+ 
+-	dma_set_mask(&pdev->dev, DMA_BIT_MASK(40));
++	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(40));
++	if (ret) {
++		dev_err(&pdev->dev, "dma_set_mask failure.\n");
++		return ret;
++	}
+ 
+ 	platform_set_drvdata(pdev, fsl_qdma);
+ 
+diff --git a/drivers/edac/Kconfig b/drivers/edac/Kconfig
+index 91164c5f0757d..2fc4c3f91fd54 100644
+--- a/drivers/edac/Kconfig
++++ b/drivers/edac/Kconfig
+@@ -271,7 +271,7 @@ config EDAC_PND2
+ config EDAC_IGEN6
+ 	tristate "Intel client SoC Integrated MC"
+ 	depends on PCI && PCI_MMCONFIG && ARCH_HAVE_NMI_SAFE_CMPXCHG
+-	depends on X64_64 && X86_MCE_INTEL
++	depends on X86_64 && X86_MCE_INTEL
+ 	help
+ 	  Support for error detection and correction on the Intel
+ 	  client SoC Integrated Memory Controller using In-Band ECC IP.
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index 66eb3f0e5dafc..c2983ed534949 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -335,6 +335,10 @@ static void scmi_handle_response(struct scmi_chan_info *cinfo,
+ 		return;
+ 	}
+ 
++	/* rx.len could be shrunk in the sync do_xfer, so reset to maxsz */
++	if (msg_type == MSG_TYPE_DELAYED_RESP)
++		xfer->rx.len = info->desc->max_msg_size;
++
+ 	scmi_dump_header_dbg(dev, &xfer->hdr);
+ 
+ 	info->desc->ops->fetch_response(cinfo, xfer);
+@@ -513,8 +517,12 @@ static int do_xfer_with_response(const struct scmi_protocol_handle *ph,
+ 	xfer->async_done = &async_response;
+ 
+ 	ret = do_xfer(ph, xfer);
+-	if (!ret && !wait_for_completion_timeout(xfer->async_done, timeout))
+-		ret = -ETIMEDOUT;
++	if (!ret) {
++		if (!wait_for_completion_timeout(xfer->async_done, timeout))
++			ret = -ETIMEDOUT;
++		else if (xfer->hdr.status)
++			ret = scmi_to_linux_errno(xfer->hdr.status);
++	}
+ 
+ 	xfer->async_done = NULL;
+ 	return ret;
+diff --git a/drivers/firmware/tegra/bpmp-tegra210.c b/drivers/firmware/tegra/bpmp-tegra210.c
+index ae15940a078e3..c32754055c60b 100644
+--- a/drivers/firmware/tegra/bpmp-tegra210.c
++++ b/drivers/firmware/tegra/bpmp-tegra210.c
+@@ -210,7 +210,7 @@ static int tegra210_bpmp_init(struct tegra_bpmp *bpmp)
+ 	priv->tx_irq_data = irq_get_irq_data(err);
+ 	if (!priv->tx_irq_data) {
+ 		dev_err(&pdev->dev, "failed to get IRQ data for TX IRQ\n");
+-		return err;
++		return -ENOENT;
+ 	}
+ 
+ 	err = platform_get_irq_byname(pdev, "rx");
+diff --git a/drivers/firmware/turris-mox-rwtm.c b/drivers/firmware/turris-mox-rwtm.c
+index 62f0d1a5dd324..1cf4f10874923 100644
+--- a/drivers/firmware/turris-mox-rwtm.c
++++ b/drivers/firmware/turris-mox-rwtm.c
+@@ -147,11 +147,14 @@ MOX_ATTR_RO(pubkey, "%s\n", pubkey);
+ 
+ static int mox_get_status(enum mbox_cmd cmd, u32 retval)
+ {
+-	if (MBOX_STS_CMD(retval) != cmd ||
+-	    MBOX_STS_ERROR(retval) != MBOX_STS_SUCCESS)
++	if (MBOX_STS_CMD(retval) != cmd)
+ 		return -EIO;
+ 	else if (MBOX_STS_ERROR(retval) == MBOX_STS_FAIL)
+ 		return -(int)MBOX_STS_VALUE(retval);
++	else if (MBOX_STS_ERROR(retval) == MBOX_STS_BADCMD)
++		return -ENOSYS;
++	else if (MBOX_STS_ERROR(retval) != MBOX_STS_SUCCESS)
++		return -EIO;
+ 	else
+ 		return MBOX_STS_VALUE(retval);
+ }
+@@ -201,11 +204,14 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
+ 		return ret;
+ 
+ 	ret = mox_get_status(MBOX_CMD_BOARD_INFO, reply->retval);
+-	if (ret < 0 && ret != -ENODATA) {
+-		return ret;
+-	} else if (ret == -ENODATA) {
++	if (ret == -ENODATA) {
+ 		dev_warn(rwtm->dev,
+ 			 "Board does not have manufacturing information burned!\n");
++	} else if (ret == -ENOSYS) {
++		dev_notice(rwtm->dev,
++			   "Firmware does not support the BOARD_INFO command\n");
++	} else if (ret < 0) {
++		return ret;
+ 	} else {
+ 		rwtm->serial_number = reply->status[1];
+ 		rwtm->serial_number <<= 32;
+@@ -234,10 +240,13 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
+ 		return ret;
+ 
+ 	ret = mox_get_status(MBOX_CMD_ECDSA_PUB_KEY, reply->retval);
+-	if (ret < 0 && ret != -ENODATA) {
+-		return ret;
+-	} else if (ret == -ENODATA) {
++	if (ret == -ENODATA) {
+ 		dev_warn(rwtm->dev, "Board has no public key burned!\n");
++	} else if (ret == -ENOSYS) {
++		dev_notice(rwtm->dev,
++			   "Firmware does not support the ECDSA_PUB_KEY command\n");
++	} else if (ret < 0) {
++		return ret;
+ 	} else {
+ 		u32 *s = reply->status;
+ 
+@@ -251,6 +260,27 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
+ 	return 0;
+ }
+ 
++static int check_get_random_support(struct mox_rwtm *rwtm)
++{
++	struct armada_37xx_rwtm_tx_msg msg;
++	int ret;
++
++	msg.command = MBOX_CMD_GET_RANDOM;
++	msg.args[0] = 1;
++	msg.args[1] = rwtm->buf_phys;
++	msg.args[2] = 4;
++
++	ret = mbox_send_message(rwtm->mbox, &msg);
++	if (ret < 0)
++		return ret;
++
++	ret = wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2);
++	if (ret < 0)
++		return ret;
++
++	return mox_get_status(MBOX_CMD_GET_RANDOM, rwtm->reply.retval);
++}
++
+ static int mox_hwrng_read(struct hwrng *rng, void *data, size_t max, bool wait)
+ {
+ 	struct mox_rwtm *rwtm = (struct mox_rwtm *) rng->priv;
+@@ -488,6 +518,13 @@ static int turris_mox_rwtm_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		dev_warn(dev, "Cannot read board information: %i\n", ret);
+ 
++	ret = check_get_random_support(rwtm);
++	if (ret < 0) {
++		dev_notice(dev,
++			   "Firmware does not support the GET_RANDOM command\n");
++		goto free_channel;
++	}
++
+ 	rwtm->hwrng.name = DRIVER_NAME "_hwrng";
+ 	rwtm->hwrng.read = mox_hwrng_read;
+ 	rwtm->hwrng.priv = (unsigned long) rwtm;
+@@ -505,6 +542,8 @@ static int turris_mox_rwtm_probe(struct platform_device *pdev)
+ 		goto free_channel;
+ 	}
+ 
++	dev_info(dev, "HWRNG successfully registered\n");
++
+ 	return 0;
+ 
+ free_channel:
+diff --git a/drivers/fsi/fsi-master-aspeed.c b/drivers/fsi/fsi-master-aspeed.c
+index 90dbe58ca1edc..dbad73162c833 100644
+--- a/drivers/fsi/fsi-master-aspeed.c
++++ b/drivers/fsi/fsi-master-aspeed.c
+@@ -645,6 +645,7 @@ static const struct of_device_id fsi_master_aspeed_match[] = {
+ 	{ .compatible = "aspeed,ast2600-fsi-master" },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, fsi_master_aspeed_match);
+ 
+ static struct platform_driver fsi_master_aspeed_driver = {
+ 	.driver = {
+diff --git a/drivers/fsi/fsi-master-ast-cf.c b/drivers/fsi/fsi-master-ast-cf.c
+index 57a779a89b073..70c03e304d6c8 100644
+--- a/drivers/fsi/fsi-master-ast-cf.c
++++ b/drivers/fsi/fsi-master-ast-cf.c
+@@ -1427,6 +1427,7 @@ static const struct of_device_id fsi_master_acf_match[] = {
+ 	{ .compatible = "aspeed,ast2500-cf-fsi-master" },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, fsi_master_acf_match);
+ 
+ static struct platform_driver fsi_master_acf = {
+ 	.driver = {
+diff --git a/drivers/fsi/fsi-master-gpio.c b/drivers/fsi/fsi-master-gpio.c
+index aa97c4a250cb4..7d5f29b4b595d 100644
+--- a/drivers/fsi/fsi-master-gpio.c
++++ b/drivers/fsi/fsi-master-gpio.c
+@@ -882,6 +882,7 @@ static const struct of_device_id fsi_master_gpio_match[] = {
+ 	{ .compatible = "fsi-master-gpio" },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, fsi_master_gpio_match);
+ 
+ static struct platform_driver fsi_master_gpio_driver = {
+ 	.driver = {
+diff --git a/drivers/fsi/fsi-occ.c b/drivers/fsi/fsi-occ.c
+index cb05b6dacc9d5..dc74bffedd727 100644
+--- a/drivers/fsi/fsi-occ.c
++++ b/drivers/fsi/fsi-occ.c
+@@ -636,6 +636,7 @@ static const struct of_device_id occ_match[] = {
+ 	},
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, occ_match);
+ 
+ static struct platform_driver occ_driver = {
+ 	.driver = {
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index c91d056515966..f5cfc0698799a 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -1241,6 +1241,7 @@ static const struct of_device_id pca953x_dt_ids[] = {
+ 
+ 	{ .compatible = "onnn,cat9554", .data = OF_953X( 8, PCA_INT), },
+ 	{ .compatible = "onnn,pca9654", .data = OF_953X( 8, PCA_INT), },
++	{ .compatible = "onnn,pca9655", .data = OF_953X(16, PCA_INT), },
+ 
+ 	{ .compatible = "exar,xra1202", .data = OF_953X( 8, 0), },
+ 	{ }
+diff --git a/drivers/gpio/gpio-zynq.c b/drivers/gpio/gpio-zynq.c
+index 3521c1dc3ac00..c288a7502de25 100644
+--- a/drivers/gpio/gpio-zynq.c
++++ b/drivers/gpio/gpio-zynq.c
+@@ -736,6 +736,11 @@ static int __maybe_unused zynq_gpio_suspend(struct device *dev)
+ 	struct zynq_gpio *gpio = dev_get_drvdata(dev);
+ 	struct irq_data *data = irq_get_irq_data(gpio->irq);
+ 
++	if (!data) {
++		dev_err(dev, "irq_get_irq_data() failed\n");
++		return -EINVAL;
++	}
++
+ 	if (!device_may_wakeup(dev))
+ 		disable_irq(gpio->irq);
+ 
+@@ -753,6 +758,11 @@ static int __maybe_unused zynq_gpio_resume(struct device *dev)
+ 	struct irq_data *data = irq_get_irq_data(gpio->irq);
+ 	int ret;
+ 
++	if (!data) {
++		dev_err(dev, "irq_get_irq_data() failed\n");
++		return -EINVAL;
++	}
++
+ 	if (!device_may_wakeup(dev))
+ 		enable_irq(gpio->irq);
+ 
+@@ -1001,8 +1011,11 @@ err_pm_dis:
+ static int zynq_gpio_remove(struct platform_device *pdev)
+ {
+ 	struct zynq_gpio *gpio = platform_get_drvdata(pdev);
++	int ret;
+ 
+-	pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_get_sync(&pdev->dev);
++	if (ret < 0)
++		dev_warn(&pdev->dev, "pm_runtime_get_sync() Failed\n");
+ 	gpiochip_remove(&gpio->chip);
+ 	clk_disable_unprepare(gpio->clk);
+ 	device_set_wakeup_capable(&pdev->dev, 0);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index c576902ccf8e6..9dfed9d560dc4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1148,6 +1148,7 @@ static const struct pci_device_id pciidlist[] = {
+ 	{0x1002, 0x734F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI14},
+ 
+ 	/* Renoir */
++	{0x1002, 0x15E7, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
+ 	{0x1002, 0x1636, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
+ 	{0x1002, 0x1638, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
+ 	{0x1002, 0x164C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RENOIR|AMD_IS_APU},
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 0597aeb5f0e89..9ea5b4d2fe8b8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -7846,6 +7846,97 @@ static void gfx_v10_0_update_fine_grain_clock_gating(struct amdgpu_device *adev,
+ 	}
+ }
+ 
++static void gfx_v10_0_apply_medium_grain_clock_gating_workaround(struct amdgpu_device *adev)
++{
++	uint32_t reg_data = 0;
++	uint32_t reg_idx = 0;
++	uint32_t i;
++
++	const uint32_t tcp_ctrl_regs[] = {
++		mmCGTS_SA0_WGP00_CU0_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP00_CU1_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP01_CU0_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP01_CU1_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP02_CU0_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP02_CU1_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP10_CU0_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP10_CU1_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP11_CU0_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP11_CU1_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP12_CU0_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP12_CU1_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP00_CU0_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP00_CU1_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP01_CU0_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP01_CU1_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP02_CU0_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP02_CU1_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP10_CU0_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP10_CU1_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP11_CU0_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP11_CU1_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP12_CU0_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP12_CU1_TCP_CTRL_REG
++	};
++
++	const uint32_t tcp_ctrl_regs_nv12[] = {
++		mmCGTS_SA0_WGP00_CU0_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP00_CU1_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP01_CU0_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP01_CU1_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP02_CU0_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP02_CU1_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP10_CU0_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP10_CU1_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP11_CU0_TCP_CTRL_REG,
++		mmCGTS_SA0_WGP11_CU1_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP00_CU0_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP00_CU1_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP01_CU0_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP01_CU1_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP02_CU0_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP02_CU1_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP10_CU0_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP10_CU1_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP11_CU0_TCP_CTRL_REG,
++		mmCGTS_SA1_WGP11_CU1_TCP_CTRL_REG,
++	};
++
++	const uint32_t sm_ctlr_regs[] = {
++		mmCGTS_SA0_QUAD0_SM_CTRL_REG,
++		mmCGTS_SA0_QUAD1_SM_CTRL_REG,
++		mmCGTS_SA1_QUAD0_SM_CTRL_REG,
++		mmCGTS_SA1_QUAD1_SM_CTRL_REG
++	};
++
++	if (adev->asic_type == CHIP_NAVI12) {
++		for (i = 0; i < ARRAY_SIZE(tcp_ctrl_regs_nv12); i++) {
++			reg_idx = adev->reg_offset[GC_HWIP][0][mmCGTS_SA0_WGP00_CU0_TCP_CTRL_REG_BASE_IDX] +
++				  tcp_ctrl_regs_nv12[i];
++			reg_data = RREG32(reg_idx);
++			reg_data |= CGTS_SA0_WGP00_CU0_TCP_CTRL_REG__TCPI_LS_OVERRIDE_MASK;
++			WREG32(reg_idx, reg_data);
++		}
++	} else {
++		for (i = 0; i < ARRAY_SIZE(tcp_ctrl_regs); i++) {
++			reg_idx = adev->reg_offset[GC_HWIP][0][mmCGTS_SA0_WGP00_CU0_TCP_CTRL_REG_BASE_IDX] +
++				  tcp_ctrl_regs[i];
++			reg_data = RREG32(reg_idx);
++			reg_data |= CGTS_SA0_WGP00_CU0_TCP_CTRL_REG__TCPI_LS_OVERRIDE_MASK;
++			WREG32(reg_idx, reg_data);
++		}
++	}
++
++	for (i = 0; i < ARRAY_SIZE(sm_ctlr_regs); i++) {
++		reg_idx = adev->reg_offset[GC_HWIP][0][mmCGTS_SA0_QUAD0_SM_CTRL_REG_BASE_IDX] +
++			  sm_ctlr_regs[i];
++		reg_data = RREG32(reg_idx);
++		reg_data &= ~CGTS_SA0_QUAD0_SM_CTRL_REG__SM_MODE_MASK;
++		reg_data |= 2 << CGTS_SA0_QUAD0_SM_CTRL_REG__SM_MODE__SHIFT;
++		WREG32(reg_idx, reg_data);
++	}
++}
++
+ static int gfx_v10_0_update_gfx_clock_gating(struct amdgpu_device *adev,
+ 					    bool enable)
+ {
+@@ -7862,6 +7953,10 @@ static int gfx_v10_0_update_gfx_clock_gating(struct amdgpu_device *adev,
+ 		gfx_v10_0_update_3d_clock_gating(adev, enable);
+ 		/* ===  CGCG + CGLS === */
+ 		gfx_v10_0_update_coarse_grain_clock_gating(adev, enable);
++
++		if ((adev->asic_type >= CHIP_NAVI10) &&
++		     (adev->asic_type <= CHIP_NAVI12))
++			gfx_v10_0_apply_medium_grain_clock_gating_workaround(adev);
+ 	} else {
+ 		/* CGCG/CGLS should be disabled before MGCG/MGLS
+ 		 * ===  CGCG + CGLS ===
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index d97e330a50221..7fe746c5a2b89 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -452,13 +452,9 @@ static const struct sysfs_ops procfs_stats_ops = {
+ 	.show = kfd_procfs_stats_show,
+ };
+ 
+-static struct attribute *procfs_stats_attrs[] = {
+-	NULL
+-};
+-
+ static struct kobj_type procfs_stats_type = {
+ 	.sysfs_ops = &procfs_stats_ops,
+-	.default_attrs = procfs_stats_attrs,
++	.release = kfd_procfs_kobj_release,
+ };
+ 
+ int kfd_procfs_add_queue(struct queue *q)
+@@ -984,9 +980,11 @@ static void kfd_process_wq_release(struct work_struct *work)
+ 
+ 			sysfs_remove_file(p->kobj, &pdd->attr_vram);
+ 			sysfs_remove_file(p->kobj, &pdd->attr_sdma);
+-			sysfs_remove_file(p->kobj, &pdd->attr_evict);
+-			if (pdd->dev->kfd2kgd->get_cu_occupancy != NULL)
+-				sysfs_remove_file(p->kobj, &pdd->attr_cu_occupancy);
++
++			sysfs_remove_file(pdd->kobj_stats, &pdd->attr_evict);
++			if (pdd->dev->kfd2kgd->get_cu_occupancy)
++				sysfs_remove_file(pdd->kobj_stats,
++						  &pdd->attr_cu_occupancy);
+ 			kobject_del(pdd->kobj_stats);
+ 			kobject_put(pdd->kobj_stats);
+ 			pdd->kobj_stats = NULL;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index 95a6c36cea4c6..243dd1efcdbf5 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -153,6 +153,7 @@ void pqm_uninit(struct process_queue_manager *pqm)
+ 		if (pqn->q && pqn->q->gws)
+ 			amdgpu_amdkfd_remove_gws_from_process(pqm->process->kgd_process_info,
+ 				pqn->q->gws);
++		kfd_procfs_del_queue(pqn->q);
+ 		uninit_queue(pqn->q);
+ 		list_del(&pqn->process_queue_list);
+ 		kfree(pqn);
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 159014455fab0..a68dc25a19c6d 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -94,6 +94,9 @@ static int drm_dp_mst_register_i2c_bus(struct drm_dp_mst_port *port);
+ static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_mst_port *port);
+ static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr);
+ 
++static bool drm_dp_mst_port_downstream_of_branch(struct drm_dp_mst_port *port,
++						 struct drm_dp_mst_branch *branch);
++
+ #define DBG_PREFIX "[dp_mst]"
+ 
+ #define DP_STR(x) [DP_ ## x] = #x
+@@ -2497,7 +2500,7 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
+ {
+ 	struct drm_dp_mst_topology_mgr *mgr = mstb->mgr;
+ 	struct drm_dp_mst_port *port;
+-	int old_ddps, old_input, ret, i;
++	int old_ddps, ret;
+ 	u8 new_pdt;
+ 	bool new_mcs;
+ 	bool dowork = false, create_connector = false;
+@@ -2529,7 +2532,6 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
+ 	}
+ 
+ 	old_ddps = port->ddps;
+-	old_input = port->input;
+ 	port->input = conn_stat->input_port;
+ 	port->ldps = conn_stat->legacy_device_plug_status;
+ 	port->ddps = conn_stat->displayport_device_plug_status;
+@@ -2552,28 +2554,6 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
+ 		dowork = false;
+ 	}
+ 
+-	if (!old_input && old_ddps != port->ddps && !port->ddps) {
+-		for (i = 0; i < mgr->max_payloads; i++) {
+-			struct drm_dp_vcpi *vcpi = mgr->proposed_vcpis[i];
+-			struct drm_dp_mst_port *port_validated;
+-
+-			if (!vcpi)
+-				continue;
+-
+-			port_validated =
+-				container_of(vcpi, struct drm_dp_mst_port, vcpi);
+-			port_validated =
+-				drm_dp_mst_topology_get_port_validated(mgr, port_validated);
+-			if (!port_validated) {
+-				mutex_lock(&mgr->payload_lock);
+-				vcpi->num_slots = 0;
+-				mutex_unlock(&mgr->payload_lock);
+-			} else {
+-				drm_dp_mst_topology_put_port(port_validated);
+-			}
+-		}
+-	}
+-
+ 	if (port->connector)
+ 		drm_modeset_unlock(&mgr->base.lock);
+ 	else if (create_connector)
+@@ -3383,6 +3363,7 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
+ 	struct drm_dp_mst_port *port;
+ 	int i, j;
+ 	int cur_slots = 1;
++	bool skip;
+ 
+ 	mutex_lock(&mgr->payload_lock);
+ 	for (i = 0; i < mgr->max_payloads; i++) {
+@@ -3397,6 +3378,16 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
+ 			port = container_of(vcpi, struct drm_dp_mst_port,
+ 					    vcpi);
+ 
++			mutex_lock(&mgr->lock);
++			skip = !drm_dp_mst_port_downstream_of_branch(port, mgr->mst_primary);
++			mutex_unlock(&mgr->lock);
++
++			if (skip) {
++				drm_dbg_kms(mgr->dev,
++					    "Virtual channel %d is not in current topology\n",
++					    i);
++				continue;
++			}
+ 			/* Validated ports don't matter if we're releasing
+ 			 * VCPI
+ 			 */
+@@ -3404,8 +3395,16 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr)
+ 				port = drm_dp_mst_topology_get_port_validated(
+ 				    mgr, port);
+ 				if (!port) {
+-					mutex_unlock(&mgr->payload_lock);
+-					return -EINVAL;
++					if (vcpi->num_slots == payload->num_slots) {
++						cur_slots += vcpi->num_slots;
++						payload->start_slot = req_payload.start_slot;
++						continue;
++					} else {
++						drm_dbg_kms(mgr->dev,
++							    "Fail:set payload to invalid sink");
++						mutex_unlock(&mgr->payload_lock);
++						return -EINVAL;
++					}
+ 				}
+ 				put_port = true;
+ 			}
+@@ -3489,6 +3488,7 @@ int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr)
+ 	struct drm_dp_mst_port *port;
+ 	int i;
+ 	int ret = 0;
++	bool skip;
+ 
+ 	mutex_lock(&mgr->payload_lock);
+ 	for (i = 0; i < mgr->max_payloads; i++) {
+@@ -3498,6 +3498,13 @@ int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr)
+ 
+ 		port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi);
+ 
++		mutex_lock(&mgr->lock);
++		skip = !drm_dp_mst_port_downstream_of_branch(port, mgr->mst_primary);
++		mutex_unlock(&mgr->lock);
++
++		if (skip)
++			continue;
++
+ 		DRM_DEBUG_KMS("payload %d %d\n", i, mgr->payloads[i].payload_state);
+ 		if (mgr->payloads[i].payload_state == DP_PAYLOAD_LOCAL) {
+ 			ret = drm_dp_create_payload_step2(mgr, port, mgr->proposed_vcpis[i]->vcpi, &mgr->payloads[i]);
+@@ -4577,9 +4584,18 @@ EXPORT_SYMBOL(drm_dp_mst_reset_vcpi_slots);
+ void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
+ 				struct drm_dp_mst_port *port)
+ {
++	bool skip;
++
+ 	if (!port->vcpi.vcpi)
+ 		return;
+ 
++	mutex_lock(&mgr->lock);
++	skip = !drm_dp_mst_port_downstream_of_branch(port, mgr->mst_primary);
++	mutex_unlock(&mgr->lock);
++
++	if (skip)
++		return;
++
+ 	drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi);
+ 	port->vcpi.num_slots = 0;
+ 	port->vcpi.pbn = 0;
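+
+The three hunks above apply one guard in three places: before a payload or
+VCPI is touched, check under mgr->lock whether the port is still reachable
+from mgr->mst_primary, and skip the work if the branch device it hung off has
+been unplugged. A minimal sketch of the pattern; drm_dp_mst_port_downstream_of_branch()
+is the real helper forward-declared at the top of this diff, the wrapper
+function is illustrative only:
+
+	/* Illustrative wrapper: is this port still part of the live topology? */
+	static bool port_still_in_topology(struct drm_dp_mst_topology_mgr *mgr,
+					   struct drm_dp_mst_port *port)
+	{
+		bool in_topology;
+
+		mutex_lock(&mgr->lock);		/* protects the topology tree */
+		in_topology = drm_dp_mst_port_downstream_of_branch(port,
+								   mgr->mst_primary);
+		mutex_unlock(&mgr->lock);
+
+		return in_topology;
+	}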
+diff --git a/drivers/gpu/drm/gma500/framebuffer.c b/drivers/gpu/drm/gma500/framebuffer.c
+index ebe9dccf2d830..0b8648396fb24 100644
+--- a/drivers/gpu/drm/gma500/framebuffer.c
++++ b/drivers/gpu/drm/gma500/framebuffer.c
+@@ -352,6 +352,7 @@ static struct drm_framebuffer *psb_user_framebuffer_create
+ 			 const struct drm_mode_fb_cmd2 *cmd)
+ {
+ 	struct drm_gem_object *obj;
++	struct drm_framebuffer *fb;
+ 
+ 	/*
+ 	 *	Find the GEM object and thus the gtt range object that is
+@@ -362,7 +363,11 @@ static struct drm_framebuffer *psb_user_framebuffer_create
+ 		return ERR_PTR(-ENOENT);
+ 
+ 	/* Let the core code do all the work */
+-	return psb_framebuffer_create(dev, cmd, obj);
++	fb = psb_framebuffer_create(dev, cmd, obj);
++	if (IS_ERR(fb))
++		drm_gem_object_put(obj);
++
++	return fb;
+ }
+ 
+ static int psbfb_probe(struct drm_fb_helper *fb_helper,
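+
+The hunk above plugs a GEM reference leak: psb_user_framebuffer_create()
+obtains a reference on the object from the handle lookup, and on success that
+reference is transferred to the framebuffer, but the failure path previously
+returned without dropping it. A sketch of the resulting ownership flow (the
+lookup call is the one already used earlier in this function):
+
+	obj = drm_gem_object_lookup(filp, cmd->handles[0]);	/* takes a ref */
+	if (!obj)
+		return ERR_PTR(-ENOENT);
+
+	fb = psb_framebuffer_create(dev, cmd, obj);
+	if (IS_ERR(fb))
+		drm_gem_object_put(obj);	/* failure: nothing else owns the ref */
+
+	return fb;	/* success: the framebuffer now owns the reference */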
+diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+index 74bf6fc8461fe..674ef8805c704 100644
+--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
++++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+@@ -304,10 +304,7 @@ static void __gen8_ppgtt_alloc(struct i915_address_space * const vm,
+ 			__i915_gem_object_pin_pages(pt->base);
+ 			i915_gem_object_make_unshrinkable(pt->base);
+ 
+-			if (lvl ||
+-			    gen8_pt_count(*start, end) < I915_PDES ||
+-			    intel_vgpu_active(vm->i915))
+-				fill_px(pt, vm->scratch[lvl]->encode);
++			fill_px(pt, vm->scratch[lvl]->encode);
+ 
+ 			spin_lock(&pd->lock);
+ 			if (likely(!pd->entry[idx])) {
+diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+index 8a322594210c4..f282a26131c0b 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
++++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+@@ -348,7 +348,7 @@ static struct i915_fence_reg *fence_find(struct i915_ggtt *ggtt)
+ 	if (intel_has_pending_fb_unpin(ggtt->vm.i915))
+ 		return ERR_PTR(-EAGAIN);
+ 
+-	return ERR_PTR(-EDEADLK);
++	return ERR_PTR(-ENOBUFS);
+ }
+ 
+ int __i915_vma_pin_fence(struct i915_vma *vma)
+diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c
+index 24d0c974bfd55..1b44d86af9c22 100644
+--- a/drivers/hwtracing/intel_th/core.c
++++ b/drivers/hwtracing/intel_th/core.c
+@@ -215,6 +215,22 @@ static ssize_t port_show(struct device *dev, struct device_attribute *attr,
+ 
+ static DEVICE_ATTR_RO(port);
+ 
++static void intel_th_trace_prepare(struct intel_th_device *thdev)
++{
++	struct intel_th_device *hub = to_intel_th_hub(thdev);
++	struct intel_th_driver *hubdrv = to_intel_th_driver(hub->dev.driver);
++
++	if (hub->type != INTEL_TH_SWITCH)
++		return;
++
++	if (thdev->type != INTEL_TH_OUTPUT)
++		return;
++
++	pm_runtime_get_sync(&thdev->dev);
++	hubdrv->prepare(hub, &thdev->output);
++	pm_runtime_put(&thdev->dev);
++}
++
+ static int intel_th_output_activate(struct intel_th_device *thdev)
+ {
+ 	struct intel_th_driver *thdrv =
+@@ -235,6 +251,7 @@ static int intel_th_output_activate(struct intel_th_device *thdev)
+ 	if (ret)
+ 		goto fail_put;
+ 
++	intel_th_trace_prepare(thdev);
+ 	if (thdrv->activate)
+ 		ret = thdrv->activate(thdev);
+ 	else
+diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c
+index 28509b02a0b56..b3308934a687d 100644
+--- a/drivers/hwtracing/intel_th/gth.c
++++ b/drivers/hwtracing/intel_th/gth.c
+@@ -564,6 +564,21 @@ static void gth_tscu_resync(struct gth_device *gth)
+ 	iowrite32(reg, gth->base + REG_TSCU_TSUCTRL);
+ }
+ 
++static void intel_th_gth_prepare(struct intel_th_device *thdev,
++				 struct intel_th_output *output)
++{
++	struct gth_device *gth = dev_get_drvdata(&thdev->dev);
++	int count;
++
++	/*
++	 * Wait until the output port is in reset before we start
++	 * programming it.
++	 */
++	for (count = GTH_PLE_WAITLOOP_DEPTH;
++	     count && !(gth_output_get(gth, output->port) & BIT(5)); count--)
++		cpu_relax();
++}
++
+ /**
+  * intel_th_gth_enable() - enable tracing to an output device
+  * @thdev:	GTH device
+@@ -815,6 +830,7 @@ static struct intel_th_driver intel_th_gth_driver = {
+ 	.assign		= intel_th_gth_assign,
+ 	.unassign	= intel_th_gth_unassign,
+ 	.set_output	= intel_th_gth_set_output,
++	.prepare	= intel_th_gth_prepare,
+ 	.enable		= intel_th_gth_enable,
+ 	.trig_switch	= intel_th_gth_switch,
+ 	.disable	= intel_th_gth_disable,
+diff --git a/drivers/hwtracing/intel_th/intel_th.h b/drivers/hwtracing/intel_th/intel_th.h
+index 89c67e0e1d348..0ffb42990175a 100644
+--- a/drivers/hwtracing/intel_th/intel_th.h
++++ b/drivers/hwtracing/intel_th/intel_th.h
+@@ -143,6 +143,7 @@ intel_th_output_assigned(struct intel_th_device *thdev)
+  * @remove:	remove method
+  * @assign:	match a given output type device against available outputs
+  * @unassign:	deassociate an output type device from an output port
++ * @prepare:	prepare output port for tracing
+  * @enable:	enable tracing for a given output device
+  * @disable:	disable tracing for a given output device
+  * @irq:	interrupt callback
+@@ -164,6 +165,8 @@ struct intel_th_driver {
+ 					  struct intel_th_device *othdev);
+ 	void			(*unassign)(struct intel_th_device *thdev,
+ 					    struct intel_th_device *othdev);
++	void			(*prepare)(struct intel_th_device *thdev,
++					   struct intel_th_output *output);
+ 	void			(*enable)(struct intel_th_device *thdev,
+ 					  struct intel_th_output *output);
+ 	void			(*trig_switch)(struct intel_th_device *thdev,
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 5a97e4a02fa2a..e314ccaf114ac 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -24,6 +24,7 @@
+ #include <linux/i2c-smbus.h>
+ #include <linux/idr.h>
+ #include <linux/init.h>
++#include <linux/interrupt.h>
+ #include <linux/irqflags.h>
+ #include <linux/jump_label.h>
+ #include <linux/kernel.h>
+@@ -627,6 +628,8 @@ static void i2c_device_shutdown(struct device *dev)
+ 	driver = to_i2c_driver(dev->driver);
+ 	if (driver->shutdown)
+ 		driver->shutdown(client);
++	else if (client->irq > 0)
++		disable_irq(client->irq);
+ }
+ 
+ static void i2c_client_dev_release(struct device *dev)
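+
+The i2c hunk above changes shutdown behaviour: when a client driver provides
+no ->shutdown callback, the core now masks the client's interrupt line instead
+of doing nothing, so a device left mid-transaction cannot keep raising IRQs
+while the machine goes down (the new <linux/interrupt.h> include is for
+disable_irq()). Note that disable_irq() also waits for any handler currently
+running to finish, so the line is guaranteed quiet when it returns.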
+diff --git a/drivers/iio/gyro/fxas21002c_core.c b/drivers/iio/gyro/fxas21002c_core.c
+index 645461c704547..6a85f231fd57c 100644
+--- a/drivers/iio/gyro/fxas21002c_core.c
++++ b/drivers/iio/gyro/fxas21002c_core.c
+@@ -366,14 +366,7 @@ out_unlock:
+ 
+ static int  fxas21002c_pm_get(struct fxas21002c_data *data)
+ {
+-	struct device *dev = regmap_get_device(data->regmap);
+-	int ret;
+-
+-	ret = pm_runtime_get_sync(dev);
+-	if (ret < 0)
+-		pm_runtime_put_noidle(dev);
+-
+-	return ret;
++	return pm_runtime_resume_and_get(regmap_get_device(data->regmap));
+ }
+ 
+ static int  fxas21002c_pm_put(struct fxas21002c_data *data)
+@@ -1004,7 +997,6 @@ int fxas21002c_core_probe(struct device *dev, struct regmap *regmap, int irq,
+ pm_disable:
+ 	pm_runtime_disable(dev);
+ 	pm_runtime_set_suspended(dev);
+-	pm_runtime_put_noidle(dev);
+ 
+ 	return ret;
+ }
+@@ -1018,7 +1010,6 @@ void fxas21002c_core_remove(struct device *dev)
+ 
+ 	pm_runtime_disable(dev);
+ 	pm_runtime_set_suspended(dev);
+-	pm_runtime_put_noidle(dev);
+ }
+ EXPORT_SYMBOL_GPL(fxas21002c_core_remove);
+ 
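+
+The conversions above (and the bmc150_magn and arm-smmu hunks later in this
+patch) all rest on the same API difference: pm_runtime_get_sync() bumps the
+device's usage counter even when the resume fails, so every error path needed
+a compensating pm_runtime_put_noidle(), while pm_runtime_resume_and_get()
+drops the counter itself on failure. A minimal sketch of the two forms,
+assuming a device with runtime PM enabled:
+
+	/* Old form: the usage count must be balanced by hand on failure. */
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0) {
+		pm_runtime_put_noidle(dev);
+		return ret;
+	}
+
+	/* New form: the count is already balanced when this returns < 0. */
+	ret = pm_runtime_resume_and_get(dev);
+	if (ret < 0)
+		return ret;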
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 7cedaab096a7f..e8d242ee6743b 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -15,19 +15,19 @@
+  *
+  * Supported sensors:
+  * - LSM6DS3:
+- *   - Accelerometer/Gyroscope supported ODR [Hz]: 13, 26, 52, 104, 208, 416
++ *   - Accelerometer/Gyroscope supported ODR [Hz]: 12.5, 26, 52, 104, 208, 416
+  *   - Accelerometer supported full-scale [g]: +-2/+-4/+-8/+-16
+  *   - Gyroscope supported full-scale [dps]: +-125/+-245/+-500/+-1000/+-2000
+  *   - FIFO size: 8KB
+  *
+  * - LSM6DS3H/LSM6DSL/LSM6DSM/ISM330DLC/LSM6DS3TR-C:
+- *   - Accelerometer/Gyroscope supported ODR [Hz]: 13, 26, 52, 104, 208, 416
++ *   - Accelerometer/Gyroscope supported ODR [Hz]: 12.5, 26, 52, 104, 208, 416
+  *   - Accelerometer supported full-scale [g]: +-2/+-4/+-8/+-16
+  *   - Gyroscope supported full-scale [dps]: +-125/+-245/+-500/+-1000/+-2000
+  *   - FIFO size: 4KB
+  *
+  * - LSM6DSO/LSM6DSOX/ASM330LHH/LSM6DSR/ISM330DHCX/LSM6DST/LSM6DSOP:
+- *   - Accelerometer/Gyroscope supported ODR [Hz]: 13, 26, 52, 104, 208, 416,
++ *   - Accelerometer/Gyroscope supported ODR [Hz]: 12.5, 26, 52, 104, 208, 416,
+  *     833
+  *   - Accelerometer supported full-scale [g]: +-2/+-4/+-8/+-16
+  *   - Gyroscope supported full-scale [dps]: +-125/+-245/+-500/+-1000/+-2000
+diff --git a/drivers/iio/magnetometer/bmc150_magn.c b/drivers/iio/magnetometer/bmc150_magn.c
+index d534f4f3909eb..6cf3eceaf2d97 100644
+--- a/drivers/iio/magnetometer/bmc150_magn.c
++++ b/drivers/iio/magnetometer/bmc150_magn.c
+@@ -265,7 +265,7 @@ static int bmc150_magn_set_power_state(struct bmc150_magn_data *data, bool on)
+ 	int ret;
+ 
+ 	if (on) {
+-		ret = pm_runtime_get_sync(data->dev);
++		ret = pm_runtime_resume_and_get(data->dev);
+ 	} else {
+ 		pm_runtime_mark_last_busy(data->dev);
+ 		ret = pm_runtime_put_autosuspend(data->dev);
+@@ -274,9 +274,6 @@ static int bmc150_magn_set_power_state(struct bmc150_magn_data *data, bool on)
+ 	if (ret < 0) {
+ 		dev_err(data->dev,
+ 			"failed to change power state to %d\n", on);
+-		if (on)
+-			pm_runtime_put_noidle(data->dev);
+-
+ 		return ret;
+ 	}
+ #endif
+@@ -966,12 +963,14 @@ int bmc150_magn_probe(struct device *dev, struct regmap *regmap,
+ 	ret = iio_device_register(indio_dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "unable to register iio device\n");
+-		goto err_buffer_cleanup;
++		goto err_disable_runtime_pm;
+ 	}
+ 
+ 	dev_dbg(dev, "Registered device %s\n", name);
+ 	return 0;
+ 
++err_disable_runtime_pm:
++	pm_runtime_disable(dev);
+ err_buffer_cleanup:
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ err_free_irq:
+@@ -995,7 +994,6 @@ int bmc150_magn_remove(struct device *dev)
+ 
+ 	pm_runtime_disable(dev);
+ 	pm_runtime_set_suspended(dev);
+-	pm_runtime_put_noidle(dev);
+ 
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ 
+diff --git a/drivers/input/touchscreen/hideep.c b/drivers/input/touchscreen/hideep.c
+index ddad4a82a5e55..e9547ee297564 100644
+--- a/drivers/input/touchscreen/hideep.c
++++ b/drivers/input/touchscreen/hideep.c
+@@ -361,13 +361,16 @@ static int hideep_enter_pgm(struct hideep_ts *ts)
+ 	return -EIO;
+ }
+ 
+-static void hideep_nvm_unlock(struct hideep_ts *ts)
++static int hideep_nvm_unlock(struct hideep_ts *ts)
+ {
+ 	u32 unmask_code;
++	int error;
+ 
+ 	hideep_pgm_w_reg(ts, HIDEEP_FLASH_CFG, HIDEEP_NVM_SFR_RPAGE);
+-	hideep_pgm_r_reg(ts, 0x0000000C, &unmask_code);
++	error = hideep_pgm_r_reg(ts, 0x0000000C, &unmask_code);
+ 	hideep_pgm_w_reg(ts, HIDEEP_FLASH_CFG, HIDEEP_NVM_DEFAULT_PAGE);
++	if (error)
++		return error;
+ 
+ 	/* make it unprotected code */
+ 	unmask_code &= ~HIDEEP_PROT_MODE;
+@@ -384,6 +387,8 @@ static void hideep_nvm_unlock(struct hideep_ts *ts)
+ 	NVM_W_SFR(HIDEEP_NVM_MASK_OFS, ts->nvm_mask);
+ 	SET_FLASH_HWCONTROL();
+ 	hideep_pgm_w_reg(ts, HIDEEP_FLASH_CFG, HIDEEP_NVM_DEFAULT_PAGE);
++
++	return 0;
+ }
+ 
+ static int hideep_check_status(struct hideep_ts *ts)
+@@ -462,7 +467,9 @@ static int hideep_program_nvm(struct hideep_ts *ts,
+ 	u32 addr = 0;
+ 	int error;
+ 
+-	hideep_nvm_unlock(ts);
++       error = hideep_nvm_unlock(ts);
++       if (error)
++               return error;
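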
+ 
+ 	while (ucode_len > 0) {
+ 		xfer_len = min_t(size_t, ucode_len, HIDEEP_NVM_PAGE_SIZE);
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 98b3a1c2a1813..44a427833385e 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -130,6 +130,16 @@ static int qcom_adreno_smmu_alloc_context_bank(struct arm_smmu_domain *smmu_doma
+ 	return __arm_smmu_alloc_bitmap(smmu->context_map, start, count);
+ }
+ 
++static bool qcom_adreno_can_do_ttbr1(struct arm_smmu_device *smmu)
++{
++	const struct device_node *np = smmu->dev->of_node;
++
++	if (of_device_is_compatible(np, "qcom,msm8996-smmu-v2"))
++		return false;
++
++	return true;
++}
++
+ static int qcom_adreno_smmu_init_context(struct arm_smmu_domain *smmu_domain,
+ 		struct io_pgtable_cfg *pgtbl_cfg, struct device *dev)
+ {
+@@ -144,7 +154,8 @@ static int qcom_adreno_smmu_init_context(struct arm_smmu_domain *smmu_domain,
+ 	 * be AARCH64 stage 1 but double check because the arm-smmu code assumes
+ 	 * that is the case when the TTBR1 quirk is enabled
+ 	 */
+-	if ((smmu_domain->stage == ARM_SMMU_DOMAIN_S1) &&
++	if (qcom_adreno_can_do_ttbr1(smmu_domain->smmu) &&
++	    (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) &&
+ 	    (smmu_domain->cfg.fmt == ARM_SMMU_CTX_FMT_AARCH64))
+ 		pgtbl_cfg->quirks |= IO_PGTABLE_QUIRK_ARM_TTBR1;
+ 
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+index 6f72c4d208cad..1a647e0ea3ebb 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+@@ -74,7 +74,7 @@ static bool using_legacy_binding, using_generic_binding;
+ static inline int arm_smmu_rpm_get(struct arm_smmu_device *smmu)
+ {
+ 	if (pm_runtime_enabled(smmu->dev))
+-		return pm_runtime_get_sync(smmu->dev);
++		return pm_runtime_resume_and_get(smmu->dev);
+ 
+ 	return 0;
+ }
+@@ -1271,6 +1271,7 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
+ 	u64 phys;
+ 	unsigned long va, flags;
+ 	int ret, idx = cfg->cbndx;
++	phys_addr_t addr = 0;
+ 
+ 	ret = arm_smmu_rpm_get(smmu);
+ 	if (ret < 0)
+@@ -1290,6 +1291,7 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
+ 		dev_err(dev,
+ 			"iova to phys timed out on %pad. Falling back to software table walk.\n",
+ 			&iova);
++		arm_smmu_rpm_put(smmu);
+ 		return ops->iova_to_phys(ops, iova);
+ 	}
+ 
+@@ -1298,12 +1300,14 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
+ 	if (phys & ARM_SMMU_CB_PAR_F) {
+ 		dev_err(dev, "translation fault!\n");
+ 		dev_err(dev, "PAR = 0x%llx\n", phys);
+-		return 0;
++		goto out;
+ 	}
+ 
++	addr = (phys & GENMASK_ULL(39, 12)) | (iova & 0xfff);
++out:
+ 	arm_smmu_rpm_put(smmu);
+ 
+-	return (phys & GENMASK_ULL(39, 12)) | (iova & 0xfff);
++	return addr;
+ }
+ 
+ static phys_addr_t arm_smmu_iova_to_phys(struct iommu_domain *domain,
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index be35284a20160..dad910ba9553d 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -2434,10 +2434,11 @@ __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
+ 	return 0;
+ }
+ 
+-static void domain_context_clear_one(struct intel_iommu *iommu, u8 bus, u8 devfn)
++static void domain_context_clear_one(struct device_domain_info *info, u8 bus, u8 devfn)
+ {
+-	unsigned long flags;
++	struct intel_iommu *iommu = info->iommu;
+ 	struct context_entry *context;
++	unsigned long flags;
+ 	u16 did_old;
+ 
+ 	if (!iommu)
+@@ -2449,7 +2450,16 @@ static void domain_context_clear_one(struct intel_iommu *iommu, u8 bus, u8 devfn
+ 		spin_unlock_irqrestore(&iommu->lock, flags);
+ 		return;
+ 	}
+-	did_old = context_domain_id(context);
++
++	if (sm_supported(iommu)) {
++		if (hw_pass_through && domain_type_is_si(info->domain))
++			did_old = FLPT_DEFAULT_DID;
++		else
++			did_old = info->domain->iommu_did[iommu->seq_id];
++	} else {
++		did_old = context_domain_id(context);
++	}
++
+ 	context_clear_entry(context);
+ 	__iommu_flush_cache(iommu, context, sizeof(*context));
+ 	spin_unlock_irqrestore(&iommu->lock, flags);
+@@ -2467,6 +2477,8 @@ static void domain_context_clear_one(struct intel_iommu *iommu, u8 bus, u8 devfn
+ 				 0,
+ 				 0,
+ 				 DMA_TLB_DSI_FLUSH);
++
++	__iommu_flush_dev_iotlb(info, 0, MAX_AGAW_PFN_WIDTH);
+ }
+ 
+ static inline void unlink_domain_info(struct device_domain_info *info)
+@@ -4436,9 +4448,9 @@ out_free_dmar:
+ 
+ static int domain_context_clear_one_cb(struct pci_dev *pdev, u16 alias, void *opaque)
+ {
+-	struct intel_iommu *iommu = opaque;
++	struct device_domain_info *info = opaque;
+ 
+-	domain_context_clear_one(iommu, PCI_BUS_NUM(alias), alias & 0xff);
++	domain_context_clear_one(info, PCI_BUS_NUM(alias), alias & 0xff);
+ 	return 0;
+ }
+ 
+@@ -4448,12 +4460,13 @@ static int domain_context_clear_one_cb(struct pci_dev *pdev, u16 alias, void *op
+  * devices, unbinding the driver from any one of them will possibly leave
+  * the others unable to operate.
+  */
+-static void domain_context_clear(struct intel_iommu *iommu, struct device *dev)
++static void domain_context_clear(struct device_domain_info *info)
+ {
+-	if (!iommu || !dev || !dev_is_pci(dev))
++	if (!info->iommu || !info->dev || !dev_is_pci(info->dev))
+ 		return;
+ 
+-	pci_for_each_dma_alias(to_pci_dev(dev), &domain_context_clear_one_cb, iommu);
++	pci_for_each_dma_alias(to_pci_dev(info->dev),
++			       &domain_context_clear_one_cb, info);
+ }
+ 
+ static void __dmar_remove_one_dev_info(struct device_domain_info *info)
+@@ -4470,14 +4483,13 @@ static void __dmar_remove_one_dev_info(struct device_domain_info *info)
+ 	iommu = info->iommu;
+ 	domain = info->domain;
+ 
+-	if (info->dev) {
++	if (info->dev && !dev_is_real_dma_subdevice(info->dev)) {
+ 		if (dev_is_pci(info->dev) && sm_supported(iommu))
+ 			intel_pasid_tear_down_entry(iommu, info->dev,
+ 					PASID_RID2PASID, false);
+ 
+ 		iommu_disable_dev_iotlb(info);
+-		if (!dev_is_real_dma_subdevice(info->dev))
+-			domain_context_clear(iommu, info->dev);
++		domain_context_clear(info);
+ 		intel_pasid_free_table(info->dev);
+ 	}
+ 
+diff --git a/drivers/leds/leds-tlc591xx.c b/drivers/leds/leds-tlc591xx.c
+index 5b9dfdf743ecd..cb7bd1353f9f0 100644
+--- a/drivers/leds/leds-tlc591xx.c
++++ b/drivers/leds/leds-tlc591xx.c
+@@ -148,16 +148,20 @@ static int
+ tlc591xx_probe(struct i2c_client *client,
+ 	       const struct i2c_device_id *id)
+ {
+-	struct device_node *np = dev_of_node(&client->dev), *child;
++	struct device_node *np, *child;
+ 	struct device *dev = &client->dev;
+ 	const struct tlc591xx *tlc591xx;
+ 	struct tlc591xx_priv *priv;
+ 	int err, count, reg;
+ 
+-	tlc591xx = device_get_match_data(dev);
++	np = dev_of_node(dev);
+ 	if (!np)
+ 		return -ENODEV;
+ 
++	tlc591xx = device_get_match_data(dev);
++	if (!tlc591xx)
++		return -ENODEV;
++
+ 	count = of_get_available_child_count(np);
+ 	if (!count || count > tlc591xx->max_leds)
+ 		return -EINVAL;
+diff --git a/drivers/leds/leds-turris-omnia.c b/drivers/leds/leds-turris-omnia.c
+index 2f9a289ab2456..1adfed1c0619b 100644
+--- a/drivers/leds/leds-turris-omnia.c
++++ b/drivers/leds/leds-turris-omnia.c
+@@ -274,6 +274,7 @@ static const struct i2c_device_id omnia_id[] = {
+ 	{ "omnia", 0 },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(i2c, omnia_id);
+ 
+ static struct i2c_driver omnia_leds_driver = {
+ 	.probe		= omnia_leds_probe,
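+
+MODULE_DEVICE_TABLE(i2c, omnia_id) does not change in-kernel matching; it
+exports the ID table as module alias information (modpost turns it into
+MODULE_ALIAS("i2c:omnia")) so userspace can autoload the module when a
+matching device is instantiated. The same one-liner recurs below for
+da9052-i2c, the Tegra and MediaTek PCIe drivers and the two poweroff drivers,
+while the stmpe-i2c hunk fixes the macro referencing a non-existent table
+name. A minimal sketch of the pairing, using this driver's names:
+
+	static const struct i2c_device_id omnia_id[] = {
+		{ "omnia", 0 },
+		{ }	/* sentinel */
+	};
+	MODULE_DEVICE_TABLE(i2c, omnia_id);	/* emits the "i2c:omnia" alias */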
+diff --git a/drivers/memory/atmel-ebi.c b/drivers/memory/atmel-ebi.c
+index 14386d0b5f578..c267283b01fda 100644
+--- a/drivers/memory/atmel-ebi.c
++++ b/drivers/memory/atmel-ebi.c
+@@ -600,8 +600,10 @@ static int atmel_ebi_probe(struct platform_device *pdev)
+ 				child);
+ 
+ 			ret = atmel_ebi_dev_disable(ebi, child);
+-			if (ret)
++			if (ret) {
++				of_node_put(child);
+ 				return ret;
++			}
+ 		}
+ 	}
+ 
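+
+The of_node_put() above fixes a device-tree refcount leak: the
+for_each_available_child_of_node() family of iterators take a reference on
+each child and release it when advancing, so returning (or breaking) from
+inside the loop body leaves the current child's reference held unless the
+exit path drops it explicitly. The stm32-fmc2-ebi hunks below fix the same
+pattern in three places. A minimal sketch of the rule, with a hypothetical
+setup_child() helper:
+
+	struct device_node *child;
+	int ret;
+
+	for_each_available_child_of_node(np, child) {
+		ret = setup_child(child);	/* hypothetical helper */
+		if (ret) {
+			of_node_put(child);	/* balance the iterator's ref */
+			return ret;
+		}
+	}
+	/* normal loop termination needs no put; the iterator handled it */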
+diff --git a/drivers/memory/fsl_ifc.c b/drivers/memory/fsl_ifc.c
+index 89f99b5b64504..d062c2f8250f4 100644
+--- a/drivers/memory/fsl_ifc.c
++++ b/drivers/memory/fsl_ifc.c
+@@ -97,7 +97,6 @@ static int fsl_ifc_ctrl_remove(struct platform_device *dev)
+ 	iounmap(ctrl->gregs);
+ 
+ 	dev_set_drvdata(&dev->dev, NULL);
+-	kfree(ctrl);
+ 
+ 	return 0;
+ }
+@@ -209,7 +208,8 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
+ 
+ 	dev_info(&dev->dev, "Freescale Integrated Flash Controller\n");
+ 
+-	fsl_ifc_ctrl_dev = kzalloc(sizeof(*fsl_ifc_ctrl_dev), GFP_KERNEL);
++	fsl_ifc_ctrl_dev = devm_kzalloc(&dev->dev, sizeof(*fsl_ifc_ctrl_dev),
++					GFP_KERNEL);
+ 	if (!fsl_ifc_ctrl_dev)
+ 		return -ENOMEM;
+ 
+@@ -219,8 +219,7 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
+ 	fsl_ifc_ctrl_dev->gregs = of_iomap(dev->dev.of_node, 0);
+ 	if (!fsl_ifc_ctrl_dev->gregs) {
+ 		dev_err(&dev->dev, "failed to get memory region\n");
+-		ret = -ENODEV;
+-		goto err;
++		return -ENODEV;
+ 	}
+ 
+ 	if (of_property_read_bool(dev->dev.of_node, "little-endian")) {
+@@ -295,6 +294,7 @@ err_irq:
+ 	free_irq(fsl_ifc_ctrl_dev->irq, fsl_ifc_ctrl_dev);
+ 	irq_dispose_mapping(fsl_ifc_ctrl_dev->irq);
+ err:
++	iounmap(fsl_ifc_ctrl_dev->gregs);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/memory/pl353-smc.c b/drivers/memory/pl353-smc.c
+index 9c0a284167773..b0b251bb207f3 100644
+--- a/drivers/memory/pl353-smc.c
++++ b/drivers/memory/pl353-smc.c
+@@ -407,6 +407,7 @@ static int pl353_smc_probe(struct amba_device *adev, const struct amba_id *id)
+ 		break;
+ 	}
+ 	if (!match) {
++		err = -ENODEV;
+ 		dev_err(&adev->dev, "no matching children\n");
+ 		goto out_clk_disable;
+ 	}
+diff --git a/drivers/memory/stm32-fmc2-ebi.c b/drivers/memory/stm32-fmc2-ebi.c
+index 4d5758c419c55..ffec26a99313b 100644
+--- a/drivers/memory/stm32-fmc2-ebi.c
++++ b/drivers/memory/stm32-fmc2-ebi.c
+@@ -1048,16 +1048,19 @@ static int stm32_fmc2_ebi_parse_dt(struct stm32_fmc2_ebi *ebi)
+ 		if (ret) {
+ 			dev_err(dev, "could not retrieve reg property: %d\n",
+ 				ret);
++			of_node_put(child);
+ 			return ret;
+ 		}
+ 
+ 		if (bank >= FMC2_MAX_BANKS) {
+ 			dev_err(dev, "invalid reg value: %d\n", bank);
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+ 		if (ebi->bank_assigned & BIT(bank)) {
+ 			dev_err(dev, "bank already assigned: %d\n", bank);
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -1066,6 +1069,7 @@ static int stm32_fmc2_ebi_parse_dt(struct stm32_fmc2_ebi *ebi)
+ 			if (ret) {
+ 				dev_err(dev, "setup chip select %d failed: %d\n",
+ 					bank, ret);
++				of_node_put(child);
+ 				return ret;
+ 			}
+ 		}
+diff --git a/drivers/mfd/da9052-i2c.c b/drivers/mfd/da9052-i2c.c
+index 47556d2d9abe2..8ebfc7bbe4e01 100644
+--- a/drivers/mfd/da9052-i2c.c
++++ b/drivers/mfd/da9052-i2c.c
+@@ -113,6 +113,7 @@ static const struct i2c_device_id da9052_i2c_id[] = {
+ 	{"da9053-bc", DA9053_BC},
+ 	{}
+ };
++MODULE_DEVICE_TABLE(i2c, da9052_i2c_id);
+ 
+ #ifdef CONFIG_OF
+ static const struct of_device_id dialog_dt_ids[] = {
+diff --git a/drivers/mfd/motorola-cpcap.c b/drivers/mfd/motorola-cpcap.c
+index 30d82bfe5b02f..6fb206da27298 100644
+--- a/drivers/mfd/motorola-cpcap.c
++++ b/drivers/mfd/motorola-cpcap.c
+@@ -327,6 +327,10 @@ static int cpcap_probe(struct spi_device *spi)
+ 	if (ret)
+ 		return ret;
+ 
++	/* Parent SPI controller uses DMA, CPCAP and child devices do not */
++	spi->dev.coherent_dma_mask = 0;
++	spi->dev.dma_mask = &spi->dev.coherent_dma_mask;
++
+ 	return devm_mfd_add_devices(&spi->dev, 0, cpcap_mfd_devices,
+ 				    ARRAY_SIZE(cpcap_mfd_devices), NULL, 0, NULL);
+ }
+diff --git a/drivers/mfd/stmpe-i2c.c b/drivers/mfd/stmpe-i2c.c
+index 61aa020199f57..cd2f45257dc16 100644
+--- a/drivers/mfd/stmpe-i2c.c
++++ b/drivers/mfd/stmpe-i2c.c
+@@ -109,7 +109,7 @@ static const struct i2c_device_id stmpe_i2c_id[] = {
+ 	{ "stmpe2403", STMPE2403 },
+ 	{ }
+ };
+-MODULE_DEVICE_TABLE(i2c, stmpe_id);
++MODULE_DEVICE_TABLE(i2c, stmpe_i2c_id);
+ 
+ static struct i2c_driver stmpe_i2c_driver = {
+ 	.driver = {
+diff --git a/drivers/misc/cardreader/alcor_pci.c b/drivers/misc/cardreader/alcor_pci.c
+index cd402c89189ea..de6d44a158bba 100644
+--- a/drivers/misc/cardreader/alcor_pci.c
++++ b/drivers/misc/cardreader/alcor_pci.c
+@@ -139,7 +139,13 @@ static void alcor_pci_init_check_aspm(struct alcor_pci_priv *priv)
+ 	u32 val32;
+ 
+ 	priv->pdev_cap_off    = alcor_pci_find_cap_offset(priv, priv->pdev);
+-	priv->parent_cap_off = alcor_pci_find_cap_offset(priv,
++	/*
++	 * A device might be attached to root complex directly and
++	 * priv->parent_pdev will be NULL. In this case we don't check its
++	 * capability and disable ASPM completely.
++	 */
++	if (priv->parent_pdev)
++		priv->parent_cap_off = alcor_pci_find_cap_offset(priv,
+ 							 priv->parent_pdev);
+ 
+ 	if ((priv->pdev_cap_off == 0) || (priv->parent_cap_off == 0)) {
+diff --git a/drivers/misc/habanalabs/common/device.c b/drivers/misc/habanalabs/common/device.c
+index 00e92b6788284..eea0d3e35b88d 100644
+--- a/drivers/misc/habanalabs/common/device.c
++++ b/drivers/misc/habanalabs/common/device.c
+@@ -1334,8 +1334,9 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)
+ 	}
+ 
+ 	/*
+-	 * From this point, in case of an error, add char devices and create
+-	 * sysfs nodes as part of the error flow, to allow debugging.
++	 * From this point, override rc (=0) in case of an error to allow
++	 * debugging (by adding char devices and create sysfs nodes as part of
++	 * the error flow).
+ 	 */
+ 	add_cdev_sysfs_on_err = true;
+ 
+diff --git a/drivers/misc/habanalabs/common/firmware_if.c b/drivers/misc/habanalabs/common/firmware_if.c
+index 0713b2c12d54f..86efb9fa701c9 100644
+--- a/drivers/misc/habanalabs/common/firmware_if.c
++++ b/drivers/misc/habanalabs/common/firmware_if.c
+@@ -1006,11 +1006,14 @@ int hl_fw_init_cpu(struct hl_device *hdev, u32 cpu_boot_status_reg,
+ 
+ 	if (!(hdev->fw_components & FW_TYPE_LINUX)) {
+ 		dev_info(hdev->dev, "Skip loading Linux F/W\n");
++		rc = 0;
+ 		goto out;
+ 	}
+ 
+-	if (status == CPU_BOOT_STATUS_SRAM_AVAIL)
++	if (status == CPU_BOOT_STATUS_SRAM_AVAIL) {
++		rc = 0;
+ 		goto out;
++	}
+ 
+ 	dev_info(hdev->dev,
+ 		"Loading firmware to device, may take some time...\n");
+diff --git a/drivers/misc/habanalabs/common/habanalabs_drv.c b/drivers/misc/habanalabs/common/habanalabs_drv.c
+index d15b912a347bd..91bcaaed5cd64 100644
+--- a/drivers/misc/habanalabs/common/habanalabs_drv.c
++++ b/drivers/misc/habanalabs/common/habanalabs_drv.c
+@@ -309,7 +309,7 @@ int create_hdev(struct hl_device **dev, struct pci_dev *pdev,
+ 
+ 	if (pdev)
+ 		hdev->asic_prop.fw_security_disabled =
+-				!is_asic_secured(pdev->device);
++				!is_asic_secured(hdev->asic_type);
+ 	else
+ 		hdev->asic_prop.fw_security_disabled = true;
+ 
+diff --git a/drivers/misc/habanalabs/common/mmu/mmu.c b/drivers/misc/habanalabs/common/mmu/mmu.c
+index b37189956b146..792d25b79ea68 100644
+--- a/drivers/misc/habanalabs/common/mmu/mmu.c
++++ b/drivers/misc/habanalabs/common/mmu/mmu.c
+@@ -501,12 +501,20 @@ static void hl_mmu_pa_page_with_offset(struct hl_ctx *ctx, u64 virt_addr,
+ 
+ 	if ((hops->range_type == HL_VA_RANGE_TYPE_DRAM) &&
+ 			!is_power_of_2(prop->dram_page_size)) {
+-		u32 bit;
++		unsigned long dram_page_size = prop->dram_page_size;
+ 		u64 page_offset_mask;
+ 		u64 phys_addr_mask;
++		u32 bit;
+ 
+-		bit = __ffs64((u64)prop->dram_page_size);
+-		page_offset_mask = ((1ull << bit) - 1);
++		/*
++		 * find last set bit in page_size to cover all bits of page
++		 * offset. note that 1 has to be added to bit index.
++		 * note that the internal ulong variable is used to avoid
++		 * alignment issue.
++		 */
++		bit = find_last_bit(&dram_page_size,
++					sizeof(dram_page_size) * BITS_PER_BYTE) + 1;
++		page_offset_mask = (BIT_ULL(bit) - 1);
+ 		phys_addr_mask = ~page_offset_mask;
+ 		*phys_addr = (tmp_phys_addr & phys_addr_mask) |
+ 				(virt_addr & page_offset_mask);
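+
+The arithmetic change above matters only for non-power-of-2 DRAM page sizes,
+where the lowest and highest set bits differ. A worked example with an assumed
+page size of 48 KiB (0xc000): __ffs64(0xc000) is 14, giving page_offset_mask
+= 0x3fff, which cannot express offsets up to 0xbfff inside the page;
+find_last_bit() returns 15, so bit = 16 and the mask becomes 0xffff, covering
+every possible offset. The unsigned long copy of dram_page_size exists because
+find_last_bit() operates on an unsigned long bitmap, as the new comment notes.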
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
+index 9e4a6bb3acd11..b7b281ae8d5ae 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi.c
+@@ -2834,7 +2834,7 @@ static void gaudi_init_mme_qman(struct hl_device *hdev, u32 mme_offset,
+ 
+ 		/* Configure RAZWI IRQ */
+ 		mme_id = mme_offset /
+-				(mmMME1_QM_GLBL_CFG0 - mmMME0_QM_GLBL_CFG0);
++				(mmMME1_QM_GLBL_CFG0 - mmMME0_QM_GLBL_CFG0) / 2;
+ 
+ 		mme_qm_err_cfg = MME_QMAN_GLBL_ERR_CFG_MSG_EN_MASK;
+ 		if (hdev->stop_on_err) {
+@@ -4934,6 +4934,7 @@ already_pinned:
+ 	return 0;
+ 
+ unpin_memory:
++	list_del(&userptr->job_node);
+ 	hl_unpin_host_memory(hdev, userptr);
+ free_userptr:
+ 	kfree(userptr);
+@@ -8306,8 +8307,10 @@ static int gaudi_internal_cb_pool_init(struct hl_device *hdev,
+ 			HL_VA_RANGE_TYPE_HOST, HOST_SPACE_INTERNAL_CB_SZ,
+ 			HL_MMU_VA_ALIGNMENT_NOT_NEEDED);
+ 
+-	if (!hdev->internal_cb_va_base)
++	if (!hdev->internal_cb_va_base) {
++		rc = -ENOMEM;
+ 		goto destroy_internal_cb_pool;
++	}
+ 
+ 	mutex_lock(&ctx->mmu_lock);
+ 	rc = hl_mmu_map_contiguous(ctx, hdev->internal_cb_va_base,
+diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c
+index e0ad2a269779b..d4ac10b9f25ee 100644
+--- a/drivers/misc/habanalabs/goya/goya.c
++++ b/drivers/misc/habanalabs/goya/goya.c
+@@ -3270,6 +3270,7 @@ already_pinned:
+ 	return 0;
+ 
+ unpin_memory:
++	list_del(&userptr->job_node);
+ 	hl_unpin_host_memory(hdev, userptr);
+ free_userptr:
+ 	kfree(userptr);
+diff --git a/drivers/misc/ibmasm/module.c b/drivers/misc/ibmasm/module.c
+index 4edad6c445d37..dc8a06c06c637 100644
+--- a/drivers/misc/ibmasm/module.c
++++ b/drivers/misc/ibmasm/module.c
+@@ -111,7 +111,7 @@ static int ibmasm_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	result = ibmasm_init_remote_input_dev(sp);
+ 	if (result) {
+ 		dev_err(sp->dev, "Failed to initialize remote queue\n");
+-		goto error_send_message;
++		goto error_init_remote;
+ 	}
+ 
+ 	result = ibmasm_send_driver_vpd(sp);
+@@ -131,8 +131,9 @@ static int ibmasm_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	return 0;
+ 
+ error_send_message:
+-	disable_sp_interrupts(sp->base_address);
+ 	ibmasm_free_remote_input_dev(sp);
++error_init_remote:
++	disable_sp_interrupts(sp->base_address);
+ 	free_irq(sp->irq, (void *)sp);
+ error_request_irq:
+ 	iounmap(sp->base_address);
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 2debb32a1813a..6af2279644137 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -1581,6 +1581,8 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
+ 	struct virtnet_info *vi = sq->vq->vdev->priv;
+ 	unsigned int index = vq2txq(sq->vq);
+ 	struct netdev_queue *txq;
++	int opaque;
++	bool done;
+ 
+ 	if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
+ 		/* We don't need to enable cb for XDP */
+@@ -1590,10 +1592,28 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
+ 
+ 	txq = netdev_get_tx_queue(vi->dev, index);
+ 	__netif_tx_lock(txq, raw_smp_processor_id());
++	virtqueue_disable_cb(sq->vq);
+ 	free_old_xmit_skbs(sq, true);
++
++	opaque = virtqueue_enable_cb_prepare(sq->vq);
++
++	done = napi_complete_done(napi, 0);
++
++	if (!done)
++		virtqueue_disable_cb(sq->vq);
++
+ 	__netif_tx_unlock(txq);
+ 
+-	virtqueue_napi_complete(napi, sq->vq, 0);
++	if (done) {
++		if (unlikely(virtqueue_poll(sq->vq, opaque))) {
++			if (napi_schedule_prep(napi)) {
++				__netif_tx_lock(txq, raw_smp_processor_id());
++				virtqueue_disable_cb(sq->vq);
++				__netif_tx_unlock(txq);
++				__napi_schedule(napi);
++			}
++		}
++	}
+ 
+ 	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
+ 		netif_tx_wake_queue(txq);
+@@ -3299,8 +3319,11 @@ static __maybe_unused int virtnet_restore(struct virtio_device *vdev)
+ 	virtnet_set_queues(vi, vi->curr_queue_pairs);
+ 
+ 	err = virtnet_cpu_notif_add(vi);
+-	if (err)
++	if (err) {
++		virtnet_freeze_down(vdev);
++		remove_vq_common(vi);
+ 		return err;
++	}
+ 
+ 	return 0;
+ }
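+
+The virtio_net rewrite above closes a race window between reaping used TX
+buffers and re-enabling virtqueue callbacks: a completion arriving in that
+window would previously be lost until the next transmit. The fix is the
+standard virtio event-suppression dance: disable callbacks while reaping,
+re-arm with virtqueue_enable_cb_prepare() (which returns an opaque snapshot),
+complete NAPI, then check the snapshot with virtqueue_poll() and reschedule if
+an event slipped in. A condensed sketch of the pattern, with the tx locking
+and XDP special case elided and a hypothetical reap helper:
+
+	virtqueue_disable_cb(vq);		/* no callbacks while reaping */
+	reap_used_buffers(vq);			/* hypothetical stand-in */
+
+	opaque = virtqueue_enable_cb_prepare(vq); /* re-arm, snapshot state */
+	if (napi_complete_done(napi, 0) &&
+	    unlikely(virtqueue_poll(vq, opaque)) && /* event arrived meanwhile? */
+	    napi_schedule_prep(napi))
+		__napi_schedule(napi);		/* poll again to catch it */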
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index d8aceef832846..07ee347ea3f3c 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1497,7 +1497,6 @@ static void nvmet_tcp_state_change(struct sock *sk)
+ 	case TCP_CLOSE_WAIT:
+ 	case TCP_CLOSE:
+ 		/* FALLTHRU */
+-		sk->sk_user_data = NULL;
+ 		nvmet_tcp_schedule_release_queue(queue);
+ 		break;
+ 	default:
+diff --git a/drivers/pci/controller/dwc/pcie-intel-gw.c b/drivers/pci/controller/dwc/pcie-intel-gw.c
+index f89a7d24ba28e..d15cf35fa7f2d 100644
+--- a/drivers/pci/controller/dwc/pcie-intel-gw.c
++++ b/drivers/pci/controller/dwc/pcie-intel-gw.c
+@@ -39,6 +39,10 @@
+ #define PCIE_APP_IRN_PM_TO_ACK		BIT(9)
+ #define PCIE_APP_IRN_LINK_AUTO_BW_STAT	BIT(11)
+ #define PCIE_APP_IRN_BW_MGT		BIT(12)
++#define PCIE_APP_IRN_INTA		BIT(13)
++#define PCIE_APP_IRN_INTB		BIT(14)
++#define PCIE_APP_IRN_INTC		BIT(15)
++#define PCIE_APP_IRN_INTD		BIT(16)
+ #define PCIE_APP_IRN_MSG_LTR		BIT(18)
+ #define PCIE_APP_IRN_SYS_ERR_RC		BIT(29)
+ #define PCIE_APP_INTX_OFST		12
+@@ -48,10 +52,8 @@
+ 	PCIE_APP_IRN_RX_VDM_MSG | PCIE_APP_IRN_SYS_ERR_RC | \
+ 	PCIE_APP_IRN_PM_TO_ACK | PCIE_APP_IRN_MSG_LTR | \
+ 	PCIE_APP_IRN_BW_MGT | PCIE_APP_IRN_LINK_AUTO_BW_STAT | \
+-	(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTA) | \
+-	(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTB) | \
+-	(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTC) | \
+-	(PCIE_APP_INTX_OFST + PCI_INTERRUPT_INTD))
++	PCIE_APP_IRN_INTA | PCIE_APP_IRN_INTB | \
++	PCIE_APP_IRN_INTC | PCIE_APP_IRN_INTD)
+ 
+ #define BUS_IATU_OFFSET			SZ_256M
+ #define RESET_INTERVAL_MS		100
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index fa6b12cfc0430..3ec7b29d5dc72 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -1826,7 +1826,7 @@ static int tegra_pcie_ep_raise_msi_irq(struct tegra_pcie_dw *pcie, u16 irq)
+ 	if (unlikely(irq > 31))
+ 		return -EINVAL;
+ 
+-	appl_writel(pcie, (1 << irq), APPL_MSI_CTRL_1);
++	appl_writel(pcie, BIT(irq), APPL_MSI_CTRL_1);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/pci/controller/pci-ftpci100.c b/drivers/pci/controller/pci-ftpci100.c
+index da3cd216da007..aefef1986201a 100644
+--- a/drivers/pci/controller/pci-ftpci100.c
++++ b/drivers/pci/controller/pci-ftpci100.c
+@@ -34,12 +34,12 @@
+  * Special configuration registers directly in the first few words
+  * in I/O space.
+  */
+-#define PCI_IOSIZE	0x00
+-#define PCI_PROT	0x04 /* AHB protection */
+-#define PCI_CTRL	0x08 /* PCI control signal */
+-#define PCI_SOFTRST	0x10 /* Soft reset counter and response error enable */
+-#define PCI_CONFIG	0x28 /* PCI configuration command register */
+-#define PCI_DATA	0x2C
++#define FTPCI_IOSIZE	0x00
++#define FTPCI_PROT	0x04 /* AHB protection */
++#define FTPCI_CTRL	0x08 /* PCI control signal */
++#define FTPCI_SOFTRST	0x10 /* Soft reset counter and response error enable */
++#define FTPCI_CONFIG	0x28 /* PCI configuration command register */
++#define FTPCI_DATA	0x2C
+ 
+ #define FARADAY_PCI_STATUS_CMD		0x04 /* Status and command */
+ #define FARADAY_PCI_PMC			0x40 /* Power management control */
+@@ -195,9 +195,9 @@ static int faraday_raw_pci_read_config(struct faraday_pci *p, int bus_number,
+ 			PCI_CONF_FUNCTION(PCI_FUNC(fn)) |
+ 			PCI_CONF_WHERE(config) |
+ 			PCI_CONF_ENABLE,
+-			p->base + PCI_CONFIG);
++			p->base + FTPCI_CONFIG);
+ 
+-	*value = readl(p->base + PCI_DATA);
++	*value = readl(p->base + FTPCI_DATA);
+ 
+ 	if (size == 1)
+ 		*value = (*value >> (8 * (config & 3))) & 0xFF;
+@@ -230,17 +230,17 @@ static int faraday_raw_pci_write_config(struct faraday_pci *p, int bus_number,
+ 			PCI_CONF_FUNCTION(PCI_FUNC(fn)) |
+ 			PCI_CONF_WHERE(config) |
+ 			PCI_CONF_ENABLE,
+-			p->base + PCI_CONFIG);
++			p->base + FTPCI_CONFIG);
+ 
+ 	switch (size) {
+ 	case 4:
+-		writel(value, p->base + PCI_DATA);
++		writel(value, p->base + FTPCI_DATA);
+ 		break;
+ 	case 2:
+-		writew(value, p->base + PCI_DATA + (config & 3));
++		writew(value, p->base + FTPCI_DATA + (config & 3));
+ 		break;
+ 	case 1:
+-		writeb(value, p->base + PCI_DATA + (config & 3));
++		writeb(value, p->base + FTPCI_DATA + (config & 3));
+ 		break;
+ 	default:
+ 		ret = PCIBIOS_BAD_REGISTER_NUMBER;
+@@ -469,7 +469,7 @@ static int faraday_pci_probe(struct platform_device *pdev)
+ 		if (!faraday_res_to_memcfg(io->start - win->offset,
+ 					   resource_size(io), &val)) {
+ 			/* setup I/O space size */
+-			writel(val, p->base + PCI_IOSIZE);
++			writel(val, p->base + FTPCI_IOSIZE);
+ 		} else {
+ 			dev_err(dev, "illegal IO mem size\n");
+ 			return -EINVAL;
+@@ -477,11 +477,11 @@ static int faraday_pci_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* Setup hostbridge */
+-	val = readl(p->base + PCI_CTRL);
++	val = readl(p->base + FTPCI_CTRL);
+ 	val |= PCI_COMMAND_IO;
+ 	val |= PCI_COMMAND_MEMORY;
+ 	val |= PCI_COMMAND_MASTER;
+-	writel(val, p->base + PCI_CTRL);
++	writel(val, p->base + FTPCI_CTRL);
+ 	/* Mask and clear all interrupts */
+ 	faraday_raw_pci_write_config(p, 0, 0, FARADAY_PCI_CTRL2 + 2, 2, 0xF000);
+ 	if (variant->cascaded_irq) {
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index bebe3eeebc4e1..efbf8c80bd311 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -444,7 +444,6 @@ enum hv_pcibus_state {
+ 	hv_pcibus_probed,
+ 	hv_pcibus_installed,
+ 	hv_pcibus_removing,
+-	hv_pcibus_removed,
+ 	hv_pcibus_maximum
+ };
+ 
+@@ -3243,8 +3242,9 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs)
+ 		struct pci_packet teardown_packet;
+ 		u8 buffer[sizeof(struct pci_message)];
+ 	} pkt;
+-	struct hv_dr_state *dr;
+ 	struct hv_pci_compl comp_pkt;
++	struct hv_pci_dev *hpdev, *tmp;
++	unsigned long flags;
+ 	int ret;
+ 
+ 	/*
+@@ -3256,9 +3256,16 @@ static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs)
+ 
+ 	if (!keep_devs) {
+ 		/* Delete any children which might still exist. */
+-		dr = kzalloc(sizeof(*dr), GFP_KERNEL);
+-		if (dr && hv_pci_start_relations_work(hbus, dr))
+-			kfree(dr);
++		spin_lock_irqsave(&hbus->device_list_lock, flags);
++		list_for_each_entry_safe(hpdev, tmp, &hbus->children, list_entry) {
++			list_del(&hpdev->list_entry);
++			if (hpdev->pci_slot)
++				pci_destroy_slot(hpdev->pci_slot);
++			/* For the two refs got in new_pcichild_device() */
++			put_pcichild(hpdev);
++			put_pcichild(hpdev);
++		}
++		spin_unlock_irqrestore(&hbus->device_list_lock, flags);
+ 	}
+ 
+ 	ret = hv_send_resources_released(hdev);
+@@ -3301,13 +3308,23 @@ static int hv_pci_remove(struct hv_device *hdev)
+ 
+ 	hbus = hv_get_drvdata(hdev);
+ 	if (hbus->state == hv_pcibus_installed) {
++		tasklet_disable(&hdev->channel->callback_event);
++		hbus->state = hv_pcibus_removing;
++		tasklet_enable(&hdev->channel->callback_event);
++		destroy_workqueue(hbus->wq);
++		hbus->wq = NULL;
++		/*
++		 * At this point, no work is running or can be scheduled
++		 * on hbus->wq. We can't race with hv_pci_devices_present()
++		 * or hv_pci_eject_device(), it's safe to proceed.
++		 */
++
+ 		/* Remove the bus from PCI's point of view. */
+ 		pci_lock_rescan_remove();
+ 		pci_stop_root_bus(hbus->pci_bus);
+ 		hv_pci_remove_slots(hbus);
+ 		pci_remove_root_bus(hbus->pci_bus);
+ 		pci_unlock_rescan_remove();
+-		hbus->state = hv_pcibus_removed;
+ 	}
+ 
+ 	ret = hv_pci_bus_exit(hdev, false);
+@@ -3322,7 +3339,6 @@ static int hv_pci_remove(struct hv_device *hdev)
+ 	irq_domain_free_fwnode(hbus->sysdata.fwnode);
+ 	put_hvpcibus(hbus);
+ 	wait_for_completion(&hbus->remove_event);
+-	destroy_workqueue(hbus->wq);
+ 
+ 	hv_put_dom_num(hbus->sysdata.domain);
+ 
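+
+The hv_pci ordering above is what makes the removal race-free: the channel
+callback tasklet is disabled just long enough to publish hv_pcibus_removing,
+so no new work items can be queued, and destroy_workqueue() then drains
+whatever was already queued before the bus is torn down. With children now
+deleted synchronously in hv_pci_bus_exit() instead of via a queued relations
+work item, the now-unreachable hv_pcibus_removed state can be dropped from the
+enum.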
+diff --git a/drivers/pci/controller/pci-tegra.c b/drivers/pci/controller/pci-tegra.c
+index 8069bd9232d45..c979229a6d0df 100644
+--- a/drivers/pci/controller/pci-tegra.c
++++ b/drivers/pci/controller/pci-tegra.c
+@@ -2539,6 +2539,7 @@ static const struct of_device_id tegra_pcie_of_match[] = {
+ 	{ .compatible = "nvidia,tegra20-pcie", .data = &tegra20_pcie },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, tegra_pcie_of_match);
+ 
+ static void *tegra_pcie_ports_seq_start(struct seq_file *s, loff_t *pos)
+ {
+diff --git a/drivers/pci/controller/pcie-iproc-msi.c b/drivers/pci/controller/pcie-iproc-msi.c
+index eede4e8f3f75a..81b4effeb1309 100644
+--- a/drivers/pci/controller/pcie-iproc-msi.c
++++ b/drivers/pci/controller/pcie-iproc-msi.c
+@@ -171,7 +171,7 @@ static struct irq_chip iproc_msi_irq_chip = {
+ 
+ static struct msi_domain_info iproc_msi_domain_info = {
+ 	.flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+-		MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX,
++		MSI_FLAG_PCI_MSIX,
+ 	.chip = &iproc_msi_irq_chip,
+ };
+ 
+@@ -250,20 +250,23 @@ static int iproc_msi_irq_domain_alloc(struct irq_domain *domain,
+ 	struct iproc_msi *msi = domain->host_data;
+ 	int hwirq, i;
+ 
++	if (msi->nr_cpus > 1 && nr_irqs > 1)
++		return -EINVAL;
++
+ 	mutex_lock(&msi->bitmap_lock);
+ 
+-	/* Allocate 'nr_cpus' number of MSI vectors each time */
+-	hwirq = bitmap_find_next_zero_area(msi->bitmap, msi->nr_msi_vecs, 0,
+-					   msi->nr_cpus, 0);
+-	if (hwirq < msi->nr_msi_vecs) {
+-		bitmap_set(msi->bitmap, hwirq, msi->nr_cpus);
+-	} else {
+-		mutex_unlock(&msi->bitmap_lock);
+-		return -ENOSPC;
+-	}
++	/*
++	 * Allocate 'nr_irqs' multiplied by 'nr_cpus' number of MSI vectors
++	 * each time
++	 */
++	hwirq = bitmap_find_free_region(msi->bitmap, msi->nr_msi_vecs,
++					order_base_2(msi->nr_cpus * nr_irqs));
+ 
+ 	mutex_unlock(&msi->bitmap_lock);
+ 
++	if (hwirq < 0)
++		return -ENOSPC;
++
+ 	for (i = 0; i < nr_irqs; i++) {
+ 		irq_domain_set_info(domain, virq + i, hwirq + i,
+ 				    &iproc_msi_bottom_irq_chip,
+@@ -284,7 +287,8 @@ static void iproc_msi_irq_domain_free(struct irq_domain *domain,
+ 	mutex_lock(&msi->bitmap_lock);
+ 
+ 	hwirq = hwirq_to_canonical_hwirq(msi, data->hwirq);
+-	bitmap_clear(msi->bitmap, hwirq, msi->nr_cpus);
++	bitmap_release_region(msi->bitmap, hwirq,
++			      order_base_2(msi->nr_cpus * nr_irqs));
+ 
+ 	mutex_unlock(&msi->bitmap_lock);
+ 
+@@ -539,6 +543,9 @@ int iproc_msi_init(struct iproc_pcie *pcie, struct device_node *node)
+ 	mutex_init(&msi->bitmap_lock);
+ 	msi->nr_cpus = num_possible_cpus();
+ 
++	if (msi->nr_cpus == 1)
++		iproc_msi_domain_info.flags |=  MSI_FLAG_MULTI_PCI_MSI;
++
+ 	msi->nr_irqs = of_irq_count(node);
+ 	if (!msi->nr_irqs) {
+ 		dev_err(pcie->dev, "found no MSI GIC interrupt\n");
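+
+Two constraints drive the iproc rework above. Multi-MSI requires a
+contiguous, power-of-two-aligned block of vectors, which
+bitmap_find_next_zero_area() did not guarantee, so allocation moves to
+bitmap_find_free_region(), whose region order must also cover the per-CPU
+spreading: for example, 2 MSIs with 4 possible CPUs asks for
+order_base_2(4 * 2) = 3, an aligned block of 8 vectors. And because that
+per-CPU spreading makes the vectors seen by one device non-contiguous on SMP,
+MSI_FLAG_MULTI_PCI_MSI is now advertised only when there is a single CPU. A
+sketch of the allocate/release pairing:
+
+	/* Region order covers nr_cpus * nr_irqs vectors, naturally aligned. */
+	hwirq = bitmap_find_free_region(bitmap, nbits,
+					order_base_2(nr_cpus * nr_irqs));
+	if (hwirq < 0)
+		return -ENOSPC;	/* failure is a negative return, not nbits */
+
+	bitmap_release_region(bitmap, hwirq, order_base_2(nr_cpus * nr_irqs));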
+diff --git a/drivers/pci/controller/pcie-mediatek-gen3.c b/drivers/pci/controller/pcie-mediatek-gen3.c
+index 3c5b97716d409..f3aeb8d4eaca8 100644
+--- a/drivers/pci/controller/pcie-mediatek-gen3.c
++++ b/drivers/pci/controller/pcie-mediatek-gen3.c
+@@ -1012,6 +1012,7 @@ static const struct of_device_id mtk_pcie_of_match[] = {
+ 	{ .compatible = "mediatek,mt8192-pcie" },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, mtk_pcie_of_match);
+ 
+ static struct platform_driver mtk_pcie_driver = {
+ 	.probe = mtk_pcie_probe,
+diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c
+index f1d08a1b1591f..78d04ac29cd56 100644
+--- a/drivers/pci/controller/pcie-rockchip-host.c
++++ b/drivers/pci/controller/pcie-rockchip-host.c
+@@ -592,10 +592,6 @@ static int rockchip_pcie_parse_host_dt(struct rockchip_pcie *rockchip)
+ 	if (err)
+ 		return err;
+ 
+-	err = rockchip_pcie_setup_irq(rockchip);
+-	if (err)
+-		return err;
+-
+ 	rockchip->vpcie12v = devm_regulator_get_optional(dev, "vpcie12v");
+ 	if (IS_ERR(rockchip->vpcie12v)) {
+ 		if (PTR_ERR(rockchip->vpcie12v) != -ENODEV)
+@@ -973,8 +969,6 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
+ 	if (err)
+ 		goto err_vpcie;
+ 
+-	rockchip_pcie_enable_interrupts(rockchip);
+-
+ 	err = rockchip_pcie_init_irq_domain(rockchip);
+ 	if (err < 0)
+ 		goto err_deinit_port;
+@@ -992,6 +986,12 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
+ 	bridge->sysdata = rockchip;
+ 	bridge->ops = &rockchip_pcie_ops;
+ 
++	err = rockchip_pcie_setup_irq(rockchip);
++	if (err)
++		goto err_remove_irq_domain;
++
++	rockchip_pcie_enable_interrupts(rockchip);
++
+ 	err = pci_host_probe(bridge);
+ 	if (err < 0)
+ 		goto err_remove_irq_domain;
+diff --git a/drivers/pci/ecam.c b/drivers/pci/ecam.c
+index d2a1920bb0557..1c40d2506aef3 100644
+--- a/drivers/pci/ecam.c
++++ b/drivers/pci/ecam.c
+@@ -32,7 +32,7 @@ struct pci_config_window *pci_ecam_create(struct device *dev,
+ 	struct pci_config_window *cfg;
+ 	unsigned int bus_range, bus_range_max, bsz;
+ 	struct resource *conflict;
+-	int i, err;
++	int err;
+ 
+ 	if (busr->start > busr->end)
+ 		return ERR_PTR(-EINVAL);
+@@ -50,6 +50,7 @@ struct pci_config_window *pci_ecam_create(struct device *dev,
+ 	cfg->busr.start = busr->start;
+ 	cfg->busr.end = busr->end;
+ 	cfg->busr.flags = IORESOURCE_BUS;
++	cfg->bus_shift = bus_shift;
+ 	bus_range = resource_size(&cfg->busr);
+ 	bus_range_max = resource_size(cfgres) >> bus_shift;
+ 	if (bus_range > bus_range_max) {
+@@ -77,13 +78,6 @@ struct pci_config_window *pci_ecam_create(struct device *dev,
+ 		cfg->winp = kcalloc(bus_range, sizeof(*cfg->winp), GFP_KERNEL);
+ 		if (!cfg->winp)
+ 			goto err_exit_malloc;
+-		for (i = 0; i < bus_range; i++) {
+-			cfg->winp[i] =
+-				pci_remap_cfgspace(cfgres->start + i * bsz,
+-						   bsz);
+-			if (!cfg->winp[i])
+-				goto err_exit_iomap;
+-		}
+ 	} else {
+ 		cfg->win = pci_remap_cfgspace(cfgres->start, bus_range * bsz);
+ 		if (!cfg->win)
+@@ -129,6 +123,44 @@ void pci_ecam_free(struct pci_config_window *cfg)
+ }
+ EXPORT_SYMBOL_GPL(pci_ecam_free);
+ 
++static int pci_ecam_add_bus(struct pci_bus *bus)
++{
++	struct pci_config_window *cfg = bus->sysdata;
++	unsigned int bsz = 1 << cfg->bus_shift;
++	unsigned int busn = bus->number;
++	phys_addr_t start;
++
++	if (!per_bus_mapping)
++		return 0;
++
++	if (busn < cfg->busr.start || busn > cfg->busr.end)
++		return -EINVAL;
++
++	busn -= cfg->busr.start;
++	start = cfg->res.start + busn * bsz;
++
++	cfg->winp[busn] = pci_remap_cfgspace(start, bsz);
++	if (!cfg->winp[busn])
++		return -ENOMEM;
++
++	return 0;
++}
++
++static void pci_ecam_remove_bus(struct pci_bus *bus)
++{
++	struct pci_config_window *cfg = bus->sysdata;
++	unsigned int busn = bus->number;
++
++	if (!per_bus_mapping || busn < cfg->busr.start || busn > cfg->busr.end)
++		return;
++
++	busn -= cfg->busr.start;
++	if (cfg->winp[busn]) {
++		iounmap(cfg->winp[busn]);
++		cfg->winp[busn] = NULL;
++	}
++}
++
+ /*
+  * Function to implement the pci_ops ->map_bus method
+  */
+@@ -167,6 +199,8 @@ EXPORT_SYMBOL_GPL(pci_ecam_map_bus);
+ /* ECAM ops */
+ const struct pci_ecam_ops pci_generic_ecam_ops = {
+ 	.pci_ops	= {
++		.add_bus	= pci_ecam_add_bus,
++		.remove_bus	= pci_ecam_remove_bus,
+ 		.map_bus	= pci_ecam_map_bus,
+ 		.read		= pci_generic_config_read,
+ 		.write		= pci_generic_config_write,
+@@ -178,6 +212,8 @@ EXPORT_SYMBOL_GPL(pci_generic_ecam_ops);
+ /* ECAM ops for 32-bit access only (non-compliant) */
+ const struct pci_ecam_ops pci_32b_ops = {
+ 	.pci_ops	= {
++		.add_bus	= pci_ecam_add_bus,
++		.remove_bus	= pci_ecam_remove_bus,
+ 		.map_bus	= pci_ecam_map_bus,
+ 		.read		= pci_generic_config_read32,
+ 		.write		= pci_generic_config_write32,
+@@ -187,6 +223,8 @@ const struct pci_ecam_ops pci_32b_ops = {
+ /* ECAM ops for 32-bit read only (non-compliant) */
+ const struct pci_ecam_ops pci_32b_read_ops = {
+ 	.pci_ops	= {
++		.add_bus	= pci_ecam_add_bus,
++		.remove_bus	= pci_ecam_remove_bus,
+ 		.map_bus	= pci_ecam_map_bus,
+ 		.read		= pci_generic_config_read32,
+ 		.write		= pci_generic_config_write,
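+
+The ECAM rework above converts per-bus config-space mapping from eager to
+lazy: pci_ecam_create() used to ioremap a window for every bus number in the
+range up front, whereas the new ->add_bus()/->remove_bus() pci_ops map a
+bus's window only when that bus is actually scanned and unmap it when it goes
+away. Storing bus_shift in the config window is what lets the callbacks size
+and locate each window. The address arithmetic, in sketch form:
+
+	unsigned int bsz = 1 << cfg->bus_shift;		/* cfg bytes per bus */
+	unsigned int busn = bus->number - cfg->busr.start;
+	phys_addr_t start = cfg->res.start + busn * bsz;
+
+	cfg->winp[busn] = pci_remap_cfgspace(start, bsz); /* NULL on failure */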
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index fb3840e222add..9d06939736c0f 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -563,6 +563,32 @@ void pciehp_power_off_slot(struct controller *ctrl)
+ 		 PCI_EXP_SLTCTL_PWR_OFF);
+ }
+ 
++static void pciehp_ignore_dpc_link_change(struct controller *ctrl,
++					  struct pci_dev *pdev, int irq)
++{
++	/*
++	 * Ignore link changes which occurred while waiting for DPC recovery.
++	 * Could be several if DPC triggered multiple times consecutively.
++	 */
++	synchronize_hardirq(irq);
++	atomic_and(~PCI_EXP_SLTSTA_DLLSC, &ctrl->pending_events);
++	if (pciehp_poll_mode)
++		pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
++					   PCI_EXP_SLTSTA_DLLSC);
++	ctrl_info(ctrl, "Slot(%s): Link Down/Up ignored (recovered by DPC)\n",
++		  slot_name(ctrl));
++
++	/*
++	 * If the link is unexpectedly down after successful recovery,
++	 * the corresponding link change may have been ignored above.
++	 * Synthesize it to ensure that it is acted on.
++	 */
++	down_read(&ctrl->reset_lock);
++	if (!pciehp_check_link_active(ctrl))
++		pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC);
++	up_read(&ctrl->reset_lock);
++}
++
+ static irqreturn_t pciehp_isr(int irq, void *dev_id)
+ {
+ 	struct controller *ctrl = (struct controller *)dev_id;
+@@ -706,6 +732,16 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
+ 				      PCI_EXP_SLTCTL_ATTN_IND_ON);
+ 	}
+ 
++	/*
++	 * Ignore Link Down/Up events caused by Downstream Port Containment
++	 * if recovery from the error succeeded.
++	 */
++	if ((events & PCI_EXP_SLTSTA_DLLSC) && pci_dpc_recovered(pdev) &&
++	    ctrl->state == ON_STATE) {
++		events &= ~PCI_EXP_SLTSTA_DLLSC;
++		pciehp_ignore_dpc_link_change(ctrl, pdev, irq);
++	}
++
+ 	/*
+ 	 * Disable requests have higher priority than Presence Detect Changed
+ 	 * or Data Link Layer State Changed events.
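+
+DPC recovery necessarily takes the link down and back up, and pciehp would
+otherwise interpret the resulting Data Link Layer State Changed events as a
+surprise unplug and tear the slot down. The new check asks the DPC core, via
+pci_dpc_recovered() (implemented in the dpc.c hunks near the end of this
+patch), whether the flap was DPC-induced and successfully recovered; if so the
+DLLSC events are swallowed, any that latched during recovery are cleared, and
+a fresh DLLSC is synthesized only if the link really is down afterwards, so a
+genuine removal during recovery is still acted on.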
+diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
+index 1963826303631..c49c13a5fedc7 100644
+--- a/drivers/pci/p2pdma.c
++++ b/drivers/pci/p2pdma.c
+@@ -308,10 +308,41 @@ static const struct pci_p2pdma_whitelist_entry {
+ 	{}
+ };
+ 
++/*
++ * This lookup function tries to find the PCI device corresponding to a given
++ * host bridge.
++ *
++ * It assumes the host bridge device is the first PCI device in the
++ * bus->devices list and that the devfn is 00.0. These assumptions should hold
++ * for all the devices in the whitelist above.
++ *
++ * This function is equivalent to pci_get_slot(host->bus, 0), however it does
++ * not take the pci_bus_sem lock seeing __host_bridge_whitelist() must not
++ * sleep.
++ *
++ * For this to be safe, the caller should hold a reference to a device on the
++ * bridge, which should ensure the host_bridge device will not be freed
++ * or removed from the head of the devices list.
++ */
++static struct pci_dev *pci_host_bridge_dev(struct pci_host_bridge *host)
++{
++	struct pci_dev *root;
++
++	root = list_first_entry_or_null(&host->bus->devices,
++					struct pci_dev, bus_list);
++
++	if (!root)
++		return NULL;
++	if (root->devfn != PCI_DEVFN(0, 0))
++		return NULL;
++
++	return root;
++}
++
+ static bool __host_bridge_whitelist(struct pci_host_bridge *host,
+ 				    bool same_host_bridge)
+ {
+-	struct pci_dev *root = pci_get_slot(host->bus, PCI_DEVFN(0, 0));
++	struct pci_dev *root = pci_host_bridge_dev(host);
+ 	const struct pci_p2pdma_whitelist_entry *entry;
+ 	unsigned short vendor, device;
+ 
+@@ -320,7 +351,6 @@ static bool __host_bridge_whitelist(struct pci_host_bridge *host,
+ 
+ 	vendor = root->vendor;
+ 	device = root->device;
+-	pci_dev_put(root);
+ 
+ 	for (entry = pci_p2pdma_whitelist; entry->vendor; entry++) {
+ 		if (vendor != entry->vendor || device != entry->device)
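+
+The p2pdma change above removes a sleeping call from a context that must not
+sleep: pci_get_slot() takes pci_bus_sem, a sleeping lock, but
+__host_bridge_whitelist() cannot afford that. The replacement,
+pci_host_bridge_dev(), simply peeks at the first entry of host->bus->devices
+and insists on devfn 00.0, which the comment argues is safe only because the
+caller holds a reference to a device behind that bridge, pinning the list head
+in place. Since the lockless lookup takes no reference, the old pci_dev_put()
+is dropped along with it.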
+diff --git a/drivers/pci/pci-label.c b/drivers/pci/pci-label.c
+index c32f3b7540e87..76b381cf70b27 100644
+--- a/drivers/pci/pci-label.c
++++ b/drivers/pci/pci-label.c
+@@ -145,7 +145,7 @@ static void dsm_label_utf16s_to_utf8s(union acpi_object *obj, char *buf)
+ 	len = utf16s_to_utf8s((const wchar_t *)obj->buffer.pointer,
+ 			      obj->buffer.length,
+ 			      UTF16_LITTLE_ENDIAN,
+-			      buf, PAGE_SIZE);
++			      buf, PAGE_SIZE - 1);
+ 	buf[len] = '\n';
+ }
+ 
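+
+The one-character change above is an off-by-one fix: utf16s_to_utf8s() writes
+up to the given maximum and returns the number of bytes produced, so with a
+limit of PAGE_SIZE a fully used buffer would make the following
+buf[len] = '\n' store at offset PAGE_SIZE, one byte past the sysfs page.
+Capping the conversion at PAGE_SIZE - 1 bounds len to PAGE_SIZE - 1, so the
+newline always lands inside the buffer.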
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 37c913bbc6e17..dac6922553b40 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -385,6 +385,8 @@ static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
+ 
+ /* pci_dev priv_flags */
+ #define PCI_DEV_ADDED 0
++#define PCI_DPC_RECOVERED 1
++#define PCI_DPC_RECOVERING 2
+ 
+ static inline void pci_dev_assign_added(struct pci_dev *dev, bool added)
+ {
+@@ -439,10 +441,12 @@ void pci_restore_dpc_state(struct pci_dev *dev);
+ void pci_dpc_init(struct pci_dev *pdev);
+ void dpc_process_error(struct pci_dev *pdev);
+ pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
++bool pci_dpc_recovered(struct pci_dev *pdev);
+ #else
+ static inline void pci_save_dpc_state(struct pci_dev *dev) {}
+ static inline void pci_restore_dpc_state(struct pci_dev *dev) {}
+ static inline void pci_dpc_init(struct pci_dev *pdev) {}
++static inline bool pci_dpc_recovered(struct pci_dev *pdev) { return false; }
+ #endif
+ 
+ #ifdef CONFIG_PCIEPORTBUS
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index e05aba86a3179..c556e7beafe38 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -71,6 +71,58 @@ void pci_restore_dpc_state(struct pci_dev *dev)
+ 	pci_write_config_word(dev, dev->dpc_cap + PCI_EXP_DPC_CTL, *cap);
+ }
+ 
++static DECLARE_WAIT_QUEUE_HEAD(dpc_completed_waitqueue);
++
++#ifdef CONFIG_HOTPLUG_PCI_PCIE
++static bool dpc_completed(struct pci_dev *pdev)
++{
++	u16 status;
++
++	pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_STATUS, &status);
++	if ((status != 0xffff) && (status & PCI_EXP_DPC_STATUS_TRIGGER))
++		return false;
++
++	if (test_bit(PCI_DPC_RECOVERING, &pdev->priv_flags))
++		return false;
++
++	return true;
++}
++
++/**
++ * pci_dpc_recovered - whether DPC triggered and has recovered successfully
++ * @pdev: PCI device
++ *
++ * Return true if DPC was triggered for @pdev and has recovered successfully.
++ * Wait for recovery if it hasn't completed yet.  Called from the PCIe hotplug
++ * driver to recognize and ignore Link Down/Up events caused by DPC.
++ */
++bool pci_dpc_recovered(struct pci_dev *pdev)
++{
++	struct pci_host_bridge *host;
++
++	if (!pdev->dpc_cap)
++		return false;
++
++	/*
++	 * Synchronization between hotplug and DPC is not supported
++	 * if DPC is owned by firmware and EDR is not enabled.
++	 */
++	host = pci_find_host_bridge(pdev->bus);
++	if (!host->native_dpc && !IS_ENABLED(CONFIG_PCIE_EDR))
++		return false;
++
++	/*
++	 * Need a timeout in case DPC never completes due to failure of
++	 * dpc_wait_rp_inactive().  The spec doesn't mandate a time limit,
++	 * but reports indicate that DPC completes within 4 seconds.
++	 */
++	wait_event_timeout(dpc_completed_waitqueue, dpc_completed(pdev),
++			   msecs_to_jiffies(4000));
++
++	return test_and_clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
++}
++#endif /* CONFIG_HOTPLUG_PCI_PCIE */
++
+ static int dpc_wait_rp_inactive(struct pci_dev *pdev)
+ {
+ 	unsigned long timeout = jiffies + HZ;
+@@ -91,8 +143,11 @@ static int dpc_wait_rp_inactive(struct pci_dev *pdev)
+ 
+ pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
+ {
++	pci_ers_result_t ret;
+ 	u16 cap;
+ 
++	set_bit(PCI_DPC_RECOVERING, &pdev->priv_flags);
++
+ 	/*
+ 	 * DPC disables the Link automatically in hardware, so it has
+ 	 * already been reset by the time we get here.
+@@ -106,18 +161,27 @@ pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
+ 	if (!pcie_wait_for_link(pdev, false))
+ 		pci_info(pdev, "Data Link Layer Link Active not cleared in 1000 msec\n");
+ 
+-	if (pdev->dpc_rp_extensions && dpc_wait_rp_inactive(pdev))
+-		return PCI_ERS_RESULT_DISCONNECT;
++	if (pdev->dpc_rp_extensions && dpc_wait_rp_inactive(pdev)) {
++		clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
++		ret = PCI_ERS_RESULT_DISCONNECT;
++		goto out;
++	}
+ 
+ 	pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
+ 			      PCI_EXP_DPC_STATUS_TRIGGER);
+ 
+ 	if (!pcie_wait_for_link(pdev, true)) {
+ 		pci_info(pdev, "Data Link Layer Link Active not set in 1000 msec\n");
+-		return PCI_ERS_RESULT_DISCONNECT;
++		clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
++		ret = PCI_ERS_RESULT_DISCONNECT;
++	} else {
++		set_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
++		ret = PCI_ERS_RESULT_RECOVERED;
+ 	}
+-
+-	return PCI_ERS_RESULT_RECOVERED;
++out:
++	clear_bit(PCI_DPC_RECOVERING, &pdev->priv_flags);
++	wake_up_all(&dpc_completed_waitqueue);
++	return ret;
+ }
+ 
+ static void dpc_process_rp_pio_error(struct pci_dev *pdev)
+diff --git a/drivers/phy/intel/phy-intel-keembay-emmc.c b/drivers/phy/intel/phy-intel-keembay-emmc.c
+index eb7c635ed89ae..0eb11ac7c2e2e 100644
+--- a/drivers/phy/intel/phy-intel-keembay-emmc.c
++++ b/drivers/phy/intel/phy-intel-keembay-emmc.c
+@@ -95,7 +95,8 @@ static int keembay_emmc_phy_power(struct phy *phy, bool on_off)
+ 	else
+ 		freqsel = 0x0;
+ 
+-	if (mhz < 50 || mhz > 200)
++	/* Check the eMMC clock rate */
++	if (mhz > 175)
+ 		dev_warn(&phy->dev, "Unsupported rate: %d MHz\n", mhz);
+ 
+ 	/*
+diff --git a/drivers/power/reset/gpio-poweroff.c b/drivers/power/reset/gpio-poweroff.c
+index c5067eb753706..1c5af2fef1423 100644
+--- a/drivers/power/reset/gpio-poweroff.c
++++ b/drivers/power/reset/gpio-poweroff.c
+@@ -90,6 +90,7 @@ static const struct of_device_id of_gpio_poweroff_match[] = {
+ 	{ .compatible = "gpio-poweroff", },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, of_gpio_poweroff_match);
+ 
+ static struct platform_driver gpio_poweroff_driver = {
+ 	.probe = gpio_poweroff_probe,
+diff --git a/drivers/power/reset/regulator-poweroff.c b/drivers/power/reset/regulator-poweroff.c
+index f697088e0ad1a..20701203935f0 100644
+--- a/drivers/power/reset/regulator-poweroff.c
++++ b/drivers/power/reset/regulator-poweroff.c
+@@ -64,6 +64,7 @@ static const struct of_device_id of_regulator_poweroff_match[] = {
+ 	{ .compatible = "regulator-poweroff", },
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, of_regulator_poweroff_match);
+ 
+ static struct platform_driver regulator_poweroff_driver = {
+ 	.probe = regulator_poweroff_probe,
+diff --git a/drivers/power/supply/Kconfig b/drivers/power/supply/Kconfig
+index e696364126f1d..20a2f93252f91 100644
+--- a/drivers/power/supply/Kconfig
++++ b/drivers/power/supply/Kconfig
+@@ -712,7 +712,8 @@ config BATTERY_GOLDFISH
+ 
+ config BATTERY_RT5033
+ 	tristate "RT5033 fuel gauge support"
+-	depends on MFD_RT5033
++	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  This adds support for battery fuel gauge in Richtek RT5033 PMIC.
+ 	  The fuelgauge calculates and determines the battery state of charge
+diff --git a/drivers/power/supply/ab8500-bm.h b/drivers/power/supply/ab8500-bm.h
+index 41c69a4f2a1fc..871bdc1f5cbda 100644
+--- a/drivers/power/supply/ab8500-bm.h
++++ b/drivers/power/supply/ab8500-bm.h
+@@ -507,8 +507,6 @@ struct abx500_bm_data {
+ 	int bkup_bat_v;
+ 	int bkup_bat_i;
+ 	bool autopower_cfg;
+-	bool ac_enabled;
+-	bool usb_enabled;
+ 	bool no_maintenance;
+ 	bool capacity_scaling;
+ 	bool chg_unknown_bat;
+@@ -730,4 +728,8 @@ int ab8500_bm_of_probe(struct device *dev,
+ 		       struct device_node *np,
+ 		       struct abx500_bm_data *bm);
+ 
++extern struct platform_driver ab8500_fg_driver;
++extern struct platform_driver ab8500_btemp_driver;
++extern struct platform_driver abx500_chargalg_driver;
++
+ #endif /* _AB8500_CHARGER_H_ */
+diff --git a/drivers/power/supply/ab8500_btemp.c b/drivers/power/supply/ab8500_btemp.c
+index fdfcd59fc43e3..4df305c767c5c 100644
+--- a/drivers/power/supply/ab8500_btemp.c
++++ b/drivers/power/supply/ab8500_btemp.c
+@@ -13,6 +13,7 @@
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/device.h>
++#include <linux/component.h>
+ #include <linux/interrupt.h>
+ #include <linux/delay.h>
+ #include <linux/slab.h>
+@@ -932,26 +933,6 @@ static int __maybe_unused ab8500_btemp_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int ab8500_btemp_remove(struct platform_device *pdev)
+-{
+-	struct ab8500_btemp *di = platform_get_drvdata(pdev);
+-	int i, irq;
+-
+-	/* Disable interrupts */
+-	for (i = 0; i < ARRAY_SIZE(ab8500_btemp_irq); i++) {
+-		irq = platform_get_irq_byname(pdev, ab8500_btemp_irq[i].name);
+-		free_irq(irq, di);
+-	}
+-
+-	/* Delete the work queue */
+-	destroy_workqueue(di->btemp_wq);
+-
+-	flush_scheduled_work();
+-	power_supply_unregister(di->btemp_psy);
+-
+-	return 0;
+-}
+-
+ static char *supply_interface[] = {
+ 	"ab8500_chargalg",
+ 	"ab8500_fg",
+@@ -966,6 +947,40 @@ static const struct power_supply_desc ab8500_btemp_desc = {
+ 	.external_power_changed	= ab8500_btemp_external_power_changed,
+ };
+ 
++static int ab8500_btemp_bind(struct device *dev, struct device *master,
++			     void *data)
++{
++	struct ab8500_btemp *di = dev_get_drvdata(dev);
++
++	/* Create a work queue for the btemp */
++	di->btemp_wq =
++		alloc_workqueue("ab8500_btemp_wq", WQ_MEM_RECLAIM, 0);
++	if (di->btemp_wq == NULL) {
++		dev_err(dev, "failed to create work queue\n");
++		return -ENOMEM;
++	}
++
++	/* Kick off periodic temperature measurements */
++	ab8500_btemp_periodic(di, true);
++
++	return 0;
++}
++
++static void ab8500_btemp_unbind(struct device *dev, struct device *master,
++				void *data)
++{
++	struct ab8500_btemp *di = dev_get_drvdata(dev);
++
++	/* Delete the work queue */
++	destroy_workqueue(di->btemp_wq);
++	flush_scheduled_work();
++}
++
++static const struct component_ops ab8500_btemp_component_ops = {
++	.bind = ab8500_btemp_bind,
++	.unbind = ab8500_btemp_unbind,
++};
++
+ static int ab8500_btemp_probe(struct platform_device *pdev)
+ {
+ 	struct device_node *np = pdev->dev.of_node;
+@@ -1011,14 +1026,6 @@ static int ab8500_btemp_probe(struct platform_device *pdev)
+ 	psy_cfg.num_supplicants = ARRAY_SIZE(supply_interface);
+ 	psy_cfg.drv_data = di;
+ 
+-	/* Create a work queue for the btemp */
+-	di->btemp_wq =
+-		alloc_workqueue("ab8500_btemp_wq", WQ_MEM_RECLAIM, 0);
+-	if (di->btemp_wq == NULL) {
+-		dev_err(dev, "failed to create work queue\n");
+-		return -ENOMEM;
+-	}
+-
+ 	/* Init work for measuring temperature periodically */
+ 	INIT_DEFERRABLE_WORK(&di->btemp_periodic_work,
+ 		ab8500_btemp_periodic_work);
+@@ -1031,7 +1038,7 @@ static int ab8500_btemp_probe(struct platform_device *pdev)
+ 		AB8500_BTEMP_HIGH_TH, &val);
+ 	if (ret < 0) {
+ 		dev_err(dev, "%s ab8500 read failed\n", __func__);
+-		goto free_btemp_wq;
++		return ret;
+ 	}
+ 	switch (val) {
+ 	case BTEMP_HIGH_TH_57_0:
+@@ -1050,30 +1057,28 @@ static int ab8500_btemp_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* Register BTEMP power supply class */
+-	di->btemp_psy = power_supply_register(dev, &ab8500_btemp_desc,
+-					      &psy_cfg);
++	di->btemp_psy = devm_power_supply_register(dev, &ab8500_btemp_desc,
++						   &psy_cfg);
+ 	if (IS_ERR(di->btemp_psy)) {
+ 		dev_err(dev, "failed to register BTEMP psy\n");
+-		ret = PTR_ERR(di->btemp_psy);
+-		goto free_btemp_wq;
++		return PTR_ERR(di->btemp_psy);
+ 	}
+ 
+ 	/* Register interrupts */
+ 	for (i = 0; i < ARRAY_SIZE(ab8500_btemp_irq); i++) {
+ 		irq = platform_get_irq_byname(pdev, ab8500_btemp_irq[i].name);
+-		if (irq < 0) {
+-			ret = irq;
+-			goto free_irq;
+-		}
++		if (irq < 0)
++			return irq;
+ 
+-		ret = request_threaded_irq(irq, NULL, ab8500_btemp_irq[i].isr,
++		ret = devm_request_threaded_irq(dev, irq, NULL,
++			ab8500_btemp_irq[i].isr,
+ 			IRQF_SHARED | IRQF_NO_SUSPEND | IRQF_ONESHOT,
+ 			ab8500_btemp_irq[i].name, di);
+ 
+ 		if (ret) {
+ 			dev_err(dev, "failed to request %s IRQ %d: %d\n"
+ 				, ab8500_btemp_irq[i].name, irq, ret);
+-			goto free_irq;
++			return ret;
+ 		}
+ 		dev_dbg(dev, "Requested %s IRQ %d: %d\n",
+ 			ab8500_btemp_irq[i].name, irq, ret);
+@@ -1081,23 +1086,16 @@ static int ab8500_btemp_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, di);
+ 
+-	/* Kick off periodic temperature measurements */
+-	ab8500_btemp_periodic(di, true);
+ 	list_add_tail(&di->node, &ab8500_btemp_list);
+ 
+-	return ret;
++	return component_add(dev, &ab8500_btemp_component_ops);
++}
+ 
+-free_irq:
+-	/* We also have to free all successfully registered irqs */
+-	for (i = i - 1; i >= 0; i--) {
+-		irq = platform_get_irq_byname(pdev, ab8500_btemp_irq[i].name);
+-		free_irq(irq, di);
+-	}
++static int ab8500_btemp_remove(struct platform_device *pdev)
++{
++	component_del(&pdev->dev, &ab8500_btemp_component_ops);
+ 
+-	power_supply_unregister(di->btemp_psy);
+-free_btemp_wq:
+-	destroy_workqueue(di->btemp_wq);
+-	return ret;
++	return 0;
+ }
+ 
+ static SIMPLE_DEV_PM_OPS(ab8500_btemp_pm_ops, ab8500_btemp_suspend, ab8500_btemp_resume);
+@@ -1106,8 +1104,9 @@ static const struct of_device_id ab8500_btemp_match[] = {
+ 	{ .compatible = "stericsson,ab8500-btemp", },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, ab8500_btemp_match);
+ 
+-static struct platform_driver ab8500_btemp_driver = {
++struct platform_driver ab8500_btemp_driver = {
+ 	.probe = ab8500_btemp_probe,
+ 	.remove = ab8500_btemp_remove,
+ 	.driver = {
+@@ -1116,20 +1115,6 @@ static struct platform_driver ab8500_btemp_driver = {
+ 		.pm = &ab8500_btemp_pm_ops,
+ 	},
+ };
+-
+-static int __init ab8500_btemp_init(void)
+-{
+-	return platform_driver_register(&ab8500_btemp_driver);
+-}
+-
+-static void __exit ab8500_btemp_exit(void)
+-{
+-	platform_driver_unregister(&ab8500_btemp_driver);
+-}
+-
+-device_initcall(ab8500_btemp_init);
+-module_exit(ab8500_btemp_exit);
+-
+ MODULE_LICENSE("GPL v2");
+ MODULE_AUTHOR("Johan Palsson, Karl Komierowski, Arun R Murthy");
+ MODULE_ALIAS("platform:ab8500-btemp");
+diff --git a/drivers/power/supply/ab8500_charger.c b/drivers/power/supply/ab8500_charger.c
+index a9be10eb2c22e..98127e3591f92 100644
+--- a/drivers/power/supply/ab8500_charger.c
++++ b/drivers/power/supply/ab8500_charger.c
+@@ -13,6 +13,7 @@
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/device.h>
++#include <linux/component.h>
+ #include <linux/interrupt.h>
+ #include <linux/delay.h>
+ #include <linux/notifier.h>
+@@ -414,6 +415,14 @@ disable_otp:
+ static void ab8500_power_supply_changed(struct ab8500_charger *di,
+ 					struct power_supply *psy)
+ {
++	/*
++	 * This happens if we get notifications or interrupts and
++	 * the platform has been configured not to support one or
++	 * the other type of charging.
++	 */
++	if (!psy)
++		return;
++
+ 	if (di->autopower_cfg) {
+ 		if (!di->usb.charger_connected &&
+ 		    !di->ac.charger_connected &&
+@@ -440,7 +449,15 @@ static void ab8500_charger_set_usb_connected(struct ab8500_charger *di,
+ 		if (!connected)
+ 			di->flags.vbus_drop_end = false;
+ 
+-		sysfs_notify(&di->usb_chg.psy->dev.kobj, NULL, "present");
++		/*
++		 * Sometimes the platform is configured not to support
++	 * USB charging and no psy has been created, but we will
++	 * still get these notifications.
++		 */
++		if (di->usb_chg.psy) {
++			sysfs_notify(&di->usb_chg.psy->dev.kobj, NULL,
++				     "present");
++		}
+ 
+ 		if (connected) {
+ 			mutex_lock(&di->charger_attached_mutex);
+@@ -3276,10 +3293,74 @@ static struct notifier_block charger_nb = {
+ 	.notifier_call = ab8500_external_charger_prepare,
+ };
+ 
+-static int ab8500_charger_remove(struct platform_device *pdev)
++static char *supply_interface[] = {
++	"ab8500_chargalg",
++	"ab8500_fg",
++	"ab8500_btemp",
++};
++
++static const struct power_supply_desc ab8500_ac_chg_desc = {
++	.name		= "ab8500_ac",
++	.type		= POWER_SUPPLY_TYPE_MAINS,
++	.properties	= ab8500_charger_ac_props,
++	.num_properties	= ARRAY_SIZE(ab8500_charger_ac_props),
++	.get_property	= ab8500_charger_ac_get_property,
++};
++
++static const struct power_supply_desc ab8500_usb_chg_desc = {
++	.name		= "ab8500_usb",
++	.type		= POWER_SUPPLY_TYPE_USB,
++	.properties	= ab8500_charger_usb_props,
++	.num_properties	= ARRAY_SIZE(ab8500_charger_usb_props),
++	.get_property	= ab8500_charger_usb_get_property,
++};
++
++static int ab8500_charger_bind(struct device *dev)
+ {
+-	struct ab8500_charger *di = platform_get_drvdata(pdev);
+-	int i, irq, ret;
++	struct ab8500_charger *di = dev_get_drvdata(dev);
++	int ch_stat;
++	int ret;
++
++	/* Create a work queue for the charger */
++	di->charger_wq = alloc_ordered_workqueue("ab8500_charger_wq",
++						 WQ_MEM_RECLAIM);
++	if (di->charger_wq == NULL) {
++		dev_err(dev, "failed to create work queue\n");
++		return -ENOMEM;
++	}
++
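++	/* Queue handling for any charger that is already connected */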
++	ch_stat = ab8500_charger_detect_chargers(di, false);
++
++	if (ch_stat & AC_PW_CONN) {
++		if (is_ab8500(di->parent))
++			queue_delayed_work(di->charger_wq,
++					   &di->ac_charger_attached_work,
++					   HZ);
++	}
++	if (ch_stat & USB_PW_CONN) {
++		if (is_ab8500(di->parent))
++			queue_delayed_work(di->charger_wq,
++					   &di->usb_charger_attached_work,
++					   HZ);
++		di->vbus_detected = true;
++		di->vbus_detected_start = true;
++		queue_work(di->charger_wq,
++			   &di->detect_usb_type_work);
++	}
++
++	ret = component_bind_all(dev, di);
++	if (ret) {
++		dev_err(dev, "can't bind component devices\n");
++		return ret;
++	}
++
++	return 0;
++}
++
++static void ab8500_charger_unbind(struct device *dev)
++{
++	struct ab8500_charger *di = dev_get_drvdata(dev);
++	int ret;
+ 
+ 	/* Disable AC charging */
+ 	ab8500_charger_ac_en(&di->ac_chg, false, 0, 0);
+@@ -3287,68 +3368,47 @@ static int ab8500_charger_remove(struct platform_device *pdev)
+ 	/* Disable USB charging */
+ 	ab8500_charger_usb_en(&di->usb_chg, false, 0, 0);
+ 
+-	/* Disable interrupts */
+-	for (i = 0; i < ARRAY_SIZE(ab8500_charger_irq); i++) {
+-		irq = platform_get_irq_byname(pdev, ab8500_charger_irq[i].name);
+-		free_irq(irq, di);
+-	}
+-
+ 	/* Backup battery voltage and current disable */
+ 	ret = abx500_mask_and_set_register_interruptible(di->dev,
+ 		AB8500_RTC, AB8500_RTC_CTRL_REG, RTC_BUP_CH_ENA, 0);
+ 	if (ret < 0)
+ 		dev_err(di->dev, "%s mask and set failed\n", __func__);
+ 
+-	usb_unregister_notifier(di->usb_phy, &di->nb);
+-	usb_put_phy(di->usb_phy);
+-
+ 	/* Delete the work queue */
+ 	destroy_workqueue(di->charger_wq);
+ 
+-	/* Unregister external charger enable notifier */
+-	if (!di->ac_chg.enabled)
+-		blocking_notifier_chain_unregister(
+-			&charger_notifier_list, &charger_nb);
+-
+ 	flush_scheduled_work();
+-	if (di->usb_chg.enabled)
+-		power_supply_unregister(di->usb_chg.psy);
+ 
+-	if (di->ac_chg.enabled && !di->ac_chg.external)
+-		power_supply_unregister(di->ac_chg.psy);
+-
+-	return 0;
++	/* Unbind fg, btemp, algorithm */
++	component_unbind_all(dev, di);
+ }
+ 
+-static char *supply_interface[] = {
+-	"ab8500_chargalg",
+-	"ab8500_fg",
+-	"ab8500_btemp",
++static const struct component_master_ops ab8500_charger_comp_ops = {
++	.bind = ab8500_charger_bind,
++	.unbind = ab8500_charger_unbind,
+ };
+ 
+-static const struct power_supply_desc ab8500_ac_chg_desc = {
+-	.name		= "ab8500_ac",
+-	.type		= POWER_SUPPLY_TYPE_MAINS,
+-	.properties	= ab8500_charger_ac_props,
+-	.num_properties	= ARRAY_SIZE(ab8500_charger_ac_props),
+-	.get_property	= ab8500_charger_ac_get_property,
++static struct platform_driver *const ab8500_charger_component_drivers[] = {
++	&ab8500_fg_driver,
++	&ab8500_btemp_driver,
++	&abx500_chargalg_driver,
+ };
+ 
+-static const struct power_supply_desc ab8500_usb_chg_desc = {
+-	.name		= "ab8500_usb",
+-	.type		= POWER_SUPPLY_TYPE_USB,
+-	.properties	= ab8500_charger_usb_props,
+-	.num_properties	= ARRAY_SIZE(ab8500_charger_usb_props),
+-	.get_property	= ab8500_charger_usb_get_property,
+-};
++static int ab8500_charger_compare_dev(struct device *dev, void *data)
++{
++	return dev == data;
++}
+ 
+ static int ab8500_charger_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np = pdev->dev.of_node;
++	struct device *dev = &pdev->dev;
++	struct device_node *np = dev->of_node;
++	struct component_match *match = NULL;
+ 	struct power_supply_config ac_psy_cfg = {}, usb_psy_cfg = {};
+ 	struct ab8500_charger *di;
+-	int irq, i, charger_status, ret = 0, ch_stat;
+-	struct device *dev = &pdev->dev;
++	int charger_status;
++	int i, irq;
++	int ret;
+ 
+ 	di = devm_kzalloc(dev, sizeof(*di), GFP_KERNEL);
+ 	if (!di)
+@@ -3393,6 +3453,38 @@ static int ab8500_charger_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	/*
++	 * VDD ADC supply needs to be enabled from this driver when there
++	 * is a charger connected to avoid erroneous BTEMP_HIGH/LOW
++	 * interrupts during charging
++	 */
++	di->regu = devm_regulator_get(dev, "vddadc");
++	if (IS_ERR(di->regu)) {
++		ret = PTR_ERR(di->regu);
++		dev_err(dev, "failed to get vddadc regulator\n");
++		return ret;
++	}
++
++	/* Request interrupts */
++	for (i = 0; i < ARRAY_SIZE(ab8500_charger_irq); i++) {
++		irq = platform_get_irq_byname(pdev, ab8500_charger_irq[i].name);
++		if (irq < 0)
++			return irq;
++
++		ret = devm_request_threaded_irq(dev,
++			irq, NULL, ab8500_charger_irq[i].isr,
++			IRQF_SHARED | IRQF_NO_SUSPEND | IRQF_ONESHOT,
++			ab8500_charger_irq[i].name, di);
++
++		if (ret != 0) {
++			dev_err(dev, "failed to request %s IRQ %d: %d\n"
++				, ab8500_charger_irq[i].name, irq, ret);
++			return ret;
++		}
++		dev_dbg(dev, "Requested %s IRQ %d: %d\n",
++			ab8500_charger_irq[i].name, irq, ret);
++	}
++
+ 	/* initialize lock */
+ 	spin_lock_init(&di->usb_state.usb_lock);
+ 	mutex_init(&di->usb_ipt_crnt_lock);
+@@ -3419,14 +3511,16 @@ static int ab8500_charger_probe(struct platform_device *pdev)
+ 	di->ac_chg.max_out_curr =
+ 		di->bm->chg_output_curr[di->bm->n_chg_out_curr - 1];
+ 	di->ac_chg.wdt_refresh = CHG_WD_INTERVAL;
+-	di->ac_chg.enabled = di->bm->ac_enabled;
++	/*
++	 * The AB8505 only supports USB charging. If we are not the
++	 * AB8505, register an AC charger.
++	 *
++	 * TODO: if this should be opt-in, add DT properties for this.
++	 */
++	if (!is_ab8505(di->parent))
++		di->ac_chg.enabled = true;
+ 	di->ac_chg.external = false;
+ 
+-	/*notifier for external charger enabling*/
+-	if (!di->ac_chg.enabled)
+-		blocking_notifier_chain_register(
+-			&charger_notifier_list, &charger_nb);
+-
+ 	/* USB supply */
+ 	/* ux500_charger sub-class */
+ 	di->usb_chg.ops.enable = &ab8500_charger_usb_en;
+@@ -3438,18 +3532,9 @@ static int ab8500_charger_probe(struct platform_device *pdev)
+ 	di->usb_chg.max_out_curr =
+ 		di->bm->chg_output_curr[di->bm->n_chg_out_curr - 1];
+ 	di->usb_chg.wdt_refresh = CHG_WD_INTERVAL;
+-	di->usb_chg.enabled = di->bm->usb_enabled;
+ 	di->usb_chg.external = false;
+ 	di->usb_state.usb_current = -1;
+ 
+-	/* Create a work queue for the charger */
+-	di->charger_wq = alloc_ordered_workqueue("ab8500_charger_wq",
+-						 WQ_MEM_RECLAIM);
+-	if (di->charger_wq == NULL) {
+-		dev_err(dev, "failed to create work queue\n");
+-		return -ENOMEM;
+-	}
+-
+ 	mutex_init(&di->charger_attached_mutex);
+ 
+ 	/* Init work for HW failure check */
+@@ -3500,61 +3585,32 @@ static int ab8500_charger_probe(struct platform_device *pdev)
+ 	INIT_WORK(&di->check_usb_thermal_prot_work,
+ 		ab8500_charger_check_usb_thermal_prot_work);
+ 
+-	/*
+-	 * VDD ADC supply needs to be enabled from this driver when there
+-	 * is a charger connected to avoid erroneous BTEMP_HIGH/LOW
+-	 * interrupts during charging
+-	 */
+-	di->regu = devm_regulator_get(dev, "vddadc");
+-	if (IS_ERR(di->regu)) {
+-		ret = PTR_ERR(di->regu);
+-		dev_err(dev, "failed to get vddadc regulator\n");
+-		goto free_charger_wq;
+-	}
+-
+ 
+ 	/* Initialize OVV, and other registers */
+ 	ret = ab8500_charger_init_hw_registers(di);
+ 	if (ret) {
+ 		dev_err(dev, "failed to initialize ABB registers\n");
+-		goto free_charger_wq;
++		return ret;
+ 	}
+ 
+ 	/* Register AC charger class */
+ 	if (di->ac_chg.enabled) {
+-		di->ac_chg.psy = power_supply_register(dev,
++		di->ac_chg.psy = devm_power_supply_register(dev,
+ 						       &ab8500_ac_chg_desc,
+ 						       &ac_psy_cfg);
+ 		if (IS_ERR(di->ac_chg.psy)) {
+ 			dev_err(dev, "failed to register AC charger\n");
+-			ret = PTR_ERR(di->ac_chg.psy);
+-			goto free_charger_wq;
++			return PTR_ERR(di->ac_chg.psy);
+ 		}
+ 	}
+ 
+ 	/* Register USB charger class */
+-	if (di->usb_chg.enabled) {
+-		di->usb_chg.psy = power_supply_register(dev,
+-							&ab8500_usb_chg_desc,
+-							&usb_psy_cfg);
+-		if (IS_ERR(di->usb_chg.psy)) {
+-			dev_err(dev, "failed to register USB charger\n");
+-			ret = PTR_ERR(di->usb_chg.psy);
+-			goto free_ac;
+-		}
+-	}
+-
+-	di->usb_phy = usb_get_phy(USB_PHY_TYPE_USB2);
+-	if (IS_ERR_OR_NULL(di->usb_phy)) {
+-		dev_err(dev, "failed to get usb transceiver\n");
+-		ret = -EINVAL;
+-		goto free_usb;
+-	}
+-	di->nb.notifier_call = ab8500_charger_usb_notifier_call;
+-	ret = usb_register_notifier(di->usb_phy, &di->nb);
+-	if (ret) {
+-		dev_err(dev, "failed to register usb notifier\n");
+-		goto put_usb_phy;
++	di->usb_chg.psy = devm_power_supply_register(dev,
++						     &ab8500_usb_chg_desc,
++						     &usb_psy_cfg);
++	if (IS_ERR(di->usb_chg.psy)) {
++		dev_err(dev, "failed to register USB charger\n");
++		return PTR_ERR(di->usb_chg.psy);
+ 	}
+ 
+ 	/* Identify the connected charger types during startup */
+@@ -3566,84 +3622,93 @@ static int ab8500_charger_probe(struct platform_device *pdev)
+ 		sysfs_notify(&di->ac_chg.psy->dev.kobj, NULL, "present");
+ 	}
+ 
+-	if (charger_status & USB_PW_CONN) {
+-		di->vbus_detected = true;
+-		di->vbus_detected_start = true;
+-		queue_work(di->charger_wq,
+-			&di->detect_usb_type_work);
+-	}
+-
+-	/* Register interrupts */
+-	for (i = 0; i < ARRAY_SIZE(ab8500_charger_irq); i++) {
+-		irq = platform_get_irq_byname(pdev, ab8500_charger_irq[i].name);
+-		if (irq < 0) {
+-			ret = irq;
+-			goto free_irq;
+-		}
++	platform_set_drvdata(pdev, di);
+ 
+-		ret = request_threaded_irq(irq, NULL, ab8500_charger_irq[i].isr,
+-			IRQF_SHARED | IRQF_NO_SUSPEND | IRQF_ONESHOT,
+-			ab8500_charger_irq[i].name, di);
++	/* Create something that will match the subdrivers when we bind */
++	for (i = 0; i < ARRAY_SIZE(ab8500_charger_component_drivers); i++) {
++		struct device_driver *drv = &ab8500_charger_component_drivers[i]->driver;
++		struct device *p = NULL, *d;
+ 
+-		if (ret != 0) {
+-			dev_err(dev, "failed to request %s IRQ %d: %d\n"
+-				, ab8500_charger_irq[i].name, irq, ret);
+-			goto free_irq;
++		while ((d = platform_find_device_by_driver(p, drv))) {
++			put_device(p);
++			component_match_add(dev, &match,
++					    ab8500_charger_compare_dev, d);
++			p = d;
+ 		}
+-		dev_dbg(dev, "Requested %s IRQ %d: %d\n",
+-			ab8500_charger_irq[i].name, irq, ret);
++		put_device(p);
++	}
++	if (!match) {
++		dev_err(dev, "no matching components\n");
++		return -ENODEV;
++	}
++	if (IS_ERR(match)) {
++		dev_err(dev, "could not create component match\n");
++		return PTR_ERR(match);
+ 	}
+ 
+-	platform_set_drvdata(pdev, di);
+-
+-	mutex_lock(&di->charger_attached_mutex);
++	/* Notifier for external charger enabling */
++	if (!di->ac_chg.enabled)
++		blocking_notifier_chain_register(
++			&charger_notifier_list, &charger_nb);
+ 
+-	ch_stat = ab8500_charger_detect_chargers(di, false);
+ 
+-	if ((ch_stat & AC_PW_CONN) == AC_PW_CONN) {
+-		if (is_ab8500(di->parent))
+-			queue_delayed_work(di->charger_wq,
+-					   &di->ac_charger_attached_work,
+-					   HZ);
++	di->usb_phy = usb_get_phy(USB_PHY_TYPE_USB2);
++	if (IS_ERR_OR_NULL(di->usb_phy)) {
++		dev_err(dev, "failed to get usb transceiver\n");
++		ret = -EINVAL;
++		goto out_charger_notifier;
+ 	}
+-	if ((ch_stat & USB_PW_CONN) == USB_PW_CONN) {
+-		if (is_ab8500(di->parent))
+-			queue_delayed_work(di->charger_wq,
+-					   &di->usb_charger_attached_work,
+-					   HZ);
++	di->nb.notifier_call = ab8500_charger_usb_notifier_call;
++	ret = usb_register_notifier(di->usb_phy, &di->nb);
++	if (ret) {
++		dev_err(dev, "failed to register usb notifier\n");
++		goto put_usb_phy;
+ 	}
+ 
+-	mutex_unlock(&di->charger_attached_mutex);
+ 
+-	return ret;
++	ret = component_master_add_with_match(&pdev->dev,
++					      &ab8500_charger_comp_ops,
++					      match);
++	if (ret) {
++		dev_err(dev, "failed to add component master\n");
++		goto free_notifier;
++	}
+ 
+-free_irq:
+-	usb_unregister_notifier(di->usb_phy, &di->nb);
++	return 0;
+ 
+-	/* We also have to free all successfully registered irqs */
+-	for (i = i - 1; i >= 0; i--) {
+-		irq = platform_get_irq_byname(pdev, ab8500_charger_irq[i].name);
+-		free_irq(irq, di);
+-	}
++free_notifier:
++	usb_unregister_notifier(di->usb_phy, &di->nb);
+ put_usb_phy:
+ 	usb_put_phy(di->usb_phy);
+-free_usb:
+-	if (di->usb_chg.enabled)
+-		power_supply_unregister(di->usb_chg.psy);
+-free_ac:
+-	if (di->ac_chg.enabled)
+-		power_supply_unregister(di->ac_chg.psy);
+-free_charger_wq:
+-	destroy_workqueue(di->charger_wq);
++out_charger_notifier:
++	if (!di->ac_chg.enabled)
++		blocking_notifier_chain_unregister(
++			&charger_notifier_list, &charger_nb);
+ 	return ret;
+ }
+ 
++static int ab8500_charger_remove(struct platform_device *pdev)
++{
++	struct ab8500_charger *di = platform_get_drvdata(pdev);
++
++	component_master_del(&pdev->dev, &ab8500_charger_comp_ops);
++
++	usb_unregister_notifier(di->usb_phy, &di->nb);
++	usb_put_phy(di->usb_phy);
++	if (!di->ac_chg.enabled)
++		blocking_notifier_chain_unregister(
++			&charger_notifier_list, &charger_nb);
++
++	return 0;
++}
++
+ static SIMPLE_DEV_PM_OPS(ab8500_charger_pm_ops, ab8500_charger_suspend, ab8500_charger_resume);
+ 
+ static const struct of_device_id ab8500_charger_match[] = {
+ 	{ .compatible = "stericsson,ab8500-charger", },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, ab8500_charger_match);
+ 
+ static struct platform_driver ab8500_charger_driver = {
+ 	.probe = ab8500_charger_probe,
+@@ -3657,15 +3722,24 @@ static struct platform_driver ab8500_charger_driver = {
+ 
+ static int __init ab8500_charger_init(void)
+ {
++	int ret;
++
++	ret = platform_register_drivers(ab8500_charger_component_drivers,
++			ARRAY_SIZE(ab8500_charger_component_drivers));
++	if (ret)
++		return ret;
++
+ 	return platform_driver_register(&ab8500_charger_driver);
+ }
+ 
+ static void __exit ab8500_charger_exit(void)
+ {
++	platform_unregister_drivers(ab8500_charger_component_drivers,
++			ARRAY_SIZE(ab8500_charger_component_drivers));
+ 	platform_driver_unregister(&ab8500_charger_driver);
+ }
+ 
+-subsys_initcall_sync(ab8500_charger_init);
++module_init(ab8500_charger_init);
+ module_exit(ab8500_charger_exit);
+ 
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/power/supply/ab8500_fg.c b/drivers/power/supply/ab8500_fg.c
+index 0c7c01a0d9790..46c718e9ebb7c 100644
+--- a/drivers/power/supply/ab8500_fg.c
++++ b/drivers/power/supply/ab8500_fg.c
+@@ -17,6 +17,7 @@
+ 
+ #include <linux/init.h>
+ #include <linux/module.h>
++#include <linux/component.h>
+ #include <linux/device.h>
+ #include <linux/interrupt.h>
+ #include <linux/platform_device.h>
+@@ -2980,27 +2981,6 @@ static int __maybe_unused ab8500_fg_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int ab8500_fg_remove(struct platform_device *pdev)
+-{
+-	int ret = 0;
+-	struct ab8500_fg *di = platform_get_drvdata(pdev);
+-
+-	list_del(&di->node);
+-
+-	/* Disable coulomb counter */
+-	ret = ab8500_fg_coulomb_counter(di, false);
+-	if (ret)
+-		dev_err(di->dev, "failed to disable coulomb counter\n");
+-
+-	destroy_workqueue(di->fg_wq);
+-	ab8500_fg_sysfs_exit(di);
+-
+-	flush_scheduled_work();
+-	ab8500_fg_sysfs_psy_remove_attrs(di);
+-	power_supply_unregister(di->fg_psy);
+-	return ret;
+-}
+-
+ /* ab8500 fg driver interrupts and their respective isr */
+ static struct ab8500_fg_interrupts ab8500_fg_irq[] = {
+ 	{"NCONV_ACCU", ab8500_fg_cc_convend_handler},
+@@ -3024,11 +3004,50 @@ static const struct power_supply_desc ab8500_fg_desc = {
+ 	.external_power_changed	= ab8500_fg_external_power_changed,
+ };
+ 
++static int ab8500_fg_bind(struct device *dev, struct device *master,
++			  void *data)
++{
++	struct ab8500_fg *di = dev_get_drvdata(dev);
++
++	/* Create a work queue for running the FG algorithm */
++	di->fg_wq = alloc_ordered_workqueue("ab8500_fg_wq", WQ_MEM_RECLAIM);
++	if (di->fg_wq == NULL) {
++		dev_err(dev, "failed to create work queue\n");
++		return -ENOMEM;
++	}
++
++	/* Start the coulomb counter */
++	ab8500_fg_coulomb_counter(di, true);
++	/* Run the FG algorithm */
++	queue_delayed_work(di->fg_wq, &di->fg_periodic_work, 0);
++
++	return 0;
++}
++
++static void ab8500_fg_unbind(struct device *dev, struct device *master,
++			     void *data)
++{
++	struct ab8500_fg *di = dev_get_drvdata(dev);
++	int ret;
++
++	/* Disable coulomb counter */
++	ret = ab8500_fg_coulomb_counter(di, false);
++	if (ret)
++		dev_err(dev, "failed to disable coulomb counter\n");
++
++	destroy_workqueue(di->fg_wq);
++	flush_scheduled_work();
++}
++
++static const struct component_ops ab8500_fg_component_ops = {
++	.bind = ab8500_fg_bind,
++	.unbind = ab8500_fg_unbind,
++};
++
+ static int ab8500_fg_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np = pdev->dev.of_node;
+-	struct power_supply_config psy_cfg = {};
+ 	struct device *dev = &pdev->dev;
++	struct power_supply_config psy_cfg = {};
+ 	struct ab8500_fg *di;
+ 	int i, irq;
+ 	int ret = 0;
+@@ -3074,13 +3093,6 @@ static int ab8500_fg_probe(struct platform_device *pdev)
+ 	ab8500_fg_charge_state_to(di, AB8500_FG_CHARGE_INIT);
+ 	ab8500_fg_discharge_state_to(di, AB8500_FG_DISCHARGE_INIT);
+ 
+-	/* Create a work queue for running the FG algorithm */
+-	di->fg_wq = alloc_ordered_workqueue("ab8500_fg_wq", WQ_MEM_RECLAIM);
+-	if (di->fg_wq == NULL) {
+-		dev_err(dev, "failed to create work queue\n");
+-		return -ENOMEM;
+-	}
+-
+ 	/* Init work for running the fg algorithm instantly */
+ 	INIT_WORK(&di->fg_work, ab8500_fg_instant_work);
+ 
+@@ -3113,7 +3125,7 @@ static int ab8500_fg_probe(struct platform_device *pdev)
+ 	ret = ab8500_fg_init_hw_registers(di);
+ 	if (ret) {
+ 		dev_err(dev, "failed to initialize registers\n");
+-		goto free_inst_curr_wq;
++		return ret;
+ 	}
+ 
+ 	/* Consider battery unknown until we're informed otherwise */
+@@ -3121,15 +3133,13 @@ static int ab8500_fg_probe(struct platform_device *pdev)
+ 	di->flags.batt_id_received = false;
+ 
+ 	/* Register FG power supply class */
+-	di->fg_psy = power_supply_register(dev, &ab8500_fg_desc, &psy_cfg);
++	di->fg_psy = devm_power_supply_register(dev, &ab8500_fg_desc, &psy_cfg);
+ 	if (IS_ERR(di->fg_psy)) {
+ 		dev_err(dev, "failed to register FG psy\n");
+-		ret = PTR_ERR(di->fg_psy);
+-		goto free_inst_curr_wq;
++		return PTR_ERR(di->fg_psy);
+ 	}
+ 
+ 	di->fg_samples = SEC_TO_SAMPLE(di->bm->fg_params->init_timer);
+-	ab8500_fg_coulomb_counter(di, true);
+ 
+ 	/*
+ 	 * Initialize completion used to notify completion and start
+@@ -3141,19 +3151,18 @@ static int ab8500_fg_probe(struct platform_device *pdev)
+ 	/* Register primary interrupt handlers */
+ 	for (i = 0; i < ARRAY_SIZE(ab8500_fg_irq); i++) {
+ 		irq = platform_get_irq_byname(pdev, ab8500_fg_irq[i].name);
+-		if (irq < 0) {
+-			ret = irq;
+-			goto free_irq;
+-		}
++		if (irq < 0)
++			return irq;
+ 
+-		ret = request_threaded_irq(irq, NULL, ab8500_fg_irq[i].isr,
++		ret = devm_request_threaded_irq(dev, irq, NULL,
++				  ab8500_fg_irq[i].isr,
+ 				  IRQF_SHARED | IRQF_NO_SUSPEND | IRQF_ONESHOT,
+ 				  ab8500_fg_irq[i].name, di);
+ 
+ 		if (ret != 0) {
+ 			dev_err(dev, "failed to request %s IRQ %d: %d\n",
+ 				ab8500_fg_irq[i].name, irq, ret);
+-			goto free_irq;
++			return ret;
+ 		}
+ 		dev_dbg(dev, "Requested %s IRQ %d: %d\n",
+ 			ab8500_fg_irq[i].name, irq, ret);
+@@ -3168,14 +3177,14 @@ static int ab8500_fg_probe(struct platform_device *pdev)
+ 	ret = ab8500_fg_sysfs_init(di);
+ 	if (ret) {
+ 		dev_err(dev, "failed to create sysfs entry\n");
+-		goto free_irq;
++		return ret;
+ 	}
+ 
+ 	ret = ab8500_fg_sysfs_psy_create_attrs(di);
+ 	if (ret) {
+ 		dev_err(dev, "failed to create FG psy\n");
+ 		ab8500_fg_sysfs_exit(di);
+-		goto free_irq;
++		return ret;
+ 	}
+ 
+ 	/* Calibrate the fg first time */
+@@ -3185,24 +3194,21 @@ static int ab8500_fg_probe(struct platform_device *pdev)
+ 	/* Use room temp as default value until we get an update from driver. */
+ 	di->bat_temp = 210;
+ 
+-	/* Run the FG algorithm */
+-	queue_delayed_work(di->fg_wq, &di->fg_periodic_work, 0);
+-
+ 	list_add_tail(&di->node, &ab8500_fg_list);
+ 
+-	return ret;
++	return component_add(dev, &ab8500_fg_component_ops);
++}
+ 
+-free_irq:
+-	/* We also have to free all registered irqs */
+-	while (--i >= 0) {
+-		/* Last assignment of i from primary interrupt handlers */
+-		irq = platform_get_irq_byname(pdev, ab8500_fg_irq[i].name);
+-		free_irq(irq, di);
+-	}
++static int ab8500_fg_remove(struct platform_device *pdev)
++{
++	int ret = 0;
++	struct ab8500_fg *di = platform_get_drvdata(pdev);
++
++	component_del(&pdev->dev, &ab8500_fg_component_ops);
++	list_del(&di->node);
++	ab8500_fg_sysfs_exit(di);
++	ab8500_fg_sysfs_psy_remove_attrs(di);
+ 
+-	power_supply_unregister(di->fg_psy);
+-free_inst_curr_wq:
+-	destroy_workqueue(di->fg_wq);
+ 	return ret;
+ }
+ 
+@@ -3212,8 +3218,9 @@ static const struct of_device_id ab8500_fg_match[] = {
+ 	{ .compatible = "stericsson,ab8500-fg", },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, ab8500_fg_match);
+ 
+-static struct platform_driver ab8500_fg_driver = {
++struct platform_driver ab8500_fg_driver = {
+ 	.probe = ab8500_fg_probe,
+ 	.remove = ab8500_fg_remove,
+ 	.driver = {
+@@ -3222,20 +3229,6 @@ static struct platform_driver ab8500_fg_driver = {
+ 		.pm = &ab8500_fg_pm_ops,
+ 	},
+ };
+-
+-static int __init ab8500_fg_init(void)
+-{
+-	return platform_driver_register(&ab8500_fg_driver);
+-}
+-
+-static void __exit ab8500_fg_exit(void)
+-{
+-	platform_driver_unregister(&ab8500_fg_driver);
+-}
+-
+-subsys_initcall_sync(ab8500_fg_init);
+-module_exit(ab8500_fg_exit);
+-
+ MODULE_LICENSE("GPL v2");
+ MODULE_AUTHOR("Johan Palsson, Karl Komierowski");
+ MODULE_ALIAS("platform:ab8500-fg");
+diff --git a/drivers/power/supply/abx500_chargalg.c b/drivers/power/supply/abx500_chargalg.c
+index f5b792243727d..599684ce0e4b0 100644
+--- a/drivers/power/supply/abx500_chargalg.c
++++ b/drivers/power/supply/abx500_chargalg.c
+@@ -15,6 +15,7 @@
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/device.h>
++#include <linux/component.h>
+ #include <linux/hrtimer.h>
+ #include <linux/interrupt.h>
+ #include <linux/delay.h>
+@@ -1943,13 +1944,44 @@ static int __maybe_unused abx500_chargalg_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int abx500_chargalg_remove(struct platform_device *pdev)
++static char *supply_interface[] = {
++	"ab8500_fg",
++};
++
++static const struct power_supply_desc abx500_chargalg_desc = {
++	.name			= "abx500_chargalg",
++	.type			= POWER_SUPPLY_TYPE_BATTERY,
++	.properties		= abx500_chargalg_props,
++	.num_properties		= ARRAY_SIZE(abx500_chargalg_props),
++	.get_property		= abx500_chargalg_get_property,
++	.external_power_changed	= abx500_chargalg_external_power_changed,
++};
++
++static int abx500_chargalg_bind(struct device *dev, struct device *master,
++				void *data)
+ {
+-	struct abx500_chargalg *di = platform_get_drvdata(pdev);
++	struct abx500_chargalg *di = dev_get_drvdata(dev);
+ 
+-	/* sysfs interface to enable/disbale charging from user space */
+-	abx500_chargalg_sysfs_exit(di);
++	/* Create a work queue for the chargalg */
++	di->chargalg_wq = alloc_ordered_workqueue("abx500_chargalg_wq",
++						  WQ_MEM_RECLAIM);
++	if (di->chargalg_wq == NULL) {
++		dev_err(di->dev, "failed to create work queue\n");
++		return -ENOMEM;
++	}
++
++	/* Run the charging algorithm */
++	queue_delayed_work(di->chargalg_wq, &di->chargalg_periodic_work, 0);
++
++	return 0;
++}
+ 
++static void abx500_chargalg_unbind(struct device *dev, struct device *master,
++				   void *data)
++{
++	struct abx500_chargalg *di = dev_get_drvdata(dev);
++
++	/* Stop all timers and work */
+ 	hrtimer_cancel(&di->safety_timer);
+ 	hrtimer_cancel(&di->maintenance_timer);
+ 
+@@ -1959,48 +1991,36 @@ static int abx500_chargalg_remove(struct platform_device *pdev)
+ 
+ 	/* Delete the work queue */
+ 	destroy_workqueue(di->chargalg_wq);
+-
+-	power_supply_unregister(di->chargalg_psy);
+-
+-	return 0;
++	flush_scheduled_work();
+ }
+ 
+-static char *supply_interface[] = {
+-	"ab8500_fg",
+-};
+-
+-static const struct power_supply_desc abx500_chargalg_desc = {
+-	.name			= "abx500_chargalg",
+-	.type			= POWER_SUPPLY_TYPE_BATTERY,
+-	.properties		= abx500_chargalg_props,
+-	.num_properties		= ARRAY_SIZE(abx500_chargalg_props),
+-	.get_property		= abx500_chargalg_get_property,
+-	.external_power_changed	= abx500_chargalg_external_power_changed,
++static const struct component_ops abx500_chargalg_component_ops = {
++	.bind = abx500_chargalg_bind,
++	.unbind = abx500_chargalg_unbind,
+ };
+ 
+ static int abx500_chargalg_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np = pdev->dev.of_node;
++	struct device *dev = &pdev->dev;
++	struct device_node *np = dev->of_node;
+ 	struct power_supply_config psy_cfg = {};
+ 	struct abx500_chargalg *di;
+ 	int ret = 0;
+ 
+-	di = devm_kzalloc(&pdev->dev, sizeof(*di), GFP_KERNEL);
+-	if (!di) {
+-		dev_err(&pdev->dev, "%s no mem for ab8500_chargalg\n", __func__);
++	di = devm_kzalloc(dev, sizeof(*di), GFP_KERNEL);
++	if (!di)
+ 		return -ENOMEM;
+-	}
+ 
+ 	di->bm = &ab8500_bm_data;
+ 
+-	ret = ab8500_bm_of_probe(&pdev->dev, np, di->bm);
++	ret = ab8500_bm_of_probe(dev, np, di->bm);
+ 	if (ret) {
+-		dev_err(&pdev->dev, "failed to get battery information\n");
++		dev_err(dev, "failed to get battery information\n");
+ 		return ret;
+ 	}
+ 
+ 	/* get device struct and parent */
+-	di->dev = &pdev->dev;
++	di->dev = dev;
+ 	di->parent = dev_get_drvdata(pdev->dev.parent);
+ 
+ 	psy_cfg.supplied_to = supply_interface;
+@@ -2016,14 +2036,6 @@ static int abx500_chargalg_probe(struct platform_device *pdev)
+ 	di->maintenance_timer.function =
+ 		abx500_chargalg_maintenance_timer_expired;
+ 
+-	/* Create a work queue for the chargalg */
+-	di->chargalg_wq = alloc_ordered_workqueue("abx500_chargalg_wq",
+-						   WQ_MEM_RECLAIM);
+-	if (di->chargalg_wq == NULL) {
+-		dev_err(di->dev, "failed to create work queue\n");
+-		return -ENOMEM;
+-	}
+-
+ 	/* Init work for chargalg */
+ 	INIT_DEFERRABLE_WORK(&di->chargalg_periodic_work,
+ 		abx500_chargalg_periodic_work);
+@@ -2037,12 +2049,12 @@ static int abx500_chargalg_probe(struct platform_device *pdev)
+ 	di->chg_info.prev_conn_chg = -1;
+ 
+ 	/* Register chargalg power supply class */
+-	di->chargalg_psy = power_supply_register(di->dev, &abx500_chargalg_desc,
++	di->chargalg_psy = devm_power_supply_register(di->dev,
++						 &abx500_chargalg_desc,
+ 						 &psy_cfg);
+ 	if (IS_ERR(di->chargalg_psy)) {
+ 		dev_err(di->dev, "failed to register chargalg psy\n");
+-		ret = PTR_ERR(di->chargalg_psy);
+-		goto free_chargalg_wq;
++		return PTR_ERR(di->chargalg_psy);
+ 	}
+ 
+ 	platform_set_drvdata(pdev, di);
+@@ -2051,21 +2063,24 @@ static int abx500_chargalg_probe(struct platform_device *pdev)
+ 	ret = abx500_chargalg_sysfs_init(di);
+ 	if (ret) {
+ 		dev_err(di->dev, "failed to create sysfs entry\n");
+-		goto free_psy;
++		return ret;
+ 	}
+ 	di->curr_status.curr_step = CHARGALG_CURR_STEP_HIGH;
+ 
+-	/* Run the charging algorithm */
+-	queue_delayed_work(di->chargalg_wq, &di->chargalg_periodic_work, 0);
+-
+ 	dev_info(di->dev, "probe success\n");
+-	return ret;
++	return component_add(dev, &abx500_chargalg_component_ops);
++}
+ 
+-free_psy:
+-	power_supply_unregister(di->chargalg_psy);
+-free_chargalg_wq:
+-	destroy_workqueue(di->chargalg_wq);
+-	return ret;
++static int abx500_chargalg_remove(struct platform_device *pdev)
++{
++	struct abx500_chargalg *di = platform_get_drvdata(pdev);
++
++	component_del(&pdev->dev, &abx500_chargalg_component_ops);
++
++	/* sysfs interface to enable/disable charging from user space */
++	abx500_chargalg_sysfs_exit(di);
++
++	return 0;
+ }
+ 
+ static SIMPLE_DEV_PM_OPS(abx500_chargalg_pm_ops, abx500_chargalg_suspend, abx500_chargalg_resume);
+@@ -2075,7 +2090,7 @@ static const struct of_device_id ab8500_chargalg_match[] = {
+ 	{ },
+ };
+ 
+-static struct platform_driver abx500_chargalg_driver = {
++struct platform_driver abx500_chargalg_driver = {
+ 	.probe = abx500_chargalg_probe,
+ 	.remove = abx500_chargalg_remove,
+ 	.driver = {
+@@ -2084,9 +2099,6 @@ static struct platform_driver abx500_chargalg_driver = {
+ 		.pm = &abx500_chargalg_pm_ops,
+ 	},
+ };
+-
+-module_platform_driver(abx500_chargalg_driver);
+-
+ MODULE_LICENSE("GPL v2");
+ MODULE_AUTHOR("Johan Palsson, Karl Komierowski");
+ MODULE_ALIAS("platform:abx500-chargalg");
+diff --git a/drivers/power/supply/axp288_fuel_gauge.c b/drivers/power/supply/axp288_fuel_gauge.c
+index 39e16ecb76386..37af0e216bc3a 100644
+--- a/drivers/power/supply/axp288_fuel_gauge.c
++++ b/drivers/power/supply/axp288_fuel_gauge.c
+@@ -723,15 +723,6 @@ static const struct dmi_system_id axp288_fuel_gauge_blacklist[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "MEEGOPAD T02"),
+ 		},
+ 	},
+-	{
+-		/* Meegopad T08 */
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Default string"),
+-			DMI_MATCH(DMI_BOARD_VENDOR, "To be filled by OEM."),
+-			DMI_MATCH(DMI_BOARD_NAME, "T3 MRD"),
+-			DMI_MATCH(DMI_BOARD_VERSION, "V1.1"),
+-		},
+-	},
+ 	{	/* Mele PCG03 Mini PC */
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Mini PC"),
+@@ -745,6 +736,15 @@ static const struct dmi_system_id axp288_fuel_gauge_blacklist[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "Z83-4"),
+ 		}
+ 	},
++	{
++		/* Various Ace PC/Meegopad/MinisForum/Wintel Mini-PCs/HDMI-sticks */
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "T3 MRD"),
++			DMI_MATCH(DMI_CHASSIS_TYPE, "3"),
++			DMI_MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."),
++			DMI_MATCH(DMI_BIOS_VERSION, "5.11"),
++		},
++	},
+ 	{}
+ };
+ 
+diff --git a/drivers/power/supply/charger-manager.c b/drivers/power/supply/charger-manager.c
+index 45da870aecca4..d67edb760c948 100644
+--- a/drivers/power/supply/charger-manager.c
++++ b/drivers/power/supply/charger-manager.c
+@@ -1279,6 +1279,7 @@ static const struct of_device_id charger_manager_match[] = {
+ 	},
+ 	{},
+ };
++MODULE_DEVICE_TABLE(of, charger_manager_match);
+ 
+ static struct charger_desc *of_cm_parse_desc(struct device *dev)
+ {
+diff --git a/drivers/power/supply/max17040_battery.c b/drivers/power/supply/max17040_battery.c
+index 1aab868adabf7..e80dd9141ae72 100644
+--- a/drivers/power/supply/max17040_battery.c
++++ b/drivers/power/supply/max17040_battery.c
+@@ -361,12 +361,10 @@ static irqreturn_t max17040_thread_handler(int id, void *dev)
+ static int max17040_enable_alert_irq(struct max17040_chip *chip)
+ {
+ 	struct i2c_client *client = chip->client;
+-	unsigned int flags;
+ 	int ret;
+ 
+-	flags = IRQF_TRIGGER_FALLING | IRQF_ONESHOT;
+ 	ret = devm_request_threaded_irq(&client->dev, client->irq, NULL,
+-					max17040_thread_handler, flags,
++					max17040_thread_handler, IRQF_ONESHOT,
+ 					chip->battery->desc->name, chip);
+ 
+ 	return ret;
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index 1d7326cd8fc63..ce2041b30a066 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -1104,7 +1104,7 @@ static int max17042_probe(struct i2c_client *client,
+ 	}
+ 
+ 	if (client->irq) {
+-		unsigned int flags = IRQF_TRIGGER_FALLING | IRQF_ONESHOT;
++		unsigned int flags = IRQF_ONESHOT;
+ 
+ 		/*
+ 		 * On ACPI systems the IRQ may be handled by ACPI-event code,
+diff --git a/drivers/power/supply/rt5033_battery.c b/drivers/power/supply/rt5033_battery.c
+index f330452341f02..9ad0afe83d1b7 100644
+--- a/drivers/power/supply/rt5033_battery.c
++++ b/drivers/power/supply/rt5033_battery.c
+@@ -164,9 +164,16 @@ static const struct i2c_device_id rt5033_battery_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, rt5033_battery_id);
+ 
++static const struct of_device_id rt5033_battery_of_match[] = {
++	{ .compatible = "richtek,rt5033-battery", },
++	{ }
++};
++MODULE_DEVICE_TABLE(of, rt5033_battery_of_match);
++
+ static struct i2c_driver rt5033_battery_driver = {
+ 	.driver = {
+ 		.name = "rt5033-battery",
++		.of_match_table = rt5033_battery_of_match,
+ 	},
+ 	.probe = rt5033_battery_probe,
+ 	.remove = rt5033_battery_remove,
+diff --git a/drivers/power/supply/sc2731_charger.c b/drivers/power/supply/sc2731_charger.c
+index 335cb857ef307..288b79836c139 100644
+--- a/drivers/power/supply/sc2731_charger.c
++++ b/drivers/power/supply/sc2731_charger.c
+@@ -524,6 +524,7 @@ static const struct of_device_id sc2731_charger_of_match[] = {
+ 	{ .compatible = "sprd,sc2731-charger", },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, sc2731_charger_of_match);
+ 
+ static struct platform_driver sc2731_charger_driver = {
+ 	.driver = {
+diff --git a/drivers/power/supply/sc27xx_fuel_gauge.c b/drivers/power/supply/sc27xx_fuel_gauge.c
+index 9c627618c2249..1ae8374e1cebe 100644
+--- a/drivers/power/supply/sc27xx_fuel_gauge.c
++++ b/drivers/power/supply/sc27xx_fuel_gauge.c
+@@ -1342,6 +1342,7 @@ static const struct of_device_id sc27xx_fgu_of_match[] = {
+ 	{ .compatible = "sprd,sc2731-fgu", },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, sc27xx_fgu_of_match);
+ 
+ static struct platform_driver sc27xx_fgu_driver = {
+ 	.probe = sc27xx_fgu_probe,
+diff --git a/drivers/power/supply/surface_battery.c b/drivers/power/supply/surface_battery.c
+index 7efa431a62b2e..5ec2e6bb2465c 100644
+--- a/drivers/power/supply/surface_battery.c
++++ b/drivers/power/supply/surface_battery.c
+@@ -345,6 +345,16 @@ static u32 spwr_notify_bat(struct ssam_event_notifier *nf, const struct ssam_eve
+ 	struct spwr_battery_device *bat = container_of(nf, struct spwr_battery_device, notif);
+ 	int status;
+ 
++	/*
++	 * We cannot use strict matching when registering the notifier as the
++	 * EC expects us to register it against instance ID 0. Strict matching
++	 * would thus drop events, as those may have non-zero instance IDs in
++	 * this subsystem. So we need to check the instance ID of the event
++	 * here manually.
++	 */
++	if (event->instance_id != bat->sdev->uid.instance)
++		return 0;
++
+ 	dev_dbg(&bat->sdev->dev, "power event (cid = %#04x, iid = %#04x, tid = %#04x)\n",
+ 		event->command_id, event->instance_id, event->target_id);
+ 
+@@ -720,8 +730,8 @@ static void spwr_battery_init(struct spwr_battery_device *bat, struct ssam_devic
+ 	bat->notif.base.fn = spwr_notify_bat;
+ 	bat->notif.event.reg = registry;
+ 	bat->notif.event.id.target_category = sdev->uid.category;
+-	bat->notif.event.id.instance = 0;
+-	bat->notif.event.mask = SSAM_EVENT_MASK_STRICT;
++	bat->notif.event.id.instance = 0;	/* need to register with instance 0 */
++	bat->notif.event.mask = SSAM_EVENT_MASK_TARGET;
+ 	bat->notif.event.flags = SSAM_EVENT_SEQUENCED;
+ 
+ 	bat->psy_desc.name = bat->name;
+diff --git a/drivers/power/supply/surface_charger.c b/drivers/power/supply/surface_charger.c
+index 81a5b79822c9b..a060c36c7766f 100644
+--- a/drivers/power/supply/surface_charger.c
++++ b/drivers/power/supply/surface_charger.c
+@@ -66,7 +66,7 @@ struct spwr_ac_device {
+ 
+ static int spwr_ac_update_unlocked(struct spwr_ac_device *ac)
+ {
+-	u32 old = ac->state;
++	__le32 old = ac->state;
+ 	int status;
+ 
+ 	lockdep_assert_held(&ac->lock);
+diff --git a/drivers/pwm/pwm-img.c b/drivers/pwm/pwm-img.c
+index cc37054589ccf..11b16ecc4f967 100644
+--- a/drivers/pwm/pwm-img.c
++++ b/drivers/pwm/pwm-img.c
+@@ -156,7 +156,7 @@ static int img_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	struct img_pwm_chip *pwm_chip = to_img_pwm_chip(chip);
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(chip->dev);
++	ret = pm_runtime_resume_and_get(chip->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/pwm/pwm-imx1.c b/drivers/pwm/pwm-imx1.c
+index c957b365448ee..e73858a8e464c 100644
+--- a/drivers/pwm/pwm-imx1.c
++++ b/drivers/pwm/pwm-imx1.c
+@@ -168,8 +168,6 @@ static int pwm_imx1_remove(struct platform_device *pdev)
+ {
+ 	struct pwm_imx1_chip *imx = platform_get_drvdata(pdev);
+ 
+-	pwm_imx1_clk_disable_unprepare(&imx->chip);
+-
+ 	return pwmchip_remove(&imx->chip);
+ }
+ 
+diff --git a/drivers/pwm/pwm-pca9685.c b/drivers/pwm/pwm-pca9685.c
+index 7c9f174de64ee..ec9f93006654e 100644
+--- a/drivers/pwm/pwm-pca9685.c
++++ b/drivers/pwm/pwm-pca9685.c
+@@ -23,11 +23,11 @@
+ #include <linux/bitmap.h>
+ 
+ /*
+- * Because the PCA9685 has only one prescaler per chip, changing the period of
+- * one channel affects the period of all 16 PWM outputs!
+- * However, the ratio between each configured duty cycle and the chip-wide
+- * period remains constant, because the OFF time is set in proportion to the
+- * counter range.
++ * Because the PCA9685 has only one prescaler per chip, only the first channel
++ * that is enabled is allowed to change the prescale register.
++ * PWM channels requested afterwards must use a period that results in the same
++ * prescale setting as the one set by the first requested channel.
++ * GPIOs do not count as enabled PWMs as they are not using the prescaler.
+  */
+ 
+ #define PCA9685_MODE1		0x00
+@@ -78,8 +78,9 @@
+ struct pca9685 {
+ 	struct pwm_chip chip;
+ 	struct regmap *regmap;
+-#if IS_ENABLED(CONFIG_GPIOLIB)
+ 	struct mutex lock;
++	DECLARE_BITMAP(pwms_enabled, PCA9685_MAXCHAN + 1);
++#if IS_ENABLED(CONFIG_GPIOLIB)
+ 	struct gpio_chip gpio;
+ 	DECLARE_BITMAP(pwms_inuse, PCA9685_MAXCHAN + 1);
+ #endif
+@@ -90,6 +91,22 @@ static inline struct pca9685 *to_pca(struct pwm_chip *chip)
+ 	return container_of(chip, struct pca9685, chip);
+ }
+ 
++/* This function must be called with pca->lock held */
++static bool pca9685_prescaler_can_change(struct pca9685 *pca, int channel)
++{
++	/* No PWM enabled: Change allowed */
++	if (bitmap_empty(pca->pwms_enabled, PCA9685_MAXCHAN + 1))
++		return true;
++	/* More than one PWM enabled: Change not allowed */
++	if (bitmap_weight(pca->pwms_enabled, PCA9685_MAXCHAN + 1) > 1)
++		return false;
++	/*
++	 * Only one PWM enabled: Change allowed if the PWM about to
++	 * be changed is the one that is already enabled
++	 */
++	return test_bit(channel, pca->pwms_enabled);
++}
++
+ /* Helper function to set the duty cycle ratio to duty/4096 (e.g. duty=2048 -> 50%) */
+ static void pca9685_pwm_set_duty(struct pca9685 *pca, int channel, unsigned int duty)
+ {
+@@ -240,8 +257,6 @@ static int pca9685_pwm_gpio_probe(struct pca9685 *pca)
+ {
+ 	struct device *dev = pca->chip.dev;
+ 
+-	mutex_init(&pca->lock);
+-
+ 	pca->gpio.label = dev_name(dev);
+ 	pca->gpio.parent = dev;
+ 	pca->gpio.request = pca9685_pwm_gpio_request;
+@@ -285,8 +300,8 @@ static void pca9685_set_sleep_mode(struct pca9685 *pca, bool enable)
+ 	}
+ }
+ 
+-static int pca9685_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+-			     const struct pwm_state *state)
++static int __pca9685_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
++			       const struct pwm_state *state)
+ {
+ 	struct pca9685 *pca = to_pca(chip);
+ 	unsigned long long duty, prescale;
+@@ -309,6 +324,12 @@ static int pca9685_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 
+ 	regmap_read(pca->regmap, PCA9685_PRESCALE, &val);
+ 	if (prescale != val) {
++		if (!pca9685_prescaler_can_change(pca, pwm->hwpwm)) {
++			dev_err(chip->dev,
++				"pwm not changed: periods of enabled pwms must match!\n");
++			return -EBUSY;
++		}
++
+ 		/*
+ 		 * Putting the chip briefly into SLEEP mode
+ 		 * at this point won't interfere with the
+@@ -331,6 +352,25 @@ static int pca9685_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	return 0;
+ }
+ 
++static int pca9685_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
++			     const struct pwm_state *state)
++{
++	struct pca9685 *pca = to_pca(chip);
++	int ret;
++
++	mutex_lock(&pca->lock);
++	ret = __pca9685_pwm_apply(chip, pwm, state);
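++	/* Keep the enabled-channel bitmap in sync for prescaler arbitration */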
++	if (ret == 0) {
++		if (state->enabled)
++			set_bit(pwm->hwpwm, pca->pwms_enabled);
++		else
++			clear_bit(pwm->hwpwm, pca->pwms_enabled);
++	}
++	mutex_unlock(&pca->lock);
++
++	return ret;
++}
++
+ static void pca9685_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ 				  struct pwm_state *state)
+ {
+@@ -372,6 +412,14 @@ static int pca9685_pwm_request(struct pwm_chip *chip, struct pwm_device *pwm)
+ 
+ 	if (pca9685_pwm_test_and_set_inuse(pca, pwm->hwpwm))
+ 		return -EBUSY;
++
++	if (pwm->hwpwm < PCA9685_MAXCHAN) {
++		/* PWMs - except the "all LEDs" channel - default to enabled */
++		mutex_lock(&pca->lock);
++		set_bit(pwm->hwpwm, pca->pwms_enabled);
++		mutex_unlock(&pca->lock);
++	}
++
+ 	pm_runtime_get_sync(chip->dev);
+ 
+ 	return 0;
+@@ -381,7 +429,11 @@ static void pca9685_pwm_free(struct pwm_chip *chip, struct pwm_device *pwm)
+ {
+ 	struct pca9685 *pca = to_pca(chip);
+ 
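++	/* Zero the duty cycle and drop the channel from the enabled bitmap */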
++	mutex_lock(&pca->lock);
+ 	pca9685_pwm_set_duty(pca, pwm->hwpwm, 0);
++	clear_bit(pwm->hwpwm, pca->pwms_enabled);
++	mutex_unlock(&pca->lock);
++
+ 	pm_runtime_put(chip->dev);
+ 	pca9685_pwm_clear_inuse(pca, pwm->hwpwm);
+ }
+@@ -422,6 +474,8 @@ static int pca9685_pwm_probe(struct i2c_client *client,
+ 
+ 	i2c_set_clientdata(client, pca);
+ 
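++	/* The mutex guards the pwms_enabled bitmap */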
++	mutex_init(&pca->lock);
++
+ 	regmap_read(pca->regmap, PCA9685_MODE2, &reg);
+ 
+ 	if (device_property_read_bool(&client->dev, "invert"))
+diff --git a/drivers/pwm/pwm-spear.c b/drivers/pwm/pwm-spear.c
+index 1a1cedfd11ce2..6879b49581b34 100644
+--- a/drivers/pwm/pwm-spear.c
++++ b/drivers/pwm/pwm-spear.c
+@@ -228,10 +228,6 @@ static int spear_pwm_probe(struct platform_device *pdev)
+ static int spear_pwm_remove(struct platform_device *pdev)
+ {
+ 	struct spear_pwm_chip *pc = platform_get_drvdata(pdev);
+-	int i;
+-
+-	for (i = 0; i < NUM_PWM; i++)
+-		pwm_disable(&pc->chip.pwms[i]);
+ 
+ 	/* clk was prepared in probe, hence unprepare it here */
+ 	clk_unprepare(pc->clk);
+diff --git a/drivers/pwm/pwm-tegra.c b/drivers/pwm/pwm-tegra.c
+index c529a170bcdd1..5509ce0d54326 100644
+--- a/drivers/pwm/pwm-tegra.c
++++ b/drivers/pwm/pwm-tegra.c
+@@ -300,7 +300,6 @@ static int tegra_pwm_probe(struct platform_device *pdev)
+ static int tegra_pwm_remove(struct platform_device *pdev)
+ {
+ 	struct tegra_pwm_chip *pc = platform_get_drvdata(pdev);
+-	unsigned int i;
+ 	int err;
+ 
+ 	if (WARN_ON(!pc))
+@@ -310,18 +309,6 @@ static int tegra_pwm_remove(struct platform_device *pdev)
+ 	if (err < 0)
+ 		return err;
+ 
+-	for (i = 0; i < pc->chip.npwm; i++) {
+-		struct pwm_device *pwm = &pc->chip.pwms[i];
+-
+-		if (!pwm_is_enabled(pwm))
+-			if (clk_prepare_enable(pc->clk) < 0)
+-				continue;
+-
+-		pwm_writel(pc, i, 0);
+-
+-		clk_disable_unprepare(pc->clk);
+-	}
+-
+ 	reset_control_assert(pc->rst);
+ 	clk_disable_unprepare(pc->clk);
+ 
+diff --git a/drivers/pwm/pwm-visconti.c b/drivers/pwm/pwm-visconti.c
+index 46d9037863661..af4e37d3e3a67 100644
+--- a/drivers/pwm/pwm-visconti.c
++++ b/drivers/pwm/pwm-visconti.c
+@@ -82,17 +82,14 @@ static int visconti_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 		return -ERANGE;
+ 
+ 	/*
+-	 * PWMC controls a divider that divides the input clk by a
+-	 * power of two between 1 and 8. As a smaller divider yields
+-	 * higher precision, pick the smallest possible one.
++	 * PWMC controls a divider that divides the input clk by a power of two
++	 * between 1 and 8. As a smaller divider yields higher precision, pick
++	 * the smallest possible one. As period is at most 0xffff << 3, pwmc0 is
++	 * in the intended range [0..3].
+ 	 */
+-	if (period > 0xffff) {
+-		pwmc0 = ilog2(period >> 16);
+-		if (WARN_ON(pwmc0 > 3))
+-			return -EINVAL;
+-	} else {
+-		pwmc0 = 0;
+-	}
++	pwmc0 = fls(period >> 16);
++	if (WARN_ON(pwmc0 > 3))
++		return -EINVAL;
+ 
+ 	period >>= pwmc0;
+ 	duty_cycle >>= pwmc0;
+diff --git a/drivers/remoteproc/remoteproc_cdev.c b/drivers/remoteproc/remoteproc_cdev.c
+index 0b8a84c04f766..4ad98b0b8caa5 100644
+--- a/drivers/remoteproc/remoteproc_cdev.c
++++ b/drivers/remoteproc/remoteproc_cdev.c
+@@ -124,7 +124,7 @@ int rproc_char_device_add(struct rproc *rproc)
+ 
+ void rproc_char_device_remove(struct rproc *rproc)
+ {
+-	__unregister_chrdev(MAJOR(rproc->dev.devt), rproc->index, 1, "remoteproc");
++	cdev_del(&rproc->cdev);
+ }
+ 
+ void __init rproc_init_cdev(void)
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index 626a6b90fba2c..203e2bd8ebb4f 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -2602,7 +2602,6 @@ int rproc_del(struct rproc *rproc)
+ 	mutex_unlock(&rproc->lock);
+ 
+ 	rproc_delete_debug_dir(rproc);
+-	rproc_char_device_remove(rproc);
+ 
+ 	/* the rproc is downref'ed as soon as it's removed from the klist */
+ 	mutex_lock(&rproc_list_mutex);
+@@ -2613,6 +2612,7 @@ int rproc_del(struct rproc *rproc)
+ 	synchronize_rcu();
+ 
+ 	device_del(&rproc->dev);
++	rproc_char_device_remove(rproc);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/remoteproc/stm32_rproc.c b/drivers/remoteproc/stm32_rproc.c
+index 7353f9e7e7afe..b643efcf995a1 100644
+--- a/drivers/remoteproc/stm32_rproc.c
++++ b/drivers/remoteproc/stm32_rproc.c
+@@ -474,14 +474,12 @@ static int stm32_rproc_attach(struct rproc *rproc)
+ static int stm32_rproc_detach(struct rproc *rproc)
+ {
+ 	struct stm32_rproc *ddata = rproc->priv;
+-	int err, dummy_data, idx;
++	int err, idx;
+ 
+ 	/* Inform the remote processor of the detach */
+ 	idx = stm32_rproc_mbox_idx(rproc, STM32_MBX_DETACH);
+ 	if (idx >= 0 && ddata->mb[idx].chan) {
+-		/* A dummy data is sent to allow to block on transmit */
+-		err = mbox_send_message(ddata->mb[idx].chan,
+-					&dummy_data);
++		err = mbox_send_message(ddata->mb[idx].chan, "stop");
+ 		if (err < 0)
+ 			dev_warn(&rproc->dev, "warning: remote FW detach without ack\n");
+ 	}
+@@ -493,15 +491,13 @@ static int stm32_rproc_detach(struct rproc *rproc)
+ static int stm32_rproc_stop(struct rproc *rproc)
+ {
+ 	struct stm32_rproc *ddata = rproc->priv;
+-	int err, dummy_data, idx;
++	int err, idx;
+ 
+ 	/* request shutdown of the remote processor */
+ 	if (rproc->state != RPROC_OFFLINE) {
+ 		idx = stm32_rproc_mbox_idx(rproc, STM32_MBX_SHUTDOWN);
+ 		if (idx >= 0 && ddata->mb[idx].chan) {
+-			/* a dummy data is sent to allow to block on transmit */
+-			err = mbox_send_message(ddata->mb[idx].chan,
+-						&dummy_data);
++			err = mbox_send_message(ddata->mb[idx].chan, "detach");
+ 			if (err < 0)
+ 				dev_warn(&rproc->dev, "warning: remote FW shutdown without ack\n");
+ 		}
+@@ -556,7 +552,7 @@ static void stm32_rproc_kick(struct rproc *rproc, int vqid)
+ 			continue;
+ 		if (!ddata->mb[i].chan)
+ 			return;
+-		err = mbox_send_message(ddata->mb[i].chan, (void *)(long)vqid);
++		err = mbox_send_message(ddata->mb[i].chan, "kick");
+ 		if (err < 0)
+ 			dev_err(&rproc->dev, "%s: failed (%s, err:%d)\n",
+ 				__func__, ddata->mb[i].name, err);
+@@ -580,7 +576,7 @@ static int stm32_rproc_da_to_pa(struct rproc *rproc,
+ 			continue;
+ 
+ 		*pa = da - p_mem->dev_addr + p_mem->bus_addr;
+-		dev_dbg(dev, "da %llx to pa %#x\n", da, *pa);
++		dev_dbg(dev, "da %llx to pa %pap\n", da, pa);
+ 
+ 		return 0;
+ 	}
+diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+index 5cf8d030a1f0f..4104e4846dbff 100644
+--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c
++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c
+@@ -1272,9 +1272,9 @@ static int k3_r5_core_of_init(struct platform_device *pdev)
+ 
+ 	core->tsp = k3_r5_core_of_get_tsp(dev, core->ti_sci);
+ 	if (IS_ERR(core->tsp)) {
++		ret = PTR_ERR(core->tsp);
+ 		dev_err(dev, "failed to construct ti-sci proc control, ret = %d\n",
+ 			ret);
+-		ret = PTR_ERR(core->tsp);
+ 		goto err;
+ 	}
+ 
+diff --git a/drivers/reset/Kconfig b/drivers/reset/Kconfig
+index 3e7f55e44d849..bc8e90c8be529 100644
+--- a/drivers/reset/Kconfig
++++ b/drivers/reset/Kconfig
+@@ -59,7 +59,8 @@ config RESET_BRCMSTB
+ config RESET_BRCMSTB_RESCAL
+ 	bool "Broadcom STB RESCAL reset controller"
+ 	depends on HAS_IOMEM
+-	default ARCH_BRCMSTB || COMPILE_TEST
++	depends on ARCH_BRCMSTB || COMPILE_TEST
++	default ARCH_BRCMSTB
+ 	help
+ 	  This enables the RESCAL reset controller for SATA, PCIe0, or PCIe1 on
+ 	  BCM7216.
+@@ -82,6 +83,7 @@ config RESET_IMX7
+ 
+ config RESET_INTEL_GW
+ 	bool "Intel Reset Controller Driver"
++	depends on X86 || COMPILE_TEST
+ 	depends on OF && HAS_IOMEM
+ 	select REGMAP_MMIO
+ 	help
+diff --git a/drivers/reset/core.c b/drivers/reset/core.c
+index 71c1c8264b2db..e2a6a927b84ce 100644
+--- a/drivers/reset/core.c
++++ b/drivers/reset/core.c
+@@ -774,7 +774,10 @@ static struct reset_control *__reset_control_get_internal(
+ 	if (!rstc)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	try_module_get(rcdev->owner);
++	if (!try_module_get(rcdev->owner)) {
++		kfree(rstc);
++		return ERR_PTR(-ENODEV);
++	}
+ 
+ 	rstc->rcdev = rcdev;
+ 	list_add(&rstc->list, &rcdev->reset_control_head);
+diff --git a/drivers/reset/reset-a10sr.c b/drivers/reset/reset-a10sr.c
+index 7eacc89382f8a..99b3bc8382f35 100644
+--- a/drivers/reset/reset-a10sr.c
++++ b/drivers/reset/reset-a10sr.c
+@@ -118,6 +118,7 @@ static struct platform_driver a10sr_reset_driver = {
+ 	.probe	= a10sr_reset_probe,
+ 	.driver = {
+ 		.name		= "altr_a10sr_reset",
++		.of_match_table	= a10sr_reset_of_match,
+ 	},
+ };
+ module_platform_driver(a10sr_reset_driver);
+diff --git a/drivers/reset/reset-brcmstb.c b/drivers/reset/reset-brcmstb.c
+index f213264c8567b..42c9d5241c530 100644
+--- a/drivers/reset/reset-brcmstb.c
++++ b/drivers/reset/reset-brcmstb.c
+@@ -111,6 +111,7 @@ static const struct of_device_id brcmstb_reset_of_match[] = {
+ 	{ .compatible = "brcm,brcmstb-reset" },
+ 	{ /* sentinel */ }
+ };
++MODULE_DEVICE_TABLE(of, brcmstb_reset_of_match);
+ 
+ static struct platform_driver brcmstb_reset_driver = {
+ 	.probe	= brcmstb_reset_probe,
+diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
+index d8c13fded164f..914497abeef91 100644
+--- a/drivers/rtc/Kconfig
++++ b/drivers/rtc/Kconfig
+@@ -502,7 +502,8 @@ config RTC_DRV_M41T80_WDT
+ 
+ config RTC_DRV_BD70528
+ 	tristate "ROHM BD70528, BD71815 and BD71828 PMIC RTC"
+-	depends on MFD_ROHM_BD71828 || MFD_ROHM_BD70528 && (BD70528_WATCHDOG || !BD70528_WATCHDOG)
++	depends on MFD_ROHM_BD71828 || MFD_ROHM_BD70528
++	depends on BD70528_WATCHDOG || !BD70528_WATCHDOG
+ 	help
+ 	  If you say Y here you will get support for the RTC
+ 	  block on ROHM BD70528, BD71815 and BD71828 Power Management IC.
+diff --git a/drivers/rtc/proc.c b/drivers/rtc/proc.c
+index 73344598fc1be..cbcdbb19d848e 100644
+--- a/drivers/rtc/proc.c
++++ b/drivers/rtc/proc.c
+@@ -23,8 +23,8 @@ static bool is_rtc_hctosys(struct rtc_device *rtc)
+ 	int size;
+ 	char name[NAME_SIZE];
+ 
+-	size = scnprintf(name, NAME_SIZE, "rtc%d", rtc->id);
+-	if (size > NAME_SIZE)
++	size = snprintf(name, NAME_SIZE, "rtc%d", rtc->id);
++	if (size >= NAME_SIZE)
+ 		return false;
+ 
+ 	return !strncmp(name, CONFIG_RTC_HCTOSYS_DEVICE, NAME_SIZE);
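+
+The old truncation check here could never fire: the kernel's scnprintf()
+returns the number of characters actually stored (at most NAME_SIZE - 1), so
+"size > NAME_SIZE" was unreachable, whereas snprintf() returns the length the
+formatted string would have had, making "size >= NAME_SIZE" a real truncation
+test. A runnable userspace sketch of the corrected check, with hypothetical
+values:
+
+	#include <stdio.h>
+
+	#define NAME_SIZE 12
+
+	int main(void)
+	{
+		char name[NAME_SIZE];
+		/* "rtc2147483647" needs 14 bytes including the NUL */
+		int size = snprintf(name, NAME_SIZE, "rtc%d", 2147483647);
+
+		if (size >= NAME_SIZE)
+			printf("truncated, needed %d bytes\n", size + 1);
+		else
+			printf("fits: %s\n", name);
+		return 0;
+	}
+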
+diff --git a/drivers/s390/char/sclp_vt220.c b/drivers/s390/char/sclp_vt220.c
+index 7f4445b0f8192..2f96c31e9b7b2 100644
+--- a/drivers/s390/char/sclp_vt220.c
++++ b/drivers/s390/char/sclp_vt220.c
+@@ -35,8 +35,8 @@
+ #define SCLP_VT220_MINOR		65
+ #define SCLP_VT220_DRIVER_NAME		"sclp_vt220"
+ #define SCLP_VT220_DEVICE_NAME		"ttysclp"
+-#define SCLP_VT220_CONSOLE_NAME		"ttyS"
+-#define SCLP_VT220_CONSOLE_INDEX	1	/* console=ttyS1 */
++#define SCLP_VT220_CONSOLE_NAME		"ttysclp"
++#define SCLP_VT220_CONSOLE_INDEX	0	/* console=ttysclp0 */
+ 
+ /* Representation of a single write request */
+ struct sclp_vt220_request {
+diff --git a/drivers/s390/scsi/zfcp_sysfs.c b/drivers/s390/scsi/zfcp_sysfs.c
+index 544efd4c42f02..b8cd75a872eeb 100644
+--- a/drivers/s390/scsi/zfcp_sysfs.c
++++ b/drivers/s390/scsi/zfcp_sysfs.c
+@@ -487,6 +487,7 @@ static ssize_t zfcp_sysfs_port_fc_security_show(struct device *dev,
+ 	if (0 == (status & ZFCP_STATUS_COMMON_OPEN) ||
+ 	    0 == (status & ZFCP_STATUS_COMMON_UNBLOCKED) ||
+ 	    0 == (status & ZFCP_STATUS_PORT_PHYS_OPEN) ||
++	    0 != (status & ZFCP_STATUS_PORT_LINK_TEST) ||
+ 	    0 != (status & ZFCP_STATUS_COMMON_ERP_FAILED) ||
+ 	    0 != (status & ZFCP_STATUS_COMMON_ACCESS_BOXED))
+ 		i = sprintf(buf, "unknown\n");
+diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c
+index 4b79661275c90..42e494a7106cd 100644
+--- a/drivers/scsi/arcmsr/arcmsr_hba.c
++++ b/drivers/scsi/arcmsr/arcmsr_hba.c
+@@ -1923,8 +1923,12 @@ static void arcmsr_post_ccb(struct AdapterControlBlock *acb, struct CommandContr
+ 
+ 		if (ccb->arc_cdb_size <= 0x300)
+ 			arc_cdb_size = (ccb->arc_cdb_size - 1) >> 6 | 1;
+-		else
+-			arc_cdb_size = (((ccb->arc_cdb_size + 0xff) >> 8) + 2) << 1 | 1;
++		else {
++			arc_cdb_size = ((ccb->arc_cdb_size + 0xff) >> 8) + 2;
++			if (arc_cdb_size > 0xF)
++				arc_cdb_size = 0xF;
++			arc_cdb_size = (arc_cdb_size << 1) | 1;
++		}
+ 		ccb_post_stamp = (ccb->smid | arc_cdb_size);
+ 		writel(0, &pmu->inbound_queueport_high);
+ 		writel(ccb_post_stamp, &pmu->inbound_queueport_low);
+@@ -2415,10 +2419,17 @@ static void arcmsr_hbaD_doorbell_isr(struct AdapterControlBlock *pACB)
+ 
+ static void arcmsr_hbaE_doorbell_isr(struct AdapterControlBlock *pACB)
+ {
+-	uint32_t outbound_doorbell, in_doorbell, tmp;
++	uint32_t outbound_doorbell, in_doorbell, tmp, i;
+ 	struct MessageUnit_E __iomem *reg = pACB->pmuE;
+ 
+-	in_doorbell = readl(&reg->iobound_doorbell);
++	if (pACB->adapter_type == ACB_ADAPTER_TYPE_F) {
++		for (i = 0; i < 5; i++) {
++			in_doorbell = readl(&reg->iobound_doorbell);
++			if (in_doorbell != 0)
++				break;
++		}
++	} else
++		in_doorbell = readl(&reg->iobound_doorbell);
+ 	outbound_doorbell = in_doorbell ^ pACB->in_doorbell;
+ 	do {
+ 		writel(0, &reg->host_int_status); /* clear interrupt */
+diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
+index 27c4f1598f765..b98f2f1e782df 100644
+--- a/drivers/scsi/be2iscsi/be_main.c
++++ b/drivers/scsi/be2iscsi/be_main.c
+@@ -416,7 +416,7 @@ static struct beiscsi_hba *beiscsi_hba_alloc(struct pci_dev *pcidev)
+ 			"beiscsi_hba_alloc - iscsi_host_alloc failed\n");
+ 		return NULL;
+ 	}
+-	shost->max_id = BE2_MAX_SESSIONS;
++	shost->max_id = BE2_MAX_SESSIONS - 1;
+ 	shost->max_channel = 0;
+ 	shost->max_cmd_len = BEISCSI_MAX_CMD_LEN;
+ 	shost->max_lun = BEISCSI_NUM_MAX_LUN;
+@@ -5318,7 +5318,7 @@ static int beiscsi_enable_port(struct beiscsi_hba *phba)
+ 	/* Re-enable UER. If different TPE occurs then it is recoverable. */
+ 	beiscsi_set_uer_feature(phba);
+ 
+-	phba->shost->max_id = phba->params.cxns_per_ctrl;
++	phba->shost->max_id = phba->params.cxns_per_ctrl - 1;
+ 	phba->shost->can_queue = phba->params.ios_per_ctrl;
+ 	ret = beiscsi_init_port(phba);
+ 	if (ret < 0) {
+@@ -5745,6 +5745,7 @@ free_hba:
+ 	pci_disable_msix(phba->pcidev);
+ 	pci_dev_put(phba->pcidev);
+ 	iscsi_host_free(phba->shost);
++	pci_disable_pcie_error_reporting(pcidev);
+ 	pci_set_drvdata(pcidev, NULL);
+ disable_pci:
+ 	pci_release_regions(pcidev);
+diff --git a/drivers/scsi/bnx2i/bnx2i_iscsi.c b/drivers/scsi/bnx2i/bnx2i_iscsi.c
+index 2ad85c6b99fd2..8cf2f9a7cfdc2 100644
+--- a/drivers/scsi/bnx2i/bnx2i_iscsi.c
++++ b/drivers/scsi/bnx2i/bnx2i_iscsi.c
+@@ -791,7 +791,7 @@ struct bnx2i_hba *bnx2i_alloc_hba(struct cnic_dev *cnic)
+ 		return NULL;
+ 	shost->dma_boundary = cnic->pcidev->dma_mask;
+ 	shost->transportt = bnx2i_scsi_xport_template;
+-	shost->max_id = ISCSI_MAX_CONNS_PER_HBA;
++	shost->max_id = ISCSI_MAX_CONNS_PER_HBA - 1;
+ 	shost->max_channel = 0;
+ 	shost->max_lun = 512;
+ 	shost->max_cmd_len = 16;
+diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
+index f6bcae829c29b..506b561670af0 100644
+--- a/drivers/scsi/cxgbi/libcxgbi.c
++++ b/drivers/scsi/cxgbi/libcxgbi.c
+@@ -337,7 +337,7 @@ void cxgbi_hbas_remove(struct cxgbi_device *cdev)
+ EXPORT_SYMBOL_GPL(cxgbi_hbas_remove);
+ 
+ int cxgbi_hbas_add(struct cxgbi_device *cdev, u64 max_lun,
+-		unsigned int max_id, struct scsi_host_template *sht,
++		unsigned int max_conns, struct scsi_host_template *sht,
+ 		struct scsi_transport_template *stt)
+ {
+ 	struct cxgbi_hba *chba;
+@@ -357,7 +357,7 @@ int cxgbi_hbas_add(struct cxgbi_device *cdev, u64 max_lun,
+ 
+ 		shost->transportt = stt;
+ 		shost->max_lun = max_lun;
+-		shost->max_id = max_id;
++		shost->max_id = max_conns - 1;
+ 		shost->max_channel = 0;
+ 		shost->max_cmd_len = SCSI_MAX_VARLEN_CDB_SIZE;
+ 
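+The max_id changes in the beiscsi, bnx2i and cxgbi hunks above (and in the
+qedi hunk further down) are all the same off-by-one fix: shost->max_id is the
+highest target ID the midlayer will scan, inclusive, so a driver supporting N
+sessions must set it to N - 1. A tiny sketch of the scan bound, with
+hypothetical values:
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		unsigned int max_sessions = 4;
+		unsigned int max_id = max_sessions - 1;	/* highest valid ID */
+		unsigned int id;
+
+		/* the scan visits IDs 0..max_id inclusive: 4 targets */
+		for (id = 0; id <= max_id; id++)
+			printf("scan target %u\n", id);
+		return 0;
+	}
+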
+diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
+index efa8c03814767..005b97c61dc00 100644
+--- a/drivers/scsi/device_handler/scsi_dh_alua.c
++++ b/drivers/scsi/device_handler/scsi_dh_alua.c
+@@ -517,7 +517,8 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ 	int len, k, off, bufflen = ALUA_RTPG_SIZE;
+ 	int group_id_old, state_old, pref_old, valid_states_old;
+ 	unsigned char *desc, *buff;
+-	unsigned err, retval;
++	unsigned err;
++	int retval;
+ 	unsigned int tpg_desc_tbl_off;
+ 	unsigned char orig_transition_tmo;
+ 	unsigned long flags;
+@@ -562,12 +563,12 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ 			kfree(buff);
+ 			return SCSI_DH_OK;
+ 		}
+-		if (!scsi_sense_valid(&sense_hdr)) {
++		if (retval < 0 || !scsi_sense_valid(&sense_hdr)) {
+ 			sdev_printk(KERN_INFO, sdev,
+ 				    "%s: rtpg failed, result %d\n",
+ 				    ALUA_DH_NAME, retval);
+ 			kfree(buff);
+-			if (driver_byte(retval) == DRIVER_ERROR)
++			if (retval < 0)
+ 				return SCSI_DH_DEV_TEMP_BUSY;
+ 			return SCSI_DH_IO;
+ 		}
+@@ -791,11 +792,11 @@ static unsigned alua_stpg(struct scsi_device *sdev, struct alua_port_group *pg)
+ 	retval = submit_stpg(sdev, pg->group_id, &sense_hdr);
+ 
+ 	if (retval) {
+-		if (!scsi_sense_valid(&sense_hdr)) {
++		if (retval < 0 || !scsi_sense_valid(&sense_hdr)) {
+ 			sdev_printk(KERN_INFO, sdev,
+ 				    "%s: stpg failed, result %d",
+ 				    ALUA_DH_NAME, retval);
+-			if (driver_byte(retval) == DRIVER_ERROR)
++			if (retval < 0)
+ 				return SCSI_DH_DEV_TEMP_BUSY;
+ 		} else {
+ 			sdev_printk(KERN_INFO, sdev, "%s: stpg failed\n",
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+index 3e359ac752fd2..3cba7bfba2965 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+@@ -1649,7 +1649,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 			if (irq < 0) {
+ 				dev_err(dev, "irq init: fail map phy interrupt %d\n",
+ 					idx);
+-				return -ENOENT;
++				return irq;
+ 			}
+ 
+ 			rc = devm_request_irq(dev, irq, phy_interrupts[j], 0,
+@@ -1657,7 +1657,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 			if (rc) {
+ 				dev_err(dev, "irq init: could not request phy interrupt %d, rc=%d\n",
+ 					irq, rc);
+-				return -ENOENT;
++				return rc;
+ 			}
+ 		}
+ 	}
+@@ -1668,7 +1668,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 		if (irq < 0) {
+ 			dev_err(dev, "irq init: could not map cq interrupt %d\n",
+ 				idx);
+-			return -ENOENT;
++			return irq;
+ 		}
+ 
+ 		rc = devm_request_irq(dev, irq, cq_interrupt_v1_hw, 0,
+@@ -1676,7 +1676,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 		if (rc) {
+ 			dev_err(dev, "irq init: could not request cq interrupt %d, rc=%d\n",
+ 				irq, rc);
+-			return -ENOENT;
++			return rc;
+ 		}
+ 	}
+ 
+@@ -1686,7 +1686,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 		if (irq < 0) {
+ 			dev_err(dev, "irq init: could not map fatal interrupt %d\n",
+ 				idx);
+-			return -ENOENT;
++			return irq;
+ 		}
+ 
+ 		rc = devm_request_irq(dev, irq, fatal_interrupts[i], 0,
+@@ -1694,7 +1694,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 		if (rc) {
+ 			dev_err(dev, "irq init: could not request fatal interrupt %d, rc=%d\n",
+ 				irq, rc);
+-			return -ENOENT;
++			return rc;
+ 		}
+ 	}
+ 
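+Returning the value from platform_get_irq()/devm_request_irq() instead of a
+hard-coded -ENOENT matters beyond cosmetics: platform_get_irq() can return
+-EPROBE_DEFER, and flattening that to -ENOENT would stop the probe from being
+retried once the interrupt controller shows up. A minimal sketch of the
+idiom:
+
+	irq = platform_get_irq(pdev, idx);
+	if (irq < 0)
+		return irq;	/* preserve -EPROBE_DEFER and friends */
+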
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index cd52664920e1a..b40bb6bd5f5d0 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -220,6 +220,9 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
+ 		goto fail;
+ 	}
+ 
++	shost->cmd_per_lun = min_t(short, shost->cmd_per_lun,
++				   shost->can_queue);
++
+ 	error = scsi_init_sense_cache(shost);
+ 	if (error)
+ 		goto fail;
+@@ -485,6 +488,7 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
+ 		shost_printk(KERN_WARNING, shost,
+ 			"error handler thread failed to spawn, error = %ld\n",
+ 			PTR_ERR(shost->ehandler));
++		shost->ehandler = NULL;
+ 		goto fail;
+ 	}
+ 
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 2aaf836786548..dfe906307e55d 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -230,11 +230,11 @@ static int iscsi_prep_ecdb_ahs(struct iscsi_task *task)
+  */
+ static int iscsi_check_tmf_restrictions(struct iscsi_task *task, int opcode)
+ {
+-	struct iscsi_conn *conn = task->conn;
+-	struct iscsi_tm *tmf = &conn->tmhdr;
++	struct iscsi_session *session = task->conn->session;
++	struct iscsi_tm *tmf = &session->tmhdr;
+ 	u64 hdr_lun;
+ 
+-	if (conn->tmf_state == TMF_INITIAL)
++	if (session->tmf_state == TMF_INITIAL)
+ 		return 0;
+ 
+ 	if ((tmf->opcode & ISCSI_OPCODE_MASK) != ISCSI_OP_SCSI_TMFUNC)
+@@ -254,24 +254,19 @@ static int iscsi_check_tmf_restrictions(struct iscsi_task *task, int opcode)
+ 		 * Fail all SCSI cmd PDUs
+ 		 */
+ 		if (opcode != ISCSI_OP_SCSI_DATA_OUT) {
+-			iscsi_conn_printk(KERN_INFO, conn,
+-					  "task [op %x itt "
+-					  "0x%x/0x%x] "
+-					  "rejected.\n",
+-					  opcode, task->itt,
+-					  task->hdr_itt);
++			iscsi_session_printk(KERN_INFO, session,
++					     "task [op %x itt 0x%x/0x%x] rejected.\n",
++					     opcode, task->itt, task->hdr_itt);
+ 			return -EACCES;
+ 		}
+ 		/*
+ 		 * And also all data-out PDUs in response to R2T
+ 		 * if fast_abort is set.
+ 		 */
+-		if (conn->session->fast_abort) {
+-			iscsi_conn_printk(KERN_INFO, conn,
+-					  "task [op %x itt "
+-					  "0x%x/0x%x] fast abort.\n",
+-					  opcode, task->itt,
+-					  task->hdr_itt);
++		if (session->fast_abort) {
++			iscsi_session_printk(KERN_INFO, session,
++					     "task [op %x itt 0x%x/0x%x] fast abort.\n",
++					     opcode, task->itt, task->hdr_itt);
+ 			return -EACCES;
+ 		}
+ 		break;
+@@ -284,7 +279,7 @@ static int iscsi_check_tmf_restrictions(struct iscsi_task *task, int opcode)
+ 		 */
+ 		if (opcode == ISCSI_OP_SCSI_DATA_OUT &&
+ 		    task->hdr_itt == tmf->rtt) {
+-			ISCSI_DBG_SESSION(conn->session,
++			ISCSI_DBG_SESSION(session,
+ 					  "Preventing task %x/%x from sending "
+ 					  "data-out due to abort task in "
+ 					  "progress\n", task->itt,
+@@ -936,20 +931,21 @@ iscsi_data_in_rsp(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
+ static void iscsi_tmf_rsp(struct iscsi_conn *conn, struct iscsi_hdr *hdr)
+ {
+ 	struct iscsi_tm_rsp *tmf = (struct iscsi_tm_rsp *)hdr;
++	struct iscsi_session *session = conn->session;
+ 
+ 	conn->exp_statsn = be32_to_cpu(hdr->statsn) + 1;
+ 	conn->tmfrsp_pdus_cnt++;
+ 
+-	if (conn->tmf_state != TMF_QUEUED)
++	if (session->tmf_state != TMF_QUEUED)
+ 		return;
+ 
+ 	if (tmf->response == ISCSI_TMF_RSP_COMPLETE)
+-		conn->tmf_state = TMF_SUCCESS;
++		session->tmf_state = TMF_SUCCESS;
+ 	else if (tmf->response == ISCSI_TMF_RSP_NO_TASK)
+-		conn->tmf_state = TMF_NOT_FOUND;
++		session->tmf_state = TMF_NOT_FOUND;
+ 	else
+-		conn->tmf_state = TMF_FAILED;
+-	wake_up(&conn->ehwait);
++		session->tmf_state = TMF_FAILED;
++	wake_up(&session->ehwait);
+ }
+ 
+ static int iscsi_send_nopout(struct iscsi_conn *conn, struct iscsi_nopin *rhdr)
+@@ -1361,7 +1357,6 @@ void iscsi_session_failure(struct iscsi_session *session,
+ 			   enum iscsi_err err)
+ {
+ 	struct iscsi_conn *conn;
+-	struct device *dev;
+ 
+ 	spin_lock_bh(&session->frwd_lock);
+ 	conn = session->leadconn;
+@@ -1370,10 +1365,8 @@ void iscsi_session_failure(struct iscsi_session *session,
+ 		return;
+ 	}
+ 
+-	dev = get_device(&conn->cls_conn->dev);
++	iscsi_get_conn(conn->cls_conn);
+ 	spin_unlock_bh(&session->frwd_lock);
+-	if (!dev)
+-	        return;
+ 	/*
+ 	 * if the host is being removed bypass the connection
+ 	 * recovery initialization because we are going to kill
+@@ -1383,7 +1376,7 @@ void iscsi_session_failure(struct iscsi_session *session,
+ 		iscsi_conn_error_event(conn->cls_conn, err);
+ 	else
+ 		iscsi_conn_failure(conn, err);
+-	put_device(dev);
++	iscsi_put_conn(conn->cls_conn);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_session_failure);
+ 
+@@ -1829,15 +1822,14 @@ EXPORT_SYMBOL_GPL(iscsi_target_alloc);
+ 
+ static void iscsi_tmf_timedout(struct timer_list *t)
+ {
+-	struct iscsi_conn *conn = from_timer(conn, t, tmf_timer);
+-	struct iscsi_session *session = conn->session;
++	struct iscsi_session *session = from_timer(session, t, tmf_timer);
+ 
+ 	spin_lock(&session->frwd_lock);
+-	if (conn->tmf_state == TMF_QUEUED) {
+-		conn->tmf_state = TMF_TIMEDOUT;
++	if (session->tmf_state == TMF_QUEUED) {
++		session->tmf_state = TMF_TIMEDOUT;
+ 		ISCSI_DBG_EH(session, "tmf timedout\n");
+ 		/* unblock eh_abort() */
+-		wake_up(&conn->ehwait);
++		wake_up(&session->ehwait);
+ 	}
+ 	spin_unlock(&session->frwd_lock);
+ }
+@@ -1860,8 +1852,8 @@ static int iscsi_exec_task_mgmt_fn(struct iscsi_conn *conn,
+ 		return -EPERM;
+ 	}
+ 	conn->tmfcmd_pdus_cnt++;
+-	conn->tmf_timer.expires = timeout * HZ + jiffies;
+-	add_timer(&conn->tmf_timer);
++	session->tmf_timer.expires = timeout * HZ + jiffies;
++	add_timer(&session->tmf_timer);
+ 	ISCSI_DBG_EH(session, "tmf set timeout\n");
+ 
+ 	spin_unlock_bh(&session->frwd_lock);
+@@ -1875,12 +1867,12 @@ static int iscsi_exec_task_mgmt_fn(struct iscsi_conn *conn,
+ 	 * 3) session is terminated or restarted or userspace has
+ 	 * given up on recovery
+ 	 */
+-	wait_event_interruptible(conn->ehwait, age != session->age ||
++	wait_event_interruptible(session->ehwait, age != session->age ||
+ 				 session->state != ISCSI_STATE_LOGGED_IN ||
+-				 conn->tmf_state != TMF_QUEUED);
++				 session->tmf_state != TMF_QUEUED);
+ 	if (signal_pending(current))
+ 		flush_signals(current);
+-	del_timer_sync(&conn->tmf_timer);
++	del_timer_sync(&session->tmf_timer);
+ 
+ 	mutex_lock(&session->eh_mutex);
+ 	spin_lock_bh(&session->frwd_lock);
+@@ -2312,17 +2304,17 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ 	}
+ 
+ 	/* only have one tmf outstanding at a time */
+-	if (conn->tmf_state != TMF_INITIAL)
++	if (session->tmf_state != TMF_INITIAL)
+ 		goto failed;
+-	conn->tmf_state = TMF_QUEUED;
++	session->tmf_state = TMF_QUEUED;
+ 
+-	hdr = &conn->tmhdr;
++	hdr = &session->tmhdr;
+ 	iscsi_prep_abort_task_pdu(task, hdr);
+ 
+ 	if (iscsi_exec_task_mgmt_fn(conn, hdr, age, session->abort_timeout))
+ 		goto failed;
+ 
+-	switch (conn->tmf_state) {
++	switch (session->tmf_state) {
+ 	case TMF_SUCCESS:
+ 		spin_unlock_bh(&session->frwd_lock);
+ 		/*
+@@ -2337,7 +2329,7 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ 		 */
+ 		spin_lock_bh(&session->frwd_lock);
+ 		fail_scsi_task(task, DID_ABORT);
+-		conn->tmf_state = TMF_INITIAL;
++		session->tmf_state = TMF_INITIAL;
+ 		memset(hdr, 0, sizeof(*hdr));
+ 		spin_unlock_bh(&session->frwd_lock);
+ 		iscsi_start_tx(conn);
+@@ -2348,7 +2340,7 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ 		goto failed_unlocked;
+ 	case TMF_NOT_FOUND:
+ 		if (!sc->SCp.ptr) {
+-			conn->tmf_state = TMF_INITIAL;
++			session->tmf_state = TMF_INITIAL;
+ 			memset(hdr, 0, sizeof(*hdr));
+ 			/* task completed before tmf abort response */
+ 			ISCSI_DBG_EH(session, "sc completed while abort	in "
+@@ -2357,7 +2349,7 @@ int iscsi_eh_abort(struct scsi_cmnd *sc)
+ 		}
+ 		fallthrough;
+ 	default:
+-		conn->tmf_state = TMF_INITIAL;
++		session->tmf_state = TMF_INITIAL;
+ 		goto failed;
+ 	}
+ 
+@@ -2416,11 +2408,11 @@ int iscsi_eh_device_reset(struct scsi_cmnd *sc)
+ 	conn = session->leadconn;
+ 
+ 	/* only have one tmf outstanding at a time */
+-	if (conn->tmf_state != TMF_INITIAL)
++	if (session->tmf_state != TMF_INITIAL)
+ 		goto unlock;
+-	conn->tmf_state = TMF_QUEUED;
++	session->tmf_state = TMF_QUEUED;
+ 
+-	hdr = &conn->tmhdr;
++	hdr = &session->tmhdr;
+ 	iscsi_prep_lun_reset_pdu(sc, hdr);
+ 
+ 	if (iscsi_exec_task_mgmt_fn(conn, hdr, session->age,
+@@ -2429,7 +2421,7 @@ int iscsi_eh_device_reset(struct scsi_cmnd *sc)
+ 		goto unlock;
+ 	}
+ 
+-	switch (conn->tmf_state) {
++	switch (session->tmf_state) {
+ 	case TMF_SUCCESS:
+ 		break;
+ 	case TMF_TIMEDOUT:
+@@ -2437,7 +2429,7 @@ int iscsi_eh_device_reset(struct scsi_cmnd *sc)
+ 		iscsi_conn_failure(conn, ISCSI_ERR_SCSI_EH_SESSION_RST);
+ 		goto done;
+ 	default:
+-		conn->tmf_state = TMF_INITIAL;
++		session->tmf_state = TMF_INITIAL;
+ 		goto unlock;
+ 	}
+ 
+@@ -2449,7 +2441,7 @@ int iscsi_eh_device_reset(struct scsi_cmnd *sc)
+ 	spin_lock_bh(&session->frwd_lock);
+ 	memset(hdr, 0, sizeof(*hdr));
+ 	fail_scsi_tasks(conn, sc->device->lun, DID_ERROR);
+-	conn->tmf_state = TMF_INITIAL;
++	session->tmf_state = TMF_INITIAL;
+ 	spin_unlock_bh(&session->frwd_lock);
+ 
+ 	iscsi_start_tx(conn);
+@@ -2472,8 +2464,7 @@ void iscsi_session_recovery_timedout(struct iscsi_cls_session *cls_session)
+ 	spin_lock_bh(&session->frwd_lock);
+ 	if (session->state != ISCSI_STATE_LOGGED_IN) {
+ 		session->state = ISCSI_STATE_RECOVERY_FAILED;
+-		if (session->leadconn)
+-			wake_up(&session->leadconn->ehwait);
++		wake_up(&session->ehwait);
+ 	}
+ 	spin_unlock_bh(&session->frwd_lock);
+ }
+@@ -2518,7 +2509,7 @@ failed:
+ 	iscsi_conn_failure(conn, ISCSI_ERR_SCSI_EH_SESSION_RST);
+ 
+ 	ISCSI_DBG_EH(session, "wait for relogin\n");
+-	wait_event_interruptible(conn->ehwait,
++	wait_event_interruptible(session->ehwait,
+ 				 session->state == ISCSI_STATE_TERMINATE ||
+ 				 session->state == ISCSI_STATE_LOGGED_IN ||
+ 				 session->state == ISCSI_STATE_RECOVERY_FAILED);
+@@ -2579,11 +2570,11 @@ static int iscsi_eh_target_reset(struct scsi_cmnd *sc)
+ 	conn = session->leadconn;
+ 
+ 	/* only have one tmf outstanding at a time */
+-	if (conn->tmf_state != TMF_INITIAL)
++	if (session->tmf_state != TMF_INITIAL)
+ 		goto unlock;
+-	conn->tmf_state = TMF_QUEUED;
++	session->tmf_state = TMF_QUEUED;
+ 
+-	hdr = &conn->tmhdr;
++	hdr = &session->tmhdr;
+ 	iscsi_prep_tgt_reset_pdu(sc, hdr);
+ 
+ 	if (iscsi_exec_task_mgmt_fn(conn, hdr, session->age,
+@@ -2592,7 +2583,7 @@ static int iscsi_eh_target_reset(struct scsi_cmnd *sc)
+ 		goto unlock;
+ 	}
+ 
+-	switch (conn->tmf_state) {
++	switch (session->tmf_state) {
+ 	case TMF_SUCCESS:
+ 		break;
+ 	case TMF_TIMEDOUT:
+@@ -2600,7 +2591,7 @@ static int iscsi_eh_target_reset(struct scsi_cmnd *sc)
+ 		iscsi_conn_failure(conn, ISCSI_ERR_SCSI_EH_SESSION_RST);
+ 		goto done;
+ 	default:
+-		conn->tmf_state = TMF_INITIAL;
++		session->tmf_state = TMF_INITIAL;
+ 		goto unlock;
+ 	}
+ 
+@@ -2612,7 +2603,7 @@ static int iscsi_eh_target_reset(struct scsi_cmnd *sc)
+ 	spin_lock_bh(&session->frwd_lock);
+ 	memset(hdr, 0, sizeof(*hdr));
+ 	fail_scsi_tasks(conn, -1, DID_ERROR);
+-	conn->tmf_state = TMF_INITIAL;
++	session->tmf_state = TMF_INITIAL;
+ 	spin_unlock_bh(&session->frwd_lock);
+ 
+ 	iscsi_start_tx(conn);
+@@ -2942,7 +2933,10 @@ iscsi_session_setup(struct iscsi_transport *iscsit, struct Scsi_Host *shost,
+ 	session->tt = iscsit;
+ 	session->dd_data = cls_session->dd_data + sizeof(*session);
+ 
++	session->tmf_state = TMF_INITIAL;
++	timer_setup(&session->tmf_timer, iscsi_tmf_timedout, 0);
+ 	mutex_init(&session->eh_mutex);
++
+ 	spin_lock_init(&session->frwd_lock);
+ 	spin_lock_init(&session->back_lock);
+ 
+@@ -3046,7 +3040,6 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, int dd_size,
+ 	conn->c_stage = ISCSI_CONN_INITIAL_STAGE;
+ 	conn->id = conn_idx;
+ 	conn->exp_statsn = 0;
+-	conn->tmf_state = TMF_INITIAL;
+ 
+ 	timer_setup(&conn->transport_timer, iscsi_check_transport_timeouts, 0);
+ 
+@@ -3071,8 +3064,7 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, int dd_size,
+ 		goto login_task_data_alloc_fail;
+ 	conn->login_task->data = conn->data = data;
+ 
+-	timer_setup(&conn->tmf_timer, iscsi_tmf_timedout, 0);
+-	init_waitqueue_head(&conn->ehwait);
++	init_waitqueue_head(&session->ehwait);
+ 
+ 	return cls_conn;
+ 
+@@ -3107,7 +3099,7 @@ void iscsi_conn_teardown(struct iscsi_cls_conn *cls_conn)
+ 		 * leading connection? then give up on recovery.
+ 		 */
+ 		session->state = ISCSI_STATE_TERMINATE;
+-		wake_up(&conn->ehwait);
++		wake_up(&session->ehwait);
+ 	}
+ 	spin_unlock_bh(&session->frwd_lock);
+ 
+@@ -3182,7 +3174,7 @@ int iscsi_conn_start(struct iscsi_cls_conn *cls_conn)
+ 		 * commands after successful recovery
+ 		 */
+ 		conn->stop_stage = 0;
+-		conn->tmf_state = TMF_INITIAL;
++		session->tmf_state = TMF_INITIAL;
+ 		session->age++;
+ 		if (session->age == 16)
+ 			session->age = 0;
+@@ -3196,7 +3188,7 @@ int iscsi_conn_start(struct iscsi_cls_conn *cls_conn)
+ 	spin_unlock_bh(&session->frwd_lock);
+ 
+ 	iscsi_unblock_session(session->cls_session);
+-	wake_up(&conn->ehwait);
++	wake_up(&session->ehwait);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(iscsi_conn_start);
+@@ -3290,7 +3282,7 @@ void iscsi_conn_stop(struct iscsi_cls_conn *cls_conn, int flag)
+ 	spin_lock_bh(&session->frwd_lock);
+ 	fail_scsi_tasks(conn, -1, DID_TRANSPORT_DISRUPTED);
+ 	fail_mgmt_tasks(session, conn);
+-	memset(&conn->tmhdr, 0, sizeof(conn->tmhdr));
++	memset(&session->tmhdr, 0, sizeof(session->tmhdr));
+ 	spin_unlock_bh(&session->frwd_lock);
+ 	mutex_unlock(&session->eh_mutex);
+ }
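+
+The libiscsi hunks above move the whole TMF (task management function) state
+machine -- tmf_state, tmf_timer, tmhdr and the ehwait waitqueue -- from the
+connection into the session, matching the invariant the code already relied
+on: only one TMF may be outstanding per session, and the error-handler paths
+should not have to reach through a possibly-gone leadconn. A reduced,
+hypothetical model of the session-level gate (plain C, not the kernel
+structs):
+
+	#include <stdio.h>
+
+	enum tmf_state { TMF_INITIAL, TMF_QUEUED, TMF_SUCCESS,
+			 TMF_NOT_FOUND, TMF_FAILED, TMF_TIMEDOUT };
+
+	struct session {
+		enum tmf_state tmf_state;	/* now per-session state */
+	};
+
+	/* only one TMF at a time: refuse unless back at TMF_INITIAL */
+	static int tmf_try_queue(struct session *s)
+	{
+		if (s->tmf_state != TMF_INITIAL)
+			return -1;
+		s->tmf_state = TMF_QUEUED;
+		return 0;
+	}
+
+	int main(void)
+	{
+		struct session s = { .tmf_state = TMF_INITIAL };
+
+		printf("first: %d\n", tmf_try_queue(&s));	/* 0: queued */
+		printf("second: %d\n", tmf_try_queue(&s));	/* -1: busy */
+		return 0;
+	}
+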
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index c3ca2ccf9f828..933927f738c7b 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -1175,6 +1175,15 @@ stop_rr_fcf_flogi:
+ 			phba->fcf.fcf_redisc_attempted = 0; /* reset */
+ 			goto out;
+ 		}
++	} else if (vport->port_state > LPFC_FLOGI &&
++		   vport->fc_flag & FC_PT2PT) {
++		/*
++		 * In a p2p topology, it is possible that discovery has
++		 * already progressed, and this completion can be ignored.
++		 * Recheck the indicated topology.
++		 */
++		if (!sp->cmn.fPort)
++			goto out;
+ 	}
+ 
+ flogifail:
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index bc0bcb0dccc9a..9ed826fd53e85 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -7959,7 +7959,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
+ 				"0393 Error %d during rpi post operation\n",
+ 				rc);
+ 		rc = -ENODEV;
+-		goto out_destroy_queue;
++		goto out_free_iocblist;
+ 	}
+ 	lpfc_sli4_node_prep(phba);
+ 
+@@ -8125,8 +8125,9 @@ out_io_buff_free:
+ out_unset_queue:
+ 	/* Unset all the queues set up in this routine when error out */
+ 	lpfc_sli4_queue_unset(phba);
+-out_destroy_queue:
++out_free_iocblist:
+ 	lpfc_free_iocb_list(phba);
++out_destroy_queue:
+ 	lpfc_sli4_queue_destroy(phba);
+ out_stop_timers:
+ 	lpfc_stop_hba_timers(phba);
+diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
+index b5a765b73c761..a43b67299b084 100644
+--- a/drivers/scsi/megaraid/megaraid_sas.h
++++ b/drivers/scsi/megaraid/megaraid_sas.h
+@@ -2262,6 +2262,15 @@ enum MR_PERF_MODE {
+ 		 (mode) == MR_LATENCY_PERF_MODE ? "Latency" : \
+ 		 "Unknown")
+ 
++enum MEGASAS_LD_TARGET_ID_STATUS {
++	LD_TARGET_ID_INITIAL,
++	LD_TARGET_ID_ACTIVE,
++	LD_TARGET_ID_DELETED,
++};
++
++#define MEGASAS_TARGET_ID(sdev)						\
++	(((sdev->channel % 2) * MEGASAS_MAX_DEV_PER_CHANNEL) + sdev->id)
++
+ struct megasas_instance {
+ 
+ 	unsigned int *reply_map;
+@@ -2326,6 +2335,9 @@ struct megasas_instance {
+ 	struct megasas_pd_list          pd_list[MEGASAS_MAX_PD];
+ 	struct megasas_pd_list          local_pd_list[MEGASAS_MAX_PD];
+ 	u8 ld_ids[MEGASAS_MAX_LD_IDS];
++	u8 ld_tgtid_status[MEGASAS_MAX_LD_IDS];
++	u8 ld_ids_prev[MEGASAS_MAX_LD_IDS];
++	u8 ld_ids_from_raidmap[MEGASAS_MAX_LD_IDS];
+ 	s8 init_id;
+ 
+ 	u16 max_num_sge;
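+
+MEGASAS_TARGET_ID() folds a logical drive's (channel, id) pair back into the
+flat 0..MEGASAS_MAX_LD_IDS-1 space used to index the new ld_tgtid_status[]
+array: LDs sit on two virtual channels of MEGASAS_MAX_DEV_PER_CHANNEL devices
+each, so "(channel % 2) * per_channel + id" recovers the firmware target ID.
+A tiny sketch of the mapping, assuming 128 devices per channel as in the
+driver:
+
+	#include <stdio.h>
+
+	#define DEV_PER_CHANNEL 128	/* MEGASAS_MAX_DEV_PER_CHANNEL */
+
+	static unsigned int target_id(unsigned int channel, unsigned int id)
+	{
+		return (channel % 2) * DEV_PER_CHANNEL + id;
+	}
+
+	int main(void)
+	{
+		/* LD channels are 2 and 3: (2,5) -> 5, (3,5) -> 133 */
+		printf("%u %u\n", target_id(2, 5), target_id(3, 5));
+		return 0;
+	}
+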
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 4d4e9dbe5193f..3fecf4b2b0bd9 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -141,6 +141,8 @@ static int megasas_register_aen(struct megasas_instance *instance,
+ 				u32 seq_num, u32 class_locale_word);
+ static void megasas_get_pd_info(struct megasas_instance *instance,
+ 				struct scsi_device *sdev);
++static void
++megasas_set_ld_removed_by_fw(struct megasas_instance *instance);
+ 
+ /*
+  * PCI ID table for all supported controllers
+@@ -436,6 +438,12 @@ megasas_decode_evt(struct megasas_instance *instance)
+ 			(class_locale.members.locale),
+ 			format_class(class_locale.members.class),
+ 			evt_detail->description);
++
++	if (megasas_dbg_lvl & LD_PD_DEBUG)
++		dev_info(&instance->pdev->dev,
++			 "evt_detail.args.ld.target_id/index %d/%d\n",
++			 evt_detail->args.ld.target_id, evt_detail->args.ld.ld_index);
++
+ }
+ 
+ /*
+@@ -1779,6 +1787,7 @@ megasas_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+ {
+ 	struct megasas_instance *instance;
+ 	struct MR_PRIV_DEVICE *mr_device_priv_data;
++	u32 ld_tgt_id;
+ 
+ 	instance = (struct megasas_instance *)
+ 	    scmd->device->host->hostdata;
+@@ -1805,17 +1814,21 @@ megasas_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+ 		}
+ 	}
+ 
+-	if (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR) {
++	mr_device_priv_data = scmd->device->hostdata;
++	if (!mr_device_priv_data ||
++	    (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR)) {
+ 		scmd->result = DID_NO_CONNECT << 16;
+ 		scmd->scsi_done(scmd);
+ 		return 0;
+ 	}
+ 
+-	mr_device_priv_data = scmd->device->hostdata;
+-	if (!mr_device_priv_data) {
+-		scmd->result = DID_NO_CONNECT << 16;
+-		scmd->scsi_done(scmd);
+-		return 0;
++	if (MEGASAS_IS_LOGICAL(scmd->device)) {
++		ld_tgt_id = MEGASAS_TARGET_ID(scmd->device);
++		if (instance->ld_tgtid_status[ld_tgt_id] == LD_TARGET_ID_DELETED) {
++			scmd->result = DID_NO_CONNECT << 16;
++			scmd->scsi_done(scmd);
++			return 0;
++		}
+ 	}
+ 
+ 	if (atomic_read(&instance->adprecovery) != MEGASAS_HBA_OPERATIONAL)
+@@ -2095,7 +2108,7 @@ static int megasas_slave_configure(struct scsi_device *sdev)
+ 
+ static int megasas_slave_alloc(struct scsi_device *sdev)
+ {
+-	u16 pd_index = 0;
++	u16 pd_index = 0, ld_tgt_id;
+ 	struct megasas_instance *instance ;
+ 	struct MR_PRIV_DEVICE *mr_device_priv_data;
+ 
+@@ -2120,6 +2133,14 @@ scan_target:
+ 					GFP_KERNEL);
+ 	if (!mr_device_priv_data)
+ 		return -ENOMEM;
++
++	if (MEGASAS_IS_LOGICAL(sdev)) {
++		ld_tgt_id = MEGASAS_TARGET_ID(sdev);
++		instance->ld_tgtid_status[ld_tgt_id] = LD_TARGET_ID_ACTIVE;
++		if (megasas_dbg_lvl & LD_PD_DEBUG)
++			sdev_printk(KERN_INFO, sdev, "LD target ID %d created.\n", ld_tgt_id);
++	}
++
+ 	sdev->hostdata = mr_device_priv_data;
+ 
+ 	atomic_set(&mr_device_priv_data->r1_ldio_hint,
+@@ -2129,6 +2150,19 @@ scan_target:
+ 
+ static void megasas_slave_destroy(struct scsi_device *sdev)
+ {
++	u16 ld_tgt_id;
++	struct megasas_instance *instance;
++
++	instance = megasas_lookup_instance(sdev->host->host_no);
++
++	if (MEGASAS_IS_LOGICAL(sdev)) {
++		ld_tgt_id = MEGASAS_TARGET_ID(sdev);
++		instance->ld_tgtid_status[ld_tgt_id] = LD_TARGET_ID_DELETED;
++		if (megasas_dbg_lvl & LD_PD_DEBUG)
++			sdev_printk(KERN_INFO, sdev,
++				    "LD target ID %d removed from OS stack\n", ld_tgt_id);
++	}
++
+ 	kfree(sdev->hostdata);
+ 	sdev->hostdata = NULL;
+ }
+@@ -3525,6 +3559,22 @@ megasas_complete_abort(struct megasas_instance *instance,
+ 	}
+ }
+ 
++static void
++megasas_set_ld_removed_by_fw(struct megasas_instance *instance)
++{
++	uint i;
++
++	for (i = 0; (i < MEGASAS_MAX_LD_IDS); i++) {
++		if (instance->ld_ids_prev[i] != 0xff &&
++		    instance->ld_ids_from_raidmap[i] == 0xff) {
++			if (megasas_dbg_lvl & LD_PD_DEBUG)
++				dev_info(&instance->pdev->dev,
++					 "LD target ID %d removed from RAID map\n", i);
++			instance->ld_tgtid_status[i] = LD_TARGET_ID_DELETED;
++		}
++	}
++}
++
+ /**
+  * megasas_complete_cmd -	Completes a command
+  * @instance:			Adapter soft state
+@@ -3687,9 +3737,13 @@ megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd,
+ 				fusion->fast_path_io = 0;
+ 			}
+ 
++			if (instance->adapter_type >= INVADER_SERIES)
++				megasas_set_ld_removed_by_fw(instance);
++
+ 			megasas_sync_map_info(instance);
+ 			spin_unlock_irqrestore(instance->host->host_lock,
+ 					       flags);
++
+ 			break;
+ 		}
+ 		if (opcode == MR_DCMD_CTRL_EVENT_GET_INFO ||
+@@ -7545,11 +7599,16 @@ static int megasas_probe_one(struct pci_dev *pdev,
+ 	return 0;
+ 
+ fail_start_aen:
++	instance->unload = 1;
++	scsi_remove_host(instance->host);
+ fail_io_attach:
+ 	megasas_mgmt_info.count--;
+ 	megasas_mgmt_info.max_index--;
+ 	megasas_mgmt_info.instance[megasas_mgmt_info.max_index] = NULL;
+ 
++	if (instance->requestorId && !instance->skip_heartbeat_timer_del)
++		del_timer_sync(&instance->sriov_heartbeat_timer);
++
+ 	instance->instancet->disable_intr(instance);
+ 	megasas_destroy_irqs(instance);
+ 
+@@ -7557,8 +7616,16 @@ fail_io_attach:
+ 		megasas_release_fusion(instance);
+ 	else
+ 		megasas_release_mfi(instance);
++
+ 	if (instance->msix_vectors)
+ 		pci_free_irq_vectors(instance->pdev);
++	instance->msix_vectors = 0;
++
++	if (instance->fw_crash_state != UNAVAILABLE)
++		megasas_free_host_crash_buffer(instance);
++
++	if (instance->adapter_type != MFI_SERIES)
++		megasas_fusion_stop_watchdog(instance);
+ fail_init_mfi:
+ 	scsi_host_put(host);
+ fail_alloc_instance:
+@@ -8818,8 +8885,10 @@ megasas_aen_polling(struct work_struct *work)
+ 	union megasas_evt_class_locale class_locale;
+ 	int event_type = 0;
+ 	u32 seq_num;
++	u16 ld_target_id;
+ 	int error;
+ 	u8  dcmd_ret = DCMD_SUCCESS;
++	struct scsi_device *sdev1;
+ 
+ 	if (!instance) {
+ 		printk(KERN_ERR "invalid instance!\n");
+@@ -8842,12 +8911,23 @@ megasas_aen_polling(struct work_struct *work)
+ 			break;
+ 
+ 		case MR_EVT_LD_OFFLINE:
+-		case MR_EVT_CFG_CLEARED:
+ 		case MR_EVT_LD_DELETED:
++			ld_target_id = instance->evt_detail->args.ld.target_id;
++			sdev1 = scsi_device_lookup(instance->host,
++						   MEGASAS_MAX_PD_CHANNELS +
++						   (ld_target_id / MEGASAS_MAX_DEV_PER_CHANNEL),
++						   (ld_target_id - MEGASAS_MAX_DEV_PER_CHANNEL),
++						   0);
++			if (sdev1)
++				megasas_remove_scsi_device(sdev1);
++
++			event_type = SCAN_VD_CHANNEL;
++			break;
+ 		case MR_EVT_LD_CREATED:
+ 			event_type = SCAN_VD_CHANNEL;
+ 			break;
+ 
++		case MR_EVT_CFG_CLEARED:
+ 		case MR_EVT_CTRL_HOST_BUS_SCAN_REQUESTED:
+ 		case MR_EVT_FOREIGN_CFG_IMPORTED:
+ 		case MR_EVT_LD_STATE_CHANGE:
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fp.c b/drivers/scsi/megaraid/megaraid_sas_fp.c
+index b6c08d6200335..83f69c33b01a9 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fp.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fp.c
+@@ -349,6 +349,10 @@ u8 MR_ValidateMapInfo(struct megasas_instance *instance, u64 map_id)
+ 
+ 	num_lds = le16_to_cpu(drv_map->raidMap.ldCount);
+ 
++	memcpy(instance->ld_ids_prev,
++	       instance->ld_ids_from_raidmap,
++	       sizeof(instance->ld_ids_from_raidmap));
++	memset(instance->ld_ids_from_raidmap, 0xff, MEGASAS_MAX_LD_IDS);
+ 	/*Convert Raid capability values to CPU arch */
+ 	for (i = 0; (num_lds > 0) && (i < MAX_LOGICAL_DRIVES_EXT); i++) {
+ 		ld = MR_TargetIdToLdGet(i, drv_map);
+@@ -359,7 +363,7 @@ u8 MR_ValidateMapInfo(struct megasas_instance *instance, u64 map_id)
+ 
+ 		raid = MR_LdRaidGet(ld, drv_map);
+ 		le32_to_cpus((u32 *)&raid->capability);
+-
++		instance->ld_ids_from_raidmap[i] = i;
+ 		num_lds--;
+ 	}
+ 
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index cd94a0c81f835..142e60741094f 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -3745,6 +3745,7 @@ static void megasas_sync_irqs(unsigned long instance_addr)
+ 		if (irq_ctx->irq_poll_scheduled) {
+ 			irq_ctx->irq_poll_scheduled = false;
+ 			enable_irq(irq_ctx->os_irq);
++			complete_cmd_fusion(instance, irq_ctx->MSIxIndex, irq_ctx);
+ 		}
+ 	}
+ }
+@@ -3776,6 +3777,7 @@ int megasas_irqpoll(struct irq_poll *irqpoll, int budget)
+ 		irq_poll_complete(irqpoll);
+ 		irq_ctx->irq_poll_scheduled = false;
+ 		enable_irq(irq_ctx->os_irq);
++		complete_cmd_fusion(instance, irq_ctx->MSIxIndex, irq_ctx);
+ 	}
+ 
+ 	return num_entries;
+@@ -3792,6 +3794,7 @@ megasas_complete_cmd_dpc_fusion(unsigned long instance_addr)
+ {
+ 	struct megasas_instance *instance =
+ 		(struct megasas_instance *)instance_addr;
++	struct megasas_irq_context *irq_ctx = NULL;
+ 	u32 count, MSIxIndex;
+ 
+ 	count = instance->msix_vectors > 0 ? instance->msix_vectors : 1;
+@@ -3800,8 +3803,10 @@ megasas_complete_cmd_dpc_fusion(unsigned long instance_addr)
+ 	if (atomic_read(&instance->adprecovery) == MEGASAS_HW_CRITICAL_ERROR)
+ 		return;
+ 
+-	for (MSIxIndex = 0 ; MSIxIndex < count; MSIxIndex++)
+-		complete_cmd_fusion(instance, MSIxIndex, NULL);
++	for (MSIxIndex = 0 ; MSIxIndex < count; MSIxIndex++) {
++		irq_ctx = &instance->irq_context[MSIxIndex];
++		complete_cmd_fusion(instance, MSIxIndex, irq_ctx);
++	}
+ }
+ 
+ /**
+@@ -5272,6 +5277,7 @@ megasas_alloc_fusion_context(struct megasas_instance *instance)
+ 		if (!fusion->log_to_span) {
+ 			dev_err(&instance->pdev->dev, "Failed from %s %d\n",
+ 				__func__, __LINE__);
++			kfree(instance->ctrl_context);
+ 			return -ENOMEM;
+ 		}
+ 	}
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index a5f70f0e02871..b48f10f7cf310 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -3697,6 +3697,28 @@ _scsih_fw_event_cleanup_queue(struct MPT3SAS_ADAPTER *ioc)
+ 	ioc->fw_events_cleanup = 1;
+ 	while ((fw_event = dequeue_next_fw_event(ioc)) ||
+ 	     (fw_event = ioc->current_event)) {
++
++		/*
++		 * Don't call cancel_work_sync() for current_event
++		 * other than MPT3SAS_REMOVE_UNRESPONDING_DEVICES;
++		 * otherwise we may observe deadlock if current
++		 * hard reset issued as part of processing the current_event.
++		 *
++		 * Orginal logic of cleaning the current_event is added
++		 * for handling the back to back host reset issued by the user.
++		 * i.e. during back to back host reset, driver use to process
++		 * the two instances of MPT3SAS_REMOVE_UNRESPONDING_DEVICES
++		 * event back to back and this made the drives to unregister
++		 * the devices from SML.
++		 */
++
++		if (fw_event == ioc->current_event &&
++		    ioc->current_event->event !=
++		    MPT3SAS_REMOVE_UNRESPONDING_DEVICES) {
++			ioc->current_event = NULL;
++			continue;
++		}
++
+ 		/*
+ 		 * Wait on the fw_event to complete. If this returns 1, then
+ 		 * the event was never executed, and we need a put for the
+diff --git a/drivers/scsi/qedi/qedi.h b/drivers/scsi/qedi/qedi.h
+index c342defc3f522..ce199a7a16b80 100644
+--- a/drivers/scsi/qedi/qedi.h
++++ b/drivers/scsi/qedi/qedi.h
+@@ -284,6 +284,7 @@ struct qedi_ctx {
+ #define QEDI_IN_RECOVERY	5
+ #define QEDI_IN_OFFLINE		6
+ #define QEDI_IN_SHUTDOWN	7
++#define QEDI_BLOCK_IO		8
+ 
+ 	u8 mac[ETH_ALEN];
+ 	u32 src_ip[4];
+diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c
+index 440ddd2309f1d..4c87640e6a911 100644
+--- a/drivers/scsi/qedi/qedi_fw.c
++++ b/drivers/scsi/qedi/qedi_fw.c
+@@ -73,7 +73,6 @@ static void qedi_process_logout_resp(struct qedi_ctx *qedi,
+ 	spin_unlock(&qedi_conn->list_lock);
+ 
+ 	cmd->state = RESPONSE_RECEIVED;
+-	qedi_clear_task_idx(qedi, cmd->task_id);
+ 	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr, NULL, 0);
+ 
+ 	spin_unlock(&session->back_lock);
+@@ -138,7 +137,6 @@ static void qedi_process_text_resp(struct qedi_ctx *qedi,
+ 	spin_unlock(&qedi_conn->list_lock);
+ 
+ 	cmd->state = RESPONSE_RECEIVED;
+-	qedi_clear_task_idx(qedi, cmd->task_id);
+ 
+ 	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr,
+ 			     qedi_conn->gen_pdu.resp_buf,
+@@ -161,16 +159,9 @@ static void qedi_tmf_resp_work(struct work_struct *work)
+ 	set_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
+ 	resp_hdr_ptr =  (struct iscsi_tm_rsp *)qedi_cmd->tmf_resp_buf;
+ 
+-	iscsi_block_session(session->cls_session);
+ 	rval = qedi_cleanup_all_io(qedi, qedi_conn, qedi_cmd->task, true);
+-	if (rval) {
+-		qedi_clear_task_idx(qedi, qedi_cmd->task_id);
+-		iscsi_unblock_session(session->cls_session);
++	if (rval)
+ 		goto exit_tmf_resp;
+-	}
+-
+-	iscsi_unblock_session(session->cls_session);
+-	qedi_clear_task_idx(qedi, qedi_cmd->task_id);
+ 
+ 	spin_lock(&session->back_lock);
+ 	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr, NULL, 0);
+@@ -245,8 +236,6 @@ static void qedi_process_tmf_resp(struct qedi_ctx *qedi,
+ 		goto unblock_sess;
+ 	}
+ 
+-	qedi_clear_task_idx(qedi, qedi_cmd->task_id);
+-
+ 	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)resp_hdr_ptr, NULL, 0);
+ 	kfree(resp_hdr_ptr);
+ 
+@@ -314,7 +303,6 @@ static void qedi_process_login_resp(struct qedi_ctx *qedi,
+ 		  "Freeing tid=0x%x for cid=0x%x\n",
+ 		  cmd->task_id, qedi_conn->iscsi_conn_id);
+ 	cmd->state = RESPONSE_RECEIVED;
+-	qedi_clear_task_idx(qedi, cmd->task_id);
+ }
+ 
+ static void qedi_get_rq_bdq_buf(struct qedi_ctx *qedi,
+@@ -468,7 +456,6 @@ static int qedi_process_nopin_mesg(struct qedi_ctx *qedi,
+ 		}
+ 
+ 		spin_unlock(&qedi_conn->list_lock);
+-		qedi_clear_task_idx(qedi, cmd->task_id);
+ 	}
+ 
+ done:
+@@ -673,7 +660,6 @@ static void qedi_scsi_completion(struct qedi_ctx *qedi,
+ 	if (qedi_io_tracing)
+ 		qedi_trace_io(qedi, task, cmd->task_id, QEDI_IO_TRACE_RSP);
+ 
+-	qedi_clear_task_idx(qedi, cmd->task_id);
+ 	__iscsi_complete_pdu(conn, (struct iscsi_hdr *)hdr,
+ 			     conn->data, datalen);
+ error:
+@@ -730,7 +716,6 @@ static void qedi_process_nopin_local_cmpl(struct qedi_ctx *qedi,
+ 		  cqe->itid, cmd->task_id);
+ 
+ 	cmd->state = RESPONSE_RECEIVED;
+-	qedi_clear_task_idx(qedi, cmd->task_id);
+ 
+ 	spin_lock_bh(&session->back_lock);
+ 	__iscsi_put_task(task);
+@@ -748,7 +733,6 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+ 	itt_t protoitt = 0;
+ 	int found = 0;
+ 	struct qedi_cmd *qedi_cmd = NULL;
+-	u32 rtid = 0;
+ 	u32 iscsi_cid;
+ 	struct qedi_conn *qedi_conn;
+ 	struct qedi_cmd *dbg_cmd;
+@@ -779,7 +763,6 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+ 			found = 1;
+ 			mtask = qedi_cmd->task;
+ 			tmf_hdr = (struct iscsi_tm *)mtask->hdr;
+-			rtid = work->rtid;
+ 
+ 			list_del_init(&work->list);
+ 			kfree(work);
+@@ -821,8 +804,6 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+ 			if (qedi_cmd->state == CLEANUP_WAIT_FAILED)
+ 				qedi_cmd->state = CLEANUP_RECV;
+ 
+-			qedi_clear_task_idx(qedi_conn->qedi, rtid);
+-
+ 			spin_lock(&qedi_conn->list_lock);
+ 			if (likely(dbg_cmd->io_cmd_in_list)) {
+ 				dbg_cmd->io_cmd_in_list = false;
+@@ -856,7 +837,6 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
+ 		QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_TID,
+ 			  "Freeing tid=0x%x for cid=0x%x\n",
+ 			  cqe->itid, qedi_conn->iscsi_conn_id);
+-		qedi_clear_task_idx(qedi_conn->qedi, cqe->itid);
+ 
+ 	} else {
+ 		qedi_get_proto_itt(qedi, cqe->itid, &ptmp_itt);
+@@ -1453,7 +1433,7 @@ abort_ret:
+ 
+ ldel_exit:
+ 	spin_lock_bh(&qedi_conn->tmf_work_lock);
+-	if (!qedi_cmd->list_tmf_work) {
++	if (qedi_cmd->list_tmf_work) {
+ 		list_del_init(&list_work->list);
+ 		qedi_cmd->list_tmf_work = NULL;
+ 		kfree(list_work);
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index 087c7ff28cd52..5f7e62f19d83a 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -330,12 +330,22 @@ free_conn:
+ 
+ void qedi_mark_device_missing(struct iscsi_cls_session *cls_session)
+ {
+-	iscsi_block_session(cls_session);
++	struct iscsi_session *session = cls_session->dd_data;
++	struct qedi_conn *qedi_conn = session->leadconn->dd_data;
++
++	spin_lock_bh(&session->frwd_lock);
++	set_bit(QEDI_BLOCK_IO, &qedi_conn->qedi->flags);
++	spin_unlock_bh(&session->frwd_lock);
+ }
+ 
+ void qedi_mark_device_available(struct iscsi_cls_session *cls_session)
+ {
+-	iscsi_unblock_session(cls_session);
++	struct iscsi_session *session = cls_session->dd_data;
++	struct qedi_conn *qedi_conn = session->leadconn->dd_data;
++
++	spin_lock_bh(&session->frwd_lock);
++	clear_bit(QEDI_BLOCK_IO, &qedi_conn->qedi->flags);
++	spin_unlock_bh(&session->frwd_lock);
+ }
+ 
+ static int qedi_bind_conn_to_iscsi_cid(struct qedi_ctx *qedi,
+@@ -783,7 +793,6 @@ static int qedi_mtask_xmit(struct iscsi_conn *conn, struct iscsi_task *task)
+ 	}
+ 
+ 	cmd->conn = conn->dd_data;
+-	cmd->scsi_cmd = NULL;
+ 	return qedi_iscsi_send_generic_request(task);
+ }
+ 
+@@ -794,9 +803,16 @@ static int qedi_task_xmit(struct iscsi_task *task)
+ 	struct qedi_cmd *cmd = task->dd_data;
+ 	struct scsi_cmnd *sc = task->sc;
+ 
++	/* Clear these now so qedi_cleanup_task() can tell the command never made it out */
++	cmd->scsi_cmd = NULL;
++	cmd->task_id = U16_MAX;
++
+ 	if (test_bit(QEDI_IN_SHUTDOWN, &qedi_conn->qedi->flags))
+ 		return -ENODEV;
+ 
++	if (test_bit(QEDI_BLOCK_IO, &qedi_conn->qedi->flags))
++		return -EACCES;
++
+ 	cmd->state = 0;
+ 	cmd->task = NULL;
+ 	cmd->use_slowpath = false;
+@@ -1394,13 +1410,24 @@ static umode_t qedi_attr_is_visible(int param_type, int param)
+ 
+ static void qedi_cleanup_task(struct iscsi_task *task)
+ {
+-	if (!task->sc || task->state == ISCSI_TASK_PENDING) {
++	struct qedi_cmd *cmd;
++
++	if (task->state == ISCSI_TASK_PENDING) {
+ 		QEDI_INFO(NULL, QEDI_LOG_IO, "Returning ref_cnt=%d\n",
+ 			  refcount_read(&task->refcount));
+ 		return;
+ 	}
+ 
+-	qedi_iscsi_unmap_sg_list(task->dd_data);
++	if (task->sc)
++		qedi_iscsi_unmap_sg_list(task->dd_data);
++
++	cmd = task->dd_data;
++	if (cmd->task_id != U16_MAX)
++		qedi_clear_task_idx(iscsi_host_priv(task->conn->session->host),
++				    cmd->task_id);
++
++	cmd->task_id = U16_MAX;
++	cmd->scsi_cmd = NULL;
+ }
+ 
+ struct iscsi_transport qedi_iscsi_transport = {
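+
+With task_id initialized to U16_MAX in qedi_task_xmit() and released only in
+qedi_cleanup_task(), the scattered per-completion qedi_clear_task_idx() calls
+removed from qedi_fw.c collapse into one spot, and a task that never had an
+index allocated is never "cleared". A generic sketch of the sentinel pattern
+(free_task_idx() is a hypothetical stand-in):
+
+	#include <stdint.h>
+
+	#define NO_TASK_ID UINT16_MAX	/* sentinel: no index allocated */
+
+	struct cmd {
+		uint16_t task_id;
+	};
+
+	static void free_task_idx(uint16_t id)	/* hypothetical stand-in */
+	{
+		(void)id;	/* would return the index to the task pool */
+	}
+
+	static void cleanup(struct cmd *c)
+	{
+		if (c->task_id != NO_TASK_ID)	/* release exactly once... */
+			free_task_idx(c->task_id);
+		c->task_id = NO_TASK_ID;	/* ...and stay idempotent */
+	}
+
+	int main(void)
+	{
+		struct cmd c = { .task_id = NO_TASK_ID };
+
+		cleanup(&c);	/* safe even though nothing was allocated */
+		return 0;
+	}
+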
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 2455d1448a7e3..edf9154327048 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -640,7 +640,7 @@ static struct qedi_ctx *qedi_host_alloc(struct pci_dev *pdev)
+ 		goto exit_setup_shost;
+ 	}
+ 
+-	shost->max_id = QEDI_MAX_ISCSI_CONNS_PER_HBA;
++	shost->max_id = QEDI_MAX_ISCSI_CONNS_PER_HBA - 1;
+ 	shost->max_channel = 0;
+ 	shost->max_lun = ~0;
+ 	shost->max_cmd_len = 16;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 269bfb8f91655..89560562d9f69 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -2094,9 +2094,7 @@ EXPORT_SYMBOL_GPL(scsi_mode_select);
+  *	@sshdr: place to put sense data (or NULL if no sense to be collected).
+  *		must be SCSI_SENSE_BUFFERSIZE big.
+  *
+- *	Returns zero if unsuccessful, or the header offset (either 4
+- *	or 8 depending on whether a six or ten byte command was
+- *	issued) if successful.
++ *	Returns zero if successful, or a negative error number on failure
+  */
+ int
+ scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage,
+@@ -2143,6 +2141,8 @@ scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage,
+ 
+ 	result = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buffer, len,
+ 				  sshdr, timeout, retries, NULL);
++	if (result < 0)
++		return result;
+ 
+ 	/* This code looks awful: what it's doing is making sure an
+ 	 * ILLEGAL REQUEST sense return identifies the actual command
+@@ -2187,13 +2187,15 @@ scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage,
+ 			data->block_descriptor_length = buffer[3];
+ 		}
+ 		data->header_length = header_length;
++		result = 0;
+ 	} else if ((status_byte(result) == CHECK_CONDITION) &&
+ 		   scsi_sense_valid(sshdr) &&
+ 		   sshdr->sense_key == UNIT_ATTENTION && retry_count) {
+ 		retry_count--;
+ 		goto retry;
+ 	}
+-
++	if (result > 0)
++		result = -EIO;
+ 	return result;
+ }
+ EXPORT_SYMBOL(scsi_mode_sense);
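+
+This changes scsi_mode_sense()'s contract from "header offset, or zero on
+failure" to the usual "0 on success, negative errno on failure", which is why
+every caller updated below (scsi_transport_sas, sd, sr) swaps its
+scsi_status_is_good(res) test for a sign check. The caller pattern after the
+change, as a short sketch:
+
+	error = scsi_mode_sense(sdev, dbd, modepage, buffer, len,
+				timeout, retries, &data, &sshdr);
+	if (error)	/* 0 or -errno now; no status decoding needed */
+		goto out;
+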
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 6ce1cc992d1d0..b07105ae7c917 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2459,6 +2459,18 @@ int iscsi_destroy_conn(struct iscsi_cls_conn *conn)
+ }
+ EXPORT_SYMBOL_GPL(iscsi_destroy_conn);
+ 
++void iscsi_put_conn(struct iscsi_cls_conn *conn)
++{
++	put_device(&conn->dev);
++}
++EXPORT_SYMBOL_GPL(iscsi_put_conn);
++
++void iscsi_get_conn(struct iscsi_cls_conn *conn)
++{
++	get_device(&conn->dev);
++}
++EXPORT_SYMBOL_GPL(iscsi_get_conn);
++
+ /*
+  * iscsi interface functions
+  */
+diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
+index c9abed8429c9a..4a96fb05731d2 100644
+--- a/drivers/scsi/scsi_transport_sas.c
++++ b/drivers/scsi/scsi_transport_sas.c
+@@ -1229,16 +1229,15 @@ int sas_read_port_mode_page(struct scsi_device *sdev)
+ 	char *buffer = kzalloc(BUF_SIZE, GFP_KERNEL), *msdata;
+ 	struct sas_end_device *rdev = sas_sdev_to_rdev(sdev);
+ 	struct scsi_mode_data mode_data;
+-	int res, error;
++	int error;
+ 
+ 	if (!buffer)
+ 		return -ENOMEM;
+ 
+-	res = scsi_mode_sense(sdev, 1, 0x19, buffer, BUF_SIZE, 30*HZ, 3,
+-			      &mode_data, NULL);
++	error = scsi_mode_sense(sdev, 1, 0x19, buffer, BUF_SIZE, 30*HZ, 3,
++				&mode_data, NULL);
+ 
+-	error = -EINVAL;
+-	if (!scsi_status_is_good(res))
++	if (error)
+ 		goto out;
+ 
+ 	msdata = buffer +  mode_data.header_length +
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index a2c3d9ad9ee49..4df54d4b3c657 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -2684,18 +2684,18 @@ sd_read_write_protect_flag(struct scsi_disk *sdkp, unsigned char *buffer)
+ 		 * 5: Illegal Request, Sense Code 24: Invalid field in
+ 		 * CDB.
+ 		 */
+-		if (!scsi_status_is_good(res))
++		if (res < 0)
+ 			res = sd_do_mode_sense(sdkp, 0, 0, buffer, 4, &data, NULL);
+ 
+ 		/*
+ 		 * Third attempt: ask 255 bytes, as we did earlier.
+ 		 */
+-		if (!scsi_status_is_good(res))
++		if (res < 0)
+ 			res = sd_do_mode_sense(sdkp, 0, 0x3F, buffer, 255,
+ 					       &data, NULL);
+ 	}
+ 
+-	if (!scsi_status_is_good(res)) {
++	if (res < 0) {
+ 		sd_first_printk(KERN_WARNING, sdkp,
+ 			  "Test WP failed, assume Write Enabled\n");
+ 	} else {
+@@ -2756,7 +2756,7 @@ sd_read_cache_type(struct scsi_disk *sdkp, unsigned char *buffer)
+ 	res = sd_do_mode_sense(sdkp, dbd, modepage, buffer, first_len,
+ 			&data, &sshdr);
+ 
+-	if (!scsi_status_is_good(res))
++	if (res < 0)
+ 		goto bad_sense;
+ 
+ 	if (!data.header_length) {
+@@ -2788,7 +2788,7 @@ sd_read_cache_type(struct scsi_disk *sdkp, unsigned char *buffer)
+ 		res = sd_do_mode_sense(sdkp, dbd, modepage, buffer, len,
+ 				&data, &sshdr);
+ 
+-	if (scsi_status_is_good(res)) {
++	if (!res) {
+ 		int offset = data.header_length + data.block_descriptor_length;
+ 
+ 		while (offset < len) {
+@@ -2906,7 +2906,7 @@ static void sd_read_app_tag_own(struct scsi_disk *sdkp, unsigned char *buffer)
+ 	res = scsi_mode_sense(sdp, 1, 0x0a, buffer, 36, SD_TIMEOUT,
+ 			      sdkp->max_retries, &data, &sshdr);
+ 
+-	if (!scsi_status_is_good(res) || !data.header_length ||
++	if (res < 0 || !data.header_length ||
+ 	    data.length < 6) {
+ 		sd_first_printk(KERN_WARNING, sdkp,
+ 			  "getting Control mode page failed, assume no ATO\n");
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index 7815ed642d434..1a94c7b1de2df 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -913,7 +913,7 @@ static void get_capabilities(struct scsi_cd *cd)
+ 	rc = scsi_mode_sense(cd->device, 0, 0x2a, buffer, ms_len,
+ 			     SR_TIMEOUT, 3, &data, NULL);
+ 
+-	if (!scsi_status_is_good(rc) || data.length > ms_len ||
++	if (rc < 0 || data.length > ms_len ||
+ 	    data.header_length + data.block_descriptor_length > data.length) {
+ 		/* failed, drive doesn't have capabilities mode page */
+ 		cd->cdi.speed = 1;
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index e6718a74e5dae..b2e28197a0861 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1009,17 +1009,40 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb,
+ 	struct storvsc_scan_work *wrk;
+ 	void (*process_err_fn)(struct work_struct *work);
+ 	struct hv_host_device *host_dev = shost_priv(host);
+-	bool do_work = false;
+ 
+-	switch (SRB_STATUS(vm_srb->srb_status)) {
+-	case SRB_STATUS_ERROR:
++	/*
++	 * In some situations, Hyper-V sets multiple bits in the
++	 * srb_status, such as ABORTED and ERROR. So process them
++	 * individually, with the most specific bits first.
++	 */
++
++	if (vm_srb->srb_status & SRB_STATUS_INVALID_LUN) {
++		set_host_byte(scmnd, DID_NO_CONNECT);
++		process_err_fn = storvsc_remove_lun;
++		goto do_work;
++	}
++
++	if (vm_srb->srb_status & SRB_STATUS_ABORTED) {
++		if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID &&
++		    /* Capacity data has changed */
++		    (asc == 0x2a) && (ascq == 0x9)) {
++			process_err_fn = storvsc_device_scan;
++			/*
++			 * Retry the I/O that triggered this.
++			 */
++			set_host_byte(scmnd, DID_REQUEUE);
++			goto do_work;
++		}
++	}
++
++	if (vm_srb->srb_status & SRB_STATUS_ERROR) {
+ 		/*
+ 		 * Let upper layer deal with error when
+ 		 * sense message is present.
+ 		 */
+-
+ 		if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID)
+-			break;
++			return;
++
+ 		/*
+ 		 * If there is an error; offline the device since all
+ 		 * error recovery strategies would have already been
+@@ -1032,37 +1055,19 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb,
+ 			set_host_byte(scmnd, DID_PASSTHROUGH);
+ 			break;
+ 		/*
+-		 * On Some Windows hosts TEST_UNIT_READY command can return
+-		 * SRB_STATUS_ERROR, let the upper level code deal with it
+-		 * based on the sense information.
++		 * On some Hyper-V hosts the TEST_UNIT_READY command can
++		 * return SRB_STATUS_ERROR. Let the upper level code
++		 * deal with it based on the sense information.
+ 		 */
+ 		case TEST_UNIT_READY:
+ 			break;
+ 		default:
+ 			set_host_byte(scmnd, DID_ERROR);
+ 		}
+-		break;
+-	case SRB_STATUS_INVALID_LUN:
+-		set_host_byte(scmnd, DID_NO_CONNECT);
+-		do_work = true;
+-		process_err_fn = storvsc_remove_lun;
+-		break;
+-	case SRB_STATUS_ABORTED:
+-		if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID &&
+-		    (asc == 0x2a) && (ascq == 0x9)) {
+-			do_work = true;
+-			process_err_fn = storvsc_device_scan;
+-			/*
+-			 * Retry the I/O that triggered this.
+-			 */
+-			set_host_byte(scmnd, DID_REQUEUE);
+-		}
+-		break;
+ 	}
++	return;
+ 
+-	if (!do_work)
+-		return;
+-
++do_work:
+ 	/*
+ 	 * We need to schedule work to process this error; schedule it.
+ 	 */
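[Editor's note: the storvsc rework above is needed because Hyper-V can set several SRB status bits at once (e.g. ABORTED together with ERROR), which a switch on the masked value cannot express. A standalone C sketch of the ordered bit-test dispatch; the SRB_STATUS_* values here are illustrative local defines, not taken from the driver headers.]

#include <stdio.h>

#define SRB_STATUS_ABORTED	0x02
#define SRB_STATUS_ERROR	0x04
#define SRB_STATUS_INVALID_LUN	0x20

static void classify(unsigned int srb_status)
{
	/* Most specific condition first, mirroring the patched flow. */
	if (srb_status & SRB_STATUS_INVALID_LUN) {
		puts("no connect: schedule LUN removal");
		return;
	}
	if (srb_status & SRB_STATUS_ABORTED) {
		puts("aborted: rescan and requeue the I/O");
		return;
	}
	if (srb_status & SRB_STATUS_ERROR)
		puts("error: defer to sense data or offline the device");
}

int main(void)
{
	/* Both bits set: a switch on the whole value matches neither case. */
	classify(SRB_STATUS_ABORTED | SRB_STATUS_ERROR);
	return 0;
}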
+diff --git a/drivers/soc/mediatek/mtk-pm-domains.c b/drivers/soc/mediatek/mtk-pm-domains.c
+index 0af00efa0ef83..b762bc40f56bd 100644
+--- a/drivers/soc/mediatek/mtk-pm-domains.c
++++ b/drivers/soc/mediatek/mtk-pm-domains.c
+@@ -211,7 +211,7 @@ static int scpsys_power_on(struct generic_pm_domain *genpd)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = clk_bulk_enable(pd->num_clks, pd->clks);
++	ret = clk_bulk_prepare_enable(pd->num_clks, pd->clks);
+ 	if (ret)
+ 		goto err_reg;
+ 
+@@ -229,7 +229,7 @@ static int scpsys_power_on(struct generic_pm_domain *genpd)
+ 	regmap_clear_bits(scpsys->base, pd->data->ctl_offs, PWR_ISO_BIT);
+ 	regmap_set_bits(scpsys->base, pd->data->ctl_offs, PWR_RST_B_BIT);
+ 
+-	ret = clk_bulk_enable(pd->num_subsys_clks, pd->subsys_clks);
++	ret = clk_bulk_prepare_enable(pd->num_subsys_clks, pd->subsys_clks);
+ 	if (ret)
+ 		goto err_pwr_ack;
+ 
+@@ -246,9 +246,9 @@ static int scpsys_power_on(struct generic_pm_domain *genpd)
+ err_disable_sram:
+ 	scpsys_sram_disable(pd);
+ err_disable_subsys_clks:
+-	clk_bulk_disable(pd->num_subsys_clks, pd->subsys_clks);
++	clk_bulk_disable_unprepare(pd->num_subsys_clks, pd->subsys_clks);
+ err_pwr_ack:
+-	clk_bulk_disable(pd->num_clks, pd->clks);
++	clk_bulk_disable_unprepare(pd->num_clks, pd->clks);
+ err_reg:
+ 	scpsys_regulator_disable(pd->supply);
+ 	return ret;
+@@ -269,7 +269,7 @@ static int scpsys_power_off(struct generic_pm_domain *genpd)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	clk_bulk_disable(pd->num_subsys_clks, pd->subsys_clks);
++	clk_bulk_disable_unprepare(pd->num_subsys_clks, pd->subsys_clks);
+ 
+ 	/* subsys power off */
+ 	regmap_clear_bits(scpsys->base, pd->data->ctl_offs, PWR_RST_B_BIT);
+@@ -284,7 +284,7 @@ static int scpsys_power_off(struct generic_pm_domain *genpd)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	clk_bulk_disable(pd->num_clks, pd->clks);
++	clk_bulk_disable_unprepare(pd->num_clks, pd->clks);
+ 
+ 	scpsys_regulator_disable(pd->supply);
+ 
+@@ -297,6 +297,7 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
+ 	const struct scpsys_domain_data *domain_data;
+ 	struct scpsys_domain *pd;
+ 	struct device_node *root_node = scpsys->dev->of_node;
++	struct device_node *smi_node;
+ 	struct property *prop;
+ 	const char *clk_name;
+ 	int i, ret, num_clks;
+@@ -352,9 +353,13 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
+ 	if (IS_ERR(pd->infracfg))
+ 		return ERR_CAST(pd->infracfg);
+ 
+-	pd->smi = syscon_regmap_lookup_by_phandle_optional(node, "mediatek,smi");
+-	if (IS_ERR(pd->smi))
+-		return ERR_CAST(pd->smi);
++	smi_node = of_parse_phandle(node, "mediatek,smi", 0);
++	if (smi_node) {
++		pd->smi = device_node_to_regmap(smi_node);
++		of_node_put(smi_node);
++		if (IS_ERR(pd->smi))
++			return ERR_CAST(pd->smi);
++	}
+ 
+ 	num_clks = of_clk_get_parent_count(node);
+ 	if (num_clks > 0) {
+@@ -405,14 +410,6 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
+ 		pd->subsys_clks[i].clk = clk;
+ 	}
+ 
+-	ret = clk_bulk_prepare(pd->num_clks, pd->clks);
+-	if (ret)
+-		goto err_put_subsys_clocks;
+-
+-	ret = clk_bulk_prepare(pd->num_subsys_clks, pd->subsys_clks);
+-	if (ret)
+-		goto err_unprepare_clocks;
+-
+ 	/*
+ 	 * Initially turn on all domains to make the domains usable
+ 	 * with !CONFIG_PM and to get the hardware in sync with the
+@@ -427,7 +424,7 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
+ 		ret = scpsys_power_on(&pd->genpd);
+ 		if (ret < 0) {
+ 			dev_err(scpsys->dev, "%pOF: failed to power on domain: %d\n", node, ret);
+-			goto err_unprepare_clocks;
++			goto err_put_subsys_clocks;
+ 		}
+ 	}
+ 
+@@ -435,7 +432,7 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
+ 		ret = -EINVAL;
+ 		dev_err(scpsys->dev,
+ 			"power domain with id %d already exists, check your device-tree\n", id);
+-		goto err_unprepare_subsys_clocks;
++		goto err_put_subsys_clocks;
+ 	}
+ 
+ 	if (!pd->data->name)
+@@ -455,10 +452,6 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
+ 
+ 	return scpsys->pd_data.domains[id];
+ 
+-err_unprepare_subsys_clocks:
+-	clk_bulk_unprepare(pd->num_subsys_clks, pd->subsys_clks);
+-err_unprepare_clocks:
+-	clk_bulk_unprepare(pd->num_clks, pd->clks);
+ err_put_subsys_clocks:
+ 	clk_bulk_put(pd->num_subsys_clks, pd->subsys_clks);
+ err_put_clocks:
+@@ -537,10 +530,7 @@ static void scpsys_remove_one_domain(struct scpsys_domain *pd)
+ 			"failed to remove domain '%s' : %d - state may be inconsistent\n",
+ 			pd->genpd.name, ret);
+ 
+-	clk_bulk_unprepare(pd->num_clks, pd->clks);
+ 	clk_bulk_put(pd->num_clks, pd->clks);
+-
+-	clk_bulk_unprepare(pd->num_subsys_clks, pd->subsys_clks);
+ 	clk_bulk_put(pd->num_subsys_clks, pd->subsys_clks);
+ }
+ 
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index a9e0aa72654dd..100d904bf7007 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -821,26 +821,6 @@ static void sdw_modify_slave_status(struct sdw_slave *slave,
+ 	mutex_unlock(&bus->bus_lock);
+ }
+ 
+-static enum sdw_clk_stop_mode sdw_get_clk_stop_mode(struct sdw_slave *slave)
+-{
+-	enum sdw_clk_stop_mode mode;
+-
+-	/*
+-	 * Query for clock stop mode if Slave implements
+-	 * ops->get_clk_stop_mode, else read from property.
+-	 */
+-	if (slave->ops && slave->ops->get_clk_stop_mode) {
+-		mode = slave->ops->get_clk_stop_mode(slave);
+-	} else {
+-		if (slave->prop.clk_stop_mode1)
+-			mode = SDW_CLK_STOP_MODE1;
+-		else
+-			mode = SDW_CLK_STOP_MODE0;
+-	}
+-
+-	return mode;
+-}
+-
+ static int sdw_slave_clk_stop_callback(struct sdw_slave *slave,
+ 				       enum sdw_clk_stop_mode mode,
+ 				       enum sdw_clk_stop_type type)
+@@ -849,11 +829,8 @@ static int sdw_slave_clk_stop_callback(struct sdw_slave *slave,
+ 
+ 	if (slave->ops && slave->ops->clk_stop) {
+ 		ret = slave->ops->clk_stop(slave, mode, type);
+-		if (ret < 0) {
+-			dev_err(&slave->dev,
+-				"Clk Stop type =%d failed: %d\n", type, ret);
++		if (ret < 0)
+ 			return ret;
+-		}
+ 	}
+ 
+ 	return 0;
+@@ -880,7 +857,8 @@ static int sdw_slave_clk_stop_prepare(struct sdw_slave *slave,
+ 	} else {
+ 		ret = sdw_read_no_pm(slave, SDW_SCP_SYSTEMCTRL);
+ 		if (ret < 0) {
+-			dev_err(&slave->dev, "SDW_SCP_SYSTEMCTRL read failed:%d\n", ret);
++			if (ret != -ENODATA)
++				dev_err(&slave->dev, "SDW_SCP_SYSTEMCTRL read failed:%d\n", ret);
+ 			return ret;
+ 		}
+ 		val = ret;
+@@ -889,9 +867,8 @@ static int sdw_slave_clk_stop_prepare(struct sdw_slave *slave,
+ 
+ 	ret = sdw_write_no_pm(slave, SDW_SCP_SYSTEMCTRL, val);
+ 
+-	if (ret < 0)
+-		dev_err(&slave->dev,
+-			"Clock Stop prepare failed for slave: %d", ret);
++	if (ret < 0 && ret != -ENODATA)
++		dev_err(&slave->dev, "SDW_SCP_SYSTEMCTRL write failed:%d\n", ret);
+ 
+ 	return ret;
+ }
+@@ -933,7 +910,6 @@ static int sdw_bus_wait_for_clk_prep_deprep(struct sdw_bus *bus, u16 dev_num)
+  */
+ int sdw_bus_prep_clk_stop(struct sdw_bus *bus)
+ {
+-	enum sdw_clk_stop_mode slave_mode;
+ 	bool simple_clk_stop = true;
+ 	struct sdw_slave *slave;
+ 	bool is_slave = false;
+@@ -943,6 +919,9 @@ int sdw_bus_prep_clk_stop(struct sdw_bus *bus)
+ 	 * In order to save on transition time, prepare
+ 	 * each Slave and then wait for all Slave(s) to be
+ 	 * prepared for clock stop.
++	 * If one of the Slave devices has lost sync and
++	 * replies with Command Ignored/-ENODATA, we continue
++	 * the loop.
+ 	 */
+ 	list_for_each_entry(slave, &bus->slaves, node) {
+ 		if (!slave->dev_num)
+@@ -955,36 +934,45 @@ int sdw_bus_prep_clk_stop(struct sdw_bus *bus)
+ 		/* Identify if Slave(s) are available on Bus */
+ 		is_slave = true;
+ 
+-		slave_mode = sdw_get_clk_stop_mode(slave);
+-		slave->curr_clk_stop_mode = slave_mode;
+-
+-		ret = sdw_slave_clk_stop_callback(slave, slave_mode,
++		ret = sdw_slave_clk_stop_callback(slave,
++						  SDW_CLK_STOP_MODE0,
+ 						  SDW_CLK_PRE_PREPARE);
+-		if (ret < 0) {
+-			dev_err(&slave->dev,
+-				"pre-prepare failed:%d", ret);
+-			return ret;
+-		}
+-
+-		ret = sdw_slave_clk_stop_prepare(slave,
+-						 slave_mode, true);
+-		if (ret < 0) {
+-			dev_err(&slave->dev,
+-				"pre-prepare failed:%d", ret);
++		if (ret < 0 && ret != -ENODATA) {
++			dev_err(&slave->dev, "clock stop pre-prepare cb failed:%d\n", ret);
+ 			return ret;
+ 		}
+ 
+-		if (slave_mode == SDW_CLK_STOP_MODE1)
++		/* Only prepare a Slave device if needed */
++		if (!slave->prop.simple_clk_stop_capable) {
+ 			simple_clk_stop = false;
++
++			ret = sdw_slave_clk_stop_prepare(slave,
++							 SDW_CLK_STOP_MODE0,
++							 true);
++			if (ret < 0 && ret != -ENODATA) {
++				dev_err(&slave->dev, "clock stop prepare failed:%d\n", ret);
++				return ret;
++			}
++		}
+ 	}
+ 
+ 	/* Skip remaining clock stop preparation if no Slave is attached */
+ 	if (!is_slave)
+-		return ret;
++		return 0;
+ 
++	/*
++	 * Don't wait for all Slaves to be ready if they follow the simple
++	 * state machine
++	 */
+ 	if (!simple_clk_stop) {
+ 		ret = sdw_bus_wait_for_clk_prep_deprep(bus,
+ 						       SDW_BROADCAST_DEV_NUM);
++		/*
++		 * If there are no Slave devices present and the reply is
++		 * Command_Ignored/-ENODATA, we don't need to continue with the
++		 * flow and can just return here. The error code is not modified
++		 * and its handling is left as an exercise for the caller.
++		 */
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+@@ -998,21 +986,17 @@ int sdw_bus_prep_clk_stop(struct sdw_bus *bus)
+ 		    slave->status != SDW_SLAVE_ALERT)
+ 			continue;
+ 
+-		slave_mode = slave->curr_clk_stop_mode;
++		ret = sdw_slave_clk_stop_callback(slave,
++						  SDW_CLK_STOP_MODE0,
++						  SDW_CLK_POST_PREPARE);
+ 
+-		if (slave_mode == SDW_CLK_STOP_MODE1) {
+-			ret = sdw_slave_clk_stop_callback(slave,
+-							  slave_mode,
+-							  SDW_CLK_POST_PREPARE);
+-
+-			if (ret < 0) {
+-				dev_err(&slave->dev,
+-					"post-prepare failed:%d", ret);
+-			}
++		if (ret < 0 && ret != -ENODATA) {
++			dev_err(&slave->dev, "clock stop post-prepare cb failed:%d\n", ret);
++			return ret;
+ 		}
+ 	}
+ 
+-	return ret;
++	return 0;
+ }
+ EXPORT_SYMBOL(sdw_bus_prep_clk_stop);
+ 
+@@ -1035,12 +1019,8 @@ int sdw_bus_clk_stop(struct sdw_bus *bus)
+ 	ret = sdw_bwrite_no_pm(bus, SDW_BROADCAST_DEV_NUM,
+ 			       SDW_SCP_CTRL, SDW_SCP_CTRL_CLK_STP_NOW);
+ 	if (ret < 0) {
+-		if (ret == -ENODATA)
+-			dev_dbg(bus->dev,
+-				"ClockStopNow Broadcast msg ignored %d", ret);
+-		else
+-			dev_err(bus->dev,
+-				"ClockStopNow Broadcast msg failed %d", ret);
++		if (ret != -ENODATA)
++			dev_err(bus->dev, "ClockStopNow Broadcast msg failed %d\n", ret);
+ 		return ret;
+ 	}
+ 
+@@ -1059,7 +1039,6 @@ EXPORT_SYMBOL(sdw_bus_clk_stop);
+  */
+ int sdw_bus_exit_clk_stop(struct sdw_bus *bus)
+ {
+-	enum sdw_clk_stop_mode mode;
+ 	bool simple_clk_stop = true;
+ 	struct sdw_slave *slave;
+ 	bool is_slave = false;
+@@ -1081,33 +1060,36 @@ int sdw_bus_exit_clk_stop(struct sdw_bus *bus)
+ 		/* Identify if Slave(s) are available on Bus */
+ 		is_slave = true;
+ 
+-		mode = slave->curr_clk_stop_mode;
+-
+-		if (mode == SDW_CLK_STOP_MODE1) {
+-			simple_clk_stop = false;
+-			continue;
+-		}
+-
+-		ret = sdw_slave_clk_stop_callback(slave, mode,
++		ret = sdw_slave_clk_stop_callback(slave, SDW_CLK_STOP_MODE0,
+ 						  SDW_CLK_PRE_DEPREPARE);
+ 		if (ret < 0)
+-			dev_warn(&slave->dev,
+-				 "clk stop deprep failed:%d", ret);
++			dev_warn(&slave->dev, "clock stop pre-deprepare cb failed:%d\n", ret);
+ 
+-		ret = sdw_slave_clk_stop_prepare(slave, mode,
+-						 false);
++		/* Only de-prepare a Slave device if needed */
++		if (!slave->prop.simple_clk_stop_capable) {
++			simple_clk_stop = false;
+ 
+-		if (ret < 0)
+-			dev_warn(&slave->dev,
+-				 "clk stop deprep failed:%d", ret);
++			ret = sdw_slave_clk_stop_prepare(slave, SDW_CLK_STOP_MODE0,
++							 false);
++
++			if (ret < 0)
++				dev_warn(&slave->dev, "clock stop deprepare failed:%d\n", ret);
++		}
+ 	}
+ 
+ 	/* Skip remaining clock stop de-preparation if no Slave is attached */
+ 	if (!is_slave)
+ 		return 0;
+ 
+-	if (!simple_clk_stop)
+-		sdw_bus_wait_for_clk_prep_deprep(bus, SDW_BROADCAST_DEV_NUM);
++	/*
++	 * Don't wait for all Slaves to be ready if they follow the simple
++	 * state machine
++	 */
++	if (!simple_clk_stop) {
++		ret = sdw_bus_wait_for_clk_prep_deprep(bus, SDW_BROADCAST_DEV_NUM);
++		if (ret < 0)
++			dev_warn(&slave->dev, "clock stop deprepare wait failed:%d\n", ret);
++	}
+ 
+ 	list_for_each_entry(slave, &bus->slaves, node) {
+ 		if (!slave->dev_num)
+@@ -1117,9 +1099,10 @@ int sdw_bus_exit_clk_stop(struct sdw_bus *bus)
+ 		    slave->status != SDW_SLAVE_ALERT)
+ 			continue;
+ 
+-		mode = slave->curr_clk_stop_mode;
+-		sdw_slave_clk_stop_callback(slave, mode,
+-					    SDW_CLK_POST_DEPREPARE);
++		ret = sdw_slave_clk_stop_callback(slave, SDW_CLK_STOP_MODE0,
++						  SDW_CLK_POST_DEPREPARE);
++		if (ret < 0)
++			dev_warn(&slave->dev, "clock stop post-deprepare cb failed:%d\n", ret);
+ 	}
+ 
+ 	return 0;
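[Editor's note: a recurring pattern in the soundwire hunks above is that a Slave which lost sync answers with Command Ignored, reported as -ENODATA, and the prepare loop must treat that as non-fatal while any other negative code still aborts. A minimal sketch of that error policy; the slave list and return codes are mocked.]

#include <errno.h>
#include <stdio.h>

/* Mocked per-slave prepare: slave 1 has lost sync (-ENODATA). */
static int prepare_slave(int id)
{
	return id == 1 ? -ENODATA : 0;
}

static int prep_all(int nr_slaves)
{
	int id, ret;

	for (id = 0; id < nr_slaves; id++) {
		ret = prepare_slave(id);
		if (ret < 0 && ret != -ENODATA)
			return ret;	/* real failure: abort the loop */
		if (ret == -ENODATA)
			printf("slave %d ignored the command, continuing\n", id);
	}
	return 0;
}

int main(void)
{
	return prep_all(3) ? 1 : 0;
}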
+diff --git a/drivers/staging/rtl8723bs/hal/odm.h b/drivers/staging/rtl8723bs/hal/odm.h
+index ff21343fbe0be..a428b0d7e37c7 100644
+--- a/drivers/staging/rtl8723bs/hal/odm.h
++++ b/drivers/staging/rtl8723bs/hal/odm.h
+@@ -197,10 +197,7 @@ struct odm_rate_adaptive {
+ 
+ #define AVG_THERMAL_NUM		8
+ #define IQK_Matrix_REG_NUM	8
+-#define IQK_Matrix_Settings_NUM	(14 + 24 + 21) /*   Channels_2_4G_NUM
+-						* + Channels_5G_20M_NUM
+-						* + Channels_5G
+-						*/
++#define IQK_Matrix_Settings_NUM	14 /* Channels_2_4G_NUM */
+ 
+ #define		DM_Type_ByFW			0
+ #define		DM_Type_ByDriver		1
+diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
+index 6d0d0beed402f..0cd5608e64969 100644
+--- a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
++++ b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
+@@ -2602,10 +2602,9 @@ static int rtw_dbg_port(struct net_device *dev,
+ 				case 0x12: /* set rx_stbc */
+ 				{
+ 					struct registry_priv *pregpriv = &padapter->registrypriv;
+-					/*  0: disable, bit(0):enable 2.4g, bit(1):enable 5g, 0x3: enable both 2.4g and 5g */
+-					/* default is set to enable 2.4GHZ for IOT issue with bufflao's AP at 5GHZ */
+-					if (extra_arg == 0 || extra_arg == 1 ||
+-					    extra_arg == 2 || extra_arg == 3)
++					/*  0: disable, bit(0):enable 2.4g */
++					/* default is set to enable 2.4GHZ */
++					if (extra_arg == 0 || extra_arg == 1)
+ 						pregpriv->rx_stbc = extra_arg;
+ 				}
+ 				break;
+diff --git a/drivers/thermal/rcar_gen3_thermal.c b/drivers/thermal/rcar_gen3_thermal.c
+index e1e412348076b..1a60adb1d30a0 100644
+--- a/drivers/thermal/rcar_gen3_thermal.c
++++ b/drivers/thermal/rcar_gen3_thermal.c
+@@ -143,7 +143,7 @@ static void rcar_gen3_thermal_calc_coefs(struct rcar_gen3_thermal_tsc *tsc,
+ 	 * Division is not scaled in BSP and if scaled it might overflow
+ 	 * the dividend (4095 * 4095 << 14 > INT_MAX) so keep it unscaled
+ 	 */
+-	tsc->tj_t = (FIXPT_INT((ptat[1] - ptat[2]) * 157)
++	tsc->tj_t = (FIXPT_INT((ptat[1] - ptat[2]) * (ths_tj_1 - TJ_3))
+ 		     / (ptat[0] - ptat[2])) + FIXPT_INT(TJ_3);
+ 
+ 	tsc->coef.a1 = FIXPT_DIV(FIXPT_INT(thcode[1] - thcode[2]),
+diff --git a/drivers/thermal/sprd_thermal.c b/drivers/thermal/sprd_thermal.c
+index 3682edb2f4669..fe06cccf14b38 100644
+--- a/drivers/thermal/sprd_thermal.c
++++ b/drivers/thermal/sprd_thermal.c
+@@ -532,6 +532,7 @@ static const struct of_device_id sprd_thermal_of_match[] = {
+ 	{ .compatible = "sprd,ums512-thermal", .data = &ums512_data },
+ 	{ },
+ };
++MODULE_DEVICE_TABLE(of, sprd_thermal_of_match);
+ 
+ static const struct dev_pm_ops sprd_thermal_pm_ops = {
+ 	SET_SYSTEM_SLEEP_PM_OPS(sprd_thm_suspend, sprd_thm_resume)
+diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
+index 46d0906a30707..470885e6f1c86 100644
+--- a/drivers/thunderbolt/eeprom.c
++++ b/drivers/thunderbolt/eeprom.c
+@@ -214,7 +214,10 @@ static u32 tb_crc32(void *data, size_t len)
+ 	return ~__crc32c_le(~0, data, len);
+ }
+ 
+-#define TB_DROM_DATA_START 13
++#define TB_DROM_DATA_START		13
++#define TB_DROM_HEADER_SIZE		22
++#define USB4_DROM_HEADER_SIZE		16
++
+ struct tb_drom_header {
+ 	/* BYTE 0 */
+ 	u8 uid_crc8; /* checksum for uid */
+@@ -224,9 +227,9 @@ struct tb_drom_header {
+ 	u32 data_crc32; /* checksum for data_len bytes starting at byte 13 */
+ 	/* BYTE 13 */
+ 	u8 device_rom_revision; /* should be <= 1 */
+-	u16 data_len:10;
+-	u8 __unknown1:6;
+-	/* BYTES 16-21 */
++	u16 data_len:12;
++	u8 reserved:4;
++	/* BYTES 16-21 - Only for TBT DROM, nonexistent in USB4 DROM */
+ 	u16 vendor_id;
+ 	u16 model_id;
+ 	u8 model_rev;
+@@ -401,10 +404,10 @@ static int tb_drom_parse_entry_port(struct tb_switch *sw,
+  *
+  * Drom must have been copied to sw->drom.
+  */
+-static int tb_drom_parse_entries(struct tb_switch *sw)
++static int tb_drom_parse_entries(struct tb_switch *sw, size_t header_size)
+ {
+ 	struct tb_drom_header *header = (void *) sw->drom;
+-	u16 pos = sizeof(*header);
++	u16 pos = header_size;
+ 	u16 drom_size = header->data_len + TB_DROM_DATA_START;
+ 	int res;
+ 
+@@ -566,7 +569,7 @@ static int tb_drom_parse(struct tb_switch *sw)
+ 			header->data_crc32, crc);
+ 	}
+ 
+-	return tb_drom_parse_entries(sw);
++	return tb_drom_parse_entries(sw, TB_DROM_HEADER_SIZE);
+ }
+ 
+ static int usb4_drom_parse(struct tb_switch *sw)
+@@ -583,7 +586,7 @@ static int usb4_drom_parse(struct tb_switch *sw)
+ 		return -EINVAL;
+ 	}
+ 
+-	return tb_drom_parse_entries(sw);
++	return tb_drom_parse_entries(sw, USB4_DROM_HEADER_SIZE);
+ }
+ 
+ /**
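[Editor's note: the eeprom change parameterizes the parse loop because a TBT DROM carries a 22-byte header while a USB4 DROM stops after 16 bytes. A toy sketch of an offset-driven entry walk; the one-byte length field and the buffer contents are illustrative, not the real DROM entry layout.]

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define TB_DROM_HEADER_SIZE	22
#define USB4_DROM_HEADER_SIZE	16

static void parse_entries(const uint8_t *drom, size_t drom_size,
			  size_t header_size)
{
	size_t pos = header_size;	/* entries start right after the header */

	while (pos < drom_size) {
		uint8_t len = drom[pos];	/* illustrative: length-prefixed */

		if (!len || pos + len > drom_size)
			break;
		printf("entry at %zu, len %u\n", pos, (unsigned int)len);
		pos += len;
	}
}

int main(void)
{
	uint8_t drom[32] = { 0 };

	drom[16] = 4;	/* one 4-byte entry */
	drom[20] = 6;	/* one 6-byte entry */
	parse_entries(drom, sizeof(drom), USB4_DROM_HEADER_SIZE);
	return 0;
}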
+diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c
+index 0b077b45d6a94..bce28729dd7bd 100644
+--- a/drivers/tty/serial/8250/8250_of.c
++++ b/drivers/tty/serial/8250/8250_of.c
+@@ -192,6 +192,10 @@ static int of_platform_serial_probe(struct platform_device *ofdev)
+ 	u32 tx_threshold;
+ 	int ret;
+ 
++	if (IS_ENABLED(CONFIG_SERIAL_8250_BCM7271) &&
++	    of_device_is_compatible(ofdev->dev.of_node, "brcm,bcm7271-uart"))
++		return -ENODEV;
++
+ 	port_type = (unsigned long)of_device_get_match_data(&ofdev->dev);
+ 	if (port_type == PORT_UNKNOWN)
+ 		return -EINVAL;
+diff --git a/drivers/tty/serial/8250/serial_cs.c b/drivers/tty/serial/8250/serial_cs.c
+index 53f2697014a02..dc2ef05a10ebe 100644
+--- a/drivers/tty/serial/8250/serial_cs.c
++++ b/drivers/tty/serial/8250/serial_cs.c
+@@ -306,6 +306,7 @@ static int serial_resume(struct pcmcia_device *link)
+ static int serial_probe(struct pcmcia_device *link)
+ {
+ 	struct serial_info *info;
++	int ret;
+ 
+ 	dev_dbg(&link->dev, "serial_attach()\n");
+ 
+@@ -320,7 +321,15 @@ static int serial_probe(struct pcmcia_device *link)
+ 	if (do_sound)
+ 		link->config_flags |= CONF_ENABLE_SPKR;
+ 
+-	return serial_config(link);
++	ret = serial_config(link);
++	if (ret)
++		goto free_info;
++
++	return 0;
++
++free_info:
++	kfree(info);
++	return ret;
+ }
+ 
+ static void serial_detach(struct pcmcia_device *link)
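[Editor's note: the serial_cs probe fix plugs a leak — once serial_config() fails, the just-allocated info must be freed before propagating the error. A userspace sketch of the same goto-unwind shape; the names are stand-ins.]

#include <errno.h>
#include <stdlib.h>

struct serial_info { int ndev; };

static int serial_config_stub(struct serial_info *info)
{
	(void)info;
	return -ENODEV;		/* pretend configuration failed */
}

static int probe(void)
{
	struct serial_info *info;
	int ret;

	info = calloc(1, sizeof(*info));
	if (!info)
		return -ENOMEM;

	ret = serial_config_stub(info);
	if (ret)
		goto free_info;

	return 0;

free_info:
	free(info);		/* the fix: don't leak info on failure */
	return ret;
}

int main(void)
{
	return probe() == -ENODEV ? 0 : 1;
}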
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 9c78e43e669d7..0d7ea144a4a6d 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1571,6 +1571,9 @@ static void lpuart_tx_dma_startup(struct lpuart_port *sport)
+ 	u32 uartbaud;
+ 	int ret;
+ 
++	if (uart_console(&sport->port))
++		goto err;
++
+ 	if (!sport->dma_tx_chan)
+ 		goto err;
+ 
+@@ -1600,6 +1603,9 @@ static void lpuart_rx_dma_startup(struct lpuart_port *sport)
+ 	int ret;
+ 	unsigned char cr3;
+ 
++	if (uart_console(&sport->port))
++		goto err;
++
+ 	if (!sport->dma_rx_chan)
+ 		goto err;
+ 
+@@ -2404,6 +2410,9 @@ lpuart32_console_get_options(struct lpuart_port *sport, int *baud,
+ 
+ 	bd = lpuart32_read(&sport->port, UARTBAUD);
+ 	bd &= UARTBAUD_SBR_MASK;
++	if (!bd)
++		return;
++
+ 	sbr = bd;
+ 	uartclk = lpuart_get_baud_clk_rate(sport);
+ 	/*
+diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c
+index f42ccc40ffa64..a5f15f22d9efe 100644
+--- a/drivers/tty/serial/uartlite.c
++++ b/drivers/tty/serial/uartlite.c
+@@ -505,21 +505,23 @@ static void ulite_console_write(struct console *co, const char *s,
+ 
+ static int ulite_console_setup(struct console *co, char *options)
+ {
+-	struct uart_port *port;
++	struct uart_port *port = NULL;
+ 	int baud = 9600;
+ 	int bits = 8;
+ 	int parity = 'n';
+ 	int flow = 'n';
+ 
+-
+-	port = console_port;
++	if (co->index >= 0 && co->index < ULITE_NR_UARTS)
++		port = ulite_ports + co->index;
+ 
+ 	/* Has the device been initialized yet? */
+-	if (!port->mapbase) {
++	if (!port || !port->mapbase) {
+ 		pr_debug("console on ttyUL%i not present\n", co->index);
+ 		return -ENODEV;
+ 	}
+ 
++	console_port = port;
++
+ 	/* not initialized yet? */
+ 	if (!port->membase) {
+ 		if (ulite_request_port(port))
+@@ -655,17 +657,6 @@ static int ulite_assign(struct device *dev, int id, u32 base, int irq,
+ 
+ 	dev_set_drvdata(dev, port);
+ 
+-#ifdef CONFIG_SERIAL_UARTLITE_CONSOLE
+-	/*
+-	 * If console hasn't been found yet try to assign this port
+-	 * because it is required to be assigned for console setup function.
+-	 * If register_console() don't assign value, then console_port pointer
+-	 * is cleanup.
+-	 */
+-	if (ulite_uart_driver.cons->index == -1)
+-		console_port = port;
+-#endif
+-
+ 	/* Register the port */
+ 	rc = uart_add_one_port(&ulite_uart_driver, port);
+ 	if (rc) {
+@@ -675,12 +666,6 @@ static int ulite_assign(struct device *dev, int id, u32 base, int irq,
+ 		return rc;
+ 	}
+ 
+-#ifdef CONFIG_SERIAL_UARTLITE_CONSOLE
+-	/* This is not port which is used for console that's why clean it up */
+-	if (ulite_uart_driver.cons->index == -1)
+-		console_port = NULL;
+-#endif
+-
+ 	return 0;
+ }
+ 
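[Editor's note: the uartlite console fix stops relying on a console_port pointer staged at probe time and instead resolves the port from co->index with a bounds check, publishing it only after validation. A sketch of that lookup-then-validate order; ULITE_NR_UARTS and the port array are mocked.]

#include <errno.h>
#include <stdio.h>

#define ULITE_NR_UARTS 4

struct uart_port { unsigned long mapbase; };

static struct uart_port ulite_ports[ULITE_NR_UARTS] = {
	[1] = { .mapbase = 0x40600000 },	/* only index 1 probed */
};

static struct uart_port *console_port;

static int console_setup(int index)
{
	struct uart_port *port = NULL;

	if (index >= 0 && index < ULITE_NR_UARTS)
		port = &ulite_ports[index];

	/* Reject unknown indices and ports that never probed. */
	if (!port || !port->mapbase)
		return -ENODEV;

	console_port = port;	/* publish only a validated port */
	return 0;
}

int main(void)
{
	int a = console_setup(1);
	int b = console_setup(9);

	printf("index 1 -> %d, index 9 -> %d\n", a, b);
	return 0;
}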
+diff --git a/drivers/usb/common/usb-conn-gpio.c b/drivers/usb/common/usb-conn-gpio.c
+index 6c4e3a19f42cb..c9545a4eff664 100644
+--- a/drivers/usb/common/usb-conn-gpio.c
++++ b/drivers/usb/common/usb-conn-gpio.c
+@@ -149,14 +149,32 @@ static int usb_charger_get_property(struct power_supply *psy,
+ 	return 0;
+ }
+ 
+-static int usb_conn_probe(struct platform_device *pdev)
++static int usb_conn_psy_register(struct usb_conn_info *info)
+ {
+-	struct device *dev = &pdev->dev;
+-	struct power_supply_desc *desc;
+-	struct usb_conn_info *info;
++	struct device *dev = info->dev;
++	struct power_supply_desc *desc = &info->desc;
+ 	struct power_supply_config cfg = {
+ 		.of_node = dev->of_node,
+ 	};
++
++	desc->name = "usb-charger";
++	desc->properties = usb_charger_properties;
++	desc->num_properties = ARRAY_SIZE(usb_charger_properties);
++	desc->get_property = usb_charger_get_property;
++	desc->type = POWER_SUPPLY_TYPE_USB;
++	cfg.drv_data = info;
++
++	info->charger = devm_power_supply_register(dev, desc, &cfg);
++	if (IS_ERR(info->charger))
++		dev_err(dev, "Unable to register charger\n");
++
++	return PTR_ERR_OR_ZERO(info->charger);
++}
++
++static int usb_conn_probe(struct platform_device *pdev)
++{
++	struct device *dev = &pdev->dev;
++	struct usb_conn_info *info;
+ 	bool need_vbus = true;
+ 	int ret = 0;
+ 
+@@ -218,6 +236,10 @@ static int usb_conn_probe(struct platform_device *pdev)
+ 		return PTR_ERR(info->role_sw);
+ 	}
+ 
++	ret = usb_conn_psy_register(info);
++	if (ret)
++		goto put_role_sw;
++
+ 	if (info->id_gpiod) {
+ 		info->id_irq = gpiod_to_irq(info->id_gpiod);
+ 		if (info->id_irq < 0) {
+@@ -252,20 +274,6 @@ static int usb_conn_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	desc = &info->desc;
+-	desc->name = "usb-charger";
+-	desc->properties = usb_charger_properties;
+-	desc->num_properties = ARRAY_SIZE(usb_charger_properties);
+-	desc->get_property = usb_charger_get_property;
+-	desc->type = POWER_SUPPLY_TYPE_USB;
+-	cfg.drv_data = info;
+-
+-	info->charger = devm_power_supply_register(dev, desc, &cfg);
+-	if (IS_ERR(info->charger)) {
+-		dev_err(dev, "Unable to register charger\n");
+-		return PTR_ERR(info->charger);
+-	}
+-
+ 	platform_set_drvdata(pdev, info);
+ 
+ 	/* Perform initial detection */
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 1e51460938b83..2b37bdd39a740 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -36,7 +36,7 @@
+ #define PCI_DEVICE_ID_INTEL_CNPH		0xa36e
+ #define PCI_DEVICE_ID_INTEL_CNPV		0xa3b0
+ #define PCI_DEVICE_ID_INTEL_ICLLP		0x34ee
+-#define PCI_DEVICE_ID_INTEL_EHLLP		0x4b7e
++#define PCI_DEVICE_ID_INTEL_EHL			0x4b7e
+ #define PCI_DEVICE_ID_INTEL_TGPLP		0xa0ee
+ #define PCI_DEVICE_ID_INTEL_TGPH		0x43ee
+ #define PCI_DEVICE_ID_INTEL_JSP			0x4dee
+@@ -167,7 +167,7 @@ static int dwc3_pci_quirks(struct dwc3_pci *dwc)
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
+ 		if (pdev->device == PCI_DEVICE_ID_INTEL_BXT ||
+ 		    pdev->device == PCI_DEVICE_ID_INTEL_BXT_M ||
+-		    pdev->device == PCI_DEVICE_ID_INTEL_EHLLP) {
++		    pdev->device == PCI_DEVICE_ID_INTEL_EHL) {
+ 			guid_parse(PCI_INTEL_BXT_DSM_GUID, &dwc->guid);
+ 			dwc->has_dsm_for_pm = true;
+ 		}
+@@ -375,8 +375,8 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICLLP),
+ 	  (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+ 
+-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_EHLLP),
+-	  (kernel_ulong_t) &dwc3_pci_intel_swnode },
++	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_EHL),
++	  (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+ 
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGPLP),
+ 	  (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index e556993081170..a82b3de1a54be 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -88,7 +88,7 @@ static struct usb_interface_descriptor hidg_interface_desc = {
+ static struct hid_descriptor hidg_desc = {
+ 	.bLength			= sizeof hidg_desc,
+ 	.bDescriptorType		= HID_DT_HID,
+-	.bcdHID				= 0x0101,
++	.bcdHID				= cpu_to_le16(0x0101),
+ 	.bCountryCode			= 0x00,
+ 	.bNumDescriptors		= 0x1,
+ 	/*.desc[0].bDescriptorType	= DYNAMIC */
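[Editor's note: the one-line f_hid change matters on big-endian hosts — USB descriptor fields are little-endian on the wire, so a multi-byte field like bcdHID needs cpu_to_le16(). The bug was easy to miss because 0x0101 is byte-symmetric. A standalone sketch with a local stand-in for cpu_to_le16(); a non-symmetric value makes the swap visible.]

#include <stdint.h>
#include <stdio.h>

/* Local stand-in for the kernel's cpu_to_le16(). */
static uint16_t my_cpu_to_le16(uint16_t v)
{
	const union { uint16_t u; uint8_t b[2]; } probe = { .u = 1 };

	/* Little-endian host: no-op. Big-endian host: byte swap. */
	return probe.b[0] ? v : (uint16_t)((v >> 8) | (v << 8));
}

int main(void)
{
	uint16_t bcd_hid = my_cpu_to_le16(0x0111);
	const uint8_t *raw = (const uint8_t *)&bcd_hid;

	/* Wire order must be low byte first: 11 01 on any host. */
	printf("bytes on the wire: %02x %02x\n", raw[0], raw[1]);
	return 0;
}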
+diff --git a/drivers/usb/gadget/legacy/hid.c b/drivers/usb/gadget/legacy/hid.c
+index c4eda7fe7ab45..5b27d289443fe 100644
+--- a/drivers/usb/gadget/legacy/hid.c
++++ b/drivers/usb/gadget/legacy/hid.c
+@@ -171,8 +171,10 @@ static int hid_bind(struct usb_composite_dev *cdev)
+ 		struct usb_descriptor_header *usb_desc;
+ 
+ 		usb_desc = usb_otg_descriptor_alloc(gadget);
+-		if (!usb_desc)
++		if (!usb_desc) {
++			status = -ENOMEM;
+ 			goto put;
++		}
+ 		usb_otg_descriptor_init(gadget, usb_desc);
+ 		otg_desc[0] = usb_desc;
+ 		otg_desc[1] = NULL;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 27283654ca080..9248ce8d09a4a 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1361,12 +1361,17 @@ static void xhci_unmap_temp_buf(struct usb_hcd *hcd, struct urb *urb)
+ 				 urb->transfer_buffer_length,
+ 				 dir);
+ 
+-	if (usb_urb_dir_in(urb))
++	if (usb_urb_dir_in(urb)) {
+ 		len = sg_pcopy_from_buffer(urb->sg, urb->num_sgs,
+ 					   urb->transfer_buffer,
+ 					   buf_len,
+ 					   0);
+-
++		if (len != buf_len) {
++			xhci_dbg(hcd_to_xhci(hcd),
++				 "Copy from tmp buf to urb sg list failed\n");
++			urb->actual_length = len;
++		}
++	}
+ 	urb->transfer_flags &= ~URB_DMA_MAP_SINGLE;
+ 	kfree(urb->transfer_buffer);
+ 	urb->transfer_buffer = NULL;
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index 800cfd1967ada..cfa56a58b271b 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -219,11 +219,6 @@ static void destroy_indirect_key(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_m
+ 	mlx5_vdpa_destroy_mkey(mvdev, &mkey->mkey);
+ }
+ 
+-static struct device *get_dma_device(struct mlx5_vdpa_dev *mvdev)
+-{
+-	return &mvdev->mdev->pdev->dev;
+-}
+-
+ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr *mr,
+ 			 struct vhost_iotlb *iotlb)
+ {
+@@ -239,7 +234,7 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ 	u64 pa;
+ 	u64 paend;
+ 	struct scatterlist *sg;
+-	struct device *dma = get_dma_device(mvdev);
++	struct device *dma = mvdev->vdev.dma_dev;
+ 
+ 	for (map = vhost_iotlb_itree_first(iotlb, mr->start, mr->end - 1);
+ 	     map; map = vhost_iotlb_itree_next(map, start, mr->end - 1)) {
+@@ -298,7 +293,7 @@ err_map:
+ 
+ static void unmap_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr *mr)
+ {
+-	struct device *dma = get_dma_device(mvdev);
++	struct device *dma = mvdev->vdev.dma_dev;
+ 
+ 	destroy_direct_mr(mvdev, mr);
+ 	dma_unmap_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index dda5dc6f77378..32dd5ed712cb4 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -611,8 +611,8 @@ static void cq_destroy(struct mlx5_vdpa_net *ndev, u16 idx)
+ 	mlx5_db_free(ndev->mvdev.mdev, &vcq->db);
+ }
+ 
+-static int umem_size(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq, int num,
+-		     struct mlx5_vdpa_umem **umemp)
++static void set_umem_size(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq, int num,
++			  struct mlx5_vdpa_umem **umemp)
+ {
+ 	struct mlx5_core_dev *mdev = ndev->mvdev.mdev;
+ 	int p_a;
+@@ -635,7 +635,7 @@ static int umem_size(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq
+ 		*umemp = &mvq->umem3;
+ 		break;
+ 	}
+-	return p_a * mvq->num_ent + p_b;
++	(*umemp)->size = p_a * mvq->num_ent + p_b;
+ }
+ 
+ static void umem_frag_buf_free(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_umem *umem)
+@@ -651,15 +651,10 @@ static int create_umem(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
+ 	void *in;
+ 	int err;
+ 	__be64 *pas;
+-	int size;
+ 	struct mlx5_vdpa_umem *umem;
+ 
+-	size = umem_size(ndev, mvq, num, &umem);
+-	if (size < 0)
+-		return size;
+-
+-	umem->size = size;
+-	err = umem_frag_buf_alloc(ndev, umem, size);
++	set_umem_size(ndev, mvq, num, &umem);
++	err = umem_frag_buf_alloc(ndev, umem, umem->size);
+ 	if (err)
+ 		return err;
+ 
+@@ -829,9 +824,9 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
+ 	MLX5_SET(virtio_q, vq_ctx, umem_1_id, mvq->umem1.id);
+ 	MLX5_SET(virtio_q, vq_ctx, umem_1_size, mvq->umem1.size);
+ 	MLX5_SET(virtio_q, vq_ctx, umem_2_id, mvq->umem2.id);
+-	MLX5_SET(virtio_q, vq_ctx, umem_2_size, mvq->umem1.size);
++	MLX5_SET(virtio_q, vq_ctx, umem_2_size, mvq->umem2.size);
+ 	MLX5_SET(virtio_q, vq_ctx, umem_3_id, mvq->umem3.id);
+-	MLX5_SET(virtio_q, vq_ctx, umem_3_size, mvq->umem1.size);
++	MLX5_SET(virtio_q, vq_ctx, umem_3_size, mvq->umem3.size);
+ 	MLX5_SET(virtio_q, vq_ctx, pd, ndev->mvdev.res.pdn);
+ 	if (MLX5_CAP_DEV_VDPA_EMULATION(ndev->mvdev.mdev, eth_frame_offload_type))
+ 		MLX5_SET(virtio_q, vq_ctx, virtio_version_1_0, 1);
+@@ -1772,6 +1767,14 @@ out:
+ 	mutex_unlock(&ndev->reslock);
+ }
+ 
++static void clear_vqs_ready(struct mlx5_vdpa_net *ndev)
++{
++	int i;
++
++	for (i = 0; i < ndev->mvdev.max_vqs; i++)
++		ndev->vqs[i].ready = false;
++}
++
+ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
+ {
+ 	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
+@@ -1782,6 +1785,7 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
+ 	if (!status) {
+ 		mlx5_vdpa_info(mvdev, "performing device reset\n");
+ 		teardown_driver(ndev);
++		clear_vqs_ready(ndev);
+ 		mlx5_vdpa_destroy_mr(&ndev->mvdev);
+ 		ndev->mvdev.status = 0;
+ 		ndev->mvdev.mlx_features = 0;
+@@ -2037,7 +2041,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name)
+ 			goto err_mtu;
+ 	}
+ 
+-	mvdev->vdev.dma_dev = mdev->device;
++	mvdev->vdev.dma_dev = &mdev->pdev->dev;
+ 	err = mlx5_vdpa_alloc_resources(&ndev->mvdev);
+ 	if (err)
+ 		goto err_mpfs;
+diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
+index c76ebb5312121..9145e06245658 100644
+--- a/drivers/vdpa/virtio_pci/vp_vdpa.c
++++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
+@@ -442,6 +442,7 @@ static int vp_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 			vp_modern_map_vq_notify(mdev, i,
+ 						&vp_vdpa->vring[i].notify_pa);
+ 		if (!vp_vdpa->vring[i].notify) {
++			ret = -EINVAL;
+ 			dev_warn(&pdev->dev, "Fail to map vq notify %d\n", i);
+ 			goto err;
+ 		}
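[Editor's note: the vp_vdpa change fixes a classic stale-return-value bug — ret still holds 0 from the previous successful call when the notify-mapping check fails, so the goto err path would report success. A minimal sketch of the pattern; the setup steps are mocked.]

#include <errno.h>
#include <stdio.h>

static int setup_ring(int i)
{
	(void)i;
	return 0;		/* earlier step succeeds, leaving ret == 0 */
}

static int probe(void)
{
	int i, ret;
	void *notify = NULL;	/* pretend the mapping came back NULL */

	for (i = 0; i < 2; i++) {
		ret = setup_ring(i);
		if (ret)
			goto err;
		if (!notify) {
			ret = -EINVAL;	/* the fix: set ret before the jump */
			goto err;
		}
	}
	return 0;
err:
	return ret;
}

int main(void)
{
	printf("probe() = %d\n", probe());	/* -EINVAL, not 0 */
	return 0;
}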
+diff --git a/drivers/video/backlight/lm3630a_bl.c b/drivers/video/backlight/lm3630a_bl.c
+index 662029d6a3dc9..419b0334cf087 100644
+--- a/drivers/video/backlight/lm3630a_bl.c
++++ b/drivers/video/backlight/lm3630a_bl.c
+@@ -190,7 +190,7 @@ static int lm3630a_bank_a_update_status(struct backlight_device *bl)
+ 	if ((pwm_ctrl & LM3630A_PWM_BANK_A) != 0) {
+ 		lm3630a_pwm_ctrl(pchip, bl->props.brightness,
+ 				 bl->props.max_brightness);
+-		return bl->props.brightness;
++		return 0;
+ 	}
+ 
+ 	/* disable sleep */
+@@ -210,8 +210,8 @@ static int lm3630a_bank_a_update_status(struct backlight_device *bl)
+ 	return 0;
+ 
+ out_i2c_err:
+-	dev_err(pchip->dev, "i2c failed to access\n");
+-	return bl->props.brightness;
++	dev_err(pchip->dev, "i2c failed to access (%pe)\n", ERR_PTR(ret));
++	return ret;
+ }
+ 
+ static int lm3630a_bank_a_get_brightness(struct backlight_device *bl)
+@@ -267,7 +267,7 @@ static int lm3630a_bank_b_update_status(struct backlight_device *bl)
+ 	if ((pwm_ctrl & LM3630A_PWM_BANK_B) != 0) {
+ 		lm3630a_pwm_ctrl(pchip, bl->props.brightness,
+ 				 bl->props.max_brightness);
+-		return bl->props.brightness;
++		return 0;
+ 	}
+ 
+ 	/* disable sleep */
+@@ -287,8 +287,8 @@ static int lm3630a_bank_b_update_status(struct backlight_device *bl)
+ 	return 0;
+ 
+ out_i2c_err:
+-	dev_err(pchip->dev, "i2c failed to access REG_CTRL\n");
+-	return bl->props.brightness;
++	dev_err(pchip->dev, "i2c failed to access (%pe)\n", ERR_PTR(ret));
++	return ret;
+ }
+ 
+ static int lm3630a_bank_b_get_brightness(struct backlight_device *bl)
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 98f193078c05a..1c855145711ba 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -970,13 +970,11 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ 		fb_var_to_videomode(&mode2, &info->var);
+ 		/* make sure we don't delete the videomode of current var */
+ 		ret = fb_mode_is_equal(&mode1, &mode2);
+-
+-		if (!ret)
+-			fbcon_mode_deleted(info, &mode1);
+-
+-		if (!ret)
+-			fb_delete_videomode(&mode1, &info->modelist);
+-
++		if (!ret) {
++			ret = fbcon_mode_deleted(info, &mode1);
++			if (!ret)
++				fb_delete_videomode(&mode1, &info->modelist);
++		}
+ 
+ 		return ret ? -EINVAL : 0;
+ 	}
+diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
+index 10ec60d81e847..3bf08b5bb3595 100644
+--- a/drivers/virtio/virtio_mem.c
++++ b/drivers/virtio/virtio_mem.c
+@@ -2420,6 +2420,10 @@ static int virtio_mem_init(struct virtio_mem *vm)
+ 		dev_warn(&vm->vdev->dev,
+ 			 "Some device memory is not addressable/pluggable. This can make some memory unusable.\n");
+ 
++	/* Prepare the offline threshold - make sure we can add two blocks. */
++	vm->offline_threshold = max_t(uint64_t, 2 * memory_block_size_bytes(),
++				      VIRTIO_MEM_DEFAULT_OFFLINE_THRESHOLD);
++
+ 	/*
+ 	 * We want subblocks to span at least MAX_ORDER_NR_PAGES and
+ 	 * pageblock_nr_pages pages. This:
+@@ -2466,14 +2470,11 @@ static int virtio_mem_init(struct virtio_mem *vm)
+ 		       vm->bbm.bb_size - 1;
+ 		vm->bbm.first_bb_id = virtio_mem_phys_to_bb_id(vm, addr);
+ 		vm->bbm.next_bb_id = vm->bbm.first_bb_id;
+-	}
+ 
+-	/* Prepare the offline threshold - make sure we can add two blocks. */
+-	vm->offline_threshold = max_t(uint64_t, 2 * memory_block_size_bytes(),
+-				      VIRTIO_MEM_DEFAULT_OFFLINE_THRESHOLD);
+-	/* In BBM, we also want at least two big blocks. */
+-	vm->offline_threshold = max_t(uint64_t, 2 * vm->bbm.bb_size,
+-				      vm->offline_threshold);
++		/* Make sure we can add two big blocks. */
++		vm->offline_threshold = max_t(uint64_t, 2 * vm->bbm.bb_size,
++					      vm->offline_threshold);
++	}
+ 
+ 	dev_info(&vm->vdev->dev, "start address: 0x%llx", vm->addr);
+ 	dev_info(&vm->vdev->dev, "region size: 0x%llx", vm->region_size);
+diff --git a/drivers/w1/slaves/w1_ds2438.c b/drivers/w1/slaves/w1_ds2438.c
+index 5cfb0ae23e916..5698566b0ee01 100644
+--- a/drivers/w1/slaves/w1_ds2438.c
++++ b/drivers/w1/slaves/w1_ds2438.c
+@@ -62,13 +62,13 @@ static int w1_ds2438_get_page(struct w1_slave *sl, int pageno, u8 *buf)
+ 		if (w1_reset_select_slave(sl))
+ 			continue;
+ 		w1_buf[0] = W1_DS2438_RECALL_MEMORY;
+-		w1_buf[1] = 0x00;
++		w1_buf[1] = (u8)pageno;
+ 		w1_write_block(sl->master, w1_buf, 2);
+ 
+ 		if (w1_reset_select_slave(sl))
+ 			continue;
+ 		w1_buf[0] = W1_DS2438_READ_SCRATCH;
+-		w1_buf[1] = 0x00;
++		w1_buf[1] = (u8)pageno;
+ 		w1_write_block(sl->master, w1_buf, 2);
+ 
+ 		count = w1_read_block(sl->master, buf, DS2438_PAGE_SIZE + 1);
+diff --git a/drivers/watchdog/aspeed_wdt.c b/drivers/watchdog/aspeed_wdt.c
+index 7e00960651fa2..507fd815d7679 100644
+--- a/drivers/watchdog/aspeed_wdt.c
++++ b/drivers/watchdog/aspeed_wdt.c
+@@ -147,7 +147,7 @@ static int aspeed_wdt_set_timeout(struct watchdog_device *wdd,
+ 
+ 	wdd->timeout = timeout;
+ 
+-	actual = min(timeout, wdd->max_hw_heartbeat_ms * 1000);
++	actual = min(timeout, wdd->max_hw_heartbeat_ms / 1000);
+ 
+ 	writel(actual * WDT_RATE_1MHZ, wdt->base + WDT_RELOAD_VALUE);
+ 	writel(WDT_RESTART_MAGIC, wdt->base + WDT_RESTART);
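[Editor's note: the aspeed change is a pure units bug — max_hw_heartbeat_ms is in milliseconds while the requested timeout is in seconds, so the clamp needs a division by 1000, not a multiplication (which made the bound so large the clamp never fired). A quick check of the arithmetic; the 1 MHz reload scaling mirrors WDT_RATE_1MHZ from the driver.]

#include <stdio.h>

#define WDT_RATE_1MHZ 1000000U

static unsigned int clamp_timeout(unsigned int timeout_s,
				  unsigned int max_hw_heartbeat_ms)
{
	unsigned int max_s = max_hw_heartbeat_ms / 1000;	/* ms -> s */

	return timeout_s < max_s ? timeout_s : max_s;
}

int main(void)
{
	/* ~4294 s hardware ceiling, 60 s requested: kept as 60 s. */
	unsigned int actual = clamp_timeout(60, 4294967);

	printf("actual = %u s, reload = %u ticks\n",
	       actual, actual * WDT_RATE_1MHZ);
	return 0;
}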
+diff --git a/drivers/watchdog/iTCO_wdt.c b/drivers/watchdog/iTCO_wdt.c
+index bf31d7b67a697..3f1324871cfd6 100644
+--- a/drivers/watchdog/iTCO_wdt.c
++++ b/drivers/watchdog/iTCO_wdt.c
+@@ -71,6 +71,8 @@
+ #define TCOBASE(p)	((p)->tco_res->start)
+ /* SMI Control and Enable Register */
+ #define SMI_EN(p)	((p)->smi_res->start)
++#define TCO_EN		(1 << 13)
++#define GBL_SMI_EN	(1 << 0)
+ 
+ #define TCO_RLD(p)	(TCOBASE(p) + 0x00) /* TCO Timer Reload/Curr. Value */
+ #define TCOv1_TMR(p)	(TCOBASE(p) + 0x01) /* TCOv1 Timer Initial Value*/
+@@ -355,8 +357,12 @@ static int iTCO_wdt_set_timeout(struct watchdog_device *wd_dev, unsigned int t)
+ 
+ 	tmrval = seconds_to_ticks(p, t);
+ 
+-	/* For TCO v1 the timer counts down twice before rebooting */
+-	if (p->iTCO_version == 1)
++	/*
++	 * If TCO SMIs are off, the timer counts down twice before rebooting.
++	 * Otherwise, the BIOS generally reboots when the SMI triggers.
++	 */
++	if (p->smi_res &&
++	    (SMI_EN(p) & (TCO_EN | GBL_SMI_EN)) != (TCO_EN | GBL_SMI_EN))
+ 		tmrval /= 2;
+ 
+ 	/* from the specs: */
+@@ -521,7 +527,7 @@ static int iTCO_wdt_probe(struct platform_device *pdev)
+ 		 * Disables TCO logic generating an SMI#
+ 		 */
+ 		val32 = inl(SMI_EN(p));
+-		val32 &= 0xffffdfff;	/* Turn off SMI clearing watchdog */
++		val32 &= ~TCO_EN;	/* Turn off SMI clearing watchdog */
+ 		outl(val32, SMI_EN(p));
+ 	}
+ 
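[Editor's note: the iTCO hunk replaces a version check with a hardware test — the timeout must be halved whenever TCO SMIs cannot actually fire, which requires both TCO_EN and GBL_SMI_EN to be set. A sketch of the two-bit gate; TCO_EN and GBL_SMI_EN are taken verbatim from the hunk above.]

#include <stdio.h>

#define TCO_EN		(1 << 13)
#define GBL_SMI_EN	(1 << 0)

static unsigned int ticks_for(unsigned int tmrval, unsigned int smi_en)
{
	/* Unless both bits are set, the timer counts down twice. */
	if ((smi_en & (TCO_EN | GBL_SMI_EN)) != (TCO_EN | GBL_SMI_EN))
		tmrval /= 2;
	return tmrval;
}

int main(void)
{
	printf("%u\n", ticks_for(60, TCO_EN));			/* halved: 30 */
	printf("%u\n", ticks_for(60, TCO_EN | GBL_SMI_EN));	/* kept: 60 */
	return 0;
}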
+diff --git a/drivers/watchdog/imx_sc_wdt.c b/drivers/watchdog/imx_sc_wdt.c
+index e9ee22a7cb456..8ac021748d160 100644
+--- a/drivers/watchdog/imx_sc_wdt.c
++++ b/drivers/watchdog/imx_sc_wdt.c
+@@ -183,16 +183,12 @@ static int imx_sc_wdt_probe(struct platform_device *pdev)
+ 	watchdog_stop_on_reboot(wdog);
+ 	watchdog_stop_on_unregister(wdog);
+ 
+-	ret = devm_watchdog_register_device(dev, wdog);
+-	if (ret)
+-		return ret;
+-
+ 	ret = imx_scu_irq_group_enable(SC_IRQ_GROUP_WDOG,
+ 				       SC_IRQ_WDOG,
+ 				       true);
+ 	if (ret) {
+ 		dev_warn(dev, "Enable irq failed, pretimeout NOT supported\n");
+-		return 0;
++		goto register_device;
+ 	}
+ 
+ 	imx_sc_wdd->wdt_notifier.notifier_call = imx_sc_wdt_notify;
+@@ -203,7 +199,7 @@ static int imx_sc_wdt_probe(struct platform_device *pdev)
+ 					 false);
+ 		dev_warn(dev,
+ 			 "Register irq notifier failed, pretimeout NOT supported\n");
+-		return 0;
++		goto register_device;
+ 	}
+ 
+ 	ret = devm_add_action_or_reset(dev, imx_sc_wdt_action,
+@@ -213,7 +209,8 @@ static int imx_sc_wdt_probe(struct platform_device *pdev)
+ 	else
+ 		dev_warn(dev, "Add action failed, pretimeout NOT supported\n");
+ 
+-	return 0;
++register_device:
++	return devm_watchdog_register_device(dev, wdog);
+ }
+ 
+ static int __maybe_unused imx_sc_wdt_suspend(struct device *dev)
+diff --git a/drivers/watchdog/jz4740_wdt.c b/drivers/watchdog/jz4740_wdt.c
+index bdf9564efa29e..395bde79e2920 100644
+--- a/drivers/watchdog/jz4740_wdt.c
++++ b/drivers/watchdog/jz4740_wdt.c
+@@ -176,9 +176,9 @@ static int jz4740_wdt_probe(struct platform_device *pdev)
+ 	watchdog_set_drvdata(jz4740_wdt, drvdata);
+ 
+ 	drvdata->map = device_node_to_regmap(dev->parent->of_node);
+-	if (!drvdata->map) {
++	if (IS_ERR(drvdata->map)) {
+ 		dev_err(dev, "regmap not found\n");
+-		return -EINVAL;
++		return PTR_ERR(drvdata->map);
+ 	}
+ 
+ 	return devm_watchdog_register_device(dev, &drvdata->wdt);
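[Editor's note: the jz4740 fix is about the ERR_PTR convention — device_node_to_regmap() reports failure as an encoded error pointer, never NULL, so the probe must test with IS_ERR() and forward PTR_ERR(). A userspace sketch of the encoding; the helpers below imitate the kernel's, they are not the real API.]

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_ERRNO 4095

static void *ERR_PTR(long err)      { return (void *)(intptr_t)err; }
static long  PTR_ERR(const void *p) { return (long)(intptr_t)p; }
static int   IS_ERR(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

/* Mock lookup: fails as the real call would, with an encoded pointer. */
static void *node_to_regmap(void)
{
	return ERR_PTR(-ENODEV);
}

int main(void)
{
	void *map = node_to_regmap();

	if (IS_ERR(map)) {		/* a NULL check would miss this */
		printf("regmap not found: %ld\n", PTR_ERR(map));
		return 1;
	}
	return 0;
}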
+diff --git a/drivers/watchdog/keembay_wdt.c b/drivers/watchdog/keembay_wdt.c
+index 547d3fea33ff6..dd192b8dff551 100644
+--- a/drivers/watchdog/keembay_wdt.c
++++ b/drivers/watchdog/keembay_wdt.c
+@@ -23,12 +23,14 @@
+ #define TIM_WDOG_EN		0x8
+ #define TIM_SAFE		0xc
+ 
+-#define WDT_ISR_MASK		GENMASK(9, 8)
++#define WDT_TH_INT_MASK		BIT(8)
++#define WDT_TO_INT_MASK		BIT(9)
+ #define WDT_ISR_CLEAR		0x8200ff18
+ #define WDT_UNLOCK		0xf1d0dead
+ #define WDT_LOAD_MAX		U32_MAX
+ #define WDT_LOAD_MIN		1
+ #define WDT_TIMEOUT		5
++#define WDT_PRETIMEOUT		4
+ 
+ static unsigned int timeout = WDT_TIMEOUT;
+ module_param(timeout, int, 0);
+@@ -82,7 +84,6 @@ static int keembay_wdt_start(struct watchdog_device *wdog)
+ {
+ 	struct keembay_wdt *wdt = watchdog_get_drvdata(wdog);
+ 
+-	keembay_wdt_set_timeout_reg(wdog);
+ 	keembay_wdt_writel(wdt, TIM_WDOG_EN, 1);
+ 
+ 	return 0;
+@@ -108,6 +109,7 @@ static int keembay_wdt_set_timeout(struct watchdog_device *wdog, u32 t)
+ {
+ 	wdog->timeout = t;
+ 	keembay_wdt_set_timeout_reg(wdog);
++	keembay_wdt_set_pretimeout_reg(wdog);
+ 
+ 	return 0;
+ }
+@@ -139,8 +141,7 @@ static irqreturn_t keembay_wdt_to_isr(int irq, void *dev_id)
+ 	struct keembay_wdt *wdt = dev_id;
+ 	struct arm_smccc_res res;
+ 
+-	keembay_wdt_writel(wdt, TIM_WATCHDOG, 1);
+-	arm_smccc_smc(WDT_ISR_CLEAR, WDT_ISR_MASK, 0, 0, 0, 0, 0, 0, &res);
++	arm_smccc_smc(WDT_ISR_CLEAR, WDT_TO_INT_MASK, 0, 0, 0, 0, 0, 0, &res);
+ 	dev_crit(wdt->wdd.parent, "Intel Keem Bay non-sec wdt timeout.\n");
+ 	emergency_restart();
+ 
+@@ -152,7 +153,9 @@ static irqreturn_t keembay_wdt_th_isr(int irq, void *dev_id)
+ 	struct keembay_wdt *wdt = dev_id;
+ 	struct arm_smccc_res res;
+ 
+-	arm_smccc_smc(WDT_ISR_CLEAR, WDT_ISR_MASK, 0, 0, 0, 0, 0, 0, &res);
++	keembay_wdt_set_pretimeout(&wdt->wdd, 0x0);
++
++	arm_smccc_smc(WDT_ISR_CLEAR, WDT_TH_INT_MASK, 0, 0, 0, 0, 0, 0, &res);
+ 	dev_crit(wdt->wdd.parent, "Intel Keem Bay non-sec wdt pre-timeout.\n");
+ 	watchdog_notify_pretimeout(&wdt->wdd);
+ 
+@@ -224,11 +227,13 @@ static int keembay_wdt_probe(struct platform_device *pdev)
+ 	wdt->wdd.min_timeout	= WDT_LOAD_MIN;
+ 	wdt->wdd.max_timeout	= WDT_LOAD_MAX / wdt->rate;
+ 	wdt->wdd.timeout	= WDT_TIMEOUT;
++	wdt->wdd.pretimeout	= WDT_PRETIMEOUT;
+ 
+ 	watchdog_set_drvdata(&wdt->wdd, wdt);
+ 	watchdog_set_nowayout(&wdt->wdd, nowayout);
+ 	watchdog_init_timeout(&wdt->wdd, timeout, dev);
+ 	keembay_wdt_set_timeout(&wdt->wdd, wdt->wdd.timeout);
++	keembay_wdt_set_pretimeout(&wdt->wdd, wdt->wdd.pretimeout);
+ 
+ 	ret = devm_watchdog_register_device(dev, &wdt->wdd);
+ 	if (ret)
+diff --git a/drivers/watchdog/lpc18xx_wdt.c b/drivers/watchdog/lpc18xx_wdt.c
+index 78cf11c949416..60b6d74f267dd 100644
+--- a/drivers/watchdog/lpc18xx_wdt.c
++++ b/drivers/watchdog/lpc18xx_wdt.c
+@@ -292,7 +292,7 @@ static int lpc18xx_wdt_remove(struct platform_device *pdev)
+ 	struct lpc18xx_wdt_dev *lpc18xx_wdt = platform_get_drvdata(pdev);
+ 
+ 	dev_warn(&pdev->dev, "I quit now, hardware will probably reboot!\n");
+-	del_timer(&lpc18xx_wdt->timer);
++	del_timer_sync(&lpc18xx_wdt->timer);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/watchdog/sbc60xxwdt.c b/drivers/watchdog/sbc60xxwdt.c
+index a947a63fb44ae..7b974802dfc7c 100644
+--- a/drivers/watchdog/sbc60xxwdt.c
++++ b/drivers/watchdog/sbc60xxwdt.c
+@@ -146,7 +146,7 @@ static void wdt_startup(void)
+ static void wdt_turnoff(void)
+ {
+ 	/* Stop the timer */
+-	del_timer(&timer);
++	del_timer_sync(&timer);
+ 	inb_p(wdt_stop);
+ 	pr_info("Watchdog timer is now disabled...\n");
+ }
+diff --git a/drivers/watchdog/sc520_wdt.c b/drivers/watchdog/sc520_wdt.c
+index e66e6b905964b..ca65468f4b9ce 100644
+--- a/drivers/watchdog/sc520_wdt.c
++++ b/drivers/watchdog/sc520_wdt.c
+@@ -186,7 +186,7 @@ static int wdt_startup(void)
+ static int wdt_turnoff(void)
+ {
+ 	/* Stop the timer */
+-	del_timer(&timer);
++	del_timer_sync(&timer);
+ 
+ 	/* Stop the watchdog */
+ 	wdt_config(0);
+diff --git a/drivers/watchdog/w83877f_wdt.c b/drivers/watchdog/w83877f_wdt.c
+index 5772cc5d37804..f2650863fd027 100644
+--- a/drivers/watchdog/w83877f_wdt.c
++++ b/drivers/watchdog/w83877f_wdt.c
+@@ -166,7 +166,7 @@ static void wdt_startup(void)
+ static void wdt_turnoff(void)
+ {
+ 	/* Stop the timer */
+-	del_timer(&timer);
++	del_timer_sync(&timer);
+ 
+ 	wdt_change(WDT_DISABLE);
+ 
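[Editor's note: several watchdog drivers above switch del_timer() to del_timer_sync() on teardown — stopping a timer is not enough if its handler may still be mid-flight on another CPU when the driver's state goes away. A userspace analogue, using a thread join as the "wait for the handler to finish" step; compile with -lpthread.]

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct wdt_state { int armed; };

static void *timer_handler(void *arg)
{
	struct wdt_state *s = arg;

	usleep(1000);			/* the handler is still running... */
	printf("handler sees armed=%d\n", s->armed);
	return NULL;
}

int main(void)
{
	struct wdt_state *s = calloc(1, sizeof(*s));
	pthread_t t;

	s->armed = 1;
	pthread_create(&t, NULL, timer_handler, s);

	/*
	 * del_timer() only stops future expiries; freeing s right here
	 * could race with the handler above. del_timer_sync() additionally
	 * waits for a running handler, which is what the join models.
	 */
	pthread_join(t, NULL);
	free(s);
	return 0;
}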
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 6d5c4e45cfef0..cf53713f8aa01 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1499,7 +1499,15 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ 	if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_BALANCE))
+ 		return;
+ 
+-	mutex_lock(&fs_info->reclaim_bgs_lock);
++	/*
++	 * Long running balances can keep us blocked here for eternity, so
++	 * simply skip reclaim if we're unable to get the mutex.
++	 */
++	if (!mutex_trylock(&fs_info->reclaim_bgs_lock)) {
++		btrfs_exclop_finish(fs_info);
++		return;
++	}
++
+ 	spin_lock(&fs_info->unused_bgs_lock);
+ 	while (!list_empty(&fs_info->reclaim_bgs)) {
+ 		bg = list_first_entry(&fs_info->reclaim_bgs,
+@@ -1539,7 +1547,7 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ 			goto next;
+ 
+ 		btrfs_info(fs_info, "reclaiming chunk %llu with %llu%% used",
+-				bg->start, div_u64(bg->used * 100, bg->length));
++				bg->start, div64_u64(bg->used * 100, bg->length));
+ 		trace_btrfs_reclaim_block_group(bg);
+ 		ret = btrfs_relocate_chunk(fs_info, bg->start);
+ 		if (ret)
+@@ -2192,6 +2200,13 @@ error:
+ 	return ret;
+ }
+ 
++/*
++ * This function, insert_block_group_item(), belongs to phase 2 of chunk
++ * allocation.
++ *
++ * See the comment at btrfs_chunk_alloc() for details about the chunk allocation
++ * phases.
++ */
+ static int insert_block_group_item(struct btrfs_trans_handle *trans,
+ 				   struct btrfs_block_group *block_group)
+ {
+@@ -2214,15 +2229,19 @@ static int insert_block_group_item(struct btrfs_trans_handle *trans,
+ 	return btrfs_insert_item(trans, root, &key, &bgi, sizeof(bgi));
+ }
+ 
++/*
++ * This function, btrfs_create_pending_block_groups(), belongs to phase 2 of
++ * chunk allocation.
++ *
++ * See the comment at btrfs_chunk_alloc() for details about the chunk allocation
++ * phases.
++ */
+ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+ 	struct btrfs_block_group *block_group;
+ 	int ret = 0;
+ 
+-	if (!trans->can_flush_pending_bgs)
+-		return;
+-
+ 	while (!list_empty(&trans->new_bgs)) {
+ 		int index;
+ 
+@@ -2237,6 +2256,13 @@ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
+ 		ret = insert_block_group_item(trans, block_group);
+ 		if (ret)
+ 			btrfs_abort_transaction(trans, ret);
++		if (!block_group->chunk_item_inserted) {
++			mutex_lock(&fs_info->chunk_mutex);
++			ret = btrfs_chunk_alloc_add_chunk_item(trans, block_group);
++			mutex_unlock(&fs_info->chunk_mutex);
++			if (ret)
++				btrfs_abort_transaction(trans, ret);
++		}
+ 		ret = btrfs_finish_chunk_alloc(trans, block_group->start,
+ 					block_group->length);
+ 		if (ret)
+@@ -2260,8 +2286,9 @@ next:
+ 	btrfs_trans_release_chunk_metadata(trans);
+ }
+ 
+-int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
+-			   u64 type, u64 chunk_offset, u64 size)
++struct btrfs_block_group *btrfs_make_block_group(struct btrfs_trans_handle *trans,
++						 u64 bytes_used, u64 type,
++						 u64 chunk_offset, u64 size)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+ 	struct btrfs_block_group *cache;
+@@ -2271,7 +2298,7 @@ int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
+ 
+ 	cache = btrfs_create_block_group_cache(fs_info, chunk_offset);
+ 	if (!cache)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	cache->length = size;
+ 	set_free_space_tree_thresholds(cache);
+@@ -2285,7 +2312,7 @@ int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
+ 	ret = btrfs_load_block_group_zone_info(cache, true);
+ 	if (ret) {
+ 		btrfs_put_block_group(cache);
+-		return ret;
++		return ERR_PTR(ret);
+ 	}
+ 
+ 	ret = exclude_super_stripes(cache);
+@@ -2293,7 +2320,7 @@ int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
+ 		/* We may have excluded something, so call this just in case */
+ 		btrfs_free_excluded_extents(cache);
+ 		btrfs_put_block_group(cache);
+-		return ret;
++		return ERR_PTR(ret);
+ 	}
+ 
+ 	add_new_free_space(cache, chunk_offset, chunk_offset + size);
+@@ -2320,7 +2347,7 @@ int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
+ 	if (ret) {
+ 		btrfs_remove_free_space_cache(cache);
+ 		btrfs_put_block_group(cache);
+-		return ret;
++		return ERR_PTR(ret);
+ 	}
+ 
+ 	/*
+@@ -2339,7 +2366,7 @@ int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
+ 	btrfs_update_delayed_refs_rsv(trans);
+ 
+ 	set_avail_alloc_bits(fs_info, type);
+-	return 0;
++	return cache;
+ }
+ 
+ /*
+@@ -3219,11 +3246,203 @@ int btrfs_force_chunk_alloc(struct btrfs_trans_handle *trans, u64 type)
+ 	return btrfs_chunk_alloc(trans, alloc_flags, CHUNK_ALLOC_FORCE);
+ }
+ 
++static int do_chunk_alloc(struct btrfs_trans_handle *trans, u64 flags)
++{
++	struct btrfs_block_group *bg;
++	int ret;
++
++	/*
++	 * Check if we have enough space in the system space info because we
++	 * will need to update device items in the chunk btree and insert a new
++	 * chunk item in the chunk btree as well. This will allocate a new
++	 * system block group if needed.
++	 */
++	check_system_chunk(trans, flags);
++
++	bg = btrfs_alloc_chunk(trans, flags);
++	if (IS_ERR(bg)) {
++		ret = PTR_ERR(bg);
++		goto out;
++	}
++
++	/*
++	 * If this is a system chunk allocation then stop right here and do not
++	 * add the chunk item to the chunk btree. This is to prevent a deadlock
++	 * because this system chunk allocation can be triggered while COWing
++	 * some extent buffer of the chunk btree and while holding a lock on a
++	 * parent extent buffer, in which case attempting to insert the chunk
++	 * item (or update the device item) would result in a deadlock on that
++	 * parent extent buffer. In this case defer the chunk btree updates to
++	 * the second phase of chunk allocation and keep our reservation until
++	 * the second phase completes.
++	 *
++	 * This is a rare case and can only be triggered by the very few cases
++	 * we have where we need to touch the chunk btree outside chunk allocation
++	 * and chunk removal. These cases are basically adding a device, removing
++	 * a device or resizing a device.
++	 */
++	if (flags & BTRFS_BLOCK_GROUP_SYSTEM)
++		return 0;
++
++	ret = btrfs_chunk_alloc_add_chunk_item(trans, bg);
++	/*
++	 * Normally we are not expected to fail with -ENOSPC here, since we have
++	 * previously reserved space in the system space_info and allocated one
++	 * new system chunk if necessary. However there are two exceptions:
++	 *
++	 * 1) We may have enough free space in the system space_info but all the
++	 *    existing system block groups have a profile which can not be used
++	 *    for extent allocation.
++	 *
++	 *    This happens when mounting in degraded mode. For example we have a
++	 *    RAID1 filesystem with 2 devices, lose one device and mount the fs
++	 *    using the other device in degraded mode. If we then allocate a chunk,
++	 *    we may have enough free space in the existing system space_info, but
++	 *    none of the block groups can be used for extent allocation since they
++	 *    have a RAID1 profile, and because we are in degraded mode with a
++	 *    single device, we are forced to allocate a new system chunk with a
++	 *    SINGLE profile. Making check_system_chunk() iterate over all system
++	 *    block groups and check if they have a usable profile and enough space
++	 *    can be slow on very large filesystems, so we tolerate the -ENOSPC and
++	 *    try again after forcing allocation of a new system chunk. Like this
++	 *    we avoid paying the cost of that search in normal circumstances, when
++	 *    we were not mounted in degraded mode;
++	 *
++ *    2) We had enough free space in the system space_info, and one suitable
++	 *    block group to allocate from when we called check_system_chunk()
++	 *    above. However right after we called it, the only system block group
++	 *    with enough free space got turned into RO mode by a running scrub,
++	 *    and in this case we have to allocate a new one and retry. We only
++	 *    need do this allocate and retry once, since we have a transaction
++	 *    handle and scrub uses the commit root to search for block groups.
++	 */
++	if (ret == -ENOSPC) {
++		const u64 sys_flags = btrfs_system_alloc_profile(trans->fs_info);
++		struct btrfs_block_group *sys_bg;
++
++		sys_bg = btrfs_alloc_chunk(trans, sys_flags);
++		if (IS_ERR(sys_bg)) {
++			ret = PTR_ERR(sys_bg);
++			btrfs_abort_transaction(trans, ret);
++			goto out;
++		}
++
++		ret = btrfs_chunk_alloc_add_chunk_item(trans, sys_bg);
++		if (ret) {
++			btrfs_abort_transaction(trans, ret);
++			goto out;
++		}
++
++		ret = btrfs_chunk_alloc_add_chunk_item(trans, bg);
++		if (ret) {
++			btrfs_abort_transaction(trans, ret);
++			goto out;
++		}
++	} else if (ret) {
++		btrfs_abort_transaction(trans, ret);
++		goto out;
++	}
++out:
++	btrfs_trans_release_chunk_metadata(trans);
++
++	return ret;
++}
++
+ /*
+- * If force is CHUNK_ALLOC_FORCE:
++ * Chunk allocation is done in 2 phases:
++ *
++ * 1) Phase 1 - through btrfs_chunk_alloc() we allocate device extents for
++ *    the chunk, the chunk mapping, create its block group and add the items
++ *    that belong in the chunk btree to it - more specifically, we need to
++ *    update device items in the chunk btree and add a new chunk item to it.
++ *
++ * 2) Phase 2 - through btrfs_create_pending_block_groups(), we add the block
++ *    group item to the extent btree and the device extent items to the devices
++ *    btree.
++ *
++ * This is done to prevent deadlocks. For example when COWing a node from the
++ * extent btree we are holding a write lock on the node's parent and if we
++ * trigger chunk allocation and attempted to insert the new block group item
++ * in the extent btree right way, we could deadlock because the path for the
++ * insertion can include that parent node. At first glance it seems impossible
++ * to trigger chunk allocation after starting a transaction since tasks should
++ * reserve enough transaction units (metadata space); however, while that is true
++ * most of the time, chunk allocation may still be triggered for several reasons:
++ *
++ * 1) When reserving metadata, we check if there is enough free space in the
++ *    metadata space_info and therefore don't trigger allocation of a new chunk.
++ *    However later when the task actually tries to COW an extent buffer from
++ *    the extent btree or from the device btree for example, it is forced to
++ *    allocate a new block group (chunk) because the only one that had enough
++ *    free space was just turned to RO mode by a running scrub for example (or
++ *    device replace, block group reclaim thread, etc), so we can not use it
++ *    for allocating an extent and end up being forced to allocate a new one;
++ *
++ * 2) Because we only check that the metadata space_info has enough free bytes,
++ *    we end up not allocating a new metadata chunk in that case. However if
++ *    the filesystem was mounted in degraded mode, none of the existing block
++ *    groups might be suitable for extent allocation due to their incompatible
++ *    profile (e.g. mounting a 2-device filesystem, where all block groups
++ *    use a RAID1 profile, in degraded mode with a single device). In this case
++ *    when the task attempts to COW some extent buffer of the extent btree for
++ *    example, it will trigger allocation of a new metadata block group with a
++ *    suitable profile (SINGLE profile in the example of the degraded mount of
++ *    the RAID1 filesystem);
++ *
++ * 3) The task has reserved enough transaction units / metadata space, but when
++ *    it attempts to COW an extent buffer from the extent or device btree for
++ *    example, it does not find any free extent in any metadata block group,
++ *    and is therefore forced to try to allocate a new metadata block group.
++ *    This is because some other task allocated all available extents in the
++ *    meantime - this typically happens with tasks that don't reserve space
++ *    properly, either intentionally or as a bug. One example where this is
++ *    done intentionally is fsync, as it does not reserve any transaction units
++ *    and ends up allocating a variable number of metadata extents for log
++ *    tree extent buffers.
++ *
++ * We also need this two-phase setup when adding a device to a filesystem with
++ * a seed device - we must create new metadata and system chunks without adding
++ * any of the block group items to the chunk, extent and device btrees. If we
++ * did not do it this way, we would get ENOSPC when attempting to update those
++ * btrees, since all the chunks from the seed device are read-only.
++ *
++ * Phase 1 does the updates and insertions to the chunk btree because if we had
++ * it done in phase 2 and had a thundering herd of tasks allocating chunks in
++ * parallel, we risk having too many system chunks allocated by many tasks if
++ * many tasks reach phase 1 without the previous ones completing phase 2. In the
++ * extreme case this leads to exhaustion of the system chunk array in the
++ * superblock. This is easier to trigger if using a btree node/leaf size of 64K
++ * and with RAID filesystems (so we have more device items in the chunk btree).
++ * This has happened before and commit eafa4fd0ad0607 ("btrfs: fix exhaustion of
++ * the system chunk array due to concurrent allocations") provides more details.
++ *
++ * For allocation of system chunks, we defer the updates and insertions into the
++ * chunk btree to phase 2. This is to prevent deadlocks on extent buffers because
++ * if the chunk allocation is triggered while COWing an extent buffer of the
++ * chunk btree, we are holding a lock on the parent of that extent buffer and
++ * doing the chunk btree updates and insertions can require locking that parent.
++ * This is needed for the few rare cases where we update the chunk btree
++ * outside of chunk allocation and chunk removal: adding a device, removing
++ * a device or resizing a device.
++ *
++ * The reservation of system space, done through check_system_chunk(), as well
++ * as all the updates and insertions into the chunk btree must be done while
++ * holding fs_info->chunk_mutex. This is important to guarantee that while COWing
++ * an extent buffer from the chunk btree we never trigger allocation of a new
++ * system chunk, which would result in a deadlock (trying to lock an extent
++ * buffer of the chunk btree twice: the first time before triggering the chunk
++ * allocation and the second time during chunk allocation while attempting to
++ * update the chunk btree). The system chunk array is also updated while holding
++ * that mutex. The same logic applies to removing chunks - we must reserve system
++ * space, update the chunk btree and the system chunk array in the superblock
++ * while holding fs_info->chunk_mutex.
++ *
++ * This function, btrfs_chunk_alloc(), belongs to phase 1.
++ *
++ * If @force is CHUNK_ALLOC_FORCE:
+  *    - return 1 if it successfully allocates a chunk,
+  *    - return errors including -ENOSPC otherwise.
+- * If force is NOT CHUNK_ALLOC_FORCE:
++ * If @force is NOT CHUNK_ALLOC_FORCE:
+  *    - return 0 if it doesn't need to allocate a new chunk,
+  *    - return 1 if it successfully allocates a chunk,
+  *    - return errors including -ENOSPC otherwise.
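
Condensed into a runnable shape, the two phases described above look like the
following self-contained userspace model. The helpers here are hypothetical;
the real entry points are btrfs_chunk_alloc() for phase 1 and
btrfs_create_pending_block_groups() for phase 2.

#include <stdbool.h>
#include <stdio.h>

struct bg_model { bool is_system, chunk_item_inserted; };

/* Phase 1: allocate device extents and the chunk mapping; do the chunk
 * btree updates now, except for SYSTEM chunks, whose chunk item is deferred
 * to phase 2 because the caller may be COWing an extent buffer of the chunk
 * btree itself. */
static void phase1(struct bg_model *bg)
{
	if (!bg->is_system) {
		bg->chunk_item_inserted = true;
		printf("phase 1: chunk item inserted\n");
	}
}

/* Phase 2: add the block group and device extent items, plus any deferred
 * chunk item. */
static void phase2(struct bg_model *bg)
{
	if (!bg->chunk_item_inserted) {
		bg->chunk_item_inserted = true;
		printf("phase 2: deferred chunk item inserted\n");
	}
	printf("phase 2: block group and device extent items added\n");
}

int main(void)
{
	struct bg_model sys = { .is_system = true };

	phase1(&sys);
	phase2(&sys);
	return 0;
}
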
+@@ -3240,6 +3459,13 @@ int btrfs_chunk_alloc(struct btrfs_trans_handle *trans, u64 flags,
+ 	/* Don't re-enter if we're already allocating a chunk */
+ 	if (trans->allocating_chunk)
+ 		return -ENOSPC;
++	/*
++	 * If we are removing a chunk, don't re-enter or we would deadlock.
++	 * System space reservation and system chunk allocation are done by the
++	 * chunk remove operation (btrfs_remove_chunk()).
++	 */
++	if (trans->removing_chunk)
++		return -ENOSPC;
+ 
+ 	space_info = btrfs_find_space_info(fs_info, flags);
+ 	ASSERT(space_info);
+@@ -3303,13 +3529,7 @@ int btrfs_chunk_alloc(struct btrfs_trans_handle *trans, u64 flags,
+ 			force_metadata_allocation(fs_info);
+ 	}
+ 
+-	/*
+-	 * Check if we have enough space in SYSTEM chunk because we may need
+-	 * to update devices.
+-	 */
+-	check_system_chunk(trans, flags);
+-
+-	ret = btrfs_alloc_chunk(trans, flags);
++	ret = do_chunk_alloc(trans, flags);
+ 	trans->allocating_chunk = false;
+ 
+ 	spin_lock(&space_info->lock);
+@@ -3328,22 +3548,6 @@ out:
+ 	space_info->chunk_alloc = 0;
+ 	spin_unlock(&space_info->lock);
+ 	mutex_unlock(&fs_info->chunk_mutex);
+-	/*
+-	 * When we allocate a new chunk we reserve space in the chunk block
+-	 * reserve to make sure we can COW nodes/leafs in the chunk tree or
+-	 * add new nodes/leafs to it if we end up needing to do it when
+-	 * inserting the chunk item and updating device items as part of the
+-	 * second phase of chunk allocation, performed by
+-	 * btrfs_finish_chunk_alloc(). So make sure we don't accumulate a
+-	 * large number of new block groups to create in our transaction
+-	 * handle's new_bgs list to avoid exhausting the chunk block reserve
+-	 * in extreme cases - like having a single transaction create many new
+-	 * block groups when starting to write out the free space caches of all
+-	 * the block groups that were made dirty during the lifetime of the
+-	 * transaction.
+-	 */
+-	if (trans->chunk_bytes_reserved >= (u64)SZ_2M)
+-		btrfs_create_pending_block_groups(trans);
+ 
+ 	return ret;
+ }
+@@ -3364,7 +3568,6 @@ static u64 get_profile_num_devs(struct btrfs_fs_info *fs_info, u64 type)
+  */
+ void check_system_chunk(struct btrfs_trans_handle *trans, u64 type)
+ {
+-	struct btrfs_transaction *cur_trans = trans->transaction;
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+ 	struct btrfs_space_info *info;
+ 	u64 left;
+@@ -3379,7 +3582,6 @@ void check_system_chunk(struct btrfs_trans_handle *trans, u64 type)
+ 	lockdep_assert_held(&fs_info->chunk_mutex);
+ 
+ 	info = btrfs_find_space_info(fs_info, BTRFS_BLOCK_GROUP_SYSTEM);
+-again:
+ 	spin_lock(&info->lock);
+ 	left = info->total_bytes - btrfs_space_info_used(info, true);
+ 	spin_unlock(&info->lock);
+@@ -3398,76 +3600,39 @@ again:
+ 
+ 	if (left < thresh) {
+ 		u64 flags = btrfs_system_alloc_profile(fs_info);
+-		u64 reserved = atomic64_read(&cur_trans->chunk_bytes_reserved);
+-
+-		/*
+-		 * If there's not available space for the chunk tree (system
+-		 * space) and there are other tasks that reserved space for
+-		 * creating a new system block group, wait for them to complete
+-		 * the creation of their system block group and release excess
+-		 * reserved space. We do this because:
+-		 *
+-		 * *) We can end up allocating more system chunks than necessary
+-		 *    when there are multiple tasks that are concurrently
+-		 *    allocating block groups, which can lead to exhaustion of
+-		 *    the system array in the superblock;
+-		 *
+-		 * *) If we allocate extra and unnecessary system block groups,
+-		 *    despite being empty for a long time, and possibly forever,
+-		 *    they end not being added to the list of unused block groups
+-		 *    because that typically happens only when deallocating the
+-		 *    last extent from a block group - which never happens since
+-		 *    we never allocate from them in the first place. The few
+-		 *    exceptions are when mounting a filesystem or running scrub,
+-		 *    which add unused block groups to the list of unused block
+-		 *    groups, to be deleted by the cleaner kthread.
+-		 *    And even when they are added to the list of unused block
+-		 *    groups, it can take a long time until they get deleted,
+-		 *    since the cleaner kthread might be sleeping or busy with
+-		 *    other work (deleting subvolumes, running delayed iputs,
+-		 *    defrag scheduling, etc);
+-		 *
+-		 * This is rare in practice, but can happen when too many tasks
+-		 * are allocating blocks groups in parallel (via fallocate())
+-		 * and before the one that reserved space for a new system block
+-		 * group finishes the block group creation and releases the space
+-		 * reserved in excess (at btrfs_create_pending_block_groups()),
+-		 * other tasks end up here and see free system space temporarily
+-		 * not enough for updating the chunk tree.
+-		 *
+-		 * We unlock the chunk mutex before waiting for such tasks and
+-		 * lock it again after the wait, otherwise we would deadlock.
+-		 * It is safe to do so because allocating a system chunk is the
+-		 * first thing done while allocating a new block group.
+-		 */
+-		if (reserved > trans->chunk_bytes_reserved) {
+-			const u64 min_needed = reserved - thresh;
+-
+-			mutex_unlock(&fs_info->chunk_mutex);
+-			wait_event(cur_trans->chunk_reserve_wait,
+-			   atomic64_read(&cur_trans->chunk_bytes_reserved) <=
+-			   min_needed);
+-			mutex_lock(&fs_info->chunk_mutex);
+-			goto again;
+-		}
++		struct btrfs_block_group *bg;
+ 
+ 		/*
+ 		 * Ignore failure to create system chunk. We might end up not
+ 		 * needing it, as we might not need to COW all nodes/leafs from
+ 		 * the paths we visit in the chunk tree (they were already COWed
+ 		 * or created in the current transaction for example).
++		 *
++		 * Also, if our caller is allocating a system chunk, do not
++		 * attempt to insert the chunk item in the chunk btree, as we
++		 * could deadlock on an extent buffer since our caller may be
++		 * COWing an extent buffer from the chunk btree.
+ 		 */
+-		ret = btrfs_alloc_chunk(trans, flags);
++		bg = btrfs_alloc_chunk(trans, flags);
++		if (IS_ERR(bg)) {
++			ret = PTR_ERR(bg);
++		} else if (!(type & BTRFS_BLOCK_GROUP_SYSTEM)) {
++			/*
++			 * If we fail to add the chunk item here, we end up
++			 * trying again at phase 2 of chunk allocation, at
++			 * btrfs_create_pending_block_groups(). So ignore
++			 * any error here.
++			 */
++			btrfs_chunk_alloc_add_chunk_item(trans, bg);
++		}
+ 	}
+ 
+ 	if (!ret) {
+ 		ret = btrfs_block_rsv_add(fs_info->chunk_root,
+ 					  &fs_info->chunk_block_rsv,
+ 					  thresh, BTRFS_RESERVE_NO_FLUSH);
+-		if (!ret) {
+-			atomic64_add(thresh, &cur_trans->chunk_bytes_reserved);
++		if (!ret)
+ 			trans->chunk_bytes_reserved += thresh;
+-		}
+ 	}
+ }
+ 
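
The reworked check_system_chunk() above has a simple shape: compute the free
system space, allocate a new SYSTEM chunk if the free space falls below the
worst-case estimate, then reserve that estimate for the chunk btree COWs this
transaction may do. A self-contained userspace sketch of that shape; all sizes
are made up and the block reserve is modelled as a plain counter.

#include <stdint.h>
#include <stdio.h>

struct sys_space { uint64_t total; uint64_t used; };

static void check_system_chunk_model(struct sys_space *s, uint64_t thresh,
				     uint64_t *reserved)
{
	uint64_t left = s->total - s->used;

	if (left < thresh) {
		/* btrfs_alloc_chunk(SYSTEM) equivalent: grow the space. */
		printf("left %llu < thresh %llu: allocate new SYSTEM chunk\n",
		       (unsigned long long)left, (unsigned long long)thresh);
		s->total += 32ULL << 20;	/* pretend a 32M chunk */
	}
	/* btrfs_block_rsv_add() equivalent: reserve the worst-case bytes. */
	*reserved += thresh;
}

int main(void)
{
	struct sys_space s = { .total = 8ULL << 20, .used = 7ULL << 20 };
	uint64_t reserved = 0;

	check_system_chunk_model(&s, 2ULL << 20, &reserved);
	printf("reserved=%llu total=%llu\n",
	       (unsigned long long)reserved, (unsigned long long)s.total);
	return 0;
}
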
+diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
+index 7b927425dc715..c72a71efcb187 100644
+--- a/fs/btrfs/block-group.h
++++ b/fs/btrfs/block-group.h
+@@ -97,6 +97,7 @@ struct btrfs_block_group {
+ 	unsigned int removed:1;
+ 	unsigned int to_copy:1;
+ 	unsigned int relocating_repair:1;
++	unsigned int chunk_item_inserted:1;
+ 
+ 	int disk_cache_state;
+ 
+@@ -268,8 +269,9 @@ void btrfs_reclaim_bgs_work(struct work_struct *work);
+ void btrfs_reclaim_bgs(struct btrfs_fs_info *fs_info);
+ void btrfs_mark_bg_to_reclaim(struct btrfs_block_group *bg);
+ int btrfs_read_block_groups(struct btrfs_fs_info *info);
+-int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
+-			   u64 type, u64 chunk_offset, u64 size);
++struct btrfs_block_group *btrfs_make_block_group(struct btrfs_trans_handle *trans,
++						 u64 bytes_used, u64 type,
++						 u64 chunk_offset, u64 size);
+ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans);
+ int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
+ 			     bool do_chunk_alloc);
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 4bc3ca2cbd7d4..c5c08c87e1303 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -364,49 +364,6 @@ static noinline int update_ref_for_cow(struct btrfs_trans_handle *trans,
+ 	return 0;
+ }
+ 
+-static struct extent_buffer *alloc_tree_block_no_bg_flush(
+-					  struct btrfs_trans_handle *trans,
+-					  struct btrfs_root *root,
+-					  u64 parent_start,
+-					  const struct btrfs_disk_key *disk_key,
+-					  int level,
+-					  u64 hint,
+-					  u64 empty_size,
+-					  enum btrfs_lock_nesting nest)
+-{
+-	struct btrfs_fs_info *fs_info = root->fs_info;
+-	struct extent_buffer *ret;
+-
+-	/*
+-	 * If we are COWing a node/leaf from the extent, chunk, device or free
+-	 * space trees, make sure that we do not finish block group creation of
+-	 * pending block groups. We do this to avoid a deadlock.
+-	 * COWing can result in allocation of a new chunk, and flushing pending
+-	 * block groups (btrfs_create_pending_block_groups()) can be triggered
+-	 * when finishing allocation of a new chunk. Creation of a pending block
+-	 * group modifies the extent, chunk, device and free space trees,
+-	 * therefore we could deadlock with ourselves since we are holding a
+-	 * lock on an extent buffer that btrfs_create_pending_block_groups() may
+-	 * try to COW later.
+-	 * For similar reasons, we also need to delay flushing pending block
+-	 * groups when splitting a leaf or node, from one of those trees, since
+-	 * we are holding a write lock on it and its parent or when inserting a
+-	 * new root node for one of those trees.
+-	 */
+-	if (root == fs_info->extent_root ||
+-	    root == fs_info->chunk_root ||
+-	    root == fs_info->dev_root ||
+-	    root == fs_info->free_space_root)
+-		trans->can_flush_pending_bgs = false;
+-
+-	ret = btrfs_alloc_tree_block(trans, root, parent_start,
+-				     root->root_key.objectid, disk_key, level,
+-				     hint, empty_size, nest);
+-	trans->can_flush_pending_bgs = true;
+-
+-	return ret;
+-}
+-
+ /*
+  * does the dirty work in cow of a single block.  The parent block (if
+  * supplied) is updated to point to the new cow copy.  The new buffer is marked
+@@ -455,8 +412,9 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
+ 	if ((root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) && parent)
+ 		parent_start = parent->start;
+ 
+-	cow = alloc_tree_block_no_bg_flush(trans, root, parent_start, &disk_key,
+-					   level, search_start, empty_size, nest);
++	cow = btrfs_alloc_tree_block(trans, root, parent_start,
++				     root->root_key.objectid, &disk_key, level,
++				     search_start, empty_size, nest);
+ 	if (IS_ERR(cow))
+ 		return PTR_ERR(cow);
+ 
+@@ -2458,9 +2416,9 @@ static noinline int insert_new_root(struct btrfs_trans_handle *trans,
+ 	else
+ 		btrfs_node_key(lower, &lower_key, 0);
+ 
+-	c = alloc_tree_block_no_bg_flush(trans, root, 0, &lower_key, level,
+-					 root->node->start, 0,
+-					 BTRFS_NESTING_NEW_ROOT);
++	c = btrfs_alloc_tree_block(trans, root, 0, root->root_key.objectid,
++				   &lower_key, level, root->node->start, 0,
++				   BTRFS_NESTING_NEW_ROOT);
+ 	if (IS_ERR(c))
+ 		return PTR_ERR(c);
+ 
+@@ -2589,8 +2547,9 @@ static noinline int split_node(struct btrfs_trans_handle *trans,
+ 	mid = (c_nritems + 1) / 2;
+ 	btrfs_node_key(c, &disk_key, mid);
+ 
+-	split = alloc_tree_block_no_bg_flush(trans, root, 0, &disk_key, level,
+-					     c->start, 0, BTRFS_NESTING_SPLIT);
++	split = btrfs_alloc_tree_block(trans, root, 0, root->root_key.objectid,
++				       &disk_key, level, c->start, 0,
++				       BTRFS_NESTING_SPLIT);
+ 	if (IS_ERR(split))
+ 		return PTR_ERR(split);
+ 
+@@ -3381,10 +3340,10 @@ again:
+ 	 * BTRFS_NESTING_SPLIT_THE_SPLITTENING if we need to, but for now just
+ 	 * use BTRFS_NESTING_NEW_ROOT.
+ 	 */
+-	right = alloc_tree_block_no_bg_flush(trans, root, 0, &disk_key, 0,
+-					     l->start, 0, num_doubles ?
+-					     BTRFS_NESTING_NEW_ROOT :
+-					     BTRFS_NESTING_SPLIT);
++	right = btrfs_alloc_tree_block(trans, root, 0, root->root_key.objectid,
++				       &disk_key, 0, l->start, 0,
++				       num_doubles ? BTRFS_NESTING_NEW_ROOT :
++				       BTRFS_NESTING_SPLIT);
+ 	if (IS_ERR(right))
+ 		return PTR_ERR(right);
+ 
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 9229549697ce7..272eff4441bcf 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2260,13 +2260,127 @@ bool btrfs_bio_fits_in_ordered_extent(struct page *page, struct bio *bio,
+ 	return ret;
+ }
+ 
++/*
++ * Split an extent_map at [start, start + len]
++ *
++ * This function is intended to be used only for extract_ordered_extent().
++ */
++static int split_zoned_em(struct btrfs_inode *inode, u64 start, u64 len,
++			  u64 pre, u64 post)
++{
++	struct extent_map_tree *em_tree = &inode->extent_tree;
++	struct extent_map *em;
++	struct extent_map *split_pre = NULL;
++	struct extent_map *split_mid = NULL;
++	struct extent_map *split_post = NULL;
++	int ret = 0;
++	int modified;
++	unsigned long flags;
++
++	/* Sanity check */
++	if (pre == 0 && post == 0)
++		return 0;
++
++	split_pre = alloc_extent_map();
++	if (pre)
++		split_mid = alloc_extent_map();
++	if (post)
++		split_post = alloc_extent_map();
++	if (!split_pre || (pre && !split_mid) || (post && !split_post)) {
++		ret = -ENOMEM;
++		goto out;
++	}
++
++	ASSERT(pre + post < len);
++
++	lock_extent(&inode->io_tree, start, start + len - 1);
++	write_lock(&em_tree->lock);
++	em = lookup_extent_mapping(em_tree, start, len);
++	if (!em) {
++		ret = -EIO;
++		goto out_unlock;
++	}
++
++	ASSERT(em->len == len);
++	ASSERT(!test_bit(EXTENT_FLAG_COMPRESSED, &em->flags));
++	ASSERT(em->block_start < EXTENT_MAP_LAST_BYTE);
++
++	flags = em->flags;
++	clear_bit(EXTENT_FLAG_PINNED, &em->flags);
++	clear_bit(EXTENT_FLAG_LOGGING, &flags);
++	modified = !list_empty(&em->list);
++
++	/* First, replace the em with a new extent_map starting from em->start */
++	split_pre->start = em->start;
++	split_pre->len = (pre ? pre : em->len - post);
++	split_pre->orig_start = split_pre->start;
++	split_pre->block_start = em->block_start;
++	split_pre->block_len = split_pre->len;
++	split_pre->orig_block_len = split_pre->block_len;
++	split_pre->ram_bytes = split_pre->len;
++	split_pre->flags = flags;
++	split_pre->compress_type = em->compress_type;
++	split_pre->generation = em->generation;
++
++	replace_extent_mapping(em_tree, em, split_pre, modified);
++
++	/*
++	 * Now we only have an extent_map at:
++	 *     [em->start, em->start + pre] if pre != 0
++	 *     [em->start, em->start + em->len - post] if pre == 0
++	 */
++
++	if (pre) {
++		/* Insert the middle extent_map */
++		split_mid->start = em->start + pre;
++		split_mid->len = em->len - pre - post;
++		split_mid->orig_start = split_mid->start;
++		split_mid->block_start = em->block_start + pre;
++		split_mid->block_len = split_mid->len;
++		split_mid->orig_block_len = split_mid->block_len;
++		split_mid->ram_bytes = split_mid->len;
++		split_mid->flags = flags;
++		split_mid->compress_type = em->compress_type;
++		split_mid->generation = em->generation;
++		add_extent_mapping(em_tree, split_mid, modified);
++	}
++
++	if (post) {
++		split_post->start = em->start + em->len - post;
++		split_post->len = post;
++		split_post->orig_start = split_post->start;
++		split_post->block_start = em->block_start + em->len - post;
++		split_post->block_len = split_post->len;
++		split_post->orig_block_len = split_post->block_len;
++		split_post->ram_bytes = split_post->len;
++		split_post->flags = flags;
++		split_post->compress_type = em->compress_type;
++		split_post->generation = em->generation;
++		add_extent_mapping(em_tree, split_post, modified);
++	}
++
++	/* Once for us */
++	free_extent_map(em);
++	/* Once for the tree */
++	free_extent_map(em);
++
++out_unlock:
++	write_unlock(&em_tree->lock);
++	unlock_extent(&inode->io_tree, start, start + len - 1);
++out:
++	free_extent_map(split_pre);
++	free_extent_map(split_mid);
++	free_extent_map(split_post);
++
++	return ret;
++}
++
+ static blk_status_t extract_ordered_extent(struct btrfs_inode *inode,
+ 					   struct bio *bio, loff_t file_offset)
+ {
+ 	struct btrfs_ordered_extent *ordered;
+-	struct extent_map *em = NULL, *em_new = NULL;
+-	struct extent_map_tree *em_tree = &inode->extent_tree;
+ 	u64 start = (u64)bio->bi_iter.bi_sector << SECTOR_SHIFT;
++	u64 file_len;
+ 	u64 len = bio->bi_iter.bi_size;
+ 	u64 end = start + len;
+ 	u64 ordered_end;
+@@ -2306,41 +2420,16 @@ static blk_status_t extract_ordered_extent(struct btrfs_inode *inode,
+ 		goto out;
+ 	}
+ 
++	file_len = ordered->num_bytes;
+ 	pre = start - ordered->disk_bytenr;
+ 	post = ordered_end - end;
+ 
+ 	ret = btrfs_split_ordered_extent(ordered, pre, post);
+ 	if (ret)
+ 		goto out;
+-
+-	read_lock(&em_tree->lock);
+-	em = lookup_extent_mapping(em_tree, ordered->file_offset, len);
+-	if (!em) {
+-		read_unlock(&em_tree->lock);
+-		ret = -EIO;
+-		goto out;
+-	}
+-	read_unlock(&em_tree->lock);
+-
+-	ASSERT(!test_bit(EXTENT_FLAG_COMPRESSED, &em->flags));
+-	/*
+-	 * We cannot reuse em_new here but have to create a new one, as
+-	 * unpin_extent_cache() expects the start of the extent map to be the
+-	 * logical offset of the file, which does not hold true anymore after
+-	 * splitting.
+-	 */
+-	em_new = create_io_em(inode, em->start + pre, len,
+-			      em->start + pre, em->block_start + pre, len,
+-			      len, len, BTRFS_COMPRESS_NONE,
+-			      BTRFS_ORDERED_REGULAR);
+-	if (IS_ERR(em_new)) {
+-		ret = PTR_ERR(em_new);
+-		goto out;
+-	}
+-	free_extent_map(em_new);
++	ret = split_zoned_em(inode, file_offset, file_len, pre, post);
+ 
+ out:
+-	free_extent_map(em);
+ 	btrfs_put_ordered_extent(ordered);
+ 
+ 	return errno_to_blk_status(ret);
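
The extent_map split performed by split_zoned_em() above is pure interval
arithmetic: [start, start + len) is cut into an optional pre piece, a middle
piece and an optional post piece. A self-contained userspace sketch of just
that arithmetic; the field layout mirrors the extent_map assignments, but
this is not the kernel code itself.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct range { uint64_t start, len; };

static int split_ranges(uint64_t start, uint64_t len, uint64_t pre,
			uint64_t post, struct range out[3])
{
	int n = 0;

	assert(pre + post < len);

	/* "split_pre": either the pre piece, or everything minus post. */
	out[n].start = start;
	out[n].len = pre ? pre : len - post;
	n++;

	if (pre) {		/* "split_mid" */
		out[n].start = start + pre;
		out[n].len = len - pre - post;
		n++;
	}
	if (post) {		/* "split_post" */
		out[n].start = start + len - post;
		out[n].len = post;
		n++;
	}
	return n;
}

int main(void)
{
	struct range r[3];
	int n = split_ranges(0, 1024, 128, 256, r);

	for (int i = 0; i < n; i++)
		printf("piece %d: [%llu, %llu)\n", i,
		       (unsigned long long)r[i].start,
		       (unsigned long long)(r[i].start + r[i].len));
	return 0;
}
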
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 37450c7644ca0..d73b1afdc4166 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -254,23 +254,21 @@ static inline int extwriter_counter_read(struct btrfs_transaction *trans)
+ }
+ 
+ /*
+- * To be called after all the new block groups attached to the transaction
+- * handle have been created (btrfs_create_pending_block_groups()).
++ * To be called after doing the chunk btree updates right after allocating a new
++ * chunk (after btrfs_chunk_alloc_add_chunk_item() is called), after all chunk
++ * btree updates when removing a chunk, and after finishing the second phase of
++ * chunk allocation (btrfs_create_pending_block_groups()) in case some block
++ * group had its chunk item insertion delayed to the second phase.
+  */
+ void btrfs_trans_release_chunk_metadata(struct btrfs_trans_handle *trans)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+-	struct btrfs_transaction *cur_trans = trans->transaction;
+ 
+ 	if (!trans->chunk_bytes_reserved)
+ 		return;
+ 
+-	WARN_ON_ONCE(!list_empty(&trans->new_bgs));
+-
+ 	btrfs_block_rsv_release(fs_info, &fs_info->chunk_block_rsv,
+ 				trans->chunk_bytes_reserved, NULL);
+-	atomic64_sub(trans->chunk_bytes_reserved, &cur_trans->chunk_bytes_reserved);
+-	cond_wake_up(&cur_trans->chunk_reserve_wait);
+ 	trans->chunk_bytes_reserved = 0;
+ }
+ 
+@@ -386,8 +384,6 @@ loop:
+ 	spin_lock_init(&cur_trans->dropped_roots_lock);
+ 	INIT_LIST_HEAD(&cur_trans->releasing_ebs);
+ 	spin_lock_init(&cur_trans->releasing_ebs_lock);
+-	atomic64_set(&cur_trans->chunk_bytes_reserved, 0);
+-	init_waitqueue_head(&cur_trans->chunk_reserve_wait);
+ 	list_add_tail(&cur_trans->list, &fs_info->trans_list);
+ 	extent_io_tree_init(fs_info, &cur_trans->dirty_pages,
+ 			IO_TREE_TRANS_DIRTY_PAGES, fs_info->btree_inode);
+@@ -704,7 +700,6 @@ again:
+ 	h->fs_info = root->fs_info;
+ 
+ 	h->type = type;
+-	h->can_flush_pending_bgs = true;
+ 	INIT_LIST_HEAD(&h->new_bgs);
+ 
+ 	smp_mb();
+diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h
+index c49e2266b28ba..21b36f6e9322b 100644
+--- a/fs/btrfs/transaction.h
++++ b/fs/btrfs/transaction.h
+@@ -96,13 +96,6 @@ struct btrfs_transaction {
+ 
+ 	spinlock_t releasing_ebs_lock;
+ 	struct list_head releasing_ebs;
+-
+-	/*
+-	 * The number of bytes currently reserved, by all transaction handles
+-	 * attached to this transaction, for metadata extents of the chunk tree.
+-	 */
+-	atomic64_t chunk_bytes_reserved;
+-	wait_queue_head_t chunk_reserve_wait;
+ };
+ 
+ #define __TRANS_FREEZABLE	(1U << 0)
+@@ -141,7 +134,7 @@ struct btrfs_trans_handle {
+ 	short aborted;
+ 	bool adding_csums;
+ 	bool allocating_chunk;
+-	bool can_flush_pending_bgs;
++	bool removing_chunk;
+ 	bool reloc_reserved;
+ 	bool in_fsync;
+ 	struct btrfs_root *root;
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 760d950752f51..b5ed0a699ee5f 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3173,7 +3173,7 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ 		if (!log_root_tree->node) {
+ 			ret = btrfs_alloc_log_tree_node(trans, log_root_tree);
+ 			if (ret) {
+-				mutex_unlock(&fs_info->tree_log_mutex);
++				mutex_unlock(&fs_info->tree_root->log_mutex);
+ 				goto out;
+ 			}
+ 		}
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 47d27059d0641..38633ab8108bb 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1745,19 +1745,14 @@ again:
+ 		extent = btrfs_item_ptr(leaf, path->slots[0],
+ 					struct btrfs_dev_extent);
+ 	} else {
+-		btrfs_handle_fs_error(fs_info, ret, "Slot search failed");
+ 		goto out;
+ 	}
+ 
+ 	*dev_extent_len = btrfs_dev_extent_length(leaf, extent);
+ 
+ 	ret = btrfs_del_item(trans, root, path);
+-	if (ret) {
+-		btrfs_handle_fs_error(fs_info, ret,
+-				      "Failed to remove dev extent item");
+-	} else {
++	if (ret == 0)
+ 		set_bit(BTRFS_TRANS_HAVE_FREE_BGS, &trans->transaction->flags);
+-	}
+ out:
+ 	btrfs_free_path(path);
+ 	return ret;
+@@ -2942,7 +2937,7 @@ static int btrfs_del_sys_chunk(struct btrfs_fs_info *fs_info, u64 chunk_offset)
+ 	u32 cur;
+ 	struct btrfs_key key;
+ 
+-	mutex_lock(&fs_info->chunk_mutex);
++	lockdep_assert_held(&fs_info->chunk_mutex);
+ 	array_size = btrfs_super_sys_array_size(super_copy);
+ 
+ 	ptr = super_copy->sys_chunk_array;
+@@ -2972,7 +2967,6 @@ static int btrfs_del_sys_chunk(struct btrfs_fs_info *fs_info, u64 chunk_offset)
+ 			cur += len;
+ 		}
+ 	}
+-	mutex_unlock(&fs_info->chunk_mutex);
+ 	return ret;
+ }
+ 
+@@ -3012,6 +3006,29 @@ struct extent_map *btrfs_get_chunk_map(struct btrfs_fs_info *fs_info,
+ 	return em;
+ }
+ 
++static int remove_chunk_item(struct btrfs_trans_handle *trans,
++			     struct map_lookup *map, u64 chunk_offset)
++{
++	int i;
++
++	/*
++	 * Removing chunk items and updating the device items in the chunk btree
++	 * requires holding the chunk_mutex.
++	 * See the comment at btrfs_chunk_alloc() for the details.
++	 */
++	lockdep_assert_held(&trans->fs_info->chunk_mutex);
++
++	for (i = 0; i < map->num_stripes; i++) {
++		int ret;
++
++		ret = btrfs_update_device(trans, map->stripes[i].dev);
++		if (ret)
++			return ret;
++	}
++
++	return btrfs_free_chunk(trans, chunk_offset);
++}
++
+ int btrfs_remove_chunk(struct btrfs_trans_handle *trans, u64 chunk_offset)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+@@ -3032,14 +3049,16 @@ int btrfs_remove_chunk(struct btrfs_trans_handle *trans, u64 chunk_offset)
+ 		return PTR_ERR(em);
+ 	}
+ 	map = em->map_lookup;
+-	mutex_lock(&fs_info->chunk_mutex);
+-	check_system_chunk(trans, map->type);
+-	mutex_unlock(&fs_info->chunk_mutex);
+ 
+ 	/*
+-	 * Take the device list mutex to prevent races with the final phase of
+-	 * a device replace operation that replaces the device object associated
+-	 * with map stripes (dev-replace.c:btrfs_dev_replace_finishing()).
++	 * First delete the device extent items from the devices btree.
++	 * We take the device_list_mutex to avoid racing with the finishing phase
++	 * of a device replace operation. See the comment below before acquiring
++	 * fs_info->chunk_mutex. Note that here we do not acquire the chunk_mutex
++	 * because that can result in a deadlock when deleting the device extent
++	 * items from the devices btree - COWing an extent buffer from the btree
++	 * may result in allocating a new metadata chunk, which would attempt to
++	 * lock fs_info->chunk_mutex again.
+ 	 */
+ 	mutex_lock(&fs_devices->device_list_mutex);
+ 	for (i = 0; i < map->num_stripes; i++) {
+@@ -3061,18 +3080,73 @@ int btrfs_remove_chunk(struct btrfs_trans_handle *trans, u64 chunk_offset)
+ 			btrfs_clear_space_info_full(fs_info);
+ 			mutex_unlock(&fs_info->chunk_mutex);
+ 		}
++	}
++	mutex_unlock(&fs_devices->device_list_mutex);
+ 
+-		ret = btrfs_update_device(trans, device);
++	/*
++	 * We acquire fs_info->chunk_mutex for 2 reasons:
++	 *
++	 * 1) Just like with the first phase of the chunk allocation, we must
++	 *    reserve system space, do all chunk btree updates and deletions, and
++	 *    update the system chunk array in the superblock while holding this
++	 *    mutex. This is for similar reasons as explained in the comment at
++	 *    the top of btrfs_chunk_alloc();
++	 *
++	 * 2) Prevent races with the final phase of a device replace operation
++	 *    that replaces the device object associated with the map's stripes,
++	 *    because the device object's id can change at any time during that
++	 *    final phase of the device replace operation
++	 *    (dev-replace.c:btrfs_dev_replace_finishing()), so we could grab the
++	 *    replaced device and then see it with an ID of
++	 *    BTRFS_DEV_REPLACE_DEVID, which would cause a failure when updating
++	 *    the device item, which does not exist in the chunk btree.
++	 *    The finishing phase of device replace acquires both the
++	 *    device_list_mutex and the chunk_mutex, in that order, so we are
++	 *    safe by just acquiring the chunk_mutex.
++	 */
++	trans->removing_chunk = true;
++	mutex_lock(&fs_info->chunk_mutex);
++
++	check_system_chunk(trans, map->type);
++
++	ret = remove_chunk_item(trans, map, chunk_offset);
++	/*
++	 * Normally we should not get -ENOSPC since we reserved space before
++	 * through the call to check_system_chunk().
++	 *
++	 * Despite our system space_info having enough free space, we may not
++	 * be able to allocate extents from its block groups, because they all
++	 * have an incompatible profile, which will force us to allocate a new
++	 * system block group with the right profile. Or, right after we called
++	 * check_system_chunk() above, a scrub turned the only system block group
++	 * with enough free space into RO mode.
++	 * This is explained with more detail at do_chunk_alloc().
++	 *
++	 * So if we get -ENOSPC, allocate a new system chunk and retry once.
++	 */
++	if (ret == -ENOSPC) {
++		const u64 sys_flags = btrfs_system_alloc_profile(fs_info);
++		struct btrfs_block_group *sys_bg;
++
++		sys_bg = btrfs_alloc_chunk(trans, sys_flags);
++		if (IS_ERR(sys_bg)) {
++			ret = PTR_ERR(sys_bg);
++			btrfs_abort_transaction(trans, ret);
++			goto out;
++		}
++
++		ret = btrfs_chunk_alloc_add_chunk_item(trans, sys_bg);
+ 		if (ret) {
+-			mutex_unlock(&fs_devices->device_list_mutex);
+ 			btrfs_abort_transaction(trans, ret);
+ 			goto out;
+ 		}
+-	}
+-	mutex_unlock(&fs_devices->device_list_mutex);
+ 
+-	ret = btrfs_free_chunk(trans, chunk_offset);
+-	if (ret) {
++		ret = remove_chunk_item(trans, map, chunk_offset);
++		if (ret) {
++			btrfs_abort_transaction(trans, ret);
++			goto out;
++		}
++	} else if (ret) {
+ 		btrfs_abort_transaction(trans, ret);
+ 		goto out;
+ 	}
+@@ -3087,6 +3161,15 @@ int btrfs_remove_chunk(struct btrfs_trans_handle *trans, u64 chunk_offset)
+ 		}
+ 	}
+ 
++	mutex_unlock(&fs_info->chunk_mutex);
++	trans->removing_chunk = false;
++
++	/*
++	 * We are done with chunk btree updates and deletions, so release the
++	 * system space we previously reserved (with check_system_chunk()).
++	 */
++	btrfs_trans_release_chunk_metadata(trans);
++
+ 	ret = btrfs_remove_block_group(trans, chunk_offset, em);
+ 	if (ret) {
+ 		btrfs_abort_transaction(trans, ret);
+@@ -3094,6 +3177,10 @@ int btrfs_remove_chunk(struct btrfs_trans_handle *trans, u64 chunk_offset)
+ 	}
+ 
+ out:
++	if (trans->removing_chunk) {
++		mutex_unlock(&fs_info->chunk_mutex);
++		trans->removing_chunk = false;
++	}
+ 	/* once for us */
+ 	free_extent_map(em);
+ 	return ret;
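
The removing_chunk flag set and cleared above pairs with the early-return
guard added to btrfs_chunk_alloc(): while the remove path holds chunk_mutex,
a nested chunk allocation attempt (for example from a COW triggered while
deleting items) bails out with -ENOSPC instead of deadlocking. A minimal
self-contained userspace model of that guard:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct handle { bool allocating_chunk, removing_chunk; };

static int chunk_alloc_model(struct handle *h)
{
	if (h->allocating_chunk)
		return -ENOSPC;	/* already allocating: don't recurse */
	if (h->removing_chunk)
		return -ENOSPC;	/* remove path holds chunk_mutex already */
	return 0;
}

int main(void)
{
	struct handle h = { .removing_chunk = true };

	/* A nested allocation inside btrfs_remove_chunk(): */
	printf("nested alloc -> %d\n", chunk_alloc_model(&h));
	h.removing_chunk = false;
	printf("normal alloc -> %d\n", chunk_alloc_model(&h));
	return 0;
}
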
+@@ -4868,13 +4955,12 @@ static int btrfs_add_system_chunk(struct btrfs_fs_info *fs_info,
+ 	u32 array_size;
+ 	u8 *ptr;
+ 
+-	mutex_lock(&fs_info->chunk_mutex);
++	lockdep_assert_held(&fs_info->chunk_mutex);
++
+ 	array_size = btrfs_super_sys_array_size(super_copy);
+ 	if (array_size + item_size + sizeof(disk_key)
+-			> BTRFS_SYSTEM_CHUNK_ARRAY_SIZE) {
+-		mutex_unlock(&fs_info->chunk_mutex);
++			> BTRFS_SYSTEM_CHUNK_ARRAY_SIZE)
+ 		return -EFBIG;
+-	}
+ 
+ 	ptr = super_copy->sys_chunk_array + array_size;
+ 	btrfs_cpu_key_to_disk(&disk_key, key);
+@@ -4883,7 +4969,6 @@ static int btrfs_add_system_chunk(struct btrfs_fs_info *fs_info,
+ 	memcpy(ptr, chunk, item_size);
+ 	item_size += sizeof(disk_key);
+ 	btrfs_set_super_sys_array_size(super_copy, array_size + item_size);
+-	mutex_unlock(&fs_info->chunk_mutex);
+ 
+ 	return 0;
+ }
+@@ -5233,13 +5318,14 @@ static int decide_stripe_size(struct btrfs_fs_devices *fs_devices,
+ 	}
+ }
+ 
+-static int create_chunk(struct btrfs_trans_handle *trans,
++static struct btrfs_block_group *create_chunk(struct btrfs_trans_handle *trans,
+ 			struct alloc_chunk_ctl *ctl,
+ 			struct btrfs_device_info *devices_info)
+ {
+ 	struct btrfs_fs_info *info = trans->fs_info;
+ 	struct map_lookup *map = NULL;
+ 	struct extent_map_tree *em_tree;
++	struct btrfs_block_group *block_group;
+ 	struct extent_map *em;
+ 	u64 start = ctl->start;
+ 	u64 type = ctl->type;
+@@ -5249,7 +5335,7 @@ static int create_chunk(struct btrfs_trans_handle *trans,
+ 
+ 	map = kmalloc(map_lookup_size(ctl->num_stripes), GFP_NOFS);
+ 	if (!map)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
+ 	map->num_stripes = ctl->num_stripes;
+ 
+ 	for (i = 0; i < ctl->ndevs; ++i) {
+@@ -5271,7 +5357,7 @@ static int create_chunk(struct btrfs_trans_handle *trans,
+ 	em = alloc_extent_map();
+ 	if (!em) {
+ 		kfree(map);
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
+ 	}
+ 	set_bit(EXTENT_FLAG_FS_MAPPING, &em->flags);
+ 	em->map_lookup = map;
+@@ -5287,12 +5373,12 @@ static int create_chunk(struct btrfs_trans_handle *trans,
+ 	if (ret) {
+ 		write_unlock(&em_tree->lock);
+ 		free_extent_map(em);
+-		return ret;
++		return ERR_PTR(ret);
+ 	}
+ 	write_unlock(&em_tree->lock);
+ 
+-	ret = btrfs_make_block_group(trans, 0, type, start, ctl->chunk_size);
+-	if (ret)
++	block_group = btrfs_make_block_group(trans, 0, type, start, ctl->chunk_size);
++	if (IS_ERR(block_group))
+ 		goto error_del_extent;
+ 
+ 	for (i = 0; i < map->num_stripes; i++) {
+@@ -5312,7 +5398,7 @@ static int create_chunk(struct btrfs_trans_handle *trans,
+ 	check_raid56_incompat_flag(info, type);
+ 	check_raid1c34_incompat_flag(info, type);
+ 
+-	return 0;
++	return block_group;
+ 
+ error_del_extent:
+ 	write_lock(&em_tree->lock);
+@@ -5324,34 +5410,36 @@ error_del_extent:
+ 	/* One for the tree reference */
+ 	free_extent_map(em);
+ 
+-	return ret;
++	return block_group;
+ }
+ 
+-int btrfs_alloc_chunk(struct btrfs_trans_handle *trans, u64 type)
++struct btrfs_block_group *btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
++					    u64 type)
+ {
+ 	struct btrfs_fs_info *info = trans->fs_info;
+ 	struct btrfs_fs_devices *fs_devices = info->fs_devices;
+ 	struct btrfs_device_info *devices_info = NULL;
+ 	struct alloc_chunk_ctl ctl;
++	struct btrfs_block_group *block_group;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&info->chunk_mutex);
+ 
+ 	if (!alloc_profile_is_valid(type, 0)) {
+ 		ASSERT(0);
+-		return -EINVAL;
++		return ERR_PTR(-EINVAL);
+ 	}
+ 
+ 	if (list_empty(&fs_devices->alloc_list)) {
+ 		if (btrfs_test_opt(info, ENOSPC_DEBUG))
+ 			btrfs_debug(info, "%s: no writable device", __func__);
+-		return -ENOSPC;
++		return ERR_PTR(-ENOSPC);
+ 	}
+ 
+ 	if (!(type & BTRFS_BLOCK_GROUP_TYPE_MASK)) {
+ 		btrfs_err(info, "invalid chunk type 0x%llx requested", type);
+ 		ASSERT(0);
+-		return -EINVAL;
++		return ERR_PTR(-EINVAL);
+ 	}
+ 
+ 	ctl.start = find_next_chunk(info);
+@@ -5361,46 +5449,43 @@ int btrfs_alloc_chunk(struct btrfs_trans_handle *trans, u64 type)
+ 	devices_info = kcalloc(fs_devices->rw_devices, sizeof(*devices_info),
+ 			       GFP_NOFS);
+ 	if (!devices_info)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	ret = gather_device_info(fs_devices, &ctl, devices_info);
+-	if (ret < 0)
++	if (ret < 0) {
++		block_group = ERR_PTR(ret);
+ 		goto out;
++	}
+ 
+ 	ret = decide_stripe_size(fs_devices, &ctl, devices_info);
+-	if (ret < 0)
++	if (ret < 0) {
++		block_group = ERR_PTR(ret);
+ 		goto out;
++	}
+ 
+-	ret = create_chunk(trans, &ctl, devices_info);
++	block_group = create_chunk(trans, &ctl, devices_info);
+ 
+ out:
+ 	kfree(devices_info);
+-	return ret;
++	return block_group;
+ }
+ 
+ /*
+- * Chunk allocation falls into two parts. The first part does work
+- * that makes the new allocated chunk usable, but does not do any operation
+- * that modifies the chunk tree. The second part does the work that
+- * requires modifying the chunk tree. This division is important for the
+- * bootstrap process of adding storage to a seed btrfs.
++ * This function, btrfs_finish_chunk_alloc(), belongs to phase 2.
++ *
++ * See the comment at btrfs_chunk_alloc() for details about the chunk allocation
++ * phases.
+  */
+ int btrfs_finish_chunk_alloc(struct btrfs_trans_handle *trans,
+ 			     u64 chunk_offset, u64 chunk_size)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+-	struct btrfs_root *extent_root = fs_info->extent_root;
+-	struct btrfs_root *chunk_root = fs_info->chunk_root;
+-	struct btrfs_key key;
+ 	struct btrfs_device *device;
+-	struct btrfs_chunk *chunk;
+-	struct btrfs_stripe *stripe;
+ 	struct extent_map *em;
+ 	struct map_lookup *map;
+-	size_t item_size;
+ 	u64 dev_offset;
+ 	u64 stripe_size;
+-	int i = 0;
++	int i;
+ 	int ret = 0;
+ 
+ 	em = btrfs_get_chunk_map(fs_info, chunk_offset, chunk_size);
+@@ -5408,53 +5493,117 @@ int btrfs_finish_chunk_alloc(struct btrfs_trans_handle *trans,
+ 		return PTR_ERR(em);
+ 
+ 	map = em->map_lookup;
+-	item_size = btrfs_chunk_item_size(map->num_stripes);
+ 	stripe_size = em->orig_block_len;
+ 
+-	chunk = kzalloc(item_size, GFP_NOFS);
+-	if (!chunk) {
+-		ret = -ENOMEM;
+-		goto out;
+-	}
+-
+ 	/*
+ 	 * Take the device list mutex to prevent races with the final phase of
+ 	 * a device replace operation that replaces the device object associated
+ 	 * with the map's stripes, because the device object's id can change
+ 	 * at any time during that final phase of the device replace operation
+-	 * (dev-replace.c:btrfs_dev_replace_finishing()).
++	 * (dev-replace.c:btrfs_dev_replace_finishing()), so we could grab the
++	 * replaced device and then see it with an ID of BTRFS_DEV_REPLACE_DEVID,
++	 * resulting in persisting a device extent item with such an ID.
+ 	 */
+ 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+ 	for (i = 0; i < map->num_stripes; i++) {
+ 		device = map->stripes[i].dev;
+ 		dev_offset = map->stripes[i].physical;
+ 
+-		ret = btrfs_update_device(trans, device);
+-		if (ret)
+-			break;
+ 		ret = btrfs_alloc_dev_extent(trans, device, chunk_offset,
+ 					     dev_offset, stripe_size);
+ 		if (ret)
+ 			break;
+ 	}
+-	if (ret) {
+-		mutex_unlock(&fs_info->fs_devices->device_list_mutex);
++	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
++
++	free_extent_map(em);
++	return ret;
++}
++
++/*
++ * This function, btrfs_chunk_alloc_add_chunk_item(), typically belongs to the
++ * phase 1 of chunk allocation. It belongs to phase 2 only when allocating system
++ * chunks.
++ *
++ * See the comment at btrfs_chunk_alloc() for details about the chunk allocation
++ * phases.
++ */
++int btrfs_chunk_alloc_add_chunk_item(struct btrfs_trans_handle *trans,
++				     struct btrfs_block_group *bg)
++{
++	struct btrfs_fs_info *fs_info = trans->fs_info;
++	struct btrfs_root *extent_root = fs_info->extent_root;
++	struct btrfs_root *chunk_root = fs_info->chunk_root;
++	struct btrfs_key key;
++	struct btrfs_chunk *chunk;
++	struct btrfs_stripe *stripe;
++	struct extent_map *em;
++	struct map_lookup *map;
++	size_t item_size;
++	int i;
++	int ret;
++
++	/*
++	 * We take the chunk_mutex for 2 reasons:
++	 *
++	 * 1) Updates and insertions in the chunk btree, as well as updates to
++	 *    the system chunk array in the superblock, must be done while
++	 *    holding the chunk_mutex. See the comment on top of
++	 *    btrfs_chunk_alloc() for the details;
++	 *
++	 * 2) To prevent races with the final phase of a device replace operation
++	 *    that replaces the device object associated with the map's stripes,
++	 *    because the device object's id can change at any time during that
++	 *    final phase of the device replace operation
++	 *    (dev-replace.c:btrfs_dev_replace_finishing()), so we could grab the
++	 *    replaced device and then see it with an ID of BTRFS_DEV_REPLACE_DEVID,
++	 *    which would cause a failure when updating the device item, which
++	 *    does not exist, or persist a stripe of the chunk item with such an ID.
++	 *    Here we can't use the device_list_mutex because our caller already
++	 *    has locked the chunk_mutex, and the final phase of device replace
++	 *    acquires both mutexes - first the device_list_mutex and then the
++	 *    chunk_mutex. Using any of those two mutexes protects us from a
++	 *    concurrent device replace.
++	 */
++	lockdep_assert_held(&fs_info->chunk_mutex);
++
++	em = btrfs_get_chunk_map(fs_info, bg->start, bg->length);
++	if (IS_ERR(em)) {
++		ret = PTR_ERR(em);
++		btrfs_abort_transaction(trans, ret);
++		return ret;
++	}
++
++	map = em->map_lookup;
++	item_size = btrfs_chunk_item_size(map->num_stripes);
++
++	chunk = kzalloc(item_size, GFP_NOFS);
++	if (!chunk) {
++		ret = -ENOMEM;
++		btrfs_abort_transaction(trans, ret);
+ 		goto out;
+ 	}
+ 
++	for (i = 0; i < map->num_stripes; i++) {
++		struct btrfs_device *device = map->stripes[i].dev;
++
++		ret = btrfs_update_device(trans, device);
++		if (ret)
++			goto out;
++	}
++
+ 	stripe = &chunk->stripe;
+ 	for (i = 0; i < map->num_stripes; i++) {
+-		device = map->stripes[i].dev;
+-		dev_offset = map->stripes[i].physical;
++		struct btrfs_device *device = map->stripes[i].dev;
++		const u64 dev_offset = map->stripes[i].physical;
+ 
+ 		btrfs_set_stack_stripe_devid(stripe, device->devid);
+ 		btrfs_set_stack_stripe_offset(stripe, dev_offset);
+ 		memcpy(stripe->dev_uuid, device->uuid, BTRFS_UUID_SIZE);
+ 		stripe++;
+ 	}
+-	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ 
+-	btrfs_set_stack_chunk_length(chunk, chunk_size);
++	btrfs_set_stack_chunk_length(chunk, bg->length);
+ 	btrfs_set_stack_chunk_owner(chunk, extent_root->root_key.objectid);
+ 	btrfs_set_stack_chunk_stripe_len(chunk, map->stripe_len);
+ 	btrfs_set_stack_chunk_type(chunk, map->type);
+@@ -5466,15 +5615,18 @@ int btrfs_finish_chunk_alloc(struct btrfs_trans_handle *trans,
+ 
+ 	key.objectid = BTRFS_FIRST_CHUNK_TREE_OBJECTID;
+ 	key.type = BTRFS_CHUNK_ITEM_KEY;
+-	key.offset = chunk_offset;
++	key.offset = bg->start;
+ 
+ 	ret = btrfs_insert_item(trans, chunk_root, &key, chunk, item_size);
+-	if (ret == 0 && map->type & BTRFS_BLOCK_GROUP_SYSTEM) {
+-		/*
+-		 * TODO: Cleanup of inserted chunk root in case of
+-		 * failure.
+-		 */
++	if (ret)
++		goto out;
++
++	bg->chunk_item_inserted = 1;
++
++	if (map->type & BTRFS_BLOCK_GROUP_SYSTEM) {
+ 		ret = btrfs_add_system_chunk(fs_info, &key, chunk, item_size);
++		if (ret)
++			goto out;
+ 	}
+ 
+ out:
+@@ -5487,16 +5639,41 @@ static noinline int init_first_rw_device(struct btrfs_trans_handle *trans)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+ 	u64 alloc_profile;
+-	int ret;
++	struct btrfs_block_group *meta_bg;
++	struct btrfs_block_group *sys_bg;
++
++	/*
++	 * When adding a new device for sprouting, the seed device is read-only
++	 * so we must first allocate a metadata and a system chunk. But before
++	 * adding the block group items to the extent, device and chunk btrees,
++	 * we must first:
++	 *
++	 * 1) Create both chunks without doing any changes to the btrees, as
++	 *    otherwise we would get -ENOSPC since the block groups from the
++	 *    seed device are read-only;
++	 *
++	 * 2) Add the device item for the new sprout device - finishing the setup
++	 *    of a new block group requires updating the device item in the chunk
++	 *    btree, so it must exist when we attempt to do it. The previous step
++	 *    ensures this does not fail with -ENOSPC.
++	 *
++	 * After that we can add the block group items to their btrees:
++	 * update existing device item in the chunk btree, add a new block group
++	 * item to the extent btree, add a new chunk item to the chunk btree and
++	 * finally add the new device extent items to the devices btree.
++	 */
+ 
+ 	alloc_profile = btrfs_metadata_alloc_profile(fs_info);
+-	ret = btrfs_alloc_chunk(trans, alloc_profile);
+-	if (ret)
+-		return ret;
++	meta_bg = btrfs_alloc_chunk(trans, alloc_profile);
++	if (IS_ERR(meta_bg))
++		return PTR_ERR(meta_bg);
+ 
+ 	alloc_profile = btrfs_system_alloc_profile(fs_info);
+-	ret = btrfs_alloc_chunk(trans, alloc_profile);
+-	return ret;
++	sys_bg = btrfs_alloc_chunk(trans, alloc_profile);
++	if (IS_ERR(sys_bg))
++		return PTR_ERR(sys_bg);
++
++	return 0;
+ }
+ 
+ static inline int btrfs_chunk_max_errors(struct map_lookup *map)
+@@ -7425,10 +7602,18 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ 			total_dev++;
+ 		} else if (found_key.type == BTRFS_CHUNK_ITEM_KEY) {
+ 			struct btrfs_chunk *chunk;
++
++			/*
++			 * We are only called at mount time, so no need to take
++			 * fs_info->chunk_mutex. Plus, to avoid lockdep warnings,
++			 * we always lock fs_info->chunk_mutex first before
++			 * acquiring any locks on the chunk tree. This is a
++			 * requirement for chunk allocation, see the comment on
++			 * top of btrfs_chunk_alloc() for details.
++			 */
++			ASSERT(!test_bit(BTRFS_FS_OPEN, &fs_info->flags));
+ 			chunk = btrfs_item_ptr(leaf, slot, struct btrfs_chunk);
+-			mutex_lock(&fs_info->chunk_mutex);
+ 			ret = read_one_chunk(&found_key, leaf, chunk);
+-			mutex_unlock(&fs_info->chunk_mutex);
+ 			if (ret)
+ 				goto error;
+ 		}
+diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
+index 9c0d84e5ec066..0f6ee7152b8b6 100644
+--- a/fs/btrfs/volumes.h
++++ b/fs/btrfs/volumes.h
+@@ -447,7 +447,8 @@ int btrfs_get_io_geometry(struct btrfs_fs_info *fs_info, struct extent_map *map,
+ 			  struct btrfs_io_geometry *io_geom);
+ int btrfs_read_sys_array(struct btrfs_fs_info *fs_info);
+ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info);
+-int btrfs_alloc_chunk(struct btrfs_trans_handle *trans, u64 type);
++struct btrfs_block_group *btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
++					    u64 type);
+ void btrfs_mapping_tree_free(struct extent_map_tree *tree);
+ blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
+ 			   int mirror_num);
+@@ -506,6 +507,8 @@ unsigned long btrfs_full_stripe_len(struct btrfs_fs_info *fs_info,
+ 				    u64 logical);
+ int btrfs_finish_chunk_alloc(struct btrfs_trans_handle *trans,
+ 			     u64 chunk_offset, u64 chunk_size);
++int btrfs_chunk_alloc_add_chunk_item(struct btrfs_trans_handle *trans,
++				     struct btrfs_block_group *bg);
+ int btrfs_remove_chunk(struct btrfs_trans_handle *trans, u64 chunk_offset);
+ struct extent_map *btrfs_get_chunk_map(struct btrfs_fs_info *fs_info,
+ 				       u64 logical, u64 length);
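
btrfs_alloc_chunk() now returns a block group pointer and reports failures
through the kernel's ERR_PTR()/IS_ERR()/PTR_ERR() encoding, which the
converted callers above depend on. A self-contained userspace re-creation of
that encoding; the real macros live in include/linux/err.h, so the
definitions below are illustrative only.

#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	/* Errno values occupy the last page of the address space. */
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

struct block_group { int id; };

static struct block_group *alloc_chunk_model(int fail)
{
	static struct block_group bg = { .id = 1 };

	if (fail)
		return ERR_PTR(-ENOSPC);	/* error encoded in the pointer */
	return &bg;
}

int main(void)
{
	struct block_group *bg = alloc_chunk_model(1);

	if (IS_ERR(bg))
		printf("error: %ld\n", PTR_ERR(bg));
	bg = alloc_chunk_model(0);
	if (!IS_ERR(bg))
		printf("got block group %d\n", bg->id);
	return 0;
}
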
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index c1570fada3d8e..998dc4dfdc6b1 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -82,10 +82,6 @@ static int ceph_set_page_dirty(struct page *page)
+ 	struct inode *inode;
+ 	struct ceph_inode_info *ci;
+ 	struct ceph_snap_context *snapc;
+-	int ret;
+-
+-	if (unlikely(!mapping))
+-		return !TestSetPageDirty(page);
+ 
+ 	if (PageDirty(page)) {
+ 		dout("%p set_page_dirty %p idx %lu -- already dirty\n",
+@@ -130,11 +126,7 @@ static int ceph_set_page_dirty(struct page *page)
+ 	BUG_ON(PagePrivate(page));
+ 	attach_page_private(page, snapc);
+ 
+-	ret = __set_page_dirty_nobuffers(page);
+-	WARN_ON(!PageLocked(page));
+-	WARN_ON(!page->mapping);
+-
+-	return ret;
++	return __set_page_dirty_nobuffers(page);
+ }
+ 
+ /*
+diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c
+index c87c37cf29143..8dec4edc8a9fe 100644
+--- a/fs/cifs/cifs_dfs_ref.c
++++ b/fs/cifs/cifs_dfs_ref.c
+@@ -173,7 +173,7 @@ char *cifs_compose_mount_options(const char *sb_mountdata,
+ 		}
+ 	}
+ 
+-	rc = dns_resolve_server_name_to_ip(name, &srvIP);
++	rc = dns_resolve_server_name_to_ip(name, &srvIP, NULL);
+ 	if (rc < 0) {
+ 		cifs_dbg(FYI, "%s: Failed to resolve server part of %s to IP: %d\n",
+ 			 __func__, name, rc);
+@@ -208,6 +208,10 @@ char *cifs_compose_mount_options(const char *sb_mountdata,
+ 		else
+ 			noff = tkn_e - (sb_mountdata + off) + 1;
+ 
++		if (strncasecmp(sb_mountdata + off, "cruid=", 6) == 0) {
++			off += noff;
++			continue;
++		}
+ 		if (strncasecmp(sb_mountdata + off, "unc=", 4) == 0) {
+ 			off += noff;
+ 			continue;
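
The new cruid= case mirrors the existing unc= handling: options that will be
regenerated for the DFS submount are skipped while copying the mount data. A
self-contained userspace sketch of that skip-while-copying pattern, with
made-up option strings instead of the real sb_mountdata walk:

#include <stdio.h>
#include <string.h>
#include <strings.h>

int main(void)
{
	/* Hypothetical option tokens; the real code walks sb_mountdata. */
	const char *opts[] = { "cruid=1000", "unc=\\\\srv\\share",
			       "vers=3.0", "sec=krb5" };
	char out[128] = "";

	for (size_t i = 0; i < sizeof(opts) / sizeof(opts[0]); i++) {
		/* Skip options that are regenerated for the submount. */
		if (!strncasecmp(opts[i], "cruid=", 6) ||
		    !strncasecmp(opts[i], "unc=", 4))
			continue;
		strcat(out, opts[i]);
		strcat(out, ",");
	}
	printf("composed: %s\n", out);	/* -> "vers=3.0,sec=krb5," */
	return 0;
}
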
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 706a2aeba1dec..016082024d7d1 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -84,6 +84,9 @@
+ #define SMB_ECHO_INTERVAL_MAX 600
+ #define SMB_ECHO_INTERVAL_DEFAULT 60
+ 
++/* dns resolution interval in seconds */
++#define SMB_DNS_RESOLVE_INTERVAL_DEFAULT 600
++
+ /* maximum number of PDUs in one compound */
+ #define MAX_COMPOUND 5
+ 
+@@ -654,6 +657,7 @@ struct TCP_Server_Info {
+ 	/* point to the SMBD connection if RDMA is used instead of socket */
+ 	struct smbd_connection *smbd_conn;
+ 	struct delayed_work	echo; /* echo ping workqueue job */
++	struct delayed_work	resolve; /* dns resolution workqueue job */
+ 	char	*smallbuf;	/* pointer to current "small" buffer */
+ 	char	*bigbuf;	/* pointer to current "big" buffer */
+ 	/* Total size of this PDU. Only valid from cifs_demultiplex_thread */
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index eb6c10fa67410..584acc1372508 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -90,6 +90,8 @@ static int reconn_set_ipaddr_from_hostname(struct TCP_Server_Info *server)
+ 	int rc;
+ 	int len;
+ 	char *unc, *ipaddr = NULL;
++	time64_t expiry, now;
++	unsigned long ttl = SMB_DNS_RESOLVE_INTERVAL_DEFAULT;
+ 
+ 	if (!server->hostname)
+ 		return -EINVAL;
+@@ -103,13 +105,13 @@ static int reconn_set_ipaddr_from_hostname(struct TCP_Server_Info *server)
+ 	}
+ 	scnprintf(unc, len, "\\\\%s", server->hostname);
+ 
+-	rc = dns_resolve_server_name_to_ip(unc, &ipaddr);
++	rc = dns_resolve_server_name_to_ip(unc, &ipaddr, &expiry);
+ 	kfree(unc);
+ 
+ 	if (rc < 0) {
+ 		cifs_dbg(FYI, "%s: failed to resolve server part of %s to IP: %d\n",
+ 			 __func__, server->hostname, rc);
+-		return rc;
++		goto requeue_resolve;
+ 	}
+ 
+ 	spin_lock(&cifs_tcp_ses_lock);
+@@ -118,7 +120,45 @@ static int reconn_set_ipaddr_from_hostname(struct TCP_Server_Info *server)
+ 	spin_unlock(&cifs_tcp_ses_lock);
+ 	kfree(ipaddr);
+ 
+-	return !rc ? -1 : 0;
++	/* rc == 1 means success here */
++	if (rc) {
++		now = ktime_get_real_seconds();
++		if (expiry && expiry > now)
++			/*
++			 * To make sure we don't use the cached entry, retry 1s
++			 * after expiry.
++			 */
++			ttl = (expiry - now + 1);
++	}
++	rc = !rc ? -1 : 0;
++
++requeue_resolve:
++	cifs_dbg(FYI, "%s: next dns resolution scheduled for %lu seconds in the future\n",
++		 __func__, ttl);
++	mod_delayed_work(cifsiod_wq, &server->resolve, (ttl * HZ));
++
++	return rc;
++}
++
++
++static void cifs_resolve_server(struct work_struct *work)
++{
++	int rc;
++	struct TCP_Server_Info *server = container_of(work,
++					struct TCP_Server_Info, resolve.work);
++
++	mutex_lock(&server->srv_mutex);
++
++	/*
++	 * Resolve the hostname again to make sure that the IP address is up to date.
++	 */
++	rc = reconn_set_ipaddr_from_hostname(server);
++	if (rc) {
++		cifs_dbg(FYI, "%s: failed to resolve hostname: %d\n",
++				__func__, rc);
++	}
++
++	mutex_unlock(&server->srv_mutex);
+ }
+ 
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+@@ -698,6 +738,7 @@ static void clean_demultiplex_info(struct TCP_Server_Info *server)
+ 	spin_unlock(&cifs_tcp_ses_lock);
+ 
+ 	cancel_delayed_work_sync(&server->echo);
++	cancel_delayed_work_sync(&server->resolve);
+ 
+ 	spin_lock(&GlobalMid_Lock);
+ 	server->tcpStatus = CifsExiting;
+@@ -1278,6 +1319,7 @@ cifs_put_tcp_session(struct TCP_Server_Info *server, int from_reconnect)
+ 	spin_unlock(&cifs_tcp_ses_lock);
+ 
+ 	cancel_delayed_work_sync(&server->echo);
++	cancel_delayed_work_sync(&server->resolve);
+ 
+ 	if (from_reconnect)
+ 		/*
+@@ -1360,6 +1402,7 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx)
+ 	INIT_LIST_HEAD(&tcp_ses->tcp_ses_list);
+ 	INIT_LIST_HEAD(&tcp_ses->smb_ses_list);
+ 	INIT_DELAYED_WORK(&tcp_ses->echo, cifs_echo_request);
++	INIT_DELAYED_WORK(&tcp_ses->resolve, cifs_resolve_server);
+ 	INIT_DELAYED_WORK(&tcp_ses->reconnect, smb2_reconnect_server);
+ 	mutex_init(&tcp_ses->reconnect_mutex);
+ 	memcpy(&tcp_ses->srcaddr, &ctx->srcaddr,
+@@ -1440,6 +1483,12 @@ smbd_connected:
+ 	/* queue echo request delayed work */
+ 	queue_delayed_work(cifsiod_wq, &tcp_ses->echo, tcp_ses->echo_interval);
+ 
++	/* queue dns resolution delayed work */
++	cifs_dbg(FYI, "%s: next dns resolution scheduled for %d seconds in the future\n",
++		 __func__, SMB_DNS_RESOLVE_INTERVAL_DEFAULT);
++
++	queue_delayed_work(cifsiod_wq, &tcp_ses->resolve, (SMB_DNS_RESOLVE_INTERVAL_DEFAULT * HZ));
++
+ 	return tcp_ses;
+ 
+ out_err_crypto_release:
+@@ -4106,7 +4155,8 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 	if (!tree)
+ 		return -ENOMEM;
+ 
+-	if (!tcon->dfs_path) {
++	/* If it is not dfs or there was no cached dfs referral, then reconnect to same share */
++	if (!tcon->dfs_path || dfs_cache_noreq_find(tcon->dfs_path + 1, &ref, &tl)) {
+ 		if (tcon->ipc) {
+ 			scnprintf(tree, MAX_TREE_SIZE, "\\\\%s\\IPC$", server->hostname);
+ 			rc = ops->tree_connect(xid, tcon->ses, tree, tcon, nlsc);
+@@ -4116,9 +4166,6 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 		goto out;
+ 	}
+ 
+-	rc = dfs_cache_noreq_find(tcon->dfs_path + 1, &ref, &tl);
+-	if (rc)
+-		goto out;
+ 	isroot = ref.server_type == DFS_TYPE_ROOT;
+ 	free_dfs_info_param(&ref);
+ 
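
The requeue logic in reconn_set_ipaddr_from_hostname() reduces to one
computation: wait the default 600 seconds, unless resolution succeeded and
returned a record expiry in the future, in which case retry one second after
the record expires. A self-contained userspace sketch of that computation,
not the cifs code itself:

#include <stdint.h>
#include <stdio.h>

#define SMB_DNS_RESOLVE_INTERVAL_DEFAULT 600	/* seconds, from the patch */

static unsigned long next_resolve_delay(int resolved, int64_t expiry,
					int64_t now)
{
	unsigned long ttl = SMB_DNS_RESOLVE_INTERVAL_DEFAULT;

	if (resolved && expiry && expiry > now)
		ttl = (unsigned long)(expiry - now + 1);
	return ttl;
}

int main(void)
{
	/* Record expires 90s from now -> requeue in 91s. */
	printf("ttl=%lu\n", next_resolve_delay(1, 1090, 1000));
	/* Address was not updated -> default 600s. */
	printf("ttl=%lu\n", next_resolve_delay(0, 0, 1000));
	return 0;
}
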
+diff --git a/fs/cifs/dns_resolve.c b/fs/cifs/dns_resolve.c
+index 534cbba72789f..8c78b48faf015 100644
+--- a/fs/cifs/dns_resolve.c
++++ b/fs/cifs/dns_resolve.c
+@@ -36,6 +36,7 @@
+  * dns_resolve_server_name_to_ip - Resolve UNC server name to ip address.
+  * @unc: UNC path specifying the server (with '/' as delimiter)
+  * @ip_addr: Where to return the IP address.
++ * @expiry: Where to return the expiry time for the dns record.
+  *
+  * The IP address will be returned in string form, and the caller is
+  * responsible for freeing it.
+@@ -43,7 +44,7 @@
+  * Returns length of result on success, -ve on error.
+  */
+ int
+-dns_resolve_server_name_to_ip(const char *unc, char **ip_addr)
++dns_resolve_server_name_to_ip(const char *unc, char **ip_addr, time64_t *expiry)
+ {
+ 	struct sockaddr_storage ss;
+ 	const char *hostname, *sep;
+@@ -78,13 +79,14 @@ dns_resolve_server_name_to_ip(const char *unc, char **ip_addr)
+ 
+ 	/* Perform the upcall */
+ 	rc = dns_query(current->nsproxy->net_ns, NULL, hostname, len,
+-		       NULL, ip_addr, NULL, false);
++		       NULL, ip_addr, expiry, false);
+ 	if (rc < 0)
+ 		cifs_dbg(FYI, "%s: unable to resolve: %*.*s\n",
+ 			 __func__, len, len, hostname);
+ 	else
+-		cifs_dbg(FYI, "%s: resolved: %*.*s to %s\n",
+-			 __func__, len, len, hostname, *ip_addr);
++		cifs_dbg(FYI, "%s: resolved: %*.*s to %s expiry %llu\n",
++			 __func__, len, len, hostname, *ip_addr,
++			 expiry ? (*expiry) : 0);
+ 	return rc;
+ 
+ name_is_IP_address:
+diff --git a/fs/cifs/dns_resolve.h b/fs/cifs/dns_resolve.h
+index d3f5d27f4d06e..ff5483d5244d2 100644
+--- a/fs/cifs/dns_resolve.h
++++ b/fs/cifs/dns_resolve.h
+@@ -24,7 +24,7 @@
+ #define _DNS_RESOLVE_H
+ 
+ #ifdef __KERNEL__
+-extern int dns_resolve_server_name_to_ip(const char *unc, char **ip_addr);
++extern int dns_resolve_server_name_to_ip(const char *unc, char **ip_addr, time64_t *expiry);
+ #endif /* KERNEL */
+ 
+ #endif /* _DNS_RESOLVE_H */
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 7207a63819cbf..cccaccfb02d04 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -1199,7 +1199,7 @@ int match_target_ip(struct TCP_Server_Info *server,
+ 
+ 	cifs_dbg(FYI, "%s: target name: %s\n", __func__, target + 2);
+ 
+-	rc = dns_resolve_server_name_to_ip(target, &tip);
++	rc = dns_resolve_server_name_to_ip(target, &tip, NULL);
+ 	if (rc < 0)
+ 		goto out;
+ 
+diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c
+index be799040a4154..b96ecba918990 100644
+--- a/fs/ext4/ext4_jbd2.c
++++ b/fs/ext4/ext4_jbd2.c
+@@ -327,6 +327,7 @@ int __ext4_handle_dirty_metadata(const char *where, unsigned int line,
+ 
+ 	set_buffer_meta(bh);
+ 	set_buffer_prio(bh);
++	set_buffer_uptodate(bh);
+ 	if (ext4_handle_valid(handle)) {
+ 		err = jbd2_journal_dirty_metadata(handle, bh);
+ 		/* Errors can only happen due to aborted journal or a nasty bug */
+@@ -355,7 +356,6 @@ int __ext4_handle_dirty_metadata(const char *where, unsigned int line,
+ 					 err);
+ 		}
+ 	} else {
+-		set_buffer_uptodate(bh);
+ 		if (inode)
+ 			mark_buffer_dirty_inode(bh, inode);
+ 		else
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 09f1f02e1d6d6..6a4e040ea9b3a 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -705,15 +705,23 @@ static void flush_stashed_error_work(struct work_struct *work)
+ 	 * ext4 error handling code during handling of previous errors.
+ 	 */
+ 	if (!sb_rdonly(sbi->s_sb) && journal) {
++		struct buffer_head *sbh = sbi->s_sbh;
+ 		handle = jbd2_journal_start(journal, 1);
+ 		if (IS_ERR(handle))
+ 			goto write_directly;
+-		if (jbd2_journal_get_write_access(handle, sbi->s_sbh)) {
++		if (jbd2_journal_get_write_access(handle, sbh)) {
+ 			jbd2_journal_stop(handle);
+ 			goto write_directly;
+ 		}
+ 		ext4_update_super(sbi->s_sb);
+-		if (jbd2_journal_dirty_metadata(handle, sbi->s_sbh)) {
++		if (buffer_write_io_error(sbh) || !buffer_uptodate(sbh)) {
++			ext4_msg(sbi->s_sb, KERN_ERR, "previous I/O error to "
++				 "superblock detected");
++			clear_buffer_write_io_error(sbh);
++			set_buffer_uptodate(sbh);
++		}
++
++		if (jbd2_journal_dirty_metadata(handle, sbh)) {
+ 			jbd2_journal_stop(handle);
+ 			goto write_directly;
+ 		}
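Both ext4 hunks above deal with the same underlying situation: a superblock buffer
whose previous write failed is left not uptodate with the write_io_error bit set, and
jbd2 expects metadata buffers to be uptodate before journalling them. The recovery
idiom, as used in the hunk above:

	if (buffer_write_io_error(sbh) || !buffer_uptodate(sbh)) {
		/* a previous write failed; clear the error so the buffer
		 * can be journalled and written out again */
		clear_buffer_write_io_error(sbh);
		set_buffer_uptodate(sbh);
	}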
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 8d1f17ab94d82..ab63951c08cbc 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1450,10 +1450,8 @@ next_step:
+ 
+ 		if (phase == 3) {
+ 			inode = f2fs_iget(sb, dni.ino);
+-			if (IS_ERR(inode) || is_bad_inode(inode)) {
+-				set_sbi_flag(sbi, SBI_NEED_FSCK);
++			if (IS_ERR(inode) || is_bad_inode(inode))
+ 				continue;
+-			}
+ 
+ 			if (!down_write_trylock(
+ 				&F2FS_I(inode)->i_gc_rwsem[WRITE])) {
+@@ -1822,6 +1820,7 @@ static void init_atgc_management(struct f2fs_sb_info *sbi)
+ 	am->candidate_ratio = DEF_GC_THREAD_CANDIDATE_RATIO;
+ 	am->max_candidate_count = DEF_GC_THREAD_MAX_CANDIDATE_COUNT;
+ 	am->age_weight = DEF_GC_THREAD_AGE_WEIGHT;
++	am->age_threshold = DEF_GC_THREAD_AGE_THRESHOLD;
+ }
+ 
+ void f2fs_build_gc_manager(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index a9cd9cf972296..d4139e166b958 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -153,7 +153,8 @@ fail_drop:
+ 	return ERR_PTR(err);
+ }
+ 
+-static inline int is_extension_exist(const unsigned char *s, const char *sub)
++static inline int is_extension_exist(const unsigned char *s, const char *sub,
++						bool tmp_ext)
+ {
+ 	size_t slen = strlen(s);
+ 	size_t sublen = strlen(sub);
+@@ -169,6 +170,13 @@ static inline int is_extension_exist(const unsigned char *s, const char *sub)
+ 	if (slen < sublen + 2)
+ 		return 0;
+ 
++	if (!tmp_ext) {
++		/* file has no temp extension */
++		if (s[slen - sublen - 1] != '.')
++			return 0;
++		return !strncasecmp(s + slen - sublen, sub, sublen);
++	}
++
+ 	for (i = 1; i < slen - sublen; i++) {
+ 		if (s[i] != '.')
+ 			continue;
+@@ -194,7 +202,7 @@ static inline void set_file_temperature(struct f2fs_sb_info *sbi, struct inode *
+ 	hot_count = sbi->raw_super->hot_ext_count;
+ 
+ 	for (i = 0; i < cold_count + hot_count; i++) {
+-		if (is_extension_exist(name, extlist[i]))
++		if (is_extension_exist(name, extlist[i], true))
+ 			break;
+ 	}
+ 
+@@ -295,7 +303,7 @@ static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode,
+ 	hot_count = sbi->raw_super->hot_ext_count;
+ 
+ 	for (i = cold_count; i < cold_count + hot_count; i++) {
+-		if (is_extension_exist(name, extlist[i])) {
++		if (is_extension_exist(name, extlist[i], false)) {
+ 			up_read(&sbi->sb_lock);
+ 			return;
+ 		}
+@@ -306,7 +314,7 @@ static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode,
+ 	ext = F2FS_OPTION(sbi).extensions;
+ 
+ 	for (i = 0; i < ext_cnt; i++) {
+-		if (!is_extension_exist(name, ext[i]))
++		if (!is_extension_exist(name, ext[i], false))
+ 			continue;
+ 
+ 		set_compress_context(inode);
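The new flag splits the helper into two behaviours: with tmp_ext false the extension
must be the file name's final suffix, while with tmp_ext true it may also appear as an
inner dot-separated component, so temporary names such as "movie.mp4.part" still match.
A worked sketch of the expected results:

	is_extension_exist("movie.mp4",      "mp4", false);	/* 1: final suffix   */
	is_extension_exist("movie.mp4.part", "mp4", false);	/* 0: not the suffix */
	is_extension_exist("movie.mp4.part", "mp4", true);	/* 1: inner ".mp4"   */

This is why the temperature lookup above passes true while both compression lookups
pass false.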
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 096492caaa6bf..b29de80ab60e8 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -4321,4 +4321,5 @@ module_exit(exit_f2fs_fs)
+ MODULE_AUTHOR("Samsung Electronics's Praesto Team");
+ MODULE_DESCRIPTION("Flash Friendly File System");
+ MODULE_LICENSE("GPL");
++MODULE_SOFTDEP("pre: crc32");
+ 
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 3fa8604c21d52..d296b0d19c273 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -252,7 +252,7 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags)
+ 		if (ret == -ENOMEM)
+ 			goto out;
+ 		if (ret || fuse_invalid_attr(&outarg.attr) ||
+-		    inode_wrong_type(inode, outarg.attr.mode))
++		    fuse_stale_inode(inode, outarg.generation, &outarg.attr))
+ 			goto invalid;
+ 
+ 		forget_all_cached_acls(inode);
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 7e463e2200531..120f9c5908d19 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -761,6 +761,9 @@ struct fuse_conn {
+ 	/* Auto-mount submounts announced by the server */
+ 	unsigned int auto_submounts:1;
+ 
++	/* Propagate syncfs() to server */
++	unsigned int sync_fs:1;
++
+ 	/** The number of requests waiting for completion */
+ 	atomic_t num_waiting;
+ 
+@@ -867,6 +870,13 @@ static inline u64 fuse_get_attr_version(struct fuse_conn *fc)
+ 	return atomic64_read(&fc->attr_version);
+ }
+ 
++static inline bool fuse_stale_inode(const struct inode *inode, int generation,
++				    struct fuse_attr *attr)
++{
++	return inode->i_generation != generation ||
++		inode_wrong_type(inode, attr->mode);
++}
++
+ static inline void fuse_make_bad(struct inode *inode)
+ {
+ 	remove_inode_hash(inode);
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 393e36b74dc44..cf16d6d3a6038 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -350,8 +350,8 @@ retry:
+ 		inode->i_generation = generation;
+ 		fuse_init_inode(inode, attr);
+ 		unlock_new_inode(inode);
+-	} else if (inode_wrong_type(inode, attr->mode)) {
+-		/* Inode has changed type, any I/O on the old should fail */
++	} else if (fuse_stale_inode(inode, generation, attr)) {
++		/* nodeid was reused, any I/O on the old inode should fail */
+ 		fuse_make_bad(inode);
+ 		iput(inode);
+ 		goto retry;
+@@ -506,6 +506,45 @@ static int fuse_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	return err;
+ }
+ 
++static int fuse_sync_fs(struct super_block *sb, int wait)
++{
++	struct fuse_mount *fm = get_fuse_mount_super(sb);
++	struct fuse_conn *fc = fm->fc;
++	struct fuse_syncfs_in inarg;
++	FUSE_ARGS(args);
++	int err;
++
++	/*
++	 * Userspace cannot handle the wait == 0 case.  Avoid a
++	 * gratuitous roundtrip.
++	 */
++	if (!wait)
++		return 0;
++
++	/* The filesystem is being unmounted.  Nothing to do. */
++	if (!sb->s_root)
++		return 0;
++
++	if (!fc->sync_fs)
++		return 0;
++
++	memset(&inarg, 0, sizeof(inarg));
++	args.in_numargs = 1;
++	args.in_args[0].size = sizeof(inarg);
++	args.in_args[0].value = &inarg;
++	args.opcode = FUSE_SYNCFS;
++	args.nodeid = get_node_id(sb->s_root->d_inode);
++	args.out_numargs = 0;
++
++	err = fuse_simple_request(fm, &args);
++	if (err == -ENOSYS) {
++		fc->sync_fs = 0;
++		err = 0;
++	}
++
++	return err;
++}
++
+ enum {
+ 	OPT_SOURCE,
+ 	OPT_SUBTYPE,
+@@ -909,6 +948,7 @@ static const struct super_operations fuse_super_operations = {
+ 	.put_super	= fuse_put_super,
+ 	.umount_begin	= fuse_umount_begin,
+ 	.statfs		= fuse_statfs,
++	.sync_fs	= fuse_sync_fs,
+ 	.show_options	= fuse_show_options,
+ };
+ 
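With the new ->sync_fs hook wired up, an ordinary syncfs(2) on a FUSE mount now reaches
the server. A trivial userspace trigger, for illustration (the mount path is made up):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("/mnt/fuse/some-file", O_RDONLY);

		if (fd < 0)
			return 1;
		syncfs(fd);	/* reaches fuse_sync_fs() via sb->s_op->sync_fs */
		close(fd);
		return 0;
	}

A server that answers -ENOSYS once causes fc->sync_fs to be cleared, so later calls
short-circuit to success, the usual FUSE convention for unimplemented opcodes.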
+diff --git a/fs/fuse/readdir.c b/fs/fuse/readdir.c
+index 277f7041d55aa..bc267832310c7 100644
+--- a/fs/fuse/readdir.c
++++ b/fs/fuse/readdir.c
+@@ -200,9 +200,12 @@ retry:
+ 	if (!d_in_lookup(dentry)) {
+ 		struct fuse_inode *fi;
+ 		inode = d_inode(dentry);
++		if (inode && get_node_id(inode) != o->nodeid)
++			inode = NULL;
+ 		if (!inode ||
+-		    get_node_id(inode) != o->nodeid ||
+-		    inode_wrong_type(inode, o->attr.mode)) {
++		    fuse_stale_inode(inode, o->generation, &o->attr)) {
++			if (inode)
++				fuse_make_bad(inode);
+ 			d_invalidate(dentry);
+ 			dput(dentry);
+ 			goto retry;
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index bcb8a02e2d8b5..f9809b1b82f04 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -1447,6 +1447,7 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ 	fc->release = fuse_free_conn;
+ 	fc->delete_stale = true;
+ 	fc->auto_submounts = true;
++	fc->sync_fs = true;
+ 
+ 	/* Tell FUSE to split requests that exceed the virtqueue's size */
+ 	fc->max_pages_limit = min_t(unsigned int, fc->max_pages_limit,
+diff --git a/fs/io-wq.h b/fs/io-wq.h
+index af2df0680ee22..32c7b4e824845 100644
+--- a/fs/io-wq.h
++++ b/fs/io-wq.h
+@@ -87,7 +87,6 @@ static inline void wq_list_del(struct io_wq_work_list *list,
+ 
+ struct io_wq_work {
+ 	struct io_wq_work_node list;
+-	const struct cred *creds;
+ 	unsigned flags;
+ };
+ 
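In the io_uring.c hunks that follow, the request credentials move from the shared
io_wq_work into struct io_kiocb itself (req->creds). The lifetime pattern is the usual
override_creds()/revert_creds() pair around issuing the request; condensed from those
hunks (not a verbatim quote):

	const struct cred *creds = NULL;

	if (req->creds && req->creds != current_cred())
		creds = override_creds(req->creds);

	/* ... issue the request ... */

	if (creds)
		revert_creds(creds);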
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index ad1f31fafe445..eeea6b8c8bee0 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -299,11 +299,8 @@ struct io_sq_data {
+ struct io_comp_state {
+ 	struct io_kiocb		*reqs[IO_COMPL_BATCH];
+ 	unsigned int		nr;
+-	unsigned int		locked_free_nr;
+ 	/* inline/task_work completion list, under ->uring_lock */
+ 	struct list_head	free_list;
+-	/* IRQ completion list, under ->completion_lock */
+-	struct list_head	locked_free_list;
+ };
+ 
+ struct io_submit_link {
+@@ -369,9 +366,6 @@ struct io_ring_ctx {
+ 		unsigned		cached_cq_overflow;
+ 		unsigned long		sq_check_overflow;
+ 
+-		/* hashed buffered write serialization */
+-		struct io_wq_hash	*hash_map;
+-
+ 		struct list_head	defer_list;
+ 		struct list_head	timeout_list;
+ 		struct list_head	cq_overflow_list;
+@@ -385,12 +379,12 @@ struct io_ring_ctx {
+ 	} ____cacheline_aligned_in_smp;
+ 
+ 	struct io_submit_state		submit_state;
++	/* IRQ completion list, under ->completion_lock */
++	struct list_head	locked_free_list;
++	unsigned int		locked_free_nr;
+ 
+ 	struct io_rings	*rings;
+ 
+-	/* Only used for accounting purposes */
+-	struct mm_struct	*mm_account;
+-
+ 	const struct cred	*sq_creds;	/* cred used for __io_sq_thread() */
+ 	struct io_sq_data	*sq_data;	/* if using sq thread polling */
+ 
+@@ -411,14 +405,6 @@ struct io_ring_ctx {
+ 	unsigned		nr_user_bufs;
+ 	struct io_mapped_ubuf	**user_bufs;
+ 
+-	struct user_struct	*user;
+-
+-	struct completion	ref_comp;
+-
+-#if defined(CONFIG_UNIX)
+-	struct socket		*ring_sock;
+-#endif
+-
+ 	struct xarray		io_buffers;
+ 
+ 	struct xarray		personalities;
+@@ -462,12 +448,24 @@ struct io_ring_ctx {
+ 
+ 	struct io_restriction		restrictions;
+ 
+-	/* exit task_work */
+-	struct callback_head		*exit_task_work;
+-
+ 	/* Keep this last, we don't need it for the fast path */
+-	struct work_struct		exit_work;
+-	struct list_head		tctx_list;
++	struct {
++		#if defined(CONFIG_UNIX)
++			struct socket		*ring_sock;
++		#endif
++		/* hashed buffered write serialization */
++		struct io_wq_hash		*hash_map;
++
++		/* Only used for accounting purposes */
++		struct user_struct		*user;
++		struct mm_struct		*mm_account;
++
++		/* ctx exit and cancelation */
++		struct callback_head		*exit_task_work;
++		struct work_struct		exit_work;
++		struct list_head		tctx_list;
++		struct completion		ref_comp;
++	};
+ };
+ 
+ struct io_uring_task {
+@@ -851,6 +849,8 @@ struct io_kiocb {
+ 	struct hlist_node		hash_node;
+ 	struct async_poll		*apoll;
+ 	struct io_wq_work		work;
++	const struct cred		*creds;
++
+ 	/* store used ubuf, so we can prevent reloading */
+ 	struct io_mapped_ubuf		*imu;
+ };
+@@ -1037,7 +1037,7 @@ static bool io_disarm_next(struct io_kiocb *req);
+ static void io_uring_del_task_file(unsigned long index);
+ static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
+ 					 struct task_struct *task,
+-					 struct files_struct *files);
++					 bool cancel_all);
+ static void io_uring_cancel_sqpoll(struct io_sq_data *sqd);
+ static struct io_rsrc_node *io_rsrc_node_alloc(struct io_ring_ctx *ctx);
+ 
+@@ -1106,15 +1106,14 @@ static void io_refs_resurrect(struct percpu_ref *ref, struct completion *compl)
+ 		percpu_ref_put(ref);
+ }
+ 
+-static bool io_match_task(struct io_kiocb *head,
+-			  struct task_struct *task,
+-			  struct files_struct *files)
++static bool io_match_task(struct io_kiocb *head, struct task_struct *task,
++			  bool cancel_all)
+ {
+ 	struct io_kiocb *req;
+ 
+ 	if (task && head->task != task)
+ 		return false;
+-	if (!files)
++	if (cancel_all)
+ 		return true;
+ 
+ 	io_for_each_link(req, head) {
+@@ -1196,7 +1195,7 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+ 	init_llist_head(&ctx->rsrc_put_llist);
+ 	INIT_LIST_HEAD(&ctx->tctx_list);
+ 	INIT_LIST_HEAD(&ctx->submit_state.comp.free_list);
+-	INIT_LIST_HEAD(&ctx->submit_state.comp.locked_free_list);
++	INIT_LIST_HEAD(&ctx->locked_free_list);
+ 	return ctx;
+ err:
+ 	kfree(ctx->dummy_ubuf);
+@@ -1230,8 +1229,8 @@ static void io_prep_async_work(struct io_kiocb *req)
+ 	const struct io_op_def *def = &io_op_defs[req->opcode];
+ 	struct io_ring_ctx *ctx = req->ctx;
+ 
+-	if (!req->work.creds)
+-		req->work.creds = get_current_cred();
++	if (!req->creds)
++		req->creds = get_current_cred();
+ 
+ 	req->work.list.next = NULL;
+ 	req->work.flags = 0;
+@@ -1593,8 +1592,6 @@ static void io_req_complete_post(struct io_kiocb *req, long res,
+ 	 * free_list cache.
+ 	 */
+ 	if (req_ref_put_and_test(req)) {
+-		struct io_comp_state *cs = &ctx->submit_state.comp;
+-
+ 		if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) {
+ 			if (req->flags & (REQ_F_LINK_TIMEOUT | REQ_F_FAIL_LINK))
+ 				io_disarm_next(req);
+@@ -1605,8 +1602,8 @@ static void io_req_complete_post(struct io_kiocb *req, long res,
+ 		}
+ 		io_dismantle_req(req);
+ 		io_put_task(req->task, 1);
+-		list_add(&req->compl.list, &cs->locked_free_list);
+-		cs->locked_free_nr++;
++		list_add(&req->compl.list, &ctx->locked_free_list);
++		ctx->locked_free_nr++;
+ 	} else {
+ 		if (!percpu_ref_tryget(&ctx->refs))
+ 			req = NULL;
+@@ -1661,8 +1658,8 @@ static void io_flush_cached_locked_reqs(struct io_ring_ctx *ctx,
+ 					struct io_comp_state *cs)
+ {
+ 	spin_lock_irq(&ctx->completion_lock);
+-	list_splice_init(&cs->locked_free_list, &cs->free_list);
+-	cs->locked_free_nr = 0;
++	list_splice_init(&ctx->locked_free_list, &cs->free_list);
++	ctx->locked_free_nr = 0;
+ 	spin_unlock_irq(&ctx->completion_lock);
+ }
+ 
+@@ -1678,7 +1675,7 @@ static bool io_flush_cached_reqs(struct io_ring_ctx *ctx)
+ 	 * locked cache, grab the lock and move them over to our submission
+ 	 * side cache.
+ 	 */
+-	if (READ_ONCE(cs->locked_free_nr) > IO_COMPL_BATCH)
++	if (READ_ONCE(ctx->locked_free_nr) > IO_COMPL_BATCH)
+ 		io_flush_cached_locked_reqs(ctx, cs);
+ 
+ 	nr = state->free_reqs;
+@@ -1747,9 +1744,9 @@ static void io_dismantle_req(struct io_kiocb *req)
+ 		percpu_ref_put(req->fixed_rsrc_refs);
+ 	if (req->async_data)
+ 		kfree(req->async_data);
+-	if (req->work.creds) {
+-		put_cred(req->work.creds);
+-		req->work.creds = NULL;
++	if (req->creds) {
++		put_cred(req->creds);
++		req->creds = NULL;
+ 	}
+ }
+ 
+@@ -1886,48 +1883,43 @@ static void ctx_flush_and_put(struct io_ring_ctx *ctx)
+ 	percpu_ref_put(&ctx->refs);
+ }
+ 
+-static bool __tctx_task_work(struct io_uring_task *tctx)
+-{
+-	struct io_ring_ctx *ctx = NULL;
+-	struct io_wq_work_list list;
+-	struct io_wq_work_node *node;
+-
+-	if (wq_list_empty(&tctx->task_list))
+-		return false;
+-
+-	spin_lock_irq(&tctx->task_lock);
+-	list = tctx->task_list;
+-	INIT_WQ_LIST(&tctx->task_list);
+-	spin_unlock_irq(&tctx->task_lock);
+-
+-	node = list.first;
+-	while (node) {
+-		struct io_wq_work_node *next = node->next;
+-		struct io_kiocb *req;
+-
+-		req = container_of(node, struct io_kiocb, io_task_work.node);
+-		if (req->ctx != ctx) {
+-			ctx_flush_and_put(ctx);
+-			ctx = req->ctx;
+-			percpu_ref_get(&ctx->refs);
+-		}
+-
+-		req->task_work.func(&req->task_work);
+-		node = next;
+-	}
+-
+-	ctx_flush_and_put(ctx);
+-	return list.first != NULL;
+-}
+-
+ static void tctx_task_work(struct callback_head *cb)
+ {
+-	struct io_uring_task *tctx = container_of(cb, struct io_uring_task, task_work);
++	struct io_uring_task *tctx = container_of(cb, struct io_uring_task,
++						  task_work);
+ 
+ 	clear_bit(0, &tctx->task_state);
+ 
+-	while (__tctx_task_work(tctx))
++	while (!wq_list_empty(&tctx->task_list)) {
++		struct io_ring_ctx *ctx = NULL;
++		struct io_wq_work_list list;
++		struct io_wq_work_node *node;
++
++		spin_lock_irq(&tctx->task_lock);
++		list = tctx->task_list;
++		INIT_WQ_LIST(&tctx->task_list);
++		spin_unlock_irq(&tctx->task_lock);
++
++		node = list.first;
++		while (node) {
++			struct io_wq_work_node *next = node->next;
++			struct io_kiocb *req = container_of(node, struct io_kiocb,
++							    io_task_work.node);
++
++			if (req->ctx != ctx) {
++				ctx_flush_and_put(ctx);
++				ctx = req->ctx;
++				percpu_ref_get(&ctx->refs);
++			}
++			req->task_work.func(&req->task_work);
++			node = next;
++		}
++
++		ctx_flush_and_put(ctx);
++		if (!list.first)
++			break;
+ 		cond_resched();
++	}
+ }
+ 
+ static int io_req_task_work_add(struct io_kiocb *req)
+@@ -2040,7 +2032,7 @@ static void __io_req_task_submit(struct io_kiocb *req)
+ 
+ 	/* ctx stays valid until unlock, even if we drop all our ctx->refs */
+ 	mutex_lock(&ctx->uring_lock);
+-	if (!(current->flags & PF_EXITING) && !current->in_execve)
++	if (!(req->task->flags & PF_EXITING) && !req->task->in_execve)
+ 		__io_queue_sqe(req);
+ 	else
+ 		io_req_complete_failed(req, -EFAULT);
+@@ -2235,12 +2227,6 @@ static inline unsigned int io_put_rw_kbuf(struct io_kiocb *req)
+ 
+ static inline bool io_run_task_work(void)
+ {
+-	/*
+-	 * Not safe to run on exiting task, and the task_work handling will
+-	 * not add work to such a task.
+-	 */
+-	if (unlikely(current->flags & PF_EXITING))
+-		return false;
+ 	if (current->task_works) {
+ 		__set_current_state(TASK_RUNNING);
+ 		task_work_run();
+@@ -5265,7 +5251,7 @@ static bool io_poll_remove_one(struct io_kiocb *req)
+  * Returns true if we found and killed one or more poll requests
+  */
+ static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk,
+-			       struct files_struct *files)
++			       bool cancel_all)
+ {
+ 	struct hlist_node *tmp;
+ 	struct io_kiocb *req;
+@@ -5277,7 +5263,7 @@ static bool io_poll_remove_all(struct io_ring_ctx *ctx, struct task_struct *tsk,
+ 
+ 		list = &ctx->cancel_hash[i];
+ 		hlist_for_each_entry_safe(req, tmp, list, hash_node) {
+-			if (io_match_task(req, tsk, files))
++			if (io_match_task(req, tsk, cancel_all))
+ 				posted += io_poll_remove_one(req);
+ 		}
+ 	}
+@@ -6115,8 +6101,8 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
+ 	const struct cred *creds = NULL;
+ 	int ret;
+ 
+-	if (req->work.creds && req->work.creds != current_cred())
+-		creds = override_creds(req->work.creds);
++	if (req->creds && req->creds != current_cred())
++		creds = override_creds(req->creds);
+ 
+ 	switch (req->opcode) {
+ 	case IORING_OP_NOP:
+@@ -6523,7 +6509,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ 	atomic_set(&req->refs, 2);
+ 	req->task = current;
+ 	req->result = 0;
+-	req->work.creds = NULL;
++	req->creds = NULL;
+ 
+ 	/* enforce forwards compatibility on users */
+ 	if (unlikely(sqe_flags & ~SQE_VALID_FLAGS))
+@@ -6539,10 +6525,10 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ 
+ 	personality = READ_ONCE(sqe->personality);
+ 	if (personality) {
+-		req->work.creds = xa_load(&ctx->personalities, personality);
+-		if (!req->work.creds)
++		req->creds = xa_load(&ctx->personalities, personality);
++		if (!req->creds)
+ 			return -EINVAL;
+-		get_cred(req->work.creds);
++		get_cred(req->creds);
+ 	}
+ 	state = &ctx->submit_state;
+ 
+@@ -8747,7 +8733,7 @@ static void io_ring_exit_work(struct work_struct *work)
+ 	 * as nobody else will be looking for them.
+ 	 */
+ 	do {
+-		io_uring_try_cancel_requests(ctx, NULL, NULL);
++		io_uring_try_cancel_requests(ctx, NULL, true);
+ 		if (ctx->sq_data) {
+ 			struct io_sq_data *sqd = ctx->sq_data;
+ 			struct task_struct *tsk;
+@@ -8798,14 +8784,14 @@ static void io_ring_exit_work(struct work_struct *work)
+ 
+ /* Returns true if we found and killed one or more timeouts */
+ static bool io_kill_timeouts(struct io_ring_ctx *ctx, struct task_struct *tsk,
+-			     struct files_struct *files)
++			     bool cancel_all)
+ {
+ 	struct io_kiocb *req, *tmp;
+ 	int canceled = 0;
+ 
+ 	spin_lock_irq(&ctx->completion_lock);
+ 	list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
+-		if (io_match_task(req, tsk, files)) {
++		if (io_match_task(req, tsk, cancel_all)) {
+ 			io_kill_timeout(req, -ECANCELED);
+ 			canceled++;
+ 		}
+@@ -8831,8 +8817,8 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ 		io_unregister_personality(ctx, index);
+ 	mutex_unlock(&ctx->uring_lock);
+ 
+-	io_kill_timeouts(ctx, NULL, NULL);
+-	io_poll_remove_all(ctx, NULL, NULL);
++	io_kill_timeouts(ctx, NULL, true);
++	io_poll_remove_all(ctx, NULL, true);
+ 
+ 	/* if we failed setting up the ctx, we might not have any rings */
+ 	io_iopoll_try_reap_events(ctx);
+@@ -8858,7 +8844,7 @@ static int io_uring_release(struct inode *inode, struct file *file)
+ 
+ struct io_task_cancel {
+ 	struct task_struct *task;
+-	struct files_struct *files;
++	bool all;
+ };
+ 
+ static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
+@@ -8867,30 +8853,29 @@ static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
+ 	struct io_task_cancel *cancel = data;
+ 	bool ret;
+ 
+-	if (cancel->files && (req->flags & REQ_F_LINK_TIMEOUT)) {
++	if (!cancel->all && (req->flags & REQ_F_LINK_TIMEOUT)) {
+ 		unsigned long flags;
+ 		struct io_ring_ctx *ctx = req->ctx;
+ 
+ 		/* protect against races with linked timeouts */
+ 		spin_lock_irqsave(&ctx->completion_lock, flags);
+-		ret = io_match_task(req, cancel->task, cancel->files);
++		ret = io_match_task(req, cancel->task, cancel->all);
+ 		spin_unlock_irqrestore(&ctx->completion_lock, flags);
+ 	} else {
+-		ret = io_match_task(req, cancel->task, cancel->files);
++		ret = io_match_task(req, cancel->task, cancel->all);
+ 	}
+ 	return ret;
+ }
+ 
+ static bool io_cancel_defer_files(struct io_ring_ctx *ctx,
+-				  struct task_struct *task,
+-				  struct files_struct *files)
++				  struct task_struct *task, bool cancel_all)
+ {
+ 	struct io_defer_entry *de;
+ 	LIST_HEAD(list);
+ 
+ 	spin_lock_irq(&ctx->completion_lock);
+ 	list_for_each_entry_reverse(de, &ctx->defer_list, list) {
+-		if (io_match_task(de->req, task, files)) {
++		if (io_match_task(de->req, task, cancel_all)) {
+ 			list_cut_position(&list, &ctx->defer_list, &de->list);
+ 			break;
+ 		}
+@@ -8934,9 +8919,9 @@ static bool io_uring_try_cancel_iowq(struct io_ring_ctx *ctx)
+ 
+ static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
+ 					 struct task_struct *task,
+-					 struct files_struct *files)
++					 bool cancel_all)
+ {
+-	struct io_task_cancel cancel = { .task = task, .files = files, };
++	struct io_task_cancel cancel = { .task = task, .all = cancel_all, };
+ 	struct io_uring_task *tctx = task ? task->io_uring : NULL;
+ 
+ 	while (1) {
+@@ -8956,7 +8941,7 @@ static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
+ 		}
+ 
+ 		/* SQPOLL thread does its own polling */
+-		if ((!(ctx->flags & IORING_SETUP_SQPOLL) && !files) ||
++		if ((!(ctx->flags & IORING_SETUP_SQPOLL) && cancel_all) ||
+ 		    (ctx->sq_data && ctx->sq_data->thread == current)) {
+ 			while (!list_empty_careful(&ctx->iopoll_list)) {
+ 				io_iopoll_try_reap_events(ctx);
+@@ -8964,10 +8949,11 @@ static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
+ 			}
+ 		}
+ 
+-		ret |= io_cancel_defer_files(ctx, task, files);
+-		ret |= io_poll_remove_all(ctx, task, files);
+-		ret |= io_kill_timeouts(ctx, task, files);
+-		ret |= io_run_task_work();
++		ret |= io_cancel_defer_files(ctx, task, cancel_all);
++		ret |= io_poll_remove_all(ctx, task, cancel_all);
++		ret |= io_kill_timeouts(ctx, task, cancel_all);
++		if (task)
++			ret |= io_run_task_work();
+ 		ret |= io_run_ctx_fallback(ctx);
+ 		if (!ret)
+ 			break;
+@@ -9072,7 +9058,7 @@ static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
+ 	return percpu_counter_sum(&tctx->inflight);
+ }
+ 
+-static void io_uring_try_cancel(struct files_struct *files)
++static void io_uring_try_cancel(bool cancel_all)
+ {
+ 	struct io_uring_task *tctx = current->io_uring;
+ 	struct io_tctx_node *node;
+@@ -9083,7 +9069,7 @@ static void io_uring_try_cancel(struct files_struct *files)
+ 
+ 		/* sqpoll task will cancel all its requests */
+ 		if (!ctx->sq_data)
+-			io_uring_try_cancel_requests(ctx, current, files);
++			io_uring_try_cancel_requests(ctx, current, cancel_all);
+ 	}
+ }
+ 
+@@ -9109,7 +9095,7 @@ static void io_uring_cancel_sqpoll(struct io_sq_data *sqd)
+ 		if (!inflight)
+ 			break;
+ 		list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
+-			io_uring_try_cancel_requests(ctx, current, NULL);
++			io_uring_try_cancel_requests(ctx, current, true);
+ 
+ 		prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
+ 		/*
+@@ -9133,6 +9119,7 @@ void __io_uring_cancel(struct files_struct *files)
+ 	struct io_uring_task *tctx = current->io_uring;
+ 	DEFINE_WAIT(wait);
+ 	s64 inflight;
++	bool cancel_all = !files;
+ 
+ 	if (tctx->io_wq)
+ 		io_wq_exit_start(tctx->io_wq);
+@@ -9141,10 +9128,10 @@ void __io_uring_cancel(struct files_struct *files)
+ 	atomic_inc(&tctx->in_idle);
+ 	do {
+ 		/* read completions before cancelations */
+-		inflight = tctx_inflight(tctx, !!files);
++		inflight = tctx_inflight(tctx, !cancel_all);
+ 		if (!inflight)
+ 			break;
+-		io_uring_try_cancel(files);
++		io_uring_try_cancel(cancel_all);
+ 		prepare_to_wait(&tctx->wait, &wait, TASK_UNINTERRUPTIBLE);
+ 
+ 		/*
+@@ -9152,14 +9139,14 @@ void __io_uring_cancel(struct files_struct *files)
+ 		 * avoids a race where a completion comes in before we did
+ 		 * prepare_to_wait().
+ 		 */
+-		if (inflight == tctx_inflight(tctx, !!files))
++		if (inflight == tctx_inflight(tctx, !cancel_all))
+ 			schedule();
+ 		finish_wait(&tctx->wait, &wait);
+ 	} while (1);
+ 	atomic_dec(&tctx->in_idle);
+ 
+ 	io_uring_clean_tctx(tctx);
+-	if (!files) {
++	if (cancel_all) {
+ 		/* for exec all current's requests should be gone, kill tctx */
+ 		__io_uring_free(current);
+ 	}
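The other recurring change in this file replaces the struct files_struct *files
cancellation argument with an explicit bool cancel_all; a NULL files pointer used to be
the implicit "cancel everything" signal. After the change the matching rules read
roughly as follows (sketch; the per-request file-tracking branch is outside the quoted
hunks):

	/*
	 * io_match_task(head, task, cancel_all):
	 *   task set and head->task != task -> no match
	 *   cancel_all                      -> match
	 *   otherwise                       -> match only if a request in the
	 *                                      link chain is tracked as
	 *                                      file-backed
	 */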
+diff --git a/fs/jfs/jfs_logmgr.c b/fs/jfs/jfs_logmgr.c
+index 9330eff210e0c..78fd136ac13b9 100644
+--- a/fs/jfs/jfs_logmgr.c
++++ b/fs/jfs/jfs_logmgr.c
+@@ -1324,6 +1324,7 @@ int lmLogInit(struct jfs_log * log)
+ 		} else {
+ 			if (!uuid_equal(&logsuper->uuid, &log->uuid)) {
+ 				jfs_warn("wrong uuid on JFS log device");
++				rc = -EINVAL;
+ 				goto errout20;
+ 			}
+ 			log->size = le32_to_cpu(logsuper->size);
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index e6ec6f09ac6e4..7c45ac3c3b0b5 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -75,6 +75,13 @@ void nfs_mark_delegation_referenced(struct nfs_delegation *delegation)
+ 	set_bit(NFS_DELEGATION_REFERENCED, &delegation->flags);
+ }
+ 
++static void nfs_mark_return_delegation(struct nfs_server *server,
++				       struct nfs_delegation *delegation)
++{
++	set_bit(NFS_DELEGATION_RETURN, &delegation->flags);
++	set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state);
++}
++
+ static bool
+ nfs4_is_valid_delegation(const struct nfs_delegation *delegation,
+ 		fmode_t flags)
+@@ -293,6 +300,7 @@ nfs_start_delegation_return_locked(struct nfs_inode *nfsi)
+ 		goto out;
+ 	spin_lock(&delegation->lock);
+ 	if (!test_and_set_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) {
++		clear_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags);
+ 		/* Refcount matched in nfs_end_delegation_return() */
+ 		ret = nfs_get_delegation(delegation);
+ 	}
+@@ -314,16 +322,17 @@ nfs_start_delegation_return(struct nfs_inode *nfsi)
+ 	return delegation;
+ }
+ 
+-static void
+-nfs_abort_delegation_return(struct nfs_delegation *delegation,
+-		struct nfs_client *clp)
++static void nfs_abort_delegation_return(struct nfs_delegation *delegation,
++					struct nfs_client *clp, int err)
+ {
+ 
+ 	spin_lock(&delegation->lock);
+ 	clear_bit(NFS_DELEGATION_RETURNING, &delegation->flags);
+-	set_bit(NFS_DELEGATION_RETURN, &delegation->flags);
++	if (err == -EAGAIN) {
++		set_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags);
++		set_bit(NFS4CLNT_DELEGRETURN_DELAYED, &clp->cl_state);
++	}
+ 	spin_unlock(&delegation->lock);
+-	set_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state);
+ }
+ 
+ static struct nfs_delegation *
+@@ -539,7 +548,7 @@ static int nfs_end_delegation_return(struct inode *inode, struct nfs_delegation
+ 	} while (err == 0);
+ 
+ 	if (err) {
+-		nfs_abort_delegation_return(delegation, clp);
++		nfs_abort_delegation_return(delegation, clp, err);
+ 		goto out;
+ 	}
+ 
+@@ -568,6 +577,7 @@ static bool nfs_delegation_need_return(struct nfs_delegation *delegation)
+ 	if (ret)
+ 		clear_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags);
+ 	if (test_bit(NFS_DELEGATION_RETURNING, &delegation->flags) ||
++	    test_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags) ||
+ 	    test_bit(NFS_DELEGATION_REVOKED, &delegation->flags))
+ 		ret = false;
+ 
+@@ -647,6 +657,38 @@ out:
+ 	return err;
+ }
+ 
++static bool nfs_server_clear_delayed_delegations(struct nfs_server *server)
++{
++	struct nfs_delegation *d;
++	bool ret = false;
++
++	list_for_each_entry_rcu (d, &server->delegations, super_list) {
++		if (!test_bit(NFS_DELEGATION_RETURN_DELAYED, &d->flags))
++			continue;
++		nfs_mark_return_delegation(server, d);
++		clear_bit(NFS_DELEGATION_RETURN_DELAYED, &d->flags);
++		ret = true;
++	}
++	return ret;
++}
++
++static bool nfs_client_clear_delayed_delegations(struct nfs_client *clp)
++{
++	struct nfs_server *server;
++	bool ret = false;
++
++	if (!test_and_clear_bit(NFS4CLNT_DELEGRETURN_DELAYED, &clp->cl_state))
++		goto out;
++	rcu_read_lock();
++	list_for_each_entry_rcu (server, &clp->cl_superblocks, client_link) {
++		if (nfs_server_clear_delayed_delegations(server))
++			ret = true;
++	}
++	rcu_read_unlock();
++out:
++	return ret;
++}
++
+ /**
+  * nfs_client_return_marked_delegations - return previously marked delegations
+  * @clp: nfs_client to process
+@@ -659,8 +701,14 @@ out:
+  */
+ int nfs_client_return_marked_delegations(struct nfs_client *clp)
+ {
+-	return nfs_client_for_each_server(clp,
+-			nfs_server_return_marked_delegations, NULL);
++	int err = nfs_client_for_each_server(
++		clp, nfs_server_return_marked_delegations, NULL);
++	if (err)
++		return err;
++	/* If a return was delayed, sleep to prevent hard looping */
++	if (nfs_client_clear_delayed_delegations(clp))
++		ssleep(1);
++	return 0;
+ }
+ 
+ /**
+@@ -775,13 +823,6 @@ static void nfs_mark_return_if_closed_delegation(struct nfs_server *server,
+ 	set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state);
+ }
+ 
+-static void nfs_mark_return_delegation(struct nfs_server *server,
+-		struct nfs_delegation *delegation)
+-{
+-	set_bit(NFS_DELEGATION_RETURN, &delegation->flags);
+-	set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state);
+-}
+-
+ static bool nfs_server_mark_return_all_delegations(struct nfs_server *server)
+ {
+ 	struct nfs_delegation *delegation;
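Taken together, these delegation.c hunks park a return that failed with -EAGAIN instead
of immediately re-arming it. The intended flow, as a sketch:

	/*
	 * return fails with -EAGAIN
	 *   -> NFS_DELEGATION_RETURN_DELAYED set on the delegation
	 *   -> NFS4CLNT_DELEGRETURN_DELAYED set on the client
	 * at the end of nfs_client_return_marked_delegations()
	 *   -> delayed flags are converted back into NFS_DELEGATION_RETURN
	 *   -> ssleep(1), so the state manager cannot spin hard against a
	 *      server that keeps answering EAGAIN
	 */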
+diff --git a/fs/nfs/delegation.h b/fs/nfs/delegation.h
+index c19b4fd207812..1c378992b7c0f 100644
+--- a/fs/nfs/delegation.h
++++ b/fs/nfs/delegation.h
+@@ -36,6 +36,7 @@ enum {
+ 	NFS_DELEGATION_REVOKED,
+ 	NFS_DELEGATION_TEST_EXPIRED,
+ 	NFS_DELEGATION_INODE_FREEING,
++	NFS_DELEGATION_RETURN_DELAYED,
+ };
+ 
+ int nfs_inode_set_delegation(struct inode *inode, const struct cred *cred,
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 2d30a4da49fa0..2e894fec036b0 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -700,8 +700,8 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ {
+ 	struct nfs_direct_req *dreq = hdr->dreq;
+ 	struct nfs_commit_info cinfo;
+-	bool request_commit = false;
+ 	struct nfs_page *req = nfs_list_entry(hdr->pages.next);
++	int flags = NFS_ODIRECT_DONE;
+ 
+ 	nfs_init_cinfo_from_dreq(&cinfo, dreq);
+ 
+@@ -713,15 +713,9 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ 
+ 	nfs_direct_count_bytes(dreq, hdr);
+ 	if (hdr->good_bytes != 0 && nfs_write_need_commit(hdr)) {
+-		switch (dreq->flags) {
+-		case 0:
++		if (!dreq->flags)
+ 			dreq->flags = NFS_ODIRECT_DO_COMMIT;
+-			request_commit = true;
+-			break;
+-		case NFS_ODIRECT_RESCHED_WRITES:
+-		case NFS_ODIRECT_DO_COMMIT:
+-			request_commit = true;
+-		}
++		flags = dreq->flags;
+ 	}
+ 	spin_unlock(&dreq->lock);
+ 
+@@ -729,12 +723,15 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ 
+ 		req = nfs_list_entry(hdr->pages.next);
+ 		nfs_list_remove_request(req);
+-		if (request_commit) {
++		if (flags == NFS_ODIRECT_DO_COMMIT) {
+ 			kref_get(&req->wb_kref);
+ 			memcpy(&req->wb_verf, &hdr->verf.verifier,
+ 			       sizeof(req->wb_verf));
+ 			nfs_mark_request_commit(req, hdr->lseg, &cinfo,
+ 				hdr->ds_commit_idx);
++		} else if (flags == NFS_ODIRECT_RESCHED_WRITES) {
++			kref_get(&req->wb_kref);
++			nfs_mark_request_commit(req, NULL, &cinfo, 0);
+ 		}
+ 		nfs_unlock_and_release_request(req);
+ 	}
+diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
+index c4c021c6ebbd8..d743629e05e12 100644
+--- a/fs/nfs/fscache.c
++++ b/fs/nfs/fscache.c
+@@ -385,12 +385,15 @@ static void nfs_readpage_from_fscache_complete(struct page *page,
+ 		 "NFS: readpage_from_fscache_complete (0x%p/0x%p/%d)\n",
+ 		 page, context, error);
+ 
+-	/* if the read completes with an error, we just unlock the page and let
+-	 * the VM reissue the readpage */
+-	if (!error) {
++	/*
++	 * If the read completes with an error, mark the page with PG_checked,
++	 * unlock the page, and let the VM reissue the readpage.
++	 */
++	if (!error)
+ 		SetPageUptodate(page);
+-		unlock_page(page);
+-	}
++	else
++		SetPageChecked(page);
++	unlock_page(page);
+ }
+ 
+ /*
+@@ -405,6 +408,11 @@ int __nfs_readpage_from_fscache(struct nfs_open_context *ctx,
+ 		 "NFS: readpage_from_fscache(fsc:%p/p:%p(i:%lx f:%lx)/0x%p)\n",
+ 		 nfs_i_fscache(inode), page, page->index, page->flags, inode);
+ 
++	if (PageChecked(page)) {
++		ClearPageChecked(page);
++		return 1;
++	}
++
+ 	ret = fscache_read_or_alloc_page(nfs_i_fscache(inode),
+ 					 page,
+ 					 nfs_readpage_from_fscache_complete,
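Here PG_checked is repurposed as a one-shot "fscache read failed" marker. The retry
flow, sketched:

	/*
	 * 1. __nfs_readpage_from_fscache() submits the read to fscache
	 * 2. on error the completion sets PG_checked and unlocks the page
	 *    without marking it uptodate
	 * 3. the VM re-issues ->readpage(); PageChecked() is now true, so the
	 *    fscache path clears the bit and returns 1, and the read is served
	 *    from the NFS server instead
	 */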
+diff --git a/fs/nfs/getroot.c b/fs/nfs/getroot.c
+index aaeeb4659bff3..59355c106eceb 100644
+--- a/fs/nfs/getroot.c
++++ b/fs/nfs/getroot.c
+@@ -67,7 +67,7 @@ static int nfs_superblock_set_dummy_root(struct super_block *sb, struct inode *i
+ int nfs_get_root(struct super_block *s, struct fs_context *fc)
+ {
+ 	struct nfs_fs_context *ctx = nfs_fc2context(fc);
+-	struct nfs_server *server = NFS_SB(s);
++	struct nfs_server *server = NFS_SB(s), *clone_server;
+ 	struct nfs_fsinfo fsinfo;
+ 	struct dentry *root;
+ 	struct inode *inode;
+@@ -127,7 +127,7 @@ int nfs_get_root(struct super_block *s, struct fs_context *fc)
+ 	}
+ 	spin_unlock(&root->d_lock);
+ 	fc->root = root;
+-	if (NFS_SB(s)->caps & NFS_CAP_SECURITY_LABEL)
++	if (server->caps & NFS_CAP_SECURITY_LABEL)
+ 		kflags |= SECURITY_LSM_NATIVE_LABELS;
+ 	if (ctx->clone_data.sb) {
+ 		if (d_inode(fc->root)->i_fop != &nfs_dir_operations) {
+@@ -137,15 +137,19 @@ int nfs_get_root(struct super_block *s, struct fs_context *fc)
+ 		/* clone lsm security options from the parent to the new sb */
+ 		error = security_sb_clone_mnt_opts(ctx->clone_data.sb,
+ 						   s, kflags, &kflags_out);
++		if (error)
++			goto error_splat_root;
++		clone_server = NFS_SB(ctx->clone_data.sb);
++		server->has_sec_mnt_opts = clone_server->has_sec_mnt_opts;
+ 	} else {
+ 		error = security_sb_set_mnt_opts(s, fc->security,
+ 							kflags, &kflags_out);
+ 	}
+ 	if (error)
+ 		goto error_splat_root;
+-	if (NFS_SB(s)->caps & NFS_CAP_SECURITY_LABEL &&
++	if (server->caps & NFS_CAP_SECURITY_LABEL &&
+ 		!(kflags_out & SECURITY_LSM_NATIVE_LABELS))
+-		NFS_SB(s)->caps &= ~NFS_CAP_SECURITY_LABEL;
++		server->caps &= ~NFS_CAP_SECURITY_LABEL;
+ 
+ 	nfs_setsecurity(inode, fsinfo.fattr, fsinfo.fattr->label);
+ 	error = 0;
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 529c4099f482a..1acae1716df12 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1101,6 +1101,7 @@ EXPORT_SYMBOL_GPL(nfs_inode_attach_open_context);
+ void nfs_file_set_open_context(struct file *filp, struct nfs_open_context *ctx)
+ {
+ 	filp->private_data = get_nfs_open_context(ctx);
++	set_bit(NFS_CONTEXT_FILE_OPEN, &ctx->flags);
+ 	if (list_empty(&ctx->list))
+ 		nfs_inode_attach_open_context(ctx);
+ }
+@@ -1120,6 +1121,8 @@ struct nfs_open_context *nfs_find_open_context(struct inode *inode, const struct
+ 			continue;
+ 		if ((pos->mode & (FMODE_READ|FMODE_WRITE)) != mode)
+ 			continue;
++		if (!test_bit(NFS_CONTEXT_FILE_OPEN, &pos->flags))
++			continue;
+ 		ctx = get_nfs_open_context(pos);
+ 		if (ctx)
+ 			break;
+@@ -1135,6 +1138,7 @@ void nfs_file_clear_open_context(struct file *filp)
+ 	if (ctx) {
+ 		struct inode *inode = d_inode(ctx->dentry);
+ 
++		clear_bit(NFS_CONTEXT_FILE_OPEN, &ctx->flags);
+ 		/*
+ 		 * We fatal error on write before. Try to writeback
+ 		 * every page again.
+@@ -2066,24 +2070,22 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ 	} else {
+ 		nfsi->cache_validity |=
+ 			save_cache_validity & NFS_INO_INVALID_CHANGE;
+-		cache_revalidated = false;
++		if (!have_delegation ||
++		    (nfsi->cache_validity & NFS_INO_INVALID_CHANGE) != 0)
++			cache_revalidated = false;
+ 	}
+ 
+-	if (fattr->valid & NFS_ATTR_FATTR_MTIME) {
++	if (fattr->valid & NFS_ATTR_FATTR_MTIME)
+ 		inode->i_mtime = fattr->mtime;
+-	} else if (fattr_supported & NFS_ATTR_FATTR_MTIME) {
++	else if (fattr_supported & NFS_ATTR_FATTR_MTIME)
+ 		nfsi->cache_validity |=
+ 			save_cache_validity & NFS_INO_INVALID_MTIME;
+-		cache_revalidated = false;
+-	}
+ 
+-	if (fattr->valid & NFS_ATTR_FATTR_CTIME) {
++	if (fattr->valid & NFS_ATTR_FATTR_CTIME)
+ 		inode->i_ctime = fattr->ctime;
+-	} else if (fattr_supported & NFS_ATTR_FATTR_CTIME) {
++	else if (fattr_supported & NFS_ATTR_FATTR_CTIME)
+ 		nfsi->cache_validity |=
+ 			save_cache_validity & NFS_INO_INVALID_CTIME;
+-		cache_revalidated = false;
+-	}
+ 
+ 	/* Check if our cached file size is stale */
+ 	if (fattr->valid & NFS_ATTR_FATTR_SIZE) {
+@@ -2111,19 +2113,15 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ 			fattr->du.nfs3.used = 0;
+ 			fattr->valid |= NFS_ATTR_FATTR_SPACE_USED;
+ 		}
+-	} else {
++	} else
+ 		nfsi->cache_validity |=
+ 			save_cache_validity & NFS_INO_INVALID_SIZE;
+-		cache_revalidated = false;
+-	}
+ 
+ 	if (fattr->valid & NFS_ATTR_FATTR_ATIME)
+ 		inode->i_atime = fattr->atime;
+-	else if (fattr_supported & NFS_ATTR_FATTR_ATIME) {
++	else if (fattr_supported & NFS_ATTR_FATTR_ATIME)
+ 		nfsi->cache_validity |=
+ 			save_cache_validity & NFS_INO_INVALID_ATIME;
+-		cache_revalidated = false;
+-	}
+ 
+ 	if (fattr->valid & NFS_ATTR_FATTR_MODE) {
+ 		if ((inode->i_mode & S_IALLUGO) != (fattr->mode & S_IALLUGO)) {
+@@ -2134,11 +2132,9 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ 				| NFS_INO_INVALID_ACL;
+ 			attr_changed = true;
+ 		}
+-	} else if (fattr_supported & NFS_ATTR_FATTR_MODE) {
++	} else if (fattr_supported & NFS_ATTR_FATTR_MODE)
+ 		nfsi->cache_validity |=
+ 			save_cache_validity & NFS_INO_INVALID_MODE;
+-		cache_revalidated = false;
+-	}
+ 
+ 	if (fattr->valid & NFS_ATTR_FATTR_OWNER) {
+ 		if (!uid_eq(inode->i_uid, fattr->uid)) {
+@@ -2147,11 +2143,9 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ 			inode->i_uid = fattr->uid;
+ 			attr_changed = true;
+ 		}
+-	} else if (fattr_supported & NFS_ATTR_FATTR_OWNER) {
++	} else if (fattr_supported & NFS_ATTR_FATTR_OWNER)
+ 		nfsi->cache_validity |=
+ 			save_cache_validity & NFS_INO_INVALID_OTHER;
+-		cache_revalidated = false;
+-	}
+ 
+ 	if (fattr->valid & NFS_ATTR_FATTR_GROUP) {
+ 		if (!gid_eq(inode->i_gid, fattr->gid)) {
+@@ -2160,11 +2154,9 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ 			inode->i_gid = fattr->gid;
+ 			attr_changed = true;
+ 		}
+-	} else if (fattr_supported & NFS_ATTR_FATTR_GROUP) {
++	} else if (fattr_supported & NFS_ATTR_FATTR_GROUP)
+ 		nfsi->cache_validity |=
+ 			save_cache_validity & NFS_INO_INVALID_OTHER;
+-		cache_revalidated = false;
+-	}
+ 
+ 	if (fattr->valid & NFS_ATTR_FATTR_NLINK) {
+ 		if (inode->i_nlink != fattr->nlink) {
+@@ -2173,30 +2165,24 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
+ 			set_nlink(inode, fattr->nlink);
+ 			attr_changed = true;
+ 		}
+-	} else if (fattr_supported & NFS_ATTR_FATTR_NLINK) {
++	} else if (fattr_supported & NFS_ATTR_FATTR_NLINK)
+ 		nfsi->cache_validity |=
+ 			save_cache_validity & NFS_INO_INVALID_NLINK;
+-		cache_revalidated = false;
+-	}
+ 
+ 	if (fattr->valid & NFS_ATTR_FATTR_SPACE_USED) {
+ 		/*
+ 		 * report the blocks in 512byte units
+ 		 */
+ 		inode->i_blocks = nfs_calc_block_size(fattr->du.nfs3.used);
+-	} else if (fattr_supported & NFS_ATTR_FATTR_SPACE_USED) {
++	} else if (fattr_supported & NFS_ATTR_FATTR_SPACE_USED)
+ 		nfsi->cache_validity |=
+ 			save_cache_validity & NFS_INO_INVALID_BLOCKS;
+-		cache_revalidated = false;
+-	}
+ 
+-	if (fattr->valid & NFS_ATTR_FATTR_BLOCKS_USED) {
++	if (fattr->valid & NFS_ATTR_FATTR_BLOCKS_USED)
+ 		inode->i_blocks = fattr->du.nfs2.blocks;
+-	} else if (fattr_supported & NFS_ATTR_FATTR_BLOCKS_USED) {
++	else if (fattr_supported & NFS_ATTR_FATTR_BLOCKS_USED)
+ 		nfsi->cache_validity |=
+ 			save_cache_validity & NFS_INO_INVALID_BLOCKS;
+-		cache_revalidated = false;
+-	}
+ 
+ 	/* Update attrtimeo value if we're out of the unstable period */
+ 	if (attr_changed) {
+diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c
+index 5c4e23abc3451..2299446b3b89b 100644
+--- a/fs/nfs/nfs3proc.c
++++ b/fs/nfs/nfs3proc.c
+@@ -385,7 +385,7 @@ nfs3_proc_create(struct inode *dir, struct dentry *dentry, struct iattr *sattr,
+ 				break;
+ 
+ 			case NFS3_CREATE_UNCHECKED:
+-				goto out;
++				goto out_release_acls;
+ 		}
+ 		nfs_fattr_init(data->res.dir_attr);
+ 		nfs_fattr_init(data->res.fattr);
+@@ -751,7 +751,7 @@ nfs3_proc_mknod(struct inode *dir, struct dentry *dentry, struct iattr *sattr,
+ 		break;
+ 	default:
+ 		status = -EINVAL;
+-		goto out;
++		goto out_release_acls;
+ 	}
+ 
+ 	d_alias = nfs3_do_create(dir, dentry, data);
+diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
+index 543d916f79abb..3e344bec3647b 100644
+--- a/fs/nfs/nfs4_fs.h
++++ b/fs/nfs/nfs4_fs.h
+@@ -45,6 +45,7 @@ enum nfs4_client_state {
+ 	NFS4CLNT_RECALL_RUNNING,
+ 	NFS4CLNT_RECALL_ANY_LAYOUT_READ,
+ 	NFS4CLNT_RECALL_ANY_LAYOUT_RW,
++	NFS4CLNT_DELEGRETURN_DELAYED,
+ };
+ 
+ #define NFS4_RENEW_TIMEOUT		0x01
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index 42719384e25fe..28431acd1230b 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -197,8 +197,11 @@ void nfs40_shutdown_client(struct nfs_client *clp)
+ 
+ struct nfs_client *nfs4_alloc_client(const struct nfs_client_initdata *cl_init)
+ {
+-	int err;
++	char buf[INET6_ADDRSTRLEN + 1];
++	const char *ip_addr = cl_init->ip_addr;
+ 	struct nfs_client *clp = nfs_alloc_client(cl_init);
++	int err;
++
+ 	if (IS_ERR(clp))
+ 		return clp;
+ 
+@@ -222,6 +225,44 @@ struct nfs_client *nfs4_alloc_client(const struct nfs_client_initdata *cl_init)
+ 	init_waitqueue_head(&clp->cl_lock_waitq);
+ #endif
+ 	INIT_LIST_HEAD(&clp->pending_cb_stateids);
++
++	if (cl_init->minorversion != 0)
++		__set_bit(NFS_CS_INFINITE_SLOTS, &clp->cl_flags);
++	__set_bit(NFS_CS_DISCRTRY, &clp->cl_flags);
++	__set_bit(NFS_CS_NO_RETRANS_TIMEOUT, &clp->cl_flags);
++
++	/*
++	 * Set up the connection to the server before we add it to the
++	 * global list.
++	 */
++	err = nfs_create_rpc_client(clp, cl_init, RPC_AUTH_GSS_KRB5I);
++	if (err == -EINVAL)
++		err = nfs_create_rpc_client(clp, cl_init, RPC_AUTH_UNIX);
++	if (err < 0)
++		goto error;
++
++	/* If no clientaddr= option was specified, find a usable cb address */
++	if (ip_addr == NULL) {
++		struct sockaddr_storage cb_addr;
++		struct sockaddr *sap = (struct sockaddr *)&cb_addr;
++
++		err = rpc_localaddr(clp->cl_rpcclient, sap, sizeof(cb_addr));
++		if (err < 0)
++			goto error;
++		err = rpc_ntop(sap, buf, sizeof(buf));
++		if (err < 0)
++			goto error;
++		ip_addr = (const char *)buf;
++	}
++	strlcpy(clp->cl_ipaddr, ip_addr, sizeof(clp->cl_ipaddr));
++
++	err = nfs_idmap_new(clp);
++	if (err < 0) {
++		dprintk("%s: failed to create idmapper. Error = %d\n",
++			__func__, err);
++		goto error;
++	}
++	__set_bit(NFS_CS_IDMAP, &clp->cl_res_state);
+ 	return clp;
+ 
+ error:
+@@ -372,8 +413,6 @@ static int nfs4_init_client_minor_version(struct nfs_client *clp)
+ struct nfs_client *nfs4_init_client(struct nfs_client *clp,
+ 				    const struct nfs_client_initdata *cl_init)
+ {
+-	char buf[INET6_ADDRSTRLEN + 1];
+-	const char *ip_addr = cl_init->ip_addr;
+ 	struct nfs_client *old;
+ 	int error;
+ 
+@@ -381,43 +420,6 @@ struct nfs_client *nfs4_init_client(struct nfs_client *clp,
+ 		/* the client is initialised already */
+ 		return clp;
+ 
+-	/* Check NFS protocol revision and initialize RPC op vector */
+-	clp->rpc_ops = &nfs_v4_clientops;
+-
+-	if (clp->cl_minorversion != 0)
+-		__set_bit(NFS_CS_INFINITE_SLOTS, &clp->cl_flags);
+-	__set_bit(NFS_CS_DISCRTRY, &clp->cl_flags);
+-	__set_bit(NFS_CS_NO_RETRANS_TIMEOUT, &clp->cl_flags);
+-
+-	error = nfs_create_rpc_client(clp, cl_init, RPC_AUTH_GSS_KRB5I);
+-	if (error == -EINVAL)
+-		error = nfs_create_rpc_client(clp, cl_init, RPC_AUTH_UNIX);
+-	if (error < 0)
+-		goto error;
+-
+-	/* If no clientaddr= option was specified, find a usable cb address */
+-	if (ip_addr == NULL) {
+-		struct sockaddr_storage cb_addr;
+-		struct sockaddr *sap = (struct sockaddr *)&cb_addr;
+-
+-		error = rpc_localaddr(clp->cl_rpcclient, sap, sizeof(cb_addr));
+-		if (error < 0)
+-			goto error;
+-		error = rpc_ntop(sap, buf, sizeof(buf));
+-		if (error < 0)
+-			goto error;
+-		ip_addr = (const char *)buf;
+-	}
+-	strlcpy(clp->cl_ipaddr, ip_addr, sizeof(clp->cl_ipaddr));
+-
+-	error = nfs_idmap_new(clp);
+-	if (error < 0) {
+-		dprintk("%s: failed to create idmapper. Error = %d\n",
+-			__func__, error);
+-		goto error;
+-	}
+-	__set_bit(NFS_CS_IDMAP, &clp->cl_res_state);
+-
+ 	error = nfs4_init_client_minor_version(clp);
+ 	if (error < 0)
+ 		goto error;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index e653654c10bcd..451d3d56d80c7 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1205,12 +1205,12 @@ nfs4_update_changeattr_locked(struct inode *inode,
+ 	u64 change_attr = inode_peek_iversion_raw(inode);
+ 
+ 	cache_validity |= NFS_INO_INVALID_CTIME | NFS_INO_INVALID_MTIME;
++	if (S_ISDIR(inode->i_mode))
++		cache_validity |= NFS_INO_INVALID_DATA;
+ 
+ 	switch (NFS_SERVER(inode)->change_attr_type) {
+ 	case NFS4_CHANGE_TYPE_IS_UNDEFINED:
+-		break;
+-	case NFS4_CHANGE_TYPE_IS_TIME_METADATA:
+-		if ((s64)(change_attr - cinfo->after) > 0)
++		if (cinfo->after == change_attr)
+ 			goto out;
+ 		break;
+ 	default:
+@@ -1218,24 +1218,21 @@ nfs4_update_changeattr_locked(struct inode *inode,
+ 			goto out;
+ 	}
+ 
+-	if (cinfo->atomic && cinfo->before == change_attr) {
+-		nfsi->attrtimeo_timestamp = jiffies;
+-	} else {
+-		if (S_ISDIR(inode->i_mode)) {
+-			cache_validity |= NFS_INO_INVALID_DATA;
++	inode_set_iversion_raw(inode, cinfo->after);
++	if (!cinfo->atomic || cinfo->before != change_attr) {
++		if (S_ISDIR(inode->i_mode))
+ 			nfs_force_lookup_revalidate(inode);
+-		} else {
+-			if (!NFS_PROTO(inode)->have_delegation(inode,
+-							       FMODE_READ))
+-				cache_validity |= NFS_INO_REVAL_PAGECACHE;
+-		}
+ 
+-		if (cinfo->before != change_attr)
+-			cache_validity |= NFS_INO_INVALID_ACCESS |
+-					  NFS_INO_INVALID_ACL |
+-					  NFS_INO_INVALID_XATTR;
++		if (!NFS_PROTO(inode)->have_delegation(inode, FMODE_READ))
++			cache_validity |=
++				NFS_INO_INVALID_ACCESS | NFS_INO_INVALID_ACL |
++				NFS_INO_INVALID_SIZE | NFS_INO_INVALID_OTHER |
++				NFS_INO_INVALID_BLOCKS | NFS_INO_INVALID_NLINK |
++				NFS_INO_INVALID_MODE | NFS_INO_INVALID_XATTR |
++				NFS_INO_REVAL_PAGECACHE;
++		nfsi->attrtimeo = NFS_MINATTRTIMEO(inode);
+ 	}
+-	inode_set_iversion_raw(inode, cinfo->after);
++	nfsi->attrtimeo_timestamp = jiffies;
+ 	nfsi->read_cache_jiffies = timestamp;
+ 	nfsi->attr_gencount = nfs_inc_attr_generation_counter();
+ 	nfsi->cache_validity &= ~NFS_INO_INVALID_CHANGE;
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 2c01ee805306c..be960e47d7f61 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -966,10 +966,8 @@ void
+ pnfs_set_layout_stateid(struct pnfs_layout_hdr *lo, const nfs4_stateid *new,
+ 			const struct cred *cred, bool update_barrier)
+ {
+-	u32 oldseq, newseq, new_barrier = 0;
+-
+-	oldseq = be32_to_cpu(lo->plh_stateid.seqid);
+-	newseq = be32_to_cpu(new->seqid);
++	u32 oldseq = be32_to_cpu(lo->plh_stateid.seqid);
++	u32 newseq = be32_to_cpu(new->seqid);
+ 
+ 	if (!pnfs_layout_is_valid(lo)) {
+ 		pnfs_set_layout_cred(lo, cred);
+@@ -979,19 +977,21 @@ pnfs_set_layout_stateid(struct pnfs_layout_hdr *lo, const nfs4_stateid *new,
+ 		clear_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags);
+ 		return;
+ 	}
+-	if (pnfs_seqid_is_newer(newseq, oldseq)) {
++
++	if (pnfs_seqid_is_newer(newseq, oldseq))
+ 		nfs4_stateid_copy(&lo->plh_stateid, new);
+-		/*
+-		 * Because of wraparound, we want to keep the barrier
+-		 * "close" to the current seqids.
+-		 */
+-		new_barrier = newseq - atomic_read(&lo->plh_outstanding);
+-	}
+-	if (update_barrier)
+-		new_barrier = be32_to_cpu(new->seqid);
+-	else if (new_barrier == 0)
++
++	if (update_barrier) {
++		pnfs_barrier_update(lo, newseq);
+ 		return;
+-	pnfs_barrier_update(lo, new_barrier);
++	}
++	/*
++	 * Because of wraparound, we want to keep the barrier
++	 * "close" to the current seqids. We really only want to
++	 * get here from a layoutget call.
++	 */
++	if (atomic_read(&lo->plh_outstanding) == 1)
++		pnfs_barrier_update(lo, be32_to_cpu(lo->plh_stateid.seqid));
+ }
+ 
+ static bool
+@@ -2014,7 +2014,7 @@ lookup_again:
+ 	 * If the layout segment list is empty, but there are outstanding
+ 	 * layoutget calls, then they might be subject to a layoutrecall.
+ 	 */
+-	if (list_empty(&lo->plh_segs) &&
++	if ((list_empty(&lo->plh_segs) || !pnfs_layout_is_valid(lo)) &&
+ 	    atomic_read(&lo->plh_outstanding) != 0) {
+ 		spin_unlock(&ino->i_lock);
+ 		lseg = ERR_PTR(wait_var_event_killable(&lo->plh_outstanding,
+@@ -2390,11 +2390,13 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
+ 		goto out_forget;
+ 	}
+ 
++	if (!pnfs_layout_is_valid(lo) && !pnfs_is_first_layoutget(lo))
++		goto out_forget;
++
+ 	if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) {
+ 		/* existing state ID, make sure the sequence number matches. */
+ 		if (pnfs_layout_stateid_blocked(lo, &res->stateid)) {
+-			if (!pnfs_layout_is_valid(lo) &&
+-			    pnfs_is_first_layoutget(lo))
++			if (!pnfs_layout_is_valid(lo))
+ 				lo->plh_barrier = 0;
+ 			dprintk("%s forget reply due to sequence\n", __func__);
+ 			goto out_forget;
+@@ -2413,8 +2415,6 @@ pnfs_layout_process(struct nfs4_layoutget *lgp)
+ 		goto out_forget;
+ 	} else {
+ 		/* We have a completely new layout */
+-		if (!pnfs_is_first_layoutget(lo))
+-			goto out_forget;
+ 		pnfs_set_layout_stateid(lo, &res->stateid, lgp->cred, true);
+ 	}
+ 
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index 49d3389bd8130..1c2c0d08614ee 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -805,19 +805,16 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(nfs4_pnfs_ds_add);
+ 
+-static void nfs4_wait_ds_connect(struct nfs4_pnfs_ds *ds)
++static int nfs4_wait_ds_connect(struct nfs4_pnfs_ds *ds)
+ {
+ 	might_sleep();
+-	wait_on_bit(&ds->ds_state, NFS4DS_CONNECTING,
+-			TASK_KILLABLE);
++	return wait_on_bit(&ds->ds_state, NFS4DS_CONNECTING, TASK_KILLABLE);
+ }
+ 
+ static void nfs4_clear_ds_conn_bit(struct nfs4_pnfs_ds *ds)
+ {
+ 	smp_mb__before_atomic();
+-	clear_bit(NFS4DS_CONNECTING, &ds->ds_state);
+-	smp_mb__after_atomic();
+-	wake_up_bit(&ds->ds_state, NFS4DS_CONNECTING);
++	clear_and_wake_up_bit(NFS4DS_CONNECTING, &ds->ds_state);
+ }
+ 
+ static struct nfs_client *(*get_v3_ds_connect)(
+@@ -993,30 +990,33 @@ int nfs4_pnfs_ds_connect(struct nfs_server *mds_srv, struct nfs4_pnfs_ds *ds,
+ {
+ 	int err;
+ 
+-again:
+-	err = 0;
+-	if (test_and_set_bit(NFS4DS_CONNECTING, &ds->ds_state) == 0) {
+-		if (version == 3) {
+-			err = _nfs4_pnfs_v3_ds_connect(mds_srv, ds, timeo,
+-						       retrans);
+-		} else if (version == 4) {
+-			err = _nfs4_pnfs_v4_ds_connect(mds_srv, ds, timeo,
+-						       retrans, minor_version);
+-		} else {
+-			dprintk("%s: unsupported DS version %d\n", __func__,
+-				version);
+-			err = -EPROTONOSUPPORT;
+-		}
++	do {
++		err = nfs4_wait_ds_connect(ds);
++		if (err || ds->ds_clp)
++			goto out;
++		if (nfs4_test_deviceid_unavailable(devid))
++			return -ENODEV;
++	} while (test_and_set_bit(NFS4DS_CONNECTING, &ds->ds_state) != 0);
+ 
+-		nfs4_clear_ds_conn_bit(ds);
+-	} else {
+-		nfs4_wait_ds_connect(ds);
++	if (ds->ds_clp)
++		goto connect_done;
+ 
+-		/* what was waited on didn't connect AND didn't mark unavail */
+-		if (!ds->ds_clp && !nfs4_test_deviceid_unavailable(devid))
+-			goto again;
++	switch (version) {
++	case 3:
++		err = _nfs4_pnfs_v3_ds_connect(mds_srv, ds, timeo, retrans);
++		break;
++	case 4:
++		err = _nfs4_pnfs_v4_ds_connect(mds_srv, ds, timeo, retrans,
++					       minor_version);
++		break;
++	default:
++		dprintk("%s: unsupported DS version %d\n", __func__, version);
++		err = -EPROTONOSUPPORT;
+ 	}
+ 
++connect_done:
++	nfs4_clear_ds_conn_bit(ds);
++out:
+ 	/*
+ 	 * At this point the ds->ds_clp should be ready, but it might have
+ 	 * hit an error.
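The rewritten connect path is an instance of the common "wait, then claim" bit-lock
idiom: wait killably while someone else holds NFS4DS_CONNECTING, re-check the exit
conditions, and only then try to take the bit. Stripped to its shape (condensed from
the hunk above):

	do {
		err = wait_on_bit(&ds->ds_state, NFS4DS_CONNECTING,
				  TASK_KILLABLE);
		if (err || ds->ds_clp)		/* killed, or already connected */
			goto out;
		if (nfs4_test_deviceid_unavailable(devid))
			return -ENODEV;
	} while (test_and_set_bit(NFS4DS_CONNECTING, &ds->ds_state));

	/* this task owns the bit: connect, then wake all waiters at once */
	clear_and_wake_up_bit(NFS4DS_CONNECTING, &ds->ds_state);

clear_and_wake_up_bit() likewise replaces the open-coded clear/barrier/wake sequence in
nfs4_clear_ds_conn_bit().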
+diff --git a/fs/nfs/read.c b/fs/nfs/read.c
+index d2b6dce1f99f7..d13a22fc94a7d 100644
+--- a/fs/nfs/read.c
++++ b/fs/nfs/read.c
+@@ -363,22 +363,23 @@ int nfs_readpage(struct file *file, struct page *page)
+ 	} else
+ 		desc.ctx = get_nfs_open_context(nfs_file_open_context(file));
+ 
++	xchg(&desc.ctx->error, 0);
+ 	if (!IS_SYNC(inode)) {
+ 		ret = nfs_readpage_from_fscache(desc.ctx, inode, page);
+ 		if (ret == 0)
+-			goto out;
++			goto out_wait;
+ 	}
+ 
+-	xchg(&desc.ctx->error, 0);
+ 	nfs_pageio_init_read(&desc.pgio, inode, false,
+ 			     &nfs_async_read_completion_ops);
+ 
+ 	ret = readpage_async_filler(&desc, page);
++	if (ret)
++		goto out;
+ 
+-	if (!ret)
+-		nfs_pageio_complete_read(&desc.pgio, inode);
+-
++	nfs_pageio_complete_read(&desc.pgio, inode);
+ 	ret = desc.pgio.pg_error < 0 ? desc.pgio.pg_error : 0;
++out_wait:
+ 	if (!ret) {
+ 		ret = wait_on_page_locked_killable(page);
+ 		if (!PageUptodate(page) && !ret)
+diff --git a/fs/nfsd/nfs3acl.c b/fs/nfsd/nfs3acl.c
+index a1591feeea22c..5dfe7644a5172 100644
+--- a/fs/nfsd/nfs3acl.c
++++ b/fs/nfsd/nfs3acl.c
+@@ -172,7 +172,7 @@ static int nfs3svc_encode_getaclres(struct svc_rqst *rqstp, __be32 *p)
+ 	struct nfsd3_getaclres *resp = rqstp->rq_resp;
+ 	struct dentry *dentry = resp->fh.fh_dentry;
+ 	struct kvec *head = rqstp->rq_res.head;
+-	struct inode *inode = d_inode(dentry);
++	struct inode *inode;
+ 	unsigned int base;
+ 	int n;
+ 	int w;
+@@ -181,6 +181,7 @@ static int nfs3svc_encode_getaclres(struct svc_rqst *rqstp, __be32 *p)
+ 		return 0;
+ 	switch (resp->status) {
+ 	case nfs_ok:
++		inode = d_inode(dentry);
+ 		if (!svcxdr_encode_post_op_attr(rqstp, xdr, &resp->fh))
+ 			return 0;
+ 		if (xdr_stream_encode_u32(xdr, resp->mask) < 0)
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index b517a8794400e..90e81f6491ff5 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -2816,14 +2816,11 @@ move_to_confirmed(struct nfs4_client *clp)
+ 
+ 	lockdep_assert_held(&nn->client_lock);
+ 
+-	dprintk("NFSD: move_to_confirm nfs4_client %p\n", clp);
+ 	list_move(&clp->cl_idhash, &nn->conf_id_hashtbl[idhashval]);
+ 	rb_erase(&clp->cl_namenode, &nn->unconf_name_tree);
+ 	add_clp_to_name_tree(clp, &nn->conf_name_tree);
+-	if (!test_and_set_bit(NFSD4_CLIENT_CONFIRMED, &clp->cl_flags) &&
+-	    clp->cl_nfsd_dentry &&
+-	    clp->cl_nfsd_info_dentry)
+-		fsnotify_dentry(clp->cl_nfsd_info_dentry, FS_MODIFY);
++	set_bit(NFSD4_CLIENT_CONFIRMED, &clp->cl_flags);
++	trace_nfsd_clid_confirmed(&clp->cl_clientid);
+ 	renew_client_locked(clp);
+ }
+ 
+@@ -3471,6 +3468,8 @@ nfsd4_create_session(struct svc_rqst *rqstp,
+ 	/* cache solo and embedded create sessions under the client_lock */
+ 	nfsd4_cache_create_session(cr_ses, cs_slot, status);
+ 	spin_unlock(&nn->client_lock);
++	if (conf == unconf)
++		fsnotify_dentry(conf->cl_nfsd_info_dentry, FS_MODIFY);
+ 	/* init connection and backchannel */
+ 	nfsd4_init_conn(rqstp, conn, new);
+ 	nfsd4_put_session(new);
+@@ -4071,6 +4070,8 @@ nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
+ 	}
+ 	get_client_locked(conf);
+ 	spin_unlock(&nn->client_lock);
++	if (conf == unconf)
++		fsnotify_dentry(conf->cl_nfsd_info_dentry, FS_MODIFY);
+ 	nfsd4_probe_callback(conf);
+ 	spin_lock(&nn->client_lock);
+ 	put_client_renew_locked(conf);
+@@ -7229,7 +7230,6 @@ nfs4_client_to_reclaim(struct xdr_netobj name, struct xdr_netobj princhash,
+ 	unsigned int strhashval;
+ 	struct nfs4_client_reclaim *crp;
+ 
+-	trace_nfsd_clid_reclaim(nn, name.len, name.data);
+ 	crp = alloc_reclaim();
+ 	if (crp) {
+ 		strhashval = clientstr_hashval(name);
+@@ -7279,8 +7279,6 @@ nfsd4_find_reclaim_client(struct xdr_netobj name, struct nfsd_net *nn)
+ 	unsigned int strhashval;
+ 	struct nfs4_client_reclaim *crp = NULL;
+ 
+-	trace_nfsd_clid_find(nn, name.len, name.data);
+-
+ 	strhashval = clientstr_hashval(name);
+ 	list_for_each_entry(crp, &nn->reclaim_str_hashtbl[strhashval], cr_strhash) {
+ 		if (compare_blob(&crp->cr_name, &name) == 0) {
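
Both nfsd4_create_session() and nfsd4_setclientid_confirm() above now issue fsnotify_dentry() only after spin_unlock(&nn->client_lock), instead of from move_to_confirmed() while the lock is held. A small userspace sketch of that pattern, recording a decision under a spinlock and acting on it afterwards (pthread primitives stand in for the kernel ones; names are illustrative):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_spinlock_t client_lock;
static bool confirmed;

static void notify(void)                    /* stand-in for fsnotify_dentry() */
{
	puts("notified");
}

static void confirm_client(void)
{
	bool need_notify;

	pthread_spin_lock(&client_lock);
	confirmed = true;                   /* state change under the lock */
	need_notify = true;                 /* only record the decision here */
	pthread_spin_unlock(&client_lock);

	if (need_notify)                    /* heavier work with the lock dropped */
		notify();
}

int main(void)
{
	pthread_spin_init(&client_lock, PTHREAD_PROCESS_PRIVATE);
	confirm_client();
	pthread_spin_destroy(&client_lock);
	return (int)!confirmed;
}
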
+diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h
+index 27a93ebd1d809..f3961506fe16c 100644
+--- a/fs/nfsd/trace.h
++++ b/fs/nfsd/trace.h
+@@ -408,7 +408,6 @@ TRACE_EVENT(nfsd_dirent,
+ 		__entry->ino = ino;
+ 		__entry->len = namlen;
+ 		memcpy(__get_str(name), name, namlen);
+-		__assign_str(name, name);
+ 	),
+ 	TP_printk("fh_hash=0x%08x ino=%llu name=%.*s",
+ 		__entry->fh_hash, __entry->ino,
+@@ -511,6 +510,7 @@ DEFINE_EVENT(nfsd_clientid_class, nfsd_clid_##name, \
+ 	TP_PROTO(const clientid_t *clid), \
+ 	TP_ARGS(clid))
+ 
++DEFINE_CLIENTID_EVENT(confirmed);
+ DEFINE_CLIENTID_EVENT(expired);
+ DEFINE_CLIENTID_EVENT(purged);
+ DEFINE_CLIENTID_EVENT(renew);
+@@ -536,35 +536,6 @@ DEFINE_EVENT(nfsd_net_class, nfsd_##name, \
+ DEFINE_NET_EVENT(grace_start);
+ DEFINE_NET_EVENT(grace_complete);
+ 
+-DECLARE_EVENT_CLASS(nfsd_clid_class,
+-	TP_PROTO(const struct nfsd_net *nn,
+-		 unsigned int namelen,
+-		 const unsigned char *namedata),
+-	TP_ARGS(nn, namelen, namedata),
+-	TP_STRUCT__entry(
+-		__field(unsigned long long, boot_time)
+-		__field(unsigned int, namelen)
+-		__dynamic_array(unsigned char,  name, namelen)
+-	),
+-	TP_fast_assign(
+-		__entry->boot_time = nn->boot_time;
+-		__entry->namelen = namelen;
+-		memcpy(__get_dynamic_array(name), namedata, namelen);
+-	),
+-	TP_printk("boot_time=%16llx nfs4_clientid=%.*s",
+-		__entry->boot_time, __entry->namelen, __get_str(name))
+-)
+-
+-#define DEFINE_CLID_EVENT(name) \
+-DEFINE_EVENT(nfsd_clid_class, nfsd_clid_##name, \
+-	TP_PROTO(const struct nfsd_net *nn, \
+-		 unsigned int namelen, \
+-		 const unsigned char *namedata), \
+-	TP_ARGS(nn, namelen, namedata))
+-
+-DEFINE_CLID_EVENT(find);
+-DEFINE_CLID_EVENT(reclaim);
+-
+ TRACE_EVENT(nfsd_clid_inuse_err,
+ 	TP_PROTO(const struct nfs4_client *clp),
+ 	TP_ARGS(clp),
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 15adf1f6ab21f..46485c04740d9 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -1123,6 +1123,19 @@ out:
+ }
+ 
+ #ifdef CONFIG_NFSD_V3
++static int
++nfsd_filemap_write_and_wait_range(struct nfsd_file *nf, loff_t offset,
++				  loff_t end)
++{
++	struct address_space *mapping = nf->nf_file->f_mapping;
++	int ret = filemap_fdatawrite_range(mapping, offset, end);
++
++	if (ret)
++		return ret;
++	filemap_fdatawait_range_keep_errors(mapping, offset, end);
++	return 0;
++}
++
+ /*
+  * Commit all pending writes to stable storage.
+  *
+@@ -1153,10 +1166,11 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ 	if (err)
+ 		goto out;
+ 	if (EX_ISSYNC(fhp->fh_export)) {
+-		int err2;
++		int err2 = nfsd_filemap_write_and_wait_range(nf, offset, end);
+ 
+ 		down_write(&nf->nf_rwsem);
+-		err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
++		if (!err2)
++			err2 = vfs_fsync_range(nf->nf_file, offset, end, 0);
+ 		switch (err2) {
+ 		case 0:
+ 			nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net,
+diff --git a/fs/orangefs/super.c b/fs/orangefs/super.c
+index ee5efdc35cc1e..2f2e430461b21 100644
+--- a/fs/orangefs/super.c
++++ b/fs/orangefs/super.c
+@@ -209,7 +209,7 @@ static int orangefs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	buf->f_bavail = (sector_t) new_op->downcall.resp.statfs.blocks_avail;
+ 	buf->f_files = (sector_t) new_op->downcall.resp.statfs.files_total;
+ 	buf->f_ffree = (sector_t) new_op->downcall.resp.statfs.files_avail;
+-	buf->f_frsize = sb->s_blocksize;
++	buf->f_frsize = 0;
+ 
+ out_op_release:
+ 	op_release(new_op);
+diff --git a/fs/seq_file.c b/fs/seq_file.c
+index 5059248f2d648..d6aacbac793ad 100644
+--- a/fs/seq_file.c
++++ b/fs/seq_file.c
+@@ -32,6 +32,9 @@ static void seq_set_overflow(struct seq_file *m)
+ 
+ static void *seq_buf_alloc(unsigned long size)
+ {
++	if (unlikely(size > MAX_RW_COUNT))
++		return NULL;
++
+ 	return kvmalloc(size, GFP_KERNEL_ACCOUNT);
+ }
+ 
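
The seq_buf_alloc() change above rejects any request larger than MAX_RW_COUNT before it reaches the allocator, bounding what a crafted read size can ask for. A userspace sketch of the same guard (the constant below mirrors the kernel's INT_MAX & PAGE_MASK value for 4K pages, but here it is only illustrative):

#include <stdio.h>
#include <stdlib.h>

#define MAX_RW_COUNT 0x7ffff000UL           /* INT_MAX & PAGE_MASK on 4K pages */

static void *bounded_alloc(unsigned long size)
{
	if (size > MAX_RW_COUNT)            /* refuse oversized requests up front */
		return NULL;
	return malloc(size);
}

int main(void)
{
	void *p = bounded_alloc(MAX_RW_COUNT + 1);

	printf("oversized request -> %p\n", p); /* (nil) */
	free(bounded_alloc(64));
	return 0;
}
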
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 5bd8482e660aa..7c61d0ec0159e 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -1337,7 +1337,10 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 			goto out_release;
+ 		}
+ 
++		spin_lock(&whiteout->i_lock);
+ 		whiteout->i_state |= I_LINKABLE;
++		spin_unlock(&whiteout->i_lock);
++
+ 		whiteout_ui = ubifs_inode(whiteout);
+ 		whiteout_ui->data = dev;
+ 		whiteout_ui->data_len = ubifs_encode_dev(dev, MKDEV(0, 0));
+@@ -1430,7 +1433,11 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 
+ 		inc_nlink(whiteout);
+ 		mark_inode_dirty(whiteout);
++
++		spin_lock(&whiteout->i_lock);
+ 		whiteout->i_state &= ~I_LINKABLE;
++		spin_unlock(&whiteout->i_lock);
++
+ 		iput(whiteout);
+ 	}
+ 
+diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
+index 2857e64d673d1..230717384a38e 100644
+--- a/fs/ubifs/journal.c
++++ b/fs/ubifs/journal.c
+@@ -882,6 +882,7 @@ int ubifs_jnl_write_inode(struct ubifs_info *c, const struct inode *inode)
+ 		struct ubifs_dent_node *xent, *pxent = NULL;
+ 
+ 		if (ui->xattr_cnt > ubifs_xattr_max_cnt(c)) {
++			err = -EPERM;
+ 			ubifs_err(c, "Cannot delete inode, it has too much xattrs!");
+ 			goto out_release;
+ 		}
+diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
+index adbe76b203e2f..49b0ac8b6fd32 100644
+--- a/include/linux/compiler-clang.h
++++ b/include/linux/compiler-clang.h
+@@ -13,6 +13,12 @@
+ /* all clang versions usable with the kernel support KASAN ABI version 5 */
+ #define KASAN_ABI_VERSION 5
+ 
++/*
++ * Note: Checking __has_feature(*_sanitizer) is only true if the feature is
++ * enabled. Therefore it is not required to additionally check defined(CONFIG_*)
++ * to avoid adding redundant attributes in other configurations.
++ */
++
+ #if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer)
+ /* Emulate GCC's __SANITIZE_ADDRESS__ flag */
+ #define __SANITIZE_ADDRESS__
+@@ -45,6 +51,17 @@
+ #define __no_sanitize_undefined
+ #endif
+ 
++/*
++ * Support for __has_feature(coverage_sanitizer) was added in Clang 13 together
++ * with no_sanitize("coverage"). Prior versions of Clang support coverage
++ * instrumentation, but cannot be queried for support by the preprocessor.
++ */
++#if __has_feature(coverage_sanitizer)
++#define __no_sanitize_coverage __attribute__((no_sanitize("coverage")))
++#else
++#define __no_sanitize_coverage
++#endif
++
+ /*
+  * Not all versions of clang implement the type-generic versions
+  * of the builtin overflow checkers. Fortunately, clang implements
+diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
+index 5d97ef738a575..cb9217fc60af0 100644
+--- a/include/linux/compiler-gcc.h
++++ b/include/linux/compiler-gcc.h
+@@ -122,6 +122,12 @@
+ #define __no_sanitize_undefined
+ #endif
+ 
++#if defined(CONFIG_KCOV) && __has_attribute(__no_sanitize_coverage__)
++#define __no_sanitize_coverage __attribute__((no_sanitize_coverage))
++#else
++#define __no_sanitize_coverage
++#endif
++
+ #if GCC_VERSION >= 50100
+ #define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1
+ #endif
+diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
+index d29bda7f6ebdd..cc2bee7f09777 100644
+--- a/include/linux/compiler_types.h
++++ b/include/linux/compiler_types.h
+@@ -210,7 +210,7 @@ struct ftrace_likely_data {
+ /* Section for code which can't be instrumented at all */
+ #define noinstr								\
+ 	noinline notrace __attribute((__section__(".noinstr.text")))	\
+-	__no_kcsan __no_sanitize_address
++	__no_kcsan __no_sanitize_address __no_sanitize_coverage
+ 
+ #endif /* __KERNEL__ */
+ 
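
All three compiler-header hunks above follow the same convention: probe the toolchain (__has_feature() on Clang, __has_attribute() plus CONFIG_KCOV on GCC) and define __no_sanitize_coverage either to the real attribute or to nothing, so noinstr code compiles everywhere. A standalone sketch of that fallback idiom (my_no_sanitize_coverage is an invented name, not the kernel macro):

#ifndef __has_attribute
#define __has_attribute(x) 0                /* old compilers: assume absent */
#endif

#if __has_attribute(__no_sanitize_coverage__)
#define my_no_sanitize_coverage __attribute__((__no_sanitize_coverage__))
#else
#define my_no_sanitize_coverage             /* expands to nothing */
#endif

/* Compiles unchanged whether or not the attribute is supported. */
my_no_sanitize_coverage static int critical_path(int x)
{
	return x * 2;
}

int main(void)
{
	return critical_path(21) == 42 ? 0 : 1;
}
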
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index ffba254d2098a..ce64745948722 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -84,6 +84,7 @@ struct nfs_open_context {
+ #define NFS_CONTEXT_RESEND_WRITES	(1)
+ #define NFS_CONTEXT_BAD			(2)
+ #define NFS_CONTEXT_UNLOCK	(3)
++#define NFS_CONTEXT_FILE_OPEN		(4)
+ 	int error;
+ 
+ 	struct list_head list;
+diff --git a/include/linux/pci-ecam.h b/include/linux/pci-ecam.h
+index fbdadd4d83774..adea5a4771cf8 100644
+--- a/include/linux/pci-ecam.h
++++ b/include/linux/pci-ecam.h
+@@ -55,6 +55,7 @@ struct pci_ecam_ops {
+ struct pci_config_window {
+ 	struct resource			res;
+ 	struct resource			busr;
++	unsigned int			bus_shift;
+ 	void				*priv;
+ 	const struct pci_ecam_ops	*ops;
+ 	union {
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 9455476c5ba22..1199ffd305d1a 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -315,7 +315,7 @@ static inline int rcu_read_lock_any_held(void)
+ #define RCU_LOCKDEP_WARN(c, s)						\
+ 	do {								\
+ 		static bool __section(".data.unlikely") __warned;	\
+-		if (debug_lockdep_rcu_enabled() && !__warned && (c)) {	\
++		if ((c) && debug_lockdep_rcu_enabled() && !__warned) {	\
+ 			__warned = true;				\
+ 			lockdep_rcu_suspicious(__FILE__, __LINE__, s);	\
+ 		}							\
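
The reorder above leans on the left-to-right short-circuit of &&: the condition (c) is now evaluated before any lockdep state is consulted, so debug_lockdep_rcu_enabled() runs only for potential offenders (the paired kernel/rcu/update.c hunk further down also wraps its debug_locks read in READ_ONCE()). A toy demonstration of the evaluation-order effect:

#include <stdbool.h>
#include <stdio.h>

static int lockdep_queries;

static bool lockdep_enabled(void)
{
	lockdep_queries++;                  /* count the expensive check */
	return true;
}

int main(void)
{
	bool cond = false;

	if (cond && lockdep_enabled())      /* condition first: never queried */
		puts("warn");
	printf("lockdep queried %d time(s)\n", lockdep_queries); /* 0 */
	return 0;
}
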
+diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
+index 7f4278fa21fef..0d7fec79d28ff 100644
+--- a/include/linux/sched/signal.h
++++ b/include/linux/sched/signal.h
+@@ -538,6 +538,17 @@ static inline int kill_cad_pid(int sig, int priv)
+ #define SEND_SIG_NOINFO ((struct kernel_siginfo *) 0)
+ #define SEND_SIG_PRIV	((struct kernel_siginfo *) 1)
+ 
++static inline int __on_sig_stack(unsigned long sp)
++{
++#ifdef CONFIG_STACK_GROWSUP
++	return sp >= current->sas_ss_sp &&
++		sp - current->sas_ss_sp < current->sas_ss_size;
++#else
++	return sp > current->sas_ss_sp &&
++		sp - current->sas_ss_sp <= current->sas_ss_size;
++#endif
++}
++
+ /*
+  * True if we are on the alternate signal stack.
+  */
+@@ -555,13 +566,7 @@ static inline int on_sig_stack(unsigned long sp)
+ 	if (current->sas_ss_flags & SS_AUTODISARM)
+ 		return 0;
+ 
+-#ifdef CONFIG_STACK_GROWSUP
+-	return sp >= current->sas_ss_sp &&
+-		sp - current->sas_ss_sp < current->sas_ss_size;
+-#else
+-	return sp > current->sas_ss_sp &&
+-		sp - current->sas_ss_sp <= current->sas_ss_size;
+-#endif
++	return __on_sig_stack(sp);
+ }
+ 
+ static inline int sas_ss_flags(unsigned long sp)
+diff --git a/include/linux/soundwire/sdw.h b/include/linux/soundwire/sdw.h
+index ced07f8fde870..5d93d99496538 100644
+--- a/include/linux/soundwire/sdw.h
++++ b/include/linux/soundwire/sdw.h
+@@ -624,7 +624,6 @@ struct sdw_slave_ops {
+ 	int (*port_prep)(struct sdw_slave *slave,
+ 			 struct sdw_prepare_ch *prepare_ch,
+ 			 enum sdw_port_prep_ops pre_ops);
+-	int (*get_clk_stop_mode)(struct sdw_slave *slave);
+ 	int (*clk_stop)(struct sdw_slave *slave,
+ 			enum sdw_clk_stop_mode mode,
+ 			enum sdw_clk_stop_type type);
+@@ -675,7 +674,6 @@ struct sdw_slave {
+ 	struct list_head node;
+ 	struct completion port_ready[SDW_MAX_PORTS];
+ 	unsigned int m_port_map[SDW_MAX_PORTS];
+-	enum sdw_clk_stop_mode curr_clk_stop_mode;
+ 	u16 dev_num;
+ 	u16 dev_num_sticky;
+ 	bool probed;
+diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
+index 091f284bd6e93..2bb452a8f134c 100644
+--- a/include/scsi/libiscsi.h
++++ b/include/scsi/libiscsi.h
+@@ -195,12 +195,6 @@ struct iscsi_conn {
+ 	unsigned long		suspend_tx;	/* suspend Tx */
+ 	unsigned long		suspend_rx;	/* suspend Rx */
+ 
+-	/* abort */
+-	wait_queue_head_t	ehwait;		/* used in eh_abort() */
+-	struct iscsi_tm		tmhdr;
+-	struct timer_list	tmf_timer;
+-	int			tmf_state;	/* see TMF_INITIAL, etc.*/
+-
+ 	/* negotiated params */
+ 	unsigned		max_recv_dlength; /* initiator_max_recv_dsl*/
+ 	unsigned		max_xmit_dlength; /* target_max_recv_dsl */
+@@ -270,6 +264,11 @@ struct iscsi_session {
+ 	 * and recv lock.
+ 	 */
+ 	struct mutex		eh_mutex;
++	/* abort */
++	wait_queue_head_t	ehwait;		/* used in eh_abort() */
++	struct iscsi_tm		tmhdr;
++	struct timer_list	tmf_timer;
++	int			tmf_state;	/* see TMF_INITIAL, etc.*/
+ 
+ 	/* iSCSI session-wide sequencing */
+ 	uint32_t		cmdsn;
+diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
+index 3974329d4d023..c5d7810fd7926 100644
+--- a/include/scsi/scsi_transport_iscsi.h
++++ b/include/scsi/scsi_transport_iscsi.h
+@@ -443,6 +443,8 @@ extern void iscsi_remove_session(struct iscsi_cls_session *session);
+ extern void iscsi_free_session(struct iscsi_cls_session *session);
+ extern struct iscsi_cls_conn *iscsi_create_conn(struct iscsi_cls_session *sess,
+ 						int dd_size, uint32_t cid);
++extern void iscsi_put_conn(struct iscsi_cls_conn *conn);
++extern void iscsi_get_conn(struct iscsi_cls_conn *conn);
+ extern int iscsi_destroy_conn(struct iscsi_cls_conn *conn);
+ extern void iscsi_unblock_session(struct iscsi_cls_session *session);
+ extern void iscsi_block_session(struct iscsi_cls_session *session);
+diff --git a/include/uapi/linux/fuse.h b/include/uapi/linux/fuse.h
+index 271ae90a9bb7d..36ed092227fa5 100644
+--- a/include/uapi/linux/fuse.h
++++ b/include/uapi/linux/fuse.h
+@@ -181,6 +181,9 @@
+  *  - add FUSE_OPEN_KILL_SUIDGID
+  *  - extend fuse_setxattr_in, add FUSE_SETXATTR_EXT
+  *  - add FUSE_SETXATTR_ACL_KILL_SGID
++ *
++ *  7.34
++ *  - add FUSE_SYNCFS
+  */
+ 
+ #ifndef _LINUX_FUSE_H
+@@ -216,7 +219,7 @@
+ #define FUSE_KERNEL_VERSION 7
+ 
+ /** Minor version number of this interface */
+-#define FUSE_KERNEL_MINOR_VERSION 33
++#define FUSE_KERNEL_MINOR_VERSION 34
+ 
+ /** The node ID of the root inode */
+ #define FUSE_ROOT_ID 1
+@@ -509,6 +512,7 @@ enum fuse_opcode {
+ 	FUSE_COPY_FILE_RANGE	= 47,
+ 	FUSE_SETUPMAPPING	= 48,
+ 	FUSE_REMOVEMAPPING	= 49,
++	FUSE_SYNCFS		= 50,
+ 
+ 	/* CUSE specific operations */
+ 	CUSE_INIT		= 4096,
+@@ -971,4 +975,8 @@ struct fuse_removemapping_one {
+ #define FUSE_REMOVEMAPPING_MAX_ENTRY   \
+ 		(PAGE_SIZE / sizeof(struct fuse_removemapping_one))
+ 
++struct fuse_syncfs_in {
++	uint64_t	padding;
++};
++
+ #endif /* _LINUX_FUSE_H */
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index 1f274d7fc934e..d189eff4c92ff 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -912,6 +912,8 @@ int cgroup1_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 	opt = fs_parse(fc, cgroup1_fs_parameters, param, &result);
+ 	if (opt == -ENOPARAM) {
+ 		if (strcmp(param->key, "source") == 0) {
++			if (param->type != fs_value_is_string)
++				return invalf(fc, "Non-string source");
+ 			if (fc->source)
+ 				return invalf(fc, "Multiple sources not supported");
+ 			fc->source = param->string;
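
The cgroup1 hunk above validates that the "source" parameter really carries a string before the union's string member is stashed away. A userspace sketch of the same check-before-use shape (the types and names below are invented for illustration):

#include <stdio.h>

enum param_type { VAL_FLAG, VAL_STRING };

struct param {
	enum param_type type;
	const char *string;                 /* meaningful only for VAL_STRING */
};

static int parse_source(const struct param *p, const char **source)
{
	if (p->type != VAL_STRING)          /* non-string "source": reject */
		return -1;
	if (*source)                        /* multiple sources: reject */
		return -1;
	*source = p->string;
	return 0;
}

int main(void)
{
	struct param good = { VAL_STRING, "cgroup" };
	struct param bad  = { VAL_FLAG, NULL };
	const char *src = NULL;

	int ok = parse_source(&good, &src);
	int err = parse_source(&bad, &src);

	printf("good=%d (%s), bad=%d\n", ok, src, err); /* good=0 (cgroup), bad=-1 */
	return 0;
}
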
+diff --git a/kernel/jump_label.c b/kernel/jump_label.c
+index ba39fbb1f8e73..af520ca263603 100644
+--- a/kernel/jump_label.c
++++ b/kernel/jump_label.c
+@@ -316,14 +316,16 @@ static int addr_conflict(struct jump_entry *entry, void *start, void *end)
+ }
+ 
+ static int __jump_label_text_reserved(struct jump_entry *iter_start,
+-		struct jump_entry *iter_stop, void *start, void *end)
++		struct jump_entry *iter_stop, void *start, void *end, bool init)
+ {
+ 	struct jump_entry *iter;
+ 
+ 	iter = iter_start;
+ 	while (iter < iter_stop) {
+-		if (addr_conflict(iter, start, end))
+-			return 1;
++		if (init || !jump_entry_is_init(iter)) {
++			if (addr_conflict(iter, start, end))
++				return 1;
++		}
+ 		iter++;
+ 	}
+ 
+@@ -561,7 +563,7 @@ static int __jump_label_mod_text_reserved(void *start, void *end)
+ 
+ 	ret = __jump_label_text_reserved(mod->jump_entries,
+ 				mod->jump_entries + mod->num_jump_entries,
+-				start, end);
++				start, end, mod->state == MODULE_STATE_COMING);
+ 
+ 	module_put(mod);
+ 
+@@ -786,8 +788,9 @@ early_initcall(jump_label_init_module);
+  */
+ int jump_label_text_reserved(void *start, void *end)
+ {
++	bool init = system_state < SYSTEM_RUNNING;
+ 	int ret = __jump_label_text_reserved(__start___jump_table,
+-			__stop___jump_table, start, end);
++			__stop___jump_table, start, end, init);
+ 
+ 	if (ret)
+ 		return ret;
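
The jump_label change above threads an init flag through the reservation check so that entries living in already-discarded __init sections are skipped once the system is running (kernel/static_call.c below gets the identical treatment). A compact userspace model of that filter (struct entry is a toy stand-in for struct jump_entry):

#include <stdbool.h>
#include <stdio.h>

struct entry {
	unsigned long addr;
	bool is_init;                       /* lives in an __init section */
};

static bool addr_conflict(const struct entry *e, unsigned long start,
			  unsigned long end)
{
	return e->addr >= start && e->addr < end;
}

static int text_reserved(const struct entry *it, const struct entry *stop,
			 unsigned long start, unsigned long end, bool init)
{
	for (; it < stop; it++) {
		/* Skip stale init entries unless init code is still valid. */
		if (!init && it->is_init)
			continue;
		if (addr_conflict(it, start, end))
			return 1;
	}
	return 0;
}

int main(void)
{
	struct entry tbl[] = { { 0x1000, true }, { 0x2000, false } };

	printf("%d\n", text_reserved(tbl, tbl + 2, 0x1000, 0x1001, false)); /* 0 */
	printf("%d\n", text_reserved(tbl, tbl + 2, 0x1000, 0x1001, true));  /* 1 */
	return 0;
}
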
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 745f08fdd7a69..c6b4c66b8fa27 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -35,6 +35,7 @@
+ #include <linux/ftrace.h>
+ #include <linux/cpu.h>
+ #include <linux/jump_label.h>
++#include <linux/static_call.h>
+ #include <linux/perf_event.h>
+ 
+ #include <asm/sections.h>
+@@ -1569,6 +1570,7 @@ static int check_kprobe_address_safe(struct kprobe *p,
+ 	if (!kernel_text_address((unsigned long) p->addr) ||
+ 	    within_kprobe_blacklist((unsigned long) p->addr) ||
+ 	    jump_label_text_reserved(p->addr, p->addr) ||
++	    static_call_text_reserved(p->addr, p->addr) ||
+ 	    find_bug((unsigned long)p->addr)) {
+ 		ret = -EINVAL;
+ 		goto out;
+diff --git a/kernel/module.c b/kernel/module.c
+index 927d46cb8eb93..0b850188419ae 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -4425,9 +4425,10 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
+ 			ret = fn(data, kallsyms_symbol_name(kallsyms, i),
+ 				 mod, kallsyms_symbol_value(sym));
+ 			if (ret != 0)
+-				break;
++				goto out;
+ 		}
+ 	}
++out:
+ 	mutex_unlock(&module_mutex);
+ 	return ret;
+ }
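
The module.c fix above is the classic nested-loop pitfall: break leaves only the innermost loop, so a non-zero return from fn() stopped the symbol scan but the walk over modules kept going. A minimal illustration of why the goto is needed:

#include <stdio.h>

int main(void)
{
	int hits = 0;

	for (int mod = 0; mod < 3; mod++) {
		for (int sym = 0; sym < 3; sym++) {
			if (mod == 0 && sym == 1) {
				hits++;
				goto out;   /* break would only end this loop */
			}
		}
	}
out:
	printf("hits=%d\n", hits);          /* 1, not one per module */
	return 0;
}
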
+diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
+index bf0827d4b6593..cfd06fb5ba6d0 100644
+--- a/kernel/rcu/rcu.h
++++ b/kernel/rcu/rcu.h
+@@ -308,6 +308,8 @@ static inline void rcu_init_levelspread(int *levelspread, const int *levelcnt)
+ 	}
+ }
+ 
++extern void rcu_init_geometry(void);
++
+ /* Returns a pointer to the first leaf rcu_node structure. */
+ #define rcu_first_leaf_node() (rcu_state.level[rcu_num_lvls - 1])
+ 
+diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
+index e26547b34ad33..072e47288f1f4 100644
+--- a/kernel/rcu/srcutree.c
++++ b/kernel/rcu/srcutree.c
+@@ -90,6 +90,9 @@ static void init_srcu_struct_nodes(struct srcu_struct *ssp, bool is_static)
+ 	struct srcu_node *snp;
+ 	struct srcu_node *snp_first;
+ 
++	/* Initialize geometry if it has not already been initialized. */
++	rcu_init_geometry();
++
+ 	/* Work out the overall tree geometry. */
+ 	ssp->level[0] = &ssp->node[0];
+ 	for (i = 1; i < rcu_num_lvls; i++)
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 9a1396a70c520..afbb0a337c32a 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -4582,11 +4582,25 @@ static void __init rcu_init_one(void)
+  * replace the definitions in tree.h because those are needed to size
+  * the ->node array in the rcu_state structure.
+  */
+-static void __init rcu_init_geometry(void)
++void rcu_init_geometry(void)
+ {
+ 	ulong d;
+ 	int i;
++	static unsigned long old_nr_cpu_ids;
+ 	int rcu_capacity[RCU_NUM_LVLS];
++	static bool initialized;
++
++	if (initialized) {
++		/*
++		 * Warn if setup_nr_cpu_ids() had not yet been invoked,
++		 * unless nr_cpu_ids == NR_CPUS, in which case who cares?
++		 */
++		WARN_ON_ONCE(old_nr_cpu_ids != nr_cpu_ids);
++		return;
++	}
++
++	old_nr_cpu_ids = nr_cpu_ids;
++	initialized = true;
+ 
+ 	/*
+ 	 * Initialize any unspecified boot parameters.
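
rcu_init_geometry() above becomes safe to call more than once (srcutree.c now invokes it early) by latching a static initialized flag and remembering the nr_cpu_ids it was sized for. A userspace sketch of that idempotent-init pattern, ignoring locking, which the sketch does not model:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static unsigned long nr_cpu_ids = 4;        /* stand-in for the kernel global */

static void init_geometry(void)
{
	static bool initialized;
	static unsigned long old_nr_cpu_ids;

	if (initialized) {
		/* The sizing input must not have changed since the first call. */
		assert(old_nr_cpu_ids == nr_cpu_ids);
		return;
	}
	old_nr_cpu_ids = nr_cpu_ids;
	initialized = true;

	printf("geometry sized for %lu CPUs\n", nr_cpu_ids);
}

int main(void)
{
	init_geometry();                    /* does the work */
	init_geometry();                    /* cheap no-op */
	return 0;
}
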
+diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
+index b95ae86c40a7d..dd94a602a6d25 100644
+--- a/kernel/rcu/update.c
++++ b/kernel/rcu/update.c
+@@ -277,7 +277,7 @@ EXPORT_SYMBOL_GPL(rcu_callback_map);
+ 
+ noinstr int notrace debug_lockdep_rcu_enabled(void)
+ {
+-	return rcu_scheduler_active != RCU_SCHEDULER_INACTIVE && debug_locks &&
++	return rcu_scheduler_active != RCU_SCHEDULER_INACTIVE && READ_ONCE(debug_locks) &&
+ 	       current->lockdep_recursion == 0;
+ }
+ EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled);
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index a189bec137291..35f7efed75c4c 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -2539,20 +2539,27 @@ static __always_inline
+ unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+ 				  struct task_struct *p)
+ {
+-	unsigned long min_util;
+-	unsigned long max_util;
++	unsigned long min_util = 0;
++	unsigned long max_util = 0;
+ 
+ 	if (!static_branch_likely(&sched_uclamp_used))
+ 		return util;
+ 
+-	min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value);
+-	max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);
+-
+ 	if (p) {
+-		min_util = max(min_util, uclamp_eff_value(p, UCLAMP_MIN));
+-		max_util = max(max_util, uclamp_eff_value(p, UCLAMP_MAX));
++		min_util = uclamp_eff_value(p, UCLAMP_MIN);
++		max_util = uclamp_eff_value(p, UCLAMP_MAX);
++
++		/*
++		 * Ignore last runnable task's max clamp, as this task will
++		 * reset it. Similarly, no need to read the rq's min clamp.
++		 */
++		if (rq->uclamp_flags & UCLAMP_FLAG_IDLE)
++			goto out;
+ 	}
+ 
++	min_util = max_t(unsigned long, min_util, READ_ONCE(rq->uclamp[UCLAMP_MIN].value));
++	max_util = max_t(unsigned long, max_util, READ_ONCE(rq->uclamp[UCLAMP_MAX].value));
++out:
+ 	/*
+ 	 * Since CPU's {min,max}_util clamps are MAX aggregated considering
+ 	 * RUNNABLE tasks with _different_ clamps, we can end up with an
+diff --git a/kernel/static_call.c b/kernel/static_call.c
+index 723fcc9d20db6..43ba0b1e0edbb 100644
+--- a/kernel/static_call.c
++++ b/kernel/static_call.c
+@@ -292,13 +292,15 @@ static int addr_conflict(struct static_call_site *site, void *start, void *end)
+ 
+ static int __static_call_text_reserved(struct static_call_site *iter_start,
+ 				       struct static_call_site *iter_stop,
+-				       void *start, void *end)
++				       void *start, void *end, bool init)
+ {
+ 	struct static_call_site *iter = iter_start;
+ 
+ 	while (iter < iter_stop) {
+-		if (addr_conflict(iter, start, end))
+-			return 1;
++		if (init || !static_call_is_init(iter)) {
++			if (addr_conflict(iter, start, end))
++				return 1;
++		}
+ 		iter++;
+ 	}
+ 
+@@ -324,7 +326,7 @@ static int __static_call_mod_text_reserved(void *start, void *end)
+ 
+ 	ret = __static_call_text_reserved(mod->static_call_sites,
+ 			mod->static_call_sites + mod->num_static_call_sites,
+-			start, end);
++			start, end, mod->state == MODULE_STATE_COMING);
+ 
+ 	module_put(mod);
+ 
+@@ -459,8 +461,9 @@ static inline int __static_call_mod_text_reserved(void *start, void *end)
+ 
+ int static_call_text_reserved(void *start, void *end)
+ {
++	bool init = system_state < SYSTEM_RUNNING;
+ 	int ret = __static_call_text_reserved(__start_static_call_sites,
+-			__stop_static_call_sites, start, end);
++			__stop_static_call_sites, start, end, init);
+ 
+ 	if (ret)
+ 		return ret;
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 797e096bc1f3d..53b4ab9ea305c 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1689,7 +1689,9 @@ static struct hist_field *create_hist_field(struct hist_trigger_data *hist_data,
+ 	if (WARN_ON_ONCE(!field))
+ 		goto out;
+ 
+-	if (is_string_field(field)) {
++	/* Pointers to strings are just pointers and dangerous to dereference */
++	if (is_string_field(field) &&
++	    (field->filter_type != FILTER_PTR_STRING)) {
+ 		flags |= HIST_FIELD_FL_STRING;
+ 
+ 		hist_field->size = MAX_FILTER_STR_VAL;
+@@ -4495,8 +4497,6 @@ static inline void add_to_key(char *compound_key, void *key,
+ 		field = key_field->field;
+ 		if (field->filter_type == FILTER_DYN_STRING)
+ 			size = *(u32 *)(rec + field->offset) >> 16;
+-		else if (field->filter_type == FILTER_PTR_STRING)
+-			size = strlen(key);
+ 		else if (field->filter_type == FILTER_STATIC_STRING)
+ 			size = field->size;
+ 
+diff --git a/lib/decompress_unlz4.c b/lib/decompress_unlz4.c
+index c0cfcfd486be0..e6327391b6b66 100644
+--- a/lib/decompress_unlz4.c
++++ b/lib/decompress_unlz4.c
+@@ -112,6 +112,9 @@ STATIC inline int INIT unlz4(u8 *input, long in_len,
+ 				error("data corrupted");
+ 				goto exit_2;
+ 			}
++		} else if (size < 4) {
++			/* empty or end-of-file */
++			goto exit_3;
+ 		}
+ 
+ 		chunksize = get_unaligned_le32(inp);
+@@ -125,6 +128,10 @@ STATIC inline int INIT unlz4(u8 *input, long in_len,
+ 			continue;
+ 		}
+ 
++		if (!fill && chunksize == 0) {
++			/* empty or end-of-file */
++			goto exit_3;
++		}
+ 
+ 		if (posp)
+ 			*posp += 4;
+@@ -184,6 +191,7 @@ STATIC inline int INIT unlz4(u8 *input, long in_len,
+ 		}
+ 	}
+ 
++exit_3:
+ 	ret = 0;
+ exit_2:
+ 	if (!input)
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 9eb7c31688cc8..459c33c26beae 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -1117,8 +1117,6 @@ static inline void pipe_truncate(struct iov_iter *i)
+ static void pipe_advance(struct iov_iter *i, size_t size)
+ {
+ 	struct pipe_inode_info *pipe = i->pipe;
+-	if (unlikely(i->count < size))
+-		size = i->count;
+ 	if (size) {
+ 		struct pipe_buffer *buf;
+ 		unsigned int p_mask = pipe->ring_size - 1;
+@@ -1159,6 +1157,8 @@ static void iov_iter_bvec_advance(struct iov_iter *i, size_t size)
+ 
+ void iov_iter_advance(struct iov_iter *i, size_t size)
+ {
++	if (unlikely(i->count < size))
++		size = i->count;
+ 	if (unlikely(iov_iter_is_pipe(i))) {
+ 		pipe_advance(i, size);
+ 		return;
+@@ -1168,7 +1168,6 @@ void iov_iter_advance(struct iov_iter *i, size_t size)
+ 		return;
+ 	}
+ 	if (unlikely(iov_iter_is_xarray(i))) {
+-		size = min(size, i->count);
+ 		i->iov_offset += size;
+ 		i->count -= size;
+ 		return;
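
The iov_iter change above hoists the size clamp out of the pipe- and xarray-specific paths into iov_iter_advance() itself, so every iterator flavour is bounded by i->count in one place. A sketch of the same centralisation (struct iter is a toy stand-in, not the kernel structure):

#include <stddef.h>
#include <stdio.h>

struct iter {
	size_t count;                       /* bytes remaining */
	size_t offset;                      /* position in the current segment */
};

static void advance(struct iter *i, size_t size)
{
	if (size > i->count)                /* clamp once, for all backends */
		size = i->count;
	i->offset += size;
	i->count -= size;
}

int main(void)
{
	struct iter i = { .count = 10, .offset = 0 };

	advance(&i, 100);                   /* silently clamped to 10 */
	printf("count=%zu offset=%zu\n", i.count, i.offset);
	return 0;
}
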
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 65e0e8642ded8..8363f737d5adb 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5216,8 +5216,9 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
+ 			continue;
+ 		}
+ 
+-		refs = min3(pages_per_huge_page(h) - pfn_offset,
+-			    (vma->vm_end - vaddr) >> PAGE_SHIFT, remainder);
++		/* vaddr may not be aligned to PAGE_SIZE */
++		refs = min3(pages_per_huge_page(h) - pfn_offset, remainder,
++		    (vma->vm_end - ALIGN_DOWN(vaddr, PAGE_SIZE)) >> PAGE_SHIFT);
+ 
+ 		if (pages || vmas)
+ 			record_subpages_vmas(mem_map_offset(page, pfn_offset),
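
The hugetlb hunk fixes the page count when vaddr sits mid-page: the distance to vm_end must be measured from the page-aligned address, otherwise the count comes up one page short. The arithmetic, in a standalone form:

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE (1UL << PAGE_SHIFT)
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

int main(void)
{
	unsigned long vm_end = 0x5000, vaddr = 0x3800; /* mid-page address */

	/* Old form: counts 1 page even though the range spans 2. */
	unsigned long buggy = (vm_end - vaddr) >> PAGE_SHIFT;
	/* Fixed form: round vaddr down to the page boundary first. */
	unsigned long fixed = (vm_end - ALIGN_DOWN(vaddr, PAGE_SIZE))
			      >> PAGE_SHIFT;

	printf("buggy=%lu fixed=%lu\n", buggy, fixed); /* buggy=1 fixed=2 */
	return 0;
}
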
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 226bb05c3b42d..869f1608c98a3 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -3087,7 +3087,9 @@ static void br_multicast_pim(struct net_bridge *br,
+ 	    pim_hdr_type(pimhdr) != PIM_TYPE_HELLO)
+ 		return;
+ 
++	spin_lock(&br->multicast_lock);
+ 	br_multicast_mark_router(br, port);
++	spin_unlock(&br->multicast_lock);
+ }
+ 
+ static int br_ip4_multicast_mrd_rcv(struct net_bridge *br,
+@@ -3098,7 +3100,9 @@ static int br_ip4_multicast_mrd_rcv(struct net_bridge *br,
+ 	    igmp_hdr(skb)->type != IGMP_MRDISC_ADV)
+ 		return -ENOMSG;
+ 
++	spin_lock(&br->multicast_lock);
+ 	br_multicast_mark_router(br, port);
++	spin_unlock(&br->multicast_lock);
+ 
+ 	return 0;
+ }
+@@ -3166,7 +3170,9 @@ static void br_ip6_multicast_mrd_rcv(struct net_bridge *br,
+ 	if (icmp6_hdr(skb)->icmp6_type != ICMPV6_MRDISC_ADV)
+ 		return;
+ 
++	spin_lock(&br->multicast_lock);
+ 	br_multicast_mark_router(br, port);
++	spin_unlock(&br->multicast_lock);
+ }
+ 
+ static int br_multicast_ipv6_rcv(struct net_bridge *br,
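
The three bridge hunks above wrap br_multicast_mark_router() in br->multicast_lock, matching how the rest of the multicast code mutates that shared router state. The shape of the fix, modelled with a pthread mutex (names are illustrative, not bridge internals):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t multicast_lock = PTHREAD_MUTEX_INITIALIZER;
static int router_marked;                   /* stand-in for shared state */

static void mark_router(void)
{
	pthread_mutex_lock(&multicast_lock);
	router_marked = 1;                  /* mutate only under the lock */
	pthread_mutex_unlock(&multicast_lock);
}

int main(void)
{
	mark_router();
	printf("router_marked=%d\n", router_marked);
	return 0;
}
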
+diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
+index 3964ff74ee518..ca10ba2626f27 100644
+--- a/net/sunrpc/xdr.c
++++ b/net/sunrpc/xdr.c
+@@ -1230,10 +1230,9 @@ static unsigned int xdr_set_page_base(struct xdr_stream *xdr,
+ 	void *kaddr;
+ 
+ 	maxlen = xdr->buf->page_len;
+-	if (base >= maxlen) {
+-		base = maxlen;
+-		maxlen = 0;
+-	} else
++	if (base >= maxlen)
++		return 0;
++	else
+ 		maxlen -= base;
+ 	if (len > maxlen)
+ 		len = maxlen;
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 316d049455876..3228b7a1836aa 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1689,7 +1689,8 @@ static int xs_bind(struct sock_xprt *transport, struct socket *sock)
+ 		err = kernel_bind(sock, (struct sockaddr *)&myaddr,
+ 				transport->xprt.addrlen);
+ 		if (err == 0) {
+-			transport->srcport = port;
++			if (transport->xprt.reuseport)
++				transport->srcport = port;
+ 			break;
+ 		}
+ 		last = port;
+diff --git a/sound/ac97/bus.c b/sound/ac97/bus.c
+index d9077e91382bf..6ddf646cda659 100644
+--- a/sound/ac97/bus.c
++++ b/sound/ac97/bus.c
+@@ -520,7 +520,7 @@ static int ac97_bus_remove(struct device *dev)
+ 	struct ac97_codec_driver *adrv = to_ac97_driver(dev->driver);
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/sound/core/control_led.c b/sound/core/control_led.c
+index a90e31dbde61f..ff7fd5e295518 100644
+--- a/sound/core/control_led.c
++++ b/sound/core/control_led.c
+@@ -397,7 +397,7 @@ static ssize_t show_mode(struct device *dev,
+ 			 struct device_attribute *attr, char *buf)
+ {
+ 	struct snd_ctl_led *led = container_of(dev, struct snd_ctl_led, dev);
+-	const char *str;
++	const char *str = NULL;
+ 
+ 	switch (led->mode) {
+ 	case MODE_FOLLOW_MUTE:	str = "follow-mute"; break;
+diff --git a/sound/firewire/Kconfig b/sound/firewire/Kconfig
+index 9897bd26a4388..12664c3a14141 100644
+--- a/sound/firewire/Kconfig
++++ b/sound/firewire/Kconfig
+@@ -38,7 +38,7 @@ config SND_OXFW
+ 	   * Mackie(Loud) Onyx 1640i (former model)
+ 	   * Mackie(Loud) Onyx Satellite
+ 	   * Mackie(Loud) Tapco Link.Firewire
+-	   * Mackie(Loud) d.4 pro
++	   * Mackie(Loud) d.2 pro/d.4 pro (built-in FireWire card with OXFW971 ASIC)
+ 	   * Mackie(Loud) U.420/U.420d
+ 	   * TASCAM FireOne
+ 	   * Stanton Controllers & Systems 1 Deck/Mixer
+@@ -84,7 +84,7 @@ config SND_BEBOB
+ 	  * PreSonus FIREBOX/FIREPOD/FP10/Inspire1394
+ 	  * BridgeCo RDAudio1/Audio5
+ 	  * Mackie Onyx 1220/1620/1640 (FireWire I/O Card)
+-	  * Mackie d.2 (FireWire Option) and d.2 Pro
++	  * Mackie d.2 (optional FireWire card with DM1000 ASIC)
+ 	  * Stanton FinalScratch 2 (ScratchAmp)
+ 	  * Tascam IF-FW/DM
+ 	  * Behringer XENIX UFX 1204/1604
+@@ -110,6 +110,7 @@ config SND_BEBOB
+ 	  * M-Audio Ozonic/NRV10/ProfireLightBridge
+ 	  * M-Audio FireWire 1814/ProjectMix IO
+ 	  * Digidesign Mbox 2 Pro
++	  * ToneWeal FW66
+ 
+ 	  To compile this driver as a module, choose M here: the module
+ 	  will be called snd-bebob.
+diff --git a/sound/firewire/bebob/bebob.c b/sound/firewire/bebob/bebob.c
+index daeecfa8b9aac..67fa0f2178b01 100644
+--- a/sound/firewire/bebob/bebob.c
++++ b/sound/firewire/bebob/bebob.c
+@@ -59,6 +59,7 @@ static DECLARE_BITMAP(devices_used, SNDRV_CARDS);
+ #define VEN_MAUDIO1	0x00000d6c
+ #define VEN_MAUDIO2	0x000007f5
+ #define VEN_DIGIDESIGN	0x00a07e
++#define OUI_SHOUYO	0x002327
+ 
+ #define MODEL_FOCUSRITE_SAFFIRE_BOTH	0x00000000
+ #define MODEL_MAUDIO_AUDIOPHILE_BOTH	0x00010060
+@@ -387,7 +388,7 @@ static const struct ieee1394_device_id bebob_id_table[] = {
+ 	SND_BEBOB_DEV_ENTRY(VEN_BRIDGECO, 0x00010049, &spec_normal),
+ 	/* Mackie, Onyx 1220/1620/1640 (Firewire I/O Card) */
+ 	SND_BEBOB_DEV_ENTRY(VEN_MACKIE2, 0x00010065, &spec_normal),
+-	// Mackie, d.2 (Firewire option card) and d.2 Pro (the card is built-in).
++	// Mackie, d.2 (optional Firewire card with DM1000).
+ 	SND_BEBOB_DEV_ENTRY(VEN_MACKIE1, 0x00010067, &spec_normal),
+ 	/* Stanton, ScratchAmp */
+ 	SND_BEBOB_DEV_ENTRY(VEN_STANTON, 0x00000001, &spec_normal),
+@@ -486,6 +487,8 @@ static const struct ieee1394_device_id bebob_id_table[] = {
+ 			    &maudio_special_spec),
+ 	/* Digidesign Mbox 2 Pro */
+ 	SND_BEBOB_DEV_ENTRY(VEN_DIGIDESIGN, 0x0000a9, &spec_normal),
++	// Toneweal FW66.
++	SND_BEBOB_DEV_ENTRY(OUI_SHOUYO, 0x020002, &spec_normal),
+ 	/* IDs are unknown but able to be supported */
+ 	/*  Apogee, Mini-ME Firewire */
+ 	/*  Apogee, Mini-DAC Firewire */
+diff --git a/sound/firewire/motu/motu-protocol-v2.c b/sound/firewire/motu/motu-protocol-v2.c
+index 784073aa10265..f0a0ecad4d74a 100644
+--- a/sound/firewire/motu/motu-protocol-v2.c
++++ b/sound/firewire/motu/motu-protocol-v2.c
+@@ -86,24 +86,23 @@ static int detect_clock_source_optical_model(struct snd_motu *motu, u32 data,
+ 		*src = SND_MOTU_CLOCK_SOURCE_INTERNAL;
+ 		break;
+ 	case 1:
++		*src = SND_MOTU_CLOCK_SOURCE_ADAT_ON_OPT;
++		break;
++	case 2:
+ 	{
+ 		__be32 reg;
+ 
+ 		// To check the configuration of optical interface.
+-		int err = snd_motu_transaction_read(motu, V2_IN_OUT_CONF_OFFSET,
+-						    &reg, sizeof(reg));
++		int err = snd_motu_transaction_read(motu, V2_IN_OUT_CONF_OFFSET, &reg, sizeof(reg));
+ 		if (err < 0)
+ 			return err;
+ 
+-		if (be32_to_cpu(reg) & 0x00000200)
++		if (((data & V2_OPT_IN_IFACE_MASK) >> V2_OPT_IN_IFACE_SHIFT) == V2_OPT_IFACE_MODE_SPDIF)
+ 			*src = SND_MOTU_CLOCK_SOURCE_SPDIF_ON_OPT;
+ 		else
+-			*src = SND_MOTU_CLOCK_SOURCE_ADAT_ON_OPT;
++			*src = SND_MOTU_CLOCK_SOURCE_SPDIF_ON_COAX;
+ 		break;
+ 	}
+-	case 2:
+-		*src = SND_MOTU_CLOCK_SOURCE_SPDIF_ON_COAX;
+-		break;
+ 	case 3:
+ 		*src = SND_MOTU_CLOCK_SOURCE_SPH;
+ 		break;
+diff --git a/sound/firewire/oxfw/oxfw.c b/sound/firewire/oxfw/oxfw.c
+index 9eea25c46dc7e..5490637d278a4 100644
+--- a/sound/firewire/oxfw/oxfw.c
++++ b/sound/firewire/oxfw/oxfw.c
+@@ -355,7 +355,7 @@ static const struct ieee1394_device_id oxfw_id_table[] = {
+ 	 *  Onyx-i series (former models):	0x081216
+ 	 *  Mackie Onyx Satellite:		0x00200f
+ 	 *  Tapco LINK.firewire 4x6:		0x000460
+-	 *  d.4 pro:				Unknown
++	 *  d.2 pro/d.4 pro (built-in card):	Unknown
+ 	 *  U.420:				Unknown
+ 	 *  U.420d:				Unknown
+ 	 */
+diff --git a/sound/isa/cmi8330.c b/sound/isa/cmi8330.c
+index bc112df10fc57..d1ac9fc994916 100644
+--- a/sound/isa/cmi8330.c
++++ b/sound/isa/cmi8330.c
+@@ -547,7 +547,7 @@ static int snd_cmi8330_probe(struct snd_card *card, int dev)
+ 	}
+ 	if (acard->sb->hardware != SB_HW_16) {
+ 		snd_printk(KERN_ERR PFX "SB16 not found during probe\n");
+-		return err;
++		return -ENODEV;
+ 	}
+ 
+ 	snd_wss_out(acard->wss, CS4231_MISC_INFO, 0x40); /* switch on MODE2 */
+diff --git a/sound/isa/sb/sb16_csp.c b/sound/isa/sb/sb16_csp.c
+index 4789345a8fdde..c98ccd421a2ec 100644
+--- a/sound/isa/sb/sb16_csp.c
++++ b/sound/isa/sb/sb16_csp.c
+@@ -1072,10 +1072,14 @@ static void snd_sb_qsound_destroy(struct snd_sb_csp * p)
+ 	card = p->chip->card;	
+ 	
+ 	down_write(&card->controls_rwsem);
+-	if (p->qsound_switch)
++	if (p->qsound_switch) {
+ 		snd_ctl_remove(card, p->qsound_switch);
+-	if (p->qsound_space)
++		p->qsound_switch = NULL;
++	}
++	if (p->qsound_space) {
+ 		snd_ctl_remove(card, p->qsound_space);
++		p->qsound_space = NULL;
++	}
+ 	up_write(&card->controls_rwsem);
+ 
+ 	/* cancel pending transfer of QSound parameters */
+diff --git a/sound/mips/snd-n64.c b/sound/mips/snd-n64.c
+index e35e93157755f..463a6fe589eb2 100644
+--- a/sound/mips/snd-n64.c
++++ b/sound/mips/snd-n64.c
+@@ -338,6 +338,10 @@ static int __init n64audio_probe(struct platform_device *pdev)
+ 	strcpy(card->longname, "N64 Audio");
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
++	if (!res) {
++		err = -EINVAL;
++		goto fail_dma_alloc;
++	}
+ 	if (devm_request_irq(&pdev->dev, res->start, n64audio_isr,
+ 				IRQF_SHARED, "N64 Audio", priv)) {
+ 		err = -EBUSY;
+diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c
+index 6f2b743b9d75c..6c6dc3fcde607 100644
+--- a/sound/pci/hda/hda_tegra.c
++++ b/sound/pci/hda/hda_tegra.c
+@@ -262,6 +262,9 @@ static int hda_tegra_first_init(struct azx *chip, struct platform_device *pdev)
+ 	const char *sname, *drv_name = "tegra-hda";
+ 	struct device_node *np = pdev->dev.of_node;
+ 
++	if (irq_id < 0)
++		return irq_id;
++
+ 	err = hda_tegra_init_chip(chip, pdev);
+ 	if (err)
+ 		return err;
+diff --git a/sound/ppc/powermac.c b/sound/ppc/powermac.c
+index 9fb51ebafde16..8088f77d5a74b 100644
+--- a/sound/ppc/powermac.c
++++ b/sound/ppc/powermac.c
+@@ -76,7 +76,11 @@ static int snd_pmac_probe(struct platform_device *devptr)
+ 		sprintf(card->shortname, "PowerMac %s", name_ext);
+ 		sprintf(card->longname, "%s (Dev %d) Sub-frame %d",
+ 			card->shortname, chip->device_id, chip->subframe);
+-		if ( snd_pmac_tumbler_init(chip) < 0 || snd_pmac_tumbler_post_init() < 0)
++		err = snd_pmac_tumbler_init(chip);
++		if (err < 0)
++			goto __error;
++		err = snd_pmac_tumbler_post_init();
++		if (err < 0)
+ 			goto __error;
+ 		break;
+ 	case PMAC_AWACS:
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index 77473c226f9ec..8434c48354f16 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -581,6 +581,7 @@ struct cs42l42_pll_params {
+ 	u8 pll_divout;
+ 	u32 mclk_int;
+ 	u8 pll_cal_ratio;
++	u8 n;
+ };
+ 
+ /*
+@@ -588,21 +589,21 @@ struct cs42l42_pll_params {
+  * Table 4-5 from the Datasheet
+  */
+ static const struct cs42l42_pll_params pll_ratio_table[] = {
+-	{ 1536000, 0, 1, 0x00, 0x7D, 0x000000, 0x03, 0x10, 12000000, 125 },
+-	{ 2822400, 0, 1, 0x00, 0x40, 0x000000, 0x03, 0x10, 11289600, 128 },
+-	{ 3000000, 0, 1, 0x00, 0x40, 0x000000, 0x03, 0x10, 12000000, 128 },
+-	{ 3072000, 0, 1, 0x00, 0x3E, 0x800000, 0x03, 0x10, 12000000, 125 },
+-	{ 4000000, 0, 1, 0x00, 0x30, 0x800000, 0x03, 0x10, 12000000, 96 },
+-	{ 4096000, 0, 1, 0x00, 0x2E, 0xE00000, 0x03, 0x10, 12000000, 94 },
+-	{ 5644800, 0, 1, 0x01, 0x40, 0x000000, 0x03, 0x10, 11289600, 128 },
+-	{ 6000000, 0, 1, 0x01, 0x40, 0x000000, 0x03, 0x10, 12000000, 128 },
+-	{ 6144000, 0, 1, 0x01, 0x3E, 0x800000, 0x03, 0x10, 12000000, 125 },
+-	{ 11289600, 0, 0, 0, 0, 0, 0, 0, 11289600, 0 },
+-	{ 12000000, 0, 0, 0, 0, 0, 0, 0, 12000000, 0 },
+-	{ 12288000, 0, 0, 0, 0, 0, 0, 0, 12288000, 0 },
+-	{ 22579200, 1, 0, 0, 0, 0, 0, 0, 22579200, 0 },
+-	{ 24000000, 1, 0, 0, 0, 0, 0, 0, 24000000, 0 },
+-	{ 24576000, 1, 0, 0, 0, 0, 0, 0, 24576000, 0 }
++	{ 1536000, 0, 1, 0x00, 0x7D, 0x000000, 0x03, 0x10, 12000000, 125, 2},
++	{ 2822400, 0, 1, 0x00, 0x40, 0x000000, 0x03, 0x10, 11289600, 128, 1},
++	{ 3000000, 0, 1, 0x00, 0x40, 0x000000, 0x03, 0x10, 12000000, 128, 1},
++	{ 3072000, 0, 1, 0x00, 0x3E, 0x800000, 0x03, 0x10, 12000000, 125, 1},
++	{ 4000000, 0, 1, 0x00, 0x30, 0x800000, 0x03, 0x10, 12000000,  96, 1},
++	{ 4096000, 0, 1, 0x00, 0x2E, 0xE00000, 0x03, 0x10, 12000000,  94, 1},
++	{ 5644800, 0, 1, 0x01, 0x40, 0x000000, 0x03, 0x10, 11289600, 128, 1},
++	{ 6000000, 0, 1, 0x01, 0x40, 0x000000, 0x03, 0x10, 12000000, 128, 1},
++	{ 6144000, 0, 1, 0x01, 0x3E, 0x800000, 0x03, 0x10, 12000000, 125, 1},
++	{ 11289600, 0, 0, 0, 0, 0, 0, 0, 11289600, 0, 1},
++	{ 12000000, 0, 0, 0, 0, 0, 0, 0, 12000000, 0, 1},
++	{ 12288000, 0, 0, 0, 0, 0, 0, 0, 12288000, 0, 1},
++	{ 22579200, 1, 0, 0, 0, 0, 0, 0, 22579200, 0, 1},
++	{ 24000000, 1, 0, 0, 0, 0, 0, 0, 24000000, 0, 1},
++	{ 24576000, 1, 0, 0, 0, 0, 0, 0, 24576000, 0, 1}
+ };
+ 
+ static int cs42l42_pll_config(struct snd_soc_component *component)
+@@ -738,8 +739,12 @@ static int cs42l42_pll_config(struct snd_soc_component *component)
+ 				snd_soc_component_update_bits(component,
+ 					CS42L42_PLL_CTL3,
+ 					CS42L42_PLL_DIVOUT_MASK,
+-					pll_ratio_table[i].pll_divout
++					(pll_ratio_table[i].pll_divout * pll_ratio_table[i].n)
+ 					<< CS42L42_PLL_DIVOUT_SHIFT);
++				if (pll_ratio_table[i].n != 1)
++					cs42l42->pll_divout = pll_ratio_table[i].pll_divout;
++				else
++					cs42l42->pll_divout = 0;
+ 				snd_soc_component_update_bits(component,
+ 					CS42L42_PLL_CAL_RATIO,
+ 					CS42L42_PLL_CAL_RATIO_MASK,
+@@ -894,6 +899,16 @@ static int cs42l42_mute_stream(struct snd_soc_dai *dai, int mute, int stream)
+ 			if ((cs42l42->bclk < 11289600) && (cs42l42->sclk < 11289600)) {
+ 				snd_soc_component_update_bits(component, CS42L42_PLL_CTL1,
+ 							      CS42L42_PLL_START_MASK, 1);
++
++				if (cs42l42->pll_divout) {
++					usleep_range(CS42L42_PLL_DIVOUT_TIME_US,
++						     CS42L42_PLL_DIVOUT_TIME_US * 2);
++					snd_soc_component_update_bits(component, CS42L42_PLL_CTL3,
++								      CS42L42_PLL_DIVOUT_MASK,
++								      cs42l42->pll_divout <<
++								      CS42L42_PLL_DIVOUT_SHIFT);
++				}
++
+ 				ret = regmap_read_poll_timeout(cs42l42->regmap,
+ 							       CS42L42_PLL_LOCK_STATUS,
+ 							       regval,
+diff --git a/sound/soc/codecs/cs42l42.h b/sound/soc/codecs/cs42l42.h
+index 386c40f9ed311..5384105afe503 100644
+--- a/sound/soc/codecs/cs42l42.h
++++ b/sound/soc/codecs/cs42l42.h
+@@ -755,6 +755,7 @@
+ 
+ #define CS42L42_NUM_SUPPLIES	5
+ #define CS42L42_BOOT_TIME_US	3000
++#define CS42L42_PLL_DIVOUT_TIME_US	800
+ #define CS42L42_CLOCK_SWITCH_DELAY_US 150
+ #define CS42L42_PLL_LOCK_POLL_US	250
+ #define CS42L42_PLL_LOCK_TIMEOUT_US	1250
+@@ -777,6 +778,7 @@ struct  cs42l42_private {
+ 	int bclk;
+ 	u32 sclk;
+ 	u32 srate;
++	u8 pll_divout;
+ 	u8 plug_state;
+ 	u8 hs_type;
+ 	u8 ts_inv;
+diff --git a/sound/soc/fsl/fsl_xcvr.c b/sound/soc/fsl/fsl_xcvr.c
+index 46f3f2c687566..535e17251a35e 100644
+--- a/sound/soc/fsl/fsl_xcvr.c
++++ b/sound/soc/fsl/fsl_xcvr.c
+@@ -1202,6 +1202,10 @@ static int fsl_xcvr_probe(struct platform_device *pdev)
+ 
+ 	rx_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rxfifo");
+ 	tx_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "txfifo");
++	if (!rx_res || !tx_res) {
++		dev_err(dev, "could not find rxfifo or txfifo resource\n");
++		return -EINVAL;
++	}
+ 	xcvr->dma_prms_rx.chan_name = "rx";
+ 	xcvr->dma_prms_tx.chan_name = "tx";
+ 	xcvr->dma_prms_rx.addr = rx_res->start;
+diff --git a/sound/soc/img/img-i2s-in.c b/sound/soc/img/img-i2s-in.c
+index 0843235d73c91..fd3432a1d6ab8 100644
+--- a/sound/soc/img/img-i2s-in.c
++++ b/sound/soc/img/img-i2s-in.c
+@@ -464,7 +464,7 @@ static int img_i2s_in_probe(struct platform_device *pdev)
+ 		if (ret)
+ 			goto err_pm_disable;
+ 	}
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0)
+ 		goto err_suspend;
+ 
+diff --git a/sound/soc/intel/boards/kbl_da7219_max98357a.c b/sound/soc/intel/boards/kbl_da7219_max98357a.c
+index c0d8a73c6d218..7ca3347dbd2e4 100644
+--- a/sound/soc/intel/boards/kbl_da7219_max98357a.c
++++ b/sound/soc/intel/boards/kbl_da7219_max98357a.c
+@@ -644,7 +644,7 @@ static int kabylake_audio_probe(struct platform_device *pdev)
+ 
+ static const struct platform_device_id kbl_board_ids[] = {
+ 	{
+-		.name = "kbl_da7219_max98357a",
++		.name = "kbl_da7219_mx98357a",
+ 		.driver_data =
+ 			(kernel_ulong_t)&kabylake_audio_card_da7219_m98357a,
+ 	},
+@@ -666,4 +666,4 @@ module_platform_driver(kabylake_audio)
+ MODULE_DESCRIPTION("Audio Machine driver-DA7219 & MAX98357A in I2S mode");
+ MODULE_AUTHOR("Naveen Manohar <naveen.m@intel.com>");
+ MODULE_LICENSE("GPL v2");
+-MODULE_ALIAS("platform:kbl_da7219_max98357a");
++MODULE_ALIAS("platform:kbl_da7219_mx98357a");
+diff --git a/sound/soc/intel/boards/sof_da7219_max98373.c b/sound/soc/intel/boards/sof_da7219_max98373.c
+index f3cb0773e70ee..8d1ad892e86b6 100644
+--- a/sound/soc/intel/boards/sof_da7219_max98373.c
++++ b/sound/soc/intel/boards/sof_da7219_max98373.c
+@@ -440,6 +440,7 @@ static const struct platform_device_id board_ids[] = {
+ 	},
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(platform, board_ids);
+ 
+ static struct platform_driver audio = {
+ 	.probe = audio_probe,
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index 58548ea0d915f..cf1d053733e22 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -968,6 +968,7 @@ static const struct platform_device_id board_ids[] = {
+ 	},
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(platform, board_ids);
+ 
+ static struct platform_driver sof_audio = {
+ 	.probe = sof_audio_probe,
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index dfad2ad129abb..5827a16773c90 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -354,6 +354,7 @@ static struct sof_sdw_codec_info codec_info_list[] = {
+ 		.part_id = 0x714,
+ 		.version_id = 3,
+ 		.direction = {false, true},
++		.ignore_pch_dmic = true,
+ 		.dai_name = "rt715-aif2",
+ 		.init = sof_sdw_rt715_sdca_init,
+ 	},
+@@ -361,6 +362,7 @@ static struct sof_sdw_codec_info codec_info_list[] = {
+ 		.part_id = 0x715,
+ 		.version_id = 3,
+ 		.direction = {false, true},
++		.ignore_pch_dmic = true,
+ 		.dai_name = "rt715-aif2",
+ 		.init = sof_sdw_rt715_sdca_init,
+ 	},
+@@ -368,6 +370,7 @@ static struct sof_sdw_codec_info codec_info_list[] = {
+ 		.part_id = 0x714,
+ 		.version_id = 2,
+ 		.direction = {false, true},
++		.ignore_pch_dmic = true,
+ 		.dai_name = "rt715-aif2",
+ 		.init = sof_sdw_rt715_init,
+ 	},
+@@ -375,6 +378,7 @@ static struct sof_sdw_codec_info codec_info_list[] = {
+ 		.part_id = 0x715,
+ 		.version_id = 2,
+ 		.direction = {false, true},
++		.ignore_pch_dmic = true,
+ 		.dai_name = "rt715-aif2",
+ 		.init = sof_sdw_rt715_init,
+ 	},
+@@ -730,7 +734,8 @@ static int create_sdw_dailink(struct device *dev, int *be_index,
+ 			      int *cpu_id, bool *group_generated,
+ 			      struct snd_soc_codec_conf *codec_conf,
+ 			      int codec_count,
+-			      int *codec_conf_index)
++			      int *codec_conf_index,
++			      bool *ignore_pch_dmic)
+ {
+ 	const struct snd_soc_acpi_link_adr *link_next;
+ 	struct snd_soc_dai_link_component *codecs;
+@@ -783,6 +788,9 @@ static int create_sdw_dailink(struct device *dev, int *be_index,
+ 	if (codec_index < 0)
+ 		return codec_index;
+ 
++	if (codec_info_list[codec_index].ignore_pch_dmic)
++		*ignore_pch_dmic = true;
++
+ 	cpu_dai_index = *cpu_id;
+ 	for_each_pcm_streams(stream) {
+ 		char *name, *cpu_name;
+@@ -914,6 +922,7 @@ static int sof_card_dai_links_create(struct device *dev,
+ 	const struct snd_soc_acpi_link_adr *adr_link;
+ 	struct snd_soc_dai_link_component *cpus;
+ 	struct snd_soc_codec_conf *codec_conf;
++	bool ignore_pch_dmic = false;
+ 	int codec_conf_count;
+ 	int codec_conf_index = 0;
+ 	bool group_generated[SDW_MAX_GROUPS];
+@@ -1020,7 +1029,8 @@ static int sof_card_dai_links_create(struct device *dev,
+ 					 sdw_cpu_dai_num, cpus, adr_link,
+ 					 &cpu_id, group_generated,
+ 					 codec_conf, codec_conf_count,
+-					 &codec_conf_index);
++					 &codec_conf_index,
++					 &ignore_pch_dmic);
+ 		if (ret < 0) {
+ 			dev_err(dev, "failed to create dai link %d", be_id);
+ 			return -ENOMEM;
+@@ -1088,6 +1098,10 @@ SSP:
+ DMIC:
+ 	/* dmic */
+ 	if (dmic_num > 0) {
++		if (ignore_pch_dmic) {
++			dev_warn(dev, "Ignoring PCH DMIC\n");
++			goto HDMI;
++		}
+ 		cpus[cpu_id].dai_name = "DMIC01 Pin";
+ 		init_dai_link(dev, links + link_id, be_id, "dmic01",
+ 			      0, 1, // DMIC only supports capture
+@@ -1106,6 +1120,7 @@ DMIC:
+ 		INC_ID(be_id, cpu_id, link_id);
+ 	}
+ 
++HDMI:
+ 	/* HDMI */
+ 	if (hdmi_num > 0) {
+ 		idisp_components = devm_kcalloc(dev, hdmi_num,
+diff --git a/sound/soc/intel/boards/sof_sdw_common.h b/sound/soc/intel/boards/sof_sdw_common.h
+index f3cb6796363e7..ea60e8ed215c5 100644
+--- a/sound/soc/intel/boards/sof_sdw_common.h
++++ b/sound/soc/intel/boards/sof_sdw_common.h
+@@ -56,6 +56,7 @@ struct sof_sdw_codec_info {
+ 	int amp_num;
+ 	const u8 acpi_id[ACPI_ID_LEN];
+ 	const bool direction[2]; // playback & capture support
++	const bool ignore_pch_dmic;
+ 	const char *dai_name;
+ 	const struct snd_soc_ops *ops;
+ 
+diff --git a/sound/soc/intel/common/soc-acpi-intel-kbl-match.c b/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
+index 47dadc9d5d2a0..ba5ff468c265a 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
+@@ -113,7 +113,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_kbl_machines[] = {
+ 	},
+ 	{
+ 		.id = "DLGS7219",
+-		.drv_name = "kbl_da7219_max98373",
++		.drv_name = "kbl_da7219_mx98373",
+ 		.fw_filename = "intel/dsp_fw_kbl.bin",
+ 		.machine_quirk = snd_soc_acpi_codec_list,
+ 		.quirk_data = &kbl_7219_98373_codecs,
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index a76974ccfce10..af0129cf90a2b 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -2793,7 +2793,7 @@ int snd_soc_of_parse_audio_routing(struct snd_soc_card *card,
+ 	if (!routes) {
+ 		dev_err(card->dev,
+ 			"ASoC: Could not allocate DAPM route table\n");
+-		return -EINVAL;
++		return -ENOMEM;
+ 	}
+ 
+ 	for (i = 0; i < num_routes; i++) {
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 8659089a87a09..46513bb979044 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1700,7 +1700,7 @@ static int dpcm_apply_symmetry(struct snd_pcm_substream *fe_substream,
+ 	struct snd_soc_dpcm *dpcm;
+ 	struct snd_soc_pcm_runtime *fe = asoc_substream_to_rtd(fe_substream);
+ 	struct snd_soc_dai *fe_cpu_dai;
+-	int err;
++	int err = 0;
+ 	int i;
+ 
+ 	/* apply symmetry for FE */
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index 59abcfc9bd558..cb16b2bd8c21b 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -3335,7 +3335,7 @@ static int sof_link_load(struct snd_soc_component *scomp, int index,
+ 	/* Copy common data to all config ipc structs */
+ 	for (i = 0; i < num_conf; i++) {
+ 		config[i].hdr.cmd = SOF_IPC_GLB_DAI_MSG | SOF_IPC_DAI_CONFIG;
+-		config[i].format = hw_config[i].fmt;
++		config[i].format = le32_to_cpu(hw_config[i].fmt);
+ 		config[i].type = common_config.type;
+ 		config[i].dai_index = common_config.dai_index;
+ 	}
+diff --git a/sound/usb/mixer_scarlett_gen2.c b/sound/usb/mixer_scarlett_gen2.c
+index bca3e7fe27df6..38f4a2a37e0f7 100644
+--- a/sound/usb/mixer_scarlett_gen2.c
++++ b/sound/usb/mixer_scarlett_gen2.c
+@@ -254,10 +254,10 @@ static const struct scarlett2_device_info s6i6_gen2_info = {
+ 	.pad_input_count = 2,
+ 
+ 	.line_out_descrs = {
+-		"Monitor L",
+-		"Monitor R",
+-		"Headphones L",
+-		"Headphones R",
++		"Headphones 1 L",
++		"Headphones 1 R",
++		"Headphones 2 L",
++		"Headphones 2 R",
+ 	},
+ 
+ 	.ports = {
+@@ -356,7 +356,7 @@ static const struct scarlett2_device_info s18i8_gen2_info = {
+ 		},
+ 		[SCARLETT2_PORT_TYPE_PCM] = {
+ 			.id = 0x600,
+-			.num = { 20, 18, 18, 14, 10 },
++			.num = { 8, 18, 18, 14, 10 },
+ 			.src_descr = "PCM %d",
+ 			.src_num_offset = 1,
+ 			.dst_descr = "PCM %02d Capture"
+@@ -1033,11 +1033,10 @@ static int scarlett2_master_volume_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_mixer_data *private = mixer->private_data;
+ 
+-	if (private->vol_updated) {
+-		mutex_lock(&private->data_mutex);
++	mutex_lock(&private->data_mutex);
++	if (private->vol_updated)
+ 		scarlett2_update_volumes(mixer);
+-		mutex_unlock(&private->data_mutex);
+-	}
++	mutex_unlock(&private->data_mutex);
+ 
+ 	ucontrol->value.integer.value[0] = private->master_vol;
+ 	return 0;
+@@ -1051,11 +1050,10 @@ static int scarlett2_volume_ctl_get(struct snd_kcontrol *kctl,
+ 	struct scarlett2_mixer_data *private = mixer->private_data;
+ 	int index = elem->control;
+ 
+-	if (private->vol_updated) {
+-		mutex_lock(&private->data_mutex);
++	mutex_lock(&private->data_mutex);
++	if (private->vol_updated)
+ 		scarlett2_update_volumes(mixer);
+-		mutex_unlock(&private->data_mutex);
+-	}
++	mutex_unlock(&private->data_mutex);
+ 
+ 	ucontrol->value.integer.value[0] = private->vol[index];
+ 	return 0;
+@@ -1186,6 +1184,8 @@ static int scarlett2_sw_hw_enum_ctl_put(struct snd_kcontrol *kctl,
+ 	/* Send SW/HW switch change to the device */
+ 	err = scarlett2_usb_set_config(mixer, SCARLETT2_CONFIG_SW_HW_SWITCH,
+ 				       index, val);
++	if (err == 0)
++		err = 1;
+ 
+ unlock:
+ 	mutex_unlock(&private->data_mutex);
+@@ -1246,6 +1246,8 @@ static int scarlett2_level_enum_ctl_put(struct snd_kcontrol *kctl,
+ 	/* Send switch change to the device */
+ 	err = scarlett2_usb_set_config(mixer, SCARLETT2_CONFIG_LEVEL_SWITCH,
+ 				       index, val);
++	if (err == 0)
++		err = 1;
+ 
+ unlock:
+ 	mutex_unlock(&private->data_mutex);
+@@ -1296,6 +1298,8 @@ static int scarlett2_pad_ctl_put(struct snd_kcontrol *kctl,
+ 	/* Send switch change to the device */
+ 	err = scarlett2_usb_set_config(mixer, SCARLETT2_CONFIG_PAD_SWITCH,
+ 				       index, val);
++	if (err == 0)
++		err = 1;
+ 
+ unlock:
+ 	mutex_unlock(&private->data_mutex);
+@@ -1319,11 +1323,10 @@ static int scarlett2_button_ctl_get(struct snd_kcontrol *kctl,
+ 	struct usb_mixer_interface *mixer = elem->head.mixer;
+ 	struct scarlett2_mixer_data *private = mixer->private_data;
+ 
+-	if (private->vol_updated) {
+-		mutex_lock(&private->data_mutex);
++	mutex_lock(&private->data_mutex);
++	if (private->vol_updated)
+ 		scarlett2_update_volumes(mixer);
+-		mutex_unlock(&private->data_mutex);
+-	}
++	mutex_unlock(&private->data_mutex);
+ 
+ 	ucontrol->value.enumerated.item[0] = private->buttons[elem->control];
+ 	return 0;
+@@ -1352,6 +1355,8 @@ static int scarlett2_button_ctl_put(struct snd_kcontrol *kctl,
+ 	/* Send switch change to the device */
+ 	err = scarlett2_usb_set_config(mixer, SCARLETT2_CONFIG_BUTTONS,
+ 				       index, val);
++	if (err == 0)
++		err = 1;
+ 
+ unlock:
+ 	mutex_unlock(&private->data_mutex);
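
The repeated err = 1 conversions above follow the ALSA kcontrol contract: a .put() callback returns 1 when the value actually changed (so a notification is generated for listeners), 0 when nothing changed, and a negative errno on failure. A toy callback showing the convention (ctl_put is an invented name):

#include <stdio.h>

static int stored;

/* Mimics an ALSA .put() handler: <0 error, 0 unchanged, 1 changed. */
static int ctl_put(int val)
{
	if (val == stored)
		return 0;                   /* no change: no notification */
	stored = val;                       /* a device write would go here */
	return 1;                           /* changed: notify listeners */
}

int main(void)
{
	int first = ctl_put(5);             /* 1: value changed */
	int again = ctl_put(5);             /* 0: already 5 */

	printf("first=%d again=%d\n", first, again);
	return 0;
}
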
+diff --git a/sound/usb/usx2y/usX2Yhwdep.c b/sound/usb/usx2y/usX2Yhwdep.c
+index 22412cd69e985..10868c3fb6561 100644
+--- a/sound/usb/usx2y/usX2Yhwdep.c
++++ b/sound/usb/usx2y/usX2Yhwdep.c
+@@ -29,7 +29,7 @@ static vm_fault_t snd_us428ctls_vm_fault(struct vm_fault *vmf)
+ 		   vmf->pgoff);
+ 	
+ 	offset = vmf->pgoff << PAGE_SHIFT;
+-	vaddr = (char *)((struct usX2Ydev *)vmf->vma->vm_private_data)->us428ctls_sharedmem + offset;
++	vaddr = (char *)((struct usx2ydev *)vmf->vma->vm_private_data)->us428ctls_sharedmem + offset;
+ 	page = virt_to_page(vaddr);
+ 	get_page(page);
+ 	vmf->page = page;
+@@ -47,7 +47,7 @@ static const struct vm_operations_struct us428ctls_vm_ops = {
+ static int snd_us428ctls_mmap(struct snd_hwdep * hw, struct file *filp, struct vm_area_struct *area)
+ {
+ 	unsigned long	size = (unsigned long)(area->vm_end - area->vm_start);
+-	struct usX2Ydev	*us428 = hw->private_data;
++	struct usx2ydev	*us428 = hw->private_data;
+ 
+ 	// FIXME this hwdep interface is used twice: fpga download and mmap for controlling Lights etc. Maybe better using 2 hwdep devs?
+ 	// so as long as the device isn't fully initialised yet we return -EBUSY here.
+@@ -66,7 +66,7 @@ static int snd_us428ctls_mmap(struct snd_hwdep * hw, struct file *filp, struct v
+ 		if (!us428->us428ctls_sharedmem)
+ 			return -ENOMEM;
+ 		memset(us428->us428ctls_sharedmem, -1, sizeof(struct us428ctls_sharedmem));
+-		us428->us428ctls_sharedmem->CtlSnapShotLast = -2;
++		us428->us428ctls_sharedmem->ctl_snapshot_last = -2;
+ 	}
+ 	area->vm_ops = &us428ctls_vm_ops;
+ 	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+@@ -77,21 +77,21 @@ static int snd_us428ctls_mmap(struct snd_hwdep * hw, struct file *filp, struct v
+ static __poll_t snd_us428ctls_poll(struct snd_hwdep *hw, struct file *file, poll_table *wait)
+ {
+ 	__poll_t	mask = 0;
+-	struct usX2Ydev	*us428 = hw->private_data;
++	struct usx2ydev	*us428 = hw->private_data;
+ 	struct us428ctls_sharedmem *shm = us428->us428ctls_sharedmem;
+ 	if (us428->chip_status & USX2Y_STAT_CHIP_HUP)
+ 		return EPOLLHUP;
+ 
+ 	poll_wait(file, &us428->us428ctls_wait_queue_head, wait);
+ 
+-	if (shm != NULL && shm->CtlSnapShotLast != shm->CtlSnapShotRed)
++	if (shm != NULL && shm->ctl_snapshot_last != shm->ctl_snapshot_red)
+ 		mask |= EPOLLIN;
+ 
+ 	return mask;
+ }
+ 
+ 
+-static int snd_usX2Y_hwdep_dsp_status(struct snd_hwdep *hw,
++static int snd_usx2y_hwdep_dsp_status(struct snd_hwdep *hw,
+ 				      struct snd_hwdep_dsp_status *info)
+ {
+ 	static const char * const type_ids[USX2Y_TYPE_NUMS] = {
+@@ -99,7 +99,7 @@ static int snd_usX2Y_hwdep_dsp_status(struct snd_hwdep *hw,
+ 		[USX2Y_TYPE_224] = "us224",
+ 		[USX2Y_TYPE_428] = "us428",
+ 	};
+-	struct usX2Ydev	*us428 = hw->private_data;
++	struct usx2ydev	*us428 = hw->private_data;
+ 	int id = -1;
+ 
+ 	switch (le16_to_cpu(us428->dev->descriptor.idProduct)) {
+@@ -124,7 +124,7 @@ static int snd_usX2Y_hwdep_dsp_status(struct snd_hwdep *hw,
+ }
+ 
+ 
+-static int usX2Y_create_usbmidi(struct snd_card *card)
++static int usx2y_create_usbmidi(struct snd_card *card)
+ {
+ 	static const struct snd_usb_midi_endpoint_info quirk_data_1 = {
+ 		.out_ep = 0x06,
+@@ -152,28 +152,28 @@ static int usX2Y_create_usbmidi(struct snd_card *card)
+        		.type = QUIRK_MIDI_FIXED_ENDPOINT,
+ 		.data = &quirk_data_2
+ 	};
+-	struct usb_device *dev = usX2Y(card)->dev;
++	struct usb_device *dev = usx2y(card)->dev;
+ 	struct usb_interface *iface = usb_ifnum_to_if(dev, 0);
+ 	const struct snd_usb_audio_quirk *quirk =
+ 		le16_to_cpu(dev->descriptor.idProduct) == USB_ID_US428 ?
+ 		&quirk_2 : &quirk_1;
+ 
+-	snd_printdd("usX2Y_create_usbmidi \n");
+-	return snd_usbmidi_create(card, iface, &usX2Y(card)->midi_list, quirk);
++	snd_printdd("usx2y_create_usbmidi \n");
++	return snd_usbmidi_create(card, iface, &usx2y(card)->midi_list, quirk);
+ }
+ 
+-static int usX2Y_create_alsa_devices(struct snd_card *card)
++static int usx2y_create_alsa_devices(struct snd_card *card)
+ {
+ 	int err;
+ 
+ 	do {
+-		if ((err = usX2Y_create_usbmidi(card)) < 0) {
+-			snd_printk(KERN_ERR "usX2Y_create_alsa_devices: usX2Y_create_usbmidi error %i \n", err);
++		if ((err = usx2y_create_usbmidi(card)) < 0) {
++			snd_printk(KERN_ERR "usx2y_create_alsa_devices: usx2y_create_usbmidi error %i \n", err);
+ 			break;
+ 		}
+-		if ((err = usX2Y_audio_create(card)) < 0) 
++		if ((err = usx2y_audio_create(card)) < 0) 
+ 			break;
+-		if ((err = usX2Y_hwdep_pcm_new(card)) < 0)
++		if ((err = usx2y_hwdep_pcm_new(card)) < 0)
+ 			break;
+ 		if ((err = snd_card_register(card)) < 0)
+ 			break;
+@@ -182,10 +182,10 @@ static int usX2Y_create_alsa_devices(struct snd_card *card)
+ 	return err;
+ } 
+ 
+-static int snd_usX2Y_hwdep_dsp_load(struct snd_hwdep *hw,
++static int snd_usx2y_hwdep_dsp_load(struct snd_hwdep *hw,
+ 				    struct snd_hwdep_dsp_image *dsp)
+ {
+-	struct usX2Ydev *priv = hw->private_data;
++	struct usx2ydev *priv = hw->private_data;
+ 	struct usb_device* dev = priv->dev;
+ 	int lret, err;
+ 	char *buf;
+@@ -206,19 +206,19 @@ static int snd_usX2Y_hwdep_dsp_load(struct snd_hwdep *hw,
+ 		return err;
+ 	if (dsp->index == 1) {
+ 		msleep(250);				// give the device some time
+-		err = usX2Y_AsyncSeq04_init(priv);
++		err = usx2y_async_seq04_init(priv);
+ 		if (err) {
+-			snd_printk(KERN_ERR "usX2Y_AsyncSeq04_init error \n");
++			snd_printk(KERN_ERR "usx2y_async_seq04_init error \n");
+ 			return err;
+ 		}
+-		err = usX2Y_In04_init(priv);
++		err = usx2y_in04_init(priv);
+ 		if (err) {
+-			snd_printk(KERN_ERR "usX2Y_In04_init error \n");
++			snd_printk(KERN_ERR "usx2y_in04_init error \n");
+ 			return err;
+ 		}
+-		err = usX2Y_create_alsa_devices(hw->card);
++		err = usx2y_create_alsa_devices(hw->card);
+ 		if (err) {
+-			snd_printk(KERN_ERR "usX2Y_create_alsa_devices error %i \n", err);
++			snd_printk(KERN_ERR "usx2y_create_alsa_devices error %i \n", err);
+ 			snd_card_free(hw->card);
+ 			return err;
+ 		}
+@@ -229,7 +229,7 @@ static int snd_usX2Y_hwdep_dsp_load(struct snd_hwdep *hw,
+ }
+ 
+ 
+-int usX2Y_hwdep_new(struct snd_card *card, struct usb_device* device)
++int usx2y_hwdep_new(struct snd_card *card, struct usb_device* device)
+ {
+ 	int err;
+ 	struct snd_hwdep *hw;
+@@ -238,9 +238,9 @@ int usX2Y_hwdep_new(struct snd_card *card, struct usb_device* device)
+ 		return err;
+ 
+ 	hw->iface = SNDRV_HWDEP_IFACE_USX2Y;
+-	hw->private_data = usX2Y(card);
+-	hw->ops.dsp_status = snd_usX2Y_hwdep_dsp_status;
+-	hw->ops.dsp_load = snd_usX2Y_hwdep_dsp_load;
++	hw->private_data = usx2y(card);
++	hw->ops.dsp_status = snd_usx2y_hwdep_dsp_status;
++	hw->ops.dsp_load = snd_usx2y_hwdep_dsp_load;
+ 	hw->ops.mmap = snd_us428ctls_mmap;
+ 	hw->ops.poll = snd_us428ctls_poll;
+ 	hw->exclusive = 1;
+diff --git a/sound/usb/usx2y/usX2Yhwdep.h b/sound/usb/usx2y/usX2Yhwdep.h
+index 457199b5ed03b..34cef625712c6 100644
+--- a/sound/usb/usx2y/usX2Yhwdep.h
++++ b/sound/usb/usx2y/usX2Yhwdep.h
+@@ -2,6 +2,6 @@
+ #ifndef USX2YHWDEP_H
+ #define USX2YHWDEP_H
+ 
+-int usX2Y_hwdep_new(struct snd_card *card, struct usb_device* device);
++int usx2y_hwdep_new(struct snd_card *card, struct usb_device* device);
+ 
+ #endif
+diff --git a/sound/usb/usx2y/usb_stream.c b/sound/usb/usx2y/usb_stream.c
+index 091c071b270af..cff684942c4f0 100644
+--- a/sound/usb/usx2y/usb_stream.c
++++ b/sound/usb/usx2y/usb_stream.c
+@@ -142,8 +142,11 @@ void usb_stream_free(struct usb_stream_kernel *sk)
+ 	if (!s)
+ 		return;
+ 
+-	free_pages_exact(sk->write_page, s->write_size);
+-	sk->write_page = NULL;
++	if (sk->write_page) {
++		free_pages_exact(sk->write_page, s->write_size);
++		sk->write_page = NULL;
++	}
++
+ 	free_pages_exact(s, s->read_size);
+ 	sk->s = NULL;
+ }
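
The usb_stream_free() change above makes teardown tolerate a partially failed setup: when the write buffer was never allocated, write_page is NULL and must not be handed to free_pages_exact(). The same free-and-clear pattern in isolation, as a sketch with hypothetical names:

	#include <linux/gfp.h>

	struct example_buf { void *page; size_t size; };	/* hypothetical */

	/* Sketch: release a buffer only when it exists, and clear the
	 * pointer so the path is idempotent if reached again. */
	static void example_buf_release(struct example_buf *b)
	{
		if (b->page) {
			free_pages_exact(b->page, b->size);
			b->page = NULL;
		}
	}
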
+diff --git a/sound/usb/usx2y/usbus428ctldefs.h b/sound/usb/usx2y/usbus428ctldefs.h
+index 5a7518ea3aeb4..7366a940ffbba 100644
+--- a/sound/usb/usx2y/usbus428ctldefs.h
++++ b/sound/usb/usx2y/usbus428ctldefs.h
+@@ -4,28 +4,28 @@
+  * Copyright (c) 2003 by Karsten Wiese <annabellesgarden@yahoo.de>
+  */
+ 
+-enum E_In84{
+-	eFader0 = 0,
+-	eFader1,
+-	eFader2,
+-	eFader3,
+-	eFader4,
+-	eFader5,
+-	eFader6,
+-	eFader7,
+-	eFaderM,
+-	eTransport,
+-	eModifier = 10,
+-	eFilterSelect,
+-	eSelect,
+-	eMute,
++enum E_IN84 {
++	E_FADER_0 = 0,
++	E_FADER_1,
++	E_FADER_2,
++	E_FADER_3,
++	E_FADER_4,
++	E_FADER_5,
++	E_FADER_6,
++	E_FADER_7,
++	E_FADER_M,
++	E_TRANSPORT,
++	E_MODIFIER = 10,
++	E_FILTER_SELECT,
++	E_SELECT,
++	E_MUTE,
+ 
+-	eSwitch   = 15,
+-	eWheelGain,
+-	eWheelFreq,
+-	eWheelQ,
+-	eWheelPan,
+-	eWheel    = 20
++	E_SWITCH   = 15,
++	E_WHEEL_GAIN,
++	E_WHEEL_FREQ,
++	E_WHEEL_Q,
++	E_WHEEL_PAN,
++	E_WHEEL    = 20
+ };
+ 
+ #define T_RECORD   1
+@@ -39,53 +39,53 @@ enum E_In84{
+ 
+ 
+ struct us428_ctls {
+-	unsigned char   Fader[9];
+-	unsigned char 	Transport;
+-	unsigned char 	Modifier;
+-	unsigned char 	FilterSelect;
+-	unsigned char 	Select;
+-	unsigned char   Mute;
+-	unsigned char   UNKNOWN;
+-	unsigned char   Switch;	     
+-	unsigned char   Wheel[5];
++	unsigned char   fader[9];
++	unsigned char 	transport;
++	unsigned char 	modifier;
++	unsigned char 	filter_select;
++	unsigned char 	select;
++	unsigned char   mute;
++	unsigned char   unknown;
++	unsigned char   wswitch;	     
++	unsigned char   wheel[5];
+ };
+ 
+-struct us428_setByte {
+-	unsigned char Offset,
+-		Value;
++struct us428_set_byte {
++	unsigned char offset,
++		value;
+ };
+ 
+ enum {
+-	eLT_Volume = 0,
+-	eLT_Light
++	ELT_VOLUME = 0,
++	ELT_LIGHT
+ };
+ 
+-struct usX2Y_volume {
+-	unsigned char Channel,
+-		LH,
+-		LL,
+-		RH,
+-		RL;
++struct usx2y_volume {
++	unsigned char channel,
++		lh,
++		ll,
++		rh,
++		rl;
+ };
+ 
+ struct us428_lights {
+-	struct us428_setByte Light[7];
++	struct us428_set_byte light[7];
+ };
+ 
+ struct us428_p4out {
+ 	char type;
+ 	union {
+-		struct usX2Y_volume vol;
++		struct usx2y_volume vol;
+ 		struct us428_lights lights;
+ 	} val;
+ };
+ 
+-#define N_us428_ctl_BUFS 16
+-#define N_us428_p4out_BUFS 16
+-struct us428ctls_sharedmem{
+-	struct us428_ctls	CtlSnapShot[N_us428_ctl_BUFS];
+-	int			CtlSnapShotDiffersAt[N_us428_ctl_BUFS];
+-	int			CtlSnapShotLast, CtlSnapShotRed;
+-	struct us428_p4out	p4out[N_us428_p4out_BUFS];
+-	int			p4outLast, p4outSent;
++#define N_US428_CTL_BUFS 16
++#define N_US428_P4OUT_BUFS 16
++struct us428ctls_sharedmem {
++	struct us428_ctls	ctl_snapshot[N_US428_CTL_BUFS];
++	int			ctl_snapshot_differs_at[N_US428_CTL_BUFS];
++	int			ctl_snapshot_last, ctl_snapshot_red;
++	struct us428_p4out	p4out[N_US428_P4OUT_BUFS];
++	int			p4out_last, p4out_sent;
+ };
+diff --git a/sound/usb/usx2y/usbusx2y.c b/sound/usb/usx2y/usbusx2y.c
+index 3cd28d24f0a73..cdbb27a96e040 100644
+--- a/sound/usb/usx2y/usbusx2y.c
++++ b/sound/usb/usx2y/usbusx2y.c
+@@ -17,7 +17,7 @@
+ 
+ 2004-10-26 Karsten Wiese
+ 	Version 0.8.6:
+-	wake_up() process waiting in usX2Y_urbs_start() on error.
++	wake_up() process waiting in usx2y_urbs_start() on error.
+ 
+ 2004-10-21 Karsten Wiese
+ 	Version 0.8.5:
+@@ -48,7 +48,7 @@
+ 2004-06-12 Karsten Wiese
+ 	Version 0.6.3:
+ 	Made it thus the following rule is enforced:
+-	"All pcm substreams of one usX2Y have to operate at the same rate & format."
++	"All pcm substreams of one usx2y have to operate at the same rate & format."
+ 
+ 2004-04-06 Karsten Wiese
+ 	Version 0.6.0:
+@@ -150,161 +150,161 @@ module_param_array(enable, bool, NULL, 0444);
+ MODULE_PARM_DESC(enable, "Enable "NAME_ALLCAPS".");
+ 
+ 
+-static int snd_usX2Y_card_used[SNDRV_CARDS];
++static int snd_usx2y_card_used[SNDRV_CARDS];
+ 
+-static void usX2Y_usb_disconnect(struct usb_device* usb_device, void* ptr);
+-static void snd_usX2Y_card_private_free(struct snd_card *card);
++static void usx2y_usb_disconnect(struct usb_device* usb_device, void* ptr);
++static void snd_usx2y_card_private_free(struct snd_card *card);
+ 
+ /* 
+  * pipe 4 is used for switching the lamps, setting samplerate, volumes ....   
+  */
+-static void i_usX2Y_Out04Int(struct urb *urb)
++static void i_usx2y_out04_int(struct urb *urb)
+ {
+ #ifdef CONFIG_SND_DEBUG
+ 	if (urb->status) {
+ 		int 		i;
+-		struct usX2Ydev *usX2Y = urb->context;
+-		for (i = 0; i < 10 && usX2Y->AS04.urb[i] != urb; i++);
+-		snd_printdd("i_usX2Y_Out04Int() urb %i status=%i\n", i, urb->status);
++		struct usx2ydev *usx2y = urb->context;
++		for (i = 0; i < 10 && usx2y->as04.urb[i] != urb; i++);
++		snd_printdd("i_usx2y_out04_int() urb %i status=%i\n", i, urb->status);
+ 	}
+ #endif
+ }
+ 
+-static void i_usX2Y_In04Int(struct urb *urb)
++static void i_usx2y_in04_int(struct urb *urb)
+ {
+ 	int			err = 0;
+-	struct usX2Ydev		*usX2Y = urb->context;
+-	struct us428ctls_sharedmem	*us428ctls = usX2Y->us428ctls_sharedmem;
++	struct usx2ydev		*usx2y = urb->context;
++	struct us428ctls_sharedmem	*us428ctls = usx2y->us428ctls_sharedmem;
+ 
+-	usX2Y->In04IntCalls++;
++	usx2y->in04_int_calls++;
+ 
+ 	if (urb->status) {
+ 		snd_printdd("Interrupt Pipe 4 came back with status=%i\n", urb->status);
+ 		return;
+ 	}
+ 
+-	//	printk("%i:0x%02X ", 8, (int)((unsigned char*)usX2Y->In04Buf)[8]); Master volume shows 0 here if fader is at max during boot ?!?
++	//	printk("%i:0x%02X ", 8, (int)((unsigned char*)usx2y->in04_buf)[8]); Master volume shows 0 here if fader is at max during boot ?!?
+ 	if (us428ctls) {
+ 		int diff = -1;
+-		if (-2 == us428ctls->CtlSnapShotLast) {
++		if (-2 == us428ctls->ctl_snapshot_last) {
+ 			diff = 0;
+-			memcpy(usX2Y->In04Last, usX2Y->In04Buf, sizeof(usX2Y->In04Last));
+-			us428ctls->CtlSnapShotLast = -1;
++			memcpy(usx2y->in04_last, usx2y->in04_buf, sizeof(usx2y->in04_last));
++			us428ctls->ctl_snapshot_last = -1;
+ 		} else {
+ 			int i;
+ 			for (i = 0; i < 21; i++) {
+-				if (usX2Y->In04Last[i] != ((char*)usX2Y->In04Buf)[i]) {
++				if (usx2y->in04_last[i] != ((char*)usx2y->in04_buf)[i]) {
+ 					if (diff < 0)
+ 						diff = i;
+-					usX2Y->In04Last[i] = ((char*)usX2Y->In04Buf)[i];
++					usx2y->in04_last[i] = ((char*)usx2y->in04_buf)[i];
+ 				}
+ 			}
+ 		}
+ 		if (0 <= diff) {
+-			int n = us428ctls->CtlSnapShotLast + 1;
+-			if (n >= N_us428_ctl_BUFS  ||  n < 0)
++			int n = us428ctls->ctl_snapshot_last + 1;
++			if (n >= N_US428_CTL_BUFS  ||  n < 0)
+ 				n = 0;
+-			memcpy(us428ctls->CtlSnapShot + n, usX2Y->In04Buf, sizeof(us428ctls->CtlSnapShot[0]));
+-			us428ctls->CtlSnapShotDiffersAt[n] = diff;
+-			us428ctls->CtlSnapShotLast = n;
+-			wake_up(&usX2Y->us428ctls_wait_queue_head);
++			memcpy(us428ctls->ctl_snapshot + n, usx2y->in04_buf, sizeof(us428ctls->ctl_snapshot[0]));
++			us428ctls->ctl_snapshot_differs_at[n] = diff;
++			us428ctls->ctl_snapshot_last = n;
++			wake_up(&usx2y->us428ctls_wait_queue_head);
+ 		}
+ 	}
+ 	
+ 	
+-	if (usX2Y->US04) {
+-		if (0 == usX2Y->US04->submitted)
++	if (usx2y->us04) {
++		if (0 == usx2y->us04->submitted)
+ 			do {
+-				err = usb_submit_urb(usX2Y->US04->urb[usX2Y->US04->submitted++], GFP_ATOMIC);
+-			} while (!err && usX2Y->US04->submitted < usX2Y->US04->len);
++				err = usb_submit_urb(usx2y->us04->urb[usx2y->us04->submitted++], GFP_ATOMIC);
++			} while (!err && usx2y->us04->submitted < usx2y->us04->len);
+ 	} else
+-		if (us428ctls && us428ctls->p4outLast >= 0 && us428ctls->p4outLast < N_us428_p4out_BUFS) {
+-			if (us428ctls->p4outLast != us428ctls->p4outSent) {
+-				int j, send = us428ctls->p4outSent + 1;
+-				if (send >= N_us428_p4out_BUFS)
++		if (us428ctls && us428ctls->p4out_last >= 0 && us428ctls->p4out_last < N_US428_P4OUT_BUFS) {
++			if (us428ctls->p4out_last != us428ctls->p4out_sent) {
++				int j, send = us428ctls->p4out_sent + 1;
++				if (send >= N_US428_P4OUT_BUFS)
+ 					send = 0;
+-				for (j = 0; j < URBS_AsyncSeq  &&  !err; ++j)
+-					if (0 == usX2Y->AS04.urb[j]->status) {
++				for (j = 0; j < URBS_ASYNC_SEQ  &&  !err; ++j)
++					if (0 == usx2y->as04.urb[j]->status) {
+ 						struct us428_p4out *p4out = us428ctls->p4out + send;	// FIXME if more than 1 p4out is new, 1 gets lost.
+-						usb_fill_bulk_urb(usX2Y->AS04.urb[j], usX2Y->dev,
+-								  usb_sndbulkpipe(usX2Y->dev, 0x04), &p4out->val.vol,
+-								  p4out->type == eLT_Light ? sizeof(struct us428_lights) : 5,
+-								  i_usX2Y_Out04Int, usX2Y);
+-						err = usb_submit_urb(usX2Y->AS04.urb[j], GFP_ATOMIC);
+-						us428ctls->p4outSent = send;
++						usb_fill_bulk_urb(usx2y->as04.urb[j], usx2y->dev,
++								  usb_sndbulkpipe(usx2y->dev, 0x04), &p4out->val.vol,
++								  p4out->type == ELT_LIGHT ? sizeof(struct us428_lights) : 5,
++								  i_usx2y_out04_int, usx2y);
++						err = usb_submit_urb(usx2y->as04.urb[j], GFP_ATOMIC);
++						us428ctls->p4out_sent = send;
+ 						break;
+ 					}
+ 			}
+ 		}
+ 
+ 	if (err)
+-		snd_printk(KERN_ERR "In04Int() usb_submit_urb err=%i\n", err);
++		snd_printk(KERN_ERR "in04_int() usb_submit_urb err=%i\n", err);
+ 
+-	urb->dev = usX2Y->dev;
++	urb->dev = usx2y->dev;
+ 	usb_submit_urb(urb, GFP_ATOMIC);
+ }
+ 
+ /*
+  * Prepare some urbs
+  */
+-int usX2Y_AsyncSeq04_init(struct usX2Ydev *usX2Y)
++int usx2y_async_seq04_init(struct usx2ydev *usx2y)
+ {
+ 	int	err = 0,
+ 		i;
+ 
+-	usX2Y->AS04.buffer = kmalloc_array(URBS_AsyncSeq,
+-					   URB_DataLen_AsyncSeq, GFP_KERNEL);
+-	if (NULL == usX2Y->AS04.buffer) {
++	usx2y->as04.buffer = kmalloc_array(URBS_ASYNC_SEQ,
++					   URB_DATA_LEN_ASYNC_SEQ, GFP_KERNEL);
++	if (NULL == usx2y->as04.buffer) {
+ 		err = -ENOMEM;
+ 	} else
+-		for (i = 0; i < URBS_AsyncSeq; ++i) {
+-			if (NULL == (usX2Y->AS04.urb[i] = usb_alloc_urb(0, GFP_KERNEL))) {
++		for (i = 0; i < URBS_ASYNC_SEQ; ++i) {
++			if (NULL == (usx2y->as04.urb[i] = usb_alloc_urb(0, GFP_KERNEL))) {
+ 				err = -ENOMEM;
+ 				break;
+ 			}
+-			usb_fill_bulk_urb(	usX2Y->AS04.urb[i], usX2Y->dev,
+-						usb_sndbulkpipe(usX2Y->dev, 0x04),
+-						usX2Y->AS04.buffer + URB_DataLen_AsyncSeq*i, 0,
+-						i_usX2Y_Out04Int, usX2Y
++			usb_fill_bulk_urb(	usx2y->as04.urb[i], usx2y->dev,
++						usb_sndbulkpipe(usx2y->dev, 0x04),
++						usx2y->as04.buffer + URB_DATA_LEN_ASYNC_SEQ*i, 0,
++						i_usx2y_out04_int, usx2y
+ 				);
+-			err = usb_urb_ep_type_check(usX2Y->AS04.urb[i]);
++			err = usb_urb_ep_type_check(usx2y->as04.urb[i]);
+ 			if (err < 0)
+ 				break;
+ 		}
+ 	return err;
+ }
+ 
+-int usX2Y_In04_init(struct usX2Ydev *usX2Y)
++int usx2y_in04_init(struct usx2ydev *usx2y)
+ {
+-	if (! (usX2Y->In04urb = usb_alloc_urb(0, GFP_KERNEL)))
++	if (! (usx2y->in04_urb = usb_alloc_urb(0, GFP_KERNEL)))
+ 		return -ENOMEM;
+ 
+-	if (! (usX2Y->In04Buf = kmalloc(21, GFP_KERNEL)))
++	if (! (usx2y->in04_buf = kmalloc(21, GFP_KERNEL)))
+ 		return -ENOMEM;
+ 	 
+-	init_waitqueue_head(&usX2Y->In04WaitQueue);
+-	usb_fill_int_urb(usX2Y->In04urb, usX2Y->dev, usb_rcvintpipe(usX2Y->dev, 0x4),
+-			 usX2Y->In04Buf, 21,
+-			 i_usX2Y_In04Int, usX2Y,
++	init_waitqueue_head(&usx2y->in04_wait_queue);
++	usb_fill_int_urb(usx2y->in04_urb, usx2y->dev, usb_rcvintpipe(usx2y->dev, 0x4),
++			 usx2y->in04_buf, 21,
++			 i_usx2y_in04_int, usx2y,
+ 			 10);
+-	if (usb_urb_ep_type_check(usX2Y->In04urb))
++	if (usb_urb_ep_type_check(usx2y->in04_urb))
+ 		return -EINVAL;
+-	return usb_submit_urb(usX2Y->In04urb, GFP_KERNEL);
++	return usb_submit_urb(usx2y->in04_urb, GFP_KERNEL);
+ }
+ 
+-static void usX2Y_unlinkSeq(struct snd_usX2Y_AsyncSeq *S)
++static void usx2y_unlinkseq(struct snd_usx2y_async_seq *s)
+ {
+ 	int	i;
+-	for (i = 0; i < URBS_AsyncSeq; ++i) {
+-		usb_kill_urb(S->urb[i]);
+-		usb_free_urb(S->urb[i]);
+-		S->urb[i] = NULL;
++	for (i = 0; i < URBS_ASYNC_SEQ; ++i) {
++		usb_kill_urb(s->urb[i]);
++		usb_free_urb(s->urb[i]);
++		s->urb[i] = NULL;
+ 	}
+-	kfree(S->buffer);
++	kfree(s->buffer);
+ }
+ 
+ 
+-static const struct usb_device_id snd_usX2Y_usb_id_table[] = {
++static const struct usb_device_id snd_usx2y_usb_id_table[] = {
+ 	{
+ 		.match_flags =	USB_DEVICE_ID_MATCH_DEVICE,
+ 		.idVendor =	0x1604,
+@@ -323,7 +323,7 @@ static const struct usb_device_id snd_usX2Y_usb_id_table[] = {
+ 	{ /* terminator */ }
+ };
+ 
+-static int usX2Y_create_card(struct usb_device *device,
++static int usx2y_create_card(struct usb_device *device,
+ 			     struct usb_interface *intf,
+ 			     struct snd_card **cardp)
+ {
+@@ -332,20 +332,20 @@ static int usX2Y_create_card(struct usb_device *device,
+ 	int err;
+ 
+ 	for (dev = 0; dev < SNDRV_CARDS; ++dev)
+-		if (enable[dev] && !snd_usX2Y_card_used[dev])
++		if (enable[dev] && !snd_usx2y_card_used[dev])
+ 			break;
+ 	if (dev >= SNDRV_CARDS)
+ 		return -ENODEV;
+ 	err = snd_card_new(&intf->dev, index[dev], id[dev], THIS_MODULE,
+-			   sizeof(struct usX2Ydev), &card);
++			   sizeof(struct usx2ydev), &card);
+ 	if (err < 0)
+ 		return err;
+-	snd_usX2Y_card_used[usX2Y(card)->card_index = dev] = 1;
+-	card->private_free = snd_usX2Y_card_private_free;
+-	usX2Y(card)->dev = device;
+-	init_waitqueue_head(&usX2Y(card)->prepare_wait_queue);
+-	mutex_init(&usX2Y(card)->pcm_mutex);
+-	INIT_LIST_HEAD(&usX2Y(card)->midi_list);
++	snd_usx2y_card_used[usx2y(card)->card_index = dev] = 1;
++	card->private_free = snd_usx2y_card_private_free;
++	usx2y(card)->dev = device;
++	init_waitqueue_head(&usx2y(card)->prepare_wait_queue);
++	mutex_init(&usx2y(card)->pcm_mutex);
++	INIT_LIST_HEAD(&usx2y(card)->midi_list);
+ 	strcpy(card->driver, "USB "NAME_ALLCAPS"");
+ 	sprintf(card->shortname, "TASCAM "NAME_ALLCAPS"");
+ 	sprintf(card->longname, "%s (%x:%x if %d at %03d/%03d)",
+@@ -353,14 +353,14 @@ static int usX2Y_create_card(struct usb_device *device,
+ 		le16_to_cpu(device->descriptor.idVendor),
+ 		le16_to_cpu(device->descriptor.idProduct),
+ 		0,//us428(card)->usbmidi.ifnum,
+-		usX2Y(card)->dev->bus->busnum, usX2Y(card)->dev->devnum
++		usx2y(card)->dev->bus->busnum, usx2y(card)->dev->devnum
+ 		);
+ 	*cardp = card;
+ 	return 0;
+ }
+ 
+ 
+-static int usX2Y_usb_probe(struct usb_device *device,
++static int usx2y_usb_probe(struct usb_device *device,
+ 			   struct usb_interface *intf,
+ 			   const struct usb_device_id *device_id,
+ 			   struct snd_card **cardp)
+@@ -375,10 +375,10 @@ static int usX2Y_usb_probe(struct usb_device *device,
+ 	     le16_to_cpu(device->descriptor.idProduct) != USB_ID_US428))
+ 		return -EINVAL;
+ 
+-	err = usX2Y_create_card(device, intf, &card);
++	err = usx2y_create_card(device, intf, &card);
+ 	if (err < 0)
+ 		return err;
+-	if ((err = usX2Y_hwdep_new(card, device)) < 0  ||
++	if ((err = usx2y_hwdep_new(card, device)) < 0  ||
+ 	    (err = snd_card_register(card)) < 0) {
+ 		snd_card_free(card);
+ 		return err;
+@@ -390,64 +390,64 @@ static int usX2Y_usb_probe(struct usb_device *device,
+ /*
+  * new 2.5 USB kernel API
+  */
+-static int snd_usX2Y_probe(struct usb_interface *intf, const struct usb_device_id *id)
++static int snd_usx2y_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ {
+ 	struct snd_card *card;
+ 	int err;
+ 
+-	err = usX2Y_usb_probe(interface_to_usbdev(intf), intf, id, &card);
++	err = usx2y_usb_probe(interface_to_usbdev(intf), intf, id, &card);
+ 	if (err < 0)
+ 		return err;
+ 	dev_set_drvdata(&intf->dev, card);
+ 	return 0;
+ }
+ 
+-static void snd_usX2Y_disconnect(struct usb_interface *intf)
++static void snd_usx2y_disconnect(struct usb_interface *intf)
+ {
+-	usX2Y_usb_disconnect(interface_to_usbdev(intf),
++	usx2y_usb_disconnect(interface_to_usbdev(intf),
+ 				 usb_get_intfdata(intf));
+ }
+ 
+-MODULE_DEVICE_TABLE(usb, snd_usX2Y_usb_id_table);
+-static struct usb_driver snd_usX2Y_usb_driver = {
++MODULE_DEVICE_TABLE(usb, snd_usx2y_usb_id_table);
++static struct usb_driver snd_usx2y_usb_driver = {
+ 	.name =		"snd-usb-usx2y",
+-	.probe =	snd_usX2Y_probe,
+-	.disconnect =	snd_usX2Y_disconnect,
+-	.id_table =	snd_usX2Y_usb_id_table,
++	.probe =	snd_usx2y_probe,
++	.disconnect =	snd_usx2y_disconnect,
++	.id_table =	snd_usx2y_usb_id_table,
+ };
+ 
+-static void snd_usX2Y_card_private_free(struct snd_card *card)
++static void snd_usx2y_card_private_free(struct snd_card *card)
+ {
+-	kfree(usX2Y(card)->In04Buf);
+-	usb_free_urb(usX2Y(card)->In04urb);
+-	if (usX2Y(card)->us428ctls_sharedmem)
+-		free_pages_exact(usX2Y(card)->us428ctls_sharedmem,
+-				 sizeof(*usX2Y(card)->us428ctls_sharedmem));
+-	if (usX2Y(card)->card_index >= 0  &&  usX2Y(card)->card_index < SNDRV_CARDS)
+-		snd_usX2Y_card_used[usX2Y(card)->card_index] = 0;
++	kfree(usx2y(card)->in04_buf);
++	usb_free_urb(usx2y(card)->in04_urb);
++	if (usx2y(card)->us428ctls_sharedmem)
++		free_pages_exact(usx2y(card)->us428ctls_sharedmem,
++				 sizeof(*usx2y(card)->us428ctls_sharedmem));
++	if (usx2y(card)->card_index >= 0  &&  usx2y(card)->card_index < SNDRV_CARDS)
++		snd_usx2y_card_used[usx2y(card)->card_index] = 0;
+ }
+ 
+ /*
+  * Frees the device.
+  */
+-static void usX2Y_usb_disconnect(struct usb_device *device, void* ptr)
++static void usx2y_usb_disconnect(struct usb_device *device, void* ptr)
+ {
+ 	if (ptr) {
+ 		struct snd_card *card = ptr;
+-		struct usX2Ydev *usX2Y = usX2Y(card);
++		struct usx2ydev *usx2y = usx2y(card);
+ 		struct list_head *p;
+-		usX2Y->chip_status = USX2Y_STAT_CHIP_HUP;
+-		usX2Y_unlinkSeq(&usX2Y->AS04);
+-		usb_kill_urb(usX2Y->In04urb);
++		usx2y->chip_status = USX2Y_STAT_CHIP_HUP;
++		usx2y_unlinkseq(&usx2y->as04);
++		usb_kill_urb(usx2y->in04_urb);
+ 		snd_card_disconnect(card);
+ 		/* release the midi resources */
+-		list_for_each(p, &usX2Y->midi_list) {
++		list_for_each(p, &usx2y->midi_list) {
+ 			snd_usbmidi_disconnect(p);
+ 		}
+-		if (usX2Y->us428ctls_sharedmem) 
+-			wake_up(&usX2Y->us428ctls_wait_queue_head);
++		if (usx2y->us428ctls_sharedmem) 
++			wake_up(&usx2y->us428ctls_wait_queue_head);
+ 		snd_card_free(card);
+ 	}
+ }
+ 
+-module_usb_driver(snd_usX2Y_usb_driver);
++module_usb_driver(snd_usx2y_usb_driver);
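
The interrupt path renamed above implements a small single-producer snapshot ring shared with user space: i_usx2y_in04_int() advances ctl_snapshot_last (the write index) whenever the 21-byte control image changed, and snd_us428ctls_poll() reports EPOLLIN for as long as it differs from ctl_snapshot_red (the read index maintained by the mmap client). Condensed, the publish step is:

	/* Sketch of the producer side; indices wrap at N_US428_CTL_BUFS,
	 * and 'diff' (first changed byte) is a local in the real handler. */
	int n = shm->ctl_snapshot_last + 1;

	if (n >= N_US428_CTL_BUFS || n < 0)
		n = 0;
	memcpy(shm->ctl_snapshot + n, usx2y->in04_buf,
	       sizeof(shm->ctl_snapshot[0]));
	shm->ctl_snapshot_differs_at[n] = diff;
	shm->ctl_snapshot_last = n;			/* publish */
	wake_up(&usx2y->us428ctls_wait_queue_head);	/* wake poll() */
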
+diff --git a/sound/usb/usx2y/usbusx2y.h b/sound/usb/usx2y/usbusx2y.h
+index 144b85f57bd2a..c330af628bccd 100644
+--- a/sound/usb/usx2y/usbusx2y.h
++++ b/sound/usb/usx2y/usbusx2y.h
+@@ -8,14 +8,14 @@
+ #define NRURBS	        2	
+ 
+ 
+-#define URBS_AsyncSeq 10
+-#define URB_DataLen_AsyncSeq 32
+-struct snd_usX2Y_AsyncSeq {
+-	struct urb	*urb[URBS_AsyncSeq];
++#define URBS_ASYNC_SEQ 10
++#define URB_DATA_LEN_ASYNC_SEQ 32
++struct snd_usx2y_async_seq {
++	struct urb	*urb[URBS_ASYNC_SEQ];
+ 	char		*buffer;
+ };
+ 
+-struct snd_usX2Y_urbSeq {
++struct snd_usx2y_urb_seq {
+ 	int	submitted;
+ 	int	len;
+ 	struct urb	*urb[];
+@@ -23,17 +23,17 @@ struct snd_usX2Y_urbSeq {
+ 
+ #include "usx2yhwdeppcm.h"
+ 
+-struct usX2Ydev {
++struct usx2ydev {
+ 	struct usb_device	*dev;
+ 	int			card_index;
+ 	int			stride;
+-	struct urb		*In04urb;
+-	void			*In04Buf;
+-	char			In04Last[24];
+-	unsigned		In04IntCalls;
+-	struct snd_usX2Y_urbSeq	*US04;
+-	wait_queue_head_t	In04WaitQueue;
+-	struct snd_usX2Y_AsyncSeq	AS04;
++	struct urb		*in04_urb;
++	void			*in04_buf;
++	char			in04_last[24];
++	unsigned		in04_int_calls;
++	struct snd_usx2y_urb_seq	*us04;
++	wait_queue_head_t	in04_wait_queue;
++	struct snd_usx2y_async_seq	as04;
+ 	unsigned int		rate,
+ 				format;
+ 	int			chip_status;
+@@ -41,9 +41,9 @@ struct usX2Ydev {
+ 	struct us428ctls_sharedmem	*us428ctls_sharedmem;
+ 	int			wait_iso_frame;
+ 	wait_queue_head_t	us428ctls_wait_queue_head;
+-	struct snd_usX2Y_hwdep_pcm_shm	*hwdep_pcm_shm;
+-	struct snd_usX2Y_substream	*subs[4];
+-	struct snd_usX2Y_substream	* volatile  prepare_subs;
++	struct snd_usx2y_hwdep_pcm_shm	*hwdep_pcm_shm;
++	struct snd_usx2y_substream	*subs[4];
++	struct snd_usx2y_substream	* volatile  prepare_subs;
+ 	wait_queue_head_t	prepare_wait_queue;
+ 	struct list_head	midi_list;
+ 	struct list_head	pcm_list;
+@@ -51,21 +51,21 @@ struct usX2Ydev {
+ };
+ 
+ 
+-struct snd_usX2Y_substream {
+-	struct usX2Ydev	*usX2Y;
++struct snd_usx2y_substream {
++	struct usx2ydev	*usx2y;
+ 	struct snd_pcm_substream *pcm_substream;
+ 
+ 	int			endpoint;		
+ 	unsigned int		maxpacksize;		/* max packet size in bytes */
+ 
+ 	atomic_t		state;
+-#define state_STOPPED	0
+-#define state_STARTING1 1
+-#define state_STARTING2 2
+-#define state_STARTING3 3
+-#define state_PREPARED	4
+-#define state_PRERUNNING  6
+-#define state_RUNNING	8
++#define STATE_STOPPED	0
++#define STATE_STARTING1 1
++#define STATE_STARTING2 2
++#define STATE_STARTING3 3
++#define STATE_PREPARED	4
++#define STATE_PRERUNNING  6
++#define STATE_RUNNING	8
+ 
+ 	int			hwptr;			/* free frame position in the buffer (only for playback) */
+ 	int			hwptr_done;		/* processed frame position in the buffer */
+@@ -77,12 +77,12 @@ struct snd_usX2Y_substream {
+ };
+ 
+ 
+-#define usX2Y(c) ((struct usX2Ydev *)(c)->private_data)
++#define usx2y(c) ((struct usx2ydev *)(c)->private_data)
+ 
+-int usX2Y_audio_create(struct snd_card *card);
++int usx2y_audio_create(struct snd_card *card);
+ 
+-int usX2Y_AsyncSeq04_init(struct usX2Ydev *usX2Y);
+-int usX2Y_In04_init(struct usX2Ydev *usX2Y);
++int usx2y_async_seq04_init(struct usx2ydev *usx2y);
++int usx2y_in04_init(struct usx2ydev *usx2y);
+ 
+ #define NAME_ALLCAPS "US-X2Y"
+ 
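
usbusx2y.h also carries the per-substream state machine renamed above. The STATE_* values are ordered milestones rather than a dense enum: URB completion handlers step a stream forward with atomic_inc(), and readers only ever compare with == or >=, so the unused values 5 and 7 are harmless. Roughly (a sketch of the flow, not driver code):

	/*   STOPPED(0) -> STARTING1(1)..STARTING3(3) -> PREPARED(4)
	 *       [startup URB completions advance via atomic_inc()]
	 *   PREPARED(4) -> PRERUNNING(6) -> ... -> RUNNING(8)
	 *       [trigger(START) sets PRERUNNING; each completed iso
	 *        frame then increments until RUNNING]
	 *
	 * trigger(STOP) drops a running stream back to PREPARED, and
	 * error paths reset to STOPPED via usx2y_clients_stop(). */
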
+diff --git a/sound/usb/usx2y/usbusx2yaudio.c b/sound/usb/usx2y/usbusx2yaudio.c
+index ecaf41265dcd0..8033bb7255d5c 100644
+--- a/sound/usb/usx2y/usbusx2yaudio.c
++++ b/sound/usb/usx2y/usbusx2yaudio.c
+@@ -54,13 +54,13 @@
+ #endif
+ 
+ 
+-static int usX2Y_urb_capt_retire(struct snd_usX2Y_substream *subs)
++static int usx2y_urb_capt_retire(struct snd_usx2y_substream *subs)
+ {
+ 	struct urb	*urb = subs->completed_urb;
+ 	struct snd_pcm_runtime *runtime = subs->pcm_substream->runtime;
+ 	unsigned char	*cp;
+ 	int 		i, len, lens = 0, hwptr_done = subs->hwptr_done;
+-	struct usX2Ydev	*usX2Y = subs->usX2Y;
++	struct usx2ydev	*usx2y = subs->usx2y;
+ 
+ 	for (i = 0; i < nr_of_packs(); i++) {
+ 		cp = (unsigned char*)urb->transfer_buffer + urb->iso_frame_desc[i].offset;
+@@ -70,7 +70,7 @@ static int usX2Y_urb_capt_retire(struct snd_usX2Y_substream *subs)
+ 				   urb->iso_frame_desc[i].status);
+ 			return urb->iso_frame_desc[i].status;
+ 		}
+-		len = urb->iso_frame_desc[i].actual_length / usX2Y->stride;
++		len = urb->iso_frame_desc[i].actual_length / usx2y->stride;
+ 		if (! len) {
+ 			snd_printd("0 == len ERROR!\n");
+ 			continue;
+@@ -79,12 +79,12 @@ static int usX2Y_urb_capt_retire(struct snd_usX2Y_substream *subs)
+ 		/* copy a data chunk */
+ 		if ((hwptr_done + len) > runtime->buffer_size) {
+ 			int cnt = runtime->buffer_size - hwptr_done;
+-			int blen = cnt * usX2Y->stride;
+-			memcpy(runtime->dma_area + hwptr_done * usX2Y->stride, cp, blen);
+-			memcpy(runtime->dma_area, cp + blen, len * usX2Y->stride - blen);
++			int blen = cnt * usx2y->stride;
++			memcpy(runtime->dma_area + hwptr_done * usx2y->stride, cp, blen);
++			memcpy(runtime->dma_area, cp + blen, len * usx2y->stride - blen);
+ 		} else {
+-			memcpy(runtime->dma_area + hwptr_done * usX2Y->stride, cp,
+-			       len * usX2Y->stride);
++			memcpy(runtime->dma_area + hwptr_done * usx2y->stride, cp,
++			       len * usx2y->stride);
+ 		}
+ 		lens += len;
+ 		if ((hwptr_done += len) >= runtime->buffer_size)
+@@ -110,18 +110,18 @@ static int usX2Y_urb_capt_retire(struct snd_usX2Y_substream *subs)
+  * it directly from the buffer.  thus the data is once copied to
+  * a temporary buffer and urb points to that.
+  */
+-static int usX2Y_urb_play_prepare(struct snd_usX2Y_substream *subs,
++static int usx2y_urb_play_prepare(struct snd_usx2y_substream *subs,
+ 				  struct urb *cap_urb,
+ 				  struct urb *urb)
+ {
+ 	int count, counts, pack;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
++	struct usx2ydev *usx2y = subs->usx2y;
+ 	struct snd_pcm_runtime *runtime = subs->pcm_substream->runtime;
+ 
+ 	count = 0;
+ 	for (pack = 0; pack <  nr_of_packs(); pack++) {
+ 		/* calculate the size of a packet */
+-		counts = cap_urb->iso_frame_desc[pack].actual_length / usX2Y->stride;
++		counts = cap_urb->iso_frame_desc[pack].actual_length / usx2y->stride;
+ 		count += counts;
+ 		if (counts < 43 || counts > 50) {
+ 			snd_printk(KERN_ERR "should not be here with counts=%i\n", counts);
+@@ -134,7 +134,7 @@ static int usX2Y_urb_play_prepare(struct snd_usX2Y_substream *subs,
+ 			0;
+ 		urb->iso_frame_desc[pack].length = cap_urb->iso_frame_desc[pack].actual_length;
+ 	}
+-	if (atomic_read(&subs->state) >= state_PRERUNNING)
++	if (atomic_read(&subs->state) >= STATE_PRERUNNING)
+ 		if (subs->hwptr + count > runtime->buffer_size) {
+ 			/* err, the transferred area goes over buffer boundary.
+ 			 * copy the data to the temp buffer.
+@@ -143,20 +143,20 @@ static int usX2Y_urb_play_prepare(struct snd_usX2Y_substream *subs,
+ 			len = runtime->buffer_size - subs->hwptr;
+ 			urb->transfer_buffer = subs->tmpbuf;
+ 			memcpy(subs->tmpbuf, runtime->dma_area +
+-			       subs->hwptr * usX2Y->stride, len * usX2Y->stride);
+-			memcpy(subs->tmpbuf + len * usX2Y->stride,
+-			       runtime->dma_area, (count - len) * usX2Y->stride);
++			       subs->hwptr * usx2y->stride, len * usx2y->stride);
++			memcpy(subs->tmpbuf + len * usx2y->stride,
++			       runtime->dma_area, (count - len) * usx2y->stride);
+ 			subs->hwptr += count;
+ 			subs->hwptr -= runtime->buffer_size;
+ 		} else {
+ 			/* set the buffer pointer */
+-			urb->transfer_buffer = runtime->dma_area + subs->hwptr * usX2Y->stride;
++			urb->transfer_buffer = runtime->dma_area + subs->hwptr * usx2y->stride;
+ 			if ((subs->hwptr += count) >= runtime->buffer_size)
+ 				subs->hwptr -= runtime->buffer_size;
+ 		}
+ 	else
+ 		urb->transfer_buffer = subs->tmpbuf;
+-	urb->transfer_buffer_length = count * usX2Y->stride;
++	urb->transfer_buffer_length = count * usx2y->stride;
+ 	return 0;
+ }
+ 
+@@ -165,10 +165,10 @@ static int usX2Y_urb_play_prepare(struct snd_usX2Y_substream *subs,
+  *
+  * update the current position and call callback if a period is processed.
+  */
+-static void usX2Y_urb_play_retire(struct snd_usX2Y_substream *subs, struct urb *urb)
++static void usx2y_urb_play_retire(struct snd_usx2y_substream *subs, struct urb *urb)
+ {
+ 	struct snd_pcm_runtime *runtime = subs->pcm_substream->runtime;
+-	int		len = urb->actual_length / subs->usX2Y->stride;
++	int		len = urb->actual_length / subs->usx2y->stride;
+ 
+ 	subs->transfer_done += len;
+ 	subs->hwptr_done +=  len;
+@@ -180,14 +180,14 @@ static void usX2Y_urb_play_retire(struct snd_usX2Y_substream *subs, struct urb *
+ 	}
+ }
+ 
+-static int usX2Y_urb_submit(struct snd_usX2Y_substream *subs, struct urb *urb, int frame)
++static int usx2y_urb_submit(struct snd_usx2y_substream *subs, struct urb *urb, int frame)
+ {
+ 	int err;
+ 	if (!urb)
+ 		return -ENODEV;
+ 	urb->start_frame = (frame + NRURBS * nr_of_packs());  // let hcd do rollover sanity checks
+ 	urb->hcpriv = NULL;
+-	urb->dev = subs->usX2Y->dev; /* we need to set this at each time */
++	urb->dev = subs->usx2y->dev; /* we need to set this at each time */
+ 	if ((err = usb_submit_urb(urb, GFP_ATOMIC)) < 0) {
+ 		snd_printk(KERN_ERR "usb_submit_urb() returned %i\n", err);
+ 		return err;
+@@ -195,8 +195,8 @@ static int usX2Y_urb_submit(struct snd_usX2Y_substream *subs, struct urb *urb, i
+ 	return 0;
+ }
+ 
+-static inline int usX2Y_usbframe_complete(struct snd_usX2Y_substream *capsubs,
+-					  struct snd_usX2Y_substream *playbacksubs,
++static inline int usx2y_usbframe_complete(struct snd_usx2y_substream *capsubs,
++					  struct snd_usx2y_substream *playbacksubs,
+ 					  int frame)
+ {
+ 	int err, state;
+@@ -204,25 +204,25 @@ static inline int usX2Y_usbframe_complete(struct snd_usX2Y_substream *capsubs,
+ 
+ 	state = atomic_read(&playbacksubs->state);
+ 	if (NULL != urb) {
+-		if (state == state_RUNNING)
+-			usX2Y_urb_play_retire(playbacksubs, urb);
+-		else if (state >= state_PRERUNNING)
++		if (state == STATE_RUNNING)
++			usx2y_urb_play_retire(playbacksubs, urb);
++		else if (state >= STATE_PRERUNNING)
+ 			atomic_inc(&playbacksubs->state);
+ 	} else {
+ 		switch (state) {
+-		case state_STARTING1:
++		case STATE_STARTING1:
+ 			urb = playbacksubs->urb[0];
+ 			atomic_inc(&playbacksubs->state);
+ 			break;
+-		case state_STARTING2:
++		case STATE_STARTING2:
+ 			urb = playbacksubs->urb[1];
+ 			atomic_inc(&playbacksubs->state);
+ 			break;
+ 		}
+ 	}
+ 	if (urb) {
+-		if ((err = usX2Y_urb_play_prepare(playbacksubs, capsubs->completed_urb, urb)) ||
+-		    (err = usX2Y_urb_submit(playbacksubs, urb, frame))) {
++		if ((err = usx2y_urb_play_prepare(playbacksubs, capsubs->completed_urb, urb)) ||
++		    (err = usx2y_urb_submit(playbacksubs, urb, frame))) {
+ 			return err;
+ 		}
+ 	}
+@@ -230,13 +230,13 @@ static inline int usX2Y_usbframe_complete(struct snd_usX2Y_substream *capsubs,
+ 	playbacksubs->completed_urb = NULL;
+ 
+ 	state = atomic_read(&capsubs->state);
+-	if (state >= state_PREPARED) {
+-		if (state == state_RUNNING) {
+-			if ((err = usX2Y_urb_capt_retire(capsubs)))
++	if (state >= STATE_PREPARED) {
++		if (state == STATE_RUNNING) {
++			if ((err = usx2y_urb_capt_retire(capsubs)))
+ 				return err;
+-		} else if (state >= state_PRERUNNING)
++		} else if (state >= STATE_PRERUNNING)
+ 			atomic_inc(&capsubs->state);
+-		if ((err = usX2Y_urb_submit(capsubs, capsubs->completed_urb, frame)))
++		if ((err = usx2y_urb_submit(capsubs, capsubs->completed_urb, frame)))
+ 			return err;
+ 	}
+ 	capsubs->completed_urb = NULL;
+@@ -244,21 +244,21 @@ static inline int usX2Y_usbframe_complete(struct snd_usX2Y_substream *capsubs,
+ }
+ 
+ 
+-static void usX2Y_clients_stop(struct usX2Ydev *usX2Y)
++static void usx2y_clients_stop(struct usx2ydev *usx2y)
+ {
+ 	int s, u;
+ 
+ 	for (s = 0; s < 4; s++) {
+-		struct snd_usX2Y_substream *subs = usX2Y->subs[s];
++		struct snd_usx2y_substream *subs = usx2y->subs[s];
+ 		if (subs) {
+ 			snd_printdd("%i %p state=%i\n", s, subs, atomic_read(&subs->state));
+-			atomic_set(&subs->state, state_STOPPED);
++			atomic_set(&subs->state, STATE_STOPPED);
+ 		}
+ 	}
+ 	for (s = 0; s < 4; s++) {
+-		struct snd_usX2Y_substream *subs = usX2Y->subs[s];
++		struct snd_usx2y_substream *subs = usx2y->subs[s];
+ 		if (subs) {
+-			if (atomic_read(&subs->state) >= state_PRERUNNING)
++			if (atomic_read(&subs->state) >= STATE_PRERUNNING)
+ 				snd_pcm_stop_xrun(subs->pcm_substream);
+ 			for (u = 0; u < NRURBS; u++) {
+ 				struct urb *urb = subs->urb[u];
+@@ -268,60 +268,60 @@ static void usX2Y_clients_stop(struct usX2Ydev *usX2Y)
+ 			}
+ 		}
+ 	}
+-	usX2Y->prepare_subs = NULL;
+-	wake_up(&usX2Y->prepare_wait_queue);
++	usx2y->prepare_subs = NULL;
++	wake_up(&usx2y->prepare_wait_queue);
+ }
+ 
+-static void usX2Y_error_urb_status(struct usX2Ydev *usX2Y,
+-				   struct snd_usX2Y_substream *subs, struct urb *urb)
++static void usx2y_error_urb_status(struct usx2ydev *usx2y,
++				   struct snd_usx2y_substream *subs, struct urb *urb)
+ {
+ 	snd_printk(KERN_ERR "ep=%i stalled with status=%i\n", subs->endpoint, urb->status);
+ 	urb->status = 0;
+-	usX2Y_clients_stop(usX2Y);
++	usx2y_clients_stop(usx2y);
+ }
+ 
+-static void i_usX2Y_urb_complete(struct urb *urb)
++static void i_usx2y_urb_complete(struct urb *urb)
+ {
+-	struct snd_usX2Y_substream *subs = urb->context;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
++	struct snd_usx2y_substream *subs = urb->context;
++	struct usx2ydev *usx2y = subs->usx2y;
+ 
+-	if (unlikely(atomic_read(&subs->state) < state_PREPARED)) {
++	if (unlikely(atomic_read(&subs->state) < STATE_PREPARED)) {
+ 		snd_printdd("hcd_frame=%i ep=%i%s status=%i start_frame=%i\n",
+-			    usb_get_current_frame_number(usX2Y->dev),
++			    usb_get_current_frame_number(usx2y->dev),
+ 			    subs->endpoint, usb_pipein(urb->pipe) ? "in" : "out",
+ 			    urb->status, urb->start_frame);
+ 		return;
+ 	}
+ 	if (unlikely(urb->status)) {
+-		usX2Y_error_urb_status(usX2Y, subs, urb);
++		usx2y_error_urb_status(usx2y, subs, urb);
+ 		return;
+ 	}
+ 
+ 	subs->completed_urb = urb;
+ 
+ 	{
+-		struct snd_usX2Y_substream *capsubs = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE],
+-			*playbacksubs = usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
++		struct snd_usx2y_substream *capsubs = usx2y->subs[SNDRV_PCM_STREAM_CAPTURE],
++			*playbacksubs = usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+ 		if (capsubs->completed_urb &&
+-		    atomic_read(&capsubs->state) >= state_PREPARED &&
++		    atomic_read(&capsubs->state) >= STATE_PREPARED &&
+ 		    (playbacksubs->completed_urb ||
+-		     atomic_read(&playbacksubs->state) < state_PREPARED)) {
+-			if (!usX2Y_usbframe_complete(capsubs, playbacksubs, urb->start_frame))
+-				usX2Y->wait_iso_frame += nr_of_packs();
++		     atomic_read(&playbacksubs->state) < STATE_PREPARED)) {
++			if (!usx2y_usbframe_complete(capsubs, playbacksubs, urb->start_frame))
++				usx2y->wait_iso_frame += nr_of_packs();
+ 			else {
+ 				snd_printdd("\n");
+-				usX2Y_clients_stop(usX2Y);
++				usx2y_clients_stop(usx2y);
+ 			}
+ 		}
+ 	}
+ }
+ 
+-static void usX2Y_urbs_set_complete(struct usX2Ydev * usX2Y,
++static void usx2y_urbs_set_complete(struct usx2ydev * usx2y,
+ 				    void (*complete)(struct urb *))
+ {
+ 	int s, u;
+ 	for (s = 0; s < 4; s++) {
+-		struct snd_usX2Y_substream *subs = usX2Y->subs[s];
++		struct snd_usx2y_substream *subs = usx2y->subs[s];
+ 		if (NULL != subs)
+ 			for (u = 0; u < NRURBS; u++) {
+ 				struct urb * urb = subs->urb[u];
+@@ -331,30 +331,30 @@ static void usX2Y_urbs_set_complete(struct usX2Ydev * usX2Y,
+ 	}
+ }
+ 
+-static void usX2Y_subs_startup_finish(struct usX2Ydev * usX2Y)
++static void usx2y_subs_startup_finish(struct usx2ydev * usx2y)
+ {
+-	usX2Y_urbs_set_complete(usX2Y, i_usX2Y_urb_complete);
+-	usX2Y->prepare_subs = NULL;
++	usx2y_urbs_set_complete(usx2y, i_usx2y_urb_complete);
++	usx2y->prepare_subs = NULL;
+ }
+ 
+-static void i_usX2Y_subs_startup(struct urb *urb)
++static void i_usx2y_subs_startup(struct urb *urb)
+ {
+-	struct snd_usX2Y_substream *subs = urb->context;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	struct snd_usX2Y_substream *prepare_subs = usX2Y->prepare_subs;
++	struct snd_usx2y_substream *subs = urb->context;
++	struct usx2ydev *usx2y = subs->usx2y;
++	struct snd_usx2y_substream *prepare_subs = usx2y->prepare_subs;
+ 	if (NULL != prepare_subs)
+ 		if (urb->start_frame == prepare_subs->urb[0]->start_frame) {
+-			usX2Y_subs_startup_finish(usX2Y);
++			usx2y_subs_startup_finish(usx2y);
+ 			atomic_inc(&prepare_subs->state);
+-			wake_up(&usX2Y->prepare_wait_queue);
++			wake_up(&usx2y->prepare_wait_queue);
+ 		}
+ 
+-	i_usX2Y_urb_complete(urb);
++	i_usx2y_urb_complete(urb);
+ }
+ 
+-static void usX2Y_subs_prepare(struct snd_usX2Y_substream *subs)
++static void usx2y_subs_prepare(struct snd_usx2y_substream *subs)
+ {
+-	snd_printdd("usX2Y_substream_prepare(%p) ep=%i urb0=%p urb1=%p\n",
++	snd_printdd("usx2y_substream_prepare(%p) ep=%i urb0=%p urb1=%p\n",
+ 		    subs, subs->endpoint, subs->urb[0], subs->urb[1]);
+ 	/* reset the pointer */
+ 	subs->hwptr = 0;
+@@ -363,7 +363,7 @@ static void usX2Y_subs_prepare(struct snd_usX2Y_substream *subs)
+ }
+ 
+ 
+-static void usX2Y_urb_release(struct urb **urb, int free_tb)
++static void usx2y_urb_release(struct urb **urb, int free_tb)
+ {
+ 	if (*urb) {
+ 		usb_kill_urb(*urb);
+@@ -376,13 +376,13 @@ static void usX2Y_urb_release(struct urb **urb, int free_tb)
+ /*
+  * release a substreams urbs
+  */
+-static void usX2Y_urbs_release(struct snd_usX2Y_substream *subs)
++static void usx2y_urbs_release(struct snd_usx2y_substream *subs)
+ {
+ 	int i;
+-	snd_printdd("usX2Y_urbs_release() %i\n", subs->endpoint);
++	snd_printdd("usx2y_urbs_release() %i\n", subs->endpoint);
+ 	for (i = 0; i < NRURBS; i++)
+-		usX2Y_urb_release(subs->urb + i,
+-				  subs != subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK]);
++		usx2y_urb_release(subs->urb + i,
++				  subs != subs->usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK]);
+ 
+ 	kfree(subs->tmpbuf);
+ 	subs->tmpbuf = NULL;
+@@ -390,12 +390,12 @@ static void usX2Y_urbs_release(struct snd_usX2Y_substream *subs)
+ /*
+  * initialize a substream's urbs
+  */
+-static int usX2Y_urbs_allocate(struct snd_usX2Y_substream *subs)
++static int usx2y_urbs_allocate(struct snd_usx2y_substream *subs)
+ {
+ 	int i;
+ 	unsigned int pipe;
+-	int is_playback = subs == subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+-	struct usb_device *dev = subs->usX2Y->dev;
++	int is_playback = subs == subs->usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK];
++	struct usb_device *dev = subs->usx2y->dev;
+ 
+ 	pipe = is_playback ? usb_sndisocpipe(dev, subs->endpoint) :
+ 			usb_rcvisocpipe(dev, subs->endpoint);
+@@ -417,7 +417,7 @@ static int usX2Y_urbs_allocate(struct snd_usX2Y_substream *subs)
+ 		}
+ 		*purb = usb_alloc_urb(nr_of_packs(), GFP_KERNEL);
+ 		if (NULL == *purb) {
+-			usX2Y_urbs_release(subs);
++			usx2y_urbs_release(subs);
+ 			return -ENOMEM;
+ 		}
+ 		if (!is_playback && !(*purb)->transfer_buffer) {
+@@ -426,7 +426,7 @@ static int usX2Y_urbs_allocate(struct snd_usX2Y_substream *subs)
+ 				kmalloc_array(subs->maxpacksize,
+ 					      nr_of_packs(), GFP_KERNEL);
+ 			if (NULL == (*purb)->transfer_buffer) {
+-				usX2Y_urbs_release(subs);
++				usx2y_urbs_release(subs);
+ 				return -ENOMEM;
+ 			}
+ 		}
+@@ -435,43 +435,43 @@ static int usX2Y_urbs_allocate(struct snd_usX2Y_substream *subs)
+ 		(*purb)->number_of_packets = nr_of_packs();
+ 		(*purb)->context = subs;
+ 		(*purb)->interval = 1;
+-		(*purb)->complete = i_usX2Y_subs_startup;
++		(*purb)->complete = i_usx2y_subs_startup;
+ 	}
+ 	return 0;
+ }
+ 
+-static void usX2Y_subs_startup(struct snd_usX2Y_substream *subs)
++static void usx2y_subs_startup(struct snd_usx2y_substream *subs)
+ {
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	usX2Y->prepare_subs = subs;
++	struct usx2ydev *usx2y = subs->usx2y;
++	usx2y->prepare_subs = subs;
+ 	subs->urb[0]->start_frame = -1;
+ 	wmb();
+-	usX2Y_urbs_set_complete(usX2Y, i_usX2Y_subs_startup);
++	usx2y_urbs_set_complete(usx2y, i_usx2y_subs_startup);
+ }
+ 
+-static int usX2Y_urbs_start(struct snd_usX2Y_substream *subs)
++static int usx2y_urbs_start(struct snd_usx2y_substream *subs)
+ {
+ 	int i, err;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
++	struct usx2ydev *usx2y = subs->usx2y;
+ 
+-	if ((err = usX2Y_urbs_allocate(subs)) < 0)
++	if ((err = usx2y_urbs_allocate(subs)) < 0)
+ 		return err;
+ 	subs->completed_urb = NULL;
+ 	for (i = 0; i < 4; i++) {
+-		struct snd_usX2Y_substream *subs = usX2Y->subs[i];
+-		if (subs != NULL && atomic_read(&subs->state) >= state_PREPARED)
++		struct snd_usx2y_substream *subs = usx2y->subs[i];
++		if (subs != NULL && atomic_read(&subs->state) >= STATE_PREPARED)
+ 			goto start;
+ 	}
+ 
+  start:
+-	usX2Y_subs_startup(subs);
++	usx2y_subs_startup(subs);
+ 	for (i = 0; i < NRURBS; i++) {
+ 		struct urb *urb = subs->urb[i];
+ 		if (usb_pipein(urb->pipe)) {
+ 			unsigned long pack;
+ 			if (0 == i)
+-				atomic_set(&subs->state, state_STARTING3);
+-			urb->dev = usX2Y->dev;
++				atomic_set(&subs->state, STATE_STARTING3);
++			urb->dev = usx2y->dev;
+ 			for (pack = 0; pack < nr_of_packs(); pack++) {
+ 				urb->iso_frame_desc[pack].offset = subs->maxpacksize * pack;
+ 				urb->iso_frame_desc[pack].length = subs->maxpacksize;
+@@ -483,22 +483,22 @@ static int usX2Y_urbs_start(struct snd_usX2Y_substream *subs)
+ 				goto cleanup;
+ 			} else
+ 				if (i == 0)
+-					usX2Y->wait_iso_frame = urb->start_frame;
++					usx2y->wait_iso_frame = urb->start_frame;
+ 			urb->transfer_flags = 0;
+ 		} else {
+-			atomic_set(&subs->state, state_STARTING1);
++			atomic_set(&subs->state, STATE_STARTING1);
+ 			break;
+ 		}
+ 	}
+ 	err = 0;
+-	wait_event(usX2Y->prepare_wait_queue, NULL == usX2Y->prepare_subs);
+-	if (atomic_read(&subs->state) != state_PREPARED)
++	wait_event(usx2y->prepare_wait_queue, NULL == usx2y->prepare_subs);
++	if (atomic_read(&subs->state) != STATE_PREPARED)
+ 		err = -EPIPE;
+ 
+  cleanup:
+ 	if (err) {
+-		usX2Y_subs_startup_finish(usX2Y);
+-		usX2Y_clients_stop(usX2Y);		// something is completely wroong > stop evrything
++		usx2y_subs_startup_finish(usx2y);
++		usx2y_clients_stop(usx2y);		// something is completely wrong > stop everything
+ 	}
+ 	return err;
+ }
+@@ -506,33 +506,33 @@ static int usX2Y_urbs_start(struct snd_usX2Y_substream *subs)
+ /*
+  * return the current pcm pointer.  just return the hwptr_done value.
+  */
+-static snd_pcm_uframes_t snd_usX2Y_pcm_pointer(struct snd_pcm_substream *substream)
++static snd_pcm_uframes_t snd_usx2y_pcm_pointer(struct snd_pcm_substream *substream)
+ {
+-	struct snd_usX2Y_substream *subs = substream->runtime->private_data;
++	struct snd_usx2y_substream *subs = substream->runtime->private_data;
+ 	return subs->hwptr_done;
+ }
+ /*
+  * start/stop substream
+  */
+-static int snd_usX2Y_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
++static int snd_usx2y_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
+ {
+-	struct snd_usX2Y_substream *subs = substream->runtime->private_data;
++	struct snd_usx2y_substream *subs = substream->runtime->private_data;
+ 
+ 	switch (cmd) {
+ 	case SNDRV_PCM_TRIGGER_START:
+-		snd_printdd("snd_usX2Y_pcm_trigger(START)\n");
+-		if (atomic_read(&subs->state) == state_PREPARED &&
+-		    atomic_read(&subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE]->state) >= state_PREPARED) {
+-			atomic_set(&subs->state, state_PRERUNNING);
++		snd_printdd("snd_usx2y_pcm_trigger(START)\n");
++		if (atomic_read(&subs->state) == STATE_PREPARED &&
++		    atomic_read(&subs->usx2y->subs[SNDRV_PCM_STREAM_CAPTURE]->state) >= STATE_PREPARED) {
++			atomic_set(&subs->state, STATE_PRERUNNING);
+ 		} else {
+ 			snd_printdd("\n");
+ 			return -EPIPE;
+ 		}
+ 		break;
+ 	case SNDRV_PCM_TRIGGER_STOP:
+-		snd_printdd("snd_usX2Y_pcm_trigger(STOP)\n");
+-		if (atomic_read(&subs->state) >= state_PRERUNNING)
+-			atomic_set(&subs->state, state_PREPARED);
++		snd_printdd("snd_usx2y_pcm_trigger(STOP)\n");
++		if (atomic_read(&subs->state) >= STATE_PRERUNNING)
++			atomic_set(&subs->state, STATE_PREPARED);
+ 		break;
+ 	default:
+ 		return -EINVAL;
+@@ -553,7 +553,7 @@ static const struct s_c2
+ {
+ 	char c1, c2;
+ }
+-	SetRate44100[] =
++	setrate_44100[] =
+ {
+ 	{ 0x14, 0x08},	// this line sets 44100, well actually a little less
+ 	{ 0x18, 0x40},	// only tascam / frontier design knows the further lines .......
+@@ -589,7 +589,7 @@ static const struct s_c2
+ 	{ 0x18, 0x7C},
+ 	{ 0x18, 0x7E}
+ };
+-static const struct s_c2 SetRate48000[] =
++static const struct s_c2 setrate_48000[] =
+ {
+ 	{ 0x14, 0x09},	// this line sets 48000, well actually a little less
+ 	{ 0x18, 0x40},	// only tascam / frontier design knows the further lines .......
+@@ -625,26 +625,26 @@ static const struct s_c2 SetRate48000[] =
+ 	{ 0x18, 0x7C},
+ 	{ 0x18, 0x7E}
+ };
+-#define NOOF_SETRATE_URBS ARRAY_SIZE(SetRate48000)
++#define NOOF_SETRATE_URBS ARRAY_SIZE(setrate_48000)
+ 
+-static void i_usX2Y_04Int(struct urb *urb)
++static void i_usx2y_04int(struct urb *urb)
+ {
+-	struct usX2Ydev *usX2Y = urb->context;
++	struct usx2ydev *usx2y = urb->context;
+ 	
+ 	if (urb->status)
+-		snd_printk(KERN_ERR "snd_usX2Y_04Int() urb->status=%i\n", urb->status);
+-	if (0 == --usX2Y->US04->len)
+-		wake_up(&usX2Y->In04WaitQueue);
++		snd_printk(KERN_ERR "snd_usx2y_04int() urb->status=%i\n", urb->status);
++	if (0 == --usx2y->us04->len)
++		wake_up(&usx2y->in04_wait_queue);
+ }
+ 
+-static int usX2Y_rate_set(struct usX2Ydev *usX2Y, int rate)
++static int usx2y_rate_set(struct usx2ydev *usx2y, int rate)
+ {
+ 	int			err = 0, i;
+-	struct snd_usX2Y_urbSeq	*us = NULL;
++	struct snd_usx2y_urb_seq	*us = NULL;
+ 	int			*usbdata = NULL;
+-	const struct s_c2	*ra = rate == 48000 ? SetRate48000 : SetRate44100;
++	const struct s_c2	*ra = rate == 48000 ? setrate_48000 : setrate_44100;
+ 
+-	if (usX2Y->rate != rate) {
++	if (usx2y->rate != rate) {
+ 		us = kzalloc(sizeof(*us) + sizeof(struct urb*) * NOOF_SETRATE_URBS, GFP_KERNEL);
+ 		if (NULL == us) {
+ 			err = -ENOMEM;
+@@ -663,17 +663,17 @@ static int usX2Y_rate_set(struct usX2Ydev *usX2Y, int rate)
+ 			}
+ 			((char*)(usbdata + i))[0] = ra[i].c1;
+ 			((char*)(usbdata + i))[1] = ra[i].c2;
+-			usb_fill_bulk_urb(us->urb[i], usX2Y->dev, usb_sndbulkpipe(usX2Y->dev, 4),
+-					  usbdata + i, 2, i_usX2Y_04Int, usX2Y);
++			usb_fill_bulk_urb(us->urb[i], usx2y->dev, usb_sndbulkpipe(usx2y->dev, 4),
++					  usbdata + i, 2, i_usx2y_04int, usx2y);
+ 		}
+ 		err = usb_urb_ep_type_check(us->urb[0]);
+ 		if (err < 0)
+ 			goto cleanup;
+ 		us->submitted =	0;
+ 		us->len =	NOOF_SETRATE_URBS;
+-		usX2Y->US04 =	us;
+-		wait_event_timeout(usX2Y->In04WaitQueue, 0 == us->len, HZ);
+-		usX2Y->US04 =	NULL;
++		usx2y->us04 =	us;
++		wait_event_timeout(usx2y->in04_wait_queue, 0 == us->len, HZ);
++		usx2y->us04 =	NULL;
+ 		if (us->len)
+ 			err = -ENODEV;
+ 	cleanup:
+@@ -690,11 +690,11 @@ static int usX2Y_rate_set(struct usX2Ydev *usX2Y, int rate)
+ 				}
+ 				usb_free_urb(urb);
+ 			}
+-			usX2Y->US04 = NULL;
++			usx2y->us04 = NULL;
+ 			kfree(usbdata);
+ 			kfree(us);
+ 			if (!err)
+-				usX2Y->rate = rate;
++				usx2y->rate = rate;
+ 		}
+ 	}
+ 
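
Sample-rate programming in the hunk above streams a fixed register script to the device: each {c1, c2} pair of setrate_44100[]/setrate_48000[] is sent as its own 2-byte bulk URB on endpoint 4, submission is kicked off from the in04 interrupt handler once usx2y->us04 is set, and i_usx2y_04int() counts completions down in us->len. The waiting core of it, condensed:

	/* Sketch of the completion handshake; us and its urbs were
	 * prepared in the loop shown in the hunk above. */
	us->submitted = 0;
	us->len = NOOF_SETRATE_URBS;
	usx2y->us04 = us;		/* in04 irq handler submits the urbs */
	wait_event_timeout(usx2y->in04_wait_queue, us->len == 0, HZ);
	usx2y->us04 = NULL;
	if (us->len)
		err = -ENODEV;		/* script never completed in time */
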
+@@ -702,53 +702,53 @@ static int usX2Y_rate_set(struct usX2Ydev *usX2Y, int rate)
+ }
+ 
+ 
+-static int usX2Y_format_set(struct usX2Ydev *usX2Y, snd_pcm_format_t format)
++static int usx2y_format_set(struct usx2ydev *usx2y, snd_pcm_format_t format)
+ {
+ 	int alternate, err;
+ 	struct list_head* p;
+ 	if (format == SNDRV_PCM_FORMAT_S24_3LE) {
+ 		alternate = 2;
+-		usX2Y->stride = 6;
++		usx2y->stride = 6;
+ 	} else {
+ 		alternate = 1;
+-		usX2Y->stride = 4;
++		usx2y->stride = 4;
+ 	}
+-	list_for_each(p, &usX2Y->midi_list) {
++	list_for_each(p, &usx2y->midi_list) {
+ 		snd_usbmidi_input_stop(p);
+ 	}
+-	usb_kill_urb(usX2Y->In04urb);
+-	if ((err = usb_set_interface(usX2Y->dev, 0, alternate))) {
++	usb_kill_urb(usx2y->in04_urb);
++	if ((err = usb_set_interface(usx2y->dev, 0, alternate))) {
+ 		snd_printk(KERN_ERR "usb_set_interface error \n");
+ 		return err;
+ 	}
+-	usX2Y->In04urb->dev = usX2Y->dev;
+-	err = usb_submit_urb(usX2Y->In04urb, GFP_KERNEL);
+-	list_for_each(p, &usX2Y->midi_list) {
++	usx2y->in04_urb->dev = usx2y->dev;
++	err = usb_submit_urb(usx2y->in04_urb, GFP_KERNEL);
++	list_for_each(p, &usx2y->midi_list) {
+ 		snd_usbmidi_input_start(p);
+ 	}
+-	usX2Y->format = format;
+-	usX2Y->rate = 0;
++	usx2y->format = format;
++	usx2y->rate = 0;
+ 	return err;
+ }
+ 
+ 
+-static int snd_usX2Y_pcm_hw_params(struct snd_pcm_substream *substream,
++static int snd_usx2y_pcm_hw_params(struct snd_pcm_substream *substream,
+ 				   struct snd_pcm_hw_params *hw_params)
+ {
+ 	int			err = 0;
+ 	unsigned int		rate = params_rate(hw_params);
+ 	snd_pcm_format_t	format = params_format(hw_params);
+ 	struct snd_card *card = substream->pstr->pcm->card;
+-	struct usX2Ydev	*dev = usX2Y(card);
++	struct usx2ydev	*dev = usx2y(card);
+ 	int i;
+ 
+-	mutex_lock(&usX2Y(card)->pcm_mutex);
+-	snd_printdd("snd_usX2Y_hw_params(%p, %p)\n", substream, hw_params);
+-	/* all pcm substreams off one usX2Y have to operate at the same
++	mutex_lock(&usx2y(card)->pcm_mutex);
++	snd_printdd("snd_usx2y_hw_params(%p, %p)\n", substream, hw_params);
++	/* all pcm substreams of one usx2y have to operate at the same
+ 	 * rate & format
+ 	 */
+ 	for (i = 0; i < dev->pcm_devs * 2; i++) {
+-		struct snd_usX2Y_substream *subs = dev->subs[i];
++		struct snd_usx2y_substream *subs = dev->subs[i];
+ 		struct snd_pcm_substream *test_substream;
+ 
+ 		if (!subs)
+@@ -767,39 +767,39 @@ static int snd_usX2Y_pcm_hw_params(struct snd_pcm_substream *substream,
+ 	}
+ 
+  error:
+-	mutex_unlock(&usX2Y(card)->pcm_mutex);
++	mutex_unlock(&usx2y(card)->pcm_mutex);
+ 	return err;
+ }
+ 
+ /*
+  * free the buffer
+  */
+-static int snd_usX2Y_pcm_hw_free(struct snd_pcm_substream *substream)
++static int snd_usx2y_pcm_hw_free(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_usX2Y_substream *subs = runtime->private_data;
+-	mutex_lock(&subs->usX2Y->pcm_mutex);
+-	snd_printdd("snd_usX2Y_hw_free(%p)\n", substream);
++	struct snd_usx2y_substream *subs = runtime->private_data;
++	mutex_lock(&subs->usx2y->pcm_mutex);
++	snd_printdd("snd_usx2y_hw_free(%p)\n", substream);
+ 
+ 	if (SNDRV_PCM_STREAM_PLAYBACK == substream->stream) {
+-		struct snd_usX2Y_substream *cap_subs = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
+-		atomic_set(&subs->state, state_STOPPED);
+-		usX2Y_urbs_release(subs);
++		struct snd_usx2y_substream *cap_subs = subs->usx2y->subs[SNDRV_PCM_STREAM_CAPTURE];
++		atomic_set(&subs->state, STATE_STOPPED);
++		usx2y_urbs_release(subs);
+ 		if (!cap_subs->pcm_substream ||
+ 		    !cap_subs->pcm_substream->runtime ||
+ 		    !cap_subs->pcm_substream->runtime->status ||
+ 		    cap_subs->pcm_substream->runtime->status->state < SNDRV_PCM_STATE_PREPARED) {
+-			atomic_set(&cap_subs->state, state_STOPPED);
+-			usX2Y_urbs_release(cap_subs);
++			atomic_set(&cap_subs->state, STATE_STOPPED);
++			usx2y_urbs_release(cap_subs);
+ 		}
+ 	} else {
+-		struct snd_usX2Y_substream *playback_subs = subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+-		if (atomic_read(&playback_subs->state) < state_PREPARED) {
+-			atomic_set(&subs->state, state_STOPPED);
+-			usX2Y_urbs_release(subs);
++		struct snd_usx2y_substream *playback_subs = subs->usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK];
++		if (atomic_read(&playback_subs->state) < STATE_PREPARED) {
++			atomic_set(&subs->state, STATE_STOPPED);
++			usx2y_urbs_release(subs);
+ 		}
+ 	}
+-	mutex_unlock(&subs->usX2Y->pcm_mutex);
++	mutex_unlock(&subs->usx2y->pcm_mutex);
+ 	return 0;
+ }
+ /*
+@@ -807,40 +807,40 @@ static int snd_usX2Y_pcm_hw_free(struct snd_pcm_substream *substream)
+  *
+  * set format and initialize urbs
+  */
+-static int snd_usX2Y_pcm_prepare(struct snd_pcm_substream *substream)
++static int snd_usx2y_pcm_prepare(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_usX2Y_substream *subs = runtime->private_data;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	struct snd_usX2Y_substream *capsubs = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
++	struct snd_usx2y_substream *subs = runtime->private_data;
++	struct usx2ydev *usx2y = subs->usx2y;
++	struct snd_usx2y_substream *capsubs = subs->usx2y->subs[SNDRV_PCM_STREAM_CAPTURE];
+ 	int err = 0;
+-	snd_printdd("snd_usX2Y_pcm_prepare(%p)\n", substream);
++	snd_printdd("snd_usx2y_pcm_prepare(%p)\n", substream);
+ 
+-	mutex_lock(&usX2Y->pcm_mutex);
+-	usX2Y_subs_prepare(subs);
++	mutex_lock(&usx2y->pcm_mutex);
++	usx2y_subs_prepare(subs);
+ // Start hardware streams
+ // SyncStream first....
+-	if (atomic_read(&capsubs->state) < state_PREPARED) {
+-		if (usX2Y->format != runtime->format)
+-			if ((err = usX2Y_format_set(usX2Y, runtime->format)) < 0)
++	if (atomic_read(&capsubs->state) < STATE_PREPARED) {
++		if (usx2y->format != runtime->format)
++			if ((err = usx2y_format_set(usx2y, runtime->format)) < 0)
+ 				goto up_prepare_mutex;
+-		if (usX2Y->rate != runtime->rate)
+-			if ((err = usX2Y_rate_set(usX2Y, runtime->rate)) < 0)
++		if (usx2y->rate != runtime->rate)
++			if ((err = usx2y_rate_set(usx2y, runtime->rate)) < 0)
+ 				goto up_prepare_mutex;
+ 		snd_printdd("starting capture pipe for %s\n", subs == capsubs ? "self" : "playpipe");
+-		if (0 > (err = usX2Y_urbs_start(capsubs)))
++		if (0 > (err = usx2y_urbs_start(capsubs)))
+ 			goto up_prepare_mutex;
+ 	}
+ 
+-	if (subs != capsubs && atomic_read(&subs->state) < state_PREPARED)
+-		err = usX2Y_urbs_start(subs);
++	if (subs != capsubs && atomic_read(&subs->state) < STATE_PREPARED)
++		err = usx2y_urbs_start(subs);
+ 
+  up_prepare_mutex:
+-	mutex_unlock(&usX2Y->pcm_mutex);
++	mutex_unlock(&usx2y->pcm_mutex);
+ 	return err;
+ }
+ 
+-static const struct snd_pcm_hardware snd_usX2Y_2c =
++static const struct snd_pcm_hardware snd_usx2y_2c =
+ {
+ 	.info =			(SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_INTERLEAVED |
+ 				 SNDRV_PCM_INFO_BLOCK_TRANSFER |
+@@ -862,16 +862,16 @@ static const struct snd_pcm_hardware snd_usX2Y_2c =
+ 
+ 
+ 
+-static int snd_usX2Y_pcm_open(struct snd_pcm_substream *substream)
++static int snd_usx2y_pcm_open(struct snd_pcm_substream *substream)
+ {
+-	struct snd_usX2Y_substream	*subs = ((struct snd_usX2Y_substream **)
++	struct snd_usx2y_substream	*subs = ((struct snd_usx2y_substream **)
+ 					 snd_pcm_substream_chip(substream))[substream->stream];
+ 	struct snd_pcm_runtime	*runtime = substream->runtime;
+ 
+-	if (subs->usX2Y->chip_status & USX2Y_STAT_CHIP_MMAP_PCM_URBS)
++	if (subs->usx2y->chip_status & USX2Y_STAT_CHIP_MMAP_PCM_URBS)
+ 		return -EBUSY;
+ 
+-	runtime->hw = snd_usX2Y_2c;
++	runtime->hw = snd_usx2y_2c;
+ 	runtime->private_data = subs;
+ 	subs->pcm_substream = substream;
+ 	snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_PERIOD_TIME, 1000, 200000);
+@@ -880,10 +880,10 @@ static int snd_usX2Y_pcm_open(struct snd_pcm_substream *substream)
+ 
+ 
+ 
+-static int snd_usX2Y_pcm_close(struct snd_pcm_substream *substream)
++static int snd_usx2y_pcm_close(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_usX2Y_substream *subs = runtime->private_data;
++	struct snd_usx2y_substream *subs = runtime->private_data;
+ 
+ 	subs->pcm_substream = NULL;
+ 
+@@ -891,75 +891,75 @@ static int snd_usX2Y_pcm_close(struct snd_pcm_substream *substream)
+ }
+ 
+ 
+-static const struct snd_pcm_ops snd_usX2Y_pcm_ops =
++static const struct snd_pcm_ops snd_usx2y_pcm_ops =
+ {
+-	.open =		snd_usX2Y_pcm_open,
+-	.close =	snd_usX2Y_pcm_close,
+-	.hw_params =	snd_usX2Y_pcm_hw_params,
+-	.hw_free =	snd_usX2Y_pcm_hw_free,
+-	.prepare =	snd_usX2Y_pcm_prepare,
+-	.trigger =	snd_usX2Y_pcm_trigger,
+-	.pointer =	snd_usX2Y_pcm_pointer,
++	.open =		snd_usx2y_pcm_open,
++	.close =	snd_usx2y_pcm_close,
++	.hw_params =	snd_usx2y_pcm_hw_params,
++	.hw_free =	snd_usx2y_pcm_hw_free,
++	.prepare =	snd_usx2y_pcm_prepare,
++	.trigger =	snd_usx2y_pcm_trigger,
++	.pointer =	snd_usx2y_pcm_pointer,
+ };
+ 
+ 
+ /*
+  * free a usb stream instance
+  */
+-static void usX2Y_audio_stream_free(struct snd_usX2Y_substream **usX2Y_substream)
++static void usx2y_audio_stream_free(struct snd_usx2y_substream **usx2y_substream)
+ {
+ 	int stream;
+ 
+ 	for_each_pcm_streams(stream) {
+-		kfree(usX2Y_substream[stream]);
+-		usX2Y_substream[stream] = NULL;
++		kfree(usx2y_substream[stream]);
++		usx2y_substream[stream] = NULL;
+ 	}
+ }
+ 
+-static void snd_usX2Y_pcm_private_free(struct snd_pcm *pcm)
++static void snd_usx2y_pcm_private_free(struct snd_pcm *pcm)
+ {
+-	struct snd_usX2Y_substream **usX2Y_stream = pcm->private_data;
+-	if (usX2Y_stream)
+-		usX2Y_audio_stream_free(usX2Y_stream);
++	struct snd_usx2y_substream **usx2y_stream = pcm->private_data;
++	if (usx2y_stream)
++		usx2y_audio_stream_free(usx2y_stream);
+ }
+ 
+-static int usX2Y_audio_stream_new(struct snd_card *card, int playback_endpoint, int capture_endpoint)
++static int usx2y_audio_stream_new(struct snd_card *card, int playback_endpoint, int capture_endpoint)
+ {
+ 	struct snd_pcm *pcm;
+ 	int err, i;
+-	struct snd_usX2Y_substream **usX2Y_substream =
+-		usX2Y(card)->subs + 2 * usX2Y(card)->pcm_devs;
++	struct snd_usx2y_substream **usx2y_substream =
++		usx2y(card)->subs + 2 * usx2y(card)->pcm_devs;
+ 
+ 	for (i = playback_endpoint ? SNDRV_PCM_STREAM_PLAYBACK : SNDRV_PCM_STREAM_CAPTURE;
+ 	     i <= SNDRV_PCM_STREAM_CAPTURE; ++i) {
+-		usX2Y_substream[i] = kzalloc(sizeof(struct snd_usX2Y_substream), GFP_KERNEL);
+-		if (!usX2Y_substream[i])
++		usx2y_substream[i] = kzalloc(sizeof(struct snd_usx2y_substream), GFP_KERNEL);
++		if (!usx2y_substream[i])
+ 			return -ENOMEM;
+ 
+-		usX2Y_substream[i]->usX2Y = usX2Y(card);
++		usx2y_substream[i]->usx2y = usx2y(card);
+ 	}
+ 
+ 	if (playback_endpoint)
+-		usX2Y_substream[SNDRV_PCM_STREAM_PLAYBACK]->endpoint = playback_endpoint;
+-	usX2Y_substream[SNDRV_PCM_STREAM_CAPTURE]->endpoint = capture_endpoint;
++		usx2y_substream[SNDRV_PCM_STREAM_PLAYBACK]->endpoint = playback_endpoint;
++	usx2y_substream[SNDRV_PCM_STREAM_CAPTURE]->endpoint = capture_endpoint;
+ 
+-	err = snd_pcm_new(card, NAME_ALLCAPS" Audio", usX2Y(card)->pcm_devs,
++	err = snd_pcm_new(card, NAME_ALLCAPS" Audio", usx2y(card)->pcm_devs,
+ 			  playback_endpoint ? 1 : 0, 1,
+ 			  &pcm);
+ 	if (err < 0) {
+-		usX2Y_audio_stream_free(usX2Y_substream);
++		usx2y_audio_stream_free(usx2y_substream);
+ 		return err;
+ 	}
+ 
+ 	if (playback_endpoint)
+-		snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_usX2Y_pcm_ops);
+-	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_usX2Y_pcm_ops);
++		snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_usx2y_pcm_ops);
++	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_usx2y_pcm_ops);
+ 
+-	pcm->private_data = usX2Y_substream;
+-	pcm->private_free = snd_usX2Y_pcm_private_free;
++	pcm->private_data = usx2y_substream;
++	pcm->private_free = snd_usx2y_pcm_private_free;
+ 	pcm->info_flags = 0;
+ 
+-	sprintf(pcm->name, NAME_ALLCAPS" Audio #%d", usX2Y(card)->pcm_devs);
++	sprintf(pcm->name, NAME_ALLCAPS" Audio #%d", usx2y(card)->pcm_devs);
+ 
+ 	if (playback_endpoint) {
+ 		snd_pcm_set_managed_buffer(pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream,
+@@ -972,7 +972,7 @@ static int usX2Y_audio_stream_new(struct snd_card *card, int playback_endpoint,
+ 				   SNDRV_DMA_TYPE_CONTINUOUS,
+ 				   NULL,
+ 				   64*1024, 128*1024);
+-	usX2Y(card)->pcm_devs++;
++	usx2y(card)->pcm_devs++;
+ 
+ 	return 0;
+ }
+@@ -980,18 +980,18 @@ static int usX2Y_audio_stream_new(struct snd_card *card, int playback_endpoint,
+ /*
+  * create a chip instance and set its names.
+  */
+-int usX2Y_audio_create(struct snd_card *card)
++int usx2y_audio_create(struct snd_card *card)
+ {
+ 	int err = 0;
+ 	
+-	INIT_LIST_HEAD(&usX2Y(card)->pcm_list);
++	INIT_LIST_HEAD(&usx2y(card)->pcm_list);
+ 
+-	if (0 > (err = usX2Y_audio_stream_new(card, 0xA, 0x8)))
++	if (0 > (err = usx2y_audio_stream_new(card, 0xA, 0x8)))
+ 		return err;
+-	if (le16_to_cpu(usX2Y(card)->dev->descriptor.idProduct) == USB_ID_US428)
+-	     if (0 > (err = usX2Y_audio_stream_new(card, 0, 0xA)))
++	if (le16_to_cpu(usx2y(card)->dev->descriptor.idProduct) == USB_ID_US428)
++	     if (0 > (err = usx2y_audio_stream_new(card, 0, 0xA)))
+ 		     return err;
+-	if (le16_to_cpu(usX2Y(card)->dev->descriptor.idProduct) != USB_ID_US122)
+-		err = usX2Y_rate_set(usX2Y(card), 44100);	// Lets us428 recognize output-volume settings, disturbs us122.
++	if (le16_to_cpu(usx2y(card)->dev->descriptor.idProduct) != USB_ID_US122)
++		err = usx2y_rate_set(usx2y(card), 44100);	// Lets us428 recognize output-volume settings, disturbs us122.
+ 	return err;
+ }
+diff --git a/sound/usb/usx2y/usx2yhwdeppcm.c b/sound/usb/usx2y/usx2yhwdeppcm.c
+index 8253669c6a7d7..399470e51c411 100644
+--- a/sound/usb/usx2y/usx2yhwdeppcm.c
++++ b/sound/usb/usx2y/usx2yhwdeppcm.c
+@@ -47,17 +47,17 @@
+ #include <sound/hwdep.h>
+ 
+ 
+-static int usX2Y_usbpcm_urb_capt_retire(struct snd_usX2Y_substream *subs)
++static int usx2y_usbpcm_urb_capt_retire(struct snd_usx2y_substream *subs)
+ {
+ 	struct urb	*urb = subs->completed_urb;
+ 	struct snd_pcm_runtime *runtime = subs->pcm_substream->runtime;
+ 	int 		i, lens = 0, hwptr_done = subs->hwptr_done;
+-	struct usX2Ydev	*usX2Y = subs->usX2Y;
+-	if (0 > usX2Y->hwdep_pcm_shm->capture_iso_start) { //FIXME
+-		int head = usX2Y->hwdep_pcm_shm->captured_iso_head + 1;
+-		if (head >= ARRAY_SIZE(usX2Y->hwdep_pcm_shm->captured_iso))
++	struct usx2ydev	*usx2y = subs->usx2y;
++	if (0 > usx2y->hwdep_pcm_shm->capture_iso_start) { //FIXME
++		int head = usx2y->hwdep_pcm_shm->captured_iso_head + 1;
++		if (head >= ARRAY_SIZE(usx2y->hwdep_pcm_shm->captured_iso))
+ 			head = 0;
+-		usX2Y->hwdep_pcm_shm->capture_iso_start = head;
++		usx2y->hwdep_pcm_shm->capture_iso_start = head;
+ 		snd_printdd("cap start %i\n", head);
+ 	}
+ 	for (i = 0; i < nr_of_packs(); i++) {
+@@ -65,7 +65,7 @@ static int usX2Y_usbpcm_urb_capt_retire(struct snd_usX2Y_substream *subs)
+ 			snd_printk(KERN_ERR "active frame status %i. Most probably some hardware problem.\n", urb->iso_frame_desc[i].status);
+ 			return urb->iso_frame_desc[i].status;
+ 		}
+-		lens += urb->iso_frame_desc[i].actual_length / usX2Y->stride;
++		lens += urb->iso_frame_desc[i].actual_length / usx2y->stride;
+ 	}
+ 	if ((hwptr_done += lens) >= runtime->buffer_size)
+ 		hwptr_done -= runtime->buffer_size;
+@@ -79,10 +79,10 @@ static int usX2Y_usbpcm_urb_capt_retire(struct snd_usX2Y_substream *subs)
+ 	return 0;
+ }
+ 
+-static inline int usX2Y_iso_frames_per_buffer(struct snd_pcm_runtime *runtime,
+-					      struct usX2Ydev * usX2Y)
++static inline int usx2y_iso_frames_per_buffer(struct snd_pcm_runtime *runtime,
++					      struct usx2ydev * usx2y)
+ {
+-	return (runtime->buffer_size * 1000) / usX2Y->rate + 1;	//FIXME: so far only correct period_size == 2^x ?
++	return (runtime->buffer_size * 1000) / usx2y->rate + 1;	//FIXME: so far only correct period_size == 2^x ?
+ }
+ 
+ /*
+@@ -95,17 +95,17 @@ static inline int usX2Y_iso_frames_per_buffer(struct snd_pcm_runtime *runtime,
+  * it directly from the buffer.  thus the data is once copied to
+  * a temporary buffer and urb points to that.
+  */
+-static int usX2Y_hwdep_urb_play_prepare(struct snd_usX2Y_substream *subs,
++static int usx2y_hwdep_urb_play_prepare(struct snd_usx2y_substream *subs,
+ 					struct urb *urb)
+ {
+ 	int count, counts, pack;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	struct snd_usX2Y_hwdep_pcm_shm *shm = usX2Y->hwdep_pcm_shm;
++	struct usx2ydev *usx2y = subs->usx2y;
++	struct snd_usx2y_hwdep_pcm_shm *shm = usx2y->hwdep_pcm_shm;
+ 	struct snd_pcm_runtime *runtime = subs->pcm_substream->runtime;
+ 
+ 	if (0 > shm->playback_iso_start) {
+ 		shm->playback_iso_start = shm->captured_iso_head -
+-			usX2Y_iso_frames_per_buffer(runtime, usX2Y);
++			usx2y_iso_frames_per_buffer(runtime, usx2y);
+ 		if (0 > shm->playback_iso_start)
+ 			shm->playback_iso_start += ARRAY_SIZE(shm->captured_iso);
+ 		shm->playback_iso_head = shm->playback_iso_start;
+@@ -114,7 +114,7 @@ static int usX2Y_hwdep_urb_play_prepare(struct snd_usX2Y_substream *subs,
+ 	count = 0;
+ 	for (pack = 0; pack < nr_of_packs(); pack++) {
+ 		/* calculate the size of a packet */
+-		counts = shm->captured_iso[shm->playback_iso_head].length / usX2Y->stride;
++		counts = shm->captured_iso[shm->playback_iso_head].length / usx2y->stride;
+ 		if (counts < 43 || counts > 50) {
+ 			snd_printk(KERN_ERR "should not be here with counts=%i\n", counts);
+ 			return -EPIPE;
+@@ -122,26 +122,26 @@ static int usX2Y_hwdep_urb_play_prepare(struct snd_usX2Y_substream *subs,
+ 		/* set up descriptor */
+ 		urb->iso_frame_desc[pack].offset = shm->captured_iso[shm->playback_iso_head].offset;
+ 		urb->iso_frame_desc[pack].length = shm->captured_iso[shm->playback_iso_head].length;
+-		if (atomic_read(&subs->state) != state_RUNNING)
++		if (atomic_read(&subs->state) != STATE_RUNNING)
+ 			memset((char *)urb->transfer_buffer + urb->iso_frame_desc[pack].offset, 0,
+ 			       urb->iso_frame_desc[pack].length);
+ 		if (++shm->playback_iso_head >= ARRAY_SIZE(shm->captured_iso))
+ 			shm->playback_iso_head = 0;
+ 		count += counts;
+ 	}
+-	urb->transfer_buffer_length = count * usX2Y->stride;
++	urb->transfer_buffer_length = count * usx2y->stride;
+ 	return 0;
+ }
+ 
+ 
+-static inline void usX2Y_usbpcm_urb_capt_iso_advance(struct snd_usX2Y_substream *subs,
++static inline void usx2y_usbpcm_urb_capt_iso_advance(struct snd_usx2y_substream *subs,
+ 						     struct urb *urb)
+ {
+ 	int pack;
+ 	for (pack = 0; pack < nr_of_packs(); ++pack) {
+ 		struct usb_iso_packet_descriptor *desc = urb->iso_frame_desc + pack;
+ 		if (NULL != subs) {
+-			struct snd_usX2Y_hwdep_pcm_shm *shm = subs->usX2Y->hwdep_pcm_shm;
++			struct snd_usx2y_hwdep_pcm_shm *shm = subs->usx2y->hwdep_pcm_shm;
+ 			int head = shm->captured_iso_head + 1;
+ 			if (head >= ARRAY_SIZE(shm->captured_iso))
+ 				head = 0;
+@@ -157,9 +157,9 @@ static inline void usX2Y_usbpcm_urb_capt_iso_advance(struct snd_usX2Y_substream
+ 	}
+ }
+ 
+-static inline int usX2Y_usbpcm_usbframe_complete(struct snd_usX2Y_substream *capsubs,
+-						 struct snd_usX2Y_substream *capsubs2,
+-						 struct snd_usX2Y_substream *playbacksubs,
++static inline int usx2y_usbpcm_usbframe_complete(struct snd_usx2y_substream *capsubs,
++						 struct snd_usx2y_substream *capsubs2,
++						 struct snd_usx2y_substream *playbacksubs,
+ 						 int frame)
+ {
+ 	int err, state;
+@@ -167,25 +167,25 @@ static inline int usX2Y_usbpcm_usbframe_complete(struct snd_usX2Y_substream *cap
+ 
+ 	state = atomic_read(&playbacksubs->state);
+ 	if (NULL != urb) {
+-		if (state == state_RUNNING)
+-			usX2Y_urb_play_retire(playbacksubs, urb);
+-		else if (state >= state_PRERUNNING)
++		if (state == STATE_RUNNING)
++			usx2y_urb_play_retire(playbacksubs, urb);
++		else if (state >= STATE_PRERUNNING)
+ 			atomic_inc(&playbacksubs->state);
+ 	} else {
+ 		switch (state) {
+-		case state_STARTING1:
++		case STATE_STARTING1:
+ 			urb = playbacksubs->urb[0];
+ 			atomic_inc(&playbacksubs->state);
+ 			break;
+-		case state_STARTING2:
++		case STATE_STARTING2:
+ 			urb = playbacksubs->urb[1];
+ 			atomic_inc(&playbacksubs->state);
+ 			break;
+ 		}
+ 	}
+ 	if (urb) {
+-		if ((err = usX2Y_hwdep_urb_play_prepare(playbacksubs, urb)) ||
+-		    (err = usX2Y_urb_submit(playbacksubs, urb, frame))) {
++		if ((err = usx2y_hwdep_urb_play_prepare(playbacksubs, urb)) ||
++		    (err = usx2y_urb_submit(playbacksubs, urb, frame))) {
+ 			return err;
+ 		}
+ 	}
+@@ -193,19 +193,19 @@ static inline int usX2Y_usbpcm_usbframe_complete(struct snd_usX2Y_substream *cap
+ 	playbacksubs->completed_urb = NULL;
+ 
+ 	state = atomic_read(&capsubs->state);
+-	if (state >= state_PREPARED) {
+-		if (state == state_RUNNING) {
+-			if ((err = usX2Y_usbpcm_urb_capt_retire(capsubs)))
++	if (state >= STATE_PREPARED) {
++		if (state == STATE_RUNNING) {
++			if ((err = usx2y_usbpcm_urb_capt_retire(capsubs)))
+ 				return err;
+-		} else if (state >= state_PRERUNNING)
++		} else if (state >= STATE_PRERUNNING)
+ 			atomic_inc(&capsubs->state);
+-		usX2Y_usbpcm_urb_capt_iso_advance(capsubs, capsubs->completed_urb);
++		usx2y_usbpcm_urb_capt_iso_advance(capsubs, capsubs->completed_urb);
+ 		if (NULL != capsubs2)
+-			usX2Y_usbpcm_urb_capt_iso_advance(NULL, capsubs2->completed_urb);
+-		if ((err = usX2Y_urb_submit(capsubs, capsubs->completed_urb, frame)))
++			usx2y_usbpcm_urb_capt_iso_advance(NULL, capsubs2->completed_urb);
++		if ((err = usx2y_urb_submit(capsubs, capsubs->completed_urb, frame)))
+ 			return err;
+ 		if (NULL != capsubs2)
+-			if ((err = usX2Y_urb_submit(capsubs2, capsubs2->completed_urb, frame)))
++			if ((err = usx2y_urb_submit(capsubs2, capsubs2->completed_urb, frame)))
+ 				return err;
+ 	}
+ 	capsubs->completed_urb = NULL;
+@@ -215,42 +215,42 @@ static inline int usX2Y_usbpcm_usbframe_complete(struct snd_usX2Y_substream *cap
+ }
+ 
+ 
+-static void i_usX2Y_usbpcm_urb_complete(struct urb *urb)
++static void i_usx2y_usbpcm_urb_complete(struct urb *urb)
+ {
+-	struct snd_usX2Y_substream *subs = urb->context;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	struct snd_usX2Y_substream *capsubs, *capsubs2, *playbacksubs;
++	struct snd_usx2y_substream *subs = urb->context;
++	struct usx2ydev *usx2y = subs->usx2y;
++	struct snd_usx2y_substream *capsubs, *capsubs2, *playbacksubs;
+ 
+-	if (unlikely(atomic_read(&subs->state) < state_PREPARED)) {
++	if (unlikely(atomic_read(&subs->state) < STATE_PREPARED)) {
+ 		snd_printdd("hcd_frame=%i ep=%i%s status=%i start_frame=%i\n",
+-			    usb_get_current_frame_number(usX2Y->dev),
++			    usb_get_current_frame_number(usx2y->dev),
+ 			    subs->endpoint, usb_pipein(urb->pipe) ? "in" : "out",
+ 			    urb->status, urb->start_frame);
+ 		return;
+ 	}
+ 	if (unlikely(urb->status)) {
+-		usX2Y_error_urb_status(usX2Y, subs, urb);
++		usx2y_error_urb_status(usx2y, subs, urb);
+ 		return;
+ 	}
+ 
+ 	subs->completed_urb = urb;
+-	capsubs = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
+-	capsubs2 = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
+-	playbacksubs = usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+-	if (capsubs->completed_urb && atomic_read(&capsubs->state) >= state_PREPARED &&
++	capsubs = usx2y->subs[SNDRV_PCM_STREAM_CAPTURE];
++	capsubs2 = usx2y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
++	playbacksubs = usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK];
++	if (capsubs->completed_urb && atomic_read(&capsubs->state) >= STATE_PREPARED &&
+ 	    (NULL == capsubs2 || capsubs2->completed_urb) &&
+-	    (playbacksubs->completed_urb || atomic_read(&playbacksubs->state) < state_PREPARED)) {
+-		if (!usX2Y_usbpcm_usbframe_complete(capsubs, capsubs2, playbacksubs, urb->start_frame))
+-			usX2Y->wait_iso_frame += nr_of_packs();
++	    (playbacksubs->completed_urb || atomic_read(&playbacksubs->state) < STATE_PREPARED)) {
++		if (!usx2y_usbpcm_usbframe_complete(capsubs, capsubs2, playbacksubs, urb->start_frame))
++			usx2y->wait_iso_frame += nr_of_packs();
+ 		else {
+ 			snd_printdd("\n");
+-			usX2Y_clients_stop(usX2Y);
++			usx2y_clients_stop(usx2y);
+ 		}
+ 	}
+ }
+ 
+ 
+-static void usX2Y_hwdep_urb_release(struct urb **urb)
++static void usx2y_hwdep_urb_release(struct urb **urb)
+ {
+ 	usb_kill_urb(*urb);
+ 	usb_free_urb(*urb);
+@@ -260,49 +260,49 @@ static void usX2Y_hwdep_urb_release(struct urb **urb)
+ /*
+  * release a substream
+  */
+-static void usX2Y_usbpcm_urbs_release(struct snd_usX2Y_substream *subs)
++static void usx2y_usbpcm_urbs_release(struct snd_usx2y_substream *subs)
+ {
+ 	int i;
+-	snd_printdd("snd_usX2Y_urbs_release() %i\n", subs->endpoint);
++	snd_printdd("snd_usx2y_urbs_release() %i\n", subs->endpoint);
+ 	for (i = 0; i < NRURBS; i++)
+-		usX2Y_hwdep_urb_release(subs->urb + i);
++		usx2y_hwdep_urb_release(subs->urb + i);
+ }
+ 
+-static void usX2Y_usbpcm_subs_startup_finish(struct usX2Ydev * usX2Y)
++static void usx2y_usbpcm_subs_startup_finish(struct usx2ydev * usx2y)
+ {
+-	usX2Y_urbs_set_complete(usX2Y, i_usX2Y_usbpcm_urb_complete);
+-	usX2Y->prepare_subs = NULL;
++	usx2y_urbs_set_complete(usx2y, i_usx2y_usbpcm_urb_complete);
++	usx2y->prepare_subs = NULL;
+ }
+ 
+-static void i_usX2Y_usbpcm_subs_startup(struct urb *urb)
++static void i_usx2y_usbpcm_subs_startup(struct urb *urb)
+ {
+-	struct snd_usX2Y_substream *subs = urb->context;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	struct snd_usX2Y_substream *prepare_subs = usX2Y->prepare_subs;
++	struct snd_usx2y_substream *subs = urb->context;
++	struct usx2ydev *usx2y = subs->usx2y;
++	struct snd_usx2y_substream *prepare_subs = usx2y->prepare_subs;
+ 	if (NULL != prepare_subs &&
+ 	    urb->start_frame == prepare_subs->urb[0]->start_frame) {
+ 		atomic_inc(&prepare_subs->state);
+-		if (prepare_subs == usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE]) {
+-			struct snd_usX2Y_substream *cap_subs2 = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
++		if (prepare_subs == usx2y->subs[SNDRV_PCM_STREAM_CAPTURE]) {
++			struct snd_usx2y_substream *cap_subs2 = usx2y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
+ 			if (cap_subs2 != NULL)
+ 				atomic_inc(&cap_subs2->state);
+ 		}
+-		usX2Y_usbpcm_subs_startup_finish(usX2Y);
+-		wake_up(&usX2Y->prepare_wait_queue);
++		usx2y_usbpcm_subs_startup_finish(usx2y);
++		wake_up(&usx2y->prepare_wait_queue);
+ 	}
+ 
+-	i_usX2Y_usbpcm_urb_complete(urb);
++	i_usx2y_usbpcm_urb_complete(urb);
+ }
+ 
+ /*
+  * initialize a substream's urbs
+  */
+-static int usX2Y_usbpcm_urbs_allocate(struct snd_usX2Y_substream *subs)
++static int usx2y_usbpcm_urbs_allocate(struct snd_usx2y_substream *subs)
+ {
+ 	int i;
+ 	unsigned int pipe;
+-	int is_playback = subs == subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+-	struct usb_device *dev = subs->usX2Y->dev;
++	int is_playback = subs == subs->usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK];
++	struct usb_device *dev = subs->usx2y->dev;
+ 
+ 	pipe = is_playback ? usb_sndisocpipe(dev, subs->endpoint) :
+ 			usb_rcvisocpipe(dev, subs->endpoint);
+@@ -319,21 +319,21 @@ static int usX2Y_usbpcm_urbs_allocate(struct snd_usX2Y_substream *subs)
+ 		}
+ 		*purb = usb_alloc_urb(nr_of_packs(), GFP_KERNEL);
+ 		if (NULL == *purb) {
+-			usX2Y_usbpcm_urbs_release(subs);
++			usx2y_usbpcm_urbs_release(subs);
+ 			return -ENOMEM;
+ 		}
+ 		(*purb)->transfer_buffer = is_playback ?
+-			subs->usX2Y->hwdep_pcm_shm->playback : (
++			subs->usx2y->hwdep_pcm_shm->playback : (
+ 				subs->endpoint == 0x8 ?
+-				subs->usX2Y->hwdep_pcm_shm->capture0x8 :
+-				subs->usX2Y->hwdep_pcm_shm->capture0xA);
++				subs->usx2y->hwdep_pcm_shm->capture0x8 :
++				subs->usx2y->hwdep_pcm_shm->capture0xA);
+ 
+ 		(*purb)->dev = dev;
+ 		(*purb)->pipe = pipe;
+ 		(*purb)->number_of_packets = nr_of_packs();
+ 		(*purb)->context = subs;
+ 		(*purb)->interval = 1;
+-		(*purb)->complete = i_usX2Y_usbpcm_subs_startup;
++		(*purb)->complete = i_usx2y_usbpcm_subs_startup;
+ 	}
+ 	return 0;
+ }
+@@ -341,91 +341,91 @@ static int usX2Y_usbpcm_urbs_allocate(struct snd_usX2Y_substream *subs)
+ /*
+  * free the buffer
+  */
+-static int snd_usX2Y_usbpcm_hw_free(struct snd_pcm_substream *substream)
++static int snd_usx2y_usbpcm_hw_free(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_usX2Y_substream *subs = runtime->private_data,
+-		*cap_subs2 = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
+-	mutex_lock(&subs->usX2Y->pcm_mutex);
+-	snd_printdd("snd_usX2Y_usbpcm_hw_free(%p)\n", substream);
++	struct snd_usx2y_substream *subs = runtime->private_data,
++		*cap_subs2 = subs->usx2y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
++	mutex_lock(&subs->usx2y->pcm_mutex);
++	snd_printdd("snd_usx2y_usbpcm_hw_free(%p)\n", substream);
+ 
+ 	if (SNDRV_PCM_STREAM_PLAYBACK == substream->stream) {
+-		struct snd_usX2Y_substream *cap_subs = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
+-		atomic_set(&subs->state, state_STOPPED);
+-		usX2Y_usbpcm_urbs_release(subs);
++		struct snd_usx2y_substream *cap_subs = subs->usx2y->subs[SNDRV_PCM_STREAM_CAPTURE];
++		atomic_set(&subs->state, STATE_STOPPED);
++		usx2y_usbpcm_urbs_release(subs);
+ 		if (!cap_subs->pcm_substream ||
+ 		    !cap_subs->pcm_substream->runtime ||
+ 		    !cap_subs->pcm_substream->runtime->status ||
+ 		    cap_subs->pcm_substream->runtime->status->state < SNDRV_PCM_STATE_PREPARED) {
+-			atomic_set(&cap_subs->state, state_STOPPED);
++			atomic_set(&cap_subs->state, STATE_STOPPED);
+ 			if (NULL != cap_subs2)
+-				atomic_set(&cap_subs2->state, state_STOPPED);
+-			usX2Y_usbpcm_urbs_release(cap_subs);
++				atomic_set(&cap_subs2->state, STATE_STOPPED);
++			usx2y_usbpcm_urbs_release(cap_subs);
+ 			if (NULL != cap_subs2)
+-				usX2Y_usbpcm_urbs_release(cap_subs2);
++				usx2y_usbpcm_urbs_release(cap_subs2);
+ 		}
+ 	} else {
+-		struct snd_usX2Y_substream *playback_subs = subs->usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+-		if (atomic_read(&playback_subs->state) < state_PREPARED) {
+-			atomic_set(&subs->state, state_STOPPED);
++		struct snd_usx2y_substream *playback_subs = subs->usx2y->subs[SNDRV_PCM_STREAM_PLAYBACK];
++		if (atomic_read(&playback_subs->state) < STATE_PREPARED) {
++			atomic_set(&subs->state, STATE_STOPPED);
+ 			if (NULL != cap_subs2)
+-				atomic_set(&cap_subs2->state, state_STOPPED);
+-			usX2Y_usbpcm_urbs_release(subs);
++				atomic_set(&cap_subs2->state, STATE_STOPPED);
++			usx2y_usbpcm_urbs_release(subs);
+ 			if (NULL != cap_subs2)
+-				usX2Y_usbpcm_urbs_release(cap_subs2);
++				usx2y_usbpcm_urbs_release(cap_subs2);
+ 		}
+ 	}
+-	mutex_unlock(&subs->usX2Y->pcm_mutex);
++	mutex_unlock(&subs->usx2y->pcm_mutex);
+ 	return 0;
+ }
+ 
+-static void usX2Y_usbpcm_subs_startup(struct snd_usX2Y_substream *subs)
++static void usx2y_usbpcm_subs_startup(struct snd_usx2y_substream *subs)
+ {
+-	struct usX2Ydev * usX2Y = subs->usX2Y;
+-	usX2Y->prepare_subs = subs;
++	struct usx2ydev * usx2y = subs->usx2y;
++	usx2y->prepare_subs = subs;
+ 	subs->urb[0]->start_frame = -1;
+-	smp_wmb();	// Make sure above modifications are seen by i_usX2Y_subs_startup()
+-	usX2Y_urbs_set_complete(usX2Y, i_usX2Y_usbpcm_subs_startup);
++	smp_wmb();	// Make sure above modifications are seen by i_usx2y_subs_startup()
++	usx2y_urbs_set_complete(usx2y, i_usx2y_usbpcm_subs_startup);
+ }
+ 
+-static int usX2Y_usbpcm_urbs_start(struct snd_usX2Y_substream *subs)
++static int usx2y_usbpcm_urbs_start(struct snd_usx2y_substream *subs)
+ {
+ 	int	p, u, err,
+ 		stream = subs->pcm_substream->stream;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
++	struct usx2ydev *usx2y = subs->usx2y;
+ 
+ 	if (SNDRV_PCM_STREAM_CAPTURE == stream) {
+-		usX2Y->hwdep_pcm_shm->captured_iso_head = -1;
+-		usX2Y->hwdep_pcm_shm->captured_iso_frames = 0;
++		usx2y->hwdep_pcm_shm->captured_iso_head = -1;
++		usx2y->hwdep_pcm_shm->captured_iso_frames = 0;
+ 	}
+ 
+ 	for (p = 0; 3 >= (stream + p); p += 2) {
+-		struct snd_usX2Y_substream *subs = usX2Y->subs[stream + p];
++		struct snd_usx2y_substream *subs = usx2y->subs[stream + p];
+ 		if (subs != NULL) {
+-			if ((err = usX2Y_usbpcm_urbs_allocate(subs)) < 0)
++			if ((err = usx2y_usbpcm_urbs_allocate(subs)) < 0)
+ 				return err;
+ 			subs->completed_urb = NULL;
+ 		}
+ 	}
+ 
+ 	for (p = 0; p < 4; p++) {
+-		struct snd_usX2Y_substream *subs = usX2Y->subs[p];
+-		if (subs != NULL && atomic_read(&subs->state) >= state_PREPARED)
++		struct snd_usx2y_substream *subs = usx2y->subs[p];
++		if (subs != NULL && atomic_read(&subs->state) >= STATE_PREPARED)
+ 			goto start;
+ 	}
+ 
+  start:
+-	usX2Y_usbpcm_subs_startup(subs);
++	usx2y_usbpcm_subs_startup(subs);
+ 	for (u = 0; u < NRURBS; u++) {
+ 		for (p = 0; 3 >= (stream + p); p += 2) {
+-			struct snd_usX2Y_substream *subs = usX2Y->subs[stream + p];
++			struct snd_usx2y_substream *subs = usx2y->subs[stream + p];
+ 			if (subs != NULL) {
+ 				struct urb *urb = subs->urb[u];
+ 				if (usb_pipein(urb->pipe)) {
+ 					unsigned long pack;
+ 					if (0 == u)
+-						atomic_set(&subs->state, state_STARTING3);
+-					urb->dev = usX2Y->dev;
++						atomic_set(&subs->state, STATE_STARTING3);
++					urb->dev = usx2y->dev;
+ 					for (pack = 0; pack < nr_of_packs(); pack++) {
+ 						urb->iso_frame_desc[pack].offset = subs->maxpacksize * (pack + u * nr_of_packs());
+ 						urb->iso_frame_desc[pack].length = subs->maxpacksize;
+@@ -438,25 +438,25 @@ static int usX2Y_usbpcm_urbs_start(struct snd_usX2Y_substream *subs)
+ 					}  else {
+ 						snd_printdd("%i\n", urb->start_frame);
+ 						if (u == 0)
+-							usX2Y->wait_iso_frame = urb->start_frame;
++							usx2y->wait_iso_frame = urb->start_frame;
+ 					}
+ 					urb->transfer_flags = 0;
+ 				} else {
+-					atomic_set(&subs->state, state_STARTING1);
++					atomic_set(&subs->state, STATE_STARTING1);
+ 					break;
+ 				}			
+ 			}
+ 		}
+ 	}
+ 	err = 0;
+-	wait_event(usX2Y->prepare_wait_queue, NULL == usX2Y->prepare_subs);
+-	if (atomic_read(&subs->state) != state_PREPARED)
++	wait_event(usx2y->prepare_wait_queue, NULL == usx2y->prepare_subs);
++	if (atomic_read(&subs->state) != STATE_PREPARED)
+ 		err = -EPIPE;
+ 		
+  cleanup:
+ 	if (err) {
+-		usX2Y_subs_startup_finish(usX2Y);	// Call it now
+-		usX2Y_clients_stop(usX2Y);		// something is completely wroong > stop evrything			
++		usx2y_subs_startup_finish(usx2y);	// Call it now
++		usx2y_clients_stop(usx2y);		// something is completely wrong -> stop everything
+ 	}
+ 	return err;
+ }
+@@ -466,69 +466,69 @@ static int usX2Y_usbpcm_urbs_start(struct snd_usX2Y_substream *subs)
+  *
+  * set format and initialize urbs
+  */
+-static int snd_usX2Y_usbpcm_prepare(struct snd_pcm_substream *substream)
++static int snd_usx2y_usbpcm_prepare(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_usX2Y_substream *subs = runtime->private_data;
+-	struct usX2Ydev *usX2Y = subs->usX2Y;
+-	struct snd_usX2Y_substream *capsubs = subs->usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
++	struct snd_usx2y_substream *subs = runtime->private_data;
++	struct usx2ydev *usx2y = subs->usx2y;
++	struct snd_usx2y_substream *capsubs = subs->usx2y->subs[SNDRV_PCM_STREAM_CAPTURE];
+ 	int err = 0;
+-	snd_printdd("snd_usX2Y_pcm_prepare(%p)\n", substream);
++	snd_printdd("snd_usx2y_pcm_prepare(%p)\n", substream);
+ 
+-	if (NULL == usX2Y->hwdep_pcm_shm) {
+-		usX2Y->hwdep_pcm_shm = alloc_pages_exact(sizeof(struct snd_usX2Y_hwdep_pcm_shm),
++	if (NULL == usx2y->hwdep_pcm_shm) {
++		usx2y->hwdep_pcm_shm = alloc_pages_exact(sizeof(struct snd_usx2y_hwdep_pcm_shm),
+ 							 GFP_KERNEL);
+-		if (!usX2Y->hwdep_pcm_shm)
++		if (!usx2y->hwdep_pcm_shm)
+ 			return -ENOMEM;
+-		memset(usX2Y->hwdep_pcm_shm, 0, sizeof(struct snd_usX2Y_hwdep_pcm_shm));
++		memset(usx2y->hwdep_pcm_shm, 0, sizeof(struct snd_usx2y_hwdep_pcm_shm));
+ 	}
+ 
+-	mutex_lock(&usX2Y->pcm_mutex);
+-	usX2Y_subs_prepare(subs);
++	mutex_lock(&usx2y->pcm_mutex);
++	usx2y_subs_prepare(subs);
+ // Start hardware streams
+ // SyncStream first....
+-	if (atomic_read(&capsubs->state) < state_PREPARED) {
+-		if (usX2Y->format != runtime->format)
+-			if ((err = usX2Y_format_set(usX2Y, runtime->format)) < 0)
++	if (atomic_read(&capsubs->state) < STATE_PREPARED) {
++		if (usx2y->format != runtime->format)
++			if ((err = usx2y_format_set(usx2y, runtime->format)) < 0)
+ 				goto up_prepare_mutex;
+-		if (usX2Y->rate != runtime->rate)
+-			if ((err = usX2Y_rate_set(usX2Y, runtime->rate)) < 0)
++		if (usx2y->rate != runtime->rate)
++			if ((err = usx2y_rate_set(usx2y, runtime->rate)) < 0)
+ 				goto up_prepare_mutex;
+ 		snd_printdd("starting capture pipe for %s\n", subs == capsubs ?
+ 			    "self" : "playpipe");
+-		if (0 > (err = usX2Y_usbpcm_urbs_start(capsubs)))
++		if (0 > (err = usx2y_usbpcm_urbs_start(capsubs)))
+ 			goto up_prepare_mutex;
+ 	}
+ 
+ 	if (subs != capsubs) {
+-		usX2Y->hwdep_pcm_shm->playback_iso_start = -1;
+-		if (atomic_read(&subs->state) < state_PREPARED) {
+-			while (usX2Y_iso_frames_per_buffer(runtime, usX2Y) >
+-			       usX2Y->hwdep_pcm_shm->captured_iso_frames) {
++		usx2y->hwdep_pcm_shm->playback_iso_start = -1;
++		if (atomic_read(&subs->state) < STATE_PREPARED) {
++			while (usx2y_iso_frames_per_buffer(runtime, usx2y) >
++			       usx2y->hwdep_pcm_shm->captured_iso_frames) {
+ 				snd_printdd("Wait: iso_frames_per_buffer=%i,"
+ 					    "captured_iso_frames=%i\n",
+-					    usX2Y_iso_frames_per_buffer(runtime, usX2Y),
+-					    usX2Y->hwdep_pcm_shm->captured_iso_frames);
++					    usx2y_iso_frames_per_buffer(runtime, usx2y),
++					    usx2y->hwdep_pcm_shm->captured_iso_frames);
+ 				if (msleep_interruptible(10)) {
+ 					err = -ERESTARTSYS;
+ 					goto up_prepare_mutex;
+ 				}
+ 			} 
+-			if (0 > (err = usX2Y_usbpcm_urbs_start(subs)))
++			if (0 > (err = usx2y_usbpcm_urbs_start(subs)))
+ 				goto up_prepare_mutex;
+ 		}
+ 		snd_printdd("Ready: iso_frames_per_buffer=%i,captured_iso_frames=%i\n",
+-			    usX2Y_iso_frames_per_buffer(runtime, usX2Y),
+-			    usX2Y->hwdep_pcm_shm->captured_iso_frames);
++			    usx2y_iso_frames_per_buffer(runtime, usx2y),
++			    usx2y->hwdep_pcm_shm->captured_iso_frames);
+ 	} else
+-		usX2Y->hwdep_pcm_shm->capture_iso_start = -1;
++		usx2y->hwdep_pcm_shm->capture_iso_start = -1;
+ 
+  up_prepare_mutex:
+-	mutex_unlock(&usX2Y->pcm_mutex);
++	mutex_unlock(&usx2y->pcm_mutex);
+ 	return err;
+ }
+ 
+-static const struct snd_pcm_hardware snd_usX2Y_4c =
++static const struct snd_pcm_hardware snd_usx2y_4c =
+ {
+ 	.info =			(SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_INTERLEAVED |
+ 				 SNDRV_PCM_INFO_BLOCK_TRANSFER |
+@@ -549,17 +549,17 @@ static const struct snd_pcm_hardware snd_usX2Y_4c =
+ 
+ 
+ 
+-static int snd_usX2Y_usbpcm_open(struct snd_pcm_substream *substream)
++static int snd_usx2y_usbpcm_open(struct snd_pcm_substream *substream)
+ {
+-	struct snd_usX2Y_substream	*subs = ((struct snd_usX2Y_substream **)
++	struct snd_usx2y_substream	*subs = ((struct snd_usx2y_substream **)
+ 					 snd_pcm_substream_chip(substream))[substream->stream];
+ 	struct snd_pcm_runtime	*runtime = substream->runtime;
+ 
+-	if (!(subs->usX2Y->chip_status & USX2Y_STAT_CHIP_MMAP_PCM_URBS))
++	if (!(subs->usx2y->chip_status & USX2Y_STAT_CHIP_MMAP_PCM_URBS))
+ 		return -EBUSY;
+ 
+-	runtime->hw = SNDRV_PCM_STREAM_PLAYBACK == substream->stream ? snd_usX2Y_2c :
+-		(subs->usX2Y->subs[3] ? snd_usX2Y_4c : snd_usX2Y_2c);
++	runtime->hw = SNDRV_PCM_STREAM_PLAYBACK == substream->stream ? snd_usx2y_2c :
++		(subs->usx2y->subs[3] ? snd_usx2y_4c : snd_usx2y_2c);
+ 	runtime->private_data = subs;
+ 	subs->pcm_substream = substream;
+ 	snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_PERIOD_TIME, 1000, 200000);
+@@ -567,35 +567,35 @@ static int snd_usX2Y_usbpcm_open(struct snd_pcm_substream *substream)
+ }
+ 
+ 
+-static int snd_usX2Y_usbpcm_close(struct snd_pcm_substream *substream)
++static int snd_usx2y_usbpcm_close(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_usX2Y_substream *subs = runtime->private_data;
++	struct snd_usx2y_substream *subs = runtime->private_data;
+ 
+ 	subs->pcm_substream = NULL;
+ 	return 0;
+ }
+ 
+ 
+-static const struct snd_pcm_ops snd_usX2Y_usbpcm_ops =
++static const struct snd_pcm_ops snd_usx2y_usbpcm_ops =
+ {
+-	.open =		snd_usX2Y_usbpcm_open,
+-	.close =	snd_usX2Y_usbpcm_close,
+-	.hw_params =	snd_usX2Y_pcm_hw_params,
+-	.hw_free =	snd_usX2Y_usbpcm_hw_free,
+-	.prepare =	snd_usX2Y_usbpcm_prepare,
+-	.trigger =	snd_usX2Y_pcm_trigger,
+-	.pointer =	snd_usX2Y_pcm_pointer,
++	.open =		snd_usx2y_usbpcm_open,
++	.close =	snd_usx2y_usbpcm_close,
++	.hw_params =	snd_usx2y_pcm_hw_params,
++	.hw_free =	snd_usx2y_usbpcm_hw_free,
++	.prepare =	snd_usx2y_usbpcm_prepare,
++	.trigger =	snd_usx2y_pcm_trigger,
++	.pointer =	snd_usx2y_pcm_pointer,
+ };
+ 
+ 
+-static int usX2Y_pcms_busy_check(struct snd_card *card)
++static int usx2y_pcms_busy_check(struct snd_card *card)
+ {
+-	struct usX2Ydev	*dev = usX2Y(card);
++	struct usx2ydev	*dev = usx2y(card);
+ 	int i;
+ 
+ 	for (i = 0; i < dev->pcm_devs * 2; i++) {
+-		struct snd_usX2Y_substream *subs = dev->subs[i];
++		struct snd_usx2y_substream *subs = dev->subs[i];
+ 		if (subs && subs->pcm_substream &&
+ 		    SUBSTREAM_BUSY(subs->pcm_substream))
+ 			return -EBUSY;
+@@ -603,102 +603,102 @@ static int usX2Y_pcms_busy_check(struct snd_card *card)
+ 	return 0;
+ }
+ 
+-static int snd_usX2Y_hwdep_pcm_open(struct snd_hwdep *hw, struct file *file)
++static int snd_usx2y_hwdep_pcm_open(struct snd_hwdep *hw, struct file *file)
+ {
+ 	struct snd_card *card = hw->card;
+ 	int err;
+ 
+-	mutex_lock(&usX2Y(card)->pcm_mutex);
+-	err = usX2Y_pcms_busy_check(card);
++	mutex_lock(&usx2y(card)->pcm_mutex);
++	err = usx2y_pcms_busy_check(card);
+ 	if (!err)
+-		usX2Y(card)->chip_status |= USX2Y_STAT_CHIP_MMAP_PCM_URBS;
+-	mutex_unlock(&usX2Y(card)->pcm_mutex);
++		usx2y(card)->chip_status |= USX2Y_STAT_CHIP_MMAP_PCM_URBS;
++	mutex_unlock(&usx2y(card)->pcm_mutex);
+ 	return err;
+ }
+ 
+ 
+-static int snd_usX2Y_hwdep_pcm_release(struct snd_hwdep *hw, struct file *file)
++static int snd_usx2y_hwdep_pcm_release(struct snd_hwdep *hw, struct file *file)
+ {
+ 	struct snd_card *card = hw->card;
+ 	int err;
+ 
+-	mutex_lock(&usX2Y(card)->pcm_mutex);
+-	err = usX2Y_pcms_busy_check(card);
++	mutex_lock(&usx2y(card)->pcm_mutex);
++	err = usx2y_pcms_busy_check(card);
+ 	if (!err)
+-		usX2Y(hw->card)->chip_status &= ~USX2Y_STAT_CHIP_MMAP_PCM_URBS;
+-	mutex_unlock(&usX2Y(card)->pcm_mutex);
++		usx2y(hw->card)->chip_status &= ~USX2Y_STAT_CHIP_MMAP_PCM_URBS;
++	mutex_unlock(&usx2y(card)->pcm_mutex);
+ 	return err;
+ }
+ 
+ 
+-static void snd_usX2Y_hwdep_pcm_vm_open(struct vm_area_struct *area)
++static void snd_usx2y_hwdep_pcm_vm_open(struct vm_area_struct *area)
+ {
+ }
+ 
+ 
+-static void snd_usX2Y_hwdep_pcm_vm_close(struct vm_area_struct *area)
++static void snd_usx2y_hwdep_pcm_vm_close(struct vm_area_struct *area)
+ {
+ }
+ 
+ 
+-static vm_fault_t snd_usX2Y_hwdep_pcm_vm_fault(struct vm_fault *vmf)
++static vm_fault_t snd_usx2y_hwdep_pcm_vm_fault(struct vm_fault *vmf)
+ {
+ 	unsigned long offset;
+ 	void *vaddr;
+ 
+ 	offset = vmf->pgoff << PAGE_SHIFT;
+-	vaddr = (char *)((struct usX2Ydev *)vmf->vma->vm_private_data)->hwdep_pcm_shm + offset;
++	vaddr = (char *)((struct usx2ydev *)vmf->vma->vm_private_data)->hwdep_pcm_shm + offset;
+ 	vmf->page = virt_to_page(vaddr);
+ 	get_page(vmf->page);
+ 	return 0;
+ }
+ 
+ 
+-static const struct vm_operations_struct snd_usX2Y_hwdep_pcm_vm_ops = {
+-	.open = snd_usX2Y_hwdep_pcm_vm_open,
+-	.close = snd_usX2Y_hwdep_pcm_vm_close,
+-	.fault = snd_usX2Y_hwdep_pcm_vm_fault,
++static const struct vm_operations_struct snd_usx2y_hwdep_pcm_vm_ops = {
++	.open = snd_usx2y_hwdep_pcm_vm_open,
++	.close = snd_usx2y_hwdep_pcm_vm_close,
++	.fault = snd_usx2y_hwdep_pcm_vm_fault,
+ };
+ 
+ 
+-static int snd_usX2Y_hwdep_pcm_mmap(struct snd_hwdep * hw, struct file *filp, struct vm_area_struct *area)
++static int snd_usx2y_hwdep_pcm_mmap(struct snd_hwdep * hw, struct file *filp, struct vm_area_struct *area)
+ {
+ 	unsigned long	size = (unsigned long)(area->vm_end - area->vm_start);
+-	struct usX2Ydev	*usX2Y = hw->private_data;
++	struct usx2ydev	*usx2y = hw->private_data;
+ 
+-	if (!(usX2Y->chip_status & USX2Y_STAT_CHIP_INIT))
++	if (!(usx2y->chip_status & USX2Y_STAT_CHIP_INIT))
+ 		return -EBUSY;
+ 
+ 	/* if userspace tries to mmap beyond end of our buffer, fail */ 
+-	if (size > PAGE_ALIGN(sizeof(struct snd_usX2Y_hwdep_pcm_shm))) {
+-		snd_printd("%lu > %lu\n", size, (unsigned long)sizeof(struct snd_usX2Y_hwdep_pcm_shm)); 
++	if (size > PAGE_ALIGN(sizeof(struct snd_usx2y_hwdep_pcm_shm))) {
++		snd_printd("%lu > %lu\n", size, (unsigned long)sizeof(struct snd_usx2y_hwdep_pcm_shm)); 
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!usX2Y->hwdep_pcm_shm) {
++	if (!usx2y->hwdep_pcm_shm) {
+ 		return -ENODEV;
+ 	}
+-	area->vm_ops = &snd_usX2Y_hwdep_pcm_vm_ops;
++	area->vm_ops = &snd_usx2y_hwdep_pcm_vm_ops;
+ 	area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+ 	area->vm_private_data = hw->private_data;
+ 	return 0;
+ }
+ 
+ 
+-static void snd_usX2Y_hwdep_pcm_private_free(struct snd_hwdep *hwdep)
++static void snd_usx2y_hwdep_pcm_private_free(struct snd_hwdep *hwdep)
+ {
+-	struct usX2Ydev *usX2Y = hwdep->private_data;
+-	if (NULL != usX2Y->hwdep_pcm_shm)
+-		free_pages_exact(usX2Y->hwdep_pcm_shm, sizeof(struct snd_usX2Y_hwdep_pcm_shm));
++	struct usx2ydev *usx2y = hwdep->private_data;
++	if (NULL != usx2y->hwdep_pcm_shm)
++		free_pages_exact(usx2y->hwdep_pcm_shm, sizeof(struct snd_usx2y_hwdep_pcm_shm));
+ }
+ 
+ 
+-int usX2Y_hwdep_pcm_new(struct snd_card *card)
++int usx2y_hwdep_pcm_new(struct snd_card *card)
+ {
+ 	int err;
+ 	struct snd_hwdep *hw;
+ 	struct snd_pcm *pcm;
+-	struct usb_device *dev = usX2Y(card)->dev;
++	struct usb_device *dev = usx2y(card)->dev;
+ 	if (1 != nr_of_packs())
+ 		return 0;
+ 
+@@ -706,11 +706,11 @@ int usX2Y_hwdep_pcm_new(struct snd_card *card)
+ 		return err;
+ 
+ 	hw->iface = SNDRV_HWDEP_IFACE_USX2Y_PCM;
+-	hw->private_data = usX2Y(card);
+-	hw->private_free = snd_usX2Y_hwdep_pcm_private_free;
+-	hw->ops.open = snd_usX2Y_hwdep_pcm_open;
+-	hw->ops.release = snd_usX2Y_hwdep_pcm_release;
+-	hw->ops.mmap = snd_usX2Y_hwdep_pcm_mmap;
++	hw->private_data = usx2y(card);
++	hw->private_free = snd_usx2y_hwdep_pcm_private_free;
++	hw->ops.open = snd_usx2y_hwdep_pcm_open;
++	hw->ops.release = snd_usx2y_hwdep_pcm_release;
++	hw->ops.mmap = snd_usx2y_hwdep_pcm_mmap;
+ 	hw->exclusive = 1;
+ 	sprintf(hw->name, "/dev/bus/usb/%03d/%03d/hwdeppcm", dev->bus->busnum, dev->devnum);
+ 
+@@ -718,10 +718,10 @@ int usX2Y_hwdep_pcm_new(struct snd_card *card)
+ 	if (err < 0) {
+ 		return err;
+ 	}
+-	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_usX2Y_usbpcm_ops);
+-	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_usX2Y_usbpcm_ops);
++	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_usx2y_usbpcm_ops);
++	snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_usx2y_usbpcm_ops);
+ 
+-	pcm->private_data = usX2Y(card)->subs;
++	pcm->private_data = usx2y(card)->subs;
+ 	pcm->info_flags = 0;
+ 
+ 	sprintf(pcm->name, NAME_ALLCAPS" hwdep Audio");
+@@ -739,7 +739,7 @@ int usX2Y_hwdep_pcm_new(struct snd_card *card)
+ 
+ #else
+ 
+-int usX2Y_hwdep_pcm_new(struct snd_card *card)
++int usx2y_hwdep_pcm_new(struct snd_card *card)
+ {
+ 	return 0;
+ }
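
The hwdep PCM half of this rename also exposes the driver's mmap path: userspace maps the hwdep node and snd_usx2y_hwdep_pcm_vm_fault() resolves each page fault into the alloc_pages_exact() shared buffer. Below is a minimal sketch of the client side, assuming a hypothetical device path and mapping size for illustration; neither value is taken from the driver, and mmap() will fail unless the driver has initialized its shm.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical hwdep node; the real path depends on card/device numbering. */
	int fd = open("/dev/snd/hwC0D0", O_RDWR);
	size_t len = 16 * 4096;	/* must stay within PAGE_ALIGN(sizeof shm), else -EINVAL */
	void *shm;

	if (fd < 0) {
		perror("open");
		return 1;
	}
	shm = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (shm == MAP_FAILED)
		perror("mmap");	/* e.g. -ENODEV before hwdep_pcm_shm is allocated */
	else
		munmap(shm, len);
	close(fd);
	return 0;
}
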
+diff --git a/sound/usb/usx2y/usx2yhwdeppcm.h b/sound/usb/usx2y/usx2yhwdeppcm.h
+index eb5a46466f0e6..731b1c5a34741 100644
+--- a/sound/usb/usx2y/usx2yhwdeppcm.h
++++ b/sound/usb/usx2y/usx2yhwdeppcm.h
+@@ -4,7 +4,7 @@
+ #define MAXSTRIDE 3
+ 
+ #define SSS (((MAXPACK*MAXBUFFERMS*MAXSTRIDE + 4096) / 4096) * 4096)
+-struct snd_usX2Y_hwdep_pcm_shm {
++struct snd_usx2y_hwdep_pcm_shm {
+ 	char playback[SSS];
+ 	char capture0x8[SSS];
+ 	char capture0xA[SSS];
+@@ -20,4 +20,4 @@ struct snd_usX2Y_hwdep_pcm_shm {
+ 	int capture_iso_start;
+ };
+ 
+-int usX2Y_hwdep_pcm_new(struct snd_card *card);
++int usx2y_hwdep_pcm_new(struct snd_card *card);
+diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
+index aba12a4d488ef..9321bd0e2f763 100644
+--- a/tools/perf/util/parse-events.y
++++ b/tools/perf/util/parse-events.y
+@@ -316,7 +316,7 @@ event_pmu_name opt_pmu_config
+ 			if (!strncmp(name, "uncore_", 7) &&
+ 			    strncmp($1, "uncore_", 7))
+ 				name += 7;
+-			if (!fnmatch(pattern, name, 0)) {
++			if (!perf_pmu__match(pattern, name, $1)) {
+ 				if (parse_events_copy_term_list(orig_terms, &terms))
+ 					CLEANUP_YYABORT;
+ 				if (!parse_events_add_pmu(_parse_state, list, pmu->name, terms, true, false))
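
The parser change above swaps a bare fnmatch() for perf_pmu__match(), which the pmu.c hunk below introduces, because the glob alone is too permissive for uncore PMU names. As a hedged standalone illustration (not perf code):

#include <fnmatch.h>
#include <stdio.h>

int main(void)
{
	/* fnmatch() returns 0 on match: the glob alone accepts both names,
	 * including "uncore_cboxfoo", which perf_pmu__match() will reject. */
	printf("%d %d\n",
	       fnmatch("uncore_cbox*", "uncore_cbox_2", 0),
	       fnmatch("uncore_cbox*", "uncore_cboxfoo", 0));
	return 0;
}
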
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 88c8ecdc60b0e..44b90d638ad5f 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -3,6 +3,7 @@
+ #include <linux/compiler.h>
+ #include <linux/string.h>
+ #include <linux/zalloc.h>
++#include <linux/ctype.h>
+ #include <subcmd/pager.h>
+ #include <sys/types.h>
+ #include <errno.h>
+@@ -17,6 +18,7 @@
+ #include <locale.h>
+ #include <regex.h>
+ #include <perf/cpumap.h>
++#include <fnmatch.h>
+ #include "debug.h"
+ #include "evsel.h"
+ #include "pmu.h"
+@@ -740,6 +742,27 @@ struct pmu_events_map *__weak pmu_events_map__find(void)
+ 	return perf_pmu__find_map(NULL);
+ }
+ 
++static bool perf_pmu__valid_suffix(char *pmu_name, char *tok)
++{
++	char *p;
++
++	if (strncmp(pmu_name, tok, strlen(tok)))
++		return false;
++
++	p = pmu_name + strlen(tok);
++	if (*p == 0)
++		return true;
++
++	if (*p != '_')
++		return false;
++
++	++p;
++	if (*p == 0 || !isdigit(*p))
++		return false;
++
++	return true;
++}
++
+ bool pmu_uncore_alias_match(const char *pmu_name, const char *name)
+ {
+ 	char *tmp = NULL, *tok, *str;
+@@ -768,7 +791,7 @@ bool pmu_uncore_alias_match(const char *pmu_name, const char *name)
+ 	 */
+ 	for (; tok; name += strlen(tok), tok = strtok_r(NULL, ",", &tmp)) {
+ 		name = strstr(name, tok);
+-		if (!name) {
++		if (!name || !perf_pmu__valid_suffix((char *)name, tok)) {
+ 			res = false;
+ 			goto out;
+ 		}
+@@ -1872,3 +1895,14 @@ bool perf_pmu__has_hybrid(void)
+ 
+ 	return !list_empty(&perf_pmu__hybrid_pmus);
+ }
++
++int perf_pmu__match(char *pattern, char *name, char *tok)
++{
++	if (fnmatch(pattern, name, 0))
++		return -1;
++
++	if (tok && !perf_pmu__valid_suffix(name, tok))
++		return -1;
++
++	return 0;
++}
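
perf_pmu__valid_suffix() encodes the rule that a PMU instance name must be the alias token itself, or the token followed by "_" and a digit. The sketch below mirrors that rule in a standalone program with a few probe cases; it is a hypothetical test harness, not part of the tree.

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool valid_suffix(const char *pmu_name, const char *tok)
{
	const char *p;

	if (strncmp(pmu_name, tok, strlen(tok)))
		return false;		/* must start with the token */
	p = pmu_name + strlen(tok);
	if (*p == 0)
		return true;		/* exact match: "uncore_cbox" */
	if (*p != '_')
		return false;		/* "uncore_cboxfoo" is rejected */
	++p;
	return *p != 0 && isdigit((unsigned char)*p);	/* "uncore_cbox_2" */
}

int main(void)
{
	printf("%d %d %d\n",
	       valid_suffix("uncore_cbox", "uncore_cbox"),	/* 1 */
	       valid_suffix("uncore_cbox_2", "uncore_cbox"),	/* 1 */
	       valid_suffix("uncore_cboxfoo", "uncore_cbox"));	/* 0 */
	return 0;
}
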
+diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
+index a790ef758171f..926da483a141c 100644
+--- a/tools/perf/util/pmu.h
++++ b/tools/perf/util/pmu.h
+@@ -133,5 +133,6 @@ void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config,
+ 				   char *name);
+ 
+ bool perf_pmu__has_hybrid(void);
++int perf_pmu__match(char *pattern, char *name, char *tok);
+ 
+ #endif /* __PMU_H */
+diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
+index 3dfc543327af8..18dbd9cddda8b 100644
+--- a/tools/perf/util/scripting-engines/trace-event-python.c
++++ b/tools/perf/util/scripting-engines/trace-event-python.c
+@@ -687,7 +687,7 @@ static void set_sample_datasrc_in_dict(PyObject *dict,
+ 			_PyUnicode_FromString(decode));
+ }
+ 
+-static int regs_map(struct regs_dump *regs, uint64_t mask, char *bf, int size)
++static void regs_map(struct regs_dump *regs, uint64_t mask, char *bf, int size)
+ {
+ 	unsigned int i = 0, r;
+ 	int printed = 0;
+@@ -695,7 +695,7 @@ static int regs_map(struct regs_dump *regs, uint64_t mask, char *bf, int size)
+ 	bf[0] = 0;
+ 
+ 	if (!regs || !regs->regs)
+-		return 0;
++		return;
+ 
+ 	for_each_set_bit(r, (unsigned long *) &mask, sizeof(mask) * 8) {
+ 		u64 val = regs->regs[i++];
+@@ -704,8 +704,6 @@ static int regs_map(struct regs_dump *regs, uint64_t mask, char *bf, int size)
+ 				     "%5s:0x%" PRIx64 " ",
+ 				     perf_reg_name(r), val);
+ 	}
+-
+-	return printed;
+ }
+ 
+ static void set_regs_in_dict(PyObject *dict,
+@@ -713,7 +711,16 @@ static void set_regs_in_dict(PyObject *dict,
+ 			     struct evsel *evsel)
+ {
+ 	struct perf_event_attr *attr = &evsel->core.attr;
+-	char bf[512];
++
++	/*
++	 * Here the value 28 is a constant size sufficient to print one
++	 * register value; it corresponds to:
++	 * 16 chars to print a 64-bit register in hexadecimal,
++	 * 2 chars for the leading "0x" of the hexadecimal value, and
++	 * 10 chars for the register name.
++	 */
++	int size = __sw_hweight64(attr->sample_regs_intr) * 28;
++	char bf[size];
+ 
+ 	regs_map(&sample->intr_regs, attr->sample_regs_intr, bf, sizeof(bf));
+ 
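The variable-length array replaces a fixed char bf[512], which could overflow when many interrupt registers are sampled. The 28-bytes-per-register arithmetic can be checked in isolation; this sketch reimplements the popcount step (__sw_hweight64 in the kernel) so it runs standalone:

#include <stdint.h>
#include <stdio.h>

/* Counts set bits, standing in for the kernel's __sw_hweight64(). */
static int popcount64(uint64_t mask)
{
	int n = 0;

	while (mask) {
		mask &= mask - 1;	/* clear the lowest set bit */
		n++;
	}
	return n;
}

int main(void)
{
	uint64_t sample_regs_intr = 0xff;	/* hypothetical 8-register mask */
	int size = popcount64(sample_regs_intr) * 28;	/* 16 hex + "0x" + name budget */

	printf("bf needs %d bytes\n", size);	/* prints 224 */
	return 0;
}
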
+diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
+index d8812f27648ca..d31f54ac4e982 100644
+--- a/tools/testing/selftests/kvm/set_memory_region_test.c
++++ b/tools/testing/selftests/kvm/set_memory_region_test.c
+@@ -377,7 +377,8 @@ static void test_add_max_memory_regions(void)
+ 		(max_mem_slots - 1), MEM_REGION_SIZE >> 10);
+ 
+ 	mem = mmap(NULL, (size_t)max_mem_slots * MEM_REGION_SIZE + alignment,
+-		   PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
++		   PROT_READ | PROT_WRITE,
++		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
+ 	TEST_ASSERT(mem != MAP_FAILED, "Failed to mmap() host");
+ 	mem_aligned = (void *)(((size_t) mem + alignment - 1) & ~(alignment - 1));
+ 
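MAP_NORESERVE matters here because the test maps max_mem_slots regions in one anonymous mapping, which can exceed available RAM plus swap; without the flag, overcommit accounting may fail the mmap() before the test even runs. A minimal standalone illustration follows — the 16 GiB size is an arbitrary assumption, and the sketch assumes a 64-bit host and the default heuristic overcommit mode, under which MAP_NORESERVE is honored.

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = (size_t)16 << 30;	/* 16 GiB, likely above RAM + swap */
	void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

	if (mem == MAP_FAILED)
		perror("mmap");	/* without MAP_NORESERVE this is far more likely */
	else
		munmap(mem, len);
	return 0;
}
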
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/no_handler_test.c b/tools/testing/selftests/powerpc/pmu/ebb/no_handler_test.c
+index fc5bf4870d8e6..01e827c31169d 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/no_handler_test.c
++++ b/tools/testing/selftests/powerpc/pmu/ebb/no_handler_test.c
+@@ -50,8 +50,6 @@ static int no_handler_test(void)
+ 
+ 	event_close(&event);
+ 
+-	dump_ebb_state();
+-
+ 	/* The real test is that we never took an EBB at 0x0 */
+ 
+ 	return 0;
+diff --git a/tools/testing/selftests/timers/rtcpie.c b/tools/testing/selftests/timers/rtcpie.c
+index 47b5bad1b3933..4ef2184f15588 100644
+--- a/tools/testing/selftests/timers/rtcpie.c
++++ b/tools/testing/selftests/timers/rtcpie.c
+@@ -18,6 +18,8 @@
+ #include <stdlib.h>
+ #include <errno.h>
+ 
++#include "../kselftest.h"
++
+ /*
+  * This expects the new RTC class driver framework, working with
+  * clocks that will often not be clones of what the PC-AT had.
+@@ -35,8 +37,14 @@ int main(int argc, char **argv)
+ 	switch (argc) {
+ 	case 2:
+ 		rtc = argv[1];
+-		/* FALLTHROUGH */
++		break;
+ 	case 1:
++		fd = open(default_rtc, O_RDONLY);
++		if (fd == -1) {
++			printf("Default RTC %s does not exist. Test Skipped!\n", default_rtc);
++			exit(KSFT_SKIP);
++		}
++		close(fd);
+ 		break;
+ 	default:
+ 		fprintf(stderr, "usage:  rtctest [rtcdev] [d]\n");
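
The dropped fallthrough used to make the no-argument case reuse the argument-handling path; the replacement probes the default RTC first and exits with the kselftest skip code when the device is absent, so CI hosts without an RTC report a skip rather than a failure. A standalone sketch of that probe (KSFT_SKIP is 4 in kselftest.h):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define KSFT_SKIP 4	/* kselftest exit code meaning "skipped" */

int main(void)
{
	const char *default_rtc = "/dev/rtc0";
	int fd = open(default_rtc, O_RDONLY);

	if (fd == -1) {
		printf("Default RTC %s does not exist. Test Skipped!\n", default_rtc);
		exit(KSFT_SKIP);
	}
	close(fd);
	printf("RTC present; the real test would continue here\n");
	return 0;
}
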
+diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
+index f08f5e82460b1..0be80c213f7f2 100644
+--- a/virt/kvm/coalesced_mmio.c
++++ b/virt/kvm/coalesced_mmio.c
+@@ -186,7 +186,6 @@ int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
+ 		    coalesced_mmio_in_range(dev, zone->addr, zone->size)) {
+ 			r = kvm_io_bus_unregister_dev(kvm,
+ 				zone->pio ? KVM_PIO_BUS : KVM_MMIO_BUS, &dev->dev);
+-			kvm_iodevice_destructor(&dev->dev);
+ 
+ 			/*
+ 			 * On failure, unregister destroys all devices on the
+@@ -196,6 +195,7 @@ int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
+ 			 */
+ 			if (r)
+ 				break;
++			kvm_iodevice_destructor(&dev->dev);
+ 		}
+ 	}
+ 
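The hunk moves kvm_iodevice_destructor() below the error check: on failure, kvm_io_bus_unregister_dev() already destroys every device on the bus, so destroying first and then hitting the failure path would free the device twice. A generic standalone sketch of the ordering rule, using hypothetical names rather than KVM's types:

#include <stdio.h>
#include <stdlib.h>

struct dev { int id; };	/* hypothetical stand-in for the KVM device type */

/* On failure this tears the device down itself, like kvm_io_bus_unregister_dev(). */
static int bus_unregister_dev(struct dev *d, int simulate_failure)
{
	if (simulate_failure) {
		free(d);
		return -1;
	}
	return 0;
}

static int unregister_zone(struct dev *d, int simulate_failure)
{
	int r = bus_unregister_dev(d, simulate_failure);

	if (r)
		return r;	/* d is already gone: freeing here would be a double free */
	free(d);		/* destructor runs only once unregister has succeeded */
	return 0;
}

int main(void)
{
	struct dev *d = malloc(sizeof(*d));

	if (!d)
		return 1;
	printf("unregister: %d\n", unregister_zone(d, 0));
	return 0;
}
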



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-25 17:25 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-07-25 17:25 UTC (permalink / raw
  To: gentoo-commits

commit:     361b0a55c6e9c46bd6ef1a7d7742d31354b84cb7
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 25 17:25:27 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jul 25 17:25:27 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=361b0a55

Linux patch 5.13.5

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1004_linux-5.13.5.patch | 5954 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5958 insertions(+)

diff --git a/0000_README b/0000_README
index 1873ca7..4c49407 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-5.13.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.4
 
+Patch:  1004_linux-5.13.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-5.13.5.patch b/1004_linux-5.13.5.patch
new file mode 100644
index 0000000..2d98737
--- /dev/null
+++ b/1004_linux-5.13.5.patch
@@ -0,0 +1,5954 @@
+diff --git a/Makefile b/Makefile
+index 975acb16046d2..41be12f806e0d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+@@ -721,11 +721,12 @@ $(KCONFIG_CONFIG):
+ # This exploits the 'multi-target pattern rule' trick.
+ # The syncconfig should be executed only once to make all the targets.
+ # (Note: use the grouped target '&:' when we bump to GNU Make 4.3)
+-quiet_cmd_syncconfig = SYNC    $@
+-      cmd_syncconfig = $(MAKE) -f $(srctree)/Makefile syncconfig
+-
++#
++# Do not use $(call cmd,...) here. That would suppress prompts from syncconfig,
++# so you cannot notice that Kconfig is waiting for user input.
+ %/config/auto.conf %/config/auto.conf.cmd %/generated/autoconf.h: $(KCONFIG_CONFIG)
+-	+$(call cmd,syncconfig)
++	$(Q)$(kecho) "  SYNC    $@"
++	$(Q)$(MAKE) -f $(srctree)/Makefile syncconfig
+ else # !may-sync-config
+ # External modules and some install targets need include/generated/autoconf.h
+ # and include/config/auto.conf but do not care if they are up-to-date.
+diff --git a/arch/arm/boot/dts/am335x-baltos.dtsi b/arch/arm/boot/dts/am335x-baltos.dtsi
+index 3ea2861803825..1103a2cb836fb 100644
+--- a/arch/arm/boot/dts/am335x-baltos.dtsi
++++ b/arch/arm/boot/dts/am335x-baltos.dtsi
+@@ -393,10 +393,10 @@
+ 	status = "okay";
+ };
+ 
+-&gpio0 {
++&gpio0_target {
+ 	ti,no-reset-on-init;
+ };
+ 
+-&gpio3 {
++&gpio3_target {
+ 	ti,no-reset-on-init;
+ };
+diff --git a/arch/arm/boot/dts/am335x-evmsk.dts b/arch/arm/boot/dts/am335x-evmsk.dts
+index d5f8d5e2eb5d2..45bf0273ecd85 100644
+--- a/arch/arm/boot/dts/am335x-evmsk.dts
++++ b/arch/arm/boot/dts/am335x-evmsk.dts
+@@ -646,7 +646,7 @@
+ 	status = "okay";
+ };
+ 
+-&gpio0 {
++&gpio0_target {
+ 	ti,no-reset-on-init;
+ };
+ 
+diff --git a/arch/arm/boot/dts/am335x-moxa-uc-2100-common.dtsi b/arch/arm/boot/dts/am335x-moxa-uc-2100-common.dtsi
+index 4e90f9c23d2e5..8121a199607cc 100644
+--- a/arch/arm/boot/dts/am335x-moxa-uc-2100-common.dtsi
++++ b/arch/arm/boot/dts/am335x-moxa-uc-2100-common.dtsi
+@@ -150,7 +150,7 @@
+ 	status = "okay";
+ };
+ 
+-&gpio0 {
++&gpio0_target {
+ 	ti,no-reset-on-init;
+ };
+ 
+diff --git a/arch/arm/boot/dts/am335x-moxa-uc-8100-common.dtsi b/arch/arm/boot/dts/am335x-moxa-uc-8100-common.dtsi
+index 98d8ed4ad9677..39e5d2ce600a1 100644
+--- a/arch/arm/boot/dts/am335x-moxa-uc-8100-common.dtsi
++++ b/arch/arm/boot/dts/am335x-moxa-uc-8100-common.dtsi
+@@ -353,7 +353,7 @@
+ 	status = "okay";
+ };
+ 
+-&gpio0 {
++&gpio0_target {
+ 	ti,no-reset-on-init;
+ };
+ 
+diff --git a/arch/arm/boot/dts/am33xx-l4.dtsi b/arch/arm/boot/dts/am33xx-l4.dtsi
+index 039a9ab4c7eaa..dcce5e3e001eb 100644
+--- a/arch/arm/boot/dts/am33xx-l4.dtsi
++++ b/arch/arm/boot/dts/am33xx-l4.dtsi
+@@ -1789,7 +1789,7 @@
+ 			};
+ 		};
+ 
+-		target-module@ae000 {			/* 0x481ae000, ap 56 3a.0 */
++		gpio3_target: target-module@ae000 {		/* 0x481ae000, ap 56 3a.0 */
+ 			compatible = "ti,sysc-omap2", "ti,sysc";
+ 			reg = <0xae000 0x4>,
+ 			      <0xae010 0x4>,
+diff --git a/arch/arm/boot/dts/am437x-gp-evm.dts b/arch/arm/boot/dts/am437x-gp-evm.dts
+index 6e4d05d649e98..033b984ff637d 100644
+--- a/arch/arm/boot/dts/am437x-gp-evm.dts
++++ b/arch/arm/boot/dts/am437x-gp-evm.dts
+@@ -813,11 +813,14 @@
+ 	status = "okay";
+ };
+ 
++&gpio5_target {
++	ti,no-reset-on-init;
++};
++
+ &gpio5 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&display_mux_pins>;
+ 	status = "okay";
+-	ti,no-reset-on-init;
+ 
+ 	p8 {
+ 		/*
+diff --git a/arch/arm/boot/dts/am437x-l4.dtsi b/arch/arm/boot/dts/am437x-l4.dtsi
+index e217ffc097705..a6f19ae7d3e6b 100644
+--- a/arch/arm/boot/dts/am437x-l4.dtsi
++++ b/arch/arm/boot/dts/am437x-l4.dtsi
+@@ -2070,7 +2070,7 @@
+ 			};
+ 		};
+ 
+-		target-module@22000 {			/* 0x48322000, ap 116 64.0 */
++		gpio5_target: target-module@22000 {		/* 0x48322000, ap 116 64.0 */
+ 			compatible = "ti,sysc-omap2", "ti,sysc";
+ 			reg = <0x22000 0x4>,
+ 			      <0x22010 0x4>,
+diff --git a/arch/arm/boot/dts/am57xx-cl-som-am57x.dts b/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
+index 0d5fe2bfb683a..aed81568a297d 100644
+--- a/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
++++ b/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
+@@ -454,20 +454,20 @@
+ 
+ &mailbox5 {
+ 	status = "okay";
+-	mbox_ipu1_ipc3x: mbox_ipu1_ipc3x {
++	mbox_ipu1_ipc3x: mbox-ipu1-ipc3x {
+ 		status = "okay";
+ 	};
+-	mbox_dsp1_ipc3x: mbox_dsp1_ipc3x {
++	mbox_dsp1_ipc3x: mbox-dsp1-ipc3x {
+ 		status = "okay";
+ 	};
+ };
+ 
+ &mailbox6 {
+ 	status = "okay";
+-	mbox_ipu2_ipc3x: mbox_ipu2_ipc3x {
++	mbox_ipu2_ipc3x: mbox-ipu2-ipc3x {
+ 		status = "okay";
+ 	};
+-	mbox_dsp2_ipc3x: mbox_dsp2_ipc3x {
++	mbox_dsp2_ipc3x: mbox-dsp2-ipc3x {
+ 		status = "okay";
+ 	};
+ };
+@@ -610,12 +610,11 @@
+ 	>;
+ };
+ 
+-&gpio3 {
+-	status = "okay";
++&gpio3_target {
+ 	ti,no-reset-on-init;
+ };
+ 
+-&gpio2 {
++&gpio2_target {
+ 	status = "okay";
+ 	ti,no-reset-on-init;
+ };
+diff --git a/arch/arm/boot/dts/aspeed-bmc-ibm-everest.dts b/arch/arm/boot/dts/aspeed-bmc-ibm-everest.dts
+index 3295c8c7c05c8..c619047f6bb1e 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-ibm-everest.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-ibm-everest.dts
+@@ -353,10 +353,47 @@
+ 
+ &i2c1 {
+ 	status = "okay";
++};
++
++&i2c2 {
++	status = "okay";
++};
+ 
+-	pca2: pca9552@61 {
++&i2c3 {
++	status = "okay";
++
++	eeprom@54 {
++		compatible = "atmel,24c128";
++		reg = <0x54>;
++	};
++
++	power-supply@68 {
++		compatible = "ibm,cffps";
++		reg = <0x68>;
++	};
++
++	power-supply@69 {
++		compatible = "ibm,cffps";
++		reg = <0x69>;
++	};
++
++	power-supply@6a {
++		compatible = "ibm,cffps";
++		reg = <0x6a>;
++	};
++
++	power-supply@6b {
++		compatible = "ibm,cffps";
++		reg = <0x6b>;
++	};
++};
++
++&i2c4 {
++	status = "okay";
++
++	pca2: pca9552@65 {
+ 		compatible = "nxp,pca9552";
+-		reg = <0x61>;
++		reg = <0x65>;
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+@@ -424,12 +461,54 @@
+ 			reg = <9>;
+ 			type = <PCA955X_TYPE_GPIO>;
+ 		};
++	};
+ 
++	i2c-switch@70 {
++		compatible = "nxp,pca9546";
++		reg = <0x70>;
++		#address-cells = <1>;
++		#size-cells = <0>;
++		status = "okay";
++		i2c-mux-idle-disconnect;
++
++		i2c4mux0chn0: i2c@0 {
++			#address-cells = <1>;
++			#size-cells = <0>;
++			reg = <0>;
++			eeprom@52 {
++				compatible = "atmel,24c64";
++				reg = <0x52>;
++			};
++		};
++
++		i2c4mux0chn1: i2c@1 {
++			#address-cells = <1>;
++			#size-cells = <0>;
++			reg = <1>;
++			eeprom@50 {
++				compatible = "atmel,24c64";
++				reg = <0x50>;
++			};
++		};
++
++		i2c4mux0chn2: i2c@2 {
++			#address-cells = <1>;
++			#size-cells = <0>;
++			reg = <2>;
++			eeprom@51 {
++				compatible = "atmel,24c64";
++				reg = <0x51>;
++			};
++		};
+ 	};
++};
+ 
+-	pca3: pca9552@62 {
++&i2c5 {
++	status = "okay";
++
++	pca3: pca9552@66 {
+ 		compatible = "nxp,pca9552";
+-		reg = <0x62>;
++		reg = <0x66>;
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+@@ -512,87 +591,6 @@
+ 
+ 	};
+ 
+-};
+-
+-&i2c2 {
+-	status = "okay";
+-};
+-
+-&i2c3 {
+-	status = "okay";
+-
+-	eeprom@54 {
+-		compatible = "atmel,24c128";
+-		reg = <0x54>;
+-	};
+-
+-	power-supply@68 {
+-		compatible = "ibm,cffps";
+-		reg = <0x68>;
+-	};
+-
+-	power-supply@69 {
+-		compatible = "ibm,cffps";
+-		reg = <0x69>;
+-	};
+-
+-	power-supply@6a {
+-		compatible = "ibm,cffps";
+-		reg = <0x6a>;
+-	};
+-
+-	power-supply@6b {
+-		compatible = "ibm,cffps";
+-		reg = <0x6b>;
+-	};
+-};
+-
+-&i2c4 {
+-	status = "okay";
+-
+-	i2c-switch@70 {
+-		compatible = "nxp,pca9546";
+-		reg = <0x70>;
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+-		status = "okay";
+-		i2c-mux-idle-disconnect;
+-
+-		i2c4mux0chn0: i2c@0 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			reg = <0>;
+-			eeprom@52 {
+-				compatible = "atmel,24c64";
+-				reg = <0x52>;
+-			};
+-		};
+-
+-		i2c4mux0chn1: i2c@1 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			reg = <1>;
+-			eeprom@50 {
+-				compatible = "atmel,24c64";
+-				reg = <0x50>;
+-			};
+-		};
+-
+-		i2c4mux0chn2: i2c@2 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+-			reg = <2>;
+-			eeprom@51 {
+-				compatible = "atmel,24c64";
+-				reg = <0x51>;
+-			};
+-		};
+-	};
+-};
+-
+-&i2c5 {
+-	status = "okay";
+-
+ 	i2c-switch@70 {
+ 		compatible = "nxp,pca9546";
+ 		reg = <0x70>;
+@@ -1070,6 +1068,7 @@
+ 
+ &emmc {
+ 	status = "okay";
++	clk-phase-mmc-hs200 = <180>, <180>;
+ };
+ 
+ &fsim0 {
+diff --git a/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts b/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
+index 941c0489479ac..481d0ee1f85fb 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
+@@ -280,10 +280,7 @@
+ 	/*W0-W7*/	"","","","","","","","",
+ 	/*X0-X7*/	"","","","","","","","",
+ 	/*Y0-Y7*/	"","","","","","","","",
+-	/*Z0-Z7*/	"","","","","","","","",
+-	/*AA0-AA7*/	"","","","","","","","",
+-	/*AB0-AB7*/	"","","","","","","","",
+-	/*AC0-AC7*/	"","","","","","","","";
++	/*Z0-Z7*/	"","","","","","","","";
+ 
+ 	pin_mclr_vpp {
+ 		gpio-hog;
+diff --git a/arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts b/arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts
+index c1478d2db6026..ebe69cbd6609b 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts
+@@ -136,10 +136,7 @@
+ 	/*W0-W7*/	"","","","","","","","",
+ 	/*X0-X7*/	"","","","","","","","",
+ 	/*Y0-Y7*/	"","","","","","","","",
+-	/*Z0-Z7*/	"","","","","","","","",
+-	/*AA0-AA7*/	"","","","","","","","",
+-	/*AB0-AB7*/	"","","","","","","","",
+-	/*AC0-AC7*/	"","","","","","","","";
++	/*Z0-Z7*/	"","","","","","","","";
+ };
+ 
+ &fmc {
+@@ -189,6 +186,7 @@
+ 
+ &emmc {
+ 	status = "okay";
++	clk-phase-mmc-hs200 = <36>, <270>;
+ };
+ 
+ &fsim0 {
+diff --git a/arch/arm/boot/dts/bcm-cygnus.dtsi b/arch/arm/boot/dts/bcm-cygnus.dtsi
+index 0025c88f660cd..8ecb7861ce106 100644
+--- a/arch/arm/boot/dts/bcm-cygnus.dtsi
++++ b/arch/arm/boot/dts/bcm-cygnus.dtsi
+@@ -460,7 +460,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		nand: nand@18046000 {
++		nand_controller: nand-controller@18046000 {
+ 			compatible = "brcm,nand-iproc", "brcm,brcmnand-v6.1";
+ 			reg = <0x18046000 0x600>, <0xf8105408 0x600>,
+ 			      <0x18046f00 0x20>;
+diff --git a/arch/arm/boot/dts/bcm-hr2.dtsi b/arch/arm/boot/dts/bcm-hr2.dtsi
+index e8df458aad392..84cda16f68a23 100644
+--- a/arch/arm/boot/dts/bcm-hr2.dtsi
++++ b/arch/arm/boot/dts/bcm-hr2.dtsi
+@@ -179,7 +179,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		nand: nand@26000 {
++		nand_controller: nand-controller@26000 {
+ 			compatible = "brcm,nand-iproc", "brcm,brcmnand-v6.1";
+ 			reg = <0x26000 0x600>,
+ 			      <0x11b408 0x600>,
+diff --git a/arch/arm/boot/dts/bcm-nsp.dtsi b/arch/arm/boot/dts/bcm-nsp.dtsi
+index b4d2cc70afb19..748df7955ae67 100644
+--- a/arch/arm/boot/dts/bcm-nsp.dtsi
++++ b/arch/arm/boot/dts/bcm-nsp.dtsi
+@@ -269,7 +269,7 @@
+ 			dma-coherent;
+ 		};
+ 
+-		nand: nand@26000 {
++		nand_controller: nand-controller@26000 {
+ 			compatible = "brcm,nand-iproc", "brcm,brcmnand-v6.1";
+ 			reg = <0x026000 0x600>,
+ 			      <0x11b408 0x600>,
+diff --git a/arch/arm/boot/dts/bcm2711-rpi-4-b.dts b/arch/arm/boot/dts/bcm2711-rpi-4-b.dts
+index 3b4ab947492ac..27d2f859adfca 100644
+--- a/arch/arm/boot/dts/bcm2711-rpi-4-b.dts
++++ b/arch/arm/boot/dts/bcm2711-rpi-4-b.dts
+@@ -29,11 +29,11 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 42 GPIO_ACTIVE_HIGH>;
+ 		};
+ 
+-		pwr {
++		led-pwr {
+ 			label = "PWR";
+ 			gpios = <&expgpio 2 GPIO_ACTIVE_LOW>;
+ 			default-state = "keep";
+diff --git a/arch/arm/boot/dts/bcm2711.dtsi b/arch/arm/boot/dts/bcm2711.dtsi
+index 720beec54d617..d872064db7616 100644
+--- a/arch/arm/boot/dts/bcm2711.dtsi
++++ b/arch/arm/boot/dts/bcm2711.dtsi
+@@ -413,7 +413,7 @@
+ 		ranges = <0x0 0x7e000000  0x0 0xfe000000  0x01800000>;
+ 		dma-ranges = <0x0 0xc0000000  0x0 0x00000000  0x40000000>;
+ 
+-		emmc2: emmc2@7e340000 {
++		emmc2: mmc@7e340000 {
+ 			compatible = "brcm,bcm2711-emmc2";
+ 			reg = <0x0 0x7e340000 0x100>;
+ 			interrupts = <GIC_SPI 126 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts b/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts
+index 6c8ce39833bf6..40b9405f1a8e4 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts
+@@ -14,11 +14,11 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 47 GPIO_ACTIVE_HIGH>;
+ 		};
+ 
+-		pwr {
++		led-pwr {
+ 			label = "PWR";
+ 			gpios = <&gpio 35 GPIO_ACTIVE_HIGH>;
+ 			default-state = "keep";
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-a.dts b/arch/arm/boot/dts/bcm2835-rpi-a.dts
+index 17fdd48346ffb..11edb581dbaf0 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-a.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-a.dts
+@@ -14,7 +14,7 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 16 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts b/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts
+index b0355c229cdc2..1b435c64bd9c1 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts
+@@ -15,11 +15,11 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 47 GPIO_ACTIVE_HIGH>;
+ 		};
+ 
+-		pwr {
++		led-pwr {
+ 			label = "PWR";
+ 			gpios = <&gpio 35 GPIO_ACTIVE_HIGH>;
+ 			default-state = "keep";
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts b/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
+index 33b3b5c025219..a23c25c00eea7 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
+@@ -15,7 +15,7 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 16 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-b.dts b/arch/arm/boot/dts/bcm2835-rpi-b.dts
+index 2b69957e0113e..1b63d6b19750b 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-b.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-b.dts
+@@ -15,7 +15,7 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 16 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-cm1.dtsi b/arch/arm/boot/dts/bcm2835-rpi-cm1.dtsi
+index 58059c2ce1294..e4e6b6abbfc13 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-cm1.dtsi
++++ b/arch/arm/boot/dts/bcm2835-rpi-cm1.dtsi
+@@ -5,7 +5,7 @@
+ 
+ / {
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 47 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
+index f65448c01e317..33b2b77aa47db 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
+@@ -23,7 +23,7 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 47 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-zero.dts b/arch/arm/boot/dts/bcm2835-rpi-zero.dts
+index 6dd93c6f49666..6f9b3a908f287 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-zero.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-zero.dts
+@@ -18,7 +18,7 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 47 GPIO_ACTIVE_HIGH>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2835-rpi.dtsi b/arch/arm/boot/dts/bcm2835-rpi.dtsi
+index d94357b21f7e9..87ddcad760834 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi.dtsi
++++ b/arch/arm/boot/dts/bcm2835-rpi.dtsi
+@@ -4,7 +4,7 @@
+ 	leds {
+ 		compatible = "gpio-leds";
+ 
+-		act {
++		led-act {
+ 			label = "ACT";
+ 			default-state = "keep";
+ 			linux,default-trigger = "heartbeat";
+diff --git a/arch/arm/boot/dts/bcm2836-rpi-2-b.dts b/arch/arm/boot/dts/bcm2836-rpi-2-b.dts
+index 0455a680394a2..d8af8eeac7b6f 100644
+--- a/arch/arm/boot/dts/bcm2836-rpi-2-b.dts
++++ b/arch/arm/boot/dts/bcm2836-rpi-2-b.dts
+@@ -15,11 +15,11 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 47 GPIO_ACTIVE_HIGH>;
+ 		};
+ 
+-		pwr {
++		led-pwr {
+ 			label = "PWR";
+ 			gpios = <&gpio 35 GPIO_ACTIVE_HIGH>;
+ 			default-state = "keep";
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-3-a-plus.dts b/arch/arm/boot/dts/bcm2837-rpi-3-a-plus.dts
+index 28be0332c1c81..77099a7871b03 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-3-a-plus.dts
++++ b/arch/arm/boot/dts/bcm2837-rpi-3-a-plus.dts
+@@ -19,11 +19,11 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 29 GPIO_ACTIVE_HIGH>;
+ 		};
+ 
+-		pwr {
++		led-pwr {
+ 			label = "PWR";
+ 			gpios = <&expgpio 2 GPIO_ACTIVE_LOW>;
+ 			default-state = "keep";
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
+index 37343148643db..61010266ca9a3 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
++++ b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
+@@ -20,11 +20,11 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&gpio 29 GPIO_ACTIVE_HIGH>;
+ 		};
+ 
+-		pwr {
++		led-pwr {
+ 			label = "PWR";
+ 			gpios = <&expgpio 2 GPIO_ACTIVE_LOW>;
+ 			default-state = "keep";
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-3-b.dts b/arch/arm/boot/dts/bcm2837-rpi-3-b.dts
+index 054ecaa355c9a..dd4a486040971 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-3-b.dts
++++ b/arch/arm/boot/dts/bcm2837-rpi-3-b.dts
+@@ -20,7 +20,7 @@
+ 	};
+ 
+ 	leds {
+-		act {
++		led-act {
+ 			gpios = <&expgpio 2 GPIO_ACTIVE_HIGH>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi b/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi
+index 925cb37c22f06..828a20561b969 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi
++++ b/arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi
+@@ -14,7 +14,7 @@
+ 		 * Since there is no upstream GPIO driver yet,
+ 		 * remove the incomplete node.
+ 		 */
+-		/delete-node/ act;
++		/delete-node/ led-act;
+ 	};
+ 
+ 	reg_3v3: fixed-regulator {
+diff --git a/arch/arm/boot/dts/bcm283x.dtsi b/arch/arm/boot/dts/bcm283x.dtsi
+index b83a864e2e8ba..0f3be55201a5b 100644
+--- a/arch/arm/boot/dts/bcm283x.dtsi
++++ b/arch/arm/boot/dts/bcm283x.dtsi
+@@ -420,7 +420,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		sdhci: sdhci@7e300000 {
++		sdhci: mmc@7e300000 {
+ 			compatible = "brcm,bcm2835-sdhci";
+ 			reg = <0x7e300000 0x100>;
+ 			interrupts = <2 30>;
+diff --git a/arch/arm/boot/dts/bcm4708-luxul-xwc-1000.dts b/arch/arm/boot/dts/bcm4708-luxul-xwc-1000.dts
+index 8636600385fd4..c81944cd6d0bb 100644
+--- a/arch/arm/boot/dts/bcm4708-luxul-xwc-1000.dts
++++ b/arch/arm/boot/dts/bcm4708-luxul-xwc-1000.dts
+@@ -24,8 +24,8 @@
+ 		reg = <0x00000000 0x08000000>;
+ 	};
+ 
+-	nand: nand@18028000 {
+-		nandcs@0 {
++	nand_controller: nand-controller@18028000 {
++		nand@0 {
+ 			partitions {
+ 				compatible = "fixed-partitions";
+ 				#address-cells = <1>;
+diff --git a/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts b/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts
+index e635a15041dd8..a6e2aeb286754 100644
+--- a/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts
++++ b/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts
+@@ -25,8 +25,8 @@
+ 		      <0x88000000 0x08000000>;
+ 	};
+ 
+-	nand: nand@18028000 {
+-		nandcs@0 {
++	nand_controller: nand-controller@18028000 {
++		nand@0 {
+ 			partitions {
+ 				compatible = "fixed-partitions";
+ 				#address-cells = <1>;
+diff --git a/arch/arm/boot/dts/bcm47094.dtsi b/arch/arm/boot/dts/bcm47094.dtsi
+index 2a8f7312d1be1..6282363313e18 100644
+--- a/arch/arm/boot/dts/bcm47094.dtsi
++++ b/arch/arm/boot/dts/bcm47094.dtsi
+@@ -11,7 +11,7 @@
+ &pinctrl {
+ 	compatible = "brcm,bcm4709-pinmux";
+ 
+-	pinmux_mdio: mdio {
++	pinmux_mdio: mdio-pins {
+ 		groups = "mdio_grp";
+ 		function = "mdio";
+ 	};
+diff --git a/arch/arm/boot/dts/bcm5301x-nand-cs0.dtsi b/arch/arm/boot/dts/bcm5301x-nand-cs0.dtsi
+index 925a7c9ce5b7f..be9a00ff752d9 100644
+--- a/arch/arm/boot/dts/bcm5301x-nand-cs0.dtsi
++++ b/arch/arm/boot/dts/bcm5301x-nand-cs0.dtsi
+@@ -6,8 +6,8 @@
+  */
+ 
+ / {
+-	nand@18028000 {
+-		nandcs: nandcs@0 {
++	nand-controller@18028000 {
++		nandcs: nand@0 {
+ 			compatible = "brcm,nandcs";
+ 			reg = <0>;
+ 			#address-cells = <1>;
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index 86872e12c355e..f92089290ccd5 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -458,18 +458,18 @@
+ 					function = "spi";
+ 				};
+ 
+-				pinmux_i2c: i2c {
++				pinmux_i2c: i2c-pins {
+ 					groups = "i2c_grp";
+ 					function = "i2c";
+ 				};
+ 
+-				pinmux_pwm: pwm {
++				pinmux_pwm: pwm-pins {
+ 					groups = "pwm0_grp", "pwm1_grp",
+ 						 "pwm2_grp", "pwm3_grp";
+ 					function = "pwm";
+ 				};
+ 
+-				pinmux_uart1: uart1 {
++				pinmux_uart1: uart1-pins {
+ 					groups = "uart1_grp";
+ 					function = "uart1";
+ 				};
+@@ -501,7 +501,7 @@
+ 		reg = <0x18004000 0x14>;
+ 	};
+ 
+-	nand: nand@18028000 {
++	nand_controller: nand-controller@18028000 {
+ 		compatible = "brcm,nand-iproc", "brcm,brcmnand-v6.1", "brcm,brcmnand";
+ 		reg = <0x18028000 0x600>, <0x1811a408 0x600>, <0x18028f00 0x20>;
+ 		reg-names = "nand", "iproc-idm", "iproc-ext";
+diff --git a/arch/arm/boot/dts/bcm63138.dtsi b/arch/arm/boot/dts/bcm63138.dtsi
+index 9c0325cf9e22e..cca49a2e2d623 100644
+--- a/arch/arm/boot/dts/bcm63138.dtsi
++++ b/arch/arm/boot/dts/bcm63138.dtsi
+@@ -203,7 +203,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		nand: nand@2000 {
++		nand_controller: nand-controller@2000 {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			compatible = "brcm,nand-bcm63138", "brcm,brcmnand-v7.0", "brcm,brcmnand";
+diff --git a/arch/arm/boot/dts/bcm7445-bcm97445svmb.dts b/arch/arm/boot/dts/bcm7445-bcm97445svmb.dts
+index 8313b7cad5427..f92d2cf859726 100644
+--- a/arch/arm/boot/dts/bcm7445-bcm97445svmb.dts
++++ b/arch/arm/boot/dts/bcm7445-bcm97445svmb.dts
+@@ -14,10 +14,10 @@
+ 	};
+ };
+ 
+-&nand {
++&nand_controller {
+ 	status = "okay";
+ 
+-	nandcs@1 {
++	nand@1 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <1>;
+ 		nand-ecc-step-size = <512>;
+diff --git a/arch/arm/boot/dts/bcm7445.dtsi b/arch/arm/boot/dts/bcm7445.dtsi
+index 58f67c9b830b8..5ac2042515b8f 100644
+--- a/arch/arm/boot/dts/bcm7445.dtsi
++++ b/arch/arm/boot/dts/bcm7445.dtsi
+@@ -148,7 +148,7 @@
+ 			reg-names = "aon-ctrl", "aon-sram";
+ 		};
+ 
+-		nand: nand@3e2800 {
++		nand_controller: nand-controller@3e2800 {
+ 			status = "disabled";
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+diff --git a/arch/arm/boot/dts/bcm911360_entphn.dts b/arch/arm/boot/dts/bcm911360_entphn.dts
+index b2d323f4a5aba..a76c74b44bbaf 100644
+--- a/arch/arm/boot/dts/bcm911360_entphn.dts
++++ b/arch/arm/boot/dts/bcm911360_entphn.dts
+@@ -82,8 +82,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@1 {
++&nand_controller {
++	nand@1 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm953012k.dts b/arch/arm/boot/dts/bcm953012k.dts
+index 046c59fb48462..de40bd59a5faf 100644
+--- a/arch/arm/boot/dts/bcm953012k.dts
++++ b/arch/arm/boot/dts/bcm953012k.dts
+@@ -49,8 +49,8 @@
+ 	};
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958300k.dts b/arch/arm/boot/dts/bcm958300k.dts
+index b4a1392bd5a6c..dda3e11b711f6 100644
+--- a/arch/arm/boot/dts/bcm958300k.dts
++++ b/arch/arm/boot/dts/bcm958300k.dts
+@@ -60,8 +60,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@1 {
++&nand_controller {
++	nand@1 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958305k.dts b/arch/arm/boot/dts/bcm958305k.dts
+index 3378683321d3c..ea3c6b88b313b 100644
+--- a/arch/arm/boot/dts/bcm958305k.dts
++++ b/arch/arm/boot/dts/bcm958305k.dts
+@@ -68,8 +68,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@1 {
++&nand_controller {
++	nand@1 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958522er.dts b/arch/arm/boot/dts/bcm958522er.dts
+index 5443fc079e6e4..1f73885ec274a 100644
+--- a/arch/arm/boot/dts/bcm958522er.dts
++++ b/arch/arm/boot/dts/bcm958522er.dts
+@@ -74,8 +74,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958525er.dts b/arch/arm/boot/dts/bcm958525er.dts
+index e1e3c26cef190..b6b9ca8b09726 100644
+--- a/arch/arm/boot/dts/bcm958525er.dts
++++ b/arch/arm/boot/dts/bcm958525er.dts
+@@ -74,8 +74,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958525xmc.dts b/arch/arm/boot/dts/bcm958525xmc.dts
+index f161ba2e7e5e1..ecf426f6ad5d5 100644
+--- a/arch/arm/boot/dts/bcm958525xmc.dts
++++ b/arch/arm/boot/dts/bcm958525xmc.dts
+@@ -90,8 +90,8 @@
+ 	};
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958622hr.dts b/arch/arm/boot/dts/bcm958622hr.dts
+index 83cb877d63dbc..8ca18da981adf 100644
+--- a/arch/arm/boot/dts/bcm958622hr.dts
++++ b/arch/arm/boot/dts/bcm958622hr.dts
+@@ -78,8 +78,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958623hr.dts b/arch/arm/boot/dts/bcm958623hr.dts
+index 4e106ce1384a3..9747378db5314 100644
+--- a/arch/arm/boot/dts/bcm958623hr.dts
++++ b/arch/arm/boot/dts/bcm958623hr.dts
+@@ -78,8 +78,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958625hr.dts b/arch/arm/boot/dts/bcm958625hr.dts
+index cda6cc281e186..0f92b773afb8d 100644
+--- a/arch/arm/boot/dts/bcm958625hr.dts
++++ b/arch/arm/boot/dts/bcm958625hr.dts
+@@ -89,8 +89,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm958625k.dts b/arch/arm/boot/dts/bcm958625k.dts
+index ffbff0014c652..9e984ca0e6df4 100644
+--- a/arch/arm/boot/dts/bcm958625k.dts
++++ b/arch/arm/boot/dts/bcm958625k.dts
+@@ -68,8 +68,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/bcm963138dvt.dts b/arch/arm/boot/dts/bcm963138dvt.dts
+index 5b177274f1826..df5c8ab906273 100644
+--- a/arch/arm/boot/dts/bcm963138dvt.dts
++++ b/arch/arm/boot/dts/bcm963138dvt.dts
+@@ -31,10 +31,10 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
++&nand_controller {
+ 	status = "okay";
+ 
+-	nandcs@0 {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-ecc-strength = <4>;
+diff --git a/arch/arm/boot/dts/bcm988312hr.dts b/arch/arm/boot/dts/bcm988312hr.dts
+index 3fd39c479a3ce..5475dab8181d2 100644
+--- a/arch/arm/boot/dts/bcm988312hr.dts
++++ b/arch/arm/boot/dts/bcm988312hr.dts
+@@ -74,8 +74,8 @@
+ 	status = "okay";
+ };
+ 
+-&nand {
+-	nandcs@0 {
++&nand_controller {
++	nand@0 {
+ 		compatible = "brcm,nandcs";
+ 		reg = <0>;
+ 		nand-on-flash-bbt;
+diff --git a/arch/arm/boot/dts/dm816x.dtsi b/arch/arm/boot/dts/dm816x.dtsi
+index 3551a64963f8d..1825d912b8ab4 100644
+--- a/arch/arm/boot/dts/dm816x.dtsi
++++ b/arch/arm/boot/dts/dm816x.dtsi
+@@ -351,7 +351,7 @@
+ 			#mbox-cells = <1>;
+ 			ti,mbox-num-users = <4>;
+ 			ti,mbox-num-fifos = <12>;
+-			mbox_dsp: mbox_dsp {
++			mbox_dsp: mbox-dsp {
+ 				ti,mbox-tx = <3 0 0>;
+ 				ti,mbox-rx = <0 0 0>;
+ 			};
+diff --git a/arch/arm/boot/dts/dra7-ipu-dsp-common.dtsi b/arch/arm/boot/dts/dra7-ipu-dsp-common.dtsi
+index a25749a1c3659..a5bdc6431d8d0 100644
+--- a/arch/arm/boot/dts/dra7-ipu-dsp-common.dtsi
++++ b/arch/arm/boot/dts/dra7-ipu-dsp-common.dtsi
+@@ -5,17 +5,17 @@
+ 
+ &mailbox5 {
+ 	status = "okay";
+-	mbox_ipu1_ipc3x: mbox_ipu1_ipc3x {
++	mbox_ipu1_ipc3x: mbox-ipu1-ipc3x {
+ 		status = "okay";
+ 	};
+-	mbox_dsp1_ipc3x: mbox_dsp1_ipc3x {
++	mbox_dsp1_ipc3x: mbox-dsp1-ipc3x {
+ 		status = "okay";
+ 	};
+ };
+ 
+ &mailbox6 {
+ 	status = "okay";
+-	mbox_ipu2_ipc3x: mbox_ipu2_ipc3x {
++	mbox_ipu2_ipc3x: mbox-ipu2-ipc3x {
+ 		status = "okay";
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
+index 648d23f7f7481..06e26182eabf0 100644
+--- a/arch/arm/boot/dts/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/dra7-l4.dtsi
+@@ -1343,7 +1343,7 @@
+ 			};
+ 		};
+ 
+-		target-module@55000 {			/* 0x48055000, ap 13 0e.0 */
++		gpio2_target: target-module@55000 {		/* 0x48055000, ap 13 0e.0 */
+ 			compatible = "ti,sysc-omap2", "ti,sysc";
+ 			reg = <0x55000 0x4>,
+ 			      <0x55010 0x4>,
+@@ -1376,7 +1376,7 @@
+ 			};
+ 		};
+ 
+-		target-module@57000 {			/* 0x48057000, ap 15 06.0 */
++		gpio3_target: target-module@57000 {		/* 0x48057000, ap 15 06.0 */
+ 			compatible = "ti,sysc-omap2", "ti,sysc";
+ 			reg = <0x57000 0x4>,
+ 			      <0x57010 0x4>,
+diff --git a/arch/arm/boot/dts/dra72x.dtsi b/arch/arm/boot/dts/dra72x.dtsi
+index f3e934ef7d3e2..90617261373cf 100644
+--- a/arch/arm/boot/dts/dra72x.dtsi
++++ b/arch/arm/boot/dts/dra72x.dtsi
+@@ -77,12 +77,12 @@
+ };
+ 
+ &mailbox5 {
+-	mbox_ipu1_ipc3x: mbox_ipu1_ipc3x {
++	mbox_ipu1_ipc3x: mbox-ipu1-ipc3x {
+ 		ti,mbox-tx = <6 2 2>;
+ 		ti,mbox-rx = <4 2 2>;
+ 		status = "disabled";
+ 	};
+-	mbox_dsp1_ipc3x: mbox_dsp1_ipc3x {
++	mbox_dsp1_ipc3x: mbox-dsp1-ipc3x {
+ 		ti,mbox-tx = <5 2 2>;
+ 		ti,mbox-rx = <1 2 2>;
+ 		status = "disabled";
+@@ -90,7 +90,7 @@
+ };
+ 
+ &mailbox6 {
+-	mbox_ipu2_ipc3x: mbox_ipu2_ipc3x {
++	mbox_ipu2_ipc3x: mbox-ipu2-ipc3x {
+ 		ti,mbox-tx = <6 2 2>;
+ 		ti,mbox-rx = <4 2 2>;
+ 		status = "disabled";
+diff --git a/arch/arm/boot/dts/dra74-ipu-dsp-common.dtsi b/arch/arm/boot/dts/dra74-ipu-dsp-common.dtsi
+index b1147a4b77f9d..3256631510c56 100644
+--- a/arch/arm/boot/dts/dra74-ipu-dsp-common.dtsi
++++ b/arch/arm/boot/dts/dra74-ipu-dsp-common.dtsi
+@@ -6,7 +6,7 @@
+ #include "dra7-ipu-dsp-common.dtsi"
+ 
+ &mailbox6 {
+-	mbox_dsp2_ipc3x: mbox_dsp2_ipc3x {
++	mbox_dsp2_ipc3x: mbox-dsp2-ipc3x {
+ 		status = "okay";
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/dra74x.dtsi b/arch/arm/boot/dts/dra74x.dtsi
+index b4e07d99ffde1..cfb39dde49300 100644
+--- a/arch/arm/boot/dts/dra74x.dtsi
++++ b/arch/arm/boot/dts/dra74x.dtsi
+@@ -145,12 +145,12 @@
+ };
+ 
+ &mailbox5 {
+-	mbox_ipu1_ipc3x: mbox_ipu1_ipc3x {
++	mbox_ipu1_ipc3x: mbox-ipu1-ipc3x {
+ 		ti,mbox-tx = <6 2 2>;
+ 		ti,mbox-rx = <4 2 2>;
+ 		status = "disabled";
+ 	};
+-	mbox_dsp1_ipc3x: mbox_dsp1_ipc3x {
++	mbox_dsp1_ipc3x: mbox-dsp1-ipc3x {
+ 		ti,mbox-tx = <5 2 2>;
+ 		ti,mbox-rx = <1 2 2>;
+ 		status = "disabled";
+@@ -158,12 +158,12 @@
+ };
+ 
+ &mailbox6 {
+-	mbox_ipu2_ipc3x: mbox_ipu2_ipc3x {
++	mbox_ipu2_ipc3x: mbox-ipu2-ipc3x {
+ 		ti,mbox-tx = <6 2 2>;
+ 		ti,mbox-rx = <4 2 2>;
+ 		status = "disabled";
+ 	};
+-	mbox_dsp2_ipc3x: mbox_dsp2_ipc3x {
++	mbox_dsp2_ipc3x: mbox-dsp2-ipc3x {
+ 		ti,mbox-tx = <5 2 2>;
+ 		ti,mbox-rx = <1 2 2>;
+ 		status = "disabled";
+diff --git a/arch/arm/boot/dts/gemini-dlink-dns-313.dts b/arch/arm/boot/dts/gemini-dlink-dns-313.dts
+index c6f3d90e3e90c..b8acc6eaaa6d7 100644
+--- a/arch/arm/boot/dts/gemini-dlink-dns-313.dts
++++ b/arch/arm/boot/dts/gemini-dlink-dns-313.dts
+@@ -140,7 +140,7 @@
+ 		};
+ 	};
+ 
+-	mdio0: ethernet-phy {
++	mdio0: mdio {
+ 		compatible = "virtual,mdio-gpio";
+ 		/* Uses MDC and MDIO */
+ 		gpios = <&gpio0 22 GPIO_ACTIVE_HIGH>, /* MDC */
+diff --git a/arch/arm/boot/dts/gemini-nas4220b.dts b/arch/arm/boot/dts/gemini-nas4220b.dts
+index 43c45f7e1e0a3..13112a8a5dd88 100644
+--- a/arch/arm/boot/dts/gemini-nas4220b.dts
++++ b/arch/arm/boot/dts/gemini-nas4220b.dts
+@@ -62,7 +62,7 @@
+ 		};
+ 	};
+ 
+-	mdio0: ethernet-phy {
++	mdio0: mdio {
+ 		compatible = "virtual,mdio-gpio";
+ 		gpios = <&gpio0 22 GPIO_ACTIVE_HIGH>, /* MDC */
+ 			<&gpio0 21 GPIO_ACTIVE_HIGH>; /* MDIO */
+diff --git a/arch/arm/boot/dts/gemini-rut1xx.dts b/arch/arm/boot/dts/gemini-rut1xx.dts
+index 08091d2a64e15..0ebda4efd9d0f 100644
+--- a/arch/arm/boot/dts/gemini-rut1xx.dts
++++ b/arch/arm/boot/dts/gemini-rut1xx.dts
+@@ -56,7 +56,7 @@
+ 		};
+ 	};
+ 
+-	mdio0: ethernet-phy {
++	mdio0: mdio {
+ 		compatible = "virtual,mdio-gpio";
+ 		gpios = <&gpio0 22 GPIO_ACTIVE_HIGH>, /* MDC */
+ 			<&gpio0 21 GPIO_ACTIVE_HIGH>; /* MDIO */
+diff --git a/arch/arm/boot/dts/gemini-wbd111.dts b/arch/arm/boot/dts/gemini-wbd111.dts
+index 3a2761dd460f9..5602ba8f30f2f 100644
+--- a/arch/arm/boot/dts/gemini-wbd111.dts
++++ b/arch/arm/boot/dts/gemini-wbd111.dts
+@@ -68,7 +68,7 @@
+ 		};
+ 	};
+ 
+-	mdio0: ethernet-phy {
++	mdio0: mdio {
+ 		compatible = "virtual,mdio-gpio";
+ 		gpios = <&gpio0 22 GPIO_ACTIVE_HIGH>, /* MDC */
+ 			<&gpio0 21 GPIO_ACTIVE_HIGH>; /* MDIO */
+diff --git a/arch/arm/boot/dts/gemini-wbd222.dts b/arch/arm/boot/dts/gemini-wbd222.dts
+index 52b4dbc0c0723..a4a260c36d752 100644
+--- a/arch/arm/boot/dts/gemini-wbd222.dts
++++ b/arch/arm/boot/dts/gemini-wbd222.dts
+@@ -67,7 +67,7 @@
+ 		};
+ 	};
+ 
+-	mdio0: ethernet-phy {
++	mdio0: mdio {
+ 		compatible = "virtual,mdio-gpio";
+ 		gpios = <&gpio0 22 GPIO_ACTIVE_HIGH>, /* MDC */
+ 			<&gpio0 21 GPIO_ACTIVE_HIGH>; /* MDIO */
+diff --git a/arch/arm/boot/dts/gemini.dtsi b/arch/arm/boot/dts/gemini.dtsi
+index 065ed10a79fa7..07448c03dac9e 100644
+--- a/arch/arm/boot/dts/gemini.dtsi
++++ b/arch/arm/boot/dts/gemini.dtsi
+@@ -286,6 +286,7 @@
+ 			clock-names = "PCLK", "PCICLK";
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pci_default_pins>;
++			device_type = "pci";
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 			#interrupt-cells = <1>;
+diff --git a/arch/arm/boot/dts/imx6dl-riotboard.dts b/arch/arm/boot/dts/imx6dl-riotboard.dts
+index 065d3ab0f50a7..e7d9bfbfd0e4d 100644
+--- a/arch/arm/boot/dts/imx6dl-riotboard.dts
++++ b/arch/arm/boot/dts/imx6dl-riotboard.dts
+@@ -106,6 +106,8 @@
+ 			reset-gpios = <&gpio3 31 GPIO_ACTIVE_LOW>;
+ 			reset-assert-us = <10000>;
+ 			reset-deassert-us = <1000>;
++			qca,smarteee-tw-us-1g = <24>;
++			qca,clk-out-frequency = <125000000>;
+ 		};
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+index 7bd658b7bdda0..f3236204cb5a9 100644
+--- a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+@@ -322,8 +322,8 @@
+ 			fsl,pins = <
+ 				MX6QDL_PAD_EIM_D24__UART3_TX_DATA	0x1b0b1
+ 				MX6QDL_PAD_EIM_D25__UART3_RX_DATA	0x1b0b1
+-				MX6QDL_PAD_EIM_D30__UART3_RTS_B		0x1b0b1
+-				MX6QDL_PAD_EIM_D31__UART3_CTS_B		0x1b0b1
++				MX6QDL_PAD_EIM_D31__UART3_RTS_B		0x1b0b1
++				MX6QDL_PAD_EIM_D30__UART3_CTS_B		0x1b0b1
+ 			>;
+ 		};
+ 
+@@ -410,6 +410,7 @@
+ &uart3 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_uart3>;
++	uart-has-rtscts;
+ 	status = "disabled";
+ };
+ 
+diff --git a/arch/arm/boot/dts/omap4-l4.dtsi b/arch/arm/boot/dts/omap4-l4.dtsi
+index 99721673d7afd..46b8f9efd4131 100644
+--- a/arch/arm/boot/dts/omap4-l4.dtsi
++++ b/arch/arm/boot/dts/omap4-l4.dtsi
+@@ -600,11 +600,11 @@
+ 				#mbox-cells = <1>;
+ 				ti,mbox-num-users = <3>;
+ 				ti,mbox-num-fifos = <8>;
+-				mbox_ipu: mbox_ipu {
++				mbox_ipu: mbox-ipu {
+ 					ti,mbox-tx = <0 0 0>;
+ 					ti,mbox-rx = <1 0 0>;
+ 				};
+-				mbox_dsp: mbox_dsp {
++				mbox_dsp: mbox-dsp {
+ 					ti,mbox-tx = <3 0 0>;
+ 					ti,mbox-rx = <2 0 0>;
+ 				};
+diff --git a/arch/arm/boot/dts/omap5-l4.dtsi b/arch/arm/boot/dts/omap5-l4.dtsi
+index b148b289e8301..06cc3a19ddaa9 100644
+--- a/arch/arm/boot/dts/omap5-l4.dtsi
++++ b/arch/arm/boot/dts/omap5-l4.dtsi
+@@ -616,11 +616,11 @@
+ 				#mbox-cells = <1>;
+ 				ti,mbox-num-users = <3>;
+ 				ti,mbox-num-fifos = <8>;
+-				mbox_ipu: mbox_ipu {
++				mbox_ipu: mbox-ipu {
+ 					ti,mbox-tx = <0 0 0>;
+ 					ti,mbox-rx = <1 0 0>;
+ 				};
+-				mbox_dsp: mbox_dsp {
++				mbox_dsp: mbox-dsp {
+ 					ti,mbox-tx = <3 0 0>;
+ 					ti,mbox-rx = <2 0 0>;
+ 				};
+diff --git a/arch/arm/boot/dts/rk3036-kylin.dts b/arch/arm/boot/dts/rk3036-kylin.dts
+index 7154b827ea2f0..e817eba8c622b 100644
+--- a/arch/arm/boot/dts/rk3036-kylin.dts
++++ b/arch/arm/boot/dts/rk3036-kylin.dts
+@@ -390,7 +390,7 @@
+ 		};
+ 	};
+ 
+-	sleep {
++	suspend {
+ 		global_pwroff: global-pwroff {
+ 			rockchip,pins = <2 RK_PA7 1 &pcfg_pull_none>;
+ 		};
+diff --git a/arch/arm/boot/dts/rk3066a.dtsi b/arch/arm/boot/dts/rk3066a.dtsi
+index 252750c97f97f..bbc3bff508560 100644
+--- a/arch/arm/boot/dts/rk3066a.dtsi
++++ b/arch/arm/boot/dts/rk3066a.dtsi
+@@ -755,7 +755,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		pd_vio@RK3066_PD_VIO {
++		power-domain@RK3066_PD_VIO {
+ 			reg = <RK3066_PD_VIO>;
+ 			clocks = <&cru ACLK_LCDC0>,
+ 				 <&cru ACLK_LCDC1>,
+@@ -782,7 +782,7 @@
+ 				 <&qos_rga>;
+ 		};
+ 
+-		pd_video@RK3066_PD_VIDEO {
++		power-domain@RK3066_PD_VIDEO {
+ 			reg = <RK3066_PD_VIDEO>;
+ 			clocks = <&cru ACLK_VDPU>,
+ 				 <&cru ACLK_VEPU>,
+@@ -791,7 +791,7 @@
+ 			pm_qos = <&qos_vpu>;
+ 		};
+ 
+-		pd_gpu@RK3066_PD_GPU {
++		power-domain@RK3066_PD_GPU {
+ 			reg = <RK3066_PD_GPU>;
+ 			clocks = <&cru ACLK_GPU>;
+ 			pm_qos = <&qos_gpu>;
+diff --git a/arch/arm/boot/dts/rk3188.dtsi b/arch/arm/boot/dts/rk3188.dtsi
+index 2298a8d840ba3..b6bde9d12c2be 100644
+--- a/arch/arm/boot/dts/rk3188.dtsi
++++ b/arch/arm/boot/dts/rk3188.dtsi
+@@ -150,16 +150,16 @@
+ 		compatible = "rockchip,rk3188-timer", "rockchip,rk3288-timer";
+ 		reg = <0x2000e000 0x20>;
+ 		interrupts = <GIC_SPI 46 IRQ_TYPE_LEVEL_HIGH>;
+-		clocks = <&cru SCLK_TIMER3>, <&cru PCLK_TIMER3>;
+-		clock-names = "timer", "pclk";
++		clocks = <&cru PCLK_TIMER3>, <&cru SCLK_TIMER3>;
++		clock-names = "pclk", "timer";
+ 	};
+ 
+ 	timer6: timer@200380a0 {
+ 		compatible = "rockchip,rk3188-timer", "rockchip,rk3288-timer";
+ 		reg = <0x200380a0 0x20>;
+ 		interrupts = <GIC_SPI 64 IRQ_TYPE_LEVEL_HIGH>;
+-		clocks = <&cru SCLK_TIMER6>, <&cru PCLK_TIMER0>;
+-		clock-names = "timer", "pclk";
++		clocks = <&cru PCLK_TIMER0>, <&cru SCLK_TIMER6>;
++		clock-names = "pclk", "timer";
+ 	};
+ 
+ 	i2s0: i2s@1011a000 {
+@@ -699,7 +699,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		pd_vio@RK3188_PD_VIO {
++		power-domain@RK3188_PD_VIO {
+ 			reg = <RK3188_PD_VIO>;
+ 			clocks = <&cru ACLK_LCDC0>,
+ 				 <&cru ACLK_LCDC1>,
+@@ -721,7 +721,7 @@
+ 				 <&qos_rga>;
+ 		};
+ 
+-		pd_video@RK3188_PD_VIDEO {
++		power-domain@RK3188_PD_VIDEO {
+ 			reg = <RK3188_PD_VIDEO>;
+ 			clocks = <&cru ACLK_VDPU>,
+ 				 <&cru ACLK_VEPU>,
+@@ -730,7 +730,7 @@
+ 			pm_qos = <&qos_vpu>;
+ 		};
+ 
+-		pd_gpu@RK3188_PD_GPU {
++		power-domain@RK3188_PD_GPU {
+ 			reg = <RK3188_PD_GPU>;
+ 			clocks = <&cru ACLK_GPU>;
+ 			pm_qos = <&qos_gpu>;
+diff --git a/arch/arm/boot/dts/rk322x.dtsi b/arch/arm/boot/dts/rk322x.dtsi
+index 208f212450953..25f83f2f5618e 100644
+--- a/arch/arm/boot/dts/rk322x.dtsi
++++ b/arch/arm/boot/dts/rk322x.dtsi
+@@ -517,7 +517,7 @@
+ 		pinctrl-0 = <&otp_pin>;
+ 		pinctrl-1 = <&otp_out>;
+ 		pinctrl-2 = <&otp_pin>;
+-		#thermal-sensor-cells = <0>;
++		#thermal-sensor-cells = <1>;
+ 		rockchip,hw-tshut-temp = <95000>;
+ 		status = "disabled";
+ 	};
+@@ -558,10 +558,9 @@
+ 		compatible = "rockchip,iommu";
+ 		reg = <0x20020800 0x100>;
+ 		interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
+-		interrupt-names = "vpu_mmu";
+ 		clocks = <&cru ACLK_VPU>, <&cru HCLK_VPU>;
+ 		clock-names = "aclk", "iface";
+-		iommu-cells = <0>;
++		#iommu-cells = <0>;
+ 		status = "disabled";
+ 	};
+ 
+@@ -569,10 +568,9 @@
+ 		compatible = "rockchip,iommu";
+ 		reg = <0x20030480 0x40>, <0x200304c0 0x40>;
+ 		interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>;
+-		interrupt-names = "vdec_mmu";
+ 		clocks = <&cru ACLK_RKVDEC>, <&cru HCLK_RKVDEC>;
+ 		clock-names = "aclk", "iface";
+-		iommu-cells = <0>;
++		#iommu-cells = <0>;
+ 		status = "disabled";
+ 	};
+ 
+@@ -602,7 +600,6 @@
+ 		compatible = "rockchip,iommu";
+ 		reg = <0x20053f00 0x100>;
+ 		interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
+-		interrupt-names = "vop_mmu";
+ 		clocks = <&cru ACLK_VOP>, <&cru HCLK_VOP>;
+ 		clock-names = "aclk", "iface";
+ 		#iommu-cells = <0>;
+@@ -623,10 +620,9 @@
+ 		compatible = "rockchip,iommu";
+ 		reg = <0x20070800 0x100>;
+ 		interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
+-		interrupt-names = "iep_mmu";
+ 		clocks = <&cru ACLK_IEP>, <&cru HCLK_IEP>;
+ 		clock-names = "aclk", "iface";
+-		iommu-cells = <0>;
++		#iommu-cells = <0>;
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/rk3288-rock2-som.dtsi b/arch/arm/boot/dts/rk3288-rock2-som.dtsi
+index 44bb5e6f83b12..76363b8afcb9b 100644
+--- a/arch/arm/boot/dts/rk3288-rock2-som.dtsi
++++ b/arch/arm/boot/dts/rk3288-rock2-som.dtsi
+@@ -218,7 +218,7 @@
+ 	flash0-supply = <&vcc_flash>;
+ 	flash1-supply = <&vccio_pmu>;
+ 	gpio30-supply = <&vccio_pmu>;
+-	gpio1830 = <&vcc_io>;
++	gpio1830-supply = <&vcc_io>;
+ 	lcdc-supply = <&vcc_io>;
+ 	sdcard-supply = <&vccio_sd>;
+ 	wifi-supply = <&vcc_18>;
+diff --git a/arch/arm/boot/dts/rk3288-vyasa.dts b/arch/arm/boot/dts/rk3288-vyasa.dts
+index aa50f8ed4ca0a..b156a83eb7d78 100644
+--- a/arch/arm/boot/dts/rk3288-vyasa.dts
++++ b/arch/arm/boot/dts/rk3288-vyasa.dts
+@@ -379,10 +379,10 @@
+ 	audio-supply = <&vcc_18>;
+ 	bb-supply = <&vcc_io>;
+ 	dvp-supply = <&vcc_io>;
+-	flash0-suuply = <&vcc_18>;
++	flash0-supply = <&vcc_18>;
+ 	flash1-supply = <&vcc_lan>;
+ 	gpio30-supply = <&vcc_io>;
+-	gpio1830 = <&vcc_io>;
++	gpio1830-supply = <&vcc_io>;
+ 	lcdc-supply = <&vcc_io>;
+ 	sdcard-supply = <&vccio_sd>;
+ 	wifi-supply = <&vcc_18>;
+diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
+index 05557ad02b33c..d6dbfbd995684 100644
+--- a/arch/arm/boot/dts/rk3288.dtsi
++++ b/arch/arm/boot/dts/rk3288.dtsi
+@@ -196,8 +196,8 @@
+ 		compatible = "rockchip,rk3288-timer";
+ 		reg = <0x0 0xff810000 0x0 0x20>;
+ 		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>;
+-		clocks = <&xin24m>, <&cru PCLK_TIMER>;
+-		clock-names = "timer", "pclk";
++		clocks = <&cru PCLK_TIMER>, <&xin24m>;
++		clock-names = "pclk", "timer";
+ 	};
+ 
+ 	display-subsystem {
+@@ -765,7 +765,7 @@
+ 			 *	*_HDMI		HDMI
+ 			 *	*_MIPI_*	MIPI
+ 			 */
+-			pd_vio@RK3288_PD_VIO {
++			power-domain@RK3288_PD_VIO {
+ 				reg = <RK3288_PD_VIO>;
+ 				clocks = <&cru ACLK_IEP>,
+ 					 <&cru ACLK_ISP>,
+@@ -807,7 +807,7 @@
+ 			 * Note: The following 3 are HEVC(H.265) clocks,
+ 			 * and on the ACLK_HEVC_NIU (NOC).
+ 			 */
+-			pd_hevc@RK3288_PD_HEVC {
++			power-domain@RK3288_PD_HEVC {
+ 				reg = <RK3288_PD_HEVC>;
+ 				clocks = <&cru ACLK_HEVC>,
+ 					 <&cru SCLK_HEVC_CABAC>,
+@@ -821,7 +821,7 @@
+ 			 * (video endecoder & decoder) clocks that on the
+ 			 * ACLK_VCODEC_NIU and HCLK_VCODEC_NIU (NOC).
+ 			 */
+-			pd_video@RK3288_PD_VIDEO {
++			power-domain@RK3288_PD_VIDEO {
+ 				reg = <RK3288_PD_VIDEO>;
+ 				clocks = <&cru ACLK_VCODEC>,
+ 					 <&cru HCLK_VCODEC>;
+@@ -832,7 +832,7 @@
+ 			 * Note: ACLK_GPU is the GPU clock,
+ 			 * and on the ACLK_GPU_NIU (NOC).
+ 			 */
+-			pd_gpu@RK3288_PD_GPU {
++			power-domain@RK3288_PD_GPU {
+ 				reg = <RK3288_PD_GPU>;
+ 				clocks = <&cru ACLK_GPU>;
+ 				pm_qos = <&qos_gpu_r>,
+@@ -1582,7 +1582,7 @@
+ 			drive-strength = <12>;
+ 		};
+ 
+-		sleep {
++		suspend {
+ 			global_pwroff: global-pwroff {
+ 				rockchip,pins = <0 RK_PA0 1 &pcfg_pull_none>;
+ 			};
+diff --git a/arch/arm/boot/dts/ste-ab8500.dtsi b/arch/arm/boot/dts/ste-ab8500.dtsi
+index a16a00fb5fa5d..d0fe3f9aa183a 100644
+--- a/arch/arm/boot/dts/ste-ab8500.dtsi
++++ b/arch/arm/boot/dts/ste-ab8500.dtsi
+@@ -34,7 +34,7 @@
+ 					#clock-cells = <1>;
+ 				};
+ 
+-				ab8500_gpio: ab8500-gpio {
++				ab8500_gpio: ab8500-gpiocontroller {
+ 					compatible = "stericsson,ab8500-gpio";
+ 					gpio-controller;
+ 					#gpio-cells = <2>;
+@@ -42,15 +42,15 @@
+ 
+ 				ab8500-rtc {
+ 					compatible = "stericsson,ab8500-rtc";
+-					interrupts = <17 IRQ_TYPE_LEVEL_HIGH
+-						      18 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <17 IRQ_TYPE_LEVEL_HIGH>,
++						     <18 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "60S", "ALARM";
+ 				};
+ 
+ 				gpadc: ab8500-gpadc {
+ 					compatible = "stericsson,ab8500-gpadc";
+-					interrupts = <32 IRQ_TYPE_LEVEL_HIGH
+-						      39 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <32 IRQ_TYPE_LEVEL_HIGH>,
++						     <39 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "HW_CONV_END", "SW_CONV_END";
+ 					vddadc-supply = <&ab8500_ldo_tvout_reg>;
+ 					#address-cells = <1>;
+@@ -219,13 +219,13 @@
+ 
+ 				ab8500_usb {
+ 					compatible = "stericsson,ab8500-usb";
+-					interrupts = < 90 IRQ_TYPE_LEVEL_HIGH
+-						       96 IRQ_TYPE_LEVEL_HIGH
+-						       14 IRQ_TYPE_LEVEL_HIGH
+-						       15 IRQ_TYPE_LEVEL_HIGH
+-						       79 IRQ_TYPE_LEVEL_HIGH
+-						       74 IRQ_TYPE_LEVEL_HIGH
+-						       75 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <90 IRQ_TYPE_LEVEL_HIGH>,
++						     <96 IRQ_TYPE_LEVEL_HIGH>,
++						     <14 IRQ_TYPE_LEVEL_HIGH>,
++						     <15 IRQ_TYPE_LEVEL_HIGH>,
++						     <79 IRQ_TYPE_LEVEL_HIGH>,
++						     <74 IRQ_TYPE_LEVEL_HIGH>,
++						     <75 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "ID_WAKEUP_R",
+ 							  "ID_WAKEUP_F",
+ 							  "VBUS_DET_F",
+@@ -242,8 +242,8 @@
+ 
+ 				ab8500-ponkey {
+ 					compatible = "stericsson,ab8500-poweron-key";
+-					interrupts = <6 IRQ_TYPE_LEVEL_HIGH
+-						      7 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <6 IRQ_TYPE_LEVEL_HIGH>,
++						     <7 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "ONKEY_DBF", "ONKEY_DBR";
+ 				};
+ 
+diff --git a/arch/arm/boot/dts/ste-ab8505.dtsi b/arch/arm/boot/dts/ste-ab8505.dtsi
+index cc045b2fc217d..0defc15b9bbc6 100644
+--- a/arch/arm/boot/dts/ste-ab8505.dtsi
++++ b/arch/arm/boot/dts/ste-ab8505.dtsi
+@@ -31,7 +31,7 @@
+ 					#clock-cells = <1>;
+ 				};
+ 
+-				ab8505_gpio: ab8505-gpio {
++				ab8505_gpio: ab8505-gpiocontroller {
+ 					compatible = "stericsson,ab8505-gpio";
+ 					gpio-controller;
+ 					#gpio-cells = <2>;
+@@ -39,8 +39,8 @@
+ 
+ 				ab8500-rtc {
+ 					compatible = "stericsson,ab8500-rtc";
+-					interrupts = <17 IRQ_TYPE_LEVEL_HIGH
+-						      18 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <17 IRQ_TYPE_LEVEL_HIGH>,
++						     <18 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "60S", "ALARM";
+ 				};
+ 
+@@ -182,13 +182,13 @@
+ 
+ 				ab8500_usb: ab8500_usb {
+ 					compatible = "stericsson,ab8500-usb";
+-					interrupts = < 90 IRQ_TYPE_LEVEL_HIGH
+-						       96 IRQ_TYPE_LEVEL_HIGH
+-						       14 IRQ_TYPE_LEVEL_HIGH
+-						       15 IRQ_TYPE_LEVEL_HIGH
+-						       79 IRQ_TYPE_LEVEL_HIGH
+-						       74 IRQ_TYPE_LEVEL_HIGH
+-						       75 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <90 IRQ_TYPE_LEVEL_HIGH>,
++						     <96 IRQ_TYPE_LEVEL_HIGH>,
++						     <14 IRQ_TYPE_LEVEL_HIGH>,
++						     <15 IRQ_TYPE_LEVEL_HIGH>,
++						     <79 IRQ_TYPE_LEVEL_HIGH>,
++						     <74 IRQ_TYPE_LEVEL_HIGH>,
++						     <75 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "ID_WAKEUP_R",
+ 							  "ID_WAKEUP_F",
+ 							  "VBUS_DET_F",
+@@ -205,8 +205,8 @@
+ 
+ 				ab8500-ponkey {
+ 					compatible = "stericsson,ab8500-poweron-key";
+-					interrupts = <6 IRQ_TYPE_LEVEL_HIGH
+-						      7 IRQ_TYPE_LEVEL_HIGH>;
++					interrupts = <6 IRQ_TYPE_LEVEL_HIGH>,
++						     <7 IRQ_TYPE_LEVEL_HIGH>;
+ 					interrupt-names = "ONKEY_DBF", "ONKEY_DBR";
+ 				};
+ 
+diff --git a/arch/arm/boot/dts/ste-href-ab8500.dtsi b/arch/arm/boot/dts/ste-href-ab8500.dtsi
+index 4946743de7b9b..3ccb7b5c71625 100644
+--- a/arch/arm/boot/dts/ste-href-ab8500.dtsi
++++ b/arch/arm/boot/dts/ste-href-ab8500.dtsi
+@@ -9,7 +9,7 @@
+ 	soc {
+ 		prcmu@80157000 {
+ 			ab8500 {
+-				ab8500-gpio {
++				ab8500-gpiocontroller {
+ 					/* Hog a few default settings */
+ 					pinctrl-names = "default";
+ 					pinctrl-0 = <&gpio2_default_mode>,
+diff --git a/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi b/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi
+index 70f058352efca..511e097cc33e0 100644
+--- a/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi
++++ b/arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi
+@@ -89,6 +89,9 @@
+ 					     <19 IRQ_TYPE_EDGE_RISING>;
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&accel_tvk_mode>;
++				mount-matrix = "0", "-1", "0",
++					       "-1", "0", "0",
++					       "0", "0", "-1";
+ 			};
+ 			magnetometer@1e {
+ 				compatible = "st,lsm303dlm-magn";
+diff --git a/arch/arm/boot/dts/ste-href.dtsi b/arch/arm/boot/dts/ste-href.dtsi
+index 13d2161929042..c97e8d29004f8 100644
+--- a/arch/arm/boot/dts/ste-href.dtsi
++++ b/arch/arm/boot/dts/ste-href.dtsi
+@@ -209,7 +209,7 @@
+ 
+ 		prcmu@80157000 {
+ 			ab8500 {
+-				ab8500-gpio {
++				ab8500-gpiocontroller {
+ 				};
+ 
+ 				ab8500_usb {
+diff --git a/arch/arm/boot/dts/ste-snowball.dts b/arch/arm/boot/dts/ste-snowball.dts
+index b344b3748143a..40f1d7c9c1d4c 100644
+--- a/arch/arm/boot/dts/ste-snowball.dts
++++ b/arch/arm/boot/dts/ste-snowball.dts
+@@ -376,7 +376,7 @@
+ 
+ 		prcmu@80157000 {
+ 			ab8500 {
+-				ab8500-gpio {
++				ab8500-gpiocontroller {
+ 					/*
+ 					 * AB8500 GPIOs are numbered starting from 1, so the first
+ 					 * index 0 is what in the datasheet is called "GPIO1", and
+diff --git a/arch/arm/boot/dts/ste-ux500-samsung-golden.dts b/arch/arm/boot/dts/ste-ux500-samsung-golden.dts
+index 0d43ee6583cf8..40df7c61bf69e 100644
+--- a/arch/arm/boot/dts/ste-ux500-samsung-golden.dts
++++ b/arch/arm/boot/dts/ste-ux500-samsung-golden.dts
+@@ -121,7 +121,7 @@
+ 			#size-cells = <0>;
+ 
+ 			wifi@1 {
+-				compatible = "brcm,bcm4329-fmac";
++				compatible = "brcm,bcm4334-fmac", "brcm,bcm4329-fmac";
+ 				reg = <1>;
+ 
+ 				/* GPIO216 (WLAN_HOST_WAKE) */
+@@ -162,6 +162,7 @@
+ 			pinctrl-1 = <&u0_a_1_sleep>;
+ 
+ 			bluetooth {
++				/* BCM4334B0 actually */
+ 				compatible = "brcm,bcm4330-bt";
+ 				/* GPIO222 (BT_VREG_ON) */
+ 				shutdown-gpios = <&gpio6 30 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/ste-ux500-samsung-janice.dts b/arch/arm/boot/dts/ste-ux500-samsung-janice.dts
+index f24369873ce2c..25af066f6f3ae 100644
+--- a/arch/arm/boot/dts/ste-ux500-samsung-janice.dts
++++ b/arch/arm/boot/dts/ste-ux500-samsung-janice.dts
+@@ -401,8 +401,7 @@
+ 			status = "okay";
+ 
+ 			wifi@1 {
+-				/* Actually BRCM4330 */
+-				compatible = "brcm,bcm4329-fmac";
++				compatible = "brcm,bcm4330-fmac", "brcm,bcm4329-fmac";
+ 				reg = <1>;
+ 				/* GPIO216 WL_HOST_WAKE */
+ 				interrupt-parent = <&gpio6>;
+@@ -436,6 +435,7 @@
+ 			status = "okay";
+ 
+ 			bluetooth {
++				/* BCM4330B1 actually */
+ 				compatible = "brcm,bcm4330-bt";
+ 				/* GPIO222 rail BT_VREG_EN to BT_REG_ON */
+ 				shutdown-gpios = <&gpio6 30 GPIO_ACTIVE_HIGH>;
+@@ -583,10 +583,9 @@
+ 					accelerometer@08 {
+ 						compatible = "bosch,bma222";
+ 						reg = <0x08>;
+-						/* FIXME: no idea about this */
+-						mount-matrix = "1", "0", "0",
+-							       "0", "1", "0",
+-							       "0", "0", "1";
++						mount-matrix = "0", "1", "0",
++							       "-1", "0", "0",
++							       "0", "0", "-1";
+ 						vddio-supply = <&ab8500_ldo_aux2_reg>; // 1.8V
+ 						vdd-supply = <&ab8500_ldo_aux1_reg>; // 3V
+ 					};
+diff --git a/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts b/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts
+index d28a00757d0b9..94afd7a0fe1f0 100644
+--- a/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts
++++ b/arch/arm/boot/dts/ste-ux500-samsung-skomer.dts
+@@ -211,7 +211,7 @@
+ 			#size-cells = <0>;
+ 
+ 			wifi@1 {
+-				compatible = "brcm,bcm4329-fmac";
++				compatible = "brcm,bcm4334-fmac", "brcm,bcm4329-fmac";
+ 				reg = <1>;
+ 				/* GPIO216 WL_HOST_WAKE */
+ 				interrupt-parent = <&gpio6>;
+@@ -247,6 +247,7 @@
+ 
+ 			/* FIXME: not quite working yet, probably needs regulators */
+ 			bluetooth {
++				/* BCM4334B0 actually */
+ 				compatible = "brcm,bcm4330-bt";
+ 				shutdown-gpios = <&gpio6 30 GPIO_ACTIVE_HIGH>;
+ 				device-wakeup-gpios = <&gpio6 7 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/stm32429i-eval.dts b/arch/arm/boot/dts/stm32429i-eval.dts
+index 7e10ae744c9d1..9ac1ffe53413c 100644
+--- a/arch/arm/boot/dts/stm32429i-eval.dts
++++ b/arch/arm/boot/dts/stm32429i-eval.dts
+@@ -119,17 +119,15 @@
+ 		};
+ 	};
+ 
+-	gpio_keys {
++	gpio-keys {
+ 		compatible = "gpio-keys";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		autorepeat;
+-		button@0 {
++		button-0 {
+ 			label = "Wake up";
+ 			linux,code = <KEY_WAKEUP>;
+ 			gpios = <&gpioa 0 0>;
+ 		};
+-		button@1 {
++		button-1 {
+ 			label = "Tamper";
+ 			linux,code = <KEY_RESTART>;
+ 			gpios = <&gpioc 13 0>;
+diff --git a/arch/arm/boot/dts/stm32746g-eval.dts b/arch/arm/boot/dts/stm32746g-eval.dts
+index ca8c192449ee9..327613fd9666c 100644
+--- a/arch/arm/boot/dts/stm32746g-eval.dts
++++ b/arch/arm/boot/dts/stm32746g-eval.dts
+@@ -81,12 +81,10 @@
+ 		};
+ 	};
+ 
+-	gpio_keys {
++	gpio-keys {
+ 		compatible = "gpio-keys";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		autorepeat;
+-		button@0 {
++		button-0 {
+ 			label = "Wake up";
+ 			linux,code = <KEY_WAKEUP>;
+ 			gpios = <&gpioc 13 0>;
+diff --git a/arch/arm/boot/dts/stm32f429-disco.dts b/arch/arm/boot/dts/stm32f429-disco.dts
+index 3dc068b91ca15..075ac57d0bf4a 100644
+--- a/arch/arm/boot/dts/stm32f429-disco.dts
++++ b/arch/arm/boot/dts/stm32f429-disco.dts
+@@ -81,12 +81,10 @@
+ 		};
+ 	};
+ 
+-	gpio_keys {
++	gpio-keys {
+ 		compatible = "gpio-keys";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		autorepeat;
+-		button@0 {
++		button-0 {
+ 			label = "User";
+ 			linux,code = <KEY_HOME>;
+ 			gpios = <&gpioa 0 0>;
+diff --git a/arch/arm/boot/dts/stm32f429.dtsi b/arch/arm/boot/dts/stm32f429.dtsi
+index f6530d724d004..8748d58502980 100644
+--- a/arch/arm/boot/dts/stm32f429.dtsi
++++ b/arch/arm/boot/dts/stm32f429.dtsi
+@@ -283,8 +283,6 @@
+ 		};
+ 
+ 		timers13: timers@40001c00 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40001C00 0x400>;
+ 			clocks = <&rcc 0 STM32F4_APB1_CLOCK(TIM13)>;
+@@ -299,8 +297,6 @@
+ 		};
+ 
+ 		timers14: timers@40002000 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40002000 0x400>;
+ 			clocks = <&rcc 0 STM32F4_APB1_CLOCK(TIM14)>;
+@@ -633,8 +629,6 @@
+ 		};
+ 
+ 		timers10: timers@40014400 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40014400 0x400>;
+ 			clocks = <&rcc 0 STM32F4_APB2_CLOCK(TIM10)>;
+@@ -649,8 +643,6 @@
+ 		};
+ 
+ 		timers11: timers@40014800 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40014800 0x400>;
+ 			clocks = <&rcc 0 STM32F4_APB2_CLOCK(TIM11)>;
+@@ -709,7 +701,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		rcc: rcc@40023810 {
++		rcc: rcc@40023800 {
+ 			#reset-cells = <1>;
+ 			#clock-cells = <2>;
+ 			compatible = "st,stm32f42xx-rcc", "st,stm32-rcc";
+diff --git a/arch/arm/boot/dts/stm32f469-disco.dts b/arch/arm/boot/dts/stm32f469-disco.dts
+index 2e1b3bbbe4b5b..8c982ae79f432 100644
+--- a/arch/arm/boot/dts/stm32f469-disco.dts
++++ b/arch/arm/boot/dts/stm32f469-disco.dts
+@@ -104,12 +104,10 @@
+ 		};
+ 	};
+ 
+-	gpio_keys {
++	gpio-keys {
+ 		compatible = "gpio-keys";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		autorepeat;
+-		button@0 {
++		button-0 {
+ 			label = "User";
+ 			linux,code = <KEY_WAKEUP>;
+ 			gpios = <&gpioa 0 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/stm32f746.dtsi b/arch/arm/boot/dts/stm32f746.dtsi
+index e1df603fc9816..014b416f57e6f 100644
+--- a/arch/arm/boot/dts/stm32f746.dtsi
++++ b/arch/arm/boot/dts/stm32f746.dtsi
+@@ -265,8 +265,6 @@
+ 		};
+ 
+ 		timers13: timers@40001c00 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40001C00 0x400>;
+ 			clocks = <&rcc 0 STM32F7_APB1_CLOCK(TIM13)>;
+@@ -281,8 +279,6 @@
+ 		};
+ 
+ 		timers14: timers@40002000 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40002000 0x400>;
+ 			clocks = <&rcc 0 STM32F7_APB1_CLOCK(TIM14)>;
+@@ -364,9 +360,9 @@
+ 			status = "disabled";
+ 		};
+ 
+-		i2c3: i2c@40005C00 {
++		i2c3: i2c@40005c00 {
+ 			compatible = "st,stm32f7-i2c";
+-			reg = <0x40005C00 0x400>;
++			reg = <0x40005c00 0x400>;
+ 			interrupts = <72>,
+ 				     <73>;
+ 			resets = <&rcc STM32F7_APB1_RESET(I2C3)>;
+@@ -531,8 +527,6 @@
+ 		};
+ 
+ 		timers10: timers@40014400 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40014400 0x400>;
+ 			clocks = <&rcc 0 STM32F7_APB2_CLOCK(TIM10)>;
+@@ -547,8 +541,6 @@
+ 		};
+ 
+ 		timers11: timers@40014800 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-timers";
+ 			reg = <0x40014800 0x400>;
+ 			clocks = <&rcc 0 STM32F7_APB2_CLOCK(TIM11)>;
+diff --git a/arch/arm/boot/dts/stm32f769-disco.dts b/arch/arm/boot/dts/stm32f769-disco.dts
+index 0ce7fbc20fa4d..be943b7019806 100644
+--- a/arch/arm/boot/dts/stm32f769-disco.dts
++++ b/arch/arm/boot/dts/stm32f769-disco.dts
+@@ -75,12 +75,10 @@
+ 		};
+ 	};
+ 
+-	gpio_keys {
++	gpio-keys {
+ 		compatible = "gpio-keys";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 		autorepeat;
+-		button@0 {
++		button-0 {
+ 			label = "User";
+ 			linux,code = <KEY_HOME>;
+ 			gpios = <&gpioa 0 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/stm32h743.dtsi b/arch/arm/boot/dts/stm32h743.dtsi
+index 05ecdf9ff03a4..6e42ca2dada2c 100644
+--- a/arch/arm/boot/dts/stm32h743.dtsi
++++ b/arch/arm/boot/dts/stm32h743.dtsi
+@@ -485,8 +485,6 @@
+ 		};
+ 
+ 		lptimer4: timer@58002c00 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-lptimer";
+ 			reg = <0x58002c00 0x400>;
+ 			clocks = <&rcc LPTIM4_CK>;
+@@ -501,8 +499,6 @@
+ 		};
+ 
+ 		lptimer5: timer@58003000 {
+-			#address-cells = <1>;
+-			#size-cells = <0>;
+ 			compatible = "st,stm32-lptimer";
+ 			reg = <0x58003000 0x400>;
+ 			clocks = <&rcc LPTIM5_CK>;
+diff --git a/arch/arm/boot/dts/stm32mp151.dtsi b/arch/arm/boot/dts/stm32mp151.dtsi
+index fcd3230c469b7..23234f950de66 100644
+--- a/arch/arm/boot/dts/stm32mp151.dtsi
++++ b/arch/arm/boot/dts/stm32mp151.dtsi
+@@ -1416,12 +1416,6 @@
+ 			status = "disabled";
+ 		};
+ 
+-		stmmac_axi_config_0: stmmac-axi-config {
+-			snps,wr_osr_lmt = <0x7>;
+-			snps,rd_osr_lmt = <0x7>;
+-			snps,blen = <0 0 0 0 16 8 4>;
+-		};
+-
+ 		ethernet0: ethernet@5800a000 {
+ 			compatible = "st,stm32mp1-dwmac", "snps,dwmac-4.20a";
+ 			reg = <0x5800a000 0x2000>;
+@@ -1447,6 +1441,12 @@
+ 			snps,axi-config = <&stmmac_axi_config_0>;
+ 			snps,tso;
+ 			status = "disabled";
++
++			stmmac_axi_config_0: stmmac-axi-config {
++				snps,wr_osr_lmt = <0x7>;
++				snps,rd_osr_lmt = <0x7>;
++				snps,blen = <0 0 0 0 16 8 4>;
++			};
+ 		};
+ 
+ 		usbh_ohci: usb@5800c000 {
+diff --git a/arch/arm/boot/dts/stm32mp157a-microgea-stm32mp1-microdev2.0-of7.dts b/arch/arm/boot/dts/stm32mp157a-microgea-stm32mp1-microdev2.0-of7.dts
+index 674b2d330dc41..5670b23812a21 100644
+--- a/arch/arm/boot/dts/stm32mp157a-microgea-stm32mp1-microdev2.0-of7.dts
++++ b/arch/arm/boot/dts/stm32mp157a-microgea-stm32mp1-microdev2.0-of7.dts
+@@ -89,7 +89,7 @@
+ };
+ 
+ &pinctrl {
+-	ltdc_pins: ltdc {
++	ltdc_pins: ltdc-0 {
+ 		pins {
+ 			pinmux = <STM32_PINMUX('G', 10, AF14)>,	/* LTDC_B2 */
+ 				 <STM32_PINMUX('H', 12, AF14)>,	/* LTDC_R6 */
+diff --git a/arch/arm/boot/dts/stm32mp157a-stinger96.dtsi b/arch/arm/boot/dts/stm32mp157a-stinger96.dtsi
+index 113c48b2ef93d..a4b14ef3caeeb 100644
+--- a/arch/arm/boot/dts/stm32mp157a-stinger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp157a-stinger96.dtsi
+@@ -184,8 +184,6 @@
+ 
+ 			vdd_usb: ldo4 {
+ 				regulator-name = "vdd_usb";
+-				regulator-min-microvolt = <3300000>;
+-				regulator-max-microvolt = <3300000>;
+ 				interrupts = <IT_CURLIM_LDO4 0>;
+ 			};
+ 
+@@ -208,7 +206,6 @@
+ 			vref_ddr: vref_ddr {
+ 				regulator-name = "vref_ddr";
+ 				regulator-always-on;
+-				regulator-over-current-protection;
+ 			};
+ 
+ 			bst_out: boost {
+@@ -219,13 +216,13 @@
+ 			vbus_otg: pwr_sw1 {
+ 				regulator-name = "vbus_otg";
+ 				interrupts = <IT_OCP_OTG 0>;
+-				regulator-active-discharge;
++				regulator-active-discharge = <1>;
+ 			};
+ 
+ 			vbus_sw: pwr_sw2 {
+ 				regulator-name = "vbus_sw";
+ 				interrupts = <IT_OCP_SWOUT 0>;
+-				regulator-active-discharge;
++				regulator-active-discharge = <1>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/stm32mp157c-odyssey-som.dtsi b/arch/arm/boot/dts/stm32mp157c-odyssey-som.dtsi
+index 6cf49a0a9e694..2d9461006810c 100644
+--- a/arch/arm/boot/dts/stm32mp157c-odyssey-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp157c-odyssey-som.dtsi
+@@ -173,8 +173,6 @@
+ 
+ 			vdd_usb: ldo4 {
+ 				regulator-name = "vdd_usb";
+-				regulator-min-microvolt = <3300000>;
+-				regulator-max-microvolt = <3300000>;
+ 				interrupts = <IT_CURLIM_LDO4 0>;
+ 			};
+ 
+@@ -197,7 +195,6 @@
+ 			vref_ddr: vref_ddr {
+ 				regulator-name = "vref_ddr";
+ 				regulator-always-on;
+-				regulator-over-current-protection;
+ 			};
+ 
+ 			 bst_out: boost {
+@@ -213,7 +210,7 @@
+ 			 vbus_sw: pwr_sw2 {
+ 				regulator-name = "vbus_sw";
+ 				interrupts = <IT_OCP_SWOUT 0>;
+-				regulator-active-discharge;
++				regulator-active-discharge = <1>;
+ 			 };
+ 		};
+ 
+@@ -269,7 +266,7 @@
+ 	st,neg-edge;
+ 	bus-width = <8>;
+ 	vmmc-supply = <&v3v3>;
+-	vqmmc-supply = <&v3v3>;
++	vqmmc-supply = <&vdd>;
+ 	mmc-ddr-3_3v;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/stm32mp157c-odyssey.dts b/arch/arm/boot/dts/stm32mp157c-odyssey.dts
+index a7ffec8f15164..be1dd5e9e7443 100644
+--- a/arch/arm/boot/dts/stm32mp157c-odyssey.dts
++++ b/arch/arm/boot/dts/stm32mp157c-odyssey.dts
+@@ -64,7 +64,7 @@
+ 	pinctrl-0 = <&sdmmc1_b4_pins_a>;
+ 	pinctrl-1 = <&sdmmc1_b4_od_pins_a>;
+ 	pinctrl-2 = <&sdmmc1_b4_sleep_pins_a>;
+-	cd-gpios = <&gpiob 7 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>;
++	cd-gpios = <&gpioi 3 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>;
+ 	disable-wp;
+ 	st,neg-edge;
+ 	bus-width = <4>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+index 5523f4138fd69..c5ea08fec535f 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -34,7 +34,6 @@
+ 
+ 	gpio-keys-polled {
+ 		compatible = "gpio-keys-polled";
+-		#size-cells = <0>;
+ 		poll-interval = <20>;
+ 
+ 		/*
+@@ -60,7 +59,6 @@
+ 
+ 	gpio-keys {
+ 		compatible = "gpio-keys";
+-		#size-cells = <0>;
+ 
+ 		button-1 {
+ 			label = "TA2-GPIO-B";
+@@ -184,12 +182,11 @@
+ 
+ 	};
+ 
+-	polytouch@38 {
+-		compatible = "edt,edt-ft5x06";
++	touchscreen@38 {
++		compatible = "edt,edt-ft5406";
+ 		reg = <0x38>;
+ 		interrupt-parent = <&gpiog>;
+ 		interrupts = <2 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */
+-		linux,wakeup;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+index 31d08423a32f0..2af0a67526747 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+@@ -150,7 +150,7 @@
+ 	pinctrl-1 = <&fmc_sleep_pins_b>;
+ 	status = "okay";
+ 
+-	ksz8851: ks8851mll@1,0 {
++	ksz8851: ethernet@1,0 {
+ 		compatible = "micrel,ks8851-mll";
+ 		reg = <1 0x0 0x2>, <1 0x2 0x20000>;
+ 		interrupt-parent = <&gpioc>;
+@@ -333,8 +333,6 @@
+ 
+ 			vdd_usb: ldo4 {
+ 				regulator-name = "vdd_usb";
+-				regulator-min-microvolt = <3300000>;
+-				regulator-max-microvolt = <3300000>;
+ 				interrupts = <IT_CURLIM_LDO4 0>;
+ 			};
+ 
+@@ -356,7 +354,6 @@
+ 			vref_ddr: vref_ddr {
+ 				regulator-name = "vref_ddr";
+ 				regulator-always-on;
+-				regulator-over-current-protection;
+ 			};
+ 
+ 			bst_out: boost {
+@@ -372,7 +369,7 @@
+ 			vbus_sw: pwr_sw2 {
+ 				regulator-name = "vbus_sw";
+ 				interrupts = <IT_OCP_SWOUT 0>;
+-				regulator-active-discharge;
++				regulator-active-discharge = <1>;
+ 			};
+ 		};
+ 
+@@ -437,7 +434,7 @@
+ 	#size-cells = <0>;
+ 	status = "okay";
+ 
+-	flash0: mx66l51235l@0 {
++	flash0: flash@0 {
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-rx-bus-width = <4>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+index 013ae369791d7..2b0ac605549d7 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+@@ -198,7 +198,7 @@
+ 	#size-cells = <0>;
+ 	status = "okay";
+ 
+-	flash0: spi-flash@0 {
++	flash0: flash@0 {
+ 		compatible = "jedec,spi-nor";
+ 		reg = <0>;
+ 		spi-rx-bus-width = <4>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-osd32.dtsi b/arch/arm/boot/dts/stm32mp15xx-osd32.dtsi
+index 713485a95795a..6706d8311a665 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-osd32.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-osd32.dtsi
+@@ -146,8 +146,6 @@
+ 
+ 			vdd_usb: ldo4 {
+ 				regulator-name = "vdd_usb";
+-				regulator-min-microvolt = <3300000>;
+-				regulator-max-microvolt = <3300000>;
+ 				interrupts = <IT_CURLIM_LDO4 0>;
+ 			};
+ 
+@@ -171,7 +169,6 @@
+ 			vref_ddr: vref_ddr {
+ 				regulator-name = "vref_ddr";
+ 				regulator-always-on;
+-				regulator-over-current-protection;
+ 			};
+ 
+ 			bst_out: boost {
+@@ -182,13 +179,13 @@
+ 			vbus_otg: pwr_sw1 {
+ 				regulator-name = "vbus_otg";
+ 				interrupts = <IT_OCP_OTG 0>;
+-				regulator-active-discharge;
++				regulator-active-discharge = <1>;
+ 			};
+ 
+ 			vbus_sw: pwr_sw2 {
+ 				regulator-name = "vbus_sw";
+ 				interrupts = <IT_OCP_SWOUT 0>;
+-				regulator-active-discharge;
++				regulator-active-discharge = <1>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+index 2298fc034183b..14cd3238355b7 100644
+--- a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
++++ b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+@@ -1030,7 +1030,7 @@
+ 		nvidia,audio-codec = <&wm8903>;
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+-		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_HIGH>;
++		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_LOW>;
+ 		nvidia,int-mic-en-gpios = <&wm8903 1 GPIO_ACTIVE_HIGH>;
+ 		nvidia,headset;
+ 
+diff --git a/arch/arm/boot/dts/tegra20-harmony.dts b/arch/arm/boot/dts/tegra20-harmony.dts
+index 86494cb4d5a1d..ae4312eedcbd5 100644
+--- a/arch/arm/boot/dts/tegra20-harmony.dts
++++ b/arch/arm/boot/dts/tegra20-harmony.dts
+@@ -748,7 +748,7 @@
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+ 		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2)
+-			GPIO_ACTIVE_HIGH>;
++			GPIO_ACTIVE_LOW>;
+ 		nvidia,int-mic-en-gpios = <&gpio TEGRA_GPIO(X, 0)
+ 			GPIO_ACTIVE_HIGH>;
+ 		nvidia,ext-mic-en-gpios = <&gpio TEGRA_GPIO(X, 1)
+diff --git a/arch/arm/boot/dts/tegra20-medcom-wide.dts b/arch/arm/boot/dts/tegra20-medcom-wide.dts
+index a348ca30e522b..b31c9bca16e6a 100644
+--- a/arch/arm/boot/dts/tegra20-medcom-wide.dts
++++ b/arch/arm/boot/dts/tegra20-medcom-wide.dts
+@@ -84,7 +84,7 @@
+ 		nvidia,audio-codec = <&wm8903>;
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+-		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_HIGH>;
++		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_LOW>;
+ 
+ 		clocks = <&tegra_car TEGRA20_CLK_PLL_A>,
+ 			 <&tegra_car TEGRA20_CLK_PLL_A_OUT0>,
+diff --git a/arch/arm/boot/dts/tegra20-plutux.dts b/arch/arm/boot/dts/tegra20-plutux.dts
+index 378f23b2958b1..5811b7006a9bf 100644
+--- a/arch/arm/boot/dts/tegra20-plutux.dts
++++ b/arch/arm/boot/dts/tegra20-plutux.dts
+@@ -52,7 +52,7 @@
+ 		nvidia,audio-codec = <&wm8903>;
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+-		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_HIGH>;
++		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_LOW>;
+ 
+ 		clocks = <&tegra_car TEGRA20_CLK_PLL_A>,
+ 			 <&tegra_car TEGRA20_CLK_PLL_A_OUT0>,
+diff --git a/arch/arm/boot/dts/tegra20-seaboard.dts b/arch/arm/boot/dts/tegra20-seaboard.dts
+index c24d4a37613e9..92d494b8c3d25 100644
+--- a/arch/arm/boot/dts/tegra20-seaboard.dts
++++ b/arch/arm/boot/dts/tegra20-seaboard.dts
+@@ -911,7 +911,7 @@
+ 		nvidia,audio-codec = <&wm8903>;
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+-		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(X, 1) GPIO_ACTIVE_HIGH>;
++		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(X, 1) GPIO_ACTIVE_LOW>;
+ 
+ 		clocks = <&tegra_car TEGRA20_CLK_PLL_A>,
+ 			 <&tegra_car TEGRA20_CLK_PLL_A_OUT0>,
+diff --git a/arch/arm/boot/dts/tegra20-tec.dts b/arch/arm/boot/dts/tegra20-tec.dts
+index 44ced60315de1..10ff09d86efa7 100644
+--- a/arch/arm/boot/dts/tegra20-tec.dts
++++ b/arch/arm/boot/dts/tegra20-tec.dts
+@@ -61,7 +61,7 @@
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+ 		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2)
+-			GPIO_ACTIVE_HIGH>;
++			GPIO_ACTIVE_LOW>;
+ 
+ 		clocks = <&tegra_car TEGRA20_CLK_PLL_A>,
+ 			 <&tegra_car TEGRA20_CLK_PLL_A_OUT0>,
+diff --git a/arch/arm/boot/dts/tegra20-ventana.dts b/arch/arm/boot/dts/tegra20-ventana.dts
+index 99a356c1ccec3..5a2578b3707f4 100644
+--- a/arch/arm/boot/dts/tegra20-ventana.dts
++++ b/arch/arm/boot/dts/tegra20-ventana.dts
+@@ -709,7 +709,7 @@
+ 		nvidia,audio-codec = <&wm8903>;
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+-		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_HIGH>;
++		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2) GPIO_ACTIVE_LOW>;
+ 		nvidia,int-mic-en-gpios = <&gpio TEGRA_GPIO(X, 0)
+ 			GPIO_ACTIVE_HIGH>;
+ 		nvidia,ext-mic-en-gpios = <&gpio TEGRA_GPIO(X, 1)
+diff --git a/arch/arm/boot/dts/tegra30-asus-nexus7-grouper-ti-pmic.dtsi b/arch/arm/boot/dts/tegra30-asus-nexus7-grouper-ti-pmic.dtsi
+index b97da45ebdb48..e1325ee0a3c46 100644
+--- a/arch/arm/boot/dts/tegra30-asus-nexus7-grouper-ti-pmic.dtsi
++++ b/arch/arm/boot/dts/tegra30-asus-nexus7-grouper-ti-pmic.dtsi
+@@ -144,7 +144,7 @@
+ 	};
+ 
+ 	vdd_3v3_sys: regulator@1 {
+-		gpio = <&pmic 7 GPIO_ACTIVE_HIGH>;
++		gpio = <&pmic 6 GPIO_ACTIVE_HIGH>;
+ 		enable-active-high;
+ 	};
+ };
+diff --git a/arch/arm/boot/dts/tegra30-cardhu.dtsi b/arch/arm/boot/dts/tegra30-cardhu.dtsi
+index 2dff14b87f3e6..d9dd11569d4b2 100644
+--- a/arch/arm/boot/dts/tegra30-cardhu.dtsi
++++ b/arch/arm/boot/dts/tegra30-cardhu.dtsi
+@@ -630,7 +630,7 @@
+ 
+ 		nvidia,spkr-en-gpios = <&wm8903 2 GPIO_ACTIVE_HIGH>;
+ 		nvidia,hp-det-gpios = <&gpio TEGRA_GPIO(W, 2)
+-			GPIO_ACTIVE_HIGH>;
++			GPIO_ACTIVE_LOW>;
+ 
+ 		clocks = <&tegra_car TEGRA30_CLK_PLL_A>,
+ 			 <&tegra_car TEGRA30_CLK_PLL_A_OUT0>,
+diff --git a/arch/arm/mach-imx/suspend-imx53.S b/arch/arm/mach-imx/suspend-imx53.S
+index 41b8aad653634..46570ec2fbcfe 100644
+--- a/arch/arm/mach-imx/suspend-imx53.S
++++ b/arch/arm/mach-imx/suspend-imx53.S
+@@ -28,11 +28,11 @@
+  *                              ^
+  *                              ^
+  *                      imx53_suspend code
+- *              PM_INFO structure(imx53_suspend_info)
++ *              PM_INFO structure(imx5_cpu_suspend_info)
+  * ======================== low address =======================
+  */
+ 
+-/* Offsets of members of struct imx53_suspend_info */
++/* Offsets of members of struct imx5_cpu_suspend_info */
+ #define SUSPEND_INFO_MX53_M4IF_V_OFFSET		0x0
+ #define SUSPEND_INFO_MX53_IOMUXC_V_OFFSET	0x4
+ #define SUSPEND_INFO_MX53_IO_COUNT_OFFSET	0x8
+diff --git a/arch/arm/mach-omap2/pm33xx-core.c b/arch/arm/mach-omap2/pm33xx-core.c
+index 56f2c0bcae5a3..bf0d25fd2cea4 100644
+--- a/arch/arm/mach-omap2/pm33xx-core.c
++++ b/arch/arm/mach-omap2/pm33xx-core.c
+@@ -8,6 +8,7 @@
+ 
+ #include <linux/cpuidle.h>
+ #include <linux/platform_data/pm33xx.h>
++#include <linux/suspend.h>
+ #include <asm/cpuidle.h>
+ #include <asm/smp_scu.h>
+ #include <asm/suspend.h>
+@@ -324,6 +325,44 @@ static struct am33xx_pm_platform_data *am33xx_pm_get_pdata(void)
+ 		return NULL;
+ }
+ 
++#ifdef CONFIG_SUSPEND
++/*
++ * Block system suspend initially. Later on pm33xx sets up its own
++ * platform_suspend_ops after probe. That also depends on the loaded
++ * wkup_m3_ipc and the booted am335x-pm-firmware.elf.
++ */
++static int amx3_suspend_block(suspend_state_t state)
++{
++	pr_warn("PM not initialized for pm33xx, wkup_m3_ipc, or am335x-pm-firmware.elf\n");
++
++	return -EINVAL;
++}
++
++static int amx3_pm_valid(suspend_state_t state)
++{
++	switch (state) {
++	case PM_SUSPEND_STANDBY:
++		return 1;
++	default:
++		return 0;
++	}
++}
++
++static const struct platform_suspend_ops amx3_blocked_pm_ops = {
++	.begin = amx3_suspend_block,
++	.valid = amx3_pm_valid,
++};
++
++static void __init amx3_block_suspend(void)
++{
++	suspend_set_ops(&amx3_blocked_pm_ops);
++}
++#else
++static inline void amx3_block_suspend(void)
++{
++}
++#endif	/* CONFIG_SUSPEND */
++
+ int __init amx3_common_pm_init(void)
+ {
+ 	struct am33xx_pm_platform_data *pdata;
+@@ -337,6 +376,7 @@ int __init amx3_common_pm_init(void)
+ 	devinfo.size_data = sizeof(*pdata);
+ 	devinfo.id = -1;
+ 	platform_device_register_full(&devinfo);
++	amx3_block_suspend();
+ 
+ 	return 0;
+ }
+diff --git a/arch/arm64/boot/dts/arm/juno-base.dtsi b/arch/arm64/boot/dts/arm/juno-base.dtsi
+index 1cc7fdcec51bc..8e7a66943b018 100644
+--- a/arch/arm64/boot/dts/arm/juno-base.dtsi
++++ b/arch/arm64/boot/dts/arm/juno-base.dtsi
+@@ -568,13 +568,13 @@
+ 		clocks {
+ 			compatible = "arm,scpi-clocks";
+ 
+-			scpi_dvfs: scpi-dvfs {
++			scpi_dvfs: clocks-0 {
+ 				compatible = "arm,scpi-dvfs-clocks";
+ 				#clock-cells = <1>;
+ 				clock-indices = <0>, <1>, <2>;
+ 				clock-output-names = "atlclk", "aplclk","gpuclk";
+ 			};
+-			scpi_clk: scpi-clk {
++			scpi_clk: clocks-1 {
+ 				compatible = "arm,scpi-variable-clocks";
+ 				#clock-cells = <1>;
+ 				clock-indices = <3>;
+@@ -582,7 +582,7 @@
+ 			};
+ 		};
+ 
+-		scpi_devpd: scpi-power-domains {
++		scpi_devpd: power-controller {
+ 			compatible = "arm,scpi-power-domains";
+ 			num-domains = <2>;
+ 			#power-domain-cells = <1>;
+diff --git a/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi b/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
+index 8060178b365d8..a5a64d17d9ea6 100644
+--- a/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
++++ b/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
+@@ -306,7 +306,7 @@
+ 			interrupt-names = "nand";
+ 			status = "okay";
+ 
+-			nandcs: nandcs@0 {
++			nandcs: nand@0 {
+ 				compatible = "brcm,nandcs";
+ 				reg = <0>;
+ 			};
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
+index 135ac82108714..801ba9612d361 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls208xa.dtsi
+@@ -929,7 +929,6 @@
+ 					    QORIQ_CLK_PLL_DIV(4)>;
+ 			clock-names = "dspi";
+ 			spi-num-chipselects = <5>;
+-			bus-num = <0>;
+ 		};
+ 
+ 		esdhc: esdhc@2140000 {
+diff --git a/arch/arm64/boot/dts/freescale/imx8-ss-conn.dtsi b/arch/arm64/boot/dts/freescale/imx8-ss-conn.dtsi
+index e1e81ca0ca698..a79f42a9618ec 100644
+--- a/arch/arm64/boot/dts/freescale/imx8-ss-conn.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8-ss-conn.dtsi
+@@ -77,9 +77,12 @@ conn_subsys: bus@5b000000 {
+ 			     <GIC_SPI 259 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&enet0_lpcg IMX_LPCG_CLK_4>,
+ 			 <&enet0_lpcg IMX_LPCG_CLK_2>,
+-			 <&enet0_lpcg IMX_LPCG_CLK_1>,
++			 <&enet0_lpcg IMX_LPCG_CLK_3>,
+ 			 <&enet0_lpcg IMX_LPCG_CLK_0>;
+ 		clock-names = "ipg", "ahb", "enet_clk_ref", "ptp";
++		assigned-clocks = <&clk IMX_SC_R_ENET_0 IMX_SC_PM_CLK_PER>,
++				  <&clk IMX_SC_R_ENET_0 IMX_SC_C_CLKDIV>;
++		assigned-clock-rates = <250000000>, <125000000>;
+ 		fsl,num-tx-queues=<3>;
+ 		fsl,num-rx-queues=<3>;
+ 		power-domains = <&pd IMX_SC_R_ENET_0>;
+@@ -94,9 +97,12 @@ conn_subsys: bus@5b000000 {
+ 				<GIC_SPI 263 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&enet1_lpcg IMX_LPCG_CLK_4>,
+ 			 <&enet1_lpcg IMX_LPCG_CLK_2>,
+-			 <&enet1_lpcg IMX_LPCG_CLK_1>,
++			 <&enet1_lpcg IMX_LPCG_CLK_3>,
+ 			 <&enet1_lpcg IMX_LPCG_CLK_0>;
+ 		clock-names = "ipg", "ahb", "enet_clk_ref", "ptp";
++		assigned-clocks = <&clk IMX_SC_R_ENET_1 IMX_SC_PM_CLK_PER>,
++				  <&clk IMX_SC_R_ENET_1 IMX_SC_C_CLKDIV>;
++		assigned-clock-rates = <250000000>, <125000000>;
+ 		fsl,num-tx-queues=<3>;
+ 		fsl,num-rx-queues=<3>;
+ 		power-domains = <&pd IMX_SC_R_ENET_1>;
+@@ -152,15 +158,19 @@ conn_subsys: bus@5b000000 {
+ 		#clock-cells = <1>;
+ 		clocks = <&clk IMX_SC_R_ENET_0 IMX_SC_PM_CLK_PER>,
+ 			 <&clk IMX_SC_R_ENET_0 IMX_SC_PM_CLK_PER>,
+-			 <&conn_axi_clk>, <&conn_ipg_clk>, <&conn_ipg_clk>;
++			 <&conn_axi_clk>,
++			 <&clk IMX_SC_R_ENET_0 IMX_SC_C_TXCLK>,
++			 <&conn_ipg_clk>,
++			 <&conn_ipg_clk>;
+ 		clock-indices = <IMX_LPCG_CLK_0>, <IMX_LPCG_CLK_1>,
+-				<IMX_LPCG_CLK_2>, <IMX_LPCG_CLK_4>,
+-				<IMX_LPCG_CLK_5>;
+-		clock-output-names = "enet0_ipg_root_clk",
+-				     "enet0_tx_clk",
+-				     "enet0_ahb_clk",
+-				     "enet0_ipg_clk",
+-				     "enet0_ipg_s_clk";
++				<IMX_LPCG_CLK_2>, <IMX_LPCG_CLK_3>,
++				<IMX_LPCG_CLK_4>, <IMX_LPCG_CLK_5>;
++		clock-output-names = "enet0_lpcg_timer_clk",
++				     "enet0_lpcg_txc_sampling_clk",
++				     "enet0_lpcg_ahb_clk",
++				     "enet0_lpcg_rgmii_txc_clk",
++				     "enet0_lpcg_ipg_clk",
++				     "enet0_lpcg_ipg_s_clk";
+ 		power-domains = <&pd IMX_SC_R_ENET_0>;
+ 	};
+ 
+@@ -170,15 +180,19 @@ conn_subsys: bus@5b000000 {
+ 		#clock-cells = <1>;
+ 		clocks = <&clk IMX_SC_R_ENET_1 IMX_SC_PM_CLK_PER>,
+ 			 <&clk IMX_SC_R_ENET_1 IMX_SC_PM_CLK_PER>,
+-			 <&conn_axi_clk>, <&conn_ipg_clk>, <&conn_ipg_clk>;
++			 <&conn_axi_clk>,
++			 <&clk IMX_SC_R_ENET_1 IMX_SC_C_TXCLK>,
++			 <&conn_ipg_clk>,
++			 <&conn_ipg_clk>;
+ 		clock-indices = <IMX_LPCG_CLK_0>, <IMX_LPCG_CLK_1>,
+-				<IMX_LPCG_CLK_2>, <IMX_LPCG_CLK_4>,
+-				<IMX_LPCG_CLK_5>;
+-		clock-output-names = "enet1_ipg_root_clk",
+-				     "enet1_tx_clk",
+-				     "enet1_ahb_clk",
+-				     "enet1_ipg_clk",
+-				     "enet1_ipg_s_clk";
++				<IMX_LPCG_CLK_2>, <IMX_LPCG_CLK_3>,
++				<IMX_LPCG_CLK_4>, <IMX_LPCG_CLK_5>;
++		clock-output-names = "enet1_lpcg_timer_clk",
++				     "enet1_lpcg_txc_sampling_clk",
++				     "enet1_lpcg_ahb_clk",
++				     "enet1_lpcg_rgmii_txc_clk",
++				     "enet1_lpcg_ipg_clk",
++				     "enet1_lpcg_ipg_s_clk";
+ 		power-domains = <&pd IMX_SC_R_ENET_1>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+index c35eeaff958f5..54eaf3d6055b1 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-beacon-som.dtsi
+@@ -120,6 +120,9 @@
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+ 		rohm,reset-snvs-powered;
++		#clock-cells = <0>;
++		clocks = <&osc_32k 0>;
++		clock-output-names = "clk-32k-out";
+ 
+ 		regulators {
+ 			buck1_reg: BUCK1 {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+index 17c449e12c2e5..91df9c5350ae5 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+@@ -1383,6 +1383,14 @@
+ 			         <&src IMX8MQ_RESET_PCIE_CTRL_APPS_EN>,
+ 			         <&src IMX8MQ_RESET_PCIE_CTRL_APPS_TURNOFF>;
+ 			reset-names = "pciephy", "apps", "turnoff";
++			assigned-clocks = <&clk IMX8MQ_CLK_PCIE1_CTRL>,
++			                  <&clk IMX8MQ_CLK_PCIE1_PHY>,
++			                  <&clk IMX8MQ_CLK_PCIE1_AUX>;
++			assigned-clock-parents = <&clk IMX8MQ_SYS2_PLL_250M>,
++			                         <&clk IMX8MQ_SYS2_PLL_100M>,
++			                         <&clk IMX8MQ_SYS1_PLL_80M>;
++			assigned-clock-rates = <250000000>, <100000000>,
++			                       <10000000>;
+ 			status = "disabled";
+ 		};
+ 
+@@ -1413,6 +1421,14 @@
+ 			         <&src IMX8MQ_RESET_PCIE2_CTRL_APPS_EN>,
+ 			         <&src IMX8MQ_RESET_PCIE2_CTRL_APPS_TURNOFF>;
+ 			reset-names = "pciephy", "apps", "turnoff";
++			assigned-clocks = <&clk IMX8MQ_CLK_PCIE2_CTRL>,
++			                  <&clk IMX8MQ_CLK_PCIE2_PHY>,
++			                  <&clk IMX8MQ_CLK_PCIE2_AUX>;
++			assigned-clock-parents = <&clk IMX8MQ_SYS2_PLL_250M>,
++			                         <&clk IMX8MQ_SYS2_PLL_100M>,
++			                         <&clk IMX8MQ_SYS1_PLL_80M>;
++			assigned-clock-rates = <250000000>, <100000000>,
++			                       <10000000>;
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index 53e817c5f6f36..ce2bcddf396f8 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -109,10 +109,8 @@
+ 	};
+ 
+ 	firmware {
+-		turris-mox-rwtm {
+-			compatible = "cznic,turris-mox-rwtm";
+-			mboxes = <&rwtm 0>;
+-			status = "okay";
++		armada-3700-rwtm {
++			compatible = "marvell,armada-3700-rwtm-firmware", "cznic,turris-mox-rwtm";
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index 6ffbb099fcac7..5db81a416cd65 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -504,4 +504,12 @@
+ 			};
+ 		};
+ 	};
++
++	firmware {
++		armada-3700-rwtm {
++			compatible = "marvell,armada-3700-rwtm-firmware";
++			mboxes = <&rwtm 0>;
++			status = "okay";
++		};
++	};
+ };
+diff --git a/arch/arm64/boot/dts/marvell/cn9130-db.dts b/arch/arm64/boot/dts/marvell/cn9130-db.dts
+index 2c2af001619b0..9758609541c7b 100644
+--- a/arch/arm64/boot/dts/marvell/cn9130-db.dts
++++ b/arch/arm64/boot/dts/marvell/cn9130-db.dts
+@@ -260,7 +260,7 @@
+ 			};
+ 			partition@200000 {
+ 				label = "Linux";
+-				reg = <0x200000 0xd00000>;
++				reg = <0x200000 0xe00000>;
+ 			};
+ 			partition@1000000 {
+ 				label = "Filesystem";
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index 9449156fae391..2e40b60472833 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -2345,6 +2345,20 @@
+ 		};
+ 	};
+ 
++	pmu {
++		compatible = "arm,armv8-pmuv3";
++		interrupts = <GIC_SPI 384 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 385 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 386 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 387 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 388 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 389 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 390 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 391 IRQ_TYPE_LEVEL_HIGH>;
++		interrupt-affinity = <&cpu0_0 &cpu0_1 &cpu1_0 &cpu1_1
++				      &cpu2_0 &cpu2_1 &cpu3_0 &cpu3_1>;
++	};
++
+ 	psci {
+ 		compatible = "arm,psci-1.0";
+ 		status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-idp.dts b/arch/arm64/boot/dts/qcom/sc7180-idp.dts
+index e77a7926034a7..afe0f9c258164 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-idp.dts
++++ b/arch/arm64/boot/dts/qcom/sc7180-idp.dts
+@@ -45,7 +45,7 @@
+ 
+ /* Increase the size from 2MB to 8MB */
+ &rmtfs_mem {
+-	reg = <0x0 0x84400000 0x0 0x800000>;
++	reg = <0x0 0x94600000 0x0 0x800000>;
+ };
+ 
+ / {
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
+index 4c6e433c8226a..3eb8550da1fc5 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
+@@ -23,6 +23,7 @@ ap_h1_spi: &spi0 {};
+ 	adau7002: audio-codec-1 {
+ 		compatible = "adi,adau7002";
+ 		IOVDD-supply = <&pp1800_l15a>;
++		wakeup-delay-ms = <15>;
+ 		#sound-dai-cells = <0>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8150-hdk.dts b/arch/arm64/boot/dts/qcom/sm8150-hdk.dts
+index fb2cf3d987a18..50ee3bb97325a 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150-hdk.dts
++++ b/arch/arm64/boot/dts/qcom/sm8150-hdk.dts
+@@ -354,7 +354,11 @@
+ 	};
+ };
+ 
+-&qupv3_id_1 {
++&gmu {
++	status = "okay";
++};
++
++&gpu {
+ 	status = "okay";
+ };
+ 
+@@ -372,6 +376,10 @@
+ 	};
+ };
+ 
++&qupv3_id_1 {
++	status = "okay";
++};
++
+ &remoteproc_adsp {
+ 	status = "okay";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8150-mtp.dts b/arch/arm64/boot/dts/qcom/sm8150-mtp.dts
+index 3774f8e634164..7de54b2e497e8 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/sm8150-mtp.dts
+@@ -349,7 +349,11 @@
+ 	};
+ };
+ 
+-&qupv3_id_1 {
++&gmu {
++	status = "okay";
++};
++
++&gpu {
+ 	status = "okay";
+ };
+ 
+@@ -367,6 +371,10 @@
+ 	};
+ };
+ 
++&qupv3_id_1 {
++	status = "okay";
++};
++
+ &remoteproc_adsp {
+ 	status = "okay";
+ 	firmware-name = "qcom/sm8150/adsp.mdt";
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index 51235a9521c24..618a1e64f8080 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -1082,6 +1082,8 @@
+ 
+ 			qcom,gmu = <&gmu>;
+ 
++			status = "disabled";
++
+ 			zap-shader {
+ 				memory-region = <&gpu_mem>;
+ 			};
+@@ -1149,6 +1151,8 @@
+ 
+ 			operating-points-v2 = <&gmu_opp_table>;
+ 
++			status = "disabled";
++
+ 			gmu_opp_table: opp-table {
+ 				compatible = "operating-points-v2";
+ 
+@@ -1496,6 +1500,8 @@
+ 			qcom,smem-states = <&modem_smp2p_out 0>;
+ 			qcom,smem-state-names = "stop";
+ 
++			status = "disabled";
++
+ 			glink-edge {
+ 				interrupts = <GIC_SPI 449 IRQ_TYPE_EDGE_RISING>;
+ 				label = "modem";
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 4c0de12aaba6f..09b5523965579 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -1470,7 +1470,7 @@
+ 
+ 			status = "disabled";
+ 
+-			pcie2_lane: lanes@1c0e200 {
++			pcie2_lane: lanes@1c16200 {
+ 				reg = <0 0x1c16200 0 0x170>, /* tx0 */
+ 				      <0 0x1c16400 0 0x200>, /* rx0 */
+ 				      <0 0x1c16a00 0 0x1f0>, /* pcs */
+@@ -2370,7 +2370,7 @@
+ 		};
+ 
+ 		mdss: mdss@ae00000 {
+-			compatible = "qcom,sdm845-mdss";
++			compatible = "qcom,sm8250-mdss";
+ 			reg = <0 0x0ae00000 0 0x1000>;
+ 			reg-names = "mdss";
+ 
+@@ -2402,7 +2402,7 @@
+ 			ranges;
+ 
+ 			mdss_mdp: mdp@ae01000 {
+-				compatible = "qcom,sdm845-dpu";
++				compatible = "qcom,sm8250-dpu";
+ 				reg = <0 0x0ae01000 0 0x8f000>,
+ 				      <0 0x0aeb0000 0 0x2008>;
+ 				reg-names = "mdp", "vbif";
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index ed0b51bc03ea7..a2382eb8619ba 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -689,7 +689,7 @@
+ 			interrupt-controller;
+ 		};
+ 
+-		tsens0: thermal-sensor@c222000 {
++		tsens0: thermal-sensor@c263000 {
+ 			compatible = "qcom,sm8350-tsens", "qcom,tsens-v2";
+ 			reg = <0 0x0c263000 0 0x1ff>, /* TM */
+ 			      <0 0x0c222000 0 0x8>; /* SROT */
+@@ -700,7 +700,7 @@
+ 			#thermal-sensor-cells = <1>;
+ 		};
+ 
+-		tsens1: thermal-sensor@c223000 {
++		tsens1: thermal-sensor@c265000 {
+ 			compatible = "qcom,sm8350-tsens", "qcom,tsens-v2";
+ 			reg = <0 0x0c265000 0 0x1ff>, /* TM */
+ 			      <0 0x0c223000 0 0x8>; /* SROT */
+@@ -1176,7 +1176,7 @@
+ 			};
+ 		};
+ 
+-		dc_noc: interconnect@90e0000 {
++		dc_noc: interconnect@90c0000 {
+ 			compatible = "qcom,sm8350-dc-noc";
+ 			reg = <0 0x090c0000 0 0x4200>;
+ 			#interconnect-cells = <1>;
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+index d8046fedf9c12..e3c8b2fe143ea 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+@@ -271,12 +271,12 @@
+ &ehci0 {
+ 	dr_mode = "otg";
+ 	status = "okay";
+-	clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>;
++	clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>, <&usb2_clksel>, <&versaclock5 3>;
+ };
+ 
+ &ehci1 {
+ 	status = "okay";
+-	clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>;
++	clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>, <&usb2_clksel>, <&versaclock5 3>;
+ };
+ 
+ &hdmi0 {
+diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
+index 8d3a4d6ee8853..bd3d26b2a2bb8 100644
+--- a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi
+@@ -319,8 +319,10 @@
+ 	status = "okay";
+ };
+ 
+-&usb_extal_clk {
+-	clock-frequency = <50000000>;
++&usb2_clksel {
++	clocks = <&cpg CPG_MOD 703>, <&cpg CPG_MOD 704>,
++		  <&versaclock5 3>, <&usb3s0_clk>;
++	status = "okay";
+ };
+ 
+ &usb3s0_clk {
+diff --git a/arch/arm64/boot/dts/rockchip/px30.dtsi b/arch/arm64/boot/dts/rockchip/px30.dtsi
+index 09baa8a167ce6..2b43c3d72dd92 100644
+--- a/arch/arm64/boot/dts/rockchip/px30.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30.dtsi
+@@ -244,20 +244,20 @@
+ 			#size-cells = <0>;
+ 
+ 			/* These power domains are grouped by VD_LOGIC */
+-			pd_usb@PX30_PD_USB {
++			power-domain@PX30_PD_USB {
+ 				reg = <PX30_PD_USB>;
+ 				clocks = <&cru HCLK_HOST>,
+ 					 <&cru HCLK_OTG>,
+ 					 <&cru SCLK_OTG_ADP>;
+ 				pm_qos = <&qos_usb_host>, <&qos_usb_otg>;
+ 			};
+-			pd_sdcard@PX30_PD_SDCARD {
++			power-domain@PX30_PD_SDCARD {
+ 				reg = <PX30_PD_SDCARD>;
+ 				clocks = <&cru HCLK_SDMMC>,
+ 					 <&cru SCLK_SDMMC>;
+ 				pm_qos = <&qos_sdmmc>;
+ 			};
+-			pd_gmac@PX30_PD_GMAC {
++			power-domain@PX30_PD_GMAC {
+ 				reg = <PX30_PD_GMAC>;
+ 				clocks = <&cru ACLK_GMAC>,
+ 					 <&cru PCLK_GMAC>,
+@@ -265,7 +265,7 @@
+ 					 <&cru SCLK_GMAC_RX_TX>;
+ 				pm_qos = <&qos_gmac>;
+ 			};
+-			pd_mmc_nand@PX30_PD_MMC_NAND {
++			power-domain@PX30_PD_MMC_NAND {
+ 				reg = <PX30_PD_MMC_NAND>;
+ 				clocks =  <&cru HCLK_NANDC>,
+ 					  <&cru HCLK_EMMC>,
+@@ -278,14 +278,14 @@
+ 				pm_qos = <&qos_emmc>, <&qos_nand>,
+ 					 <&qos_sdio>, <&qos_sfc>;
+ 			};
+-			pd_vpu@PX30_PD_VPU {
++			power-domain@PX30_PD_VPU {
+ 				reg = <PX30_PD_VPU>;
+ 				clocks = <&cru ACLK_VPU>,
+ 					 <&cru HCLK_VPU>,
+ 					 <&cru SCLK_CORE_VPU>;
+ 				pm_qos = <&qos_vpu>, <&qos_vpu_r128>;
+ 			};
+-			pd_vo@PX30_PD_VO {
++			power-domain@PX30_PD_VO {
+ 				reg = <PX30_PD_VO>;
+ 				clocks = <&cru ACLK_RGA>,
+ 					 <&cru ACLK_VOPB>,
+@@ -301,7 +301,7 @@
+ 				pm_qos = <&qos_rga_rd>, <&qos_rga_wr>,
+ 					 <&qos_vop_m0>, <&qos_vop_m1>;
+ 			};
+-			pd_vi@PX30_PD_VI {
++			power-domain@PX30_PD_VI {
+ 				reg = <PX30_PD_VI>;
+ 				clocks = <&cru ACLK_CIF>,
+ 					 <&cru ACLK_ISP>,
+@@ -312,7 +312,7 @@
+ 					 <&qos_isp_wr>, <&qos_isp_m1>,
+ 					 <&qos_vip>;
+ 			};
+-			pd_gpu@PX30_PD_GPU {
++			power-domain@PX30_PD_GPU {
+ 				reg = <PX30_PD_GPU>;
+ 				clocks = <&cru SCLK_GPU>;
+ 				pm_qos = <&qos_gpu>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+index 3dddd4742c3a0..665b2e69455dd 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3308-roc-cc.dts
+@@ -84,8 +84,8 @@
+ 		regulator-min-microvolt = <1800000>;
+ 		regulator-max-microvolt = <3300000>;
+ 		gpios = <&gpio0 RK_PA7 GPIO_ACTIVE_HIGH>;
+-		states = <1800000 0x0
+-			  3300000 0x1>;
++		states = <1800000 0x0>,
++			 <3300000 0x1>;
+ 		vin-supply = <&vcc5v0_sys>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts b/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts
+index f807bc066ccb9..d5001d13e3746 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts
+@@ -76,8 +76,8 @@
+ 		regulator-settling-time-us = <5000>;
+ 		regulator-type = "voltage";
+ 		startup-delay-us = <2000>;
+-		states = <1800000 0x1
+-			  3300000 0x0>;
++		states = <1800000 0x1>,
++			 <3300000 0x0>;
+ 		vin-supply = <&vcc_io_33>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
+index a05732b59f380..a99979afd3739 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
+@@ -50,8 +50,8 @@
+ 	vcc_sdio: sdmmcio-regulator {
+ 		compatible = "regulator-gpio";
+ 		gpios = <&grf_gpio 0 GPIO_ACTIVE_HIGH>;
+-		states = <1800000 0x1
+-			  3300000 0x0>;
++		states = <1800000 0x1>,
++			 <3300000 0x0>;
+ 		regulator-name = "vcc_sdio";
+ 		regulator-type = "voltage";
+ 		regulator-min-microvolt = <1800000>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index 3ed69ecbcf3ce..d2d8b675c9e94 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -300,13 +300,13 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 
+-			pd_hevc@RK3328_PD_HEVC {
++			power-domain@RK3328_PD_HEVC {
+ 				reg = <RK3328_PD_HEVC>;
+ 			};
+-			pd_video@RK3328_PD_VIDEO {
++			power-domain@RK3328_PD_VIDEO {
+ 				reg = <RK3328_PD_VIDEO>;
+ 			};
+-			pd_vpu@RK3328_PD_VPU {
++			power-domain@RK3328_PD_VPU {
+ 				reg = <RK3328_PD_VPU>;
+ 				clocks = <&cru ACLK_VPU>, <&cru HCLK_VPU>;
+ 			};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-gru-scarlet.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-gru-scarlet.dtsi
+index beee5fbb34437..5d7a9d96d1633 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-gru-scarlet.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-gru-scarlet.dtsi
+@@ -245,7 +245,7 @@ pp1800_pcie: &pp1800_s0 {
+ };
+ 
+ &ppvar_sd_card_io {
+-	states = <1800000 0x0 3300000 0x1>;
++	states = <1800000 0x0>, <3300000 0x1>;
+ 	regulator-max-microvolt = <3300000>;
+ };
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
+index 4002742fed4c1..c1bcc8ca3769d 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
+@@ -252,8 +252,8 @@
+ 		enable-active-high;
+ 		enable-gpio = <&gpio2 2 GPIO_ACTIVE_HIGH>;
+ 		gpios = <&gpio2 28 GPIO_ACTIVE_HIGH>;
+-		states = <1800000 0x1
+-			  3000000 0x0>;
++		states = <1800000 0x1>,
++			 <3000000 0x0>;
+ 
+ 		regulator-min-microvolt = <1800000>;
+ 		regulator-max-microvolt = <3000000>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-nanopi4.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-nanopi4.dtsi
+index 16fd58c4a80f3..8c0ff6c96e037 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-nanopi4.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-nanopi4.dtsi
+@@ -510,7 +510,6 @@
+ };
+ 
+ &pcie0 {
+-	max-link-speed = <2>;
+ 	num-lanes = <2>;
+ 	vpcie0v9-supply = <&vcca0v9_s3>;
+ 	vpcie1v8-supply = <&vcca1v8_s3>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
+index 7d0a7c6977039..b28888ea9262e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dtsi
+@@ -474,7 +474,6 @@
+ 
+ &pcie0 {
+ 	ep-gpios = <&gpio4 RK_PD3 GPIO_ACTIVE_HIGH>;
+-	max-link-speed = <2>;
+ 	num-lanes = <4>;
+ 	pinctrl-0 = <&pcie_clkreqnb_cpm>;
+ 	pinctrl-names = "default";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index 634a91af8e83b..7f8081f9e30ee 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -227,7 +227,7 @@
+ 		       <&pcie_phy 2>, <&pcie_phy 3>;
+ 		phy-names = "pcie-phy-0", "pcie-phy-1",
+ 			    "pcie-phy-2", "pcie-phy-3";
+-		ranges = <0x83000000 0x0 0xfa000000 0x0 0xfa000000 0x0 0x1e00000>,
++		ranges = <0x82000000 0x0 0xfa000000 0x0 0xfa000000 0x0 0x1e00000>,
+ 			 <0x81000000 0x0 0xfbe00000 0x0 0xfbe00000 0x0 0x100000>;
+ 		resets = <&cru SRST_PCIE_CORE>, <&cru SRST_PCIE_MGMT>,
+ 			 <&cru SRST_PCIE_MGMT_STICKY>, <&cru SRST_PCIE_PIPE>,
+@@ -968,26 +968,26 @@
+ 			#size-cells = <0>;
+ 
+ 			/* These power domains are grouped by VD_CENTER */
+-			pd_iep@RK3399_PD_IEP {
++			power-domain@RK3399_PD_IEP {
+ 				reg = <RK3399_PD_IEP>;
+ 				clocks = <&cru ACLK_IEP>,
+ 					 <&cru HCLK_IEP>;
+ 				pm_qos = <&qos_iep>;
+ 			};
+-			pd_rga@RK3399_PD_RGA {
++			power-domain@RK3399_PD_RGA {
+ 				reg = <RK3399_PD_RGA>;
+ 				clocks = <&cru ACLK_RGA>,
+ 					 <&cru HCLK_RGA>;
+ 				pm_qos = <&qos_rga_r>,
+ 					 <&qos_rga_w>;
+ 			};
+-			pd_vcodec@RK3399_PD_VCODEC {
++			power-domain@RK3399_PD_VCODEC {
+ 				reg = <RK3399_PD_VCODEC>;
+ 				clocks = <&cru ACLK_VCODEC>,
+ 					 <&cru HCLK_VCODEC>;
+ 				pm_qos = <&qos_video_m0>;
+ 			};
+-			pd_vdu@RK3399_PD_VDU {
++			power-domain@RK3399_PD_VDU {
+ 				reg = <RK3399_PD_VDU>;
+ 				clocks = <&cru ACLK_VDU>,
+ 					 <&cru HCLK_VDU>;
+@@ -996,94 +996,94 @@
+ 			};
+ 
+ 			/* These power domains are grouped by VD_GPU */
+-			pd_gpu@RK3399_PD_GPU {
++			power-domain@RK3399_PD_GPU {
+ 				reg = <RK3399_PD_GPU>;
+ 				clocks = <&cru ACLK_GPU>;
+ 				pm_qos = <&qos_gpu>;
+ 			};
+ 
+ 			/* These power domains are grouped by VD_LOGIC */
+-			pd_edp@RK3399_PD_EDP {
++			power-domain@RK3399_PD_EDP {
+ 				reg = <RK3399_PD_EDP>;
+ 				clocks = <&cru PCLK_EDP_CTRL>;
+ 			};
+-			pd_emmc@RK3399_PD_EMMC {
++			power-domain@RK3399_PD_EMMC {
+ 				reg = <RK3399_PD_EMMC>;
+ 				clocks = <&cru ACLK_EMMC>;
+ 				pm_qos = <&qos_emmc>;
+ 			};
+-			pd_gmac@RK3399_PD_GMAC {
++			power-domain@RK3399_PD_GMAC {
+ 				reg = <RK3399_PD_GMAC>;
+ 				clocks = <&cru ACLK_GMAC>,
+ 					 <&cru PCLK_GMAC>;
+ 				pm_qos = <&qos_gmac>;
+ 			};
+-			pd_sd@RK3399_PD_SD {
++			power-domain@RK3399_PD_SD {
+ 				reg = <RK3399_PD_SD>;
+ 				clocks = <&cru HCLK_SDMMC>,
+ 					 <&cru SCLK_SDMMC>;
+ 				pm_qos = <&qos_sd>;
+ 			};
+-			pd_sdioaudio@RK3399_PD_SDIOAUDIO {
++			power-domain@RK3399_PD_SDIOAUDIO {
+ 				reg = <RK3399_PD_SDIOAUDIO>;
+ 				clocks = <&cru HCLK_SDIO>;
+ 				pm_qos = <&qos_sdioaudio>;
+ 			};
+-			pd_tcpc0@RK3399_PD_TCPD0 {
++			power-domain@RK3399_PD_TCPD0 {
+ 				reg = <RK3399_PD_TCPD0>;
+ 				clocks = <&cru SCLK_UPHY0_TCPDCORE>,
+ 					 <&cru SCLK_UPHY0_TCPDPHY_REF>;
+ 			};
+-			pd_tcpc1@RK3399_PD_TCPD1 {
++			power-domain@RK3399_PD_TCPD1 {
+ 				reg = <RK3399_PD_TCPD1>;
+ 				clocks = <&cru SCLK_UPHY1_TCPDCORE>,
+ 					 <&cru SCLK_UPHY1_TCPDPHY_REF>;
+ 			};
+-			pd_usb3@RK3399_PD_USB3 {
++			power-domain@RK3399_PD_USB3 {
+ 				reg = <RK3399_PD_USB3>;
+ 				clocks = <&cru ACLK_USB3>;
+ 				pm_qos = <&qos_usb_otg0>,
+ 					 <&qos_usb_otg1>;
+ 			};
+-			pd_vio@RK3399_PD_VIO {
++			power-domain@RK3399_PD_VIO {
+ 				reg = <RK3399_PD_VIO>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+ 
+-				pd_hdcp@RK3399_PD_HDCP {
++				power-domain@RK3399_PD_HDCP {
+ 					reg = <RK3399_PD_HDCP>;
+ 					clocks = <&cru ACLK_HDCP>,
+ 						 <&cru HCLK_HDCP>,
+ 						 <&cru PCLK_HDCP>;
+ 					pm_qos = <&qos_hdcp>;
+ 				};
+-				pd_isp0@RK3399_PD_ISP0 {
++				power-domain@RK3399_PD_ISP0 {
+ 					reg = <RK3399_PD_ISP0>;
+ 					clocks = <&cru ACLK_ISP0>,
+ 						 <&cru HCLK_ISP0>;
+ 					pm_qos = <&qos_isp0_m0>,
+ 						 <&qos_isp0_m1>;
+ 				};
+-				pd_isp1@RK3399_PD_ISP1 {
++				power-domain@RK3399_PD_ISP1 {
+ 					reg = <RK3399_PD_ISP1>;
+ 					clocks = <&cru ACLK_ISP1>,
+ 						 <&cru HCLK_ISP1>;
+ 					pm_qos = <&qos_isp1_m0>,
+ 						 <&qos_isp1_m1>;
+ 				};
+-				pd_vo@RK3399_PD_VO {
++				power-domain@RK3399_PD_VO {
+ 					reg = <RK3399_PD_VO>;
+ 					#address-cells = <1>;
+ 					#size-cells = <0>;
+ 
+-					pd_vopb@RK3399_PD_VOPB {
++					power-domain@RK3399_PD_VOPB {
+ 						reg = <RK3399_PD_VOPB>;
+ 						clocks = <&cru ACLK_VOP0>,
+ 							 <&cru HCLK_VOP0>;
+ 						pm_qos = <&qos_vop_big_r>,
+ 							 <&qos_vop_big_w>;
+ 					};
+-					pd_vopl@RK3399_PD_VOPL {
++					power-domain@RK3399_PD_VOPL {
+ 						reg = <RK3399_PD_VOPL>;
+ 						clocks = <&cru ACLK_VOP1>,
+ 							 <&cru HCLK_VOP1>;
+@@ -2354,7 +2354,7 @@
+ 			};
+ 		};
+ 
+-		sleep {
++		suspend {
+ 			ap_pwroff: ap-pwroff {
+ 				rockchip,pins = <1 RK_PA5 1 &pcfg_pull_none>;
+ 			};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399pro-vmarc-som.dtsi b/arch/arm64/boot/dts/rockchip/rk3399pro-vmarc-som.dtsi
+index c0074b3ed4af7..01d1a75c8b4d3 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399pro-vmarc-som.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399pro-vmarc-som.dtsi
+@@ -329,7 +329,6 @@
+ 
+ &pcie0 {
+ 	ep-gpios = <&gpio0 RK_PB4 GPIO_ACTIVE_HIGH>;
+-	max-link-speed = <2>;
+ 	num-lanes = <4>;
+ 	pinctrl-0 = <&pcie_clkreqnb_cpm>;
+ 	pinctrl-names = "default";
+diff --git a/arch/arm64/boot/dts/ti/k3-am654-base-board.dts b/arch/arm64/boot/dts/ti/k3-am654-base-board.dts
+index 1b947e2c2e74f..faa8e280cf2b9 100644
+--- a/arch/arm64/boot/dts/ti/k3-am654-base-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-am654-base-board.dts
+@@ -136,7 +136,7 @@
+ 			AM65X_WKUP_IOPAD(0x007c, PIN_INPUT, 0) /* (L5) MCU_RGMII1_RD2 */
+ 			AM65X_WKUP_IOPAD(0x0080, PIN_INPUT, 0) /* (M6) MCU_RGMII1_RD1 */
+ 			AM65X_WKUP_IOPAD(0x0084, PIN_INPUT, 0) /* (L6) MCU_RGMII1_RD0 */
+-			AM65X_WKUP_IOPAD(0x0070, PIN_INPUT, 0) /* (N1) MCU_RGMII1_TXC */
++			AM65X_WKUP_IOPAD(0x0070, PIN_OUTPUT, 0) /* (N1) MCU_RGMII1_TXC */
+ 			AM65X_WKUP_IOPAD(0x0074, PIN_INPUT, 0) /* (M1) MCU_RGMII1_RXC */
+ 		>;
+ 	};
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+index bedd01b7a32ce..d14f3c18b65fc 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+@@ -90,7 +90,7 @@
+ 			J721E_WKUP_IOPAD(0x008c, PIN_INPUT, 0) /* MCU_RGMII1_RD2 */
+ 			J721E_WKUP_IOPAD(0x0090, PIN_INPUT, 0) /* MCU_RGMII1_RD1 */
+ 			J721E_WKUP_IOPAD(0x0094, PIN_INPUT, 0) /* MCU_RGMII1_RD0 */
+-			J721E_WKUP_IOPAD(0x0080, PIN_INPUT, 0) /* MCU_RGMII1_TXC */
++			J721E_WKUP_IOPAD(0x0080, PIN_OUTPUT, 0) /* MCU_RGMII1_TXC */
+ 			J721E_WKUP_IOPAD(0x0084, PIN_INPUT, 0) /* MCU_RGMII1_RXC */
+ 		>;
+ 	};
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
+index ffccbc53f1e75..de3c3f7f2b7a7 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-common-proc-board.dts
+@@ -238,7 +238,7 @@
+ 			J721E_WKUP_IOPAD(0x007c, PIN_INPUT, 0) /* MCU_RGMII1_RD2 */
+ 			J721E_WKUP_IOPAD(0x0080, PIN_INPUT, 0) /* MCU_RGMII1_RD1 */
+ 			J721E_WKUP_IOPAD(0x0084, PIN_INPUT, 0) /* MCU_RGMII1_RD0 */
+-			J721E_WKUP_IOPAD(0x0070, PIN_INPUT, 0) /* MCU_RGMII1_TXC */
++			J721E_WKUP_IOPAD(0x0070, PIN_OUTPUT, 0) /* MCU_RGMII1_TXC */
+ 			J721E_WKUP_IOPAD(0x0074, PIN_INPUT, 0) /* MCU_RGMII1_RXC */
+ 		>;
+ 	};
+diff --git a/arch/s390/include/asm/stacktrace.h b/arch/s390/include/asm/stacktrace.h
+index 76c6034428be8..b4d936580fbf9 100644
+--- a/arch/s390/include/asm/stacktrace.h
++++ b/arch/s390/include/asm/stacktrace.h
+@@ -129,6 +129,103 @@ struct stack_frame {
+ 	r2;								\
+ })
+ 
++#define CALL_LARGS_0(...)						\
++	long dummy = 0
++#define CALL_LARGS_1(t1, a1)						\
++	long arg1  = (long)(t1)(a1)
++#define CALL_LARGS_2(t1, a1, t2, a2)					\
++	CALL_LARGS_1(t1, a1);						\
++	long arg2 = (long)(t2)(a2)
++#define CALL_LARGS_3(t1, a1, t2, a2, t3, a3)				\
++	CALL_LARGS_2(t1, a1, t2, a2);					\
++	long arg3 = (long)(t3)(a3)
++#define CALL_LARGS_4(t1, a1, t2, a2, t3, a3, t4, a4)			\
++	CALL_LARGS_3(t1, a1, t2, a2, t3, a3);				\
++	long arg4  = (long)(t4)(a4)
++#define CALL_LARGS_5(t1, a1, t2, a2, t3, a3, t4, a4, t5, a5)		\
++	CALL_LARGS_4(t1, a1, t2, a2, t3, a3, t4, a4);			\
++	long arg5 = (long)(t5)(a5)
++
++#define CALL_REGS_0							\
++	register long r2 asm("2") = dummy
++#define CALL_REGS_1							\
++	register long r2 asm("2") = arg1
++#define CALL_REGS_2							\
++	CALL_REGS_1;							\
++	register long r3 asm("3") = arg2
++#define CALL_REGS_3							\
++	CALL_REGS_2;							\
++	register long r4 asm("4") = arg3
++#define CALL_REGS_4							\
++	CALL_REGS_3;							\
++	register long r5 asm("5") = arg4
++#define CALL_REGS_5							\
++	CALL_REGS_4;							\
++	register long r6 asm("6") = arg5
++
++#define CALL_TYPECHECK_0(...)
++#define CALL_TYPECHECK_1(t, a, ...)					\
++	typecheck(t, a)
++#define CALL_TYPECHECK_2(t, a, ...)					\
++	CALL_TYPECHECK_1(__VA_ARGS__);					\
++	typecheck(t, a)
++#define CALL_TYPECHECK_3(t, a, ...)					\
++	CALL_TYPECHECK_2(__VA_ARGS__);					\
++	typecheck(t, a)
++#define CALL_TYPECHECK_4(t, a, ...)					\
++	CALL_TYPECHECK_3(__VA_ARGS__);					\
++	typecheck(t, a)
++#define CALL_TYPECHECK_5(t, a, ...)					\
++	CALL_TYPECHECK_4(__VA_ARGS__);					\
++	typecheck(t, a)
++
++#define CALL_PARM_0(...) void
++#define CALL_PARM_1(t, a, ...) t
++#define CALL_PARM_2(t, a, ...) t, CALL_PARM_1(__VA_ARGS__)
++#define CALL_PARM_3(t, a, ...) t, CALL_PARM_2(__VA_ARGS__)
++#define CALL_PARM_4(t, a, ...) t, CALL_PARM_3(__VA_ARGS__)
++#define CALL_PARM_5(t, a, ...) t, CALL_PARM_4(__VA_ARGS__)
++#define CALL_PARM_6(t, a, ...) t, CALL_PARM_5(__VA_ARGS__)
++
++/*
++ * Use call_on_stack() to call a function switching to a specified
++ * stack. Proper sign and zero extension of function arguments is
++ * done. Usage:
++ *
++ * rc = call_on_stack(nr, stack, rettype, fn, t1, a1, t2, a2, ...)
++ *
++ * - nr specifies the number of function arguments of fn.
++ * - stack specifies the stack to be used.
++ * - fn is the function to be called.
++ * - rettype is the return type of fn.
++ * - t1, a1, ... are pairs: t1 must match the type of the first
++ *   argument of fn, t2 the second, and so on; a1 is the corresponding
++ *   first argument value (not its name), and so on.
++ */
++#define call_on_stack(nr, stack, rettype, fn, ...)			\
++({									\
++	rettype (*__fn)(CALL_PARM_##nr(__VA_ARGS__)) = fn;		\
++	unsigned long frame = current_frame_address();			\
++	unsigned long __stack = stack;					\
++	unsigned long prev;						\
++	CALL_LARGS_##nr(__VA_ARGS__);					\
++	CALL_REGS_##nr;							\
++									\
++	CALL_TYPECHECK_##nr(__VA_ARGS__);				\
++	asm volatile(							\
++		"	lgr	%[_prev],15\n"				\
++		"	lg	15,%[_stack]\n"				\
++		"	stg	%[_frame],%[_bc](15)\n"			\
++		"	brasl	14,%[_fn]\n"				\
++		"	lgr	15,%[_prev]\n"				\
++		: [_prev] "=&d" (prev), CALL_FMT_##nr			\
++		: [_stack] "R" (__stack),				\
++		  [_bc] "i" (offsetof(struct stack_frame, back_chain)),	\
++		  [_frame] "d" (frame),					\
++		  [_fn] "X" (__fn) : CALL_CLOBBER_##nr);		\
++	(rettype)r2;							\
++})
++
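++/*
++ * A minimal, illustrative sketch of the usage documented above (the
++ * stack, function, and argument names here are hypothetical and not
++ * part of this patch): invoking a two-argument function on a separate
++ * stack could look like
++ *
++ *	rc = call_on_stack(2, some_stack, int, some_fn,
++ *			   int, first_arg, void *, second_arg);
++ */
++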
+ #define CALL_ON_STACK_NORETURN(fn, stack)				\
+ ({									\
+ 	asm volatile(							\
+diff --git a/arch/s390/kernel/traps.c b/arch/s390/kernel/traps.c
+index 8dd23c703718b..662f52eb76392 100644
+--- a/arch/s390/kernel/traps.c
++++ b/arch/s390/kernel/traps.c
+@@ -277,6 +277,8 @@ static void __init test_monitor_call(void)
+ {
+ 	int val = 1;
+ 
++	if (!IS_ENABLED(CONFIG_BUG))
++		return;
+ 	asm volatile(
+ 		"	mc	0,0\n"
+ 		"0:	xgr	%0,%0\n"
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 3a75a2c601c2a..1f7bb4898a9d2 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -3789,11 +3789,11 @@ static int skx_iio_set_mapping(struct intel_uncore_type *type)
+ 	/* One more for NULL. */
+ 	attrs = kcalloc((uncore_max_dies() + 1), sizeof(*attrs), GFP_KERNEL);
+ 	if (!attrs)
+-		goto err;
++		goto clear_topology;
+ 
+ 	eas = kcalloc(uncore_max_dies(), sizeof(*eas), GFP_KERNEL);
+ 	if (!eas)
+-		goto err;
++		goto clear_attrs;
+ 
+ 	for (die = 0; die < uncore_max_dies(); die++) {
+ 		sprintf(buf, "die%ld", die);
+@@ -3814,7 +3814,9 @@ err:
+ 	for (; die >= 0; die--)
+ 		kfree(eas[die].attr.attr.name);
+ 	kfree(eas);
++clear_attrs:
+ 	kfree(attrs);
++clear_topology:
+ 	kfree(type->topology);
+ clear_attr_update:
+ 	type->attr_update = NULL;
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index a3d867f221531..66e304a84deb0 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -576,6 +576,9 @@ static void bpf_tail_call_direct_fixup(struct bpf_prog *prog)
+ 
+ 	for (i = 0; i < prog->aux->size_poke_tab; i++) {
+ 		poke = &prog->aux->poke_tab[i];
++		if (poke->aux && poke->aux != prog->aux)
++			continue;
++
+ 		WARN_ON_ONCE(READ_ONCE(poke->tailcall_target_stable));
+ 
+ 		if (poke->reason != BPF_POKE_REASON_TAIL_CALL)
+diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
+index 20d9bddbb985b..394e6e1e96860 100644
+--- a/drivers/dma-buf/sync_file.c
++++ b/drivers/dma-buf/sync_file.c
+@@ -211,8 +211,8 @@ static struct sync_file *sync_file_merge(const char *name, struct sync_file *a,
+ 					 struct sync_file *b)
+ {
+ 	struct sync_file *sync_file;
+-	struct dma_fence **fences, **nfences, **a_fences, **b_fences;
+-	int i, i_a, i_b, num_fences, a_num_fences, b_num_fences;
++	struct dma_fence **fences = NULL, **nfences, **a_fences, **b_fences;
++	int i = 0, i_a, i_b, num_fences, a_num_fences, b_num_fences;
+ 
+ 	sync_file = sync_file_alloc();
+ 	if (!sync_file)
+@@ -236,7 +236,7 @@ static struct sync_file *sync_file_merge(const char *name, struct sync_file *a,
+ 	 * If a sync_file can only be created with sync_file_merge
+ 	 * and sync_file_create, this is a reasonable assumption.
+ 	 */
+-	for (i = i_a = i_b = 0; i_a < a_num_fences && i_b < b_num_fences; ) {
++	for (i_a = i_b = 0; i_a < a_num_fences && i_b < b_num_fences; ) {
+ 		struct dma_fence *pt_a = a_fences[i_a];
+ 		struct dma_fence *pt_b = b_fences[i_b];
+ 
+@@ -277,15 +277,16 @@ static struct sync_file *sync_file_merge(const char *name, struct sync_file *a,
+ 		fences = nfences;
+ 	}
+ 
+-	if (sync_file_set_fence(sync_file, fences, i) < 0) {
+-		kfree(fences);
++	if (sync_file_set_fence(sync_file, fences, i) < 0)
+ 		goto err;
+-	}
+ 
+ 	strlcpy(sync_file->user_name, name, sizeof(sync_file->user_name));
+ 	return sync_file;
+ 
+ err:
++	while (i)
++		dma_fence_put(fences[--i]);
++	kfree(fences);
+ 	fput(sync_file->file);
+ 	return NULL;
+ 
+diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
+index db0ea2d2d75a2..a9c613c32282b 100644
+--- a/drivers/firmware/Kconfig
++++ b/drivers/firmware/Kconfig
+@@ -9,7 +9,7 @@ menu "Firmware Drivers"
+ config ARM_SCMI_PROTOCOL
+ 	tristate "ARM System Control and Management Interface (SCMI) Message Protocol"
+ 	depends on ARM || ARM64 || COMPILE_TEST
+-	depends on MAILBOX
++	depends on MAILBOX || HAVE_ARM_SMCCC_DISCOVERY
+ 	help
+ 	  ARM System Control and Management Interface (SCMI) protocol is a
+ 	  set of operating system-independent software interfaces that are
+diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
+index 228bf4a71d237..8685619d38f96 100644
+--- a/drivers/firmware/arm_scmi/common.h
++++ b/drivers/firmware/arm_scmi/common.h
+@@ -331,7 +331,7 @@ struct scmi_desc {
+ };
+ 
+ extern const struct scmi_desc scmi_mailbox_desc;
+-#ifdef CONFIG_HAVE_ARM_SMCCC
++#ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
+ extern const struct scmi_desc scmi_smc_desc;
+ #endif
+ 
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index c2983ed534949..74986bf96656f 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -1575,7 +1575,9 @@ ATTRIBUTE_GROUPS(versions);
+ 
+ /* Each compatible listed below must have descriptor associated with it */
+ static const struct of_device_id scmi_of_match[] = {
++#ifdef CONFIG_MAILBOX
+ 	{ .compatible = "arm,scmi", .data = &scmi_mailbox_desc },
++#endif
+ #ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
+ 	{ .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
+ #endif
+diff --git a/drivers/firmware/arm_scmi/sensors.c b/drivers/firmware/arm_scmi/sensors.c
+index 2c88aa2215597..308471586381f 100644
+--- a/drivers/firmware/arm_scmi/sensors.c
++++ b/drivers/firmware/arm_scmi/sensors.c
+@@ -166,7 +166,8 @@ struct scmi_msg_sensor_reading_get {
+ 
+ struct scmi_resp_sensor_reading_complete {
+ 	__le32 id;
+-	__le64 readings;
++	__le32 readings_low;
++	__le32 readings_high;
+ };
+ 
+ struct scmi_sensor_reading_resp {
+@@ -717,7 +718,8 @@ static int scmi_sensor_reading_get(const struct scmi_protocol_handle *ph,
+ 
+ 			resp = t->rx.buf;
+ 			if (le32_to_cpu(resp->id) == sensor_id)
+-				*value = get_unaligned_le64(&resp->readings);
++				*value =
++					get_unaligned_le64(&resp->readings_low);
+ 			else
+ 				ret = -EPROTO;
+ 		}
+diff --git a/drivers/firmware/tegra/Makefile b/drivers/firmware/tegra/Makefile
+index 49c87e00fafb3..620cf3fdd6074 100644
+--- a/drivers/firmware/tegra/Makefile
++++ b/drivers/firmware/tegra/Makefile
+@@ -3,6 +3,7 @@ tegra-bpmp-y			= bpmp.o
+ tegra-bpmp-$(CONFIG_ARCH_TEGRA_210_SOC)	+= bpmp-tegra210.o
+ tegra-bpmp-$(CONFIG_ARCH_TEGRA_186_SOC)	+= bpmp-tegra186.o
+ tegra-bpmp-$(CONFIG_ARCH_TEGRA_194_SOC)	+= bpmp-tegra186.o
++tegra-bpmp-$(CONFIG_ARCH_TEGRA_234_SOC)	+= bpmp-tegra186.o
+ tegra-bpmp-$(CONFIG_DEBUG_FS)	+= bpmp-debugfs.o
+ obj-$(CONFIG_TEGRA_BPMP)	+= tegra-bpmp.o
+ obj-$(CONFIG_TEGRA_IVC)		+= ivc.o
+diff --git a/drivers/firmware/tegra/bpmp-private.h b/drivers/firmware/tegra/bpmp-private.h
+index 54d560c48398e..182bfe3965161 100644
+--- a/drivers/firmware/tegra/bpmp-private.h
++++ b/drivers/firmware/tegra/bpmp-private.h
+@@ -24,7 +24,8 @@ struct tegra_bpmp_ops {
+ };
+ 
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) || \
+-    IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC)
++    IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC) || \
++    IS_ENABLED(CONFIG_ARCH_TEGRA_234_SOC)
+ extern const struct tegra_bpmp_ops tegra186_bpmp_ops;
+ #endif
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+diff --git a/drivers/firmware/tegra/bpmp.c b/drivers/firmware/tegra/bpmp.c
+index 0742a90cb844e..5654c5e9862b1 100644
+--- a/drivers/firmware/tegra/bpmp.c
++++ b/drivers/firmware/tegra/bpmp.c
+@@ -809,7 +809,8 @@ static const struct dev_pm_ops tegra_bpmp_pm_ops = {
+ };
+ 
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) || \
+-    IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC)
++    IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC) || \
++    IS_ENABLED(CONFIG_ARCH_TEGRA_234_SOC)
+ static const struct tegra_bpmp_soc tegra186_soc = {
+ 	.channels = {
+ 		.cpu_tx = {
+diff --git a/drivers/firmware/turris-mox-rwtm.c b/drivers/firmware/turris-mox-rwtm.c
+index 1cf4f10874923..c2d34dc8ba462 100644
+--- a/drivers/firmware/turris-mox-rwtm.c
++++ b/drivers/firmware/turris-mox-rwtm.c
+@@ -569,6 +569,7 @@ static int turris_mox_rwtm_remove(struct platform_device *pdev)
+ 
+ static const struct of_device_id turris_mox_rwtm_match[] = {
+ 	{ .compatible = "cznic,turris-mox-rwtm", },
++	{ .compatible = "marvell,armada-3700-rwtm-firmware", },
+ 	{ },
+ };
+ 
+diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35510.c b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+index ef70140c5b09d..873cbd38e6d3a 100644
+--- a/drivers/gpu/drm/panel/panel-novatek-nt35510.c
++++ b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+@@ -706,9 +706,7 @@ static int nt35510_power_on(struct nt35510 *nt)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = nt35510_read_id(nt);
+-	if (ret)
+-		return ret;
++	nt35510_read_id(nt);
+ 
+ 	/* Set up stuff in  manufacturer control, page 1 */
+ 	ret = nt35510_send_long(nt, dsi, MCS_CMD_MAUCCTR,
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index 1f6ba42218173..eeb49b5d90efc 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -1448,7 +1448,6 @@ static int svc_i3c_master_remove(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	free_irq(master->irq, master);
+ 	clk_disable_unprepare(master->pclk);
+ 	clk_disable_unprepare(master->fclk);
+ 	clk_disable_unprepare(master->sclk);
+diff --git a/drivers/memory/tegra/tegra124-emc.c b/drivers/memory/tegra/tegra124-emc.c
+index 5699d909abc22..a21ca8e0841aa 100644
+--- a/drivers/memory/tegra/tegra124-emc.c
++++ b/drivers/memory/tegra/tegra124-emc.c
+@@ -272,8 +272,8 @@
+ #define EMC_PUTERM_ADJ				0x574
+ 
+ #define DRAM_DEV_SEL_ALL			0
+-#define DRAM_DEV_SEL_0				(2 << 30)
+-#define DRAM_DEV_SEL_1				(1 << 30)
++#define DRAM_DEV_SEL_0				BIT(31)
++#define DRAM_DEV_SEL_1				BIT(30)
+ 
+ #define EMC_CFG_POWER_FEATURES_MASK		\
+ 	(EMC_CFG_DYN_SREF | EMC_CFG_DRAM_ACPD | EMC_CFG_DRAM_CLKSTOP_SR | \
+diff --git a/drivers/memory/tegra/tegra30-emc.c b/drivers/memory/tegra/tegra30-emc.c
+index 829f6d673c961..a2f2738ccb942 100644
+--- a/drivers/memory/tegra/tegra30-emc.c
++++ b/drivers/memory/tegra/tegra30-emc.c
+@@ -150,8 +150,8 @@
+ #define EMC_SELF_REF_CMD_ENABLED		BIT(0)
+ 
+ #define DRAM_DEV_SEL_ALL			(0 << 30)
+-#define DRAM_DEV_SEL_0				(2 << 30)
+-#define DRAM_DEV_SEL_1				(1 << 30)
++#define DRAM_DEV_SEL_0				BIT(31)
++#define DRAM_DEV_SEL_1				BIT(30)
+ #define DRAM_BROADCAST(num) \
+ 	((num) > 1 ? DRAM_DEV_SEL_ALL : DRAM_DEV_SEL_0)
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 961fa6b75cad8..beb41572d04ea 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3583,6 +3583,7 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
+ 	.port_set_speed_duplex = mv88e6341_port_set_speed_duplex,
+ 	.port_max_speed_mode = mv88e6341_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6095_port_tag_remap,
++	.port_set_policy = mv88e6352_port_set_policy,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_ucast_flood = mv88e6352_port_set_ucast_flood,
+ 	.port_set_mcast_flood = mv88e6352_port_set_mcast_flood,
+@@ -3596,7 +3597,7 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
+ 	.port_set_cmode = mv88e6341_port_set_cmode,
+ 	.port_setup_message_port = mv88e6xxx_setup_message_port,
+ 	.stats_snapshot = mv88e6390_g1_stats_snapshot,
+-	.stats_set_histogram = mv88e6095_g1_stats_set_histogram,
++	.stats_set_histogram = mv88e6390_g1_stats_set_histogram,
+ 	.stats_get_sset_count = mv88e6320_stats_get_sset_count,
+ 	.stats_get_strings = mv88e6320_stats_get_strings,
+ 	.stats_get_stats = mv88e6390_stats_get_stats,
+@@ -3606,6 +3607,9 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
+ 	.mgmt_rsvd2cpu =  mv88e6390_g1_mgmt_rsvd2cpu,
+ 	.pot_clear = mv88e6xxx_g2_pot_clear,
+ 	.reset = mv88e6352_g1_reset,
++	.rmu_disable = mv88e6390_g1_rmu_disable,
++	.atu_get_hash = mv88e6165_g1_atu_get_hash,
++	.atu_set_hash = mv88e6165_g1_atu_set_hash,
+ 	.vtu_getnext = mv88e6352_g1_vtu_getnext,
+ 	.vtu_loadpurge = mv88e6352_g1_vtu_loadpurge,
+ 	.serdes_power = mv88e6390_serdes_power,
+@@ -3619,6 +3623,11 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
+ 	.serdes_irq_enable = mv88e6390_serdes_irq_enable,
+ 	.serdes_irq_status = mv88e6390_serdes_irq_status,
+ 	.gpio_ops = &mv88e6352_gpio_ops,
++	.serdes_get_sset_count = mv88e6390_serdes_get_sset_count,
++	.serdes_get_strings = mv88e6390_serdes_get_strings,
++	.serdes_get_stats = mv88e6390_serdes_get_stats,
++	.serdes_get_regs_len = mv88e6390_serdes_get_regs_len,
++	.serdes_get_regs = mv88e6390_serdes_get_regs,
+ 	.phylink_validate = mv88e6341_phylink_validate,
+ };
+ 
+@@ -4383,6 +4392,7 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
+ 	.port_set_speed_duplex = mv88e6341_port_set_speed_duplex,
+ 	.port_max_speed_mode = mv88e6341_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6095_port_tag_remap,
++	.port_set_policy = mv88e6352_port_set_policy,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_ucast_flood = mv88e6352_port_set_ucast_flood,
+ 	.port_set_mcast_flood = mv88e6352_port_set_mcast_flood,
+@@ -4396,7 +4406,7 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
+ 	.port_set_cmode = mv88e6341_port_set_cmode,
+ 	.port_setup_message_port = mv88e6xxx_setup_message_port,
+ 	.stats_snapshot = mv88e6390_g1_stats_snapshot,
+-	.stats_set_histogram = mv88e6095_g1_stats_set_histogram,
++	.stats_set_histogram = mv88e6390_g1_stats_set_histogram,
+ 	.stats_get_sset_count = mv88e6320_stats_get_sset_count,
+ 	.stats_get_strings = mv88e6320_stats_get_strings,
+ 	.stats_get_stats = mv88e6390_stats_get_stats,
+@@ -4406,6 +4416,9 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
+ 	.mgmt_rsvd2cpu =  mv88e6390_g1_mgmt_rsvd2cpu,
+ 	.pot_clear = mv88e6xxx_g2_pot_clear,
+ 	.reset = mv88e6352_g1_reset,
++	.rmu_disable = mv88e6390_g1_rmu_disable,
++	.atu_get_hash = mv88e6165_g1_atu_get_hash,
++	.atu_set_hash = mv88e6165_g1_atu_set_hash,
+ 	.vtu_getnext = mv88e6352_g1_vtu_getnext,
+ 	.vtu_loadpurge = mv88e6352_g1_vtu_loadpurge,
+ 	.serdes_power = mv88e6390_serdes_power,
+@@ -4421,6 +4434,11 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
+ 	.gpio_ops = &mv88e6352_gpio_ops,
+ 	.avb_ops = &mv88e6390_avb_ops,
+ 	.ptp_ops = &mv88e6352_ptp_ops,
++	.serdes_get_sset_count = mv88e6390_serdes_get_sset_count,
++	.serdes_get_strings = mv88e6390_serdes_get_strings,
++	.serdes_get_stats = mv88e6390_serdes_get_stats,
++	.serdes_get_regs_len = mv88e6390_serdes_get_regs_len,
++	.serdes_get_regs = mv88e6390_serdes_get_regs,
+ 	.phylink_validate = mv88e6341_phylink_validate,
+ };
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/serdes.c b/drivers/net/dsa/mv88e6xxx/serdes.c
+index e4fbef81bc52d..b1d46dd8eaabc 100644
+--- a/drivers/net/dsa/mv88e6xxx/serdes.c
++++ b/drivers/net/dsa/mv88e6xxx/serdes.c
+@@ -722,7 +722,7 @@ static struct mv88e6390_serdes_hw_stat mv88e6390_serdes_hw_stats[] = {
+ 
+ int mv88e6390_serdes_get_sset_count(struct mv88e6xxx_chip *chip, int port)
+ {
+-	if (mv88e6390_serdes_get_lane(chip, port) < 0)
++	if (mv88e6xxx_serdes_get_lane(chip, port) < 0)
+ 		return 0;
+ 
+ 	return ARRAY_SIZE(mv88e6390_serdes_hw_stats);
+@@ -734,7 +734,7 @@ int mv88e6390_serdes_get_strings(struct mv88e6xxx_chip *chip,
+ 	struct mv88e6390_serdes_hw_stat *stat;
+ 	int i;
+ 
+-	if (mv88e6390_serdes_get_lane(chip, port) < 0)
++	if (mv88e6xxx_serdes_get_lane(chip, port) < 0)
+ 		return 0;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(mv88e6390_serdes_hw_stats); i++) {
+@@ -770,7 +770,7 @@ int mv88e6390_serdes_get_stats(struct mv88e6xxx_chip *chip, int port,
+ 	int lane;
+ 	int i;
+ 
+-	lane = mv88e6390_serdes_get_lane(chip, port);
++	lane = mv88e6xxx_serdes_get_lane(chip, port);
+ 	if (lane < 0)
+ 		return 0;
+ 
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 41f7f078cd27c..db74241935ab4 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -1640,7 +1640,8 @@ static void bcmgenet_power_up(struct bcmgenet_priv *priv,
+ 
+ 	switch (mode) {
+ 	case GENET_POWER_PASSIVE:
+-		reg &= ~(EXT_PWR_DOWN_DLL | EXT_PWR_DOWN_BIAS);
++		reg &= ~(EXT_PWR_DOWN_DLL | EXT_PWR_DOWN_BIAS |
++			 EXT_ENERGY_DET_MASK);
+ 		if (GENET_IS_V5(priv)) {
+ 			reg &= ~(EXT_PWR_DOWN_PHY_EN |
+ 				 EXT_PWR_DOWN_PHY_RD |
+@@ -3237,15 +3238,21 @@ static void bcmgenet_get_hw_addr(struct bcmgenet_priv *priv,
+ /* Returns a reusable dma control register value */
+ static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
+ {
++	unsigned int i;
+ 	u32 reg;
+ 	u32 dma_ctrl;
+ 
+ 	/* disable DMA */
+ 	dma_ctrl = 1 << (DESC_INDEX + DMA_RING_BUF_EN_SHIFT) | DMA_EN;
++	for (i = 0; i < priv->hw_params->tx_queues; i++)
++		dma_ctrl |= (1 << (i + DMA_RING_BUF_EN_SHIFT));
+ 	reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
+ 	reg &= ~dma_ctrl;
+ 	bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
+ 
++	dma_ctrl = 1 << (DESC_INDEX + DMA_RING_BUF_EN_SHIFT) | DMA_EN;
++	for (i = 0; i < priv->hw_params->rx_queues; i++)
++		dma_ctrl |= (1 << (i + DMA_RING_BUF_EN_SHIFT));
+ 	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
+ 	reg &= ~dma_ctrl;
+ 	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
+@@ -3292,7 +3299,6 @@ static int bcmgenet_open(struct net_device *dev)
+ {
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 	unsigned long dma_ctrl;
+-	u32 reg;
+ 	int ret;
+ 
+ 	netif_dbg(priv, ifup, dev, "bcmgenet_open\n");
+@@ -3318,12 +3324,6 @@ static int bcmgenet_open(struct net_device *dev)
+ 
+ 	bcmgenet_set_hw_addr(priv, dev->dev_addr);
+ 
+-	if (priv->internal_phy) {
+-		reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+-		reg |= EXT_ENERGY_DET_MASK;
+-		bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+-	}
+-
+ 	/* Disable RX/TX DMA and flush TX queues */
+ 	dma_ctrl = bcmgenet_dma_disable(priv);
+ 
+@@ -4139,7 +4139,6 @@ static int bcmgenet_resume(struct device *d)
+ 	struct bcmgenet_priv *priv = netdev_priv(dev);
+ 	struct bcmgenet_rxnfc_rule *rule;
+ 	unsigned long dma_ctrl;
+-	u32 reg;
+ 	int ret;
+ 
+ 	if (!netif_running(dev))
+@@ -4176,12 +4175,6 @@ static int bcmgenet_resume(struct device *d)
+ 		if (rule->state != BCMGENET_RXNFC_STATE_UNUSED)
+ 			bcmgenet_hfb_create_rxnfc_filter(priv, rule);
+ 
+-	if (priv->internal_phy) {
+-		reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+-		reg |= EXT_ENERGY_DET_MASK;
+-		bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+-	}
+-
+ 	/* Disable RX/TX DMA and flush TX queues */
+ 	dma_ctrl = bcmgenet_dma_disable(priv);
+ 
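
The bcmgenet hunk above fixes bcmgenet_dma_disable() so the mask it builds covers every TX/RX ring, not just the default descriptor ring, before it is cleared from DMA_CTRL. A minimal standalone sketch of the mask construction; the bit positions here are placeholders, not the real bcmgenet.h values:

#include <stdint.h>
#include <stdio.h>

#define DMA_EN                (1u << 0)  /* placeholder bit layout; the real */
#define DMA_RING_BUF_EN_SHIFT 1          /* values live in the GENET headers */
#define DESC_INDEX            16

int main(void)
{
	unsigned int tx_queues = 4;  /* stands in for priv->hw_params->tx_queues */
	uint32_t dma_ctrl = (1u << (DESC_INDEX + DMA_RING_BUF_EN_SHIFT)) | DMA_EN;
	unsigned int i;

	/* fold in every hardware queue's ring-enable bit, so that
	 * reg &= ~dma_ctrl quiesces all rings, not just the default one */
	for (i = 0; i < tx_queues; i++)
		dma_ctrl |= 1u << (i + DMA_RING_BUF_EN_SHIFT);

	printf("bits cleared from DMA_CTRL: 0x%08x\n", dma_ctrl);
	return 0;
}
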
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+index facde824bcaab..e31a5a397f114 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+@@ -186,12 +186,6 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv,
+ 	reg |= CMD_RX_EN;
+ 	bcmgenet_umac_writel(priv, reg, UMAC_CMD);
+ 
+-	if (priv->hw_params->flags & GENET_HAS_EXT) {
+-		reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+-		reg &= ~EXT_ENERGY_DET_MASK;
+-		bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+-	}
+-
+ 	reg = UMAC_IRQ_MPD_R;
+ 	if (hfb_enable)
+ 		reg |=  UMAC_IRQ_HFB_SM | UMAC_IRQ_HFB_MM;
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 7d5cd9bc6c99d..6186230141800 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -2303,19 +2303,19 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
+ 		skb_frag_off_set(frag, pp->rx_offset_correction);
+ 		skb_frag_size_set(frag, data_len);
+ 		__skb_frag_set_page(frag, page);
+-
+-		/* last fragment */
+-		if (len == *size) {
+-			struct skb_shared_info *sinfo;
+-
+-			sinfo = xdp_get_shared_info_from_buff(xdp);
+-			sinfo->nr_frags = xdp_sinfo->nr_frags;
+-			memcpy(sinfo->frags, xdp_sinfo->frags,
+-			       sinfo->nr_frags * sizeof(skb_frag_t));
+-		}
+ 	} else {
+ 		page_pool_put_full_page(rxq->page_pool, page, true);
+ 	}
++
++	/* last fragment */
++	if (len == *size) {
++		struct skb_shared_info *sinfo;
++
++		sinfo = xdp_get_shared_info_from_buff(xdp);
++		sinfo->nr_frags = xdp_sinfo->nr_frags;
++		memcpy(sinfo->frags, xdp_sinfo->frags,
++		       sinfo->nr_frags * sizeof(skb_frag_t));
++	}
+ 	*size -= len;
+ }
+ 
+diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
+index 5249b64f4fc54..49def6934cad1 100644
+--- a/drivers/net/ethernet/moxa/moxart_ether.c
++++ b/drivers/net/ethernet/moxa/moxart_ether.c
+@@ -540,10 +540,8 @@ static int moxart_mac_probe(struct platform_device *pdev)
+ 	SET_NETDEV_DEV(ndev, &pdev->dev);
+ 
+ 	ret = register_netdev(ndev);
+-	if (ret) {
+-		free_netdev(ndev);
++	if (ret)
+ 		goto init_fail;
+-	}
+ 
+ 	netdev_dbg(ndev, "%s: IRQ=%d address=%pM\n",
+ 		   __func__, ndev->irq, ndev->dev_addr);
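
The moxart change above is a double-free fix: the old error path called free_netdev() and then jumped to init_fail, which frees the same netdev again. A minimal userspace sketch of the pattern the fix restores, where fail sites only jump and the cleanup label owns the release (names are illustrative):

#include <stdlib.h>

struct ndev { int irq; };

static int register_ndev(struct ndev *n) { (void)n; return -1; /* simulate failure */ }

static int probe(void)
{
	struct ndev *ndev = calloc(1, sizeof(*ndev));
	int ret;

	if (!ndev)
		return -1;

	ret = register_ndev(ndev);
	if (ret)
		goto init_fail;   /* no free here: init_fail already frees */

	return 0;

init_fail:
	free(ndev);               /* the one and only release */
	return ret;
}

int main(void)
{
	return probe() ? 1 : 0;
}
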
+diff --git a/drivers/net/ethernet/qualcomm/emac/emac.c b/drivers/net/ethernet/qualcomm/emac/emac.c
+index 8543bf3c34840..ad655f0a4965c 100644
+--- a/drivers/net/ethernet/qualcomm/emac/emac.c
++++ b/drivers/net/ethernet/qualcomm/emac/emac.c
+@@ -735,12 +735,13 @@ static int emac_remove(struct platform_device *pdev)
+ 
+ 	put_device(&adpt->phydev->mdio.dev);
+ 	mdiobus_unregister(adpt->mii_bus);
+-	free_netdev(netdev);
+ 
+ 	if (adpt->phy.digital)
+ 		iounmap(adpt->phy.digital);
+ 	iounmap(adpt->phy.base);
+ 
++	free_netdev(netdev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/ti/tlan.c b/drivers/net/ethernet/ti/tlan.c
+index 0b2ce4bdc2c3d..e0cb713193ea4 100644
+--- a/drivers/net/ethernet/ti/tlan.c
++++ b/drivers/net/ethernet/ti/tlan.c
+@@ -313,9 +313,8 @@ static void tlan_remove_one(struct pci_dev *pdev)
+ 	pci_release_regions(pdev);
+ #endif
+ 
+-	free_netdev(dev);
+-
+ 	cancel_work_sync(&priv->tlan_tqueue);
++	free_netdev(dev);
+ }
+ 
+ static void tlan_start(struct net_device *dev)
+diff --git a/drivers/net/fddi/defza.c b/drivers/net/fddi/defza.c
+index 14f07050b6b1b..0de2c4552f5eb 100644
+--- a/drivers/net/fddi/defza.c
++++ b/drivers/net/fddi/defza.c
+@@ -1504,9 +1504,8 @@ err_out_resource:
+ 	release_mem_region(start, len);
+ 
+ err_out_kfree:
+-	free_netdev(dev);
+-
+ 	pr_err("%s: initialization failure, aborting!\n", fp->name);
++	free_netdev(dev);
+ 	return ret;
+ }
+ 
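
The emac, tlan, and defza hunks all make the same ordering fix: free_netdev() releases the whole allocation, including the netdev_priv() area, so any remaining work that touches priv data (unmapping registers whose addresses live in priv, cancelling queued work, logging fp->name) must happen before the free, not after. A standalone analogue of why the order matters, under the assumption that priv is carved out of the same block as the netdev:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct netdev_like {
	char name[16];
	char priv[64];  /* netdev_priv() data shares the netdev allocation */
};

int main(void)
{
	struct netdev_like *dev = calloc(1, sizeof(*dev));

	if (!dev)
		return 1;
	strcpy(dev->name, "tlan0");

	/* buggy order: free(dev) first, then cancel work or log via
	 * dev->priv / dev->name, which is a use-after-free because priv
	 * lives inside the freed block.
	 *
	 * fixed order: finish every use, then free last. */
	printf("removing %s\n", dev->name);  /* last access to the allocation */
	free(dev);                           /* free_netdev() analogue */
	return 0;
}
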
+diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
+index 3811f1bde84e7..b80ed2ffd45eb 100644
+--- a/drivers/net/netdevsim/ipsec.c
++++ b/drivers/net/netdevsim/ipsec.c
+@@ -85,7 +85,7 @@ static int nsim_ipsec_parse_proto_keys(struct xfrm_state *xs,
+ 				       u32 *mykey, u32 *mysalt)
+ {
+ 	const char aes_gcm_name[] = "rfc4106(gcm(aes))";
+-	struct net_device *dev = xs->xso.dev;
++	struct net_device *dev = xs->xso.real_dev;
+ 	unsigned char *key_data;
+ 	char *alg_name = NULL;
+ 	int key_len;
+@@ -134,7 +134,7 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs)
+ 	u16 sa_idx;
+ 	int ret;
+ 
+-	dev = xs->xso.dev;
++	dev = xs->xso.real_dev;
+ 	ns = netdev_priv(dev);
+ 	ipsec = &ns->ipsec;
+ 
+@@ -194,7 +194,7 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs)
+ 
+ static void nsim_ipsec_del_sa(struct xfrm_state *xs)
+ {
+-	struct netdevsim *ns = netdev_priv(xs->xso.dev);
++	struct netdevsim *ns = netdev_priv(xs->xso.real_dev);
+ 	struct nsim_ipsec *ipsec = &ns->ipsec;
+ 	u16 sa_idx;
+ 
+@@ -211,7 +211,7 @@ static void nsim_ipsec_del_sa(struct xfrm_state *xs)
+ 
+ static bool nsim_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *xs)
+ {
+-	struct netdevsim *ns = netdev_priv(xs->xso.dev);
++	struct netdevsim *ns = netdev_priv(xs->xso.real_dev);
+ 	struct nsim_ipsec *ipsec = &ns->ipsec;
+ 
+ 	ipsec->ok++;
+diff --git a/drivers/net/vmxnet3/vmxnet3_ethtool.c b/drivers/net/vmxnet3/vmxnet3_ethtool.c
+index c0bd9cbc43b1d..1b483cf2b1ca2 100644
+--- a/drivers/net/vmxnet3/vmxnet3_ethtool.c
++++ b/drivers/net/vmxnet3/vmxnet3_ethtool.c
+@@ -1,7 +1,7 @@
+ /*
+  * Linux driver for VMware's vmxnet3 ethernet NIC.
+  *
+- * Copyright (C) 2008-2020, VMware, Inc. All Rights Reserved.
++ * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved.
+  *
+  * This program is free software; you can redistribute it and/or modify it
+  * under the terms of the GNU General Public License as published by the
+@@ -26,6 +26,10 @@
+ 
+ 
+ #include "vmxnet3_int.h"
++#include <net/vxlan.h>
++#include <net/geneve.h>
++
++#define VXLAN_UDP_PORT 8472
+ 
+ struct vmxnet3_stat_desc {
+ 	char desc[ETH_GSTRING_LEN];
+@@ -262,6 +266,8 @@ netdev_features_t vmxnet3_features_check(struct sk_buff *skb,
+ 	if (VMXNET3_VERSION_GE_4(adapter) &&
+ 	    skb->encapsulation && skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		u8 l4_proto = 0;
++		u16 port;
++		struct udphdr *udph;
+ 
+ 		switch (vlan_get_protocol(skb)) {
+ 		case htons(ETH_P_IP):
+@@ -274,8 +280,20 @@ netdev_features_t vmxnet3_features_check(struct sk_buff *skb,
+ 			return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
+ 		}
+ 
+-		if (l4_proto != IPPROTO_UDP)
++		switch (l4_proto) {
++		case IPPROTO_UDP:
++			udph = udp_hdr(skb);
++			port = be16_to_cpu(udph->dest);
++			/* Check if offloaded port is supported */
++			if (port != GENEVE_UDP_PORT &&
++			    port != IANA_VXLAN_UDP_PORT &&
++			    port != VXLAN_UDP_PORT) {
++				return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
++			}
++			break;
++		default:
+ 			return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
++		}
+ 	}
+ 	return features;
+ }
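
The vmxnet3 hunk narrows tunnel offload to encapsulations the device can actually parse: UDP only, and only on the Geneve and VXLAN destination ports (8472 is the legacy Linux VXLAN default the patch adds alongside IANA's 4789). A hedged standalone sketch of the gate; the function shape is illustrative, only the port constants mirror the patch:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define GENEVE_UDP_PORT     6081
#define IANA_VXLAN_UDP_PORT 4789
#define VXLAN_UDP_PORT      8472

static bool tunnel_offload_ok(uint8_t l4_proto, uint16_t dest_port)
{
	if (l4_proto != 17 /* IPPROTO_UDP */)
		return false;  /* only UDP encapsulations are offloaded */

	return dest_port == GENEVE_UDP_PORT ||
	       dest_port == IANA_VXLAN_UDP_PORT ||
	       dest_port == VXLAN_UDP_PORT;
}

int main(void)
{
	printf("%d %d\n", tunnel_offload_ok(17, 4789),   /* 1: known VXLAN port */
			  tunnel_offload_ok(17, 4242));  /* 0: fall back to sw csum */
	return 0;
}
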
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index ee6cf189103f7..47843b055959b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -921,7 +921,7 @@ static int mt7921_load_firmware(struct mt7921_dev *dev)
+ 	ret = mt76_get_field(dev, MT_CONN_ON_MISC, MT_TOP_MISC2_FW_N9_RDY);
+ 	if (ret) {
+ 		dev_dbg(dev->mt76.dev, "Firmware is already download\n");
+-		return -EIO;
++		goto fw_loaded;
+ 	}
+ 
+ 	ret = mt7921_load_patch(dev);
+@@ -939,6 +939,7 @@ static int mt7921_load_firmware(struct mt7921_dev *dev)
+ 		return -EIO;
+ 	}
+ 
++fw_loaded:
+ 	mt76_queue_tx_cleanup(dev, dev->mt76.q_mcu[MT_MCUQ_FWDL], false);
+ 
+ #ifdef CONFIG_PM
+diff --git a/drivers/reset/reset-ti-syscon.c b/drivers/reset/reset-ti-syscon.c
+index 218370faf37b5..2b92775d58f0c 100644
+--- a/drivers/reset/reset-ti-syscon.c
++++ b/drivers/reset/reset-ti-syscon.c
+@@ -58,8 +58,8 @@ struct ti_syscon_reset_data {
+ 	unsigned int nr_controls;
+ };
+ 
+-#define to_ti_syscon_reset_data(rcdev)	\
+-	container_of(rcdev, struct ti_syscon_reset_data, rcdev)
++#define to_ti_syscon_reset_data(_rcdev)	\
++	container_of(_rcdev, struct ti_syscon_reset_data, rcdev)
+ 
+ /**
+  * ti_syscon_reset_assert() - assert device reset
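
The reset-ti-syscon rename looks cosmetic but is a real macro-hygiene fix: container_of() needs the member name as a literal token, and a macro parameter spelled rcdev substitutes into that position too, so the old macro only expanded correctly when the caller's argument happened to be named rcdev. A runnable demonstration with a local container_of and an illustrative struct:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct reset_data_like {
	int nr_controls;
	int rcdev;  /* stands in for the embedded reset controller member */
};

/* Broken form: #define to_data(rcdev) container_of(rcdev, ..., rcdev)
 * expands to_data(rc) into container_of(rc, ..., rc), and there is no
 * member named rc. Renaming the parameter keeps the member token intact: */
#define to_data(_rcdev) \
	container_of(_rcdev, struct reset_data_like, rcdev)

int main(void)
{
	struct reset_data_like d = { .nr_controls = 3 };
	int *rc = &d.rcdev;  /* caller variable deliberately not named rcdev */

	printf("%d\n", to_data(rc)->nr_controls);  /* prints 3 */
	return 0;
}
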
+diff --git a/drivers/rtc/rtc-max77686.c b/drivers/rtc/rtc-max77686.c
+index d51cc12114cbe..eae7cb9faf1eb 100644
+--- a/drivers/rtc/rtc-max77686.c
++++ b/drivers/rtc/rtc-max77686.c
+@@ -717,8 +717,8 @@ static int max77686_init_rtc_regmap(struct max77686_rtc_info *info)
+ 
+ add_rtc_irq:
+ 	ret = regmap_add_irq_chip(info->rtc_regmap, info->rtc_irq,
+-				  IRQF_TRIGGER_FALLING | IRQF_ONESHOT |
+-				  IRQF_SHARED, 0, info->drv_data->rtc_irq_chip,
++				  IRQF_ONESHOT | IRQF_SHARED,
++				  0, info->drv_data->rtc_irq_chip,
+ 				  &info->rtc_irq_data);
+ 	if (ret < 0) {
+ 		dev_err(info->dev, "Failed to add RTC irq chip: %d\n", ret);
+diff --git a/drivers/rtc/rtc-mxc_v2.c b/drivers/rtc/rtc-mxc_v2.c
+index a577a74aaf75a..5e03834016294 100644
+--- a/drivers/rtc/rtc-mxc_v2.c
++++ b/drivers/rtc/rtc-mxc_v2.c
+@@ -372,6 +372,7 @@ static const struct of_device_id mxc_ids[] = {
+ 	{ .compatible = "fsl,imx53-rtc", },
+ 	{}
+ };
++MODULE_DEVICE_TABLE(of, mxc_ids);
+ 
+ static struct platform_driver mxc_rtc_driver = {
+ 	.driver = {
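
The rtc-mxc_v2 one-liner (and the mtk-devapc one further down) adds MODULE_DEVICE_TABLE(of, ...), which is what makes the module autoload: modpost turns the match table into module aliases that udev compares against device-tree compatibles. A kernel-style sketch, not standalone, and the alias text shown is approximate:

/* MODULE_DEVICE_TABLE(of, mxc_ids) exports the table to modpost, which
 * emits aliases (roughly "of:N*T*Cfsl,imx53-rtc*" in modinfo output);
 * without the line the driver still binds, but only if loaded by hand. */
static const struct of_device_id mxc_ids[] = {
	{ .compatible = "fsl,imx53-rtc", },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, mxc_ids);
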
+diff --git a/drivers/scsi/aic7xxx/aic7xxx_core.c b/drivers/scsi/aic7xxx/aic7xxx_core.c
+index 4b04ab8908f85..a396f048a031e 100644
+--- a/drivers/scsi/aic7xxx/aic7xxx_core.c
++++ b/drivers/scsi/aic7xxx/aic7xxx_core.c
+@@ -493,7 +493,7 @@ ahc_inq(struct ahc_softc *ahc, u_int port)
+ 	return ((ahc_inb(ahc, port))
+ 	      | (ahc_inb(ahc, port+1) << 8)
+ 	      | (ahc_inb(ahc, port+2) << 16)
+-	      | (ahc_inb(ahc, port+3) << 24)
++	      | (((uint64_t)ahc_inb(ahc, port+3)) << 24)
+ 	      | (((uint64_t)ahc_inb(ahc, port+4)) << 32)
+ 	      | (((uint64_t)ahc_inb(ahc, port+5)) << 40)
+ 	      | (((uint64_t)ahc_inb(ahc, port+6)) << 48)
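
The aic7xxx fix is a classic integer-promotion bug: ahc_inb() yields a byte, but byte << 24 is computed as a signed int, so a set top bit turns the intermediate negative and it sign-extends when widened to the 64-bit accumulator, corrupting bits 32-63. A standalone reproduction (shifting into the sign bit is formally undefined, but this is the behavior on typical twos-complement targets):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t b = 0x80;  /* a register byte with the top bit set */

	/* b << 24 happens in (signed) int: bit 31 becomes the sign bit, and
	 * widening to uint64_t sign-extends, smearing ones into bits 32-63 */
	uint64_t bad  = (uint64_t)(b << 24);
	/* widening first keeps the whole computation unsigned and 64-bit */
	uint64_t good = (uint64_t)b << 24;

	printf("bad  = 0x%016llx\n", (unsigned long long)bad);
	printf("good = 0x%016llx\n", (unsigned long long)good);
	return 0;
}
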
+diff --git a/drivers/scsi/aic94xx/aic94xx_init.c b/drivers/scsi/aic94xx/aic94xx_init.c
+index a195bfe9eccc0..7a78606598c4b 100644
+--- a/drivers/scsi/aic94xx/aic94xx_init.c
++++ b/drivers/scsi/aic94xx/aic94xx_init.c
+@@ -53,6 +53,7 @@ static struct scsi_host_template aic94xx_sht = {
+ 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
+ 	.eh_device_reset_handler	= sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler	= sas_eh_target_reset_handler,
++	.slave_alloc		= sas_slave_alloc,
+ 	.target_destroy		= sas_target_destroy,
+ 	.ioctl			= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+index 3cba7bfba2965..30199663c7d80 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+@@ -1771,6 +1771,7 @@ static struct scsi_host_template sht_v1_hw = {
+ 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
+ 	.eh_device_reset_handler = sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler = sas_eh_target_reset_handler,
++	.slave_alloc		= sas_slave_alloc,
+ 	.target_destroy		= sas_target_destroy,
+ 	.ioctl			= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+index 46f60fc2a069d..9df1639ffa656 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+@@ -3584,6 +3584,7 @@ static struct scsi_host_template sht_v2_hw = {
+ 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
+ 	.eh_device_reset_handler = sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler = sas_eh_target_reset_handler,
++	.slave_alloc		= sas_slave_alloc,
+ 	.target_destroy		= sas_target_destroy,
+ 	.ioctl			= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index e954083140786..36ec3901cfd48 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -3155,6 +3155,7 @@ static struct scsi_host_template sht_v3_hw = {
+ 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
+ 	.eh_device_reset_handler = sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler = sas_eh_target_reset_handler,
++	.slave_alloc		= sas_slave_alloc,
+ 	.target_destroy		= sas_target_destroy,
+ 	.ioctl			= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/isci/init.c b/drivers/scsi/isci/init.c
+index c452849e7bb43..ffd33e5decae8 100644
+--- a/drivers/scsi/isci/init.c
++++ b/drivers/scsi/isci/init.c
+@@ -167,6 +167,7 @@ static struct scsi_host_template isci_sht = {
+ 	.eh_abort_handler		= sas_eh_abort_handler,
+ 	.eh_device_reset_handler        = sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler        = sas_eh_target_reset_handler,
++	.slave_alloc			= sas_slave_alloc,
+ 	.target_destroy			= sas_target_destroy,
+ 	.ioctl				= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/libfc/fc_rport.c b/drivers/scsi/libfc/fc_rport.c
+index cd0fb8ca2425d..33da3c1085f06 100644
+--- a/drivers/scsi/libfc/fc_rport.c
++++ b/drivers/scsi/libfc/fc_rport.c
+@@ -1162,6 +1162,7 @@ static void fc_rport_prli_resp(struct fc_seq *sp, struct fc_frame *fp,
+ 		resp_code = (pp->spp.spp_flags & FC_SPP_RESP_MASK);
+ 		FC_RPORT_DBG(rdata, "PRLI spp_flags = 0x%x spp_type 0x%x\n",
+ 			     pp->spp.spp_flags, pp->spp.spp_type);
++
+ 		rdata->spp_type = pp->spp.spp_type;
+ 		if (resp_code != FC_SPP_RESP_ACK) {
+ 			if (resp_code == FC_SPP_RESP_CONF)
+@@ -1184,11 +1185,13 @@ static void fc_rport_prli_resp(struct fc_seq *sp, struct fc_frame *fp,
+ 		/*
+ 		 * Call prli provider if we should act as a target
+ 		 */
+-		prov = fc_passive_prov[rdata->spp_type];
+-		if (prov) {
+-			memset(&temp_spp, 0, sizeof(temp_spp));
+-			prov->prli(rdata, pp->prli.prli_spp_len,
+-				   &pp->spp, &temp_spp);
++		if (rdata->spp_type < FC_FC4_PROV_SIZE) {
++			prov = fc_passive_prov[rdata->spp_type];
++			if (prov) {
++				memset(&temp_spp, 0, sizeof(temp_spp));
++				prov->prli(rdata, pp->prli.prli_spp_len,
++					   &pp->spp, &temp_spp);
++			}
+ 		}
+ 		/*
+ 		 * Check if the image pair could be established
+diff --git a/drivers/scsi/libsas/sas_scsi_host.c b/drivers/scsi/libsas/sas_scsi_host.c
+index 1bf939818c981..ee44a0d7730b4 100644
+--- a/drivers/scsi/libsas/sas_scsi_host.c
++++ b/drivers/scsi/libsas/sas_scsi_host.c
+@@ -911,6 +911,14 @@ void sas_task_abort(struct sas_task *task)
+ 		blk_abort_request(sc->request);
+ }
+ 
++int sas_slave_alloc(struct scsi_device *sdev)
++{
++	if (dev_is_sata(sdev_to_domain_dev(sdev)) && sdev->lun)
++		return -ENXIO;
++
++	return 0;
++}
++
+ void sas_target_destroy(struct scsi_target *starget)
+ {
+ 	struct domain_device *found_dev = starget->hostdata;
+@@ -957,5 +965,6 @@ EXPORT_SYMBOL_GPL(sas_task_abort);
+ EXPORT_SYMBOL_GPL(sas_phy_reset);
+ EXPORT_SYMBOL_GPL(sas_eh_device_reset_handler);
+ EXPORT_SYMBOL_GPL(sas_eh_target_reset_handler);
++EXPORT_SYMBOL_GPL(sas_slave_alloc);
+ EXPORT_SYMBOL_GPL(sas_target_destroy);
+ EXPORT_SYMBOL_GPL(sas_ioctl);
+diff --git a/drivers/scsi/mvsas/mv_init.c b/drivers/scsi/mvsas/mv_init.c
+index 6aa2697c4a15d..b03c0f35d7b04 100644
+--- a/drivers/scsi/mvsas/mv_init.c
++++ b/drivers/scsi/mvsas/mv_init.c
+@@ -46,6 +46,7 @@ static struct scsi_host_template mvs_sht = {
+ 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
+ 	.eh_device_reset_handler = sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler = sas_eh_target_reset_handler,
++	.slave_alloc		= sas_slave_alloc,
+ 	.target_destroy		= sas_target_destroy,
+ 	.ioctl			= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
+index af09bd282cb94..313248c7bab99 100644
+--- a/drivers/scsi/pm8001/pm8001_init.c
++++ b/drivers/scsi/pm8001/pm8001_init.c
+@@ -101,6 +101,7 @@ static struct scsi_host_template pm8001_sht = {
+ 	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
+ 	.eh_device_reset_handler = sas_eh_device_reset_handler,
+ 	.eh_target_reset_handler = sas_eh_target_reset_handler,
++	.slave_alloc		= sas_slave_alloc,
+ 	.target_destroy		= sas_target_destroy,
+ 	.ioctl			= sas_ioctl,
+ #ifdef CONFIG_COMPAT
+diff --git a/drivers/scsi/qedf/qedf_io.c b/drivers/scsi/qedf/qedf_io.c
+index 4869ef813dc4f..63f99f4eeed97 100644
+--- a/drivers/scsi/qedf/qedf_io.c
++++ b/drivers/scsi/qedf/qedf_io.c
+@@ -1520,9 +1520,19 @@ void qedf_process_error_detect(struct qedf_ctx *qedf, struct fcoe_cqe *cqe,
+ {
+ 	int rval;
+ 
++	if (io_req == NULL) {
++		QEDF_INFO(NULL, QEDF_LOG_IO, "io_req is NULL.\n");
++		return;
++	}
++
++	if (io_req->fcport == NULL) {
++		QEDF_INFO(NULL, QEDF_LOG_IO, "fcport is NULL.\n");
++		return;
++	}
++
+ 	if (!cqe) {
+ 		QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_IO,
+-			  "cqe is NULL for io_req %p\n", io_req);
++			"cqe is NULL for io_req %p\n", io_req);
+ 		return;
+ 	}
+ 
+@@ -1538,6 +1548,16 @@ void qedf_process_error_detect(struct qedf_ctx *qedf, struct fcoe_cqe *cqe,
+ 		  le32_to_cpu(cqe->cqe_info.err_info.rx_buf_off),
+ 		  le32_to_cpu(cqe->cqe_info.err_info.rx_id));
+ 
++	/* When flush is active, let the cmds be flushed out from the cleanup context */
++	if (test_bit(QEDF_RPORT_IN_TARGET_RESET, &io_req->fcport->flags) ||
++		(test_bit(QEDF_RPORT_IN_LUN_RESET, &io_req->fcport->flags) &&
++		 io_req->sc_cmd->device->lun == (u64)io_req->fcport->lun_reset_lun)) {
++		QEDF_ERR(&qedf->dbg_ctx,
++			"Dropping EQE for xid=0x%x as fcport is flushing",
++			io_req->xid);
++		return;
++	}
++
+ 	if (qedf->stop_io_on_error) {
+ 		qedf_stop_all_io(qedf);
+ 		return;
+diff --git a/drivers/soc/bcm/brcmstb/common.c b/drivers/soc/bcm/brcmstb/common.c
+index e87dfc6660f31..2a010881f4b61 100644
+--- a/drivers/soc/bcm/brcmstb/common.c
++++ b/drivers/soc/bcm/brcmstb/common.c
+@@ -14,11 +14,6 @@
+ static u32 family_id;
+ static u32 product_id;
+ 
+-static const struct of_device_id brcmstb_machine_match[] = {
+-	{ .compatible = "brcm,brcmstb", },
+-	{ }
+-};
+-
+ u32 brcmstb_get_family_id(void)
+ {
+ 	return family_id;
+diff --git a/drivers/soc/mediatek/mtk-devapc.c b/drivers/soc/mediatek/mtk-devapc.c
+index f1cea041dc5a8..7c65ad3d1f8a3 100644
+--- a/drivers/soc/mediatek/mtk-devapc.c
++++ b/drivers/soc/mediatek/mtk-devapc.c
+@@ -234,6 +234,7 @@ static const struct of_device_id mtk_devapc_dt_match[] = {
+ 	}, {
+ 	},
+ };
++MODULE_DEVICE_TABLE(of, mtk_devapc_dt_match);
+ 
+ static int mtk_devapc_probe(struct platform_device *pdev)
+ {
+diff --git a/drivers/soc/tegra/fuse/fuse-tegra30.c b/drivers/soc/tegra/fuse/fuse-tegra30.c
+index 9ea7f01684573..c1aa7815bd6ec 100644
+--- a/drivers/soc/tegra/fuse/fuse-tegra30.c
++++ b/drivers/soc/tegra/fuse/fuse-tegra30.c
+@@ -37,7 +37,8 @@
+     defined(CONFIG_ARCH_TEGRA_132_SOC) || \
+     defined(CONFIG_ARCH_TEGRA_210_SOC) || \
+     defined(CONFIG_ARCH_TEGRA_186_SOC) || \
+-    defined(CONFIG_ARCH_TEGRA_194_SOC)
++    defined(CONFIG_ARCH_TEGRA_194_SOC) || \
++    defined(CONFIG_ARCH_TEGRA_234_SOC)
+ static u32 tegra30_fuse_read_early(struct tegra_fuse *fuse, unsigned int offset)
+ {
+ 	if (WARN_ON(!fuse->base))
+diff --git a/drivers/thermal/imx_sc_thermal.c b/drivers/thermal/imx_sc_thermal.c
+index b01d28eca7eec..8d76dbfde6a9f 100644
+--- a/drivers/thermal/imx_sc_thermal.c
++++ b/drivers/thermal/imx_sc_thermal.c
+@@ -93,6 +93,7 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ 	for_each_available_child_of_node(np, child) {
+ 		sensor = devm_kzalloc(&pdev->dev, sizeof(*sensor), GFP_KERNEL);
+ 		if (!sensor) {
++			of_node_put(child);
+ 			of_node_put(sensor_np);
+ 			return -ENOMEM;
+ 		}
+@@ -104,6 +105,7 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ 			dev_err(&pdev->dev,
+ 				"failed to get valid sensor resource id: %d\n",
+ 				ret);
++			of_node_put(child);
+ 			break;
+ 		}
+ 
+@@ -114,6 +116,7 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ 		if (IS_ERR(sensor->tzd)) {
+ 			dev_err(&pdev->dev, "failed to register thermal zone\n");
+ 			ret = PTR_ERR(sensor->tzd);
++			of_node_put(child);
+ 			break;
+ 		}
+ 
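
The imx_sc_thermal hunks plug device-node reference leaks: for_each_available_child_of_node() holds a reference on the current child and only drops it when the loop advances, so every early break or return must call of_node_put(child) itself. The sprd_thermal hunks below fix the same leak with their of_put label. A kernel-style sketch of the rule, not standalone:

for_each_available_child_of_node(np, child) {
	sensor = devm_kzalloc(dev, sizeof(*sensor), GFP_KERNEL);
	if (!sensor) {
		of_node_put(child);  /* balance the ref the iterator holds */
		return -ENOMEM;      /* only now is the early exit leak-free */
	}
	/* ...; on a normal advance the macro itself drops the reference */
}
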
+diff --git a/drivers/thermal/rcar_gen3_thermal.c b/drivers/thermal/rcar_gen3_thermal.c
+index 1a60adb1d30a0..fdf16aa34eb47 100644
+--- a/drivers/thermal/rcar_gen3_thermal.c
++++ b/drivers/thermal/rcar_gen3_thermal.c
+@@ -307,7 +307,7 @@ static int rcar_gen3_thermal_probe(struct platform_device *pdev)
+ {
+ 	struct rcar_gen3_thermal_priv *priv;
+ 	struct device *dev = &pdev->dev;
+-	const int *rcar_gen3_ths_tj_1 = of_device_get_match_data(dev);
++	const int *ths_tj_1 = of_device_get_match_data(dev);
+ 	struct resource *res;
+ 	struct thermal_zone_device *zone;
+ 	int ret, i;
+@@ -352,8 +352,7 @@ static int rcar_gen3_thermal_probe(struct platform_device *pdev)
+ 		priv->tscs[i] = tsc;
+ 
+ 		priv->thermal_init(tsc);
+-		rcar_gen3_thermal_calc_coefs(tsc, ptat, thcodes[i],
+-					     *rcar_gen3_ths_tj_1);
++		rcar_gen3_thermal_calc_coefs(tsc, ptat, thcodes[i], *ths_tj_1);
+ 
+ 		zone = devm_thermal_zone_of_sensor_register(dev, i, tsc,
+ 							    &rcar_gen3_tz_of_ops);
+diff --git a/drivers/thermal/sprd_thermal.c b/drivers/thermal/sprd_thermal.c
+index fe06cccf14b38..fff80fc180028 100644
+--- a/drivers/thermal/sprd_thermal.c
++++ b/drivers/thermal/sprd_thermal.c
+@@ -388,7 +388,7 @@ static int sprd_thm_probe(struct platform_device *pdev)
+ 		sen = devm_kzalloc(&pdev->dev, sizeof(*sen), GFP_KERNEL);
+ 		if (!sen) {
+ 			ret = -ENOMEM;
+-			goto disable_clk;
++			goto of_put;
+ 		}
+ 
+ 		sen->data = thm;
+@@ -397,13 +397,13 @@ static int sprd_thm_probe(struct platform_device *pdev)
+ 		ret = of_property_read_u32(sen_child, "reg", &sen->id);
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "get sensor reg failed");
+-			goto disable_clk;
++			goto of_put;
+ 		}
+ 
+ 		ret = sprd_thm_sensor_calibration(sen_child, thm, sen);
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "efuse cal analysis failed");
+-			goto disable_clk;
++			goto of_put;
+ 		}
+ 
+ 		sprd_thm_sensor_init(thm, sen);
+@@ -416,19 +416,20 @@ static int sprd_thm_probe(struct platform_device *pdev)
+ 			dev_err(&pdev->dev, "register thermal zone failed %d\n",
+ 				sen->id);
+ 			ret = PTR_ERR(sen->tzd);
+-			goto disable_clk;
++			goto of_put;
+ 		}
+ 
+ 		thm->sensor[sen->id] = sen;
+ 	}
++	/* sen_child set to NULL at this point */
+ 
+ 	ret = sprd_thm_set_ready(thm);
+ 	if (ret)
+-		goto disable_clk;
++		goto of_put;
+ 
+ 	ret = sprd_thm_wait_temp_ready(thm);
+ 	if (ret)
+-		goto disable_clk;
++		goto of_put;
+ 
+ 	for (i = 0; i < thm->nr_sensors; i++)
+ 		sprd_thm_toggle_sensor(thm->sensor[i], true);
+@@ -436,6 +437,8 @@ static int sprd_thm_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, thm);
+ 	return 0;
+ 
++of_put:
++	of_node_put(sen_child);
+ disable_clk:
+ 	clk_disable_unprepare(thm->clk);
+ 	return ret;
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index d20b25f40d19e..142dcf5e4a18f 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -1369,7 +1369,7 @@ free_tz:
+ EXPORT_SYMBOL_GPL(thermal_zone_device_register);
+ 
+ /**
+- * thermal_device_unregister - removes the registered thermal zone device
++ * thermal_zone_device_unregister - removes the registered thermal zone device
+  * @tz: the thermal zone device to remove
+  */
+ void thermal_zone_device_unregister(struct thermal_zone_device *tz)
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index 5b76f9a1280d5..6379f26a335f6 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -559,6 +559,9 @@ void thermal_zone_of_sensor_unregister(struct device *dev,
+ 	if (!tz)
+ 		return;
+ 
++	/* stop temperature polling */
++	thermal_zone_device_disable(tzd);
++
+ 	mutex_lock(&tzd->lock);
+ 	tzd->ops->get_temp = NULL;
+ 	tzd->ops->get_trend = NULL;
+diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c
+index 8dec4edc8a9fe..35a6007d88a9b 100644
+--- a/fs/cifs/cifs_dfs_ref.c
++++ b/fs/cifs/cifs_dfs_ref.c
+@@ -151,6 +151,9 @@ char *cifs_compose_mount_options(const char *sb_mountdata,
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	if (ref) {
++		if (WARN_ON_ONCE(!ref->node_name || ref->path_consumed < 0))
++			return ERR_PTR(-EINVAL);
++
+ 		if (strlen(fullpath) - ref->path_consumed) {
+ 			prepath = fullpath + ref->path_consumed;
+ 			/* skip initial delimiter */
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index e5dbe87e65b45..08c9348cf9ed6 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -708,7 +708,9 @@ F2FS_FEATURE_RO_ATTR(lost_found, FEAT_LOST_FOUND);
+ F2FS_FEATURE_RO_ATTR(verity, FEAT_VERITY);
+ #endif
+ F2FS_FEATURE_RO_ATTR(sb_checksum, FEAT_SB_CHECKSUM);
++#ifdef CONFIG_UNICODE
+ F2FS_FEATURE_RO_ATTR(casefold, FEAT_CASEFOLD);
++#endif
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+ F2FS_FEATURE_RO_ATTR(compression, FEAT_COMPRESSION);
+ F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, compr_written_block, compr_written_block);
+@@ -810,7 +812,9 @@ static struct attribute *f2fs_feat_attrs[] = {
+ 	ATTR_LIST(verity),
+ #endif
+ 	ATTR_LIST(sb_checksum),
++#ifdef CONFIG_UNICODE
+ 	ATTR_LIST(casefold),
++#endif
+ #ifdef CONFIG_F2FS_FS_COMPRESSION
+ 	ATTR_LIST(compression),
+ #endif
+diff --git a/fs/vboxsf/dir.c b/fs/vboxsf/dir.c
+index eac6788fc6cff..c4769a9396c50 100644
+--- a/fs/vboxsf/dir.c
++++ b/fs/vboxsf/dir.c
+@@ -253,7 +253,7 @@ static int vboxsf_dir_instantiate(struct inode *parent, struct dentry *dentry,
+ }
+ 
+ static int vboxsf_dir_create(struct inode *parent, struct dentry *dentry,
+-			     umode_t mode, int is_dir)
++			     umode_t mode, bool is_dir, bool excl, u64 *handle_ret)
+ {
+ 	struct vboxsf_inode *sf_parent_i = VBOXSF_I(parent);
+ 	struct vboxsf_sbi *sbi = VBOXSF_SBI(parent->i_sb);
+@@ -261,10 +261,12 @@ static int vboxsf_dir_create(struct inode *parent, struct dentry *dentry,
+ 	int err;
+ 
+ 	params.handle = SHFL_HANDLE_NIL;
+-	params.create_flags = SHFL_CF_ACT_CREATE_IF_NEW |
+-			      SHFL_CF_ACT_FAIL_IF_EXISTS |
+-			      SHFL_CF_ACCESS_READWRITE |
+-			      (is_dir ? SHFL_CF_DIRECTORY : 0);
++	params.create_flags = SHFL_CF_ACT_CREATE_IF_NEW | SHFL_CF_ACCESS_READWRITE;
++	if (is_dir)
++		params.create_flags |= SHFL_CF_DIRECTORY;
++	if (excl)
++		params.create_flags |= SHFL_CF_ACT_FAIL_IF_EXISTS;
++
+ 	params.info.attr.mode = (mode & 0777) |
+ 				(is_dir ? SHFL_TYPE_DIRECTORY : SHFL_TYPE_FILE);
+ 	params.info.attr.additional = SHFLFSOBJATTRADD_NOTHING;
+@@ -276,30 +278,81 @@ static int vboxsf_dir_create(struct inode *parent, struct dentry *dentry,
+ 	if (params.result != SHFL_FILE_CREATED)
+ 		return -EPERM;
+ 
+-	vboxsf_close(sbi->root, params.handle);
+-
+ 	err = vboxsf_dir_instantiate(parent, dentry, &params.info);
+ 	if (err)
+-		return err;
++		goto out;
+ 
+ 	/* parent directory access/change time changed */
+ 	sf_parent_i->force_restat = 1;
+ 
+-	return 0;
++out:
++	if (err == 0 && handle_ret)
++		*handle_ret = params.handle;
++	else
++		vboxsf_close(sbi->root, params.handle);
++
++	return err;
+ }
+ 
+ static int vboxsf_dir_mkfile(struct user_namespace *mnt_userns,
+ 			     struct inode *parent, struct dentry *dentry,
+ 			     umode_t mode, bool excl)
+ {
+-	return vboxsf_dir_create(parent, dentry, mode, 0);
++	return vboxsf_dir_create(parent, dentry, mode, false, excl, NULL);
+ }
+ 
+ static int vboxsf_dir_mkdir(struct user_namespace *mnt_userns,
+ 			    struct inode *parent, struct dentry *dentry,
+ 			    umode_t mode)
+ {
+-	return vboxsf_dir_create(parent, dentry, mode, 1);
++	return vboxsf_dir_create(parent, dentry, mode, true, true, NULL);
++}
++
++static int vboxsf_dir_atomic_open(struct inode *parent, struct dentry *dentry,
++				  struct file *file, unsigned int flags, umode_t mode)
++{
++	struct vboxsf_sbi *sbi = VBOXSF_SBI(parent->i_sb);
++	struct vboxsf_handle *sf_handle;
++	struct dentry *res = NULL;
++	u64 handle;
++	int err;
++
++	if (d_in_lookup(dentry)) {
++		res = vboxsf_dir_lookup(parent, dentry, 0);
++		if (IS_ERR(res))
++			return PTR_ERR(res);
++
++		if (res)
++			dentry = res;
++	}
++
++	/* Only creates */
++	if (!(flags & O_CREAT) || d_really_is_positive(dentry))
++		return finish_no_open(file, res);
++
++	err = vboxsf_dir_create(parent, dentry, mode, false, flags & O_EXCL, &handle);
++	if (err)
++		goto out;
++
++	sf_handle = vboxsf_create_sf_handle(d_inode(dentry), handle, SHFL_CF_ACCESS_READWRITE);
++	if (IS_ERR(sf_handle)) {
++		vboxsf_close(sbi->root, handle);
++		err = PTR_ERR(sf_handle);
++		goto out;
++	}
++
++	err = finish_open(file, dentry, generic_file_open);
++	if (err) {
++		/* This also closes the handle passed to vboxsf_create_sf_handle() */
++		vboxsf_release_sf_handle(d_inode(dentry), sf_handle);
++		goto out;
++	}
++
++	file->private_data = sf_handle;
++	file->f_mode |= FMODE_CREATED;
++out:
++	dput(res);
++	return err;
+ }
+ 
+ static int vboxsf_dir_unlink(struct inode *parent, struct dentry *dentry)
+@@ -422,6 +475,7 @@ const struct inode_operations vboxsf_dir_iops = {
+ 	.lookup  = vboxsf_dir_lookup,
+ 	.create  = vboxsf_dir_mkfile,
+ 	.mkdir   = vboxsf_dir_mkdir,
++	.atomic_open = vboxsf_dir_atomic_open,
+ 	.rmdir   = vboxsf_dir_unlink,
+ 	.unlink  = vboxsf_dir_unlink,
+ 	.rename  = vboxsf_dir_rename,
+diff --git a/fs/vboxsf/file.c b/fs/vboxsf/file.c
+index c4ab5996d97a8..864c2fad23beb 100644
+--- a/fs/vboxsf/file.c
++++ b/fs/vboxsf/file.c
+@@ -20,17 +20,39 @@ struct vboxsf_handle {
+ 	struct list_head head;
+ };
+ 
+-static int vboxsf_file_open(struct inode *inode, struct file *file)
++struct vboxsf_handle *vboxsf_create_sf_handle(struct inode *inode,
++					      u64 handle, u32 access_flags)
+ {
+ 	struct vboxsf_inode *sf_i = VBOXSF_I(inode);
+-	struct shfl_createparms params = {};
+ 	struct vboxsf_handle *sf_handle;
+-	u32 access_flags = 0;
+-	int err;
+ 
+ 	sf_handle = kmalloc(sizeof(*sf_handle), GFP_KERNEL);
+ 	if (!sf_handle)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
++
++	/* the host may have given us different attr than requested */
++	sf_i->force_restat = 1;
++
++	/* init our handle struct and add it to the inode's handles list */
++	sf_handle->handle = handle;
++	sf_handle->root = VBOXSF_SBI(inode->i_sb)->root;
++	sf_handle->access_flags = access_flags;
++	kref_init(&sf_handle->refcount);
++
++	mutex_lock(&sf_i->handle_list_mutex);
++	list_add(&sf_handle->head, &sf_i->handle_list);
++	mutex_unlock(&sf_i->handle_list_mutex);
++
++	return sf_handle;
++}
++
++static int vboxsf_file_open(struct inode *inode, struct file *file)
++{
++	struct vboxsf_sbi *sbi = VBOXSF_SBI(inode->i_sb);
++	struct shfl_createparms params = {};
++	struct vboxsf_handle *sf_handle;
++	u32 access_flags = 0;
++	int err;
+ 
+ 	/*
+ 	 * We check the value of params.handle afterwards to find out if
+@@ -83,23 +105,14 @@ static int vboxsf_file_open(struct inode *inode, struct file *file)
+ 	err = vboxsf_create_at_dentry(file_dentry(file), &params);
+ 	if (err == 0 && params.handle == SHFL_HANDLE_NIL)
+ 		err = (params.result == SHFL_FILE_EXISTS) ? -EEXIST : -ENOENT;
+-	if (err) {
+-		kfree(sf_handle);
++	if (err)
+ 		return err;
+-	}
+-
+-	/* the host may have given us different attr then requested */
+-	sf_i->force_restat = 1;
+ 
+-	/* init our handle struct and add it to the inode's handles list */
+-	sf_handle->handle = params.handle;
+-	sf_handle->root = VBOXSF_SBI(inode->i_sb)->root;
+-	sf_handle->access_flags = access_flags;
+-	kref_init(&sf_handle->refcount);
+-
+-	mutex_lock(&sf_i->handle_list_mutex);
+-	list_add(&sf_handle->head, &sf_i->handle_list);
+-	mutex_unlock(&sf_i->handle_list_mutex);
++	sf_handle = vboxsf_create_sf_handle(inode, params.handle, access_flags);
++	if (IS_ERR(sf_handle)) {
++		vboxsf_close(sbi->root, params.handle);
++		return PTR_ERR(sf_handle);
++	}
+ 
+ 	file->private_data = sf_handle;
+ 	return 0;
+@@ -114,22 +127,26 @@ static void vboxsf_handle_release(struct kref *refcount)
+ 	kfree(sf_handle);
+ }
+ 
+-static int vboxsf_file_release(struct inode *inode, struct file *file)
++void vboxsf_release_sf_handle(struct inode *inode, struct vboxsf_handle *sf_handle)
+ {
+ 	struct vboxsf_inode *sf_i = VBOXSF_I(inode);
+-	struct vboxsf_handle *sf_handle = file->private_data;
+ 
++	mutex_lock(&sf_i->handle_list_mutex);
++	list_del(&sf_handle->head);
++	mutex_unlock(&sf_i->handle_list_mutex);
++
++	kref_put(&sf_handle->refcount, vboxsf_handle_release);
++}
++
++static int vboxsf_file_release(struct inode *inode, struct file *file)
++{
+ 	/*
+ 	 * When a file is closed on our (the guest) side, we want any subsequent
+ 	 * accesses done on the host side to see all changes done from our side.
+ 	 */
+ 	filemap_write_and_wait(inode->i_mapping);
+ 
+-	mutex_lock(&sf_i->handle_list_mutex);
+-	list_del(&sf_handle->head);
+-	mutex_unlock(&sf_i->handle_list_mutex);
+-
+-	kref_put(&sf_handle->refcount, vboxsf_handle_release);
++	vboxsf_release_sf_handle(inode, file->private_data);
+ 	return 0;
+ }
+ 
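
The vboxsf refactor pulls the open-handle bookkeeping into a vboxsf_create_sf_handle()/vboxsf_release_sf_handle() pair so the new atomic_open path in dir.c and the regular open/release path share one lifecycle: link a refcounted handle onto the inode's list under the mutex, then unlink and drop the reference on release. A standalone userspace analogue of that pairing, with a singly-linked list and a plain counter standing in for list_head and kref:

#include <stdlib.h>

struct handle {
	int refcount;            /* stands in for struct kref */
	struct handle *next;     /* stands in for list_head */
};

struct inode_like {
	struct handle *handles;  /* the inode's open-handle list */
};

static struct handle *create_sf_handle(struct inode_like *i)
{
	struct handle *h = calloc(1, sizeof(*h));

	if (!h)
		return NULL;
	h->refcount = 1;         /* kref_init() */
	h->next = i->handles;    /* list_add() under handle_list_mutex */
	i->handles = h;
	return h;
}

static void release_sf_handle(struct inode_like *i, struct handle *h)
{
	struct handle **pp;

	for (pp = &i->handles; *pp; pp = &(*pp)->next) {
		if (*pp == h) {
			*pp = h->next;   /* list_del() */
			break;
		}
	}
	if (--h->refcount == 0)          /* kref_put() */
		free(h);
}

int main(void)
{
	struct inode_like ino = { 0 };
	struct handle *h = create_sf_handle(&ino);

	if (h)
		release_sf_handle(&ino, h);
	return 0;
}
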
+diff --git a/fs/vboxsf/vfsmod.h b/fs/vboxsf/vfsmod.h
+index 6a7a9cedebc6e..9047befa66c5a 100644
+--- a/fs/vboxsf/vfsmod.h
++++ b/fs/vboxsf/vfsmod.h
+@@ -18,6 +18,8 @@
+ #define VBOXSF_SBI(sb)	((struct vboxsf_sbi *)(sb)->s_fs_info)
+ #define VBOXSF_I(i)	container_of(i, struct vboxsf_inode, vfs_inode)
+ 
++struct vboxsf_handle;
++
+ struct vboxsf_options {
+ 	unsigned long ttl;
+ 	kuid_t uid;
+@@ -80,6 +82,11 @@ extern const struct file_operations vboxsf_reg_fops;
+ extern const struct address_space_operations vboxsf_reg_aops;
+ extern const struct dentry_operations vboxsf_dentry_ops;
+ 
++/* from file.c */
++struct vboxsf_handle *vboxsf_create_sf_handle(struct inode *inode,
++					      u64 handle, u32 access_flags);
++void vboxsf_release_sf_handle(struct inode *inode, struct vboxsf_handle *sf_handle);
++
+ /* from utils.c */
+ struct inode *vboxsf_new_inode(struct super_block *sb);
+ int vboxsf_init_inode(struct vboxsf_sbi *sbi, struct inode *inode,
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 02b02cb29ce29..a7532cb3493ad 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -777,6 +777,7 @@ struct bpf_jit_poke_descriptor {
+ 	void *tailcall_target;
+ 	void *tailcall_bypass;
+ 	void *bypass_addr;
++	void *aux;
+ 	union {
+ 		struct {
+ 			struct bpf_map *map;
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index b4e1ebaae825a..939f21b69ead3 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -10,7 +10,7 @@
+ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
+ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
+-		  struct vm_area_struct *vma);
++		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
+ void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd);
+ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
+diff --git a/include/linux/swap.h b/include/linux/swap.h
+index a84f76db50702..144727041e78b 100644
+--- a/include/linux/swap.h
++++ b/include/linux/swap.h
+@@ -526,15 +526,6 @@ static inline struct swap_info_struct *swp_swap_info(swp_entry_t entry)
+ 	return NULL;
+ }
+ 
+-static inline struct swap_info_struct *get_swap_device(swp_entry_t entry)
+-{
+-	return NULL;
+-}
+-
+-static inline void put_swap_device(struct swap_info_struct *si)
+-{
+-}
+-
+ #define swap_address_space(entry)		(NULL)
+ #define get_nr_swap_pages()			0L
+ #define total_swap_pages			0L
+diff --git a/include/linux/swapops.h b/include/linux/swapops.h
+index 6430a94c69818..0d429a102d417 100644
+--- a/include/linux/swapops.h
++++ b/include/linux/swapops.h
+@@ -265,6 +265,8 @@ static inline swp_entry_t pmd_to_swp_entry(pmd_t pmd)
+ 
+ 	if (pmd_swp_soft_dirty(pmd))
+ 		pmd = pmd_swp_clear_soft_dirty(pmd);
++	if (pmd_swp_uffd_wp(pmd))
++		pmd = pmd_swp_clear_uffd_wp(pmd);
+ 	arch_entry = __pmd_to_swp_entry(pmd);
+ 	return swp_entry(__swp_type(arch_entry), __swp_offset(arch_entry));
+ }
+diff --git a/include/net/dst_metadata.h b/include/net/dst_metadata.h
+index 56cb3c38569a7..14efa0ded75dd 100644
+--- a/include/net/dst_metadata.h
++++ b/include/net/dst_metadata.h
+@@ -45,7 +45,9 @@ skb_tunnel_info(const struct sk_buff *skb)
+ 		return &md_dst->u.tun_info;
+ 
+ 	dst = skb_dst(skb);
+-	if (dst && dst->lwtstate)
++	if (dst && dst->lwtstate &&
++	    (dst->lwtstate->type == LWTUNNEL_ENCAP_IP ||
++	     dst->lwtstate->type == LWTUNNEL_ENCAP_IP6))
+ 		return lwt_tun_info(dst->lwtstate);
+ 
+ 	return NULL;
+diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h
+index f14149df5a654..625a38ccb5d94 100644
+--- a/include/net/ip6_route.h
++++ b/include/net/ip6_route.h
+@@ -263,7 +263,7 @@ static inline bool ipv6_anycast_destination(const struct dst_entry *dst,
+ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 		 int (*output)(struct net *, struct sock *, struct sk_buff *));
+ 
+-static inline int ip6_skb_dst_mtu(struct sk_buff *skb)
++static inline unsigned int ip6_skb_dst_mtu(struct sk_buff *skb)
+ {
+ 	int mtu;
+ 
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index d05193cb0d990..b42b3e6731ed4 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -682,6 +682,10 @@ static inline u32 __tcp_set_rto(const struct tcp_sock *tp)
+ 
+ static inline void __tcp_fast_path_on(struct tcp_sock *tp, u32 snd_wnd)
+ {
++	/* mptcp hooks are only on the slow path */
++	if (sk_is_mptcp((struct sock *)tp))
++		return;
++
+ 	tp->pred_flags = htonl((tp->tcp_header_len << 26) |
+ 			       ntohl(TCP_FLAG_ACK) |
+ 			       snd_wnd);
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 034ad93a1ad71..9b15774983738 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -2236,8 +2236,14 @@ static void bpf_prog_free_deferred(struct work_struct *work)
+ #endif
+ 	if (aux->dst_trampoline)
+ 		bpf_trampoline_put(aux->dst_trampoline);
+-	for (i = 0; i < aux->func_cnt; i++)
++	for (i = 0; i < aux->func_cnt; i++) {
++		/* We can just unlink the subprog poke descriptor table as
++		 * it was originally linked to the main program and is also
++		 * released along with it.
++		 */
++		aux->func[i]->aux->poke_tab = NULL;
+ 		bpf_jit_free(aux->func[i]);
++	}
+ 	if (aux->func_cnt) {
+ 		kfree(aux->func);
+ 		bpf_prog_unlock_free(aux->prog);
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 6e2ebcb0d66f0..d8a6fcd28e39e 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -12106,33 +12106,19 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ 			goto out_free;
+ 		func[i]->is_func = 1;
+ 		func[i]->aux->func_idx = i;
+-		/* the btf and func_info will be freed only at prog->aux */
++		/* Below members will be freed only at prog->aux */
+ 		func[i]->aux->btf = prog->aux->btf;
+ 		func[i]->aux->func_info = prog->aux->func_info;
++		func[i]->aux->poke_tab = prog->aux->poke_tab;
++		func[i]->aux->size_poke_tab = prog->aux->size_poke_tab;
+ 
+ 		for (j = 0; j < prog->aux->size_poke_tab; j++) {
+-			u32 insn_idx = prog->aux->poke_tab[j].insn_idx;
+-			int ret;
++			struct bpf_jit_poke_descriptor *poke;
+ 
+-			if (!(insn_idx >= subprog_start &&
+-			      insn_idx <= subprog_end))
+-				continue;
+-
+-			ret = bpf_jit_add_poke_descriptor(func[i],
+-							  &prog->aux->poke_tab[j]);
+-			if (ret < 0) {
+-				verbose(env, "adding tail call poke descriptor failed\n");
+-				goto out_free;
+-			}
+-
+-			func[i]->insnsi[insn_idx - subprog_start].imm = ret + 1;
+-
+-			map_ptr = func[i]->aux->poke_tab[ret].tail_call.map;
+-			ret = map_ptr->ops->map_poke_track(map_ptr, func[i]->aux);
+-			if (ret < 0) {
+-				verbose(env, "tracking tail call prog failed\n");
+-				goto out_free;
+-			}
++			poke = &prog->aux->poke_tab[j];
++			if (poke->insn_idx < subprog_end &&
++			    poke->insn_idx >= subprog_start)
++				poke->aux = func[i]->aux;
+ 		}
+ 
+ 		/* Use bpf_prog_F_tag to indicate functions in stack traces.
+@@ -12163,18 +12149,6 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ 		cond_resched();
+ 	}
+ 
+-	/* Untrack main program's aux structs so that during map_poke_run()
+-	 * we will not stumble upon the unfilled poke descriptors; each
+-	 * of the main program's poke descs got distributed across subprogs
+-	 * and got tracked onto map, so we are sure that none of them will
+-	 * be missed after the operation below
+-	 */
+-	for (i = 0; i < prog->aux->size_poke_tab; i++) {
+-		map_ptr = prog->aux->poke_tab[i].tail_call.map;
+-
+-		map_ptr->ops->map_poke_untrack(map_ptr, prog->aux);
+-	}
+-
+ 	/* at this point all bpf functions were successfully JITed
+ 	 * now populate all bpf_calls with correct addresses and
+ 	 * run last pass of JIT
+@@ -12252,14 +12226,22 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ 	bpf_prog_jit_attempt_done(prog);
+ 	return 0;
+ out_free:
++	/* We failed JIT'ing, so at this point we need to unregister poke
++	 * descriptors from subprogs, so that kernel is not attempting to
++	 * patch it anymore as we're freeing the subprog JIT memory.
++	 */
++	for (i = 0; i < prog->aux->size_poke_tab; i++) {
++		map_ptr = prog->aux->poke_tab[i].tail_call.map;
++		map_ptr->ops->map_poke_untrack(map_ptr, prog->aux);
++	}
++	/* At this point we're guaranteed that poke descriptors are not
++	 * live anymore. We can just unlink its descriptor table as it's
++	 * released with the main prog.
++	 */
+ 	for (i = 0; i < env->subprog_cnt; i++) {
+ 		if (!func[i])
+ 			continue;
+-
+-		for (j = 0; j < func[i]->aux->size_poke_tab; j++) {
+-			map_ptr = func[i]->aux->poke_tab[j].tail_call.map;
+-			map_ptr->ops->map_poke_untrack(map_ptr, func[i]->aux);
+-		}
++		func[i]->aux->poke_tab = NULL;
+ 		bpf_jit_free(func[i]);
+ 	}
+ 	kfree(func);
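
The three BPF hunks above change who owns the tail-call poke descriptors: instead of copying entries into each subprog and tracking them on the map twice, every subprog now borrows the main program's poke_tab, and teardown merely unlinks the borrowed pointer (poke_tab = NULL) so the array is freed exactly once with its owner. A standalone sketch of that ownership scheme:

#include <stdlib.h>

struct prog {
	int *poke_tab;  /* the descriptor array; allocated once, by the owner */
	int  owns_tab;  /* main program: 1, subprograms: 0 */
};

static void prog_free(struct prog *p)
{
	if (p->owns_tab)
		free(p->poke_tab);    /* released exactly once, with the owner */
	else
		p->poke_tab = NULL;   /* borrower just unlinks, as the fix does */
	free(p);
}

int main(void)
{
	struct prog *main_prog = calloc(1, sizeof(*main_prog));
	struct prog *subprog   = calloc(1, sizeof(*subprog));

	if (!main_prog || !subprog)
		return 1;

	main_prog->poke_tab = calloc(4, sizeof(int));
	main_prog->owns_tab = 1;
	subprog->poke_tab = main_prog->poke_tab;  /* shared, never duplicated */

	prog_free(subprog);     /* no free of poke_tab here: no double free */
	prog_free(main_prog);
	return 0;
}
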
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 7dd0d859d95bb..f60ef0b4ec330 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -5108,7 +5108,7 @@ static const u64 cfs_bandwidth_slack_period = 5 * NSEC_PER_MSEC;
+ static int runtime_refresh_within(struct cfs_bandwidth *cfs_b, u64 min_expire)
+ {
+ 	struct hrtimer *refresh_timer = &cfs_b->period_timer;
+-	u64 remaining;
++	s64 remaining;
+ 
+ 	/* if the call-back is running a quota refresh is already occurring */
+ 	if (hrtimer_callback_running(refresh_timer))
+@@ -5116,7 +5116,7 @@ static int runtime_refresh_within(struct cfs_bandwidth *cfs_b, u64 min_expire)
+ 
+ 	/* is a quota refresh about to occur? */
+ 	remaining = ktime_to_ns(hrtimer_expires_remaining(refresh_timer));
+-	if (remaining < min_expire)
++	if (remaining < (s64)min_expire)
+ 		return 1;
+ 
+ 	return 0;
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 8857ef1543eb6..9aaf4a8ebeeb9 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1026,7 +1026,7 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
+ 
+ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
+-		  struct vm_area_struct *vma)
++		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ {
+ 	spinlock_t *dst_ptl, *src_ptl;
+ 	struct page *src_page;
+@@ -1035,7 +1035,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 	int ret = -ENOMEM;
+ 
+ 	/* Skip if can be re-fill on fault */
+-	if (!vma_is_anonymous(vma))
++	if (!vma_is_anonymous(dst_vma))
+ 		return 0;
+ 
+ 	pgtable = pte_alloc_one(dst_mm);
+@@ -1049,14 +1049,6 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 	ret = -EAGAIN;
+ 	pmd = *src_pmd;
+ 
+-	/*
+-	 * Make sure the _PAGE_UFFD_WP bit is cleared if the new VMA
+-	 * does not have the VM_UFFD_WP, which means that the uffd
+-	 * fork event is not enabled.
+-	 */
+-	if (!(vma->vm_flags & VM_UFFD_WP))
+-		pmd = pmd_clear_uffd_wp(pmd);
+-
+ #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+ 	if (unlikely(is_swap_pmd(pmd))) {
+ 		swp_entry_t entry = pmd_to_swp_entry(pmd);
+@@ -1067,11 +1059,15 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 			pmd = swp_entry_to_pmd(entry);
+ 			if (pmd_swp_soft_dirty(*src_pmd))
+ 				pmd = pmd_swp_mksoft_dirty(pmd);
++			if (pmd_swp_uffd_wp(*src_pmd))
++				pmd = pmd_swp_mkuffd_wp(pmd);
+ 			set_pmd_at(src_mm, addr, src_pmd, pmd);
+ 		}
+ 		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ 		mm_inc_nr_ptes(dst_mm);
+ 		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
++		if (!userfaultfd_wp(dst_vma))
++			pmd = pmd_swp_clear_uffd_wp(pmd);
+ 		set_pmd_at(dst_mm, addr, dst_pmd, pmd);
+ 		ret = 0;
+ 		goto out_unlock;
+@@ -1088,17 +1084,13 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 	 * a page table.
+ 	 */
+ 	if (is_huge_zero_pmd(pmd)) {
+-		struct page *zero_page;
+ 		/*
+ 		 * get_huge_zero_page() will never allocate a new page here,
+ 		 * since we already have a zero page to copy. It just takes a
+ 		 * reference.
+ 		 */
+-		zero_page = mm_get_huge_zero_page(dst_mm);
+-		set_huge_zero_page(pgtable, dst_mm, vma, addr, dst_pmd,
+-				zero_page);
+-		ret = 0;
+-		goto out_unlock;
++		mm_get_huge_zero_page(dst_mm);
++		goto out_zero_page;
+ 	}
+ 
+ 	src_page = pmd_page(pmd);
+@@ -1111,21 +1103,23 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 	 * best effort that the pinned pages won't be replaced by another
+ 	 * random page during the coming copy-on-write.
+ 	 */
+-	if (unlikely(page_needs_cow_for_dma(vma, src_page))) {
++	if (unlikely(page_needs_cow_for_dma(src_vma, src_page))) {
+ 		pte_free(dst_mm, pgtable);
+ 		spin_unlock(src_ptl);
+ 		spin_unlock(dst_ptl);
+-		__split_huge_pmd(vma, src_pmd, addr, false, NULL);
++		__split_huge_pmd(src_vma, src_pmd, addr, false, NULL);
+ 		return -EAGAIN;
+ 	}
+ 
+ 	get_page(src_page);
+ 	page_dup_rmap(src_page, true);
+ 	add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
++out_zero_page:
+ 	mm_inc_nr_ptes(dst_mm);
+ 	pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+-
+ 	pmdp_set_wrprotect(src_mm, addr, src_pmd);
++	if (!userfaultfd_wp(dst_vma))
++		pmd = pmd_clear_uffd_wp(pmd);
+ 	pmd = pmd_mkold(pmd_wrprotect(pmd));
+ 	set_pmd_at(dst_mm, addr, dst_pmd, pmd);
+ 
+@@ -1841,6 +1835,8 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ 			newpmd = swp_entry_to_pmd(entry);
+ 			if (pmd_swp_soft_dirty(*pmd))
+ 				newpmd = pmd_swp_mksoft_dirty(newpmd);
++			if (pmd_swp_uffd_wp(*pmd))
++				newpmd = pmd_swp_mkuffd_wp(newpmd);
+ 			set_pmd_at(mm, addr, pmd, newpmd);
+ 		}
+ 		goto unlock;
+@@ -3251,6 +3247,8 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
+ 		pmde = pmd_mksoft_dirty(pmde);
+ 	if (is_write_migration_entry(entry))
+ 		pmde = maybe_pmd_mkwrite(pmde, vma);
++	if (pmd_swp_uffd_wp(*pvmw->pmd))
++		pmde = pmd_wrprotect(pmd_mkuffd_wp(pmde));
+ 
+ 	flush_cache_range(vma, mmun_start, mmun_start + HPAGE_PMD_SIZE);
+ 	if (PageAnon(new))
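
The huge_memory hunks (and the memory.c ones that follow) rework how the userfaultfd write-protect bit is carried across fork(): the bit is preserved or dropped based on whether the destination VMA has uffd-wp armed, and it is now kept on swap/migration PMD entries too, where the old code keyed off the wrong VMA. A standalone bit-level model of the rule; the flag value is a placeholder:

#include <stdint.h>
#include <stdio.h>

#define PTE_UFFD_WP (1u << 2)  /* placeholder software bit, not the real one */

static uint32_t copy_pte_for_fork(uint32_t src_pte, int dst_vma_uffd_wp_armed)
{
	uint32_t pte = src_pte;

	/* the child decides: if it never registered uffd-wp, drop the bit;
	 * if it did (fork event enabled), the protection must survive */
	if (!dst_vma_uffd_wp_armed)
		pte &= ~PTE_UFFD_WP;
	return pte;
}

int main(void)
{
	printf("child unarmed: 0x%x\n", copy_pte_for_fork(PTE_UFFD_WP, 0));
	printf("child armed:   0x%x\n", copy_pte_for_fork(PTE_UFFD_WP, 1));
	return 0;
}
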
+diff --git a/mm/memory.c b/mm/memory.c
+index b15367c285bde..3fdba3ec69c8b 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -708,10 +708,10 @@ out:
+ 
+ static unsigned long
+ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+-		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
+-		unsigned long addr, int *rss)
++		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *dst_vma,
++		struct vm_area_struct *src_vma, unsigned long addr, int *rss)
+ {
+-	unsigned long vm_flags = vma->vm_flags;
++	unsigned long vm_flags = dst_vma->vm_flags;
+ 	pte_t pte = *src_pte;
+ 	struct page *page;
+ 	swp_entry_t entry = pte_to_swp_entry(pte);
+@@ -780,6 +780,8 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ 			set_pte_at(src_mm, addr, src_pte, pte);
+ 		}
+ 	}
++	if (!userfaultfd_wp(dst_vma))
++		pte = pte_swp_clear_uffd_wp(pte);
+ 	set_pte_at(dst_mm, addr, dst_pte, pte);
+ 	return 0;
+ }
+@@ -845,6 +847,9 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
+ 	/* All done, just insert the new page copy in the child */
+ 	pte = mk_pte(new_page, dst_vma->vm_page_prot);
+ 	pte = maybe_mkwrite(pte_mkdirty(pte), dst_vma);
++	if (userfaultfd_pte_wp(dst_vma, *src_pte))
++		/* Uffd-wp needs to be delivered to dest pte as well */
++		pte = pte_wrprotect(pte_mkuffd_wp(pte));
+ 	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
+ 	return 0;
+ }
+@@ -894,12 +899,7 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+ 		pte = pte_mkclean(pte);
+ 	pte = pte_mkold(pte);
+ 
+-	/*
+-	 * Make sure the _PAGE_UFFD_WP bit is cleared if the new VMA
+-	 * does not have the VM_UFFD_WP, which means that the uffd
+-	 * fork event is not enabled.
+-	 */
+-	if (!(vm_flags & VM_UFFD_WP))
++	if (!userfaultfd_wp(dst_vma))
+ 		pte = pte_clear_uffd_wp(pte);
+ 
+ 	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
+@@ -974,7 +974,8 @@ again:
+ 		if (unlikely(!pte_present(*src_pte))) {
+ 			entry.val = copy_nonpresent_pte(dst_mm, src_mm,
+ 							dst_pte, src_pte,
+-							src_vma, addr, rss);
++							dst_vma, src_vma,
++							addr, rss);
+ 			if (entry.val)
+ 				break;
+ 			progress += 8;
+@@ -1051,8 +1052,8 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+ 			|| pmd_devmap(*src_pmd)) {
+ 			int err;
+ 			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
+-			err = copy_huge_pmd(dst_mm, src_mm,
+-					    dst_pmd, src_pmd, addr, src_vma);
++			err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,
++					    addr, dst_vma, src_vma);
+ 			if (err == -ENOMEM)
+ 				return -ENOMEM;
+ 			if (!err)
+@@ -3353,7 +3354,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+ 	struct page *page = NULL, *swapcache;
+-	struct swap_info_struct *si = NULL;
+ 	swp_entry_t entry;
+ 	pte_t pte;
+ 	int locked;
+@@ -3381,16 +3381,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ 		goto out;
+ 	}
+ 
+-	/* Prevent swapoff from happening to us. */
+-	si = get_swap_device(entry);
+-	if (unlikely(!si))
+-		goto out;
+ 
+ 	delayacct_set_flag(current, DELAYACCT_PF_SWAPIN);
+ 	page = lookup_swap_cache(entry, vma, vmf->address);
+ 	swapcache = page;
+ 
+ 	if (!page) {
++		struct swap_info_struct *si = swp_swap_info(entry);
++
+ 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
+ 		    __swap_count(entry) == 1) {
+ 			/* skip swapcache */
+@@ -3559,8 +3557,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
+ unlock:
+ 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+ out:
+-	if (si)
+-		put_swap_device(si);
+ 	return ret;
+ out_nomap:
+ 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+@@ -3572,8 +3568,6 @@ out_release:
+ 		unlock_page(swapcache);
+ 		put_page(swapcache);
+ 	}
+-	if (si)
+-		put_swap_device(si);
+ 	return ret;
+ }
+ 
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 5fa21d66af203..680d83cab0775 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1696,8 +1696,7 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
+ 	struct address_space *mapping = inode->i_mapping;
+ 	struct shmem_inode_info *info = SHMEM_I(inode);
+ 	struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
+-	struct swap_info_struct *si;
+-	struct page *page = NULL;
++	struct page *page;
+ 	swp_entry_t swap;
+ 	int error;
+ 
+@@ -1705,12 +1704,6 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
+ 	swap = radix_to_swp_entry(*pagep);
+ 	*pagep = NULL;
+ 
+-	/* Prevent swapoff from happening to us. */
+-	si = get_swap_device(swap);
+-	if (!si) {
+-		error = EINVAL;
+-		goto failed;
+-	}
+ 	/* Look it up and read it in.. */
+ 	page = lookup_swap_cache(swap, NULL, 0);
+ 	if (!page) {
+@@ -1772,8 +1765,6 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
+ 	swap_free(swap);
+ 
+ 	*pagep = page;
+-	if (si)
+-		put_swap_device(si);
+ 	return 0;
+ failed:
+ 	if (!shmem_confirm_swap(mapping, index, swap))
+@@ -1784,9 +1775,6 @@ unlock:
+ 		put_page(page);
+ 	}
+ 
+-	if (si)
+-		put_swap_device(si);
+-
+ 	return error;
+ }
+ 
+diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
+index f7d2f472ae24f..6e4a32354a138 100644
+--- a/net/bridge/br_if.c
++++ b/net/bridge/br_if.c
+@@ -562,7 +562,7 @@ int br_add_if(struct net_bridge *br, struct net_device *dev,
+ 	struct net_bridge_port *p;
+ 	int err = 0;
+ 	unsigned br_hr, dev_hr;
+-	bool changed_addr;
++	bool changed_addr, fdb_synced = false;
+ 
+ 	/* Don't allow bridging non-ethernet like devices. */
+ 	if ((dev->flags & IFF_LOOPBACK) ||
+@@ -652,6 +652,19 @@ int br_add_if(struct net_bridge *br, struct net_device *dev,
+ 	list_add_rcu(&p->list, &br->port_list);
+ 
+ 	nbp_update_port_count(br);
++	if (!br_promisc_port(p) && (p->dev->priv_flags & IFF_UNICAST_FLT)) {
++		/* When updating the port count we also update all ports'
++		 * promiscuous mode.
++		 * A port leaving promiscuous mode normally gets the bridge's
++		 * fdb synced to the unicast filter (if supported), however,
++		 * `br_port_clear_promisc` does not distinguish between
++		 * non-promiscuous ports and *new* ports, so we need to
++		 * sync explicitly here.
++		 */
++		fdb_synced = br_fdb_sync_static(br, p) == 0;
++		if (!fdb_synced)
++			netdev_err(dev, "failed to sync bridge static fdb addresses to this port\n");
++	}
+ 
+ 	netdev_update_features(br->dev);
+ 
+@@ -701,6 +714,8 @@ int br_add_if(struct net_bridge *br, struct net_device *dev,
+ 	return 0;
+ 
+ err7:
++	if (fdb_synced)
++		br_fdb_unsync_static(br, p);
+ 	list_del_rcu(&p->list);
+ 	br_fdb_delete_by_port(br, p, 0, 1);
+ 	nbp_update_port_count(br);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 50531a2d0b20d..4f29dde4ed0a7 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -6194,6 +6194,8 @@ static gro_result_t napi_skb_finish(struct napi_struct *napi,
+ 	case GRO_MERGED_FREE:
+ 		if (NAPI_GRO_CB(skb)->free == NAPI_GRO_FREE_STOLEN_HEAD)
+ 			napi_skb_free_stolen_head(skb);
++		else if (skb->fclone != SKB_FCLONE_UNAVAILABLE)
++			__kfree_skb(skb);
+ 		else
+ 			__kfree_skb_defer(skb);
+ 		break;
+diff --git a/net/dsa/switch.c b/net/dsa/switch.c
+index 9bf8e20ecdf38..dfd74cd7b5f37 100644
+--- a/net/dsa/switch.c
++++ b/net/dsa/switch.c
+@@ -110,11 +110,11 @@ static int dsa_switch_bridge_leave(struct dsa_switch *ds,
+ 	int err, port;
+ 
+ 	if (dst->index == info->tree_index && ds->index == info->sw_index &&
+-	    ds->ops->port_bridge_join)
++	    ds->ops->port_bridge_leave)
+ 		ds->ops->port_bridge_leave(ds, info->port, info->br);
+ 
+ 	if ((dst->index != info->tree_index || ds->index != info->sw_index) &&
+-	    ds->ops->crosschip_bridge_join)
++	    ds->ops->crosschip_bridge_leave)
+ 		ds->ops->crosschip_bridge_leave(ds, info->tree_index,
+ 						info->sw_index, info->port,
+ 						info->br);
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index f6cc26de5ed30..0dca00745ac3c 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -317,7 +317,7 @@ static int ip_tunnel_bind_dev(struct net_device *dev)
+ 	}
+ 
+ 	dev->needed_headroom = t_hlen + hlen;
+-	mtu -= t_hlen;
++	mtu -= t_hlen + (dev->type == ARPHRD_ETHER ? dev->hard_header_len : 0);
+ 
+ 	if (mtu < IPV4_MIN_MTU)
+ 		mtu = IPV4_MIN_MTU;
+@@ -348,6 +348,9 @@ static struct ip_tunnel *ip_tunnel_create(struct net *net,
+ 	t_hlen = nt->hlen + sizeof(struct iphdr);
+ 	dev->min_mtu = ETH_MIN_MTU;
+ 	dev->max_mtu = IP_MAX_MTU - t_hlen;
++	if (dev->type == ARPHRD_ETHER)
++		dev->max_mtu -= dev->hard_header_len;
++
+ 	ip_tunnel_add(itn, nt);
+ 	return nt;
+ 
+@@ -489,11 +492,14 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
+ 
+ 	tunnel_hlen = md ? tunnel_hlen : tunnel->hlen;
+ 	pkt_size = skb->len - tunnel_hlen;
++	pkt_size -= dev->type == ARPHRD_ETHER ? dev->hard_header_len : 0;
+ 
+-	if (df)
++	if (df) {
+ 		mtu = dst_mtu(&rt->dst) - (sizeof(struct iphdr) + tunnel_hlen);
+-	else
++		mtu -= dev->type == ARPHRD_ETHER ? dev->hard_header_len : 0;
++	} else {
+ 		mtu = skb_valid_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
++	}
+ 
+ 	if (skb_valid_dst(skb))
+ 		skb_dst_update_pmtu_no_confirm(skb, mtu);
+@@ -972,6 +978,9 @@ int __ip_tunnel_change_mtu(struct net_device *dev, int new_mtu, bool strict)
+ 	int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+ 	int max_mtu = IP_MAX_MTU - t_hlen;
+ 
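++	/* ether-type tunnels also carry an Ethernet header; leave room for it */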
++	if (dev->type == ARPHRD_ETHER)
++		max_mtu -= dev->hard_header_len;
++
+ 	if (new_mtu < ETH_MIN_MTU)
+ 		return -EINVAL;
+ 
+@@ -1149,6 +1158,9 @@ int ip_tunnel_newlink(struct net_device *dev, struct nlattr *tb[],
+ 	if (tb[IFLA_MTU]) {
+ 		unsigned int max = IP_MAX_MTU - (nt->hlen + sizeof(struct iphdr));
+ 
++		if (dev->type == ARPHRD_ETHER)
++			max -= dev->hard_header_len;
++
+ 		mtu = clamp(dev->mtu, (unsigned int)ETH_MIN_MTU, max);
+ 	}
+ 
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index f1c1f9e3de723..4c9a5b99bf7fd 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1375,6 +1375,9 @@ new_segment:
+ 			}
+ 			pfrag->offset += copy;
+ 		} else {
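++			/* reserve send-buffer memory before queueing more zerocopy data */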
++			if (!sk_wmem_schedule(sk, copy))
++				goto wait_for_space;
++
+ 			err = skb_zerocopy_iter_stream(sk, skb, msg, copy, uarg);
+ 			if (err == -EMSGSIZE || err == -EEXIST) {
+ 				tcp_mark_push(tp, skb);
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index bc266514ce589..6bd628f08ded2 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -5921,8 +5921,8 @@ void tcp_init_transfer(struct sock *sk, int bpf_op, struct sk_buff *skb)
+ 		tp->snd_cwnd = tcp_init_cwnd(tp, __sk_dst_get(sk));
+ 	tp->snd_cwnd_stamp = tcp_jiffies32;
+ 
+-	icsk->icsk_ca_initialized = 0;
+ 	bpf_skops_established(sk, bpf_op, skb);
++	/* Initialize congestion control unless BPF initialized it already: */
+ 	if (!icsk->icsk_ca_initialized)
+ 		tcp_init_congestion_control(sk);
+ 	tcp_init_buffer_space(sk);
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 312184cead57e..e409f2de5dc4f 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -342,7 +342,7 @@ void tcp_v4_mtu_reduced(struct sock *sk)
+ 
+ 	if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE))
+ 		return;
+-	mtu = tcp_sk(sk)->mtu_info;
++	mtu = READ_ONCE(tcp_sk(sk)->mtu_info);
+ 	dst = inet_csk_update_pmtu(sk, mtu);
+ 	if (!dst)
+ 		return;
+@@ -546,7 +546,7 @@ int tcp_v4_err(struct sk_buff *skb, u32 info)
+ 			if (sk->sk_state == TCP_LISTEN)
+ 				goto out;
+ 
+-			tp->mtu_info = info;
++			WRITE_ONCE(tp->mtu_info, info);
+ 			if (!sock_owned_by_user(sk)) {
+ 				tcp_v4_mtu_reduced(sk);
+ 			} else {
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index bde781f46b41a..29553fce85028 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1732,6 +1732,7 @@ int tcp_mtu_to_mss(struct sock *sk, int pmtu)
+ 	return __tcp_mtu_to_mss(sk, pmtu) -
+ 	       (tcp_sk(sk)->tcp_header_len - sizeof(struct tcphdr));
+ }
++EXPORT_SYMBOL(tcp_mtu_to_mss);
+ 
+ /* Inverse of above */
+ int tcp_mss_to_mtu(struct sock *sk, int mss)
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 8091276cb85b8..ca9cf1051b1e1 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1102,7 +1102,7 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	}
+ 
+ 	ipcm_init_sk(&ipc, inet);
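++	/* gso_size can change concurrently via UDP_SEGMENT; hence READ_ONCE()/WRITE_ONCE() */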
+-	ipc.gso_size = up->gso_size;
++	ipc.gso_size = READ_ONCE(up->gso_size);
+ 
+ 	if (msg->msg_controllen) {
+ 		err = udp_cmsg_send(sk, msg, &ipc.gso_size);
+@@ -2695,7 +2695,7 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
+ 	case UDP_SEGMENT:
+ 		if (val < 0 || val > USHRT_MAX)
+ 			return -EINVAL;
+-		up->gso_size = val;
++		WRITE_ONCE(up->gso_size, val);
+ 		break;
+ 
+ 	case UDP_GRO:
+@@ -2790,7 +2790,7 @@ int udp_lib_getsockopt(struct sock *sk, int level, int optname,
+ 		break;
+ 
+ 	case UDP_SEGMENT:
+-		val = up->gso_size;
++		val = READ_ONCE(up->gso_size);
+ 		break;
+ 
+ 	case UDP_GRO:
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 54e06b88af69a..9dde1e5fb449b 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -525,8 +525,10 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
+ 
+ 		if ((!sk && (skb->dev->features & NETIF_F_GRO_UDP_FWD)) ||
+ 		    (sk && udp_sk(sk)->gro_enabled) || NAPI_GRO_CB(skb)->is_flist)
+-			pp = call_gro_receive(udp_gro_receive_segment, head, skb);
+-		return pp;
++			return call_gro_receive(udp_gro_receive_segment, head, skb);
++
++		/* no GRO, be sure to flush the current packet */
++		goto out;
+ 	}
+ 
+ 	if (NAPI_GRO_CB(skb)->encap_mark ||
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 5f47c0b6e3de8..22d7ed08b92d9 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -348,11 +348,20 @@ failure:
+ static void tcp_v6_mtu_reduced(struct sock *sk)
+ {
+ 	struct dst_entry *dst;
++	u32 mtu;
+ 
+ 	if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE))
+ 		return;
+ 
+-	dst = inet6_csk_update_pmtu(sk, tcp_sk(sk)->mtu_info);
++	mtu = READ_ONCE(tcp_sk(sk)->mtu_info);
++
++	/* Drop requests trying to increase our current mss.
++	 * Check done in __ip6_rt_update_pmtu() is too late.
++	 */
++	if (tcp_mtu_to_mss(sk, mtu) >= tcp_sk(sk)->mss_cache)
++		return;
++
++	dst = inet6_csk_update_pmtu(sk, mtu);
+ 	if (!dst)
+ 		return;
+ 
+@@ -433,6 +442,8 @@ static int tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 	}
+ 
+ 	if (type == ICMPV6_PKT_TOOBIG) {
++		u32 mtu = ntohl(info);
++
+ 		/* We are not interested in TCP_LISTEN and open_requests
+ 		 * (SYN-ACKs sent out by Linux are always < 576 bytes so
+ 		 * they should go through unfragmented).
+@@ -443,7 +454,11 @@ static int tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 		if (!ip6_sk_accept_pmtu(sk))
+ 			goto out;
+ 
+-		tp->mtu_info = ntohl(info);
++		if (mtu < IPV6_MIN_MTU)
++			goto out;
++
++		WRITE_ONCE(tp->mtu_info, mtu);
++
+ 		if (!sock_owned_by_user(sk))
+ 			tcp_v6_mtu_reduced(sk);
+ 		else if (!test_and_set_bit(TCP_MTU_REDUCED_DEFERRED,
+@@ -540,7 +555,7 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst,
+ 		opt = ireq->ipv6_opt;
+ 		if (!opt)
+ 			opt = rcu_dereference(np->opt);
+-		err = ip6_xmit(sk, skb, fl6, sk->sk_mark, opt,
++		err = ip6_xmit(sk, skb, fl6, skb->mark ? : sk->sk_mark, opt,
+ 			       tclass, sk->sk_priority);
+ 		rcu_read_unlock();
+ 		err = net_xmit_eval(err);
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 3fcd86f4dfdca..6774e776228ce 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1296,7 +1296,7 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 	int (*getfrag)(void *, char *, int, int, int, struct sk_buff *);
+ 
+ 	ipcm6_init(&ipc6);
+-	ipc6.gso_size = up->gso_size;
++	ipc6.gso_size = READ_ONCE(up->gso_size);
+ 	ipc6.sockc.tsflags = sk->sk_tsflags;
+ 	ipc6.sockc.mark = sk->sk_mark;
+ 
+diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
+index 8b84d534b19d8..6abb45a671994 100644
+--- a/net/ipv6/xfrm6_output.c
++++ b/net/ipv6/xfrm6_output.c
+@@ -56,7 +56,7 @@ static int __xfrm6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct dst_entry *dst = skb_dst(skb);
+ 	struct xfrm_state *x = dst->xfrm;
+-	int mtu;
++	unsigned int mtu;
+ 	bool toobig;
+ 
+ #ifdef CONFIG_NETFILTER
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 8690fc07030fc..f3e8e6ce82c4b 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -218,6 +218,7 @@ static int ctnetlink_dump_helpinfo(struct sk_buff *skb,
+ 	if (!help)
+ 		return 0;
+ 
++	rcu_read_lock();
+ 	helper = rcu_dereference(help->helper);
+ 	if (!helper)
+ 		goto out;
+@@ -233,9 +234,11 @@ static int ctnetlink_dump_helpinfo(struct sk_buff *skb,
+ 
+ 	nla_nest_end(skb, nest_helper);
+ out:
++	rcu_read_unlock();
+ 	return 0;
+ 
+ nla_put_failure:
++	rcu_read_unlock();
+ 	return -1;
+ }
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index fcb15b8904e87..a5db7c59ad4e4 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3453,7 +3453,8 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 	return 0;
+ 
+ err_destroy_flow_rule:
+-	nft_flow_rule_destroy(flow);
++	if (flow)
++		nft_flow_rule_destroy(flow);
+ err_release_rule:
+ 	nf_tables_rule_release(&ctx, rule);
+ err_release_expr:
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index a656baa321fe1..1b4b3514c94f2 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -322,11 +322,22 @@ err_alloc:
+ 
+ static void tcf_ct_flow_table_cleanup_work(struct work_struct *work)
+ {
++	struct flow_block_cb *block_cb, *tmp_cb;
+ 	struct tcf_ct_flow_table *ct_ft;
++	struct flow_block *block;
+ 
+ 	ct_ft = container_of(to_rcu_work(work), struct tcf_ct_flow_table,
+ 			     rwork);
+ 	nf_flow_table_free(&ct_ft->nf_ft);
++
++	/* Remove any remaining callbacks before cleanup */
++	block = &ct_ft->nf_ft.flow_block;
++	down_write(&ct_ft->nf_ft.flow_block_lock);
++	list_for_each_entry_safe(block_cb, tmp_cb, &block->cb_list, list) {
++		list_del(&block_cb->list);
++		flow_block_cb_free(block_cb);
++	}
++	up_write(&ct_ft->nf_ft.flow_block_lock);
+ 	kfree(ct_ft);
+ 
+ 	module_put(THIS_MODULE);
+@@ -1026,7 +1037,8 @@ do_nat:
+ 		/* This will take care of sending queued events
+ 		 * even if the connection is already confirmed.
+ 		 */
+-		nf_conntrack_confirm(skb);
++		if (nf_conntrack_confirm(skb) != NF_ACCEPT)
++			goto drop;
+ 	}
+ 
+ 	if (!skip_add)
+diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
+index 82dd1b65b7a8f..f247e691562d4 100644
+--- a/scripts/Kbuild.include
++++ b/scripts/Kbuild.include
+@@ -90,8 +90,13 @@ clean := -f $(srctree)/scripts/Makefile.clean obj
+ echo-cmd = $(if $($(quiet)cmd_$(1)),\
+ 	echo '  $(call escsq,$($(quiet)cmd_$(1)))$(echo-why)';)
+ 
++# sink stdout for 'make -s'
++       redirect :=
++ quiet_redirect :=
++silent_redirect := exec >/dev/null;
++
+ # printing commands
+-cmd = @set -e; $(echo-cmd) $(cmd_$(1))
++cmd = @set -e; $(echo-cmd) $($(quiet)redirect) $(cmd_$(1))
+ 
+ ###
+ # if_changed      - execute command if any prerequisite is newer than
+diff --git a/scripts/mkcompile_h b/scripts/mkcompile_h
+index 4ae735039daf2..a72b154de7b01 100755
+--- a/scripts/mkcompile_h
++++ b/scripts/mkcompile_h
+@@ -70,15 +70,23 @@ UTS_VERSION="$(echo $UTS_VERSION $CONFIG_FLAGS $TIMESTAMP | cut -b -$UTS_LEN)"
+ # Only replace the real compile.h if the new one is different,
+ # in order to preserve the timestamp and avoid unnecessary
+ # recompilations.
+-# We don't consider the file changed if only the date/time changed.
++# We don't consider the file changed if only the date/time changed,
++# unless KBUILD_BUILD_TIMESTAMP was explicitly set (e.g. for
++# reproducible builds with that value referring to a commit timestamp).
+ # A kernel config change will increase the generation number, thus
+ # causing compile.h to be updated (including date/time) due to the
+ # changed comment in the
+ # first line.
+ 
++if [ -z "$KBUILD_BUILD_TIMESTAMP" ]; then
++   IGNORE_PATTERN="UTS_VERSION"
++else
++   IGNORE_PATTERN="NOT_A_PATTERN_TO_BE_MATCHED"
++fi
++
+ if [ -r $TARGET ] && \
+-      grep -v 'UTS_VERSION' $TARGET > .tmpver.1 && \
+-      grep -v 'UTS_VERSION' .tmpcompile > .tmpver.2 && \
++      grep -v $IGNORE_PATTERN $TARGET > .tmpver.1 && \
++      grep -v $IGNORE_PATTERN .tmpcompile > .tmpver.2 && \
+       cmp -s .tmpver.1 .tmpver.2; then
+    rm -f .tmpcompile
+ else
+diff --git a/tools/bpf/Makefile b/tools/bpf/Makefile
+index 39bb322707b4b..b11cfc86a3d02 100644
+--- a/tools/bpf/Makefile
++++ b/tools/bpf/Makefile
+@@ -97,7 +97,7 @@ clean: bpftool_clean runqslower_clean resolve_btfids_clean
+ 	$(Q)$(RM) -- $(OUTPUT)FEATURE-DUMP.bpf
+ 	$(Q)$(RM) -r -- $(OUTPUT)feature
+ 
+-install: $(PROGS) bpftool_install runqslower_install
++install: $(PROGS) bpftool_install
+ 	$(call QUIET_INSTALL, bpf_jit_disasm)
+ 	$(Q)$(INSTALL) -m 0755 -d $(DESTDIR)$(prefix)/bin
+ 	$(Q)$(INSTALL) $(OUTPUT)bpf_jit_disasm $(DESTDIR)$(prefix)/bin/bpf_jit_disasm
+@@ -118,9 +118,6 @@ bpftool_clean:
+ runqslower:
+ 	$(call descend,runqslower)
+ 
+-runqslower_install:
+-	$(call descend,runqslower,install)
+-
+ runqslower_clean:
+ 	$(call descend,runqslower,clean)
+ 
+@@ -131,5 +128,5 @@ resolve_btfids_clean:
+ 	$(call descend,resolve_btfids,clean)
+ 
+ .PHONY: all install clean bpftool bpftool_install bpftool_clean \
+-	runqslower runqslower_install runqslower_clean \
++	runqslower runqslower_clean \
+ 	resolve_btfids resolve_btfids_clean
+diff --git a/tools/bpf/bpftool/jit_disasm.c b/tools/bpf/bpftool/jit_disasm.c
+index e7e7eee9f1725..24734f2249d6e 100644
+--- a/tools/bpf/bpftool/jit_disasm.c
++++ b/tools/bpf/bpftool/jit_disasm.c
+@@ -43,11 +43,13 @@ static int fprintf_json(void *out, const char *fmt, ...)
+ {
+ 	va_list ap;
+ 	char *s;
++	int err;
+ 
+ 	va_start(ap, fmt);
+-	if (vasprintf(&s, fmt, ap) < 0)
+-		return -1;
++	err = vasprintf(&s, fmt, ap);
+ 	va_end(ap);
++	if (err < 0)
++		return -1;
+ 
+ 	if (!oper_count) {
+ 		int i;
+diff --git a/tools/perf/tests/bpf.c b/tools/perf/tests/bpf.c
+index c72adbd67386e..081e445f7725e 100644
+--- a/tools/perf/tests/bpf.c
++++ b/tools/perf/tests/bpf.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <errno.h>
+ #include <stdio.h>
++#include <stdlib.h>
+ #include <sys/epoll.h>
+ #include <sys/types.h>
+ #include <sys/stat.h>
+@@ -276,6 +277,7 @@ static int __test__bpf(int idx)
+ 	}
+ 
+ out:
++	free(obj_buf);
+ 	bpf__clear();
+ 	return ret;
+ }


* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-28 13:23 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-07-28 13:23 UTC (permalink / raw
  To: gentoo-commits

commit:     9d41142a0f860a329fbf4e0cbe31c6e18c6f3ced
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 28 13:23:28 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 28 13:23:28 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9d41142a

Linux 5.13.6

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1005_linux-5.13.6.patch | 7382 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7386 insertions(+)

diff --git a/0000_README b/0000_README
index 4c49407..87d634d 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-5.13.5.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.5
 
+Patch:  1005_linux-5.13.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-5.13.6.patch b/1005_linux-5.13.6.patch
new file mode 100644
index 0000000..d7df8aa
--- /dev/null
+++ b/1005_linux-5.13.6.patch
@@ -0,0 +1,7382 @@
+diff --git a/Documentation/arm64/tagged-address-abi.rst b/Documentation/arm64/tagged-address-abi.rst
+index 459e6b66ff68c..0c9120ec58ae6 100644
+--- a/Documentation/arm64/tagged-address-abi.rst
++++ b/Documentation/arm64/tagged-address-abi.rst
+@@ -45,14 +45,24 @@ how the user addresses are used by the kernel:
+ 
+ 1. User addresses not accessed by the kernel but used for address space
+    management (e.g. ``mprotect()``, ``madvise()``). The use of valid
+-   tagged pointers in this context is allowed with the exception of
+-   ``brk()``, ``mmap()`` and the ``new_address`` argument to
+-   ``mremap()`` as these have the potential to alias with existing
+-   user addresses.
+-
+-   NOTE: This behaviour changed in v5.6 and so some earlier kernels may
+-   incorrectly accept valid tagged pointers for the ``brk()``,
+-   ``mmap()`` and ``mremap()`` system calls.
++   tagged pointers in this context is allowed with these exceptions:
++
++   - ``brk()``, ``mmap()`` and the ``new_address`` argument to
++     ``mremap()`` as these have the potential to alias with existing
++     user addresses.
++
++     NOTE: This behaviour changed in v5.6 and so some earlier kernels may
++     incorrectly accept valid tagged pointers for the ``brk()``,
++     ``mmap()`` and ``mremap()`` system calls.
++
++   - The ``range.start``, ``start`` and ``dst`` arguments to the
++     ``UFFDIO_*`` ``ioctl()``s used on a file descriptor obtained from
++     ``userfaultfd()``, as fault addresses subsequently obtained by reading
++     the file descriptor will be untagged, which may otherwise confuse
++     tag-unaware programs.
++
++     NOTE: This behaviour changed in v5.14 and so some earlier kernels may
++     incorrectly accept valid tagged pointers for this system call.
+ 
+ 2. User addresses accessed by the kernel (e.g. ``write()``). This ABI
+    relaxation is disabled by default and the application thread needs to
+diff --git a/Documentation/driver-api/early-userspace/early_userspace_support.rst b/Documentation/driver-api/early-userspace/early_userspace_support.rst
+index 8a58c61932ff5..61bdeac1bae54 100644
+--- a/Documentation/driver-api/early-userspace/early_userspace_support.rst
++++ b/Documentation/driver-api/early-userspace/early_userspace_support.rst
+@@ -69,17 +69,17 @@ early userspace image can be built by an unprivileged user.
+ 
+ As a technical note, when directories and files are specified, the
+ entire CONFIG_INITRAMFS_SOURCE is passed to
+-usr/gen_initramfs_list.sh.  This means that CONFIG_INITRAMFS_SOURCE
++usr/gen_initramfs.sh.  This means that CONFIG_INITRAMFS_SOURCE
+ can really be interpreted as any legal argument to
+-gen_initramfs_list.sh.  If a directory is specified as an argument then
++gen_initramfs.sh.  If a directory is specified as an argument then
+ the contents are scanned, uid/gid translation is performed, and
+ usr/gen_init_cpio file directives are output.  If a directory is
+-specified as an argument to usr/gen_initramfs_list.sh then the
++specified as an argument to usr/gen_initramfs.sh then the
+ contents of the file are simply copied to the output.  All of the output
+ directives from directory scanning and file contents copying are
+ processed by usr/gen_init_cpio.
+ 
+-See also 'usr/gen_initramfs_list.sh -h'.
++See also 'usr/gen_initramfs.sh -h'.
+ 
+ Where's this all leading?
+ =========================
+diff --git a/Documentation/filesystems/ramfs-rootfs-initramfs.rst b/Documentation/filesystems/ramfs-rootfs-initramfs.rst
+index 4598b0d90b607..164960631925d 100644
+--- a/Documentation/filesystems/ramfs-rootfs-initramfs.rst
++++ b/Documentation/filesystems/ramfs-rootfs-initramfs.rst
+@@ -170,7 +170,7 @@ Documentation/driver-api/early-userspace/early_userspace_support.rst for more de
+ The kernel does not depend on external cpio tools.  If you specify a
+ directory instead of a configuration file, the kernel's build infrastructure
+ creates a configuration file from that directory (usr/Makefile calls
+-usr/gen_initramfs_list.sh), and proceeds to package up that directory
++usr/gen_initramfs.sh), and proceeds to package up that directory
+ using the config file (by feeding it to usr/gen_init_cpio, which is created
+ from usr/gen_init_cpio.c).  The kernel's build-time cpio creation code is
+ entirely self-contained, and the kernel's boot-time extractor is also
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index c2ecc9894fd0a..9a57e972dae4f 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -772,7 +772,7 @@ tcp_fastopen_blackhole_timeout_sec - INTEGER
+ 	initial value when the blackhole issue goes away.
+ 	0 to disable the blackhole detection.
+ 
+-	By default, it is set to 1hr.
++	By default, it is set to 0 (feature is disabled).
+ 
+ tcp_fastopen_key - list of comma separated 32-digit hexadecimal INTEGERs
+ 	The list consists of a primary key and an optional backup key. The
+diff --git a/Documentation/trace/histogram.rst b/Documentation/trace/histogram.rst
+index b71e09f745c3d..f99be8062bc82 100644
+--- a/Documentation/trace/histogram.rst
++++ b/Documentation/trace/histogram.rst
+@@ -191,7 +191,7 @@ Documentation written by Tom Zanussi
+                                 with the event, in nanoseconds.  May be
+ 			        modified by .usecs to have timestamps
+ 			        interpreted as microseconds.
+-    cpu                    int  the cpu on which the event occurred.
++    common_cpu             int  the cpu on which the event occurred.
+     ====================== ==== =======================================
+ 
+ Extended error information
+diff --git a/Makefile b/Makefile
+index 41be12f806e0d..96967f8951933 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts b/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts
+index dcab6e78dfa41..8be40c8283af7 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-asrock-e3c246d4i.dts
+@@ -4,6 +4,7 @@
+ #include "aspeed-g5.dtsi"
+ #include <dt-bindings/gpio/aspeed-gpio.h>
+ #include <dt-bindings/i2c/i2c.h>
++#include <dt-bindings/interrupt-controller/irq.h>
+ 
+ /{
+ 	model = "ASRock E3C246D4I BMC";
+@@ -73,7 +74,8 @@
+ 
+ &vuart {
+ 	status = "okay";
+-	aspeed,sirq-active-high;
++	aspeed,lpc-io-reg = <0x2f8>;
++	aspeed,lpc-interrupts = <3 IRQ_TYPE_LEVEL_HIGH>;
+ };
+ 
+ &mac0 {
+diff --git a/arch/arm/configs/multi_v7_defconfig b/arch/arm/configs/multi_v7_defconfig
+index 52a0400fdd926..d9abaae118dd1 100644
+--- a/arch/arm/configs/multi_v7_defconfig
++++ b/arch/arm/configs/multi_v7_defconfig
+@@ -821,7 +821,7 @@ CONFIG_USB_ISP1760=y
+ CONFIG_USB_HSIC_USB3503=y
+ CONFIG_AB8500_USB=y
+ CONFIG_KEYSTONE_USB_PHY=m
+-CONFIG_NOP_USB_XCEIV=m
++CONFIG_NOP_USB_XCEIV=y
+ CONFIG_AM335X_PHY_USB=m
+ CONFIG_TWL6030_USB=m
+ CONFIG_USB_GPIO_VBUS=y
+diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
+index 787c3c83edd7a..bea3c5e125ee8 100644
+--- a/arch/arm64/kernel/Makefile
++++ b/arch/arm64/kernel/Makefile
+@@ -17,7 +17,7 @@ CFLAGS_syscall.o	+= -fno-stack-protector
+ # It's not safe to invoke KCOV when portions of the kernel environment aren't
+ # available or are out-of-sync with HW state. Since `noinstr` doesn't always
+ # inhibit KCOV instrumentation, disable it for the entire compilation unit.
+-KCOV_INSTRUMENT_entry.o := n
++KCOV_INSTRUMENT_entry-common.o := n
+ 
+ # Object file lists.
+ obj-y			:= debug-monitors.o entry.o irq.o fpsimd.o		\
+diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
+index 125a10e413e9f..23e9879a6e78f 100644
+--- a/arch/arm64/kernel/mte.c
++++ b/arch/arm64/kernel/mte.c
+@@ -185,18 +185,6 @@ void mte_check_tfsr_el1(void)
+ }
+ #endif
+ 
+-static void update_gcr_el1_excl(u64 excl)
+-{
+-
+-	/*
+-	 * Note that the mask controlled by the user via prctl() is an
+-	 * include while GCR_EL1 accepts an exclude mask.
+-	 * No need for ISB since this only affects EL0 currently, implicit
+-	 * with ERET.
+-	 */
+-	sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK, excl);
+-}
+-
+ static void set_gcr_el1_excl(u64 excl)
+ {
+ 	current->thread.gcr_user_excl = excl;
+@@ -257,7 +245,8 @@ void mte_suspend_exit(void)
+ 	if (!system_supports_mte())
+ 		return;
+ 
+-	update_gcr_el1_excl(gcr_kernel_excl);
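++	/* restore the kernel GCR_EL1 exclude mask; the ISB makes it take effect at once */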
++	sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK, gcr_kernel_excl);
++	isb();
+ }
+ 
+ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
+diff --git a/arch/nds32/mm/mmap.c b/arch/nds32/mm/mmap.c
+index c206b31ce07ac..1bdf5e7d1b438 100644
+--- a/arch/nds32/mm/mmap.c
++++ b/arch/nds32/mm/mmap.c
+@@ -59,7 +59,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
+ 
+ 		vma = find_vma(mm, addr);
+ 		if (TASK_SIZE - len >= addr &&
+-		    (!vma || addr + len <= vma->vm_start))
++		    (!vma || addr + len <= vm_start_gap(vma)))
+ 			return addr;
+ 	}
+ 
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 67cc164c4ac1a..395f98158e81e 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -2445,8 +2445,10 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
+ 		HFSCR_DSCR | HFSCR_VECVSX | HFSCR_FP | HFSCR_PREFIX;
+ 	if (cpu_has_feature(CPU_FTR_HVMODE)) {
+ 		vcpu->arch.hfscr &= mfspr(SPRN_HFSCR);
++#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+ 		if (cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ 			vcpu->arch.hfscr |= HFSCR_TM;
++#endif
+ 	}
+ 	if (cpu_has_feature(CPU_FTR_TM_COMP))
+ 		vcpu->arch.hfscr |= HFSCR_TM;
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index 1b3ff0af12648..5252107d2f7ab 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -301,6 +301,9 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
+ 	if (vcpu->kvm->arch.l1_ptcr == 0)
+ 		return H_NOT_AVAILABLE;
+ 
++	if (MSR_TM_TRANSACTIONAL(vcpu->arch.shregs.msr))
++		return H_BAD_MODE;
++
+ 	/* copy parameters in */
+ 	hv_ptr = kvmppc_get_gpr(vcpu, 4);
+ 	regs_ptr = kvmppc_get_gpr(vcpu, 5);
+@@ -321,6 +324,23 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
+ 	if (l2_hv.vcpu_token >= NR_CPUS)
+ 		return H_PARAMETER;
+ 
++	/*
++	 * L1 must have set up a suspended state to enter the L2 in a
++	 * transactional state, and only in that case. These have to be
++	 * filtered out here to prevent causing a TM Bad Thing in the
++	 * host HRFID. We could synthesize a TM Bad Thing back to the L1
++	 * here, but there doesn't seem to be much point.
++	 */
++	if (MSR_TM_SUSPENDED(vcpu->arch.shregs.msr)) {
++		if (!MSR_TM_ACTIVE(l2_regs.msr))
++			return H_BAD_MODE;
++	} else {
++		if (l2_regs.msr & MSR_TS_MASK)
++			return H_BAD_MODE;
++		if (WARN_ON_ONCE(vcpu->arch.shregs.msr & MSR_TS_MASK))
++			return H_BAD_MODE;
++	}
++
+ 	/* translate lpid */
+ 	l2 = kvmhv_get_nested(vcpu->kvm, l2_hv.lpid, true);
+ 	if (!l2)
+diff --git a/arch/powerpc/kvm/book3s_rtas.c b/arch/powerpc/kvm/book3s_rtas.c
+index c5e677508d3b2..0f847f1e5ddd0 100644
+--- a/arch/powerpc/kvm/book3s_rtas.c
++++ b/arch/powerpc/kvm/book3s_rtas.c
+@@ -242,6 +242,17 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
+ 	 * value so we can restore it on the way out.
+ 	 */
+ 	orig_rets = args.rets;
++	if (be32_to_cpu(args.nargs) >= ARRAY_SIZE(args.args)) {
++		/*
++		 * Don't overflow our args array: ensure there is room for
++		 * at least rets[0] (even if the call specifies 0 nret).
++		 *
++		 * Each handler must then check for the correct nargs and nret
++		 * values, but they may always return failure in rets[0].
++		 */
++		rc = -EINVAL;
++		goto fail;
++	}
+ 	args.rets = &args.args[be32_to_cpu(args.nargs)];
+ 
+ 	mutex_lock(&vcpu->kvm->arch.rtas_token_lock);
+@@ -269,9 +280,17 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
+ fail:
+ 	/*
+ 	 * We only get here if the guest has called RTAS with a bogus
+-	 * args pointer. That means we can't get to the args, and so we
+-	 * can't fail the RTAS call. So fail right out to userspace,
+-	 * which should kill the guest.
++	 * args pointer or nargs/nret values that would overflow the
++	 * array. That means we can't get to the args, and so we can't
++	 * fail the RTAS call. So fail right out to userspace, which
++	 * should kill the guest.
++	 *
++	 * SLOF should actually pass the hcall return value from the
++	 * rtas handler call in r3, so enter_rtas could be modified to
++	 * return a failure indication in r3 and we could return such
++	 * errors to the guest rather than failing out to host userspace.
++	 * However, old guests that don't test for failure could then
++	 * continue silently after errors, so for now we won't do this.
+ 	 */
+ 	return rc;
+ }
+diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
+index a2a68a958fa01..6e4f03c02a0ae 100644
+--- a/arch/powerpc/kvm/powerpc.c
++++ b/arch/powerpc/kvm/powerpc.c
+@@ -2045,9 +2045,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ 	{
+ 		struct kvm_enable_cap cap;
+ 		r = -EFAULT;
+-		vcpu_load(vcpu);
+ 		if (copy_from_user(&cap, argp, sizeof(cap)))
+ 			goto out;
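++		/* load the vcpu only after the copy succeeds so the error path needs no vcpu_put() */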
++		vcpu_load(vcpu);
+ 		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
+ 		vcpu_put(vcpu);
+ 		break;
+@@ -2071,9 +2071,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ 	case KVM_DIRTY_TLB: {
+ 		struct kvm_dirty_tlb dirty;
+ 		r = -EFAULT;
+-		vcpu_load(vcpu);
+ 		if (copy_from_user(&dirty, argp, sizeof(dirty)))
+ 			goto out;
++		vcpu_load(vcpu);
+ 		r = kvm_vcpu_ioctl_dirty_tlb(vcpu, &dirty);
+ 		vcpu_put(vcpu);
+ 		break;
+diff --git a/arch/riscv/include/asm/efi.h b/arch/riscv/include/asm/efi.h
+index 6d98cd999680b..7b3483ba2e847 100644
+--- a/arch/riscv/include/asm/efi.h
++++ b/arch/riscv/include/asm/efi.h
+@@ -27,10 +27,10 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
+ 
+ #define ARCH_EFI_IRQ_FLAGS_MASK (SR_IE | SR_SPIE)
+ 
+-/* Load initrd at enough distance from DRAM start */
++/* Load initrd anywhere in system RAM */
+ static inline unsigned long efi_get_max_initrd_addr(unsigned long image_addr)
+ {
+-	return image_addr + SZ_256M;
++	return ULONG_MAX;
+ }
+ 
+ #define alloc_screen_info(x...)		(&screen_info)
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 4c4c92ce0bb81..9b23b95c50cfe 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -123,7 +123,7 @@ void __init setup_bootmem(void)
+ {
+ 	phys_addr_t vmlinux_end = __pa_symbol(&_end);
+ 	phys_addr_t vmlinux_start = __pa_symbol(&_start);
+-	phys_addr_t dram_end = memblock_end_of_DRAM();
++	phys_addr_t dram_end;
+ 	phys_addr_t max_mapped_addr = __pa(~(ulong)0);
+ 
+ #ifdef CONFIG_XIP_KERNEL
+@@ -146,6 +146,8 @@ void __init setup_bootmem(void)
+ #endif
+ 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
+ 
++	dram_end = memblock_end_of_DRAM();
++
+ 	/*
+ 	 * memblock allocator is not aware of the fact that last 4K bytes of
+ 	 * the addressable memory can not be mapped because of IS_ERR_VALUE
+diff --git a/arch/s390/boot/text_dma.S b/arch/s390/boot/text_dma.S
+index f7c77cd518f2b..5ff5fee028016 100644
+--- a/arch/s390/boot/text_dma.S
++++ b/arch/s390/boot/text_dma.S
+@@ -9,16 +9,6 @@
+ #include <asm/errno.h>
+ #include <asm/sigp.h>
+ 
+-#ifdef CC_USING_EXPOLINE
+-	.pushsection .dma.text.__s390_indirect_jump_r14,"axG"
+-__dma__s390_indirect_jump_r14:
+-	larl	%r1,0f
+-	ex	0,0(%r1)
+-	j	.
+-0:	br	%r14
+-	.popsection
+-#endif
+-
+ 	.section .dma.text,"ax"
+ /*
+  * Simplified version of expoline thunk. The normal thunks can not be used here,
+@@ -27,11 +17,10 @@ __dma__s390_indirect_jump_r14:
+  * affects a few functions that are not performance-relevant.
+  */
+ 	.macro BR_EX_DMA_r14
+-#ifdef CC_USING_EXPOLINE
+-	jg	__dma__s390_indirect_jump_r14
+-#else
+-	br	%r14
+-#endif
++	larl	%r1,0f
++	ex	0,0(%r1)
++	j	.
++0:	br	%r14
+ 	.endm
+ 
+ /*
+diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h
+index 695c61989f97c..345cbe982a8bf 100644
+--- a/arch/s390/include/asm/ftrace.h
++++ b/arch/s390/include/asm/ftrace.h
+@@ -19,6 +19,7 @@ void ftrace_caller(void);
+ 
+ extern char ftrace_graph_caller_end;
+ extern unsigned long ftrace_plt;
++extern void *ftrace_func;
+ 
+ struct dyn_arch_ftrace { };
+ 
+diff --git a/arch/s390/kernel/ftrace.c b/arch/s390/kernel/ftrace.c
+index c6ddeb5029b49..2d8f595d91961 100644
+--- a/arch/s390/kernel/ftrace.c
++++ b/arch/s390/kernel/ftrace.c
+@@ -40,6 +40,7 @@
+  * trampoline (ftrace_plt), which clobbers also r1.
+  */
+ 
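++/* called from ftrace_caller; updated through ftrace_update_ftrace_func() */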
++void *ftrace_func __read_mostly = ftrace_stub;
+ unsigned long ftrace_plt;
+ 
+ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+@@ -85,6 +86,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+ 
+ int ftrace_update_ftrace_func(ftrace_func_t func)
+ {
++	ftrace_func = func;
+ 	return 0;
+ }
+ 
+diff --git a/arch/s390/kernel/mcount.S b/arch/s390/kernel/mcount.S
+index faf64c2f90f52..6b13797143a72 100644
+--- a/arch/s390/kernel/mcount.S
++++ b/arch/s390/kernel/mcount.S
+@@ -59,13 +59,13 @@ ENTRY(ftrace_caller)
+ #ifdef CONFIG_HAVE_MARCH_Z196_FEATURES
+ 	aghik	%r2,%r0,-MCOUNT_INSN_SIZE
+ 	lgrl	%r4,function_trace_op
+-	lgrl	%r1,ftrace_trace_function
++	lgrl	%r1,ftrace_func
+ #else
+ 	lgr	%r2,%r0
+ 	aghi	%r2,-MCOUNT_INSN_SIZE
+ 	larl	%r4,function_trace_op
+ 	lg	%r4,0(%r4)
+-	larl	%r1,ftrace_trace_function
++	larl	%r1,ftrace_func
+ 	lg	%r1,0(%r1)
+ #endif
+ 	lgr	%r3,%r14
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 63cae0476bb49..2ae419f5115a5 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -112,7 +112,7 @@ static inline void reg_set_seen(struct bpf_jit *jit, u32 b1)
+ {
+ 	u32 r1 = reg2hex[b1];
+ 
+-	if (!jit->seen_reg[r1] && r1 >= 6 && r1 <= 15)
++	if (r1 >= 6 && r1 <= 15 && !jit->seen_reg[r1])
+ 		jit->seen_reg[r1] = 1;
+ }
+ 
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index ca7866d63e982..739be5da3bca7 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -765,7 +765,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ 
+ 		edx.split.num_counters_fixed = min(cap.num_counters_fixed, MAX_FIXED_COUNTERS);
+ 		edx.split.bit_width_fixed = cap.bit_width_fixed;
+-		edx.split.anythread_deprecated = 1;
++		if (cap.version)
++			edx.split.anythread_deprecated = 1;
+ 		edx.split.reserved1 = 0;
+ 		edx.split.reserved2 = 0;
+ 
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 8d36f0c730718..02d60d7f903da 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -1271,8 +1271,8 @@ static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ 	/* Pin guest memory */
+ 	guest_page = sev_pin_memory(kvm, params.guest_uaddr & PAGE_MASK,
+ 				    PAGE_SIZE, &n, 0);
+-	if (!guest_page)
+-		return -EFAULT;
++	if (IS_ERR(guest_page))
++		return PTR_ERR(guest_page);
+ 
+ 	/* allocate memory for header and transport buffer */
+ 	ret = -ENOMEM;
+@@ -1309,8 +1309,9 @@ static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ 	}
+ 
+ 	/* Copy packet header to userspace. */
+-	ret = copy_to_user((void __user *)(uintptr_t)params.hdr_uaddr, hdr,
+-				params.hdr_len);
++	if (copy_to_user((void __user *)(uintptr_t)params.hdr_uaddr, hdr,
++			 params.hdr_len))
++		ret = -EFAULT;
+ 
+ e_free_trans_data:
+ 	kfree(trans_data);
+@@ -1462,11 +1463,12 @@ static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ 	data.trans_len = params.trans_len;
+ 
+ 	/* Pin guest memory */
+-	ret = -EFAULT;
+ 	guest_page = sev_pin_memory(kvm, params.guest_uaddr & PAGE_MASK,
+ 				    PAGE_SIZE, &n, 0);
+-	if (!guest_page)
++	if (IS_ERR(guest_page)) {
++		ret = PTR_ERR(guest_page);
+ 		goto e_free_trans;
++	}
+ 
+ 	/* The RECEIVE_UPDATE_DATA command requires C-bit to be always set. */
+ 	data.guest_address = (page_to_pfn(guest_page[0]) << PAGE_SHIFT) + offset;
+diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
+index eedec61e3476e..226f849fe7dc4 100644
+--- a/drivers/acpi/Kconfig
++++ b/drivers/acpi/Kconfig
+@@ -370,7 +370,7 @@ config ACPI_TABLE_UPGRADE
+ config ACPI_TABLE_OVERRIDE_VIA_BUILTIN_INITRD
+ 	bool "Override ACPI tables from built-in initrd"
+ 	depends on ACPI_TABLE_UPGRADE
+-	depends on INITRAMFS_SOURCE!="" && INITRAMFS_COMPRESSION=""
++	depends on INITRAMFS_SOURCE!="" && INITRAMFS_COMPRESSION_NONE
+ 	help
+ 	  This option provides functionality to override arbitrary ACPI tables
+ 	  from built-in uncompressed initrd.
+diff --git a/drivers/acpi/utils.c b/drivers/acpi/utils.c
+index 3b54b8fd73969..27ec9d57f3b82 100644
+--- a/drivers/acpi/utils.c
++++ b/drivers/acpi/utils.c
+@@ -846,11 +846,9 @@ EXPORT_SYMBOL(acpi_dev_present);
+  * Return the next match of ACPI device if another matching device was present
+  * at the moment of invocation, or NULL otherwise.
+  *
+- * FIXME: The function does not tolerate the sudden disappearance of @adev, e.g.
+- * in the case of a hotplug event. That said, the caller should ensure that
+- * this will never happen.
+- *
+  * The caller is responsible for invoking acpi_dev_put() on the returned device.
++ * On the other hand, the function invokes acpi_dev_put() on the given @adev,
++ * assuming that its reference counter has been increased beforehand.
+  *
+  * See additional information in acpi_dev_present() as well.
+  */
+@@ -866,6 +864,7 @@ acpi_dev_get_next_match_dev(struct acpi_device *adev, const char *hid, const cha
+ 	match.hrv = hrv;
+ 
+ 	dev = bus_find_device(&acpi_bus_type, start, &match, acpi_dev_match_cb);
++	acpi_dev_put(adev);
+ 	return dev ? to_acpi_device(dev) : NULL;
+ }
+ EXPORT_SYMBOL(acpi_dev_get_next_match_dev);
+diff --git a/drivers/base/auxiliary.c b/drivers/base/auxiliary.c
+index adc199dfba3cb..6a30264ab2ba1 100644
+--- a/drivers/base/auxiliary.c
++++ b/drivers/base/auxiliary.c
+@@ -231,6 +231,8 @@ EXPORT_SYMBOL_GPL(auxiliary_find_device);
+ int __auxiliary_driver_register(struct auxiliary_driver *auxdrv,
+ 				struct module *owner, const char *modname)
+ {
++	int ret;
++
+ 	if (WARN_ON(!auxdrv->probe) || WARN_ON(!auxdrv->id_table))
+ 		return -EINVAL;
+ 
+@@ -246,7 +248,11 @@ int __auxiliary_driver_register(struct auxiliary_driver *auxdrv,
+ 	auxdrv->driver.bus = &auxiliary_bus_type;
+ 	auxdrv->driver.mod_name = modname;
+ 
+-	return driver_register(&auxdrv->driver);
++	ret = driver_register(&auxdrv->driver);
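++	/* on failure, free the name string allocated for this driver earlier in registration */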
++	if (ret)
++		kfree(auxdrv->driver.name);
++
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(__auxiliary_driver_register);
+ 
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 54ba506e5a89d..042b13d88f179 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -574,8 +574,10 @@ static void devlink_remove_symlinks(struct device *dev,
+ 		return;
+ 	}
+ 
+-	snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
+-	sysfs_remove_link(&con->kobj, buf);
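++	/* the consumer may already be unregistered; only touch its sysfs entries if still present */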
++	if (device_is_registered(con)) {
++		snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
++		sysfs_remove_link(&con->kobj, buf);
++	}
+ 	snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
+ 	sysfs_remove_link(&sup->kobj, buf);
+ 	kfree(buf);
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index bbb88eb009e0b..7af1c53c5cf74 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -4100,8 +4100,6 @@ again:
+ 
+ static bool rbd_quiesce_lock(struct rbd_device *rbd_dev)
+ {
+-	bool need_wait;
+-
+ 	dout("%s rbd_dev %p\n", __func__, rbd_dev);
+ 	lockdep_assert_held_write(&rbd_dev->lock_rwsem);
+ 
+@@ -4113,11 +4111,11 @@ static bool rbd_quiesce_lock(struct rbd_device *rbd_dev)
+ 	 */
+ 	rbd_dev->lock_state = RBD_LOCK_STATE_RELEASING;
+ 	rbd_assert(!completion_done(&rbd_dev->releasing_wait));
+-	need_wait = !list_empty(&rbd_dev->running_list);
+-	downgrade_write(&rbd_dev->lock_rwsem);
+-	if (need_wait)
+-		wait_for_completion(&rbd_dev->releasing_wait);
+-	up_read(&rbd_dev->lock_rwsem);
++	if (list_empty(&rbd_dev->running_list))
++		return true;
++
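++	/* drop the lock while waiting so that in-flight requests can complete */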
++	up_write(&rbd_dev->lock_rwsem);
++	wait_for_completion(&rbd_dev->releasing_wait);
+ 
+ 	down_write(&rbd_dev->lock_rwsem);
+ 	if (rbd_dev->lock_state != RBD_LOCK_STATE_RELEASING)
+@@ -4203,15 +4201,11 @@ static void rbd_handle_acquired_lock(struct rbd_device *rbd_dev, u8 struct_v,
+ 	if (!rbd_cid_equal(&cid, &rbd_empty_cid)) {
+ 		down_write(&rbd_dev->lock_rwsem);
+ 		if (rbd_cid_equal(&cid, &rbd_dev->owner_cid)) {
+-			/*
+-			 * we already know that the remote client is
+-			 * the owner
+-			 */
+-			up_write(&rbd_dev->lock_rwsem);
+-			return;
++			dout("%s rbd_dev %p cid %llu-%llu == owner_cid\n",
++			     __func__, rbd_dev, cid.gid, cid.handle);
++		} else {
++			rbd_set_owner_cid(rbd_dev, &cid);
+ 		}
+-
+-		rbd_set_owner_cid(rbd_dev, &cid);
+ 		downgrade_write(&rbd_dev->lock_rwsem);
+ 	} else {
+ 		down_read(&rbd_dev->lock_rwsem);
+@@ -4236,14 +4230,12 @@ static void rbd_handle_released_lock(struct rbd_device *rbd_dev, u8 struct_v,
+ 	if (!rbd_cid_equal(&cid, &rbd_empty_cid)) {
+ 		down_write(&rbd_dev->lock_rwsem);
+ 		if (!rbd_cid_equal(&cid, &rbd_dev->owner_cid)) {
+-			dout("%s rbd_dev %p unexpected owner, cid %llu-%llu != owner_cid %llu-%llu\n",
++			dout("%s rbd_dev %p cid %llu-%llu != owner_cid %llu-%llu\n",
+ 			     __func__, rbd_dev, cid.gid, cid.handle,
+ 			     rbd_dev->owner_cid.gid, rbd_dev->owner_cid.handle);
+-			up_write(&rbd_dev->lock_rwsem);
+-			return;
++		} else {
++			rbd_set_owner_cid(rbd_dev, &rbd_empty_cid);
+ 		}
+-
+-		rbd_set_owner_cid(rbd_dev, &rbd_empty_cid);
+ 		downgrade_write(&rbd_dev->lock_rwsem);
+ 	} else {
+ 		down_read(&rbd_dev->lock_rwsem);
+diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
+index 22acde118bc35..fc9196f11cb7d 100644
+--- a/drivers/bus/mhi/core/main.c
++++ b/drivers/bus/mhi/core/main.c
+@@ -773,11 +773,18 @@ static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
+ 	cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
+ 
+ 	chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
+-	mhi_chan = &mhi_cntrl->mhi_chan[chan];
+-	write_lock_bh(&mhi_chan->lock);
+-	mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
+-	complete(&mhi_chan->completion);
+-	write_unlock_bh(&mhi_chan->lock);
++
++	if (chan < mhi_cntrl->max_chan &&
++	    mhi_cntrl->mhi_chan[chan].configured) {
++		mhi_chan = &mhi_cntrl->mhi_chan[chan];
++		write_lock_bh(&mhi_chan->lock);
++		mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
++		complete(&mhi_chan->completion);
++		write_unlock_bh(&mhi_chan->lock);
++	} else {
++		dev_err(&mhi_cntrl->mhi_dev->dev,
++			"Completion packet for invalid channel ID: %d\n", chan);
++	}
+ 
+ 	mhi_del_ring_element(mhi_cntrl, mhi_ring);
+ }
+diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/pci_generic.c
+index ca3bc40427f85..4dd1077354af0 100644
+--- a/drivers/bus/mhi/pci_generic.c
++++ b/drivers/bus/mhi/pci_generic.c
+@@ -32,6 +32,8 @@
+  * @edl: emergency download mode firmware path (if any)
+  * @bar_num: PCI base address register to use for MHI MMIO register space
+  * @dma_data_width: DMA transfer word size (32 or 64 bits)
++ * @sideband_wake: Devices using dedicated sideband GPIO for wakeup instead
++ *		   of inband wake support (such as sdx24)
+  */
+ struct mhi_pci_dev_info {
+ 	const struct mhi_controller_config *config;
+@@ -40,6 +42,7 @@ struct mhi_pci_dev_info {
+ 	const char *edl;
+ 	unsigned int bar_num;
+ 	unsigned int dma_data_width;
++	bool sideband_wake;
+ };
+ 
+ #define MHI_CHANNEL_CONFIG_UL(ch_num, ch_name, el_count, ev_ring) \
+@@ -72,6 +75,22 @@ struct mhi_pci_dev_info {
+ 		.doorbell_mode_switch = false,		\
+ 	}
+ 
++#define MHI_CHANNEL_CONFIG_DL_AUTOQUEUE(ch_num, ch_name, el_count, ev_ring) \
++	{						\
++		.num = ch_num,				\
++		.name = ch_name,			\
++		.num_elements = el_count,		\
++		.event_ring = ev_ring,			\
++		.dir = DMA_FROM_DEVICE,			\
++		.ee_mask = BIT(MHI_EE_AMSS),		\
++		.pollcfg = 0,				\
++		.doorbell = MHI_DB_BRST_DISABLE,	\
++		.lpm_notify = false,			\
++		.offload_channel = false,		\
++		.doorbell_mode_switch = false,		\
++		.auto_queue = true,			\
++	}
++
+ #define MHI_EVENT_CONFIG_CTRL(ev_ring, el_count) \
+ 	{					\
+ 		.num_elements = el_count,	\
+@@ -210,7 +229,7 @@ static const struct mhi_channel_config modem_qcom_v1_mhi_channels[] = {
+ 	MHI_CHANNEL_CONFIG_UL(14, "QMI", 4, 0),
+ 	MHI_CHANNEL_CONFIG_DL(15, "QMI", 4, 0),
+ 	MHI_CHANNEL_CONFIG_UL(20, "IPCR", 8, 0),
+-	MHI_CHANNEL_CONFIG_DL(21, "IPCR", 8, 0),
++	MHI_CHANNEL_CONFIG_DL_AUTOQUEUE(21, "IPCR", 8, 0),
+ 	MHI_CHANNEL_CONFIG_UL_FP(34, "FIREHOSE", 32, 0),
+ 	MHI_CHANNEL_CONFIG_DL_FP(35, "FIREHOSE", 32, 0),
+ 	MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0", 128, 2),
+@@ -242,7 +261,8 @@ static const struct mhi_pci_dev_info mhi_qcom_sdx65_info = {
+ 	.edl = "qcom/sdx65m/edl.mbn",
+ 	.config = &modem_qcom_v1_mhiv_config,
+ 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+-	.dma_data_width = 32
++	.dma_data_width = 32,
++	.sideband_wake = false,
+ };
+ 
+ static const struct mhi_pci_dev_info mhi_qcom_sdx55_info = {
+@@ -251,7 +271,8 @@ static const struct mhi_pci_dev_info mhi_qcom_sdx55_info = {
+ 	.edl = "qcom/sdx55m/edl.mbn",
+ 	.config = &modem_qcom_v1_mhiv_config,
+ 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+-	.dma_data_width = 32
++	.dma_data_width = 32,
++	.sideband_wake = false,
+ };
+ 
+ static const struct mhi_pci_dev_info mhi_qcom_sdx24_info = {
+@@ -259,7 +280,8 @@ static const struct mhi_pci_dev_info mhi_qcom_sdx24_info = {
+ 	.edl = "qcom/prog_firehose_sdx24.mbn",
+ 	.config = &modem_qcom_v1_mhiv_config,
+ 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+-	.dma_data_width = 32
++	.dma_data_width = 32,
++	.sideband_wake = true,
+ };
+ 
+ static const struct mhi_channel_config mhi_quectel_em1xx_channels[] = {
+@@ -301,7 +323,8 @@ static const struct mhi_pci_dev_info mhi_quectel_em1xx_info = {
+ 	.edl = "qcom/prog_firehose_sdx24.mbn",
+ 	.config = &modem_quectel_em1xx_config,
+ 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+-	.dma_data_width = 32
++	.dma_data_width = 32,
++	.sideband_wake = true,
+ };
+ 
+ static const struct mhi_channel_config mhi_foxconn_sdx55_channels[] = {
+@@ -339,7 +362,8 @@ static const struct mhi_pci_dev_info mhi_foxconn_sdx55_info = {
+ 	.edl = "qcom/sdx55m/edl.mbn",
+ 	.config = &modem_foxconn_sdx55_config,
+ 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+-	.dma_data_width = 32
++	.dma_data_width = 32,
++	.sideband_wake = false,
+ };
+ 
+ static const struct pci_device_id mhi_pci_id_table[] = {
+@@ -640,9 +664,12 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	mhi_cntrl->status_cb = mhi_pci_status_cb;
+ 	mhi_cntrl->runtime_get = mhi_pci_runtime_get;
+ 	mhi_cntrl->runtime_put = mhi_pci_runtime_put;
+-	mhi_cntrl->wake_get = mhi_pci_wake_get_nop;
+-	mhi_cntrl->wake_put = mhi_pci_wake_put_nop;
+-	mhi_cntrl->wake_toggle = mhi_pci_wake_toggle_nop;
++
++	if (info->sideband_wake) {
++		mhi_cntrl->wake_get = mhi_pci_wake_get_nop;
++		mhi_cntrl->wake_put = mhi_pci_wake_put_nop;
++		mhi_cntrl->wake_toggle = mhi_pci_wake_toggle_nop;
++	}
+ 
+ 	err = mhi_pci_claim(mhi_cntrl, info->bar_num, DMA_BIT_MASK(info->dma_data_width));
+ 	if (err)
+diff --git a/drivers/firmware/arm_scmi/bus.c b/drivers/firmware/arm_scmi/bus.c
+index 784cf0027da3c..9184a0d5acbef 100644
+--- a/drivers/firmware/arm_scmi/bus.c
++++ b/drivers/firmware/arm_scmi/bus.c
+@@ -139,6 +139,9 @@ int scmi_driver_register(struct scmi_driver *driver, struct module *owner,
+ {
+ 	int retval;
+ 
++	if (!driver->probe)
++		return -EINVAL;
++
+ 	retval = scmi_protocol_device_request(driver->id_table);
+ 	if (retval)
+ 		return retval;
+diff --git a/drivers/firmware/efi/dev-path-parser.c b/drivers/firmware/efi/dev-path-parser.c
+index 5c9625e552f4f..eb9c65f978419 100644
+--- a/drivers/firmware/efi/dev-path-parser.c
++++ b/drivers/firmware/efi/dev-path-parser.c
+@@ -12,52 +12,38 @@
+ #include <linux/efi.h>
+ #include <linux/pci.h>
+ 
+-struct acpi_hid_uid {
+-	struct acpi_device_id hid[2];
+-	char uid[11]; /* UINT_MAX + null byte */
+-};
+-
+-static int __init match_acpi_dev(struct device *dev, const void *data)
+-{
+-	struct acpi_hid_uid hid_uid = *(const struct acpi_hid_uid *)data;
+-	struct acpi_device *adev = to_acpi_device(dev);
+-
+-	if (acpi_match_device_ids(adev, hid_uid.hid))
+-		return 0;
+-
+-	if (adev->pnp.unique_id)
+-		return !strcmp(adev->pnp.unique_id, hid_uid.uid);
+-	else
+-		return !strcmp("0", hid_uid.uid);
+-}
+-
+ static long __init parse_acpi_path(const struct efi_dev_path *node,
+ 				   struct device *parent, struct device **child)
+ {
+-	struct acpi_hid_uid hid_uid = {};
++	char hid[ACPI_ID_LEN], uid[11]; /* UINT_MAX + null byte */
++	struct acpi_device *adev;
+ 	struct device *phys_dev;
+ 
+ 	if (node->header.length != 12)
+ 		return -EINVAL;
+ 
+-	sprintf(hid_uid.hid[0].id, "%c%c%c%04X",
++	sprintf(hid, "%c%c%c%04X",
+ 		'A' + ((node->acpi.hid >> 10) & 0x1f) - 1,
+ 		'A' + ((node->acpi.hid >>  5) & 0x1f) - 1,
+ 		'A' + ((node->acpi.hid >>  0) & 0x1f) - 1,
+ 			node->acpi.hid >> 16);
+-	sprintf(hid_uid.uid, "%u", node->acpi.uid);
++	sprintf(uid, "%u", node->acpi.uid);
+ 
+-	*child = bus_find_device(&acpi_bus_type, NULL, &hid_uid,
+-				 match_acpi_dev);
+-	if (!*child)
++	for_each_acpi_dev_match(adev, hid, NULL, -1) {
++		if (adev->pnp.unique_id && !strcmp(adev->pnp.unique_id, uid))
++			break;
++		if (!adev->pnp.unique_id && node->acpi.uid == 0)
++			break;
++	}
++	if (!adev)
+ 		return -ENODEV;
+ 
+-	phys_dev = acpi_get_first_physical_node(to_acpi_device(*child));
++	phys_dev = acpi_get_first_physical_node(adev);
+ 	if (phys_dev) {
+-		get_device(phys_dev);
+-		put_device(*child);
+-		*child = phys_dev;
+-	}
++		*child = get_device(phys_dev);
++		acpi_dev_put(adev);
++	} else
++		*child = &adev->dev;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 4b7ee3fa9224f..847f33ffc4aed 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -896,6 +896,7 @@ static int __init efi_memreserve_map_root(void)
+ static int efi_mem_reserve_iomem(phys_addr_t addr, u64 size)
+ {
+ 	struct resource *res, *parent;
++	int ret;
+ 
+ 	res = kzalloc(sizeof(struct resource), GFP_ATOMIC);
+ 	if (!res)
+@@ -908,7 +909,17 @@ static int efi_mem_reserve_iomem(phys_addr_t addr, u64 size)
+ 
+ 	/* we expect a conflict with a 'System RAM' region */
+ 	parent = request_resource_conflict(&iomem_resource, res);
+-	return parent ? request_resource(parent, res) : 0;
++	ret = parent ? request_resource(parent, res) : 0;
++
++	/*
++	 * Given that efi_mem_reserve_iomem() can be called at any
++	 * time, only call memblock_reserve() if the architecture
++	 * keeps the infrastructure around.
++	 */
++	if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK) && !ret)
++		memblock_reserve(addr, size);
++
++	return ret;
+ }
+ 
+ int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
+diff --git a/drivers/firmware/efi/tpm.c b/drivers/firmware/efi/tpm.c
+index c1955d320fecd..8f665678e9e39 100644
+--- a/drivers/firmware/efi/tpm.c
++++ b/drivers/firmware/efi/tpm.c
+@@ -62,9 +62,11 @@ int __init efi_tpm_eventlog_init(void)
+ 	tbl_size = sizeof(*log_tbl) + log_tbl->size;
+ 	memblock_reserve(efi.tpm_log, tbl_size);
+ 
+-	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR ||
+-	    log_tbl->version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) {
+-		pr_warn(FW_BUG "TPM Final Events table missing or invalid\n");
++	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR) {
++		pr_info("TPM Final Events table not present\n");
++		goto out;
++	} else if (log_tbl->version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) {
++		pr_warn(FW_BUG "TPM Final Events table invalid\n");
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 9ea5b4d2fe8b8..ef65c20feda70 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -3290,6 +3290,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_3[] =
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER7_SELECT, 0xf0f001ff, 0x00000000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER8_SELECT, 0xf0f001ff, 0x00000000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER9_SELECT, 0xf0f001ff, 0x00000000),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSX_DEBUG_1, 0x00010000, 0x00010020),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmTA_CNTL_AUX, 0xfff7ffff, 0x01030000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0xffbfffff, 0x00a00000)
+ };
+@@ -3369,6 +3370,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_3_vangogh[] =
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_ENHANCE_2, 0xffffffbf, 0x00000020),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSPI_CONFIG_CNTL_1_Vangogh, 0xffffffff, 0x00070103),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQG_CONFIG, 0x000017ff, 0x00001000),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSX_DEBUG_1, 0x00010000, 0x00010020),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmTA_CNTL_AUX, 0xfff7ffff, 0x01030000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0xffffffff, 0x00400000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmVGT_GS_MAX_WAVE_ID, 0x00000fff, 0x000000ff),
+@@ -3411,6 +3413,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_3_4[] =
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER7_SELECT, 0xf0f001ff, 0x00000000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER8_SELECT, 0xf0f001ff, 0x00000000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_PERFCOUNTER9_SELECT, 0xf0f001ff, 0x00000000),
++	SOC15_REG_GOLDEN_VALUE(GC, 0, mmSX_DEBUG_1, 0x00010000, 0x00010020),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmTA_CNTL_AUX, 0x01030000, 0x01030000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0x03a00000, 0x00a00000),
+ 	SOC15_REG_GOLDEN_VALUE(GC, 0, mmLDS_CONFIG,  0x00000020, 0x00000020)
+diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
+index 495a4767a4430..94591c1a748f6 100644
+--- a/drivers/gpu/drm/drm_ioctl.c
++++ b/drivers/gpu/drm/drm_ioctl.c
+@@ -827,6 +827,9 @@ long drm_ioctl(struct file *filp,
+ 	if (drm_dev_is_unplugged(dev))
+ 		return -ENODEV;
+ 
++	if (DRM_IOCTL_TYPE(cmd) != DRM_IOCTL_BASE)
++		return -ENOTTY;
++
+ 	is_driver_ioctl = nr >= DRM_COMMAND_BASE && nr < DRM_COMMAND_END;
+ 
+ 	if (is_driver_ioctl) {
+diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
+index dda320749c65c..2358c92733b0d 100644
+--- a/drivers/gpu/drm/i915/gvt/handlers.c
++++ b/drivers/gpu/drm/i915/gvt/handlers.c
+@@ -1977,6 +1977,21 @@ static int elsp_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
+ 	if (drm_WARN_ON(&i915->drm, !engine))
+ 		return -EINVAL;
+ 
++	/*
++	 * Because d3_entered is used to indicate that PPGTT invalidation is
++	 * skipped on vGPU reset, it is set on the D0->D3 PCI config write and
++	 * cleared after vGPU reset when resuming.
++	 * On S0ix exit, the device power state also transitions from D3 to D0,
++	 * as in S3 resume, but without a vGPU reset (which the QEMU device
++	 * model would trigger). After S0ix exit, all engines continue to work.
++	 * However, d3_entered remains set, which will break the next vGPU
++	 * reset logic (the expected PPGTT invalidation is missed).
++	 * Engines can only work in D0. Thus the first ELSP write gives GVT a
++	 * chance to clear d3_entered.
++	 */
++	if (vgpu->d3_entered)
++		vgpu->d3_entered = false;
++
+ 	execlist = &vgpu->submission.execlist[engine->id];
+ 
+ 	execlist->elsp_dwords.data[3 - execlist->elsp_dwords.index] = data;
+diff --git a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
+index 5e9ccefb88f62..bbdd086be7f59 100644
+--- a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
++++ b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
+@@ -447,7 +447,6 @@ static int rpi_touchscreen_remove(struct i2c_client *i2c)
+ 	drm_panel_remove(&ts->base);
+ 
+ 	mipi_dsi_device_unregister(ts->dsi);
+-	kfree(ts->dsi);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/ttm/ttm_device.c b/drivers/gpu/drm/ttm/ttm_device.c
+index 3d9c62b93e299..ef6e0c042bb12 100644
+--- a/drivers/gpu/drm/ttm/ttm_device.c
++++ b/drivers/gpu/drm/ttm/ttm_device.c
+@@ -100,6 +100,8 @@ static int ttm_global_init(void)
+ 	debugfs_create_atomic_t("buffer_objects", 0444, ttm_debugfs_root,
+ 				&glob->bo_count);
+ out:
++	if (ret)
++		--ttm_glob_use_count;
+ 	mutex_unlock(&ttm_global_mutex);
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 188b74c9e9fff..edee565334d8e 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1690,38 +1690,46 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
+ 	vc4_hdmi_cec_update_clk_div(vc4_hdmi);
+ 
+ 	if (vc4_hdmi->variant->external_irq_controller) {
+-		ret = devm_request_threaded_irq(&pdev->dev,
+-						platform_get_irq_byname(pdev, "cec-rx"),
+-						vc4_cec_irq_handler_rx_bare,
+-						vc4_cec_irq_handler_rx_thread, 0,
+-						"vc4 hdmi cec rx", vc4_hdmi);
++		ret = request_threaded_irq(platform_get_irq_byname(pdev, "cec-rx"),
++					   vc4_cec_irq_handler_rx_bare,
++					   vc4_cec_irq_handler_rx_thread, 0,
++					   "vc4 hdmi cec rx", vc4_hdmi);
+ 		if (ret)
+ 			goto err_delete_cec_adap;
+ 
+-		ret = devm_request_threaded_irq(&pdev->dev,
+-						platform_get_irq_byname(pdev, "cec-tx"),
+-						vc4_cec_irq_handler_tx_bare,
+-						vc4_cec_irq_handler_tx_thread, 0,
+-						"vc4 hdmi cec tx", vc4_hdmi);
++		ret = request_threaded_irq(platform_get_irq_byname(pdev, "cec-tx"),
++					   vc4_cec_irq_handler_tx_bare,
++					   vc4_cec_irq_handler_tx_thread, 0,
++					   "vc4 hdmi cec tx", vc4_hdmi);
+ 		if (ret)
+-			goto err_delete_cec_adap;
++			goto err_remove_cec_rx_handler;
+ 	} else {
+ 		HDMI_WRITE(HDMI_CEC_CPU_MASK_SET, 0xffffffff);
+ 
+-		ret = devm_request_threaded_irq(&pdev->dev, platform_get_irq(pdev, 0),
+-						vc4_cec_irq_handler,
+-						vc4_cec_irq_handler_thread, 0,
+-						"vc4 hdmi cec", vc4_hdmi);
++		ret = request_threaded_irq(platform_get_irq(pdev, 0),
++					   vc4_cec_irq_handler,
++					   vc4_cec_irq_handler_thread, 0,
++					   "vc4 hdmi cec", vc4_hdmi);
+ 		if (ret)
+ 			goto err_delete_cec_adap;
+ 	}
+ 
+ 	ret = cec_register_adapter(vc4_hdmi->cec_adap, &pdev->dev);
+ 	if (ret < 0)
+-		goto err_delete_cec_adap;
++		goto err_remove_handlers;
+ 
+ 	return 0;
+ 
++err_remove_handlers:
++	if (vc4_hdmi->variant->external_irq_controller)
++		free_irq(platform_get_irq_byname(pdev, "cec-tx"), vc4_hdmi);
++	else
++		free_irq(platform_get_irq(pdev, 0), vc4_hdmi);
++
++err_remove_cec_rx_handler:
++	if (vc4_hdmi->variant->external_irq_controller)
++		free_irq(platform_get_irq_byname(pdev, "cec-rx"), vc4_hdmi);
++
+ err_delete_cec_adap:
+ 	cec_delete_adapter(vc4_hdmi->cec_adap);
+ 
+@@ -1730,6 +1738,15 @@ err_delete_cec_adap:
+ 
+ static void vc4_hdmi_cec_exit(struct vc4_hdmi *vc4_hdmi)
+ {
++	struct platform_device *pdev = vc4_hdmi->pdev;
++
++	if (vc4_hdmi->variant->external_irq_controller) {
++		free_irq(platform_get_irq_byname(pdev, "cec-rx"), vc4_hdmi);
++		free_irq(platform_get_irq_byname(pdev, "cec-tx"), vc4_hdmi);
++	} else {
++		free_irq(platform_get_irq(pdev, 0), vc4_hdmi);
++	}
++
+ 	cec_unregister_adapter(vc4_hdmi->cec_adap);
+ }
+ #else
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
+index 5648664f71bc1..f2d6254154585 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
+@@ -354,7 +354,6 @@ static void vmw_otable_batch_takedown(struct vmw_private *dev_priv,
+ 	ttm_bo_unpin(bo);
+ 	ttm_bo_unreserve(bo);
+ 
+-	ttm_bo_unpin(batch->otable_bo);
+ 	ttm_bo_put(batch->otable_bo);
+ 	batch->otable_bo = NULL;
+ }
+diff --git a/drivers/i2c/busses/i2c-mpc.c b/drivers/i2c/busses/i2c-mpc.c
+index 6d5014ebaab5e..a6ea1eb1394e1 100644
+--- a/drivers/i2c/busses/i2c-mpc.c
++++ b/drivers/i2c/busses/i2c-mpc.c
+@@ -635,8 +635,8 @@ static irqreturn_t mpc_i2c_isr(int irq, void *dev_id)
+ 
+ 	status = readb(i2c->base + MPC_I2C_SR);
+ 	if (status & CSR_MIF) {
+-		/* Read again to allow register to stabilise */
+-		status = readb(i2c->base + MPC_I2C_SR);
++		/* Wait up to 100us for transfer to properly complete */
++		readb_poll_timeout(i2c->base + MPC_I2C_SR, status, !(status & CSR_MCF), 0, 100);
+ 		writeb(0, i2c->base + MPC_I2C_SR);
+ 		mpc_i2c_do_intr(i2c, status);
+ 		return IRQ_HANDLED;
+diff --git a/drivers/media/pci/intel/ipu3/cio2-bridge.c b/drivers/media/pci/intel/ipu3/cio2-bridge.c
+index 4657e99df0339..59a36f9226755 100644
+--- a/drivers/media/pci/intel/ipu3/cio2-bridge.c
++++ b/drivers/media/pci/intel/ipu3/cio2-bridge.c
+@@ -173,10 +173,8 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
+ 	int ret;
+ 
+ 	for_each_acpi_dev_match(adev, cfg->hid, NULL, -1) {
+-		if (!adev->status.enabled) {
+-			acpi_dev_put(adev);
++		if (!adev->status.enabled)
+ 			continue;
+-		}
+ 
+ 		if (bridge->n_sensors >= CIO2_NUM_PORTS) {
+ 			acpi_dev_put(adev);
+@@ -185,7 +183,6 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
+ 		}
+ 
+ 		sensor = &bridge->sensors[bridge->n_sensors];
+-		sensor->adev = adev;
+ 		strscpy(sensor->name, cfg->hid, sizeof(sensor->name));
+ 
+ 		ret = cio2_bridge_read_acpi_buffer(adev, "SSDB",
+@@ -215,6 +212,7 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
+ 			goto err_free_swnodes;
+ 		}
+ 
++		sensor->adev = acpi_dev_get(adev);
+ 		adev->fwnode.secondary = fwnode;
+ 
+ 		dev_info(&cio2->dev, "Found supported sensor %s\n",
+diff --git a/drivers/media/pci/ngene/ngene-core.c b/drivers/media/pci/ngene/ngene-core.c
+index 07f342db6701f..7481f553f9595 100644
+--- a/drivers/media/pci/ngene/ngene-core.c
++++ b/drivers/media/pci/ngene/ngene-core.c
+@@ -385,7 +385,7 @@ static int ngene_command_config_free_buf(struct ngene *dev, u8 *config)
+ 
+ 	com.cmd.hdr.Opcode = CMD_CONFIGURE_FREE_BUFFER;
+ 	com.cmd.hdr.Length = 6;
+-	memcpy(&com.cmd.ConfigureBuffers.config, config, 6);
++	memcpy(&com.cmd.ConfigureFreeBuffers.config, config, 6);
+ 	com.in_len = 6;
+ 	com.out_len = 0;
+ 
+diff --git a/drivers/media/pci/ngene/ngene.h b/drivers/media/pci/ngene/ngene.h
+index 84f04e0e0cb9a..3d296f1998a1a 100644
+--- a/drivers/media/pci/ngene/ngene.h
++++ b/drivers/media/pci/ngene/ngene.h
+@@ -407,12 +407,14 @@ enum _BUFFER_CONFIGS {
+ 
+ struct FW_CONFIGURE_FREE_BUFFERS {
+ 	struct FW_HEADER hdr;
+-	u8   UVI1_BufferLength;
+-	u8   UVI2_BufferLength;
+-	u8   TVO_BufferLength;
+-	u8   AUD1_BufferLength;
+-	u8   AUD2_BufferLength;
+-	u8   TVA_BufferLength;
++	struct {
++		u8   UVI1_BufferLength;
++		u8   UVI2_BufferLength;
++		u8   TVO_BufferLength;
++		u8   AUD1_BufferLength;
++		u8   AUD2_BufferLength;
++		u8   TVA_BufferLength;
++	} __packed config;
+ } __attribute__ ((__packed__));
+ 
+ struct FW_CONFIGURE_UART {
+diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
+index 7a6f01ace78ac..305ffad131a29 100644
+--- a/drivers/misc/eeprom/at24.c
++++ b/drivers/misc/eeprom/at24.c
+@@ -714,23 +714,20 @@ static int at24_probe(struct i2c_client *client)
+ 	}
+ 
+ 	/*
+-	 * If the 'label' property is not present for the AT24 EEPROM,
+-	 * then nvmem_config.id is initialised to NVMEM_DEVID_AUTO,
+-	 * and this will append the 'devid' to the name of the NVMEM
+-	 * device. This is purely legacy and the AT24 driver has always
+-	 * defaulted to this. However, if the 'label' property is
+-	 * present then this means that the name is specified by the
+-	 * firmware and this name should be used verbatim and so it is
+-	 * not necessary to append the 'devid'.
++	 * We initialize nvmem_config.id to NVMEM_DEVID_AUTO even if the
++	 * label property is set, as some platforms can have multiple EEPROMs
++	 * with the same label and we cannot register each of those with the
++	 * same label. Failing to register those EEPROMs triggers a cascade
++	 * failure on such platforms.
+ 	 */
++	nvmem_config.id = NVMEM_DEVID_AUTO;
++
+ 	if (device_property_present(dev, "label")) {
+-		nvmem_config.id = NVMEM_DEVID_NONE;
+ 		err = device_property_read_string(dev, "label",
+ 						  &nvmem_config.name);
+ 		if (err)
+ 			return err;
+ 	} else {
+-		nvmem_config.id = NVMEM_DEVID_AUTO;
+ 		nvmem_config.name = dev_name(dev);
+ 	}
+ 
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index 0b0577990ddc9..8375d4381d47e 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -75,7 +75,8 @@ static void mmc_host_classdev_release(struct device *dev)
+ {
+ 	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+ 	wakeup_source_unregister(host->ws);
+-	ida_simple_remove(&mmc_host_ida, host->index);
++	if (of_alias_get_id(host->parent->of_node, "mmc") < 0)
++		ida_simple_remove(&mmc_host_ida, host->index);
+ 	kfree(host);
+ }
+ 
+@@ -499,7 +500,7 @@ static int mmc_first_nonreserved_index(void)
+  */
+ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
+ {
+-	int err;
++	int index;
+ 	struct mmc_host *host;
+ 	int alias_id, min_idx, max_idx;
+ 
+@@ -512,20 +513,19 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
+ 
+ 	alias_id = of_alias_get_id(dev->of_node, "mmc");
+ 	if (alias_id >= 0) {
+-		min_idx = alias_id;
+-		max_idx = alias_id + 1;
++		index = alias_id;
+ 	} else {
+ 		min_idx = mmc_first_nonreserved_index();
+ 		max_idx = 0;
+-	}
+ 
+-	err = ida_simple_get(&mmc_host_ida, min_idx, max_idx, GFP_KERNEL);
+-	if (err < 0) {
+-		kfree(host);
+-		return NULL;
++		index = ida_simple_get(&mmc_host_ida, min_idx, max_idx, GFP_KERNEL);
++		if (index < 0) {
++			kfree(host);
++			return NULL;
++		}
+ 	}
+ 
+-	host->index = err;
++	host->index = index;
+ 
+ 	dev_set_name(&host->class_dev, "mmc%d", host->index);
+ 	host->ws = wakeup_source_register(NULL, dev_name(&host->class_dev));
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index c5a646d06102a..9a184c99fbe44 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -401,24 +401,85 @@ static int bond_vlan_rx_kill_vid(struct net_device *bond_dev,
+ static int bond_ipsec_add_sa(struct xfrm_state *xs)
+ {
+ 	struct net_device *bond_dev = xs->xso.dev;
++	struct bond_ipsec *ipsec;
+ 	struct bonding *bond;
+ 	struct slave *slave;
++	int err;
+ 
+ 	if (!bond_dev)
+ 		return -EINVAL;
+ 
++	rcu_read_lock();
+ 	bond = netdev_priv(bond_dev);
+ 	slave = rcu_dereference(bond->curr_active_slave);
+-	xs->xso.real_dev = slave->dev;
+-	bond->xs = xs;
++	if (!slave) {
++		rcu_read_unlock();
++		return -ENODEV;
++	}
+ 
+-	if (!(slave->dev->xfrmdev_ops
+-	      && slave->dev->xfrmdev_ops->xdo_dev_state_add)) {
++	if (!slave->dev->xfrmdev_ops ||
++	    !slave->dev->xfrmdev_ops->xdo_dev_state_add ||
++	    netif_is_bond_master(slave->dev)) {
+ 		slave_warn(bond_dev, slave->dev, "Slave does not support ipsec offload\n");
++		rcu_read_unlock();
+ 		return -EINVAL;
+ 	}
+ 
+-	return slave->dev->xfrmdev_ops->xdo_dev_state_add(xs);
++	ipsec = kmalloc(sizeof(*ipsec), GFP_ATOMIC);
++	if (!ipsec) {
++		rcu_read_unlock();
++		return -ENOMEM;
++	}
++	xs->xso.real_dev = slave->dev;
++
++	err = slave->dev->xfrmdev_ops->xdo_dev_state_add(xs);
++	if (!err) {
++		ipsec->xs = xs;
++		INIT_LIST_HEAD(&ipsec->list);
++		spin_lock_bh(&bond->ipsec_lock);
++		list_add(&ipsec->list, &bond->ipsec_list);
++		spin_unlock_bh(&bond->ipsec_lock);
++	} else {
++		kfree(ipsec);
++	}
++	rcu_read_unlock();
++	return err;
++}
++
++static void bond_ipsec_add_sa_all(struct bonding *bond)
++{
++	struct net_device *bond_dev = bond->dev;
++	struct bond_ipsec *ipsec;
++	struct slave *slave;
++
++	rcu_read_lock();
++	slave = rcu_dereference(bond->curr_active_slave);
++	if (!slave)
++		goto out;
++
++	if (!slave->dev->xfrmdev_ops ||
++	    !slave->dev->xfrmdev_ops->xdo_dev_state_add ||
++	    netif_is_bond_master(slave->dev)) {
++		spin_lock_bh(&bond->ipsec_lock);
++		if (!list_empty(&bond->ipsec_list))
++			slave_warn(bond_dev, slave->dev,
++				   "%s: no slave xdo_dev_state_add\n",
++				   __func__);
++		spin_unlock_bh(&bond->ipsec_lock);
++		goto out;
++	}
++
++	spin_lock_bh(&bond->ipsec_lock);
++	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
++		ipsec->xs->xso.real_dev = slave->dev;
++		if (slave->dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs)) {
++			slave_warn(bond_dev, slave->dev, "%s: failed to add SA\n", __func__);
++			ipsec->xs->xso.real_dev = NULL;
++		}
++	}
++	spin_unlock_bh(&bond->ipsec_lock);
++out:
++	rcu_read_unlock();
+ }
+ 
+ /**
+@@ -428,27 +489,77 @@ static int bond_ipsec_add_sa(struct xfrm_state *xs)
+ static void bond_ipsec_del_sa(struct xfrm_state *xs)
+ {
+ 	struct net_device *bond_dev = xs->xso.dev;
++	struct bond_ipsec *ipsec;
+ 	struct bonding *bond;
+ 	struct slave *slave;
+ 
+ 	if (!bond_dev)
+ 		return;
+ 
++	rcu_read_lock();
+ 	bond = netdev_priv(bond_dev);
+ 	slave = rcu_dereference(bond->curr_active_slave);
+ 
+ 	if (!slave)
+-		return;
++		goto out;
+ 
+-	xs->xso.real_dev = slave->dev;
++	if (!xs->xso.real_dev)
++		goto out;
++
++	WARN_ON(xs->xso.real_dev != slave->dev);
+ 
+-	if (!(slave->dev->xfrmdev_ops
+-	      && slave->dev->xfrmdev_ops->xdo_dev_state_delete)) {
++	if (!slave->dev->xfrmdev_ops ||
++	    !slave->dev->xfrmdev_ops->xdo_dev_state_delete ||
++	    netif_is_bond_master(slave->dev)) {
+ 		slave_warn(bond_dev, slave->dev, "%s: no slave xdo_dev_state_delete\n", __func__);
+-		return;
++		goto out;
+ 	}
+ 
+ 	slave->dev->xfrmdev_ops->xdo_dev_state_delete(xs);
++out:
++	spin_lock_bh(&bond->ipsec_lock);
++	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
++		if (ipsec->xs == xs) {
++			list_del(&ipsec->list);
++			kfree(ipsec);
++			break;
++		}
++	}
++	spin_unlock_bh(&bond->ipsec_lock);
++	rcu_read_unlock();
++}
++
++static void bond_ipsec_del_sa_all(struct bonding *bond)
++{
++	struct net_device *bond_dev = bond->dev;
++	struct bond_ipsec *ipsec;
++	struct slave *slave;
++
++	rcu_read_lock();
++	slave = rcu_dereference(bond->curr_active_slave);
++	if (!slave) {
++		rcu_read_unlock();
++		return;
++	}
++
++	spin_lock_bh(&bond->ipsec_lock);
++	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
++		if (!ipsec->xs->xso.real_dev)
++			continue;
++
++		if (!slave->dev->xfrmdev_ops ||
++		    !slave->dev->xfrmdev_ops->xdo_dev_state_delete ||
++		    netif_is_bond_master(slave->dev)) {
++			slave_warn(bond_dev, slave->dev,
++				   "%s: no slave xdo_dev_state_delete\n",
++				   __func__);
++		} else {
++			slave->dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
++		}
++		ipsec->xs->xso.real_dev = NULL;
++	}
++	spin_unlock_bh(&bond->ipsec_lock);
++	rcu_read_unlock();
+ }
+ 
+ /**
+@@ -459,21 +570,37 @@ static void bond_ipsec_del_sa(struct xfrm_state *xs)
+ static bool bond_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *xs)
+ {
+ 	struct net_device *bond_dev = xs->xso.dev;
+-	struct bonding *bond = netdev_priv(bond_dev);
+-	struct slave *curr_active = rcu_dereference(bond->curr_active_slave);
+-	struct net_device *slave_dev = curr_active->dev;
++	struct net_device *real_dev;
++	struct slave *curr_active;
++	struct bonding *bond;
++	int err;
+ 
+-	if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP)
+-		return true;
++	bond = netdev_priv(bond_dev);
++	rcu_read_lock();
++	curr_active = rcu_dereference(bond->curr_active_slave);
++	real_dev = curr_active->dev;
+ 
+-	if (!(slave_dev->xfrmdev_ops
+-	      && slave_dev->xfrmdev_ops->xdo_dev_offload_ok)) {
+-		slave_warn(bond_dev, slave_dev, "%s: no slave xdo_dev_offload_ok\n", __func__);
+-		return false;
++	if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP) {
++		err = false;
++		goto out;
+ 	}
+ 
+-	xs->xso.real_dev = slave_dev;
+-	return slave_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs);
++	if (!xs->xso.real_dev) {
++		err = false;
++		goto out;
++	}
++
++	if (!real_dev->xfrmdev_ops ||
++	    !real_dev->xfrmdev_ops->xdo_dev_offload_ok ||
++	    netif_is_bond_master(real_dev)) {
++		err = false;
++		goto out;
++	}
++
++	err = real_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs);
++out:
++	rcu_read_unlock();
++	return err;
+ }
+ 
+ static const struct xfrmdev_ops bond_xfrmdev_ops = {
+@@ -990,8 +1117,7 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active)
+ 		return;
+ 
+ #ifdef CONFIG_XFRM_OFFLOAD
+-	if (old_active && bond->xs)
+-		bond_ipsec_del_sa(bond->xs);
++	bond_ipsec_del_sa_all(bond);
+ #endif /* CONFIG_XFRM_OFFLOAD */
+ 
+ 	if (new_active) {
+@@ -1067,10 +1193,7 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active)
+ 	}
+ 
+ #ifdef CONFIG_XFRM_OFFLOAD
+-	if (new_active && bond->xs) {
+-		xfrm_dev_state_flush(dev_net(bond->dev), bond->dev, true);
+-		bond_ipsec_add_sa(bond->xs);
+-	}
++	bond_ipsec_add_sa_all(bond);
+ #endif /* CONFIG_XFRM_OFFLOAD */
+ 
+ 	/* resend IGMP joins since active slave has changed or
+@@ -3319,6 +3442,9 @@ static int bond_master_netdev_event(unsigned long event,
+ 		return bond_event_changename(event_bond);
+ 	case NETDEV_UNREGISTER:
+ 		bond_remove_proc_entry(event_bond);
++#ifdef CONFIG_XFRM_OFFLOAD
++		xfrm_dev_state_flush(dev_net(bond_dev), bond_dev, true);
++#endif /* CONFIG_XFRM_OFFLOAD */
+ 		break;
+ 	case NETDEV_REGISTER:
+ 		bond_create_proc_entry(event_bond);
+@@ -4882,7 +5008,8 @@ void bond_setup(struct net_device *bond_dev)
+ #ifdef CONFIG_XFRM_OFFLOAD
+ 	/* set up xfrm device ops (only supported in active-backup right now) */
+ 	bond_dev->xfrmdev_ops = &bond_xfrmdev_ops;
+-	bond->xs = NULL;
++	INIT_LIST_HEAD(&bond->ipsec_list);
++	spin_lock_init(&bond->ipsec_lock);
+ #endif /* CONFIG_XFRM_OFFLOAD */
+ 
+ 	/* don't acquire bond device's netif_tx_lock when transmitting */
+diff --git a/drivers/net/dsa/mv88e6xxx/Kconfig b/drivers/net/dsa/mv88e6xxx/Kconfig
+index 05af632b0f597..634a48e6616b9 100644
+--- a/drivers/net/dsa/mv88e6xxx/Kconfig
++++ b/drivers/net/dsa/mv88e6xxx/Kconfig
+@@ -12,7 +12,7 @@ config NET_DSA_MV88E6XXX
+ config NET_DSA_MV88E6XXX_PTP
+ 	bool "PTP support for Marvell 88E6xxx"
+ 	default n
+-	depends on PTP_1588_CLOCK
++	depends on NET_DSA_MV88E6XXX && PTP_1588_CLOCK
+ 	help
+ 	  Say Y to enable PTP hardware timestamping on Marvell 88E6xxx switch
+ 	  chips that support it.
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index ebe4d33cda276..6e5dbe9f3892e 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -378,6 +378,12 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
+ 		if (dsa_is_cpu_port(ds, port))
+ 			v->pvid = true;
+ 		list_add(&v->list, &priv->dsa_8021q_vlans);
++
++		v = kmemdup(v, sizeof(*v), GFP_KERNEL);
++		if (!v)
++			return -ENOMEM;
++
++		list_add(&v->list, &priv->bridge_vlans);
+ 	}
+ 
+ 	((struct sja1105_vlan_lookup_entry *)table->entries)[0] = pvid;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index aef3fccc27a97..3c3aa94673103 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1640,11 +1640,16 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
+ 
+ 	if ((tpa_info->flags2 & RX_CMP_FLAGS2_META_FORMAT_VLAN) &&
+ 	    (skb->dev->features & BNXT_HW_FEATURE_VLAN_ALL_RX)) {
+-		u16 vlan_proto = tpa_info->metadata >>
+-			RX_CMP_FLAGS2_METADATA_TPID_SFT;
++		__be16 vlan_proto = htons(tpa_info->metadata >>
++					  RX_CMP_FLAGS2_METADATA_TPID_SFT);
+ 		u16 vtag = tpa_info->metadata & RX_CMP_FLAGS2_METADATA_TCI_MASK;
+ 
+-		__vlan_hwaccel_put_tag(skb, htons(vlan_proto), vtag);
++		if (eth_type_vlan(vlan_proto)) {
++			__vlan_hwaccel_put_tag(skb, vlan_proto, vtag);
++		} else {
++			dev_kfree_skb(skb);
++			return NULL;
++		}
+ 	}
+ 
+ 	skb_checksum_none_assert(skb);
+@@ -1865,9 +1870,15 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 	    (skb->dev->features & BNXT_HW_FEATURE_VLAN_ALL_RX)) {
+ 		u32 meta_data = le32_to_cpu(rxcmp1->rx_cmp_meta_data);
+ 		u16 vtag = meta_data & RX_CMP_FLAGS2_METADATA_TCI_MASK;
+-		u16 vlan_proto = meta_data >> RX_CMP_FLAGS2_METADATA_TPID_SFT;
++		__be16 vlan_proto = htons(meta_data >>
++					  RX_CMP_FLAGS2_METADATA_TPID_SFT);
+ 
+-		__vlan_hwaccel_put_tag(skb, htons(vlan_proto), vtag);
++		if (eth_type_vlan(vlan_proto)) {
++			__vlan_hwaccel_put_tag(skb, vlan_proto, vtag);
++		} else {
++			dev_kfree_skb(skb);
++			goto next_rx;
++		}
+ 	}
+ 
+ 	skb_checksum_none_assert(skb);
+@@ -10093,6 +10104,12 @@ int bnxt_half_open_nic(struct bnxt *bp)
+ {
+ 	int rc = 0;
+ 
++	if (test_bit(BNXT_STATE_ABORT_ERR, &bp->state)) {
++		netdev_err(bp->dev, "A previous firmware reset has not completed, aborting half open\n");
++		rc = -ENODEV;
++		goto half_open_err;
++	}
++
+ 	rc = bnxt_alloc_mem(bp, false);
+ 	if (rc) {
+ 		netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc);
+@@ -11849,10 +11866,21 @@ static bool bnxt_fw_reset_timeout(struct bnxt *bp)
+ 			  (bp->fw_reset_max_dsecs * HZ / 10));
+ }
+ 
++static void bnxt_fw_reset_abort(struct bnxt *bp, int rc)
++{
++	clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
++	if (bp->fw_reset_state != BNXT_FW_RESET_STATE_POLL_VF) {
++		bnxt_ulp_start(bp, rc);
++		bnxt_dl_health_status_update(bp, false);
++	}
++	bp->fw_reset_state = 0;
++	dev_close(bp->dev);
++}
++
+ static void bnxt_fw_reset_task(struct work_struct *work)
+ {
+ 	struct bnxt *bp = container_of(work, struct bnxt, fw_reset_task.work);
+-	int rc;
++	int rc = 0;
+ 
+ 	if (!test_bit(BNXT_STATE_IN_FW_RESET, &bp->state)) {
+ 		netdev_err(bp->dev, "bnxt_fw_reset_task() called when not in fw reset mode!\n");
+@@ -11882,6 +11910,11 @@ static void bnxt_fw_reset_task(struct work_struct *work)
+ 		}
+ 		bp->fw_reset_timestamp = jiffies;
+ 		rtnl_lock();
++		if (test_bit(BNXT_STATE_ABORT_ERR, &bp->state)) {
++			bnxt_fw_reset_abort(bp, rc);
++			rtnl_unlock();
++			return;
++		}
+ 		bnxt_fw_reset_close(bp);
+ 		if (bp->fw_cap & BNXT_FW_CAP_ERR_RECOVER_RELOAD) {
+ 			bp->fw_reset_state = BNXT_FW_RESET_STATE_POLL_FW_DOWN;
+@@ -11929,6 +11962,7 @@ static void bnxt_fw_reset_task(struct work_struct *work)
+ 			if (val == 0xffff) {
+ 				if (bnxt_fw_reset_timeout(bp)) {
+ 					netdev_err(bp->dev, "Firmware reset aborted, PCI config space invalid\n");
++					rc = -ETIMEDOUT;
+ 					goto fw_reset_abort;
+ 				}
+ 				bnxt_queue_fw_reset_work(bp, HZ / 1000);
+@@ -11938,6 +11972,7 @@ static void bnxt_fw_reset_task(struct work_struct *work)
+ 		clear_bit(BNXT_STATE_FW_FATAL_COND, &bp->state);
+ 		if (pci_enable_device(bp->pdev)) {
+ 			netdev_err(bp->dev, "Cannot re-enable PCI device\n");
++			rc = -ENODEV;
+ 			goto fw_reset_abort;
+ 		}
+ 		pci_set_master(bp->pdev);
+@@ -11964,9 +11999,10 @@ static void bnxt_fw_reset_task(struct work_struct *work)
+ 		}
+ 		rc = bnxt_open(bp->dev);
+ 		if (rc) {
+-			netdev_err(bp->dev, "bnxt_open_nic() failed\n");
+-			clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
+-			dev_close(bp->dev);
++			netdev_err(bp->dev, "bnxt_open() failed during FW reset\n");
++			bnxt_fw_reset_abort(bp, rc);
++			rtnl_unlock();
++			return;
+ 		}
+ 
+ 		bp->fw_reset_state = 0;
+@@ -11993,12 +12029,8 @@ fw_reset_abort_status:
+ 		netdev_err(bp->dev, "fw_health_status 0x%x\n", sts);
+ 	}
+ fw_reset_abort:
+-	clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
+-	if (bp->fw_reset_state != BNXT_FW_RESET_STATE_POLL_VF)
+-		bnxt_dl_health_status_update(bp, false);
+-	bp->fw_reset_state = 0;
+ 	rtnl_lock();
+-	dev_close(bp->dev);
++	bnxt_fw_reset_abort(bp, rc);
+ 	rtnl_unlock();
+ }
+ 
+@@ -13315,7 +13347,8 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev,
+ 	if (netif_running(netdev))
+ 		bnxt_close(netdev);
+ 
+-	pci_disable_device(pdev);
++	if (pci_is_enabled(pdev))
++		pci_disable_device(pdev);
+ 	bnxt_free_ctx_mem(bp);
+ 	kfree(bp->ctx);
+ 	bp->ctx = NULL;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index a918e374f3c5c..187ff643ad2ae 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -479,16 +479,17 @@ struct bnxt_en_dev *bnxt_ulp_probe(struct net_device *dev)
+ 		if (!edev)
+ 			return ERR_PTR(-ENOMEM);
+ 		edev->en_ops = &bnxt_en_ops_tbl;
+-		if (bp->flags & BNXT_FLAG_ROCEV1_CAP)
+-			edev->flags |= BNXT_EN_FLAG_ROCEV1_CAP;
+-		if (bp->flags & BNXT_FLAG_ROCEV2_CAP)
+-			edev->flags |= BNXT_EN_FLAG_ROCEV2_CAP;
+ 		edev->net = dev;
+ 		edev->pdev = bp->pdev;
+ 		edev->l2_db_size = bp->db_size;
+ 		edev->l2_db_size_nc = bp->db_size;
+ 		bp->edev = edev;
+ 	}
++	edev->flags &= ~BNXT_EN_FLAG_ROCE_CAP;
++	if (bp->flags & BNXT_FLAG_ROCEV1_CAP)
++		edev->flags |= BNXT_EN_FLAG_ROCEV1_CAP;
++	if (bp->flags & BNXT_FLAG_ROCEV2_CAP)
++		edev->flags |= BNXT_EN_FLAG_ROCEV2_CAP;
+ 	return bp->edev;
+ }
+ EXPORT_SYMBOL(bnxt_ulp_probe);
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+index 4cddd628d41b2..9ed3d1ab2ca58 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
++++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+@@ -420,7 +420,7 @@ static int cn23xx_pf_setup_global_input_regs(struct octeon_device *oct)
+ 	 * bits 32:47 indicate the PVF num.
+ 	 */
+ 	for (q_no = 0; q_no < ern; q_no++) {
+-		reg_val = oct->pcie_port << CN23XX_PKT_INPUT_CTL_MAC_NUM_POS;
++		reg_val = (u64)oct->pcie_port << CN23XX_PKT_INPUT_CTL_MAC_NUM_POS;
+ 
+ 		/* for VF assigned queues. */
+ 		if (q_no < oct->sriov_info.pf_srn) {
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 762113a04dde6..9f62ffe647819 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -2643,6 +2643,9 @@ static void detach_ulds(struct adapter *adap)
+ {
+ 	unsigned int i;
+ 
++	if (!is_uld(adap))
++		return;
++
+ 	mutex_lock(&uld_mutex);
+ 	list_del(&adap->list_node);
+ 
+@@ -7141,10 +7144,13 @@ static void remove_one(struct pci_dev *pdev)
+ 		 */
+ 		destroy_workqueue(adapter->workq);
+ 
+-		if (is_uld(adapter)) {
+-			detach_ulds(adapter);
+-			t4_uld_clean_up(adapter);
+-		}
++		detach_ulds(adapter);
++
++		for_each_port(adapter, i)
++			if (adapter->port[i]->reg_state == NETREG_REGISTERED)
++				unregister_netdev(adapter->port[i]);
++
++		t4_uld_clean_up(adapter);
+ 
+ 		adap_free_hma_mem(adapter);
+ 
+@@ -7152,10 +7158,6 @@ static void remove_one(struct pci_dev *pdev)
+ 
+ 		cxgb4_free_mps_ref_entries(adapter);
+ 
+-		for_each_port(adapter, i)
+-			if (adapter->port[i]->reg_state == NETREG_REGISTERED)
+-				unregister_netdev(adapter->port[i]);
+-
+ 		debugfs_remove_recursive(adapter->debugfs_root);
+ 
+ 		if (!is_t4(adapter->params.chip))
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
+index 743af9e654aa7..17faac715882d 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
+@@ -581,6 +581,9 @@ void t4_uld_clean_up(struct adapter *adap)
+ {
+ 	unsigned int i;
+ 
++	if (!is_uld(adap))
++		return;
++
+ 	mutex_lock(&uld_mutex);
+ 	for (i = 0; i < CXGB4_ULD_MAX; i++) {
+ 		if (!adap->uld[i].handle)
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+index 05de37c3b64c7..87321b7239cf4 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+@@ -2770,32 +2770,32 @@ static int dpaa2_switch_ctrl_if_setup(struct ethsw_core *ethsw)
+ 	if (err)
+ 		return err;
+ 
+-	err = dpaa2_switch_seed_bp(ethsw);
+-	if (err)
+-		goto err_free_dpbp;
+-
+ 	err = dpaa2_switch_alloc_rings(ethsw);
+ 	if (err)
+-		goto err_drain_dpbp;
++		goto err_free_dpbp;
+ 
+ 	err = dpaa2_switch_setup_dpio(ethsw);
+ 	if (err)
+ 		goto err_destroy_rings;
+ 
++	err = dpaa2_switch_seed_bp(ethsw);
++	if (err)
++		goto err_deregister_dpio;
++
+ 	err = dpsw_ctrl_if_enable(ethsw->mc_io, 0, ethsw->dpsw_handle);
+ 	if (err) {
+ 		dev_err(ethsw->dev, "dpsw_ctrl_if_enable err %d\n", err);
+-		goto err_deregister_dpio;
++		goto err_drain_dpbp;
+ 	}
+ 
+ 	return 0;
+ 
++err_drain_dpbp:
++	dpaa2_switch_drain_bp(ethsw);
+ err_deregister_dpio:
+ 	dpaa2_switch_free_dpio(ethsw);
+ err_destroy_rings:
+ 	dpaa2_switch_destroy_rings(ethsw);
+-err_drain_dpbp:
+-	dpaa2_switch_drain_bp(ethsw);
+ err_free_dpbp:
+ 	dpaa2_switch_free_dpbp(ethsw);
+ 
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 79cefe85a799f..b43c6ff07614a 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -1349,13 +1349,16 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	err = register_netdev(dev);
+ 	if (err)
+-		goto abort_with_wq;
++		goto abort_with_gve_init;
+ 
+ 	dev_info(&pdev->dev, "GVE version %s\n", gve_version_str);
+ 	gve_clear_probe_in_progress(priv);
+ 	queue_work(priv->gve_wq, &priv->service_task);
+ 	return 0;
+ 
++abort_with_gve_init:
++	gve_teardown_priv_resources(priv);
++
+ abort_with_wq:
+ 	destroy_workqueue(priv->gve_wq);
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c
+index 12f6c2442a7ad..e53512f6878af 100644
+--- a/drivers/net/ethernet/hisilicon/hip04_eth.c
++++ b/drivers/net/ethernet/hisilicon/hip04_eth.c
+@@ -131,7 +131,7 @@
+ /* buf unit size is cache_line_size, which is 64, so the shift is 6 */
+ #define PPE_BUF_SIZE_SHIFT		6
+ #define PPE_TX_BUF_HOLD			BIT(31)
+-#define CACHE_LINE_MASK			0x3F
++#define SOC_CACHE_LINE_MASK		0x3F
+ #else
+ #define PPE_CFG_QOS_VMID_GRP_SHIFT	8
+ #define PPE_CFG_RX_CTRL_ALIGN_SHIFT	11
+@@ -531,8 +531,8 @@ hip04_mac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ #if defined(CONFIG_HI13X1_GMAC)
+ 	desc->cfg = (__force u32)cpu_to_be32(TX_CLEAR_WB | TX_FINISH_CACHE_INV
+ 		| TX_RELEASE_TO_PPE | priv->port << TX_POOL_SHIFT);
+-	desc->data_offset = (__force u32)cpu_to_be32(phys & CACHE_LINE_MASK);
+-	desc->send_addr =  (__force u32)cpu_to_be32(phys & ~CACHE_LINE_MASK);
++	desc->data_offset = (__force u32)cpu_to_be32(phys & SOC_CACHE_LINE_MASK);
++	desc->send_addr =  (__force u32)cpu_to_be32(phys & ~SOC_CACHE_LINE_MASK);
+ #else
+ 	desc->cfg = (__force u32)cpu_to_be32(TX_CLEAR_WB | TX_FINISH_CACHE_INV);
+ 	desc->send_addr = (__force u32)cpu_to_be32(phys);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
+index a2c17af57fde7..d283beec9f66c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
+@@ -135,7 +135,8 @@ struct hclge_mbx_vf_to_pf_cmd {
+ 	u8 mbx_need_resp;
+ 	u8 rsv1[1];
+ 	u8 msg_len;
+-	u8 rsv2[3];
++	u8 rsv2;
++	u16 match_id;
+ 	struct hclge_vf_to_pf_msg msg;
+ };
+ 
+@@ -145,7 +146,8 @@ struct hclge_mbx_pf_to_vf_cmd {
+ 	u8 dest_vfid;
+ 	u8 rsv[3];
+ 	u8 msg_len;
+-	u8 rsv1[3];
++	u8 rsv1;
++	u16 match_id;
+ 	struct hclge_pf_to_vf_msg msg;
+ };
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index f1c9f4ada348a..38b601031db46 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -47,6 +47,7 @@ static int hclge_gen_resp_to_vf(struct hclge_vport *vport,
+ 
+ 	resp_pf_to_vf->dest_vfid = vf_to_pf_req->mbx_src_vfid;
+ 	resp_pf_to_vf->msg_len = vf_to_pf_req->msg_len;
++	resp_pf_to_vf->match_id = vf_to_pf_req->match_id;
+ 
+ 	resp_pf_to_vf->msg.code = HCLGE_MBX_PF_VF_RESP;
+ 	resp_pf_to_vf->msg.vf_mbx_msg_code = vf_to_pf_req->msg.code;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 0db51ef15ef67..fe03c84198906 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2621,6 +2621,16 @@ static int hclgevf_rss_init_hw(struct hclgevf_dev *hdev)
+ 
+ static int hclgevf_init_vlan_config(struct hclgevf_dev *hdev)
+ {
++	struct hnae3_handle *nic = &hdev->nic;
++	int ret;
++
++	ret = hclgevf_en_hw_strip_rxvtag(nic, true);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"failed to enable rx vlan offload, ret = %d\n", ret);
++		return ret;
++	}
++
+ 	return hclgevf_set_vlan_filter(&hdev->nic, htons(ETH_P_8021Q), 0,
+ 				       false);
+ }
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index efc98903c0b72..5b4a7ef7dffa5 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1707,7 +1707,6 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 		tx_send_failed++;
+ 		tx_dropped++;
+ 		ret = NETDEV_TX_OK;
+-		ibmvnic_tx_scrq_flush(adapter, tx_scrq);
+ 		goto out;
+ 	}
+ 
+@@ -1729,6 +1728,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 		dev_kfree_skb_any(skb);
+ 		tx_send_failed++;
+ 		tx_dropped++;
++		ibmvnic_tx_scrq_flush(adapter, tx_scrq);
+ 		ret = NETDEV_TX_OK;
+ 		goto out;
+ 	}
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index dc0ded7e5e614..86b7778dc9b45 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -7664,6 +7664,7 @@ err_flashmap:
+ err_ioremap:
+ 	free_netdev(netdev);
+ err_alloc_etherdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_mem_regions(pdev);
+ err_pci_reg:
+ err_dma:
+diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
+index 9e3103fae723c..caedf24c24c1f 100644
+--- a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
++++ b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
+@@ -2227,6 +2227,7 @@ err_sw_init:
+ err_ioremap:
+ 	free_netdev(netdev);
+ err_alloc_netdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_mem_regions(pdev);
+ err_pci_reg:
+ err_dma:
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index e612c24fa3842..44bafedd09f28 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -3798,6 +3798,7 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ err_ioremap:
+ 	free_netdev(netdev);
+ err_alloc_etherdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_regions(pdev);
+ err_pci_reg:
+ err_dma:
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 7b1885f9ce039..b0e900d1eae24 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -931,6 +931,7 @@ static void igb_configure_msix(struct igb_adapter *adapter)
+  **/
+ static int igb_request_msix(struct igb_adapter *adapter)
+ {
++	unsigned int num_q_vectors = adapter->num_q_vectors;
+ 	struct net_device *netdev = adapter->netdev;
+ 	int i, err = 0, vector = 0, free_vector = 0;
+ 
+@@ -939,7 +940,13 @@ static int igb_request_msix(struct igb_adapter *adapter)
+ 	if (err)
+ 		goto err_out;
+ 
+-	for (i = 0; i < adapter->num_q_vectors; i++) {
++	if (num_q_vectors > MAX_Q_VECTORS) {
++		num_q_vectors = MAX_Q_VECTORS;
++		dev_warn(&adapter->pdev->dev,
++			 "The number of queue vectors (%d) is higher than max allowed (%d)\n",
++			 adapter->num_q_vectors, MAX_Q_VECTORS);
++	}
++	for (i = 0; i < num_q_vectors; i++) {
+ 		struct igb_q_vector *q_vector = adapter->q_vector[i];
+ 
+ 		vector++;
+@@ -1678,14 +1685,15 @@ static bool is_any_txtime_enabled(struct igb_adapter *adapter)
+  **/
+ static void igb_config_tx_modes(struct igb_adapter *adapter, int queue)
+ {
+-	struct igb_ring *ring = adapter->tx_ring[queue];
+ 	struct net_device *netdev = adapter->netdev;
+ 	struct e1000_hw *hw = &adapter->hw;
++	struct igb_ring *ring;
+ 	u32 tqavcc, tqavctrl;
+ 	u16 value;
+ 
+ 	WARN_ON(hw->mac.type != e1000_i210);
+ 	WARN_ON(queue < 0 || queue > 1);
++	ring = adapter->tx_ring[queue];
+ 
+ 	/* If any of the Qav features is enabled, configure queues as SR and
+ 	 * with HIGH PRIO. If none is, then configure them with LOW PRIO and
+@@ -3615,6 +3623,7 @@ err_sw_init:
+ err_ioremap:
+ 	free_netdev(netdev);
+ err_alloc_etherdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_mem_regions(pdev);
+ err_pci_reg:
+ err_dma:
+@@ -4835,6 +4844,8 @@ static void igb_clean_tx_ring(struct igb_ring *tx_ring)
+ 					       DMA_TO_DEVICE);
+ 		}
+ 
++		tx_buffer->next_to_watch = NULL;
++
+ 		/* move us one more past the eop_desc for start of next pkt */
+ 		tx_buffer++;
+ 		i++;
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index 25871351730b4..58e842cbf6eff 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -560,7 +560,7 @@ static inline s32 igc_read_phy_reg(struct igc_hw *hw, u32 offset, u16 *data)
+ 	if (hw->phy.ops.read_reg)
+ 		return hw->phy.ops.read_reg(hw, offset, data);
+ 
+-	return 0;
++	return -EOPNOTSUPP;
+ }
+ 
+ void igc_reinit_locked(struct igc_adapter *);
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index f1adf154ec4ae..a8d5f196fdbd6 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -217,6 +217,8 @@ static void igc_clean_tx_ring(struct igc_ring *tx_ring)
+ 					       DMA_TO_DEVICE);
+ 		}
+ 
++		tx_buffer->next_to_watch = NULL;
++
+ 		/* move us one more past the eop_desc for start of next pkt */
+ 		tx_buffer++;
+ 		i++;
+@@ -5594,6 +5596,7 @@ err_sw_init:
+ err_ioremap:
+ 	free_netdev(netdev);
+ err_alloc_etherdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_mem_regions(pdev);
+ err_pci_reg:
+ err_dma:
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 2ac5b82676f3b..42d57a73ce1ac 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -1825,7 +1825,8 @@ static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
+ 				struct sk_buff *skb)
+ {
+ 	if (ring_uses_build_skb(rx_ring)) {
+-		unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
++		unsigned long mask = (unsigned long)ixgbe_rx_pg_size(rx_ring) - 1;
++		unsigned long offset = (unsigned long)(skb->data) & mask;
+ 
+ 		dma_sync_single_range_for_cpu(rx_ring->dev,
+ 					      IXGBE_CB(skb)->dma,
+@@ -11069,6 +11070,7 @@ err_ioremap:
+ 	disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
+ 	free_netdev(netdev);
+ err_alloc_etherdev:
++	pci_disable_pcie_error_reporting(pdev);
+ 	pci_release_mem_regions(pdev);
+ err_pci_reg:
+ err_dma:
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+index caaea2c920a6e..e3e4676af9e45 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+@@ -211,7 +211,7 @@ struct xfrm_state *ixgbevf_ipsec_find_rx_state(struct ixgbevf_ipsec *ipsec,
+ static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs,
+ 					  u32 *mykey, u32 *mysalt)
+ {
+-	struct net_device *dev = xs->xso.dev;
++	struct net_device *dev = xs->xso.real_dev;
+ 	unsigned char *key_data;
+ 	char *alg_name = NULL;
+ 	int key_len;
+@@ -260,12 +260,15 @@ static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs,
+  **/
+ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
+ {
+-	struct net_device *dev = xs->xso.dev;
+-	struct ixgbevf_adapter *adapter = netdev_priv(dev);
+-	struct ixgbevf_ipsec *ipsec = adapter->ipsec;
++	struct net_device *dev = xs->xso.real_dev;
++	struct ixgbevf_adapter *adapter;
++	struct ixgbevf_ipsec *ipsec;
+ 	u16 sa_idx;
+ 	int ret;
+ 
++	adapter = netdev_priv(dev);
++	ipsec = adapter->ipsec;
++
+ 	if (xs->id.proto != IPPROTO_ESP && xs->id.proto != IPPROTO_AH) {
+ 		netdev_err(dev, "Unsupported protocol 0x%04x for IPsec offload\n",
+ 			   xs->id.proto);
+@@ -383,11 +386,14 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
+  **/
+ static void ixgbevf_ipsec_del_sa(struct xfrm_state *xs)
+ {
+-	struct net_device *dev = xs->xso.dev;
+-	struct ixgbevf_adapter *adapter = netdev_priv(dev);
+-	struct ixgbevf_ipsec *ipsec = adapter->ipsec;
++	struct net_device *dev = xs->xso.real_dev;
++	struct ixgbevf_adapter *adapter;
++	struct ixgbevf_ipsec *ipsec;
+ 	u16 sa_idx;
+ 
++	adapter = netdev_priv(dev);
++	ipsec = adapter->ipsec;
++
+ 	if (xs->xso.flags & XFRM_OFFLOAD_INBOUND) {
+ 		sa_idx = xs->xso.offload_handle - IXGBE_IPSEC_BASE_RX_INDEX;
+ 
+diff --git a/drivers/net/ethernet/mscc/ocelot_net.c b/drivers/net/ethernet/mscc/ocelot_net.c
+index aad33d22c33f6..3dc577183a400 100644
+--- a/drivers/net/ethernet/mscc/ocelot_net.c
++++ b/drivers/net/ethernet/mscc/ocelot_net.c
+@@ -1287,6 +1287,7 @@ static int ocelot_netdevice_lag_leave(struct net_device *dev,
+ }
+ 
+ static int ocelot_netdevice_changeupper(struct net_device *dev,
++					struct net_device *brport_dev,
+ 					struct netdev_notifier_changeupper_info *info)
+ {
+ 	struct netlink_ext_ack *extack;
+@@ -1296,11 +1297,11 @@ static int ocelot_netdevice_changeupper(struct net_device *dev,
+ 
+ 	if (netif_is_bridge_master(info->upper_dev)) {
+ 		if (info->linking)
+-			err = ocelot_netdevice_bridge_join(dev, dev,
++			err = ocelot_netdevice_bridge_join(dev, brport_dev,
+ 							   info->upper_dev,
+ 							   extack);
+ 		else
+-			err = ocelot_netdevice_bridge_leave(dev, dev,
++			err = ocelot_netdevice_bridge_leave(dev, brport_dev,
+ 							    info->upper_dev);
+ 	}
+ 	if (netif_is_lag_master(info->upper_dev)) {
+@@ -1335,7 +1336,7 @@ ocelot_netdevice_lag_changeupper(struct net_device *dev,
+ 		if (ocelot_port->bond != dev)
+ 			return NOTIFY_OK;
+ 
+-		err = ocelot_netdevice_changeupper(lower, info);
++		err = ocelot_netdevice_changeupper(lower, dev, info);
+ 		if (err)
+ 			return notifier_from_errno(err);
+ 	}
+@@ -1374,7 +1375,7 @@ static int ocelot_netdevice_event(struct notifier_block *unused,
+ 		struct netdev_notifier_changeupper_info *info = ptr;
+ 
+ 		if (ocelot_netdevice_dev_check(dev))
+-			return ocelot_netdevice_changeupper(dev, info);
++			return ocelot_netdevice_changeupper(dev, dev, info);
+ 
+ 		if (netif_is_lag_master(dev))
+ 			return ocelot_netdevice_lag_changeupper(dev, info);
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index a0d4e052a79ed..b8eb1b2a8de39 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -5085,7 +5085,8 @@ static int r8169_mdio_register(struct rtl8169_private *tp)
+ 	new_bus->priv = tp;
+ 	new_bus->parent = &pdev->dev;
+ 	new_bus->irq[0] = PHY_MAC_INTERRUPT;
+-	snprintf(new_bus->id, MII_BUS_ID_SIZE, "r8169-%x", pci_dev_id(pdev));
++	snprintf(new_bus->id, MII_BUS_ID_SIZE, "r8169-%x-%x",
++		 pci_domain_nr(pdev->bus), pci_dev_id(pdev));
+ 
+ 	new_bus->read = r8169_mdio_read_reg;
+ 	new_bus->write = r8169_mdio_write_reg;
+diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
+index a3ca406a35611..bb48a139dd158 100644
+--- a/drivers/net/ethernet/sfc/efx_channels.c
++++ b/drivers/net/ethernet/sfc/efx_channels.c
+@@ -152,6 +152,7 @@ static int efx_allocate_msix_channels(struct efx_nic *efx,
+ 	 * maximum size.
+ 	 */
+ 	tx_per_ev = EFX_MAX_EVQ_SIZE / EFX_TXQ_MAX_ENT(efx);
++	tx_per_ev = min(tx_per_ev, EFX_MAX_TXQ_PER_CHANNEL);
+ 	n_xdp_tx = num_possible_cpus();
+ 	n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, tx_per_ev);
+ 
+@@ -181,7 +182,7 @@ static int efx_allocate_msix_channels(struct efx_nic *efx,
+ 		efx->xdp_tx_queue_count = 0;
+ 	} else {
+ 		efx->n_xdp_channels = n_xdp_ev;
+-		efx->xdp_tx_per_channel = EFX_MAX_TXQ_PER_CHANNEL;
++		efx->xdp_tx_per_channel = tx_per_ev;
+ 		efx->xdp_tx_queue_count = n_xdp_tx;
+ 		n_channels += n_xdp_ev;
+ 		netif_dbg(efx, drv, efx->net_dev,
+@@ -891,18 +892,20 @@ int efx_set_channels(struct efx_nic *efx)
+ 			if (efx_channel_is_xdp_tx(channel)) {
+ 				efx_for_each_channel_tx_queue(tx_queue, channel) {
+ 					tx_queue->queue = next_queue++;
+-					netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n",
+-						  channel->channel, tx_queue->label,
+-						  xdp_queue_number, tx_queue->queue);
++
+ 					/* We may have a few left-over XDP TX
+ 					 * queues owing to xdp_tx_queue_count
+ 					 * not dividing evenly by EFX_MAX_TXQ_PER_CHANNEL.
+ 					 * We still allocate and probe those
+ 					 * TXQs, but never use them.
+ 					 */
+-					if (xdp_queue_number < efx->xdp_tx_queue_count)
++					if (xdp_queue_number < efx->xdp_tx_queue_count) {
++						netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n",
++							  channel->channel, tx_queue->label,
++							  xdp_queue_number, tx_queue->queue);
+ 						efx->xdp_tx_queues[xdp_queue_number] = tx_queue;
+-					xdp_queue_number++;
++						xdp_queue_number++;
++					}
+ 				}
+ 			} else {
+ 				efx_for_each_channel_tx_queue(tx_queue, channel) {
+@@ -914,8 +917,7 @@ int efx_set_channels(struct efx_nic *efx)
+ 			}
+ 		}
+ 	}
+-	if (xdp_queue_number)
+-		efx->xdp_tx_queue_count = xdp_queue_number;
++	WARN_ON(xdp_queue_number != efx->xdp_tx_queue_count);
+ 
+ 	rc = netif_set_real_num_tx_queues(efx->net_dev, efx->n_tx_channels);
+ 	if (rc)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 91cd5073ddb26..980a60477b022 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -7170,6 +7170,7 @@ int stmmac_suspend(struct device *dev)
+ 				     priv->plat->rx_queues_to_use, false);
+ 
+ 		stmmac_fpe_handshake(priv, false);
++		stmmac_fpe_stop_wq(priv);
+ 	}
+ 
+ 	priv->speed = SPEED_UNKNOWN;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index a696ada013eb5..cad9e466353f7 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -399,6 +399,7 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct plat_stmmacenet_data *plat;
+ 	struct stmmac_dma_cfg *dma_cfg;
++	int phy_mode;
+ 	void *ret;
+ 	int rc;
+ 
+@@ -414,10 +415,11 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ 		eth_zero_addr(mac);
+ 	}
+ 
+-	plat->phy_interface = device_get_phy_mode(&pdev->dev);
+-	if (plat->phy_interface < 0)
+-		return ERR_PTR(plat->phy_interface);
++	phy_mode = device_get_phy_mode(&pdev->dev);
++	if (phy_mode < 0)
++		return ERR_PTR(phy_mode);
+ 
++	plat->phy_interface = phy_mode;
+ 	plat->interface = stmmac_of_get_mac_mode(np);
+ 	if (plat->interface < 0)
+ 		plat->interface = plat->phy_interface;
+diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c
+index bbbc6ac8fa825..53a433442803a 100644
+--- a/drivers/net/phy/marvell10g.c
++++ b/drivers/net/phy/marvell10g.c
+@@ -78,6 +78,11 @@ enum {
+ 	/* Temperature read register (88E2110 only) */
+ 	MV_PCS_TEMP		= 0x8042,
+ 
++	/* Number of ports on the device */
++	MV_PCS_PORT_INFO	= 0xd00d,
++	MV_PCS_PORT_INFO_NPORTS_MASK	= 0x0380,
++	MV_PCS_PORT_INFO_NPORTS_SHIFT	= 7,
++
+ 	/* These registers appear at 0x800X and 0xa00X - the 0xa00X control
+ 	 * registers appear to set themselves to the 0x800X when AN is
+ 	 * restarted, but status registers appear readable from either.
+@@ -966,6 +971,30 @@ static const struct mv3310_chip mv2111_type = {
+ #endif
+ };
+ 
++static int mv3310_get_number_of_ports(struct phy_device *phydev)
++{
++	int ret;
++
++	ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MV_PCS_PORT_INFO);
++	if (ret < 0)
++		return ret;
++
++	ret &= MV_PCS_PORT_INFO_NPORTS_MASK;
++	ret >>= MV_PCS_PORT_INFO_NPORTS_SHIFT;
++
++	return ret + 1;
++}
++
++static int mv3310_match_phy_device(struct phy_device *phydev)
++{
++	return mv3310_get_number_of_ports(phydev) == 1;
++}
++
++static int mv3340_match_phy_device(struct phy_device *phydev)
++{
++	return mv3310_get_number_of_ports(phydev) == 4;
++}
++
+ static int mv211x_match_phy_device(struct phy_device *phydev, bool has_5g)
+ {
+ 	int val;
+@@ -994,7 +1023,8 @@ static int mv2111_match_phy_device(struct phy_device *phydev)
+ static struct phy_driver mv3310_drivers[] = {
+ 	{
+ 		.phy_id		= MARVELL_PHY_ID_88X3310,
+-		.phy_id_mask	= MARVELL_PHY_ID_88X33X0_MASK,
++		.phy_id_mask	= MARVELL_PHY_ID_MASK,
++		.match_phy_device = mv3310_match_phy_device,
+ 		.name		= "mv88x3310",
+ 		.driver_data	= &mv3310_type,
+ 		.get_features	= mv3310_get_features,
+@@ -1011,8 +1041,9 @@ static struct phy_driver mv3310_drivers[] = {
+ 		.set_loopback	= genphy_c45_loopback,
+ 	},
+ 	{
+-		.phy_id		= MARVELL_PHY_ID_88X3340,
+-		.phy_id_mask	= MARVELL_PHY_ID_88X33X0_MASK,
++		.phy_id		= MARVELL_PHY_ID_88X3310,
++		.phy_id_mask	= MARVELL_PHY_ID_MASK,
++		.match_phy_device = mv3340_match_phy_device,
+ 		.name		= "mv88x3340",
+ 		.driver_data	= &mv3340_type,
+ 		.get_features	= mv3310_get_features,
+@@ -1069,8 +1100,7 @@ static struct phy_driver mv3310_drivers[] = {
+ module_phy_driver(mv3310_drivers);
+ 
+ static struct mdio_device_id __maybe_unused mv3310_tbl[] = {
+-	{ MARVELL_PHY_ID_88X3310, MARVELL_PHY_ID_88X33X0_MASK },
+-	{ MARVELL_PHY_ID_88X3340, MARVELL_PHY_ID_88X33X0_MASK },
++	{ MARVELL_PHY_ID_88X3310, MARVELL_PHY_ID_MASK },
+ 	{ MARVELL_PHY_ID_88E2110, MARVELL_PHY_ID_MASK },
+ 	{ },
+ };
+diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
+index 5c779cc0ea112..28ebf4955b839 100644
+--- a/drivers/net/usb/hso.c
++++ b/drivers/net/usb/hso.c
+@@ -2496,7 +2496,7 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 			   hso_net_init);
+ 	if (!net) {
+ 		dev_err(&interface->dev, "Unable to create ethernet device\n");
+-		goto exit;
++		goto err_hso_dev;
+ 	}
+ 
+ 	hso_net = netdev_priv(net);
+@@ -2509,13 +2509,13 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 				      USB_DIR_IN);
+ 	if (!hso_net->in_endp) {
+ 		dev_err(&interface->dev, "Can't find BULK IN endpoint\n");
+-		goto exit;
++		goto err_net;
+ 	}
+ 	hso_net->out_endp = hso_get_ep(interface, USB_ENDPOINT_XFER_BULK,
+ 				       USB_DIR_OUT);
+ 	if (!hso_net->out_endp) {
+ 		dev_err(&interface->dev, "Can't find BULK OUT endpoint\n");
+-		goto exit;
++		goto err_net;
+ 	}
+ 	SET_NETDEV_DEV(net, &interface->dev);
+ 	SET_NETDEV_DEVTYPE(net, &hso_type);
+@@ -2524,18 +2524,18 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 	for (i = 0; i < MUX_BULK_RX_BUF_COUNT; i++) {
+ 		hso_net->mux_bulk_rx_urb_pool[i] = usb_alloc_urb(0, GFP_KERNEL);
+ 		if (!hso_net->mux_bulk_rx_urb_pool[i])
+-			goto exit;
++			goto err_mux_bulk_rx;
+ 		hso_net->mux_bulk_rx_buf_pool[i] = kzalloc(MUX_BULK_RX_BUF_SIZE,
+ 							   GFP_KERNEL);
+ 		if (!hso_net->mux_bulk_rx_buf_pool[i])
+-			goto exit;
++			goto err_mux_bulk_rx;
+ 	}
+ 	hso_net->mux_bulk_tx_urb = usb_alloc_urb(0, GFP_KERNEL);
+ 	if (!hso_net->mux_bulk_tx_urb)
+-		goto exit;
++		goto err_mux_bulk_rx;
+ 	hso_net->mux_bulk_tx_buf = kzalloc(MUX_BULK_TX_BUF_SIZE, GFP_KERNEL);
+ 	if (!hso_net->mux_bulk_tx_buf)
+-		goto exit;
++		goto err_free_tx_urb;
+ 
+ 	add_net_device(hso_dev);
+ 
+@@ -2543,7 +2543,7 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 	result = register_netdev(net);
+ 	if (result) {
+ 		dev_err(&interface->dev, "Failed to register device\n");
+-		goto exit;
++		goto err_free_tx_buf;
+ 	}
+ 
+ 	hso_log_port(hso_dev);
+@@ -2551,8 +2551,21 @@ static struct hso_device *hso_create_net_device(struct usb_interface *interface,
+ 	hso_create_rfkill(hso_dev, interface);
+ 
+ 	return hso_dev;
+-exit:
+-	hso_free_net_device(hso_dev, true);
++
++err_free_tx_buf:
++	remove_net_device(hso_dev);
++	kfree(hso_net->mux_bulk_tx_buf);
++err_free_tx_urb:
++	usb_free_urb(hso_net->mux_bulk_tx_urb);
++err_mux_bulk_rx:
++	for (i = 0; i < MUX_BULK_RX_BUF_COUNT; i++) {
++		usb_free_urb(hso_net->mux_bulk_rx_urb_pool[i]);
++		kfree(hso_net->mux_bulk_rx_buf_pool[i]);
++	}
++err_net:
++	free_netdev(net);
++err_hso_dev:
++	kfree(hso_dev);
+ 	return NULL;
+ }
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 66973bb563055..148e756857a89 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -880,7 +880,10 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns,
+ 		cpu_to_le64(nvme_sect_to_lba(ns, blk_rq_pos(req)));
+ 	cmnd->write_zeroes.length =
+ 		cpu_to_le16((blk_rq_bytes(req) >> ns->lba_shift) - 1);
+-	cmnd->write_zeroes.control = 0;
++	if (nvme_ns_has_pi(ns))
++		cmnd->write_zeroes.control = cpu_to_le16(NVME_RW_PRINFO_PRACT);
++	else
++		cmnd->write_zeroes.control = 0;
+ 	return BLK_STS_OK;
+ }
+ 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 42ad75ff13481..fb1c5ae0da39d 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2591,7 +2591,9 @@ static void nvme_reset_work(struct work_struct *work)
+ 	bool was_suspend = !!(dev->ctrl.ctrl_config & NVME_CC_SHN_NORMAL);
+ 	int result;
+ 
+-	if (WARN_ON(dev->ctrl.state != NVME_CTRL_RESETTING)) {
++	if (dev->ctrl.state != NVME_CTRL_RESETTING) {
++		dev_warn(dev->ctrl.device, "ctrl state %d is not RESETTING\n",
++			 dev->ctrl.state);
+ 		result = -ENODEV;
+ 		goto out;
+ 	}
+@@ -2998,7 +3000,6 @@ static void nvme_remove(struct pci_dev *pdev)
+ 	if (!pci_device_is_present(pdev)) {
+ 		nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DEAD);
+ 		nvme_dev_disable(dev, true);
+-		nvme_dev_remove_admin(dev);
+ 	}
+ 
+ 	flush_work(&dev->ctrl.reset_work);
+diff --git a/drivers/pwm/pwm-sprd.c b/drivers/pwm/pwm-sprd.c
+index 98c479dfae318..3041f0b3bbb63 100644
+--- a/drivers/pwm/pwm-sprd.c
++++ b/drivers/pwm/pwm-sprd.c
+@@ -183,13 +183,10 @@ static int sprd_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 			}
+ 		}
+ 
+-		if (state->period != cstate->period ||
+-		    state->duty_cycle != cstate->duty_cycle) {
+-			ret = sprd_pwm_config(spc, pwm, state->duty_cycle,
+-					      state->period);
+-			if (ret)
+-				return ret;
+-		}
++		ret = sprd_pwm_config(spc, pwm, state->duty_cycle,
++				      state->period);
++		if (ret)
++			return ret;
+ 
+ 		sprd_pwm_write(spc, pwm->hwpwm, SPRD_PWM_ENABLE, 1);
+ 	} else if (cstate->enabled) {
+diff --git a/drivers/regulator/hi6421-regulator.c b/drivers/regulator/hi6421-regulator.c
+index dc631c1a46b4c..d144a4bdb76da 100644
+--- a/drivers/regulator/hi6421-regulator.c
++++ b/drivers/regulator/hi6421-regulator.c
+@@ -366,9 +366,8 @@ static struct hi6421_regulator_info
+ 
+ static int hi6421_regulator_enable(struct regulator_dev *rdev)
+ {
+-	struct hi6421_regulator_pdata *pdata;
++	struct hi6421_regulator_pdata *pdata = rdev_get_drvdata(rdev);
+ 
+-	pdata = dev_get_drvdata(rdev->dev.parent);
+ 	/* hi6421 spec requires regulator enablement must be serialized:
+ 	 *  - Because when BUCK, LDO switching from off to on, it will have
+ 	 *    a huge instantaneous current; so you can not turn on two or
+@@ -385,9 +384,10 @@ static int hi6421_regulator_enable(struct regulator_dev *rdev)
+ 
+ static unsigned int hi6421_regulator_ldo_get_mode(struct regulator_dev *rdev)
+ {
+-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
+-	u32 reg_val;
++	struct hi6421_regulator_info *info;
++	unsigned int reg_val;
+ 
++	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
+ 	regmap_read(rdev->regmap, rdev->desc->enable_reg, &reg_val);
+ 	if (reg_val & info->mode_mask)
+ 		return REGULATOR_MODE_IDLE;
+@@ -397,9 +397,10 @@ static unsigned int hi6421_regulator_ldo_get_mode(struct regulator_dev *rdev)
+ 
+ static unsigned int hi6421_regulator_buck_get_mode(struct regulator_dev *rdev)
+ {
+-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
+-	u32 reg_val;
++	struct hi6421_regulator_info *info;
++	unsigned int reg_val;
+ 
++	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
+ 	regmap_read(rdev->regmap, rdev->desc->enable_reg, &reg_val);
+ 	if (reg_val & info->mode_mask)
+ 		return REGULATOR_MODE_STANDBY;
+@@ -410,9 +411,10 @@ static unsigned int hi6421_regulator_buck_get_mode(struct regulator_dev *rdev)
+ static int hi6421_regulator_ldo_set_mode(struct regulator_dev *rdev,
+ 						unsigned int mode)
+ {
+-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
+-	u32 new_mode;
++	struct hi6421_regulator_info *info;
++	unsigned int new_mode;
+ 
++	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
+ 	switch (mode) {
+ 	case REGULATOR_MODE_NORMAL:
+ 		new_mode = 0;
+@@ -434,9 +436,10 @@ static int hi6421_regulator_ldo_set_mode(struct regulator_dev *rdev,
+ static int hi6421_regulator_buck_set_mode(struct regulator_dev *rdev,
+ 						unsigned int mode)
+ {
+-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
+-	u32 new_mode;
++	struct hi6421_regulator_info *info;
++	unsigned int new_mode;
+ 
++	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
+ 	switch (mode) {
+ 	case REGULATOR_MODE_NORMAL:
+ 		new_mode = 0;
+@@ -459,7 +462,9 @@ static unsigned int
+ hi6421_regulator_ldo_get_optimum_mode(struct regulator_dev *rdev,
+ 			int input_uV, int output_uV, int load_uA)
+ {
+-	struct hi6421_regulator_info *info = rdev_get_drvdata(rdev);
++	struct hi6421_regulator_info *info;
++
++	info = container_of(rdev->desc, struct hi6421_regulator_info, desc);
+ 
+ 	if (load_uA > info->eco_microamp)
+ 		return REGULATOR_MODE_NORMAL;
+@@ -543,14 +548,13 @@ static int hi6421_regulator_probe(struct platform_device *pdev)
+ 	if (!pdata)
+ 		return -ENOMEM;
+ 	mutex_init(&pdata->lock);
+-	platform_set_drvdata(pdev, pdata);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(hi6421_regulator_info); i++) {
+ 		/* assign per-regulator data */
+ 		info = &hi6421_regulator_info[i];
+ 
+ 		config.dev = pdev->dev.parent;
+-		config.driver_data = info;
++		config.driver_data = pdata;
+ 		config.regmap = pmic->regmap;
+ 
+ 		rdev = devm_regulator_register(&pdev->dev, &info->desc,
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index b07105ae7c917..d8b05d8b54708 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -439,39 +439,10 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
+ 	struct device *dev = container_of(kobj, struct device, kobj);
+ 	struct iscsi_iface *iface = iscsi_dev_to_iface(dev);
+ 	struct iscsi_transport *t = iface->transport;
+-	int param;
+-	int param_type;
++	int param = -1;
+ 
+ 	if (attr == &dev_attr_iface_enabled.attr)
+ 		param = ISCSI_NET_PARAM_IFACE_ENABLE;
+-	else if (attr == &dev_attr_iface_vlan_id.attr)
+-		param = ISCSI_NET_PARAM_VLAN_ID;
+-	else if (attr == &dev_attr_iface_vlan_priority.attr)
+-		param = ISCSI_NET_PARAM_VLAN_PRIORITY;
+-	else if (attr == &dev_attr_iface_vlan_enabled.attr)
+-		param = ISCSI_NET_PARAM_VLAN_ENABLED;
+-	else if (attr == &dev_attr_iface_mtu.attr)
+-		param = ISCSI_NET_PARAM_MTU;
+-	else if (attr == &dev_attr_iface_port.attr)
+-		param = ISCSI_NET_PARAM_PORT;
+-	else if (attr == &dev_attr_iface_ipaddress_state.attr)
+-		param = ISCSI_NET_PARAM_IPADDR_STATE;
+-	else if (attr == &dev_attr_iface_delayed_ack_en.attr)
+-		param = ISCSI_NET_PARAM_DELAYED_ACK_EN;
+-	else if (attr == &dev_attr_iface_tcp_nagle_disable.attr)
+-		param = ISCSI_NET_PARAM_TCP_NAGLE_DISABLE;
+-	else if (attr == &dev_attr_iface_tcp_wsf_disable.attr)
+-		param = ISCSI_NET_PARAM_TCP_WSF_DISABLE;
+-	else if (attr == &dev_attr_iface_tcp_wsf.attr)
+-		param = ISCSI_NET_PARAM_TCP_WSF;
+-	else if (attr == &dev_attr_iface_tcp_timer_scale.attr)
+-		param = ISCSI_NET_PARAM_TCP_TIMER_SCALE;
+-	else if (attr == &dev_attr_iface_tcp_timestamp_en.attr)
+-		param = ISCSI_NET_PARAM_TCP_TIMESTAMP_EN;
+-	else if (attr == &dev_attr_iface_cache_id.attr)
+-		param = ISCSI_NET_PARAM_CACHE_ID;
+-	else if (attr == &dev_attr_iface_redirect_en.attr)
+-		param = ISCSI_NET_PARAM_REDIRECT_EN;
+ 	else if (attr == &dev_attr_iface_def_taskmgmt_tmo.attr)
+ 		param = ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO;
+ 	else if (attr == &dev_attr_iface_header_digest.attr)
+@@ -508,6 +479,38 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
+ 		param = ISCSI_IFACE_PARAM_STRICT_LOGIN_COMP_EN;
+ 	else if (attr == &dev_attr_iface_initiator_name.attr)
+ 		param = ISCSI_IFACE_PARAM_INITIATOR_NAME;
++
++	if (param != -1)
++		return t->attr_is_visible(ISCSI_IFACE_PARAM, param);
++
++	if (attr == &dev_attr_iface_vlan_id.attr)
++		param = ISCSI_NET_PARAM_VLAN_ID;
++	else if (attr == &dev_attr_iface_vlan_priority.attr)
++		param = ISCSI_NET_PARAM_VLAN_PRIORITY;
++	else if (attr == &dev_attr_iface_vlan_enabled.attr)
++		param = ISCSI_NET_PARAM_VLAN_ENABLED;
++	else if (attr == &dev_attr_iface_mtu.attr)
++		param = ISCSI_NET_PARAM_MTU;
++	else if (attr == &dev_attr_iface_port.attr)
++		param = ISCSI_NET_PARAM_PORT;
++	else if (attr == &dev_attr_iface_ipaddress_state.attr)
++		param = ISCSI_NET_PARAM_IPADDR_STATE;
++	else if (attr == &dev_attr_iface_delayed_ack_en.attr)
++		param = ISCSI_NET_PARAM_DELAYED_ACK_EN;
++	else if (attr == &dev_attr_iface_tcp_nagle_disable.attr)
++		param = ISCSI_NET_PARAM_TCP_NAGLE_DISABLE;
++	else if (attr == &dev_attr_iface_tcp_wsf_disable.attr)
++		param = ISCSI_NET_PARAM_TCP_WSF_DISABLE;
++	else if (attr == &dev_attr_iface_tcp_wsf.attr)
++		param = ISCSI_NET_PARAM_TCP_WSF;
++	else if (attr == &dev_attr_iface_tcp_timer_scale.attr)
++		param = ISCSI_NET_PARAM_TCP_TIMER_SCALE;
++	else if (attr == &dev_attr_iface_tcp_timestamp_en.attr)
++		param = ISCSI_NET_PARAM_TCP_TIMESTAMP_EN;
++	else if (attr == &dev_attr_iface_cache_id.attr)
++		param = ISCSI_NET_PARAM_CACHE_ID;
++	else if (attr == &dev_attr_iface_redirect_en.attr)
++		param = ISCSI_NET_PARAM_REDIRECT_EN;
+ 	else if (iface->iface_type == ISCSI_IFACE_TYPE_IPV4) {
+ 		if (attr == &dev_attr_ipv4_iface_ipaddress.attr)
+ 			param = ISCSI_NET_PARAM_IPV4_ADDR;
+@@ -598,32 +601,7 @@ static umode_t iscsi_iface_attr_is_visible(struct kobject *kobj,
+ 		return 0;
+ 	}
+ 
+-	switch (param) {
+-	case ISCSI_IFACE_PARAM_DEF_TASKMGMT_TMO:
+-	case ISCSI_IFACE_PARAM_HDRDGST_EN:
+-	case ISCSI_IFACE_PARAM_DATADGST_EN:
+-	case ISCSI_IFACE_PARAM_IMM_DATA_EN:
+-	case ISCSI_IFACE_PARAM_INITIAL_R2T_EN:
+-	case ISCSI_IFACE_PARAM_DATASEQ_INORDER_EN:
+-	case ISCSI_IFACE_PARAM_PDU_INORDER_EN:
+-	case ISCSI_IFACE_PARAM_ERL:
+-	case ISCSI_IFACE_PARAM_MAX_RECV_DLENGTH:
+-	case ISCSI_IFACE_PARAM_FIRST_BURST:
+-	case ISCSI_IFACE_PARAM_MAX_R2T:
+-	case ISCSI_IFACE_PARAM_MAX_BURST:
+-	case ISCSI_IFACE_PARAM_CHAP_AUTH_EN:
+-	case ISCSI_IFACE_PARAM_BIDI_CHAP_EN:
+-	case ISCSI_IFACE_PARAM_DISCOVERY_AUTH_OPTIONAL:
+-	case ISCSI_IFACE_PARAM_DISCOVERY_LOGOUT_EN:
+-	case ISCSI_IFACE_PARAM_STRICT_LOGIN_COMP_EN:
+-	case ISCSI_IFACE_PARAM_INITIATOR_NAME:
+-		param_type = ISCSI_IFACE_PARAM;
+-		break;
+-	default:
+-		param_type = ISCSI_NET_PARAM;
+-	}
+-
+-	return t->attr_is_visible(param_type, param);
++	return t->attr_is_visible(ISCSI_NET_PARAM, param);
+ }
+ 
+ static struct attribute *iscsi_iface_attrs[] = {
+diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c
+index fe40626e45aa8..61cbcc7e2121f 100644
+--- a/drivers/spi/spi-bcm2835.c
++++ b/drivers/spi/spi-bcm2835.c
+@@ -84,6 +84,7 @@ MODULE_PARM_DESC(polling_limit_us,
+  * struct bcm2835_spi - BCM2835 SPI controller
+  * @regs: base address of register map
+  * @clk: core clock, divided to calculate serial clock
++ * @clk_hz: core clock cached speed
+  * @irq: interrupt, signals TX FIFO empty or RX FIFO ¾ full
+  * @tfr: SPI transfer currently processed
+  * @ctlr: SPI controller reverse lookup
+@@ -124,6 +125,7 @@ MODULE_PARM_DESC(polling_limit_us,
+ struct bcm2835_spi {
+ 	void __iomem *regs;
+ 	struct clk *clk;
++	unsigned long clk_hz;
+ 	int irq;
+ 	struct spi_transfer *tfr;
+ 	struct spi_controller *ctlr;
+@@ -1082,19 +1084,18 @@ static int bcm2835_spi_transfer_one(struct spi_controller *ctlr,
+ 				    struct spi_transfer *tfr)
+ {
+ 	struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr);
+-	unsigned long spi_hz, clk_hz, cdiv;
++	unsigned long spi_hz, cdiv;
+ 	unsigned long hz_per_byte, byte_limit;
+ 	u32 cs = bs->prepare_cs[spi->chip_select];
+ 
+ 	/* set clock */
+ 	spi_hz = tfr->speed_hz;
+-	clk_hz = clk_get_rate(bs->clk);
+ 
+-	if (spi_hz >= clk_hz / 2) {
++	if (spi_hz >= bs->clk_hz / 2) {
+ 		cdiv = 2; /* clk_hz/2 is the fastest we can go */
+ 	} else if (spi_hz) {
+ 		/* CDIV must be a multiple of two */
+-		cdiv = DIV_ROUND_UP(clk_hz, spi_hz);
++		cdiv = DIV_ROUND_UP(bs->clk_hz, spi_hz);
+ 		cdiv += (cdiv % 2);
+ 
+ 		if (cdiv >= 65536)
+@@ -1102,7 +1103,7 @@ static int bcm2835_spi_transfer_one(struct spi_controller *ctlr,
+ 	} else {
+ 		cdiv = 0; /* 0 is the slowest we can go */
+ 	}
+-	tfr->effective_speed_hz = cdiv ? (clk_hz / cdiv) : (clk_hz / 65536);
++	tfr->effective_speed_hz = cdiv ? (bs->clk_hz / cdiv) : (bs->clk_hz / 65536);
+ 	bcm2835_wr(bs, BCM2835_SPI_CLK, cdiv);
+ 
+ 	/* handle all the 3-wire mode */
+@@ -1320,6 +1321,7 @@ static int bcm2835_spi_probe(struct platform_device *pdev)
+ 		return bs->irq ? bs->irq : -ENODEV;
+ 
+ 	clk_prepare_enable(bs->clk);
++	bs->clk_hz = clk_get_rate(bs->clk);
+ 
+ 	err = bcm2835_dma_init(ctlr, &pdev->dev, bs);
+ 	if (err)
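The hunk above only moves the clk_get_rate() call out of the per-transfer hot path by caching the core clock in bs->clk_hz at probe time; the divider selection itself is unchanged. As a minimal, standalone sketch of that selection (the function name and the 250 MHz example rate are local to the illustration, not driver symbols):

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static unsigned long pick_cdiv(unsigned long clk_hz, unsigned long spi_hz)
{
	unsigned long cdiv;

	if (spi_hz >= clk_hz / 2)
		return 2;		/* clk_hz/2 is the fastest we can go */
	if (!spi_hz)
		return 0;		/* 0 is the slowest we can go */
	cdiv = DIV_ROUND_UP(clk_hz, spi_hz);
	cdiv += (cdiv % 2);		/* CDIV must be a multiple of two */
	if (cdiv >= 65536)
		cdiv = 0;
	return cdiv;
}

int main(void)
{
	unsigned long clk_hz = 250000000, cdiv = pick_cdiv(clk_hz, 10000000);

	/* cdiv = 26, effective rate ~9.6 MHz, never above the request */
	printf("cdiv=%lu effective=%lu Hz\n", cdiv,
	       cdiv ? clk_hz / cdiv : clk_hz / 65536);
	return 0;
}

Because cdiv is rounded up to the next even value, the effective rate can only land at or below the requested spi_hz.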
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 7a00346ff9b92..d62d69dd72b9d 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -309,6 +309,9 @@ static unsigned int cqspi_calc_dummy(const struct spi_mem_op *op, bool dtr)
+ {
+ 	unsigned int dummy_clk;
+ 
++	if (!op->dummy.nbytes)
++		return 0;
++
+ 	dummy_clk = op->dummy.nbytes * (8 / op->dummy.buswidth);
+ 	if (dtr)
+ 		dummy_clk /= 2;
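The new guard matters because a spi-mem operation with no dummy phase leaves op->dummy.buswidth at 0, so the division just below it would be undefined. A hedged, standalone restatement of the calculation (names local to the example):

#include <stdio.h>

static unsigned int calc_dummy(unsigned int nbytes, unsigned int buswidth,
			       int dtr)
{
	unsigned int dummy_clk;

	if (!nbytes)
		return 0;	/* no dummy phase: buswidth may be 0 here */

	/* cycles = total dummy bits divided by the bus width */
	dummy_clk = nbytes * (8 / buswidth);
	if (dtr)
		dummy_clk /= 2;	/* DTR transfers clock on both edges */
	return dummy_clk;
}

int main(void)
{
	printf("%u\n", calc_dummy(1, 1, 0));	/* 8 dummy clocks */
	printf("%u\n", calc_dummy(0, 0, 0));	/* guarded: 0, no div-by-0 */
	return 0;
}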
+diff --git a/drivers/spi/spi-cadence.c b/drivers/spi/spi-cadence.c
+index a3afd1b9ac567..ceb16e70d235a 100644
+--- a/drivers/spi/spi-cadence.c
++++ b/drivers/spi/spi-cadence.c
+@@ -517,6 +517,12 @@ static int cdns_spi_probe(struct platform_device *pdev)
+ 		goto clk_dis_apb;
+ 	}
+ 
++	pm_runtime_use_autosuspend(&pdev->dev);
++	pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT);
++	pm_runtime_get_noresume(&pdev->dev);
++	pm_runtime_set_active(&pdev->dev);
++	pm_runtime_enable(&pdev->dev);
++
+ 	ret = of_property_read_u32(pdev->dev.of_node, "num-cs", &num_cs);
+ 	if (ret < 0)
+ 		master->num_chipselect = CDNS_SPI_DEFAULT_NUM_CS;
+@@ -531,11 +537,6 @@ static int cdns_spi_probe(struct platform_device *pdev)
+ 	/* SPI controller initializations */
+ 	cdns_spi_init_hw(xspi);
+ 
+-	pm_runtime_set_active(&pdev->dev);
+-	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_use_autosuspend(&pdev->dev);
+-	pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT);
+-
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq <= 0) {
+ 		ret = -ENXIO;
+@@ -566,6 +567,9 @@ static int cdns_spi_probe(struct platform_device *pdev)
+ 
+ 	master->bits_per_word_mask = SPI_BPW_MASK(8);
+ 
++	pm_runtime_mark_last_busy(&pdev->dev);
++	pm_runtime_put_autosuspend(&pdev->dev);
++
+ 	ret = spi_register_master(master);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "spi_register_master failed\n");
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 976f73b9e2998..8d5fa7f1e5069 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -427,13 +427,23 @@ static int mtk_spi_fifo_transfer(struct spi_master *master,
+ 	mtk_spi_setup_packet(master);
+ 
+ 	cnt = xfer->len / 4;
+-	iowrite32_rep(mdata->base + SPI_TX_DATA_REG, xfer->tx_buf, cnt);
++	if (xfer->tx_buf)
++		iowrite32_rep(mdata->base + SPI_TX_DATA_REG, xfer->tx_buf, cnt);
++
++	if (xfer->rx_buf)
++		ioread32_rep(mdata->base + SPI_RX_DATA_REG, xfer->rx_buf, cnt);
+ 
+ 	remainder = xfer->len % 4;
+ 	if (remainder > 0) {
+ 		reg_val = 0;
+-		memcpy(&reg_val, xfer->tx_buf + (cnt * 4), remainder);
+-		writel(reg_val, mdata->base + SPI_TX_DATA_REG);
++		if (xfer->tx_buf) {
++			memcpy(&reg_val, xfer->tx_buf + (cnt * 4), remainder);
++			writel(reg_val, mdata->base + SPI_TX_DATA_REG);
++		}
++		if (xfer->rx_buf) {
++			reg_val = readl(mdata->base + SPI_RX_DATA_REG);
++			memcpy(xfer->rx_buf + (cnt * 4), &reg_val, remainder);
++		}
+ 	}
+ 
+ 	mtk_spi_enable_transfer(master);
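The FIFO here is a single 32-bit data register per direction, so the change mirrors the existing TX tail handling on the RX side: whole words move via ioread32_rep()/iowrite32_rep(), and the final len % 4 bytes are bounced through a temporary word. A self-contained sketch of the RX tail copy (little-endian host assumed; the register value is invented):

#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned char rx_buf[7] = { 0 };
	unsigned int cnt = sizeof(rx_buf) / 4;		/* full words: 1 */
	unsigned int remainder = sizeof(rx_buf) % 4;	/* tail bytes: 3 */
	unsigned int reg_val = 0x00ccbbaa;		/* pretend readl() result */

	if (remainder > 0)
		memcpy(rx_buf + cnt * 4, &reg_val, remainder);

	printf("tail: %02x %02x %02x\n", rx_buf[4], rx_buf[5], rx_buf[6]);
	return 0;
}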
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 8ffcffbb81571..a92a28933edbb 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -1925,6 +1925,7 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 		master->can_dma = stm32_spi_can_dma;
+ 
+ 	pm_runtime_set_active(&pdev->dev);
++	pm_runtime_get_noresume(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
+ 
+ 	ret = spi_register_master(master);
+@@ -1940,6 +1941,8 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 
+ err_pm_disable:
+ 	pm_runtime_disable(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
+ err_dma_release:
+ 	if (spi->dma_tx)
+ 		dma_release_channel(spi->dma_tx);
+@@ -1956,9 +1959,14 @@ static int stm32_spi_remove(struct platform_device *pdev)
+ 	struct spi_master *master = platform_get_drvdata(pdev);
+ 	struct stm32_spi *spi = spi_master_get_devdata(master);
+ 
++	pm_runtime_get_sync(&pdev->dev);
++
+ 	spi_unregister_master(master);
+ 	spi->cfg->disable(spi);
+ 
++	pm_runtime_disable(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
+ 	if (master->dma_tx)
+ 		dma_release_channel(master->dma_tx);
+ 	if (master->dma_rx)
+@@ -1966,7 +1974,6 @@ static int stm32_spi_remove(struct platform_device *pdev)
+ 
+ 	clk_disable_unprepare(spi->clk);
+ 
+-	pm_runtime_disable(&pdev->dev);
+ 
+ 	pinctrl_pm_select_sleep_state(&pdev->dev);
+ 
+diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
+index 7b07e557dc8d8..6594bb0b9df04 100644
+--- a/drivers/target/target_core_sbc.c
++++ b/drivers/target/target_core_sbc.c
+@@ -25,7 +25,7 @@
+ #include "target_core_alua.h"
+ 
+ static sense_reason_t
+-sbc_check_prot(struct se_device *, struct se_cmd *, unsigned char *, u32, bool);
++sbc_check_prot(struct se_device *, struct se_cmd *, unsigned char, u32, bool);
+ static sense_reason_t sbc_execute_unmap(struct se_cmd *cmd);
+ 
+ static sense_reason_t
+@@ -279,14 +279,14 @@ static inline unsigned long long transport_lba_64_ext(unsigned char *cdb)
+ }
+ 
+ static sense_reason_t
+-sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *ops)
++sbc_setup_write_same(struct se_cmd *cmd, unsigned char flags, struct sbc_ops *ops)
+ {
+ 	struct se_device *dev = cmd->se_dev;
+ 	sector_t end_lba = dev->transport->get_blocks(dev) + 1;
+ 	unsigned int sectors = sbc_get_write_same_sectors(cmd);
+ 	sense_reason_t ret;
+ 
+-	if ((flags[0] & 0x04) || (flags[0] & 0x02)) {
++	if ((flags & 0x04) || (flags & 0x02)) {
+ 		pr_err("WRITE_SAME PBDATA and LBDATA"
+ 			" bits not supported for Block Discard"
+ 			" Emulation\n");
+@@ -308,7 +308,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
+ 	}
+ 
+ 	/* We always have ANC_SUP == 0 so setting ANCHOR is always an error */
+-	if (flags[0] & 0x10) {
++	if (flags & 0x10) {
+ 		pr_warn("WRITE SAME with ANCHOR not supported\n");
+ 		return TCM_INVALID_CDB_FIELD;
+ 	}
+@@ -316,7 +316,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
+ 	 * Special case for WRITE_SAME w/ UNMAP=1 that ends up getting
+ 	 * translated into block discard requests within backend code.
+ 	 */
+-	if (flags[0] & 0x08) {
++	if (flags & 0x08) {
+ 		if (!ops->execute_unmap)
+ 			return TCM_UNSUPPORTED_SCSI_OPCODE;
+ 
+@@ -331,7 +331,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
+ 	if (!ops->execute_write_same)
+ 		return TCM_UNSUPPORTED_SCSI_OPCODE;
+ 
+-	ret = sbc_check_prot(dev, cmd, &cmd->t_task_cdb[0], sectors, true);
++	ret = sbc_check_prot(dev, cmd, flags >> 5, sectors, true);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -717,10 +717,9 @@ sbc_set_prot_op_checks(u8 protect, bool fabric_prot, enum target_prot_type prot_
+ }
+ 
+ static sense_reason_t
+-sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char *cdb,
++sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char protect,
+ 	       u32 sectors, bool is_write)
+ {
+-	u8 protect = cdb[1] >> 5;
+ 	int sp_ops = cmd->se_sess->sup_prot_ops;
+ 	int pi_prot_type = dev->dev_attrib.pi_prot_type;
+ 	bool fabric_prot = false;
+@@ -768,7 +767,7 @@ sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char *cdb,
+ 		fallthrough;
+ 	default:
+ 		pr_err("Unable to determine pi_prot_type for CDB: 0x%02x "
+-		       "PROTECT: 0x%02x\n", cdb[0], protect);
++		       "PROTECT: 0x%02x\n", cmd->t_task_cdb[0], protect);
+ 		return TCM_INVALID_CDB_FIELD;
+ 	}
+ 
+@@ -843,7 +842,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		if (sbc_check_dpofua(dev, cmd, cdb))
+ 			return TCM_INVALID_CDB_FIELD;
+ 
+-		ret = sbc_check_prot(dev, cmd, cdb, sectors, false);
++		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, false);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -857,7 +856,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		if (sbc_check_dpofua(dev, cmd, cdb))
+ 			return TCM_INVALID_CDB_FIELD;
+ 
+-		ret = sbc_check_prot(dev, cmd, cdb, sectors, false);
++		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, false);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -871,7 +870,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		if (sbc_check_dpofua(dev, cmd, cdb))
+ 			return TCM_INVALID_CDB_FIELD;
+ 
+-		ret = sbc_check_prot(dev, cmd, cdb, sectors, false);
++		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, false);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -892,7 +891,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		if (sbc_check_dpofua(dev, cmd, cdb))
+ 			return TCM_INVALID_CDB_FIELD;
+ 
+-		ret = sbc_check_prot(dev, cmd, cdb, sectors, true);
++		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, true);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -906,7 +905,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		if (sbc_check_dpofua(dev, cmd, cdb))
+ 			return TCM_INVALID_CDB_FIELD;
+ 
+-		ret = sbc_check_prot(dev, cmd, cdb, sectors, true);
++		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, true);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -921,7 +920,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		if (sbc_check_dpofua(dev, cmd, cdb))
+ 			return TCM_INVALID_CDB_FIELD;
+ 
+-		ret = sbc_check_prot(dev, cmd, cdb, sectors, true);
++		ret = sbc_check_prot(dev, cmd, cdb[1] >> 5, sectors, true);
+ 		if (ret)
+ 			return ret;
+ 
+@@ -980,7 +979,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 			size = sbc_get_size(cmd, 1);
+ 			cmd->t_task_lba = get_unaligned_be64(&cdb[12]);
+ 
+-			ret = sbc_setup_write_same(cmd, &cdb[10], ops);
++			ret = sbc_setup_write_same(cmd, cdb[10], ops);
+ 			if (ret)
+ 				return ret;
+ 			break;
+@@ -1079,7 +1078,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		size = sbc_get_size(cmd, 1);
+ 		cmd->t_task_lba = get_unaligned_be64(&cdb[2]);
+ 
+-		ret = sbc_setup_write_same(cmd, &cdb[1], ops);
++		ret = sbc_setup_write_same(cmd, cdb[1], ops);
+ 		if (ret)
+ 			return ret;
+ 		break;
+@@ -1097,7 +1096,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ 		 * Follow sbcr26 with WRITE_SAME (10) and check for the existence
+ 		 * of byte 1 bit 3 UNMAP instead of original reserved field
+ 		 */
+-		ret = sbc_setup_write_same(cmd, &cdb[1], ops);
++		ret = sbc_setup_write_same(cmd, cdb[1], ops);
+ 		if (ret)
+ 			return ret;
+ 		break;
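The refactor above passes sbc_check_prot() the already-decoded 3-bit PROTECT value and sbc_setup_write_same() the flags byte itself, rather than a pointer into the CDB; that is what makes the cdb[10] call site correct, since WRITE SAME(32) keeps its protection flags in byte 10 rather than byte 1. A small standalone sketch of the byte layout the shift relies on (the example value is invented):

#include <stdio.h>

int main(void)
{
	unsigned char flags = 0xe8;		/* example: WRPROTECT=7, FUA=1 */
	unsigned char protect = flags >> 5;	/* bits 7:5 -> PROTECT field */
	unsigned char dpo = (flags >> 4) & 1;	/* bit 4 */
	unsigned char fua = (flags >> 3) & 1;	/* bit 3 */

	printf("protect=%u dpo=%u fua=%u\n", protect, dpo, fua);
	return 0;
}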
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 7e35eddd9eb70..26ceabe34de55 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -886,7 +886,7 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status)
+ 	INIT_WORK(&cmd->work, success ? target_complete_ok_work :
+ 		  target_complete_failure_work);
+ 
+-	if (wwn->cmd_compl_affinity == SE_COMPL_AFFINITY_CPUID)
++	if (!wwn || wwn->cmd_compl_affinity == SE_COMPL_AFFINITY_CPUID)
+ 		cpu = cmd->cpuid;
+ 	else
+ 		cpu = wwn->cmd_compl_affinity;
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index df8e69e60aaf7..4e123336e410b 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -48,6 +48,7 @@
+ 
+ #define USB_TP_TRANSMISSION_DELAY	40	/* ns */
+ #define USB_TP_TRANSMISSION_DELAY_MAX	65535	/* ns */
++#define USB_PING_RESPONSE_TIME		400	/* ns */
+ 
+ /* Protect struct usb_device->state and ->children members
+  * Note: Both are also protected by ->dev.sem, except that ->state can
+@@ -182,8 +183,9 @@ int usb_device_supports_lpm(struct usb_device *udev)
+ }
+ 
+ /*
+- * Set the Maximum Exit Latency (MEL) for the host to initiate a transition from
+- * either U1 or U2.
++ * Set the Maximum Exit Latency (MEL) for the host to wake up the path from
++ * U1/U2, send a PING to the device and receive a PING_RESPONSE.
++ * See USB 3.1 section C.1.5.2
+  */
+ static void usb_set_lpm_mel(struct usb_device *udev,
+ 		struct usb3_lpm_parameters *udev_lpm_params,
+@@ -193,35 +195,37 @@ static void usb_set_lpm_mel(struct usb_device *udev,
+ 		unsigned int hub_exit_latency)
+ {
+ 	unsigned int total_mel;
+-	unsigned int device_mel;
+-	unsigned int hub_mel;
+ 
+ 	/*
+-	 * Calculate the time it takes to transition all links from the roothub
+-	 * to the parent hub into U0.  The parent hub must then decode the
+-	 * packet (hub header decode latency) to figure out which port it was
+-	 * bound for.
+-	 *
+-	 * The Hub Header decode latency is expressed in 0.1us intervals (0x1
+-	 * means 0.1us).  Multiply that by 100 to get nanoseconds.
++	 * tMEL1. Time to transition the path from host to device into U0.
++	 * MEL for parent already contains the delay up to parent, so only add
++	 * the exit latency for the last link (pick the slower exit latency),
++	 * and the hub header decode latency. See USB 3.1 section C 2.2.1
++	 * Store MEL in nanoseconds
+ 	 */
+ 	total_mel = hub_lpm_params->mel +
+-		(hub->descriptor->u.ss.bHubHdrDecLat * 100);
++		max(udev_exit_latency, hub_exit_latency) * 1000 +
++		hub->descriptor->u.ss.bHubHdrDecLat * 100;
+ 
+ 	/*
+-	 * How long will it take to transition the downstream hub's port into
+-	 * U0?  The greater of either the hub exit latency or the device exit
+-	 * latency.
+-	 *
+-	 * The BOS U1/U2 exit latencies are expressed in 1us intervals.
+-	 * Multiply that by 1000 to get nanoseconds.
++	 * tMEL2. Time to submit PING packet. Sum of tTPTransmissionDelay for
++	 * each link + wHubDelay for each hub. Add only for last link.
++	 * tMEL4, the time for PING_RESPONSE to traverse upstream is similar.
++	 * Multiply by 2 to include it as well.
+ 	 */
+-	device_mel = udev_exit_latency * 1000;
+-	hub_mel = hub_exit_latency * 1000;
+-	if (device_mel > hub_mel)
+-		total_mel += device_mel;
+-	else
+-		total_mel += hub_mel;
++	total_mel += (__le16_to_cpu(hub->descriptor->u.ss.wHubDelay) +
++		      USB_TP_TRANSMISSION_DELAY) * 2;
++
++	/*
++	 * tMEL3, tPingResponse. Time taken by device to generate PING_RESPONSE
++	 * after receiving PING. Also add 2100ns as stated in USB 3.1 C 1.5.2.4
++	 * to cover the delay if the PING_RESPONSE is queued behind a Max Packet
++	 * Size DP.
++	 * Note these delays should be added only once for the entire path, so
++	 * add them to the MEL of the device connected to the roothub.
++	 */
++	if (!hub->hdev->parent)
++		total_mel += USB_PING_RESPONSE_TIME + 2100;
+ 
+ 	udev_lpm_params->mel = total_mel;
+ }
+@@ -4090,6 +4094,47 @@ static int usb_set_lpm_timeout(struct usb_device *udev,
+ 	return 0;
+ }
+ 
++/*
++ * Don't allow device-initiated U1/U2 if the system exit latency + one bus
++ * interval is greater than the minimum service interval of any active
++ * periodic endpoint. See USB 3.2 section 9.4.9
++ */
++static bool usb_device_may_initiate_lpm(struct usb_device *udev,
++					enum usb3_link_state state)
++{
++	unsigned int sel;		/* us */
++	int i, j;
++
++	if (state == USB3_LPM_U1)
++		sel = DIV_ROUND_UP(udev->u1_params.sel, 1000);
++	else if (state == USB3_LPM_U2)
++		sel = DIV_ROUND_UP(udev->u2_params.sel, 1000);
++	else
++		return false;
++
++	for (i = 0; i < udev->actconfig->desc.bNumInterfaces; i++) {
++		struct usb_interface *intf;
++		struct usb_endpoint_descriptor *desc;
++		unsigned int interval;
++
++		intf = udev->actconfig->interface[i];
++		if (!intf)
++			continue;
++
++		for (j = 0; j < intf->cur_altsetting->desc.bNumEndpoints; j++) {
++			desc = &intf->cur_altsetting->endpoint[j].desc;
++
++			if (usb_endpoint_xfer_int(desc) ||
++			    usb_endpoint_xfer_isoc(desc)) {
++				interval = (1 << (desc->bInterval - 1)) * 125;
++				if (sel + 125 > interval)
++					return false;
++			}
++		}
++	}
++	return true;
++}
++
+ /*
+  * Enable the hub-initiated U1/U2 idle timeouts, and enable device-initiated
+  * U1/U2 entry.
+@@ -4162,20 +4207,23 @@ static void usb_enable_link_state(struct usb_hcd *hcd, struct usb_device *udev,
+ 	 * U1/U2_ENABLE
+ 	 */
+ 	if (udev->actconfig &&
+-	    usb_set_device_initiated_lpm(udev, state, true) == 0) {
+-		if (state == USB3_LPM_U1)
+-			udev->usb3_lpm_u1_enabled = 1;
+-		else if (state == USB3_LPM_U2)
+-			udev->usb3_lpm_u2_enabled = 1;
+-	} else {
+-		/* Don't request U1/U2 entry if the device
+-		 * cannot transition to U1/U2.
+-		 */
+-		usb_set_lpm_timeout(udev, state, 0);
+-		hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state);
++	    usb_device_may_initiate_lpm(udev, state)) {
++		if (usb_set_device_initiated_lpm(udev, state, true)) {
++			/*
++			 * Request to enable device initiated U1/U2 failed,
++			 * better to turn off lpm in this case.
++			 */
++			usb_set_lpm_timeout(udev, state, 0);
++			hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state);
++			return;
++		}
+ 	}
+-}
+ 
++	if (state == USB3_LPM_U1)
++		udev->usb3_lpm_u1_enabled = 1;
++	else if (state == USB3_LPM_U2)
++		udev->usb3_lpm_u2_enabled = 1;
++}
+ /*
+  * Disable the hub-initiated U1/U2 idle timeouts, and disable device-initiated
+  * U1/U2 entry.
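Putting the tMEL1-tMEL4 comments together, the new exit latency is one sum; the sketch below restates it as standalone C with invented inputs (units as noted in the comments above). The same diff also adds usb_device_may_initiate_lpm(), which refuses device-initiated entry whenever sel plus one 125 us bus interval exceeds the (1 << (bInterval - 1)) * 125 us service interval of any active periodic endpoint.

#include <stdio.h>

#define USB_TP_TRANSMISSION_DELAY	40	/* ns */
#define USB_PING_RESPONSE_TIME		400	/* ns */

static unsigned int total_mel(unsigned int parent_mel_ns,
			      unsigned int udev_el_us, unsigned int hub_el_us,
			      unsigned int hdr_dec_lat,
			      unsigned int hub_delay_ns, int parent_is_root)
{
	unsigned int el_us = udev_el_us > hub_el_us ? udev_el_us : hub_el_us;
	unsigned int mel = parent_mel_ns;

	mel += el_us * 1000;		/* tMEL1: slower exit latency, us->ns */
	mel += hdr_dec_lat * 100;	/* tMEL1: header decode, 0.1 us units */
	mel += (hub_delay_ns + USB_TP_TRANSMISSION_DELAY) * 2; /* tMEL2+tMEL4 */
	if (parent_is_root)		/* tMEL3: PING_RESPONSE, path-wide */
		mel += USB_PING_RESPONSE_TIME + 2100;
	return mel;
}

int main(void)
{
	/* device on a root port: U2 exit 10 us vs hub 4 us, decode 0.1 us */
	printf("MEL = %u ns\n", total_mel(0, 10, 4, 1, 0, 1));	/* 12680 */
	return 0;
}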
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 21e7522655ac9..a54a735b63843 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -502,10 +502,6 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* DJI CineSSD */
+ 	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+ 
+-	/* Fibocom L850-GL LTE Modem */
+-	{ USB_DEVICE(0x2cb7, 0x0007), .driver_info =
+-			USB_QUIRK_IGNORE_REMOTE_WAKEUP },
+-
+ 	/* INTEL VALUE SSD */
+ 	{ USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+diff --git a/drivers/usb/dwc2/core.h b/drivers/usb/dwc2/core.h
+index ab6b815e0089c..483de2bbfaabe 100644
+--- a/drivers/usb/dwc2/core.h
++++ b/drivers/usb/dwc2/core.h
+@@ -383,6 +383,9 @@ enum dwc2_ep0_state {
+  *			0 - No (default)
+  *			1 - Partial power down
+  *			2 - Hibernation
++ * @no_clock_gating:	Specifies whether to avoid the clock gating feature.
++ *			0 - No (use clock gating)
++ *			1 - Yes (avoid it)
+  * @lpm:		Enable LPM support.
+  *			0 - No
+  *			1 - Yes
+@@ -480,6 +483,7 @@ struct dwc2_core_params {
+ #define DWC2_POWER_DOWN_PARAM_NONE		0
+ #define DWC2_POWER_DOWN_PARAM_PARTIAL		1
+ #define DWC2_POWER_DOWN_PARAM_HIBERNATION	2
++	bool no_clock_gating;
+ 
+ 	bool lpm;
+ 	bool lpm_clock_gating;
+diff --git a/drivers/usb/dwc2/core_intr.c b/drivers/usb/dwc2/core_intr.c
+index a5ab03808da69..a5c52b237e723 100644
+--- a/drivers/usb/dwc2/core_intr.c
++++ b/drivers/usb/dwc2/core_intr.c
+@@ -556,7 +556,8 @@ static void dwc2_handle_usb_suspend_intr(struct dwc2_hsotg *hsotg)
+ 				 * If neither hibernation nor partial power down are supported,
+ 				 * clock gating is used to save power.
+ 				 */
+-				dwc2_gadget_enter_clock_gating(hsotg);
++				if (!hsotg->params.no_clock_gating)
++					dwc2_gadget_enter_clock_gating(hsotg);
+ 			}
+ 
+ 			/*
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 184964174dc0c..a9b63af5e498d 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -2749,12 +2749,14 @@ static void dwc2_hsotg_complete_in(struct dwc2_hsotg *hsotg,
+ 		return;
+ 	}
+ 
+-	/* Zlp for all endpoints, for ep0 only in DATA IN stage */
++	/* Zlp for all endpoints in non-DDMA mode, for ep0 only in DATA IN stage */
+ 	if (hs_ep->send_zlp) {
+-		dwc2_hsotg_program_zlp(hsotg, hs_ep);
+ 		hs_ep->send_zlp = 0;
+-		/* transfer will be completed on next complete interrupt */
+-		return;
++		if (!using_desc_dma(hsotg)) {
++			dwc2_hsotg_program_zlp(hsotg, hs_ep);
++			/* transfer will be completed on next complete interrupt */
++			return;
++		}
+ 	}
+ 
+ 	if (hs_ep->index == 0 && hsotg->ep0_state == DWC2_EP0_DATA_IN) {
+@@ -3900,9 +3902,27 @@ static void dwc2_hsotg_ep_stop_xfr(struct dwc2_hsotg *hsotg,
+ 					 __func__);
+ 		}
+ 	} else {
++		/* Mask GINTSTS_GOUTNAKEFF interrupt */
++		dwc2_hsotg_disable_gsint(hsotg, GINTSTS_GOUTNAKEFF);
++
+ 		if (!(dwc2_readl(hsotg, GINTSTS) & GINTSTS_GOUTNAKEFF))
+ 			dwc2_set_bit(hsotg, DCTL, DCTL_SGOUTNAK);
+ 
++		if (!using_dma(hsotg)) {
++			/* Wait for GINTSTS_RXFLVL interrupt */
++			if (dwc2_hsotg_wait_bit_set(hsotg, GINTSTS,
++						    GINTSTS_RXFLVL, 100)) {
++				dev_warn(hsotg->dev, "%s: timeout GINTSTS.RXFLVL\n",
++					 __func__);
++			} else {
++				/*
++				 * Pop GLOBAL OUT NAK status packet from RxFIFO
++				 * to assert GOUTNAKEFF interrupt
++				 */
++				dwc2_readl(hsotg, GRXSTSP);
++			}
++		}
++
+ 		/* Wait for global nak to take effect */
+ 		if (dwc2_hsotg_wait_bit_set(hsotg, GINTSTS,
+ 					    GINTSTS_GOUTNAKEFF, 100))
+@@ -4348,6 +4368,9 @@ static int dwc2_hsotg_ep_sethalt(struct usb_ep *ep, int value, bool now)
+ 		epctl = dwc2_readl(hs, epreg);
+ 
+ 		if (value) {
++			/* Unmask GOUTNAKEFF interrupt */
++			dwc2_hsotg_en_gsint(hs, GINTSTS_GOUTNAKEFF);
++
+ 			if (!(dwc2_readl(hs, GINTSTS) & GINTSTS_GOUTNAKEFF))
+ 				dwc2_set_bit(hs, DCTL, DCTL_SGOUTNAK);
+ 			// STALL bit will be set in GOUTNAKEFF interrupt handler
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 035d4911a3c32..2a7828971d056 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -3338,7 +3338,8 @@ int dwc2_port_suspend(struct dwc2_hsotg *hsotg, u16 windex)
+ 		 * If not hibernation nor partial power down are supported,
+ 		 * clock gating is used to save power.
+ 		 */
+-		dwc2_host_enter_clock_gating(hsotg);
++		if (!hsotg->params.no_clock_gating)
++			dwc2_host_enter_clock_gating(hsotg);
+ 		break;
+ 	}
+ 
+@@ -4402,7 +4403,8 @@ static int _dwc2_hcd_suspend(struct usb_hcd *hcd)
+ 		 * If not hibernation nor partial power down are supported,
+ 		 * clock gating is used to save power.
+ 		 */
+-		dwc2_host_enter_clock_gating(hsotg);
++		if (!hsotg->params.no_clock_gating)
++			dwc2_host_enter_clock_gating(hsotg);
+ 
+ 		/* After entering suspend, hardware is not accessible */
+ 		clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
+diff --git a/drivers/usb/dwc2/params.c b/drivers/usb/dwc2/params.c
+index 7a6089fa81e1d..1b67d7baccda8 100644
+--- a/drivers/usb/dwc2/params.c
++++ b/drivers/usb/dwc2/params.c
+@@ -76,6 +76,7 @@ static void dwc2_set_s3c6400_params(struct dwc2_hsotg *hsotg)
+ 	struct dwc2_core_params *p = &hsotg->params;
+ 
+ 	p->power_down = DWC2_POWER_DOWN_PARAM_NONE;
++	p->no_clock_gating = true;
+ 	p->phy_utmi_width = 8;
+ }
+ 
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index 2319c9737c2bd..f3f112b08c9b1 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -3861,6 +3861,7 @@ static int tegra_xudc_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ free_eps:
++	pm_runtime_disable(&pdev->dev);
+ 	tegra_xudc_free_eps(xudc);
+ free_event_ring:
+ 	tegra_xudc_free_event_ring(xudc);
+diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
+index 94b5e64ae9a25..f214aa6285920 100644
+--- a/drivers/usb/host/ehci-hcd.c
++++ b/drivers/usb/host/ehci-hcd.c
+@@ -703,24 +703,28 @@ EXPORT_SYMBOL_GPL(ehci_setup);
+ static irqreturn_t ehci_irq (struct usb_hcd *hcd)
+ {
+ 	struct ehci_hcd		*ehci = hcd_to_ehci (hcd);
+-	u32			status, masked_status, pcd_status = 0, cmd;
++	u32			status, current_status, masked_status, pcd_status = 0;
++	u32			cmd;
+ 	int			bh;
+ 
+ 	spin_lock(&ehci->lock);
+ 
+-	status = ehci_readl(ehci, &ehci->regs->status);
++	status = 0;
++	current_status = ehci_readl(ehci, &ehci->regs->status);
++restart:
+ 
+ 	/* e.g. cardbus physical eject */
+-	if (status == ~(u32) 0) {
++	if (current_status == ~(u32) 0) {
+ 		ehci_dbg (ehci, "device removed\n");
+ 		goto dead;
+ 	}
++	status |= current_status;
+ 
+ 	/*
+ 	 * We don't use STS_FLR, but some controllers don't like it to
+ 	 * remain on, so mask it out along with the other status bits.
+ 	 */
+-	masked_status = status & (INTR_MASK | STS_FLR);
++	masked_status = current_status & (INTR_MASK | STS_FLR);
+ 
+ 	/* Shared IRQ? */
+ 	if (!masked_status || unlikely(ehci->rh_state == EHCI_RH_HALTED)) {
+@@ -730,6 +734,12 @@ static irqreturn_t ehci_irq (struct usb_hcd *hcd)
+ 
+ 	/* clear (just) interrupts */
+ 	ehci_writel(ehci, masked_status, &ehci->regs->status);
++
++	/* For edge interrupts, don't race with an interrupt bit being raised */
++	current_status = ehci_readl(ehci, &ehci->regs->status);
++	if (current_status & INTR_MASK)
++		goto restart;
++
+ 	cmd = ehci_readl(ehci, &ehci->regs->command);
+ 	bh = 0;
+ 
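The restart loop above guards against edge-triggered (e.g. MSI) delivery: a status bit raised between the read and the write-one-to-clear would otherwise never retrigger the line and be lost. A minimal model of the accumulate/clear/re-read pattern, with the register faked and the mask value invented:

#include <stdint.h>
#include <stdio.h>

#define INTR_MASK	0x3f		/* stand-in, not the EHCI value */

static uint32_t hw_status = 0x21;	/* fake status register */

static uint32_t read_status(void) { return hw_status; }
static void clear_status(uint32_t bits) { hw_status &= ~bits; }

int main(void)
{
	uint32_t status = 0, cur = read_status();

	do {
		status |= cur;			/* accumulate every bit seen */
		clear_status(cur & INTR_MASK);	/* write-one-to-clear model */
		cur = read_status();		/* catch bits raised meanwhile */
	} while (cur & INTR_MASK);

	printf("handled status 0x%x\n", status);	/* 0x21 */
	return 0;
}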
+diff --git a/drivers/usb/host/max3421-hcd.c b/drivers/usb/host/max3421-hcd.c
+index afd9174d83b14..abceca1c9c0f9 100644
+--- a/drivers/usb/host/max3421-hcd.c
++++ b/drivers/usb/host/max3421-hcd.c
+@@ -153,8 +153,6 @@ struct max3421_hcd {
+ 	 */
+ 	struct urb *curr_urb;
+ 	enum scheduling_pass sched_pass;
+-	struct usb_device *loaded_dev;	/* dev that's loaded into the chip */
+-	int loaded_epnum;		/* epnum whose toggles are loaded */
+ 	int urb_done;			/* > 0 -> no errors, < 0: errno */
+ 	size_t curr_len;
+ 	u8 hien;
+@@ -492,39 +490,17 @@ max3421_set_speed(struct usb_hcd *hcd, struct usb_device *dev)
+  * Caller must NOT hold HCD spinlock.
+  */
+ static void
+-max3421_set_address(struct usb_hcd *hcd, struct usb_device *dev, int epnum,
+-		    int force_toggles)
++max3421_set_address(struct usb_hcd *hcd, struct usb_device *dev, int epnum)
+ {
+-	struct max3421_hcd *max3421_hcd = hcd_to_max3421(hcd);
+-	int old_epnum, same_ep, rcvtog, sndtog;
+-	struct usb_device *old_dev;
++	int rcvtog, sndtog;
+ 	u8 hctl;
+ 
+-	old_dev = max3421_hcd->loaded_dev;
+-	old_epnum = max3421_hcd->loaded_epnum;
+-
+-	same_ep = (dev == old_dev && epnum == old_epnum);
+-	if (same_ep && !force_toggles)
+-		return;
+-
+-	if (old_dev && !same_ep) {
+-		/* save the old end-points toggles: */
+-		u8 hrsl = spi_rd8(hcd, MAX3421_REG_HRSL);
+-
+-		rcvtog = (hrsl >> MAX3421_HRSL_RCVTOGRD_BIT) & 1;
+-		sndtog = (hrsl >> MAX3421_HRSL_SNDTOGRD_BIT) & 1;
+-
+-		/* no locking: HCD (i.e., we) own toggles, don't we? */
+-		usb_settoggle(old_dev, old_epnum, 0, rcvtog);
+-		usb_settoggle(old_dev, old_epnum, 1, sndtog);
+-	}
+ 	/* setup new endpoint's toggle bits: */
+ 	rcvtog = usb_gettoggle(dev, epnum, 0);
+ 	sndtog = usb_gettoggle(dev, epnum, 1);
+ 	hctl = (BIT(rcvtog + MAX3421_HCTL_RCVTOG0_BIT) |
+ 		BIT(sndtog + MAX3421_HCTL_SNDTOG0_BIT));
+ 
+-	max3421_hcd->loaded_epnum = epnum;
+ 	spi_wr8(hcd, MAX3421_REG_HCTL, hctl);
+ 
+ 	/*
+@@ -532,7 +508,6 @@ max3421_set_address(struct usb_hcd *hcd, struct usb_device *dev, int epnum,
+ 	 * address-assignment so it's best to just always load the
+ 	 * address whenever the end-point changed/was forced.
+ 	 */
+-	max3421_hcd->loaded_dev = dev;
+ 	spi_wr8(hcd, MAX3421_REG_PERADDR, dev->devnum);
+ }
+ 
+@@ -667,7 +642,7 @@ max3421_select_and_start_urb(struct usb_hcd *hcd)
+ 	struct max3421_hcd *max3421_hcd = hcd_to_max3421(hcd);
+ 	struct urb *urb, *curr_urb = NULL;
+ 	struct max3421_ep *max3421_ep;
+-	int epnum, force_toggles = 0;
++	int epnum;
+ 	struct usb_host_endpoint *ep;
+ 	struct list_head *pos;
+ 	unsigned long flags;
+@@ -777,7 +752,6 @@ done:
+ 			usb_settoggle(urb->dev, epnum, 0, 1);
+ 			usb_settoggle(urb->dev, epnum, 1, 1);
+ 			max3421_ep->pkt_state = PKT_STATE_SETUP;
+-			force_toggles = 1;
+ 		} else
+ 			max3421_ep->pkt_state = PKT_STATE_TRANSFER;
+ 	}
+@@ -785,7 +759,7 @@ done:
+ 	spin_unlock_irqrestore(&max3421_hcd->lock, flags);
+ 
+ 	max3421_ep->last_active = max3421_hcd->frame_number;
+-	max3421_set_address(hcd, urb->dev, epnum, force_toggles);
++	max3421_set_address(hcd, urb->dev, epnum);
+ 	max3421_set_speed(hcd, urb->dev);
+ 	max3421_next_transfer(hcd, 0);
+ 	return 1;
+@@ -1380,6 +1354,16 @@ max3421_urb_done(struct usb_hcd *hcd)
+ 		status = 0;
+ 	urb = max3421_hcd->curr_urb;
+ 	if (urb) {
++		/* save the old end-points toggles: */
++		u8 hrsl = spi_rd8(hcd, MAX3421_REG_HRSL);
++		int rcvtog = (hrsl >> MAX3421_HRSL_RCVTOGRD_BIT) & 1;
++		int sndtog = (hrsl >> MAX3421_HRSL_SNDTOGRD_BIT) & 1;
++		int epnum = usb_endpoint_num(&urb->ep->desc);
++
++		/* no locking: HCD (i.e., we) own toggles, don't we? */
++		usb_settoggle(urb->dev, epnum, 0, rcvtog);
++		usb_settoggle(urb->dev, epnum, 1, sndtog);
++
+ 		max3421_hcd->curr_urb = NULL;
+ 		spin_lock_irqsave(&max3421_hcd->lock, flags);
+ 		usb_hcd_unlink_urb_from_ep(hcd, urb);
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index e9b18fc176172..151e93c4bd574 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -1638,11 +1638,12 @@ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf)
+ 	 * Inform the usbcore about resume-in-progress by returning
+ 	 * a non-zero value even if there are no status changes.
+ 	 */
++	spin_lock_irqsave(&xhci->lock, flags);
++
+ 	status = bus_state->resuming_ports;
+ 
+ 	mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC | PORT_CEC;
+ 
+-	spin_lock_irqsave(&xhci->lock, flags);
+ 	/* For each port, did anything change?  If so, set that bit in buf. */
+ 	for (i = 0; i < max_ports; i++) {
+ 		temp = readl(ports[i]->addr);
+diff --git a/drivers/usb/host/xhci-pci-renesas.c b/drivers/usb/host/xhci-pci-renesas.c
+index 431213cdf9e0e..f97ac9f52bf4d 100644
+--- a/drivers/usb/host/xhci-pci-renesas.c
++++ b/drivers/usb/host/xhci-pci-renesas.c
+@@ -207,8 +207,7 @@ static int renesas_check_rom_state(struct pci_dev *pdev)
+ 			return 0;
+ 
+ 		case RENESAS_ROM_STATUS_NO_RESULT: /* No result yet */
+-			dev_dbg(&pdev->dev, "Unknown ROM status ...\n");
+-			break;
++			return 0;
+ 
+ 		case RENESAS_ROM_STATUS_ERROR: /* Error State */
+ 		default: /* All other states are marked as "Reserved states" */
+@@ -225,12 +224,13 @@ static int renesas_fw_check_running(struct pci_dev *pdev)
+ 	u8 fw_state;
+ 	int err;
+ 
+-	/*
+-	 * Only if device has ROM and loaded FW we can skip loading and
+-	 * return success. Otherwise (even unknown state), attempt to load FW.
+-	 */
+-	if (renesas_check_rom(pdev) && !renesas_check_rom_state(pdev))
+-		return 0;
++	/* Check if the device has a ROM with firmware loaded; if so, skip everything */
++	err = renesas_check_rom(pdev);
++	if (err) { /* we have rom */
++		err = renesas_check_rom_state(pdev);
++		if (!err)
++			return err;
++	}
+ 
+ 	/*
+ 	 * Test if the device is actually needing the firmware. As most
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 18c2bbddf080b..1c9a7957c45c5 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -636,7 +636,14 @@ static const struct pci_device_id pci_ids[] = {
+ 	{ /* end: all zeroes */ }
+ };
+ MODULE_DEVICE_TABLE(pci, pci_ids);
++
++/*
++ * Without CONFIG_USB_XHCI_PCI_RENESAS renesas_xhci_check_request_fw() won't
++ * load firmware, so don't encumber the xhci-pci driver with it.
++ */
++#if IS_ENABLED(CONFIG_USB_XHCI_PCI_RENESAS)
+ MODULE_FIRMWARE("renesas_usb_fw.mem");
++#endif
+ 
+ /* pci driver glue; this is a "new style" PCI driver module */
+ static struct pci_driver xhci_pci_driver = {
+diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c
+index b5e7991dc7d9e..a3c2b01ccf7b5 100644
+--- a/drivers/usb/renesas_usbhs/fifo.c
++++ b/drivers/usb/renesas_usbhs/fifo.c
+@@ -101,6 +101,8 @@ static struct dma_chan *usbhsf_dma_chan_get(struct usbhs_fifo *fifo,
+ #define usbhsf_dma_map(p)	__usbhsf_dma_map_ctrl(p, 1)
+ #define usbhsf_dma_unmap(p)	__usbhsf_dma_map_ctrl(p, 0)
+ static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map);
++static void usbhsf_tx_irq_ctrl(struct usbhs_pipe *pipe, int enable);
++static void usbhsf_rx_irq_ctrl(struct usbhs_pipe *pipe, int enable);
+ struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt)
+ {
+ 	struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
+@@ -123,6 +125,11 @@ struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt)
+ 		if (chan) {
+ 			dmaengine_terminate_all(chan);
+ 			usbhsf_dma_unmap(pkt);
++		} else {
++			if (usbhs_pipe_is_dir_in(pipe))
++				usbhsf_rx_irq_ctrl(pipe, 0);
++			else
++				usbhsf_tx_irq_ctrl(pipe, 0);
+ 		}
+ 
+ 		usbhs_pipe_clear_without_sequence(pipe, 0, 0);
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index fcb812bc832cc..ea2e2d925a960 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -155,6 +155,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x89A4) }, /* CESINEL FTBC Flexible Thyristor Bridge Controller */
+ 	{ USB_DEVICE(0x10C4, 0x89FB) }, /* Qivicon ZigBee USB Radio Stick */
+ 	{ USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */
++	{ USB_DEVICE(0x10C4, 0x8A5B) }, /* CEL EM3588 ZigBee USB Stick */
+ 	{ USB_DEVICE(0x10C4, 0x8A5E) }, /* CEL EM3588 ZigBee USB Stick Long Range */
+ 	{ USB_DEVICE(0x10C4, 0x8B34) }, /* Qivicon ZigBee USB Radio Stick */
+ 	{ USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */
+@@ -202,8 +203,8 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x1901, 0x0194) },	/* GE Healthcare Remote Alarm Box */
+ 	{ USB_DEVICE(0x1901, 0x0195) },	/* GE B850/B650/B450 CP2104 DP UART interface */
+ 	{ USB_DEVICE(0x1901, 0x0196) },	/* GE B850 CP2105 DP UART interface */
+-	{ USB_DEVICE(0x1901, 0x0197) }, /* GE CS1000 Display serial interface */
+-	{ USB_DEVICE(0x1901, 0x0198) }, /* GE CS1000 M.2 Key E serial interface */
++	{ USB_DEVICE(0x1901, 0x0197) }, /* GE CS1000 M.2 Key E serial interface */
++	{ USB_DEVICE(0x1901, 0x0198) }, /* GE CS1000 Display serial interface */
+ 	{ USB_DEVICE(0x199B, 0xBA30) }, /* LORD WSDA-200-USB */
+ 	{ USB_DEVICE(0x19CF, 0x3000) }, /* Parrot NMEA GPS Flight Recorder */
+ 	{ USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 7608584ef4fe7..0fbe253dc570b 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -238,6 +238,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_UC15			0x9090
+ /* These u-blox products use Qualcomm's vendor ID */
+ #define UBLOX_PRODUCT_R410M			0x90b2
++#define UBLOX_PRODUCT_R6XX			0x90fa
+ /* These Yuga products use Qualcomm's vendor ID */
+ #define YUGA_PRODUCT_CLM920_NC5			0x9625
+ 
+@@ -1101,6 +1102,8 @@ static const struct usb_device_id option_ids[] = {
+ 	/* u-blox products using Qualcomm vendor ID */
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R410M),
+ 	  .driver_info = RSVD(1) | RSVD(3) },
++	{ USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R6XX),
++	  .driver_info = RSVD(3) },
+ 	/* Quectel products using Quectel vendor ID */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21, 0xff, 0xff, 0xff),
+ 	  .driver_info = NUMEP2 },
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index f9677a5ec31b2..c35a6db993f1b 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -45,6 +45,13 @@ UNUSUAL_DEV(0x059f, 0x105f, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_NO_REPORT_OPCODES | US_FL_NO_SAME),
+ 
++/* Reported-by: Julian Sikorski <belegdol@gmail.com> */
++UNUSUAL_DEV(0x059f, 0x1061, 0x0000, 0x9999,
++		"LaCie",
++		"Rugged USB3-FW",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_IGNORE_UAS),
++
+ /*
+  * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
+  * commands in UAS mode.  Observed with the 1.28 firmware; are there others?
+diff --git a/drivers/usb/typec/stusb160x.c b/drivers/usb/typec/stusb160x.c
+index 6eaeba9b096e1..e7745d1c2a5c4 100644
+--- a/drivers/usb/typec/stusb160x.c
++++ b/drivers/usb/typec/stusb160x.c
+@@ -685,6 +685,15 @@ static int stusb160x_probe(struct i2c_client *client)
+ 	if (!fwnode)
+ 		return -ENODEV;
+ 
++	/*
++	 * This fwnode has a "compatible" property, but is never populated as a
++	 * struct device. Instead we simply parse it to read the properties.
++	 * This breaks fw_devlink=on. To maintain backward compatibility
++	 * with existing DT files, we work around this by deleting any
++	 * fwnode_links to/from this fwnode.
++	 */
++	fw_devlink_purge_absent_suppliers(fwnode);
++
+ 	/*
+ 	 * When both VDD and VSYS power supplies are present, the low power
+ 	 * supply VSYS is selected when VSYS voltage is above 3.1 V.
+@@ -739,10 +748,6 @@ static int stusb160x_probe(struct i2c_client *client)
+ 	typec_set_pwr_opmode(chip->port, chip->pwr_opmode);
+ 
+ 	if (client->irq) {
+-		ret = stusb160x_irq_init(chip, client->irq);
+-		if (ret)
+-			goto port_unregister;
+-
+ 		chip->role_sw = fwnode_usb_role_switch_get(fwnode);
+ 		if (IS_ERR(chip->role_sw)) {
+ 			ret = PTR_ERR(chip->role_sw);
+@@ -752,6 +757,10 @@ static int stusb160x_probe(struct i2c_client *client)
+ 					ret);
+ 			goto port_unregister;
+ 		}
++
++		ret = stusb160x_irq_init(chip, client->irq);
++		if (ret)
++			goto role_sw_put;
+ 	} else {
+ 		/*
+ 		 * If Source or Dual power role, need to enable VDD supply
+@@ -775,6 +784,9 @@ static int stusb160x_probe(struct i2c_client *client)
+ 
+ 	return 0;
+ 
++role_sw_put:
++	if (chip->role_sw)
++		usb_role_switch_put(chip->role_sw);
+ port_unregister:
+ 	typec_unregister_port(chip->port);
+ all_reg_disable:
+diff --git a/drivers/usb/typec/tipd/core.c b/drivers/usb/typec/tipd/core.c
+index 938219bc1b4be..21b3ae25c76d2 100644
+--- a/drivers/usb/typec/tipd/core.c
++++ b/drivers/usb/typec/tipd/core.c
+@@ -629,6 +629,15 @@ static int tps6598x_probe(struct i2c_client *client)
+ 	if (!fwnode)
+ 		return -ENODEV;
+ 
++	/*
++	 * This fwnode has a "compatible" property, but is never populated as a
++	 * struct device. Instead we simply parse it to read the properties.
++	 * This breaks fw_devlink=on. To maintain backward compatibility
++	 * with existing DT files, we work around this by deleting any
++	 * fwnode_links to/from this fwnode.
++	 */
++	fw_devlink_purge_absent_suppliers(fwnode);
++
+ 	tps->role_sw = fwnode_usb_role_switch_get(fwnode);
+ 	if (IS_ERR(tps->role_sw)) {
+ 		ret = PTR_ERR(tps->role_sw);
+diff --git a/fs/afs/cmservice.c b/fs/afs/cmservice.c
+index d3c6bb22c5f48..a3f5de28be798 100644
+--- a/fs/afs/cmservice.c
++++ b/fs/afs/cmservice.c
+@@ -29,16 +29,11 @@ static void SRXAFSCB_TellMeAboutYourself(struct work_struct *);
+ 
+ static int afs_deliver_yfs_cb_callback(struct afs_call *);
+ 
+-#define CM_NAME(name) \
+-	char afs_SRXCB##name##_name[] __tracepoint_string =	\
+-		"CB." #name
+-
+ /*
+  * CB.CallBack operation type
+  */
+-static CM_NAME(CallBack);
+ static const struct afs_call_type afs_SRXCBCallBack = {
+-	.name		= afs_SRXCBCallBack_name,
++	.name		= "CB.CallBack",
+ 	.deliver	= afs_deliver_cb_callback,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_CallBack,
+@@ -47,9 +42,8 @@ static const struct afs_call_type afs_SRXCBCallBack = {
+ /*
+  * CB.InitCallBackState operation type
+  */
+-static CM_NAME(InitCallBackState);
+ static const struct afs_call_type afs_SRXCBInitCallBackState = {
+-	.name		= afs_SRXCBInitCallBackState_name,
++	.name		= "CB.InitCallBackState",
+ 	.deliver	= afs_deliver_cb_init_call_back_state,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_InitCallBackState,
+@@ -58,9 +52,8 @@ static const struct afs_call_type afs_SRXCBInitCallBackState = {
+ /*
+  * CB.InitCallBackState3 operation type
+  */
+-static CM_NAME(InitCallBackState3);
+ static const struct afs_call_type afs_SRXCBInitCallBackState3 = {
+-	.name		= afs_SRXCBInitCallBackState3_name,
++	.name		= "CB.InitCallBackState3",
+ 	.deliver	= afs_deliver_cb_init_call_back_state3,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_InitCallBackState,
+@@ -69,9 +62,8 @@ static const struct afs_call_type afs_SRXCBInitCallBackState3 = {
+ /*
+  * CB.Probe operation type
+  */
+-static CM_NAME(Probe);
+ static const struct afs_call_type afs_SRXCBProbe = {
+-	.name		= afs_SRXCBProbe_name,
++	.name		= "CB.Probe",
+ 	.deliver	= afs_deliver_cb_probe,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_Probe,
+@@ -80,9 +72,8 @@ static const struct afs_call_type afs_SRXCBProbe = {
+ /*
+  * CB.ProbeUuid operation type
+  */
+-static CM_NAME(ProbeUuid);
+ static const struct afs_call_type afs_SRXCBProbeUuid = {
+-	.name		= afs_SRXCBProbeUuid_name,
++	.name		= "CB.ProbeUuid",
+ 	.deliver	= afs_deliver_cb_probe_uuid,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_ProbeUuid,
+@@ -91,9 +82,8 @@ static const struct afs_call_type afs_SRXCBProbeUuid = {
+ /*
+  * CB.TellMeAboutYourself operation type
+  */
+-static CM_NAME(TellMeAboutYourself);
+ static const struct afs_call_type afs_SRXCBTellMeAboutYourself = {
+-	.name		= afs_SRXCBTellMeAboutYourself_name,
++	.name		= "CB.TellMeAboutYourself",
+ 	.deliver	= afs_deliver_cb_tell_me_about_yourself,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_TellMeAboutYourself,
+@@ -102,9 +92,8 @@ static const struct afs_call_type afs_SRXCBTellMeAboutYourself = {
+ /*
+  * YFS CB.CallBack operation type
+  */
+-static CM_NAME(YFS_CallBack);
+ static const struct afs_call_type afs_SRXYFSCB_CallBack = {
+-	.name		= afs_SRXCBYFS_CallBack_name,
++	.name		= "YFSCB.CallBack",
+ 	.deliver	= afs_deliver_yfs_cb_callback,
+ 	.destructor	= afs_cm_destructor,
+ 	.work		= SRXAFSCB_CallBack,
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index 3104b62c20826..c0534697268ef 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -771,14 +771,20 @@ int afs_writepages(struct address_space *mapping,
+ 	if (wbc->range_cyclic) {
+ 		start = mapping->writeback_index * PAGE_SIZE;
+ 		ret = afs_writepages_region(mapping, wbc, start, LLONG_MAX, &next);
+-		if (start > 0 && wbc->nr_to_write > 0 && ret == 0)
+-			ret = afs_writepages_region(mapping, wbc, 0, start,
+-						    &next);
+-		mapping->writeback_index = next / PAGE_SIZE;
++		if (ret == 0) {
++			mapping->writeback_index = next / PAGE_SIZE;
++			if (start > 0 && wbc->nr_to_write > 0) {
++				ret = afs_writepages_region(mapping, wbc, 0,
++							    start, &next);
++				if (ret == 0)
++					mapping->writeback_index =
++						next / PAGE_SIZE;
++			}
++		}
+ 	} else if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) {
+ 		ret = afs_writepages_region(mapping, wbc, 0, LLONG_MAX, &next);
+-		if (wbc->nr_to_write > 0)
+-			mapping->writeback_index = next;
++		if (wbc->nr_to_write > 0 && ret == 0)
++			mapping->writeback_index = next / PAGE_SIZE;
+ 	} else {
+ 		ret = afs_writepages_region(mapping, wbc,
+ 					    wbc->range_start, wbc->range_end, &next);
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index 117d423fdb930..e656134cadaab 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -1488,15 +1488,15 @@ static int btrfs_find_all_roots_safe(struct btrfs_trans_handle *trans,
+ int btrfs_find_all_roots(struct btrfs_trans_handle *trans,
+ 			 struct btrfs_fs_info *fs_info, u64 bytenr,
+ 			 u64 time_seq, struct ulist **roots,
+-			 bool ignore_offset)
++			 bool ignore_offset, bool skip_commit_root_sem)
+ {
+ 	int ret;
+ 
+-	if (!trans)
++	if (!trans && !skip_commit_root_sem)
+ 		down_read(&fs_info->commit_root_sem);
+ 	ret = btrfs_find_all_roots_safe(trans, fs_info, bytenr,
+ 					time_seq, roots, ignore_offset);
+-	if (!trans)
++	if (!trans && !skip_commit_root_sem)
+ 		up_read(&fs_info->commit_root_sem);
+ 	return ret;
+ }
+diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
+index 17abde7f794ce..ff5f07f9940bd 100644
+--- a/fs/btrfs/backref.h
++++ b/fs/btrfs/backref.h
+@@ -47,7 +47,8 @@ int btrfs_find_all_leafs(struct btrfs_trans_handle *trans,
+ 			 const u64 *extent_item_pos, bool ignore_offset);
+ int btrfs_find_all_roots(struct btrfs_trans_handle *trans,
+ 			 struct btrfs_fs_info *fs_info, u64 bytenr,
+-			 u64 time_seq, struct ulist **roots, bool ignore_offset);
++			 u64 time_seq, struct ulist **roots, bool ignore_offset,
++			 bool skip_commit_root_sem);
+ char *btrfs_ref_to_path(struct btrfs_root *fs_root, struct btrfs_path *path,
+ 			u32 name_len, unsigned long name_off,
+ 			struct extent_buffer *eb_in, u64 parent,
+diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
+index c92d9d4f5f46c..6689cac64c157 100644
+--- a/fs/btrfs/delayed-ref.c
++++ b/fs/btrfs/delayed-ref.c
+@@ -1000,7 +1000,7 @@ int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans,
+ 		kmem_cache_free(btrfs_delayed_tree_ref_cachep, ref);
+ 
+ 	if (qrecord_inserted)
+-		btrfs_qgroup_trace_extent_post(fs_info, record);
++		btrfs_qgroup_trace_extent_post(trans, record);
+ 
+ 	return 0;
+ }
+@@ -1095,7 +1095,7 @@ int btrfs_add_delayed_data_ref(struct btrfs_trans_handle *trans,
+ 
+ 
+ 	if (qrecord_inserted)
+-		return btrfs_qgroup_trace_extent_post(fs_info, record);
++		return btrfs_qgroup_trace_extent_post(trans, record);
+ 	return 0;
+ }
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index d2f39a122d89d..7e79467c08ff6 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -6034,6 +6034,9 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+ 	devices = &fs_info->fs_devices->devices;
+ 	list_for_each_entry(device, devices, dev_list) {
++		if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state))
++			continue;
++
+ 		ret = btrfs_trim_free_extents(device, &group_trimmed);
+ 		if (ret) {
+ 			dev_failed++;
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 3ded812f522cc..dde8b8334d298 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1704,17 +1704,39 @@ int btrfs_qgroup_trace_extent_nolock(struct btrfs_fs_info *fs_info,
+ 	return 0;
+ }
+ 
+-int btrfs_qgroup_trace_extent_post(struct btrfs_fs_info *fs_info,
++int btrfs_qgroup_trace_extent_post(struct btrfs_trans_handle *trans,
+ 				   struct btrfs_qgroup_extent_record *qrecord)
+ {
+ 	struct ulist *old_root;
+ 	u64 bytenr = qrecord->bytenr;
+ 	int ret;
+ 
+-	ret = btrfs_find_all_roots(NULL, fs_info, bytenr, 0, &old_root, false);
++	/*
++	 * We are always called in a context where we are already holding a
++	 * transaction handle. Often we are called when adding a data delayed
++	 * reference from btrfs_truncate_inode_items() (truncating or unlinking),
++	 * in which case we will be holding a write lock on an extent buffer from a
++	 * subvolume tree. In this case we can't allow btrfs_find_all_roots() to
++	 * acquire fs_info->commit_root_sem, because that is a higher level lock
++	 * that must be acquired before locking any extent buffers.
++	 *
++	 * So we want btrfs_find_all_roots() to not acquire the commit_root_sem
++	 * but we can't pass it a non-NULL transaction handle, because otherwise
++	 * it would not use commit roots and would lock extent buffers, causing
++	 * a deadlock if it ends up trying to read lock the same extent buffer
++	 * that was previously write locked at btrfs_truncate_inode_items().
++	 *
++	 * So pass a NULL transaction handle to btrfs_find_all_roots() and
++	 * explicitly tell it to not acquire the commit_root_sem - if we are
++	 * holding a transaction handle we don't need its protection.
++	 */
++	ASSERT(trans != NULL);
++
++	ret = btrfs_find_all_roots(NULL, trans->fs_info, bytenr, 0, &old_root,
++				   false, true);
+ 	if (ret < 0) {
+-		fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
+-		btrfs_warn(fs_info,
++		trans->fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
++		btrfs_warn(trans->fs_info,
+ "error accounting new delayed refs extent (err code: %d), quota inconsistent",
+ 			ret);
+ 		return 0;
+@@ -1758,7 +1780,7 @@ int btrfs_qgroup_trace_extent(struct btrfs_trans_handle *trans, u64 bytenr,
+ 		kfree(record);
+ 		return 0;
+ 	}
+-	return btrfs_qgroup_trace_extent_post(fs_info, record);
++	return btrfs_qgroup_trace_extent_post(trans, record);
+ }
+ 
+ int btrfs_qgroup_trace_leaf_items(struct btrfs_trans_handle *trans,
+@@ -2629,7 +2651,7 @@ int btrfs_qgroup_account_extents(struct btrfs_trans_handle *trans)
+ 				/* Search commit root to find old_roots */
+ 				ret = btrfs_find_all_roots(NULL, fs_info,
+ 						record->bytenr, 0,
+-						&record->old_roots, false);
++						&record->old_roots, false, false);
+ 				if (ret < 0)
+ 					goto cleanup;
+ 			}
+@@ -2645,7 +2667,7 @@ int btrfs_qgroup_account_extents(struct btrfs_trans_handle *trans)
+ 			 * current root. It's safe inside commit_transaction().
+ 			 */
+ 			ret = btrfs_find_all_roots(trans, fs_info,
+-				record->bytenr, BTRFS_SEQ_LAST, &new_roots, false);
++			   record->bytenr, BTRFS_SEQ_LAST, &new_roots, false, false);
+ 			if (ret < 0)
+ 				goto cleanup;
+ 			if (qgroup_to_skip) {
+@@ -3179,7 +3201,7 @@ static int qgroup_rescan_leaf(struct btrfs_trans_handle *trans,
+ 			num_bytes = found.offset;
+ 
+ 		ret = btrfs_find_all_roots(NULL, fs_info, found.objectid, 0,
+-					   &roots, false);
++					   &roots, false, false);
+ 		if (ret < 0)
+ 			goto out;
+ 		/* For rescan, just pass old_roots as NULL */
+diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
+index 7283e4f549af7..880e9df0dac1d 100644
+--- a/fs/btrfs/qgroup.h
++++ b/fs/btrfs/qgroup.h
+@@ -298,7 +298,7 @@ int btrfs_qgroup_trace_extent_nolock(
+  * using current root, then we can move all expensive backref walk out of
+  * transaction committing, but not now as qgroup accounting will be wrong again.
+  */
+-int btrfs_qgroup_trace_extent_post(struct btrfs_fs_info *fs_info,
++int btrfs_qgroup_trace_extent_post(struct btrfs_trans_handle *trans,
+ 				   struct btrfs_qgroup_extent_record *qrecord);
+ 
+ /*
+diff --git a/fs/btrfs/tests/qgroup-tests.c b/fs/btrfs/tests/qgroup-tests.c
+index f3137285a9e2d..98b5aaba46f16 100644
+--- a/fs/btrfs/tests/qgroup-tests.c
++++ b/fs/btrfs/tests/qgroup-tests.c
+@@ -224,7 +224,7 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
+ 	 * quota.
+ 	 */
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
+-			false);
++			false, false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+@@ -237,7 +237,7 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
+ 		return ret;
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
+-			false);
++			false, false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+ 		ulist_free(new_roots);
+@@ -261,7 +261,7 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
+ 	new_roots = NULL;
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
+-			false);
++			false, false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+@@ -273,7 +273,7 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
+ 		return -EINVAL;
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
+-			false);
++			false, false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+ 		ulist_free(new_roots);
+@@ -325,7 +325,7 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 	}
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
+-			false);
++			false, false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+@@ -338,7 +338,7 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 		return ret;
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
+-			false);
++			false, false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+ 		ulist_free(new_roots);
+@@ -360,7 +360,7 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 	}
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
+-			false);
++			false, false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+@@ -373,7 +373,7 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 		return ret;
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
+-			false);
++			false, false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+ 		ulist_free(new_roots);
+@@ -401,7 +401,7 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 	}
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
+-			false);
++			false, false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+ 		test_err("couldn't find old roots: %d", ret);
+@@ -414,7 +414,7 @@ static int test_multiple_refs(struct btrfs_root *root,
+ 		return ret;
+ 
+ 	ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &new_roots,
+-			false);
++			false, false);
+ 	if (ret) {
+ 		ulist_free(old_roots);
+ 		ulist_free(new_roots);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index b5ed0a699ee5f..f08375cb871ed 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -5515,16 +5515,29 @@ log_extents:
+ 		spin_lock(&inode->lock);
+ 		inode->logged_trans = trans->transid;
+ 		/*
+-		 * Don't update last_log_commit if we logged that an inode exists
+-		 * after it was loaded to memory (full_sync bit set).
+-		 * This is to prevent data loss when we do a write to the inode,
+-		 * then the inode gets evicted after all delalloc was flushed,
+-		 * then we log it exists (due to a rename for example) and then
+-		 * fsync it. This last fsync would do nothing (not logging the
+-		 * extents previously written).
++		 * Don't update last_log_commit if we logged that an inode exists.
++		 * We do this for two reasons:
++		 *
++		 * 1) We might have had buffered writes to this inode that were
++		 *    flushed and had their ordered extents completed in this
++		 *    transaction, but we did not previously log the inode with
++		 *    LOG_INODE_ALL. Later the inode was evicted and after that
++		 *    it was loaded again and this LOG_INODE_EXISTS log operation
++		 *    happened. We must make sure that if an explicit fsync against
++		 *    the inode is performed later, it logs the new extents, an
++		 *    updated inode item, etc, and syncs the log. The same logic
++		 *    applies when the writes were direct IO rather than buffered.
++		 *
++		 * 2) When we log the inode with LOG_INODE_EXISTS, its inode item
++		 *    is logged with an i_size of 0 or whatever value was logged
++		 *    before. If later the i_size of the inode is increased by a
++		 *    truncate operation, the log is synced through an fsync of
++		 *    some other inode and then finally an explicit fsync against
++		 *    this inode is made, we must make sure this fsync logs the
++		 *    inode with the new i_size, the hole between old i_size and
++		 *    the new i_size, and syncs the log.
+ 		 */
+-		if (inode_only != LOG_INODE_EXISTS ||
+-		    !test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags))
++		if (inode_only != LOG_INODE_EXISTS)
+ 			inode->last_log_commit = inode->last_sub_trans;
+ 		spin_unlock(&inode->lock);
+ 	}
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index e5af591d3bd45..86f09b1110a2f 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -4468,7 +4468,7 @@ bool check_session_state(struct ceph_mds_session *s)
+ 		break;
+ 	case CEPH_MDS_SESSION_CLOSING:
+ 		/* Should never reach this when we're unmounting */
+-		WARN_ON_ONCE(true);
++		WARN_ON_ONCE(s->s_ttl);
+ 		fallthrough;
+ 	case CEPH_MDS_SESSION_NEW:
+ 	case CEPH_MDS_SESSION_RESTARTING:
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 903de7449aa33..64cad843ce723 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -3613,7 +3613,7 @@ static int smb3_simple_fallocate_write_range(unsigned int xid,
+ 					     char *buf)
+ {
+ 	struct cifs_io_parms io_parms = {0};
+-	int nbytes;
++	int rc, nbytes;
+ 	struct kvec iov[2];
+ 
+ 	io_parms.netfid = cfile->fid.netfid;
+@@ -3621,13 +3621,25 @@ static int smb3_simple_fallocate_write_range(unsigned int xid,
+ 	io_parms.tcon = tcon;
+ 	io_parms.persistent_fid = cfile->fid.persistent_fid;
+ 	io_parms.volatile_fid = cfile->fid.volatile_fid;
+-	io_parms.offset = off;
+-	io_parms.length = len;
+ 
+-	/* iov[0] is reserved for smb header */
+-	iov[1].iov_base = buf;
+-	iov[1].iov_len = io_parms.length;
+-	return SMB2_write(xid, &io_parms, &nbytes, iov, 1);
++	while (len) {
++		io_parms.offset = off;
++		io_parms.length = len;
++		if (io_parms.length > SMB2_MAX_BUFFER_SIZE)
++			io_parms.length = SMB2_MAX_BUFFER_SIZE;
++		/* iov[0] is reserved for smb header */
++		iov[1].iov_base = buf;
++		iov[1].iov_len = io_parms.length;
++		rc = SMB2_write(xid, &io_parms, &nbytes, iov, 1);
++		if (rc)
++			break;
++		if (nbytes > len)
++			return -EINVAL;
++		buf += nbytes;
++		off += nbytes;
++		len -= nbytes;
++	}
++	return rc;
+ }
+ 
+ static int smb3_simple_fallocate_range(unsigned int xid,
+@@ -3651,11 +3663,6 @@ static int smb3_simple_fallocate_range(unsigned int xid,
+ 			(char **)&out_data, &out_data_len);
+ 	if (rc)
+ 		goto out;
+-	/*
+-	 * It is already all allocated
+-	 */
+-	if (out_data_len == 0)
+-		goto out;
+ 
+ 	buf = kzalloc(1024 * 1024, GFP_KERNEL);
+ 	if (buf == NULL) {
+@@ -3778,6 +3785,24 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+ 		goto out;
+ 	}
+ 
++	if (keep_size == true) {
++		/*
++		 * We cannot preallocate pages beyond the end of the file
++		 * in SMB2.
++		 */
++		if (off >= i_size_read(inode)) {
++			rc = 0;
++			goto out;
++		}
++		/*
++		 * For fallocates that are partially beyond the end of file,
++		 * clamp len so we only fallocate up to the end of file.
++		 */
++		if (off + len > i_size_read(inode)) {
++			len = i_size_read(inode) - off;
++		}
++	}
++
+ 	if ((keep_size == true) || (i_size_read(inode) >= off + len)) {
+ 		/*
+ 		 * At this point, we are trying to fallocate an internal
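
The smb3_simple_fallocate_write_range() change above replaces one unbounded SMB2_write() call with a loop that caps each request at SMB2_MAX_BUFFER_SIZE and advances by the bytes actually written. A userspace sketch of the same loop shape (MAX_CHUNK and pwrite() are illustrative stand-ins for the SMB2 constant and call):

#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

#define MAX_CHUNK (64 * 1024)	/* illustrative cap, not the real value */

static int write_range(int fd, const char *buf, off_t off, size_t len)
{
	while (len) {
		size_t want = len > MAX_CHUNK ? MAX_CHUNK : len;
		ssize_t nbytes = pwrite(fd, buf, want, off);

		if (nbytes <= 0)
			return nbytes ? -errno : -EIO;	/* stop on error */
		buf += nbytes;		/* advance by what was written */
		off += nbytes;
		len -= nbytes;
	}
	return 0;
}
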
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index 30dee68458c7e..f7b80307b8f89 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -77,7 +77,7 @@ enum hugetlb_param {
+ static const struct fs_parameter_spec hugetlb_fs_parameters[] = {
+ 	fsparam_u32   ("gid",		Opt_gid),
+ 	fsparam_string("min_size",	Opt_min_size),
+-	fsparam_u32   ("mode",		Opt_mode),
++	fsparam_u32oct("mode",		Opt_mode),
+ 	fsparam_string("nr_inodes",	Opt_nr_inodes),
+ 	fsparam_string("pagesize",	Opt_pagesize),
+ 	fsparam_string("size",		Opt_size),
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index eeea6b8c8bee0..df4288776815e 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4805,6 +4805,7 @@ IO_NETOP_FN(recv);
+ struct io_poll_table {
+ 	struct poll_table_struct pt;
+ 	struct io_kiocb *req;
++	int nr_entries;
+ 	int error;
+ };
+ 
+@@ -5002,11 +5003,11 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+ 	struct io_kiocb *req = pt->req;
+ 
+ 	/*
+-	 * If poll->head is already set, it's because the file being polled
+-	 * uses multiple waitqueues for poll handling (eg one for read, one
+-	 * for write). Setup a separate io_poll_iocb if this happens.
++	 * The file being polled uses multiple waitqueues for poll handling
++	 * (e.g. one for read, one for write). Set up a separate io_poll_iocb
++	 * if this happens.
+ 	 */
+-	if (unlikely(poll->head)) {
++	if (unlikely(pt->nr_entries)) {
+ 		struct io_poll_iocb *poll_one = poll;
+ 
+ 		/* already have a 2nd entry, fail a third attempt */
+@@ -5034,7 +5035,7 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+ 		*poll_ptr = poll;
+ 	}
+ 
+-	pt->error = 0;
++	pt->nr_entries++;
+ 	poll->head = head;
+ 
+ 	if (poll->events & EPOLLEXCLUSIVE)
+@@ -5112,11 +5113,16 @@ static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
+ 
+ 	ipt->pt._key = mask;
+ 	ipt->req = req;
+-	ipt->error = -EINVAL;
++	ipt->error = 0;
++	ipt->nr_entries = 0;
+ 
+ 	mask = vfs_poll(req->file, &ipt->pt) & poll->events;
++	if (unlikely(!ipt->nr_entries) && !ipt->error)
++		ipt->error = -EINVAL;
+ 
+ 	spin_lock_irq(&ctx->completion_lock);
++	if (ipt->error)
++		io_poll_remove_double(req);
+ 	if (likely(poll->head)) {
+ 		spin_lock(&poll->head->lock);
+ 		if (unlikely(list_empty(&poll->wait.entry))) {
+@@ -6876,7 +6882,8 @@ static int io_sq_thread(void *data)
+ 		}
+ 
+ 		prepare_to_wait(&sqd->wait, &wait, TASK_INTERRUPTIBLE);
+-		if (!test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state)) {
++		if (!test_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state) &&
++		    !io_run_task_work()) {
+ 			list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
+ 				io_ring_set_wakeup_flag(ctx);
+ 
+@@ -7859,15 +7866,19 @@ static struct io_wq *io_init_wq_offload(struct io_ring_ctx *ctx,
+ 	struct io_wq_data data;
+ 	unsigned int concurrency;
+ 
++	mutex_lock(&ctx->uring_lock);
+ 	hash = ctx->hash_map;
+ 	if (!hash) {
+ 		hash = kzalloc(sizeof(*hash), GFP_KERNEL);
+-		if (!hash)
++		if (!hash) {
++			mutex_unlock(&ctx->uring_lock);
+ 			return ERR_PTR(-ENOMEM);
++		}
+ 		refcount_set(&hash->refs, 1);
+ 		init_waitqueue_head(&hash->wait);
+ 		ctx->hash_map = hash;
+ 	}
++	mutex_unlock(&ctx->uring_lock);
+ 
+ 	data.hash = hash;
+ 	data.task = task;
+@@ -7942,9 +7953,11 @@ static int io_sq_offload_create(struct io_ring_ctx *ctx,
+ 		f = fdget(p->wq_fd);
+ 		if (!f.file)
+ 			return -ENXIO;
+-		fdput(f);
+-		if (f.file->f_op != &io_uring_fops)
++		if (f.file->f_op != &io_uring_fops) {
++			fdput(f);
+ 			return -EINVAL;
++		}
++		fdput(f);
+ 	}
+ 	if (ctx->flags & IORING_SETUP_SQPOLL) {
+ 		struct task_struct *tsk;
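
The io_sq_offload_create() hunk above fixes a use-after-fdput(): the struct file was dereferenced after the reference had been dropped. A minimal sketch of the corrected ordering (my_fops is a placeholder, not a real io_uring symbol):

#include <linux/file.h>
#include <linux/fs.h>

static const struct file_operations my_fops;	/* placeholder */

static int check_is_mine(unsigned int ufd)
{
	struct fd f = fdget(ufd);

	if (!f.file)
		return -ENXIO;
	if (f.file->f_op != &my_fops) {	/* inspect while reference held */
		fdput(f);
		return -EINVAL;
	}
	fdput(f);			/* drop only after the last use */
	return 0;
}
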
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 9cbd915025ad7..a0a2fc1c9da26 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -854,7 +854,7 @@ static ssize_t mem_rw(struct file *file, char __user *buf,
+ 	flags = FOLL_FORCE | (write ? FOLL_WRITE : 0);
+ 
+ 	while (count > 0) {
+-		int this_len = min_t(int, count, PAGE_SIZE);
++		size_t this_len = min_t(size_t, count, PAGE_SIZE);
+ 
+ 		if (write && copy_from_user(page, buf, this_len)) {
+ 			copied = -EFAULT;
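
The mem_rw() change above swaps min_t(int, ...) for min_t(size_t, ...): count is a size_t, and casting a value above INT_MAX to int makes it negative, so the computed minimum could come out negative. A userspace demonstration with a simplified min_t (the kernel macro casts both operands the same way):

#include <stdio.h>
#include <stddef.h>

#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
	size_t count = 0x80000000UL;	/* > INT_MAX on LP64 */

	printf("%d\n",  min_t(int, count, 4096));	/* negative: count truncated */
	printf("%zu\n", min_t(size_t, count, 4096));	/* 4096                      */
	return 0;
}
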
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 14f92285d04f8..8fc8bbf9635b6 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -1236,23 +1236,21 @@ static __always_inline void wake_userfault(struct userfaultfd_ctx *ctx,
+ }
+ 
+ static __always_inline int validate_range(struct mm_struct *mm,
+-					  __u64 *start, __u64 len)
++					  __u64 start, __u64 len)
+ {
+ 	__u64 task_size = mm->task_size;
+ 
+-	*start = untagged_addr(*start);
+-
+-	if (*start & ~PAGE_MASK)
++	if (start & ~PAGE_MASK)
+ 		return -EINVAL;
+ 	if (len & ~PAGE_MASK)
+ 		return -EINVAL;
+ 	if (!len)
+ 		return -EINVAL;
+-	if (*start < mmap_min_addr)
++	if (start < mmap_min_addr)
+ 		return -EINVAL;
+-	if (*start >= task_size)
++	if (start >= task_size)
+ 		return -EINVAL;
+-	if (len > task_size - *start)
++	if (len > task_size - start)
+ 		return -EINVAL;
+ 	return 0;
+ }
+@@ -1313,7 +1311,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
+ 		vm_flags |= VM_UFFD_MINOR;
+ 	}
+ 
+-	ret = validate_range(mm, &uffdio_register.range.start,
++	ret = validate_range(mm, uffdio_register.range.start,
+ 			     uffdio_register.range.len);
+ 	if (ret)
+ 		goto out;
+@@ -1519,7 +1517,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
+ 	if (copy_from_user(&uffdio_unregister, buf, sizeof(uffdio_unregister)))
+ 		goto out;
+ 
+-	ret = validate_range(mm, &uffdio_unregister.start,
++	ret = validate_range(mm, uffdio_unregister.start,
+ 			     uffdio_unregister.len);
+ 	if (ret)
+ 		goto out;
+@@ -1668,7 +1666,7 @@ static int userfaultfd_wake(struct userfaultfd_ctx *ctx,
+ 	if (copy_from_user(&uffdio_wake, buf, sizeof(uffdio_wake)))
+ 		goto out;
+ 
+-	ret = validate_range(ctx->mm, &uffdio_wake.start, uffdio_wake.len);
++	ret = validate_range(ctx->mm, uffdio_wake.start, uffdio_wake.len);
+ 	if (ret)
+ 		goto out;
+ 
+@@ -1708,7 +1706,7 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
+ 			   sizeof(uffdio_copy)-sizeof(__s64)))
+ 		goto out;
+ 
+-	ret = validate_range(ctx->mm, &uffdio_copy.dst, uffdio_copy.len);
++	ret = validate_range(ctx->mm, uffdio_copy.dst, uffdio_copy.len);
+ 	if (ret)
+ 		goto out;
+ 	/*
+@@ -1765,7 +1763,7 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
+ 			   sizeof(uffdio_zeropage)-sizeof(__s64)))
+ 		goto out;
+ 
+-	ret = validate_range(ctx->mm, &uffdio_zeropage.range.start,
++	ret = validate_range(ctx->mm, uffdio_zeropage.range.start,
+ 			     uffdio_zeropage.range.len);
+ 	if (ret)
+ 		goto out;
+@@ -1815,7 +1813,7 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx,
+ 			   sizeof(struct uffdio_writeprotect)))
+ 		return -EFAULT;
+ 
+-	ret = validate_range(ctx->mm, &uffdio_wp.range.start,
++	ret = validate_range(ctx->mm, uffdio_wp.range.start,
+ 			     uffdio_wp.range.len);
+ 	if (ret)
+ 		return ret;
+@@ -1863,7 +1861,7 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg)
+ 			   sizeof(uffdio_continue) - (sizeof(__s64))))
+ 		goto out;
+ 
+-	ret = validate_range(ctx->mm, &uffdio_continue.range.start,
++	ret = validate_range(ctx->mm, uffdio_continue.range.start,
+ 			     uffdio_continue.range.len);
+ 	if (ret)
+ 		goto out;
+diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h
+index 3a82faac57673..394eb42d0ec14 100644
+--- a/include/acpi/acpi_bus.h
++++ b/include/acpi/acpi_bus.h
+@@ -698,11 +698,6 @@ acpi_dev_get_first_match_dev(const char *hid, const char *uid, s64 hrv);
+  * @hrv: Hardware Revision of the device, pass -1 to not check _HRV
+  *
+  * The caller is responsible for invoking acpi_dev_put() on the returned device.
+- *
+- * FIXME: Due to above requirement there is a window that may invalidate @adev
+- * and next iteration will use a dangling pointer, e.g. in the case of a
+- * hotplug event. That said, the caller should ensure that this will never
+- * happen.
+  */
+ #define for_each_acpi_dev_match(adev, hid, uid, hrv)			\
+ 	for (adev = acpi_dev_get_first_match_dev(hid, uid, hrv);	\
+@@ -716,7 +711,8 @@ static inline struct acpi_device *acpi_dev_get(struct acpi_device *adev)
+ 
+ static inline void acpi_dev_put(struct acpi_device *adev)
+ {
+-	put_device(&adev->dev);
++	if (adev)
++		put_device(&adev->dev);
+ }
+ #else	/* CONFIG_ACPI */
+ 
+diff --git a/include/drm/drm_ioctl.h b/include/drm/drm_ioctl.h
+index 10100a4bbe2ad..afb27cb6a7bd8 100644
+--- a/include/drm/drm_ioctl.h
++++ b/include/drm/drm_ioctl.h
+@@ -68,6 +68,7 @@ typedef int drm_ioctl_compat_t(struct file *filp, unsigned int cmd,
+ 			       unsigned long arg);
+ 
+ #define DRM_IOCTL_NR(n)                _IOC_NR(n)
++#define DRM_IOCTL_TYPE(n)              _IOC_TYPE(n)
+ #define DRM_MAJOR       226
+ 
+ /**
+diff --git a/include/linux/highmem.h b/include/linux/highmem.h
+index 832b49b50c7bf..fc80a40f50531 100644
+--- a/include/linux/highmem.h
++++ b/include/linux/highmem.h
+@@ -329,6 +329,7 @@ static inline void memcpy_to_page(struct page *page, size_t offset,
+ 
+ 	VM_BUG_ON(offset + len > PAGE_SIZE);
+ 	memcpy(to + offset, from, len);
++	flush_dcache_page(page);
+ 	kunmap_local(to);
+ }
+ 
+@@ -336,6 +337,7 @@ static inline void memzero_page(struct page *page, size_t offset, size_t len)
+ {
+ 	char *addr = kmap_atomic(page);
+ 	memset(addr + offset, 0, len);
++	flush_dcache_page(page);
+ 	kunmap_atomic(addr);
+ }
+ 
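
Both highmem helpers above gain a flush_dcache_page() after writing through a temporary kernel mapping; on architectures with aliasing data caches a userspace mapping of the same page would otherwise read stale data (the call is a no-op elsewhere). A sketch of the general pattern, not the patched helpers themselves:

#include <linux/highmem.h>
#include <linux/string.h>

static void fill_page(struct page *page, int c, size_t len)
{
	char *addr = kmap_local_page(page);	/* temporary mapping */

	memset(addr, c, len);
	flush_dcache_page(page);	/* make other mappings coherent */
	kunmap_local(addr);
}
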
+diff --git a/include/linux/marvell_phy.h b/include/linux/marvell_phy.h
+index acee44b9db269..0f06c2287b527 100644
+--- a/include/linux/marvell_phy.h
++++ b/include/linux/marvell_phy.h
+@@ -22,14 +22,10 @@
+ #define MARVELL_PHY_ID_88E1545		0x01410ea0
+ #define MARVELL_PHY_ID_88E1548P		0x01410ec0
+ #define MARVELL_PHY_ID_88E3016		0x01410e60
++#define MARVELL_PHY_ID_88X3310		0x002b09a0
+ #define MARVELL_PHY_ID_88E2110		0x002b09b0
+ #define MARVELL_PHY_ID_88X2222		0x01410f10
+ 
+-/* PHY IDs and mask for Alaska 10G PHYs */
+-#define MARVELL_PHY_ID_88X33X0_MASK	0xfffffff8
+-#define MARVELL_PHY_ID_88X3310		0x002b09a0
+-#define MARVELL_PHY_ID_88X3340		0x002b09a8
+-
+ /* Marvel 88E1111 in Finisar SFP module with modified PHY ID */
+ #define MARVELL_PHY_ID_88E1111_FINISAR	0x01ff0cc0
+ 
+diff --git a/include/linux/memblock.h b/include/linux/memblock.h
+index 5984fff3f1758..0df96a399d673 100644
+--- a/include/linux/memblock.h
++++ b/include/linux/memblock.h
+@@ -207,7 +207,7 @@ static inline void __next_physmem_range(u64 *idx, struct memblock_type *type,
+  */
+ #define for_each_mem_range(i, p_start, p_end) \
+ 	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,	\
+-			     MEMBLOCK_NONE, p_start, p_end, NULL)
++			     MEMBLOCK_HOTPLUG, p_start, p_end, NULL)
+ 
+ /**
+  * for_each_mem_range_rev - reverse iterate through memblock areas from
+@@ -218,7 +218,7 @@ static inline void __next_physmem_range(u64 *idx, struct memblock_type *type,
+  */
+ #define for_each_mem_range_rev(i, p_start, p_end)			\
+ 	__for_each_mem_range_rev(i, &memblock.memory, NULL, NUMA_NO_NODE, \
+-				 MEMBLOCK_NONE, p_start, p_end, NULL)
++				 MEMBLOCK_HOTPLUG, p_start, p_end, NULL)
+ 
+ /**
+  * for_each_reserved_mem_range - iterate over all reserved memblock areas
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index 019e998d944ad..a02b19843819c 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -201,6 +201,11 @@ struct bond_up_slave {
+  */
+ #define BOND_LINK_NOCHANGE -1
+ 
++struct bond_ipsec {
++	struct list_head list;
++	struct xfrm_state *xs;
++};
++
+ /*
+  * Here are the locking policies for the two bonding locks:
+  * Get rcu_read_lock when reading or RTNL when writing slave list.
+@@ -249,7 +254,9 @@ struct bonding {
+ #endif /* CONFIG_DEBUG_FS */
+ 	struct rtnl_link_stats64 bond_stats;
+ #ifdef CONFIG_XFRM_OFFLOAD
+-	struct xfrm_state *xs;
++	struct list_head ipsec_list;
++	/* protecting ipsec_list */
++	spinlock_t ipsec_lock;
+ #endif /* CONFIG_XFRM_OFFLOAD */
+ };
+ 
+diff --git a/include/net/mptcp.h b/include/net/mptcp.h
+index 83f23774b9084..f1d798ff29e97 100644
+--- a/include/net/mptcp.h
++++ b/include/net/mptcp.h
+@@ -101,7 +101,7 @@ bool mptcp_synack_options(const struct request_sock *req, unsigned int *size,
+ bool mptcp_established_options(struct sock *sk, struct sk_buff *skb,
+ 			       unsigned int *size, unsigned int remaining,
+ 			       struct mptcp_out_options *opts);
+-void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb);
++bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb);
+ 
+ void mptcp_write_options(__be32 *ptr, const struct tcp_sock *tp,
+ 			 struct mptcp_out_options *opts);
+@@ -223,9 +223,10 @@ static inline bool mptcp_established_options(struct sock *sk,
+ 	return false;
+ }
+ 
+-static inline void mptcp_incoming_options(struct sock *sk,
++static inline bool mptcp_incoming_options(struct sock *sk,
+ 					  struct sk_buff *skb)
+ {
++	return true;
+ }
+ 
+ static inline void mptcp_skb_ext_move(struct sk_buff *to,
+diff --git a/include/sound/soc.h b/include/sound/soc.h
+index e746da996351e..723eeb1c3f78d 100644
+--- a/include/sound/soc.h
++++ b/include/sound/soc.h
+@@ -712,6 +712,12 @@ struct snd_soc_dai_link {
+ 	/* Do not create a PCM for this DAI link (Backend link) */
+ 	unsigned int ignore:1;
+ 
++	/* This flag reorders the stop sequence: when it is set, the
++	 * DMA controller stop sequence is invoked first, followed by
++	 * the CPU DAI driver stop sequence.
++	 */
++	unsigned int stop_dma_first:1;
++
+ #ifdef CONFIG_SND_SOC_TOPOLOGY
+ 	struct snd_soc_dobj dobj; /* For topology */
+ #endif
+diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
+index 3ccf591b23740..9f73ed2cf0611 100644
+--- a/include/trace/events/afs.h
++++ b/include/trace/events/afs.h
+@@ -174,6 +174,34 @@ enum afs_vl_operation {
+ 	afs_VL_GetCapabilities	= 65537,	/* AFS Get VL server capabilities */
+ };
+ 
++enum afs_cm_operation {
++	afs_CB_CallBack			= 204,	/* AFS break callback promises */
++	afs_CB_InitCallBackState	= 205,	/* AFS initialise callback state */
++	afs_CB_Probe			= 206,	/* AFS probe client */
++	afs_CB_GetLock			= 207,	/* AFS get contents of CM lock table */
++	afs_CB_GetCE			= 208,	/* AFS get cache file description */
++	afs_CB_GetXStatsVersion		= 209,	/* AFS get version of extended statistics */
++	afs_CB_GetXStats		= 210,	/* AFS get contents of extended statistics data */
++	afs_CB_InitCallBackState3	= 213,	/* AFS initialise callback state, version 3 */
++	afs_CB_ProbeUuid		= 214,	/* AFS check the client hasn't rebooted */
++};
++
++enum yfs_cm_operation {
++	yfs_CB_Probe			= 206,	/* YFS probe client */
++	yfs_CB_GetLock			= 207,	/* YFS get contents of CM lock table */
++	yfs_CB_XStatsVersion		= 209,	/* YFS get version of extended statistics */
++	yfs_CB_GetXStats		= 210,	/* YFS get contents of extended statistics data */
++	yfs_CB_InitCallBackState3	= 213,	/* YFS initialise callback state, version 3 */
++	yfs_CB_ProbeUuid		= 214,	/* YFS check the client hasn't rebooted */
++	yfs_CB_GetServerPrefs		= 215,
++	yfs_CB_GetCellServDV		= 216,
++	yfs_CB_GetLocalCell		= 217,
++	yfs_CB_GetCacheConfig		= 218,
++	yfs_CB_GetCellByNum		= 65537,
++	yfs_CB_TellMeAboutYourself	= 65538, /* get client capabilities */
++	yfs_CB_CallBack			= 64204,
++};
++
+ enum afs_edit_dir_op {
+ 	afs_edit_dir_create,
+ 	afs_edit_dir_create_error,
+@@ -436,6 +464,32 @@ enum afs_cb_break_reason {
+ 	EM(afs_YFSVL_GetCellName,		"YFSVL.GetCellName") \
+ 	E_(afs_VL_GetCapabilities,		"VL.GetCapabilities")
+ 
++#define afs_cm_operations \
++	EM(afs_CB_CallBack,			"CB.CallBack") \
++	EM(afs_CB_InitCallBackState,		"CB.InitCallBackState") \
++	EM(afs_CB_Probe,			"CB.Probe") \
++	EM(afs_CB_GetLock,			"CB.GetLock") \
++	EM(afs_CB_GetCE,			"CB.GetCE") \
++	EM(afs_CB_GetXStatsVersion,		"CB.GetXStatsVersion") \
++	EM(afs_CB_GetXStats,			"CB.GetXStats") \
++	EM(afs_CB_InitCallBackState3,		"CB.InitCallBackState3") \
++	E_(afs_CB_ProbeUuid,			"CB.ProbeUuid")
++
++#define yfs_cm_operations \
++	EM(yfs_CB_Probe,			"YFSCB.Probe") \
++	EM(yfs_CB_GetLock,			"YFSCB.GetLock") \
++	EM(yfs_CB_XStatsVersion,		"YFSCB.XStatsVersion") \
++	EM(yfs_CB_GetXStats,			"YFSCB.GetXStats") \
++	EM(yfs_CB_InitCallBackState3,		"YFSCB.InitCallBackState3") \
++	EM(yfs_CB_ProbeUuid,			"YFSCB.ProbeUuid") \
++	EM(yfs_CB_GetServerPrefs,		"YFSCB.GetServerPrefs") \
++	EM(yfs_CB_GetCellServDV,		"YFSCB.GetCellServDV") \
++	EM(yfs_CB_GetLocalCell,			"YFSCB.GetLocalCell") \
++	EM(yfs_CB_GetCacheConfig,		"YFSCB.GetCacheConfig") \
++	EM(yfs_CB_GetCellByNum,			"YFSCB.GetCellByNum") \
++	EM(yfs_CB_TellMeAboutYourself,		"YFSCB.TellMeAboutYourself") \
++	E_(yfs_CB_CallBack,			"YFSCB.CallBack")
++
+ #define afs_edit_dir_ops				  \
+ 	EM(afs_edit_dir_create,			"create") \
+ 	EM(afs_edit_dir_create_error,		"c_fail") \
+@@ -569,6 +623,8 @@ afs_server_traces;
+ afs_cell_traces;
+ afs_fs_operations;
+ afs_vl_operations;
++afs_cm_operations;
++yfs_cm_operations;
+ afs_edit_dir_ops;
+ afs_edit_dir_reasons;
+ afs_eproto_causes;
+@@ -649,20 +705,21 @@ TRACE_EVENT(afs_cb_call,
+ 
+ 	    TP_STRUCT__entry(
+ 		    __field(unsigned int,		call		)
+-		    __field(const char *,		name		)
+ 		    __field(u32,			op		)
++		    __field(u16,			service_id	)
+ 			     ),
+ 
+ 	    TP_fast_assign(
+ 		    __entry->call	= call->debug_id;
+-		    __entry->name	= call->type->name;
+ 		    __entry->op		= call->operation_ID;
++		    __entry->service_id	= call->service_id;
+ 			   ),
+ 
+-	    TP_printk("c=%08x %s o=%u",
++	    TP_printk("c=%08x %s",
+ 		      __entry->call,
+-		      __entry->name,
+-		      __entry->op)
++		      __entry->service_id == 2501 ?
++		      __print_symbolic(__entry->op, yfs_cm_operations) :
++		      __print_symbolic(__entry->op, afs_cm_operations))
+ 	    );
+ 
+ TRACE_EVENT(afs_call,
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index d8a6fcd28e39e..e6db39a00de25 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3675,6 +3675,8 @@ continue_func:
+ 	if (tail_call_reachable)
+ 		for (j = 0; j < frame; j++)
+ 			subprog[ret_prog[j]].tail_call_reachable = true;
++	if (subprog[0].tail_call_reachable)
++		env->prog->aux->tail_call_reachable = true;
+ 
+ 	/* end of for() loop means the last insn of the 'subprog'
+ 	 * was reached. Doesn't matter whether it was JA or EXIT
+diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
+index 910ae69cae777..af4a6ef48ce04 100644
+--- a/kernel/dma/ops_helpers.c
++++ b/kernel/dma/ops_helpers.c
+@@ -5,6 +5,13 @@
+  */
+ #include <linux/dma-map-ops.h>
+ 
++static struct page *dma_common_vaddr_to_page(void *cpu_addr)
++{
++	if (is_vmalloc_addr(cpu_addr))
++		return vmalloc_to_page(cpu_addr);
++	return virt_to_page(cpu_addr);
++}
++
+ /*
+  * Create scatter-list for the already allocated DMA buffer.
+  */
+@@ -12,7 +19,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
+ 		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
+ 		 unsigned long attrs)
+ {
+-	struct page *page = virt_to_page(cpu_addr);
++	struct page *page = dma_common_vaddr_to_page(cpu_addr);
+ 	int ret;
+ 
+ 	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+@@ -32,6 +39,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+ 	unsigned long user_count = vma_pages(vma);
+ 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ 	unsigned long off = vma->vm_pgoff;
++	struct page *page = dma_common_vaddr_to_page(cpu_addr);
+ 	int ret = -ENXIO;
+ 
+ 	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
+@@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+ 		return -ENXIO;
+ 
+ 	return remap_pfn_range(vma, vma->vm_start,
+-			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
++			page_to_pfn(page) + vma->vm_pgoff,
+ 			user_count << PAGE_SHIFT, vma->vm_page_prot);
+ #else
+ 	return -ENXIO;
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 3bb96a8b49c9b..aa52fc85dbcbf 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -991,6 +991,11 @@ static void posix_cpu_timer_rearm(struct k_itimer *timer)
+ 	if (!p)
+ 		goto out;
+ 
++	/* Protect timer list r/w in arm_timer() */
++	sighand = lock_task_sighand(p, &flags);
++	if (unlikely(sighand == NULL))
++		goto out;
++
+ 	/*
+ 	 * Fetch the current sample and update the timer's expiry time.
+ 	 */
+@@ -1001,11 +1006,6 @@ static void posix_cpu_timer_rearm(struct k_itimer *timer)
+ 
+ 	bump_cpu_timer(timer, now);
+ 
+-	/* Protect timer list r/w in arm_timer() */
+-	sighand = lock_task_sighand(p, &flags);
+-	if (unlikely(sighand == NULL))
+-		goto out;
+-
+ 	/*
+ 	 * Now re-arm for the new expiry time.
+ 	 */
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index d111adf4a0cb4..99b97ccefdbdf 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -207,6 +207,7 @@ struct timer_base {
+ 	unsigned int		cpu;
+ 	bool			next_expiry_recalc;
+ 	bool			is_idle;
++	bool			timers_pending;
+ 	DECLARE_BITMAP(pending_map, WHEEL_SIZE);
+ 	struct hlist_head	vectors[WHEEL_SIZE];
+ } ____cacheline_aligned;
+@@ -595,6 +596,7 @@ static void enqueue_timer(struct timer_base *base, struct timer_list *timer,
+ 		 * can reevaluate the wheel:
+ 		 */
+ 		base->next_expiry = bucket_expiry;
++		base->timers_pending = true;
+ 		base->next_expiry_recalc = false;
+ 		trigger_dyntick_cpu(base, timer);
+ 	}
+@@ -1596,6 +1598,7 @@ static unsigned long __next_timer_interrupt(struct timer_base *base)
+ 	}
+ 
+ 	base->next_expiry_recalc = false;
++	base->timers_pending = !(next == base->clk + NEXT_TIMER_MAX_DELTA);
+ 
+ 	return next;
+ }
+@@ -1647,7 +1650,6 @@ u64 get_next_timer_interrupt(unsigned long basej, u64 basem)
+ 	struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
+ 	u64 expires = KTIME_MAX;
+ 	unsigned long nextevt;
+-	bool is_max_delta;
+ 
+ 	/*
+ 	 * Pretend that there is no timer pending if the cpu is offline.
+@@ -1660,7 +1662,6 @@ u64 get_next_timer_interrupt(unsigned long basej, u64 basem)
+ 	if (base->next_expiry_recalc)
+ 		base->next_expiry = __next_timer_interrupt(base);
+ 	nextevt = base->next_expiry;
+-	is_max_delta = (nextevt == base->clk + NEXT_TIMER_MAX_DELTA);
+ 
+ 	/*
+ 	 * We have a fresh next event. Check whether we can forward the
+@@ -1678,7 +1679,7 @@ u64 get_next_timer_interrupt(unsigned long basej, u64 basem)
+ 		expires = basem;
+ 		base->is_idle = false;
+ 	} else {
+-		if (!is_max_delta)
++		if (base->timers_pending)
+ 			expires = basem + (u64)(nextevt - basej) * TICK_NSEC;
+ 		/*
+ 		 * If we expect to sleep more than a tick, mark the base idle.
+@@ -1961,6 +1962,7 @@ int timers_prepare_cpu(unsigned int cpu)
+ 		base = per_cpu_ptr(&timer_bases[b], cpu);
+ 		base->clk = jiffies;
+ 		base->next_expiry = base->clk + NEXT_TIMER_MAX_DELTA;
++		base->timers_pending = false;
+ 		base->is_idle = false;
+ 	}
+ 	return 0;
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 2c0ee64849903..16a8d8da29f73 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -3880,10 +3880,30 @@ static bool rb_per_cpu_empty(struct ring_buffer_per_cpu *cpu_buffer)
+ 	if (unlikely(!head))
+ 		return true;
+ 
+-	return reader->read == rb_page_commit(reader) &&
+-		(commit == reader ||
+-		 (commit == head &&
+-		  head->read == rb_page_commit(commit)));
++	/* Reader should exhaust content in reader page */
++	if (reader->read != rb_page_commit(reader))
++		return false;
++
++	/*
++	 * If writers are committing on the reader page and, as checked
++	 * above, all committed content has been read, the ring buffer
++	 * is empty.
++	 */
++	if (commit == reader)
++		return true;
++
++	/*
++	 * If writers are committing on a page other than reader page
++	 * and head page, there should always be content to read.
++	 */
++	if (commit != head)
++		return false;
++
++	/*
++	 * Writers are committing on the head page; we only need to
++	 * check whether there is committed data, and the reader will
++	 * swap the reader page with the head page when it reads.
++	 */
++	return rb_page_commit(commit) == 0;
+ }
+ 
+ /**
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 1afd3b57ae517..20196380fc545 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -5565,6 +5565,10 @@ static const char readme_msg[] =
+ 	"\t            [:name=histname1]\n"
+ 	"\t            [:<handler>.<action>]\n"
+ 	"\t            [if <filter>]\n\n"
++	"\t    Note, special fields can be used as well:\n"
++	"\t            common_timestamp - to record current timestamp\n"
++	"\t            common_cpu - to record the CPU the event happened on\n"
++	"\n"
+ 	"\t    When a matching event is hit, an entry is added to a hash\n"
+ 	"\t    table using the key(s) and value(s) named, and the value of a\n"
+ 	"\t    sum called 'hitcount' is incremented.  Keys and values\n"
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 53b4ab9ea305c..023777a4a04d7 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1111,7 +1111,7 @@ static const char *hist_field_name(struct hist_field *field,
+ 		 field->flags & HIST_FIELD_FL_ALIAS)
+ 		field_name = hist_field_name(field->operands[0], ++level);
+ 	else if (field->flags & HIST_FIELD_FL_CPU)
+-		field_name = "cpu";
++		field_name = "common_cpu";
+ 	else if (field->flags & HIST_FIELD_FL_EXPR ||
+ 		 field->flags & HIST_FIELD_FL_VAR_REF) {
+ 		if (field->system) {
+@@ -1991,14 +1991,24 @@ parse_field(struct hist_trigger_data *hist_data, struct trace_event_file *file,
+ 		hist_data->enable_timestamps = true;
+ 		if (*flags & HIST_FIELD_FL_TIMESTAMP_USECS)
+ 			hist_data->attrs->ts_in_usecs = true;
+-	} else if (strcmp(field_name, "cpu") == 0)
++	} else if (strcmp(field_name, "common_cpu") == 0)
+ 		*flags |= HIST_FIELD_FL_CPU;
+ 	else {
+ 		field = trace_find_event_field(file->event_call, field_name);
+ 		if (!field || !field->size) {
+-			hist_err(tr, HIST_ERR_FIELD_NOT_FOUND, errpos(field_name));
+-			field = ERR_PTR(-EINVAL);
+-			goto out;
++			/*
++			 * For backward compatibility, if field_name
++			 * was "cpu", then we treat this the same as
++			 * common_cpu.
++			 */
++			if (strcmp(field_name, "cpu") == 0) {
++				*flags |= HIST_FIELD_FL_CPU;
++			} else {
++				hist_err(tr, HIST_ERR_FIELD_NOT_FOUND,
++					 errpos(field_name));
++				field = ERR_PTR(-EINVAL);
++				goto out;
++			}
+ 		}
+ 	}
+  out:
+@@ -5085,7 +5095,7 @@ static void hist_field_print(struct seq_file *m, struct hist_field *hist_field)
+ 		seq_printf(m, "%s=", hist_field->var.name);
+ 
+ 	if (hist_field->flags & HIST_FIELD_FL_CPU)
+-		seq_puts(m, "cpu");
++		seq_puts(m, "common_cpu");
+ 	else if (field_name) {
+ 		if (hist_field->flags & HIST_FIELD_FL_VAR_REF ||
+ 		    hist_field->flags & HIST_FIELD_FL_ALIAS)
+diff --git a/kernel/trace/trace_synth.h b/kernel/trace/trace_synth.h
+index 6e146b959dcd0..4007fe95cf42c 100644
+--- a/kernel/trace/trace_synth.h
++++ b/kernel/trace/trace_synth.h
+@@ -14,10 +14,10 @@ struct synth_field {
+ 	char *name;
+ 	size_t size;
+ 	unsigned int offset;
++	unsigned int field_pos;
+ 	bool is_signed;
+ 	bool is_string;
+ 	bool is_dynamic;
+-	bool field_pos;
+ };
+ 
+ struct synth_event {
+diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
+index 976bf8ce80396..fc32821f8240b 100644
+--- a/kernel/tracepoint.c
++++ b/kernel/tracepoint.c
+@@ -299,8 +299,8 @@ static int tracepoint_add_func(struct tracepoint *tp,
+ 	 * a pointer to it.  This array is referenced by __DO_TRACE from
+ 	 * include/linux/tracepoint.h using rcu_dereference_sched().
+ 	 */
+-	rcu_assign_pointer(tp->funcs, tp_funcs);
+ 	tracepoint_update_call(tp, tp_funcs, false);
++	rcu_assign_pointer(tp->funcs, tp_funcs);
+ 	static_key_enable(&tp->key);
+ 
+ 	release_probes(old);
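
The tracepoint.c swap above is an instance of publish-after-initialize: update the call target first, and only then make tp->funcs visible, so a concurrent tracer never dereferences state the callback is not yet prepared for. A generic sketch of the rule with hypothetical names:

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct cfg { int a, b; };
static struct cfg __rcu *active_cfg;	/* hypothetical shared state */

static int install_cfg(int a, int b)
{
	struct cfg *c = kmalloc(sizeof(*c), GFP_KERNEL);

	if (!c)
		return -ENOMEM;
	c->a = a;				/* 1: fully initialize */
	c->b = b;
	rcu_assign_pointer(active_cfg, c);	/* 2: then publish     */
	return 0;
}
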
+diff --git a/mm/kfence/core.c b/mm/kfence/core.c
+index d7666ace9d2e4..575c685aa6422 100644
+--- a/mm/kfence/core.c
++++ b/mm/kfence/core.c
+@@ -733,6 +733,22 @@ void kfence_shutdown_cache(struct kmem_cache *s)
+ 
+ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
+ {
++	/*
++	 * Perform size check before switching kfence_allocation_gate, so that
++	 * we don't disable KFENCE without making an allocation.
++	 */
++	if (size > PAGE_SIZE)
++		return NULL;
++
++	/*
++	 * Skip allocations from non-default zones, including DMA. We cannot
++	 * guarantee that pages in the KFENCE pool will have the requested
++	 * properties (e.g. reside in DMAable memory).
++	 */
++	if ((flags & GFP_ZONEMASK) ||
++	    (s->flags & (SLAB_CACHE_DMA | SLAB_CACHE_DMA32)))
++		return NULL;
++
+ 	/*
+ 	 * allocation_gate only needs to become non-zero, so it doesn't make
+ 	 * sense to continue writing to it and pay the associated contention
+@@ -757,9 +773,6 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
+ 	if (!READ_ONCE(kfence_enabled))
+ 		return NULL;
+ 
+-	if (size > PAGE_SIZE)
+-		return NULL;
+-
+ 	return kfence_guarded_alloc(s, size, flags);
+ }
+ 
+diff --git a/mm/memblock.c b/mm/memblock.c
+index afaefa8fc6ab5..d47b7afc9dc42 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -940,7 +940,8 @@ static bool should_skip_region(struct memblock_type *type,
+ 		return true;
+ 
+ 	/* skip hotpluggable memory regions if needed */
+-	if (movable_node_is_enabled() && memblock_is_hotpluggable(m))
++	if (movable_node_is_enabled() && memblock_is_hotpluggable(m) &&
++	    !(flags & MEMBLOCK_HOTPLUG))
+ 		return true;
+ 
+ 	/* if we want mirror memory skip non-mirror memory regions */
+diff --git a/mm/memory.c b/mm/memory.c
+index 3fdba3ec69c8b..f2293c0df33ff 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3891,8 +3891,17 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
+ 				return ret;
+ 		}
+ 
+-		if (unlikely(pte_alloc(vma->vm_mm, vmf->pmd)))
++		if (vmf->prealloc_pte) {
++			vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
++			if (likely(pmd_none(*vmf->pmd))) {
++				mm_inc_nr_ptes(vma->vm_mm);
++				pmd_populate(vma->vm_mm, vmf->pmd, vmf->prealloc_pte);
++				vmf->prealloc_pte = NULL;
++			}
++			spin_unlock(vmf->ptl);
++		} else if (unlikely(pte_alloc(vma->vm_mm, vmf->pmd))) {
+ 			return VM_FAULT_OOM;
++		}
+ 	}
+ 
+ 	/* See comment in handle_pte_fault() */
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index fc5beebf69887..de392908bf64a 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -801,21 +801,24 @@ void init_mem_debugging_and_hardening(void)
+ 	}
+ #endif
+ 
+-	if (_init_on_alloc_enabled_early) {
+-		if (page_poisoning_requested)
+-			pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
+-				"will take precedence over init_on_alloc\n");
+-		else
+-			static_branch_enable(&init_on_alloc);
+-	}
+-	if (_init_on_free_enabled_early) {
+-		if (page_poisoning_requested)
+-			pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
+-				"will take precedence over init_on_free\n");
+-		else
+-			static_branch_enable(&init_on_free);
++	if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early) &&
++	    page_poisoning_requested) {
++		pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
++			"will take precedence over init_on_alloc and init_on_free\n");
++		_init_on_alloc_enabled_early = false;
++		_init_on_free_enabled_early = false;
+ 	}
+ 
++	if (_init_on_alloc_enabled_early)
++		static_branch_enable(&init_on_alloc);
++	else
++		static_branch_disable(&init_on_alloc);
++
++	if (_init_on_free_enabled_early)
++		static_branch_enable(&init_on_free);
++	else
++		static_branch_disable(&init_on_free);
++
+ #ifdef CONFIG_DEBUG_PAGEALLOC
+ 	if (!debug_pagealloc_enabled())
+ 		return;
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index a5d72c48fb662..28ac3c96fa88b 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -701,6 +701,9 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
+ 	void *data;
+ 	int ret;
+ 
++	if (prog->expected_attach_type == BPF_XDP_DEVMAP ||
++	    prog->expected_attach_type == BPF_XDP_CPUMAP)
++		return -EINVAL;
+ 	if (kattr->test.ctx_in || kattr->test.ctx_out)
+ 		return -EINVAL;
+ 
+diff --git a/net/caif/caif_socket.c b/net/caif/caif_socket.c
+index 3ad0a1df67128..9d26c5e9da058 100644
+--- a/net/caif/caif_socket.c
++++ b/net/caif/caif_socket.c
+@@ -539,7 +539,8 @@ static int caif_seqpkt_sendmsg(struct socket *sock, struct msghdr *msg,
+ 		goto err;
+ 
+ 	ret = -EINVAL;
+-	if (unlikely(msg->msg_iter.iov->iov_base == NULL))
++	if (unlikely(msg->msg_iter.nr_segs == 0) ||
++	    unlikely(msg->msg_iter.iov->iov_base == NULL))
+ 		goto err;
+ 	noblock = msg->msg_flags & MSG_DONTWAIT;
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 4f29dde4ed0a7..04c4e236952fa 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5981,6 +5981,19 @@ static void gro_list_prepare(const struct list_head *head,
+ 			diffs = memcmp(skb_mac_header(p),
+ 				       skb_mac_header(skb),
+ 				       maclen);
++
++		diffs |= skb_get_nfct(p) ^ skb_get_nfct(skb);
++#if IS_ENABLED(CONFIG_SKB_EXTENSIONS) && IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
++		if (!diffs) {
++			struct tc_skb_ext *skb_ext = skb_ext_find(skb, TC_SKB_EXT);
++			struct tc_skb_ext *p_ext = skb_ext_find(p, TC_SKB_EXT);
++
++			diffs |= (!!p_ext) ^ (!!skb_ext);
++			if (!diffs && unlikely(skb_ext))
++				diffs |= p_ext->chain ^ skb_ext->chain;
++		}
++#endif
++
+ 		NAPI_GRO_CB(p)->same_flow = !diffs;
+ 	}
+ }
+@@ -6245,6 +6258,7 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
+ 	skb_shinfo(skb)->gso_type = 0;
+ 	skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
+ 	skb_ext_reset(skb);
++	nf_reset_ct(skb);
+ 
+ 	napi->skb = skb;
+ }
+@@ -9659,14 +9673,17 @@ int bpf_xdp_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+ 	struct net_device *dev;
+ 	int err, fd;
+ 
++	rtnl_lock();
+ 	dev = dev_get_by_index(net, attr->link_create.target_ifindex);
+-	if (!dev)
++	if (!dev) {
++		rtnl_unlock();
+ 		return -EINVAL;
++	}
+ 
+ 	link = kzalloc(sizeof(*link), GFP_USER);
+ 	if (!link) {
+ 		err = -ENOMEM;
+-		goto out_put_dev;
++		goto unlock;
+ 	}
+ 
+ 	bpf_link_init(&link->link, BPF_LINK_TYPE_XDP, &bpf_xdp_link_lops, prog);
+@@ -9676,14 +9693,14 @@ int bpf_xdp_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+ 	err = bpf_link_prime(&link->link, &link_primer);
+ 	if (err) {
+ 		kfree(link);
+-		goto out_put_dev;
++		goto unlock;
+ 	}
+ 
+-	rtnl_lock();
+ 	err = dev_xdp_attach_link(dev, NULL, link);
+ 	rtnl_unlock();
+ 
+ 	if (err) {
++		link->dev = NULL;
+ 		bpf_link_cleanup(&link_primer);
+ 		goto out_put_dev;
+ 	}
+@@ -9693,6 +9710,9 @@ int bpf_xdp_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+ 	dev_put(dev);
+ 	return fd;
+ 
++unlock:
++	rtnl_unlock();
++
+ out_put_dev:
+ 	dev_put(dev);
+ 	return err;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index bbc3b4b62032b..30ca61d91b69a 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -939,6 +939,7 @@ void __kfree_skb_defer(struct sk_buff *skb)
+ 
+ void napi_skb_free_stolen_head(struct sk_buff *skb)
+ {
++	nf_reset_ct(skb);
+ 	skb_dst_drop(skb);
+ 	skb_ext_put(skb);
+ 	napi_skb_cache_put(skb);
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 539c83a45665e..b2410a1bfa23d 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -531,10 +531,8 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
+ 	if (skb_linearize(skb))
+ 		return -EAGAIN;
+ 	num_sge = skb_to_sgvec(skb, msg->sg.data, 0, skb->len);
+-	if (unlikely(num_sge < 0)) {
+-		kfree(msg);
++	if (unlikely(num_sge < 0))
+ 		return num_sge;
+-	}
+ 
+ 	copied = skb->len;
+ 	msg->sg.start = 0;
+@@ -553,6 +551,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb)
+ {
+ 	struct sock *sk = psock->sk;
+ 	struct sk_msg *msg;
++	int err;
+ 
+ 	/* If we are receiving on the same sock skb->sk is already assigned,
+ 	 * skip memory accounting and owner transition seeing it already set
+@@ -571,7 +570,10 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb)
+ 	 * into user buffers.
+ 	 */
+ 	skb_set_owner_r(skb, sk);
+-	return sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
++	err = sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
++	if (err < 0)
++		kfree(msg);
++	return err;
+ }
+ 
+ /* Puts an skb on the ingress queue of the socket already assigned to the
+@@ -582,12 +584,16 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
+ {
+ 	struct sk_msg *msg = kzalloc(sizeof(*msg), __GFP_NOWARN | GFP_ATOMIC);
+ 	struct sock *sk = psock->sk;
++	int err;
+ 
+ 	if (unlikely(!msg))
+ 		return -EAGAIN;
+ 	sk_msg_init(msg);
+ 	skb_set_owner_r(skb, sk);
+-	return sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
++	err = sk_psock_skb_ingress_enqueue(skb, psock, sk, msg);
++	if (err < 0)
++		kfree(msg);
++	return err;
+ }
+ 
+ static int sk_psock_handle_skb(struct sk_psock *psock, struct sk_buff *skb,
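
The skmsg.c refactor above moves the kfree(msg) out of sk_psock_skb_ingress_enqueue() and into the two callers that allocated msg, so ownership of the buffer stays with its allocator. A hypothetical sketch of that shape (struct thing and consume() are illustrative):

#include <linux/errno.h>
#include <linux/slab.h>

struct thing { int x; };

static int consume(struct thing *t)	/* may fail; never frees t */
{
	return t->x ? 0 : -EINVAL;
}

static int caller(void)
{
	struct thing *t = kzalloc(sizeof(*t), GFP_ATOMIC);
	int err;

	if (!t)
		return -EAGAIN;
	err = consume(t);
	if (err < 0)
		kfree(t);	/* the allocator frees on failure */
	return err;
}
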
+diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c
+index 5dbd45dc35ad3..dc92a67baea39 100644
+--- a/net/decnet/af_decnet.c
++++ b/net/decnet/af_decnet.c
+@@ -816,7 +816,7 @@ static int dn_auto_bind(struct socket *sock)
+ static int dn_confirm_accept(struct sock *sk, long *timeo, gfp_t allocation)
+ {
+ 	struct dn_scp *scp = DN_SK(sk);
+-	DEFINE_WAIT(wait);
++	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ 	int err;
+ 
+ 	if (scp->state != DN_CR)
+@@ -826,11 +826,11 @@ static int dn_confirm_accept(struct sock *sk, long *timeo, gfp_t allocation)
+ 	scp->segsize_loc = dst_metric_advmss(__sk_dst_get(sk));
+ 	dn_send_conn_conf(sk, allocation);
+ 
+-	prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
++	add_wait_queue(sk_sleep(sk), &wait);
+ 	for(;;) {
+ 		release_sock(sk);
+ 		if (scp->state == DN_CC)
+-			*timeo = schedule_timeout(*timeo);
++			*timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, *timeo);
+ 		lock_sock(sk);
+ 		err = 0;
+ 		if (scp->state == DN_RUN)
+@@ -844,9 +844,8 @@ static int dn_confirm_accept(struct sock *sk, long *timeo, gfp_t allocation)
+ 		err = -EAGAIN;
+ 		if (!*timeo)
+ 			break;
+-		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
+ 	}
+-	finish_wait(sk_sleep(sk), &wait);
++	remove_wait_queue(sk_sleep(sk), &wait);
+ 	if (err == 0) {
+ 		sk->sk_socket->state = SS_CONNECTED;
+ 	} else if (scp->state != DN_CC) {
+@@ -858,7 +857,7 @@ static int dn_confirm_accept(struct sock *sk, long *timeo, gfp_t allocation)
+ static int dn_wait_run(struct sock *sk, long *timeo)
+ {
+ 	struct dn_scp *scp = DN_SK(sk);
+-	DEFINE_WAIT(wait);
++	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ 	int err = 0;
+ 
+ 	if (scp->state == DN_RUN)
+@@ -867,11 +866,11 @@ static int dn_wait_run(struct sock *sk, long *timeo)
+ 	if (!*timeo)
+ 		return -EALREADY;
+ 
+-	prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
++	add_wait_queue(sk_sleep(sk), &wait);
+ 	for(;;) {
+ 		release_sock(sk);
+ 		if (scp->state == DN_CI || scp->state == DN_CC)
+-			*timeo = schedule_timeout(*timeo);
++			*timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, *timeo);
+ 		lock_sock(sk);
+ 		err = 0;
+ 		if (scp->state == DN_RUN)
+@@ -885,9 +884,8 @@ static int dn_wait_run(struct sock *sk, long *timeo)
+ 		err = -ETIMEDOUT;
+ 		if (!*timeo)
+ 			break;
+-		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
+ 	}
+-	finish_wait(sk_sleep(sk), &wait);
++	remove_wait_queue(sk_sleep(sk), &wait);
+ out:
+ 	if (err == 0) {
+ 		sk->sk_socket->state = SS_CONNECTED;
+@@ -1032,16 +1030,16 @@ static void dn_user_copy(struct sk_buff *skb, struct optdata_dn *opt)
+ 
+ static struct sk_buff *dn_wait_for_connect(struct sock *sk, long *timeo)
+ {
+-	DEFINE_WAIT(wait);
++	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ 	struct sk_buff *skb = NULL;
+ 	int err = 0;
+ 
+-	prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
++	add_wait_queue(sk_sleep(sk), &wait);
+ 	for(;;) {
+ 		release_sock(sk);
+ 		skb = skb_dequeue(&sk->sk_receive_queue);
+ 		if (skb == NULL) {
+-			*timeo = schedule_timeout(*timeo);
++			*timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, *timeo);
+ 			skb = skb_dequeue(&sk->sk_receive_queue);
+ 		}
+ 		lock_sock(sk);
+@@ -1056,9 +1054,8 @@ static struct sk_buff *dn_wait_for_connect(struct sock *sk, long *timeo)
+ 		err = -EAGAIN;
+ 		if (!*timeo)
+ 			break;
+-		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
+ 	}
+-	finish_wait(sk_sleep(sk), &wait);
++	remove_wait_queue(sk_sleep(sk), &wait);
+ 
+ 	return skb == NULL ? ERR_PTR(err) : skb;
+ }
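
The af_decnet.c conversions above all follow the same pattern: DEFINE_WAIT_FUNC() plus wait_woken() instead of prepare_to_wait() plus schedule_timeout(). The wakeup state lives in the wait entry itself, so a wake_up() that lands between the condition check and the sleep makes wait_woken() return immediately instead of being lost. A minimal sketch with a hypothetical flag and queue:

#include <linux/sched/signal.h>
#include <linux/wait.h>

static long wait_for_flag(wait_queue_head_t *wq, bool *flag, long timeo)
{
	DEFINE_WAIT_FUNC(wait, woken_wake_function);
	long err = 0;

	add_wait_queue(wq, &wait);
	while (!READ_ONCE(*flag)) {
		if (!timeo) {
			err = -EAGAIN;
			break;
		}
		if (signal_pending(current)) {
			err = -ERESTARTSYS;
			break;
		}
		timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
	}
	remove_wait_queue(wq, &wait);
	return err;
}
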
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index ad9d17923fc56..b65201ba4d931 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -486,7 +486,7 @@ static int __init tcp_bpf_v4_build_proto(void)
+ 	tcp_bpf_rebuild_protos(tcp_bpf_prots[TCP_BPF_IPV4], &tcp_prot);
+ 	return 0;
+ }
+-core_initcall(tcp_bpf_v4_build_proto);
++late_initcall(tcp_bpf_v4_build_proto);
+ 
+ static int tcp_bpf_assert_proto_ops(struct proto *ops)
+ {
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index af2814c9342af..d49709ba8e165 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -507,8 +507,18 @@ void tcp_fastopen_active_disable(struct sock *sk)
+ {
+ 	struct net *net = sock_net(sk);
+ 
++	if (!sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout)
++		return;
++
++	/* Paired with READ_ONCE() in tcp_fastopen_active_should_disable() */
++	WRITE_ONCE(net->ipv4.tfo_active_disable_stamp, jiffies);
++
++	/* Paired with smp_rmb() in tcp_fastopen_active_should_disable().
++	 * We want net->ipv4.tfo_active_disable_stamp to be updated first.
++	 */
++	smp_mb__before_atomic();
+ 	atomic_inc(&net->ipv4.tfo_active_disable_times);
+-	net->ipv4.tfo_active_disable_stamp = jiffies;
++
+ 	NET_INC_STATS(net, LINUX_MIB_TCPFASTOPENBLACKHOLE);
+ }
+ 
+@@ -519,17 +529,27 @@ void tcp_fastopen_active_disable(struct sock *sk)
+ bool tcp_fastopen_active_should_disable(struct sock *sk)
+ {
+ 	unsigned int tfo_bh_timeout = sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout;
+-	int tfo_da_times = atomic_read(&sock_net(sk)->ipv4.tfo_active_disable_times);
+ 	unsigned long timeout;
++	int tfo_da_times;
+ 	int multiplier;
+ 
++	if (!tfo_bh_timeout)
++		return false;
++
++	tfo_da_times = atomic_read(&sock_net(sk)->ipv4.tfo_active_disable_times);
+ 	if (!tfo_da_times)
+ 		return false;
+ 
++	/* Paired with smp_mb__before_atomic() in tcp_fastopen_active_disable() */
++	smp_rmb();
++
+ 	/* Limit timout to max: 2^6 * initial timeout */
+ 	multiplier = 1 << min(tfo_da_times - 1, 6);
+-	timeout = multiplier * tfo_bh_timeout * HZ;
+-	if (time_before(jiffies, sock_net(sk)->ipv4.tfo_active_disable_stamp + timeout))
++
++	/* Paired with the WRITE_ONCE() in tcp_fastopen_active_disable(). */
++	timeout = READ_ONCE(sock_net(sk)->ipv4.tfo_active_disable_stamp) +
++		  multiplier * tfo_bh_timeout * HZ;
++	if (time_before(jiffies, timeout))
+ 		return true;
+ 
+ 	/* Mark check bit so we can check for successful active TFO
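
The fastopen hunks above pair a WRITE_ONCE() plus smp_mb__before_atomic() on the writer side with an atomic_read() plus smp_rmb() on the reader side, so a reader that observes the incremented counter is guaranteed to also observe the fresh timestamp. A stripped-down sketch of the pairing (stamp and times are stand-ins for the tfo_active_disable_* fields):

#include <linux/atomic.h>
#include <linux/compiler.h>
#include <linux/jiffies.h>

static unsigned long stamp;	/* tfo_active_disable_stamp stand-in */
static atomic_t times;		/* tfo_active_disable_times stand-in */

static void disable_path(void)
{
	WRITE_ONCE(stamp, jiffies);	/* 1: store the timestamp        */
	smp_mb__before_atomic();	/* order store 1 before store 2  */
	atomic_inc(&times);		/* 2: make the counter visible   */
}

static bool should_disable(unsigned long timeout)
{
	if (!atomic_read(&times))	/* 2': counter first             */
		return false;
	smp_rmb();			/* pairs with the writer barrier */
	return time_before(jiffies,	/* 1': stamp is now up to date   */
			   READ_ONCE(stamp) + timeout);
}
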
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 6bd628f08ded2..0f1b4bfddfd4e 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4247,6 +4247,9 @@ void tcp_reset(struct sock *sk, struct sk_buff *skb)
+ {
+ 	trace_tcp_receive_reset(sk);
+ 
++	/* mptcp can't tell us to ignore reset pkts,
++	 * so just ignore the return value of mptcp_incoming_options().
++	 */
+ 	if (sk_is_mptcp(sk))
+ 		mptcp_incoming_options(sk, skb);
+ 
+@@ -4941,8 +4944,13 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
+ 	bool fragstolen;
+ 	int eaten;
+ 
+-	if (sk_is_mptcp(sk))
+-		mptcp_incoming_options(sk, skb);
++	/* If a subflow has been reset, the packet should not continue
++	 * to be processed; drop the packet.
++	 */
++	if (sk_is_mptcp(sk) && !mptcp_incoming_options(sk, skb)) {
++		__kfree_skb(skb);
++		return;
++	}
+ 
+ 	if (TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq) {
+ 		__kfree_skb(skb);
+@@ -6522,8 +6530,11 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
+ 	case TCP_CLOSING:
+ 	case TCP_LAST_ACK:
+ 		if (!before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
+-			if (sk_is_mptcp(sk))
+-				mptcp_incoming_options(sk, skb);
++			/* If a subflow has been reset, the packet should not
++			 * continue to be processed; drop the packet.
++			 */
++			if (sk_is_mptcp(sk) && !mptcp_incoming_options(sk, skb))
++				goto discard;
+ 			break;
+ 		}
+ 		fallthrough;
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index e409f2de5dc4f..8bb5f7f51dae7 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2954,7 +2954,7 @@ static int __net_init tcp_sk_init(struct net *net)
+ 	net->ipv4.sysctl_tcp_comp_sack_nr = 44;
+ 	net->ipv4.sysctl_tcp_fastopen = TFO_CLIENT_ENABLE;
+ 	spin_lock_init(&net->ipv4.tcp_fastopen_ctx_lock);
+-	net->ipv4.sysctl_tcp_fastopen_blackhole_timeout = 60 * 60;
++	net->ipv4.sysctl_tcp_fastopen_blackhole_timeout = 0;
+ 	atomic_set(&net->ipv4.tfo_active_disable_times, 0);
+ 
+ 	/* Reno is always built in */
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index ca9cf1051b1e1..568dc31a04670 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -645,10 +645,12 @@ static struct sock *__udp4_lib_err_encap(struct net *net,
+ 					 const struct iphdr *iph,
+ 					 struct udphdr *uh,
+ 					 struct udp_table *udptable,
++					 struct sock *sk,
+ 					 struct sk_buff *skb, u32 info)
+ {
++	int (*lookup)(struct sock *sk, struct sk_buff *skb);
+ 	int network_offset, transport_offset;
+-	struct sock *sk;
++	struct udp_sock *up;
+ 
+ 	network_offset = skb_network_offset(skb);
+ 	transport_offset = skb_transport_offset(skb);
+@@ -659,18 +661,28 @@ static struct sock *__udp4_lib_err_encap(struct net *net,
+ 	/* Transport header needs to point to the UDP header */
+ 	skb_set_transport_header(skb, iph->ihl << 2);
+ 
++	if (sk) {
++		up = udp_sk(sk);
++
++		lookup = READ_ONCE(up->encap_err_lookup);
++		if (lookup && lookup(sk, skb))
++			sk = NULL;
++
++		goto out;
++	}
++
+ 	sk = __udp4_lib_lookup(net, iph->daddr, uh->source,
+ 			       iph->saddr, uh->dest, skb->dev->ifindex, 0,
+ 			       udptable, NULL);
+ 	if (sk) {
+-		int (*lookup)(struct sock *sk, struct sk_buff *skb);
+-		struct udp_sock *up = udp_sk(sk);
++		up = udp_sk(sk);
+ 
+ 		lookup = READ_ONCE(up->encap_err_lookup);
+ 		if (!lookup || lookup(sk, skb))
+ 			sk = NULL;
+ 	}
+ 
++out:
+ 	if (!sk)
+ 		sk = ERR_PTR(__udp4_lib_err_encap_no_sk(skb, info));
+ 
+@@ -707,15 +719,16 @@ int __udp4_lib_err(struct sk_buff *skb, u32 info, struct udp_table *udptable)
+ 	sk = __udp4_lib_lookup(net, iph->daddr, uh->dest,
+ 			       iph->saddr, uh->source, skb->dev->ifindex,
+ 			       inet_sdif(skb), udptable, NULL);
++
+ 	if (!sk || udp_sk(sk)->encap_type) {
+ 		/* No socket for error: try tunnels before discarding */
+-		sk = ERR_PTR(-ENOENT);
+ 		if (static_branch_unlikely(&udp_encap_needed_key)) {
+-			sk = __udp4_lib_err_encap(net, iph, uh, udptable, skb,
++			sk = __udp4_lib_err_encap(net, iph, uh, udptable, sk, skb,
+ 						  info);
+ 			if (!sk)
+ 				return 0;
+-		}
++		} else
++			sk = ERR_PTR(-ENOENT);
+ 
+ 		if (IS_ERR(sk)) {
+ 			__ICMP_INC_STATS(net, ICMP_MIB_INERRORS);
+diff --git a/net/ipv4/udp_bpf.c b/net/ipv4/udp_bpf.c
+index 954c4591a6fd6..725b6df4b2a2c 100644
+--- a/net/ipv4/udp_bpf.c
++++ b/net/ipv4/udp_bpf.c
+@@ -101,7 +101,7 @@ static int __init udp_bpf_v4_build_proto(void)
+ 	udp_bpf_rebuild_protos(&udp_bpf_prots[UDP_BPF_IPV4], &udp_prot);
+ 	return 0;
+ }
+-core_initcall(udp_bpf_v4_build_proto);
++late_initcall(udp_bpf_v4_build_proto);
+ 
+ int udp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
+ {
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 497974b4372a7..b7ffb4f227a45 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -479,7 +479,9 @@ int ip6_forward(struct sk_buff *skb)
+ 	if (skb_warn_if_lro(skb))
+ 		goto drop;
+ 
+-	if (!xfrm6_policy_check(NULL, XFRM_POLICY_FWD, skb)) {
++	if (!net->ipv6.devconf_all->disable_policy &&
++	    !idev->cnf.disable_policy &&
++	    !xfrm6_policy_check(NULL, XFRM_POLICY_FWD, skb)) {
+ 		__IP6_INC_STATS(net, idev, IPSTATS_MIB_INDISCARDS);
+ 		goto drop;
+ 	}
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index d417e514bd52c..09e84161b7312 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3642,7 +3642,7 @@ static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
+ 		err = PTR_ERR(rt->fib6_metrics);
+ 		/* Do not leave garbage there. */
+ 		rt->fib6_metrics = (struct dst_metrics *)&dst_default_metrics;
+-		goto out;
++		goto out_free;
+ 	}
+ 
+ 	if (cfg->fc_flags & RTF_ADDRCONF)
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 6774e776228ce..2d3bd4a9b0d0e 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -502,12 +502,14 @@ static struct sock *__udp6_lib_err_encap(struct net *net,
+ 					 const struct ipv6hdr *hdr, int offset,
+ 					 struct udphdr *uh,
+ 					 struct udp_table *udptable,
++					 struct sock *sk,
+ 					 struct sk_buff *skb,
+ 					 struct inet6_skb_parm *opt,
+ 					 u8 type, u8 code, __be32 info)
+ {
++	int (*lookup)(struct sock *sk, struct sk_buff *skb);
+ 	int network_offset, transport_offset;
+-	struct sock *sk;
++	struct udp_sock *up;
+ 
+ 	network_offset = skb_network_offset(skb);
+ 	transport_offset = skb_transport_offset(skb);
+@@ -518,18 +520,28 @@ static struct sock *__udp6_lib_err_encap(struct net *net,
+ 	/* Transport header needs to point to the UDP header */
+ 	skb_set_transport_header(skb, offset);
+ 
++	if (sk) {
++		up = udp_sk(sk);
++
++		lookup = READ_ONCE(up->encap_err_lookup);
++		if (lookup && lookup(sk, skb))
++			sk = NULL;
++
++		goto out;
++	}
++
+ 	sk = __udp6_lib_lookup(net, &hdr->daddr, uh->source,
+ 			       &hdr->saddr, uh->dest,
+ 			       inet6_iif(skb), 0, udptable, skb);
+ 	if (sk) {
+-		int (*lookup)(struct sock *sk, struct sk_buff *skb);
+-		struct udp_sock *up = udp_sk(sk);
++		up = udp_sk(sk);
+ 
+ 		lookup = READ_ONCE(up->encap_err_lookup);
+ 		if (!lookup || lookup(sk, skb))
+ 			sk = NULL;
+ 	}
+ 
++out:
+ 	if (!sk) {
+ 		sk = ERR_PTR(__udp6_lib_err_encap_no_sk(skb, opt, type, code,
+ 							offset, info));
+@@ -558,16 +570,17 @@ int __udp6_lib_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 
+ 	sk = __udp6_lib_lookup(net, daddr, uh->dest, saddr, uh->source,
+ 			       inet6_iif(skb), inet6_sdif(skb), udptable, NULL);
++
+ 	if (!sk || udp_sk(sk)->encap_type) {
+ 		/* No socket for error: try tunnels before discarding */
+-		sk = ERR_PTR(-ENOENT);
+ 		if (static_branch_unlikely(&udpv6_encap_needed_key)) {
+ 			sk = __udp6_lib_err_encap(net, hdr, offset, uh,
+-						  udptable, skb,
++						  udptable, sk, skb,
+ 						  opt, type, code, info);
+ 			if (!sk)
+ 				return 0;
+-		}
++		} else
++			sk = ERR_PTR(-ENOENT);
+ 
+ 		if (IS_ERR(sk)) {
+ 			__ICMP6_INC_STATS(net, __in6_dev_get(skb->dev),
+diff --git a/net/mptcp/mib.c b/net/mptcp/mib.c
+index eb2dc6dbe212b..c8f4823cd79f2 100644
+--- a/net/mptcp/mib.c
++++ b/net/mptcp/mib.c
+@@ -42,6 +42,7 @@ static const struct snmp_mib mptcp_snmp_list[] = {
+ 	SNMP_MIB_ITEM("RmSubflow", MPTCP_MIB_RMSUBFLOW),
+ 	SNMP_MIB_ITEM("MPPrioTx", MPTCP_MIB_MPPRIOTX),
+ 	SNMP_MIB_ITEM("MPPrioRx", MPTCP_MIB_MPPRIORX),
++	SNMP_MIB_ITEM("RcvPruned", MPTCP_MIB_RCVPRUNED),
+ 	SNMP_MIB_SENTINEL
+ };
+ 
+diff --git a/net/mptcp/mib.h b/net/mptcp/mib.h
+index f0da4f060fe1a..93fa7c95e2068 100644
+--- a/net/mptcp/mib.h
++++ b/net/mptcp/mib.h
+@@ -35,6 +35,7 @@ enum linux_mptcp_mib_field {
+ 	MPTCP_MIB_RMSUBFLOW,		/* Remove a subflow */
+ 	MPTCP_MIB_MPPRIOTX,		/* Transmit a MP_PRIO */
+ 	MPTCP_MIB_MPPRIORX,		/* Received a MP_PRIO */
++	MPTCP_MIB_RCVPRUNED,		/* Incoming packet dropped due to memory limit */
+ 	__MPTCP_MIB_MAX
+ };
+ 
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index b87e46f515fb8..4f08e04e1ab71 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -323,7 +323,8 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ 	}
+ }
+ 
+-void mptcp_get_options(const struct sk_buff *skb,
++void mptcp_get_options(const struct sock *sk,
++		       const struct sk_buff *skb,
+ 		       struct mptcp_options_received *mp_opt)
+ {
+ 	const struct tcphdr *th = tcp_hdr(skb);
+@@ -989,7 +990,8 @@ static bool add_addr_hmac_valid(struct mptcp_sock *msk,
+ 	return hmac == mp_opt->ahmac;
+ }
+ 
+-void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
++/* Return false if a subflow has been reset, else return true */
++bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
+ 	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
+@@ -1007,12 +1009,16 @@ void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
+ 			__mptcp_check_push(subflow->conn, sk);
+ 		__mptcp_data_acked(subflow->conn);
+ 		mptcp_data_unlock(subflow->conn);
+-		return;
++		return true;
+ 	}
+ 
+-	mptcp_get_options(skb, &mp_opt);
++	mptcp_get_options(sk, skb, &mp_opt);
++
++	/* The subflow can be in close state only if check_fully_established()
++	 * just sent a reset. If so, tell the caller to ignore the current packet.
++	 */
+ 	if (!check_fully_established(msk, sk, subflow, skb, &mp_opt))
+-		return;
++		return sk->sk_state != TCP_CLOSE;
+ 
+ 	if (mp_opt.fastclose &&
+ 	    msk->local_key == mp_opt.rcvr_key) {
+@@ -1054,7 +1060,7 @@ void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
+ 	}
+ 
+ 	if (!mp_opt.dss)
+-		return;
++		return true;
+ 
+ 	/* we can't wait for recvmsg() to update the ack_seq, otherwise
+ 	 * monodirectional flows will get stuck
+@@ -1073,12 +1079,12 @@ void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
+ 		    schedule_work(&msk->work))
+ 			sock_hold(subflow->conn);
+ 
+-		return;
++		return true;
+ 	}
+ 
+ 	mpext = skb_ext_add(skb, SKB_EXT_MPTCP);
+ 	if (!mpext)
+-		return;
++		return true;
+ 
+ 	memset(mpext, 0, sizeof(*mpext));
+ 
+@@ -1103,6 +1109,8 @@ void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
+ 		mpext->data_len = mp_opt.data_len;
+ 		mpext->use_map = 1;
+ 	}
++
++	return true;
+ }
+ 
+ static void mptcp_set_rwin(const struct tcp_sock *tp)
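
Because check_fully_established() can now tear a subflow down while
options are being parsed, mptcp_incoming_options() reports through its
return value whether the packet is still worth processing. A toy
illustration of that contract, with names invented for the sketch:

#include <stdbool.h>
#include <stdio.h>

enum state { ESTABLISHED, CLOSED };
struct conn { enum state st; };

/* Returns false when parsing tore the connection down, so the caller
 * knows to discard the packet instead of acting on it. */
static bool parse_options(struct conn *c, int fatal_option)
{
	if (fatal_option) {
		c->st = CLOSED;		/* analogous to sending a reset */
		return false;
	}
	return true;
}

int main(void)
{
	struct conn c = { ESTABLISHED };

	if (!parse_options(&c, 1))
		printf("packet ignored, conn state=%d\n", c.st);
	return 0;
}
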
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 3f5d90a20235a..fce1d057d19eb 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -540,6 +540,7 @@ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
+ 	subflow = list_first_entry_or_null(&msk->conn_list, typeof(*subflow), node);
+ 	if (subflow) {
+ 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
++		bool slow;
+ 
+ 		spin_unlock_bh(&msk->pm.lock);
+ 		pr_debug("send ack for %s%s%s",
+@@ -547,9 +548,9 @@ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
+ 			 mptcp_pm_should_add_signal_ipv6(msk) ? " [ipv6]" : "",
+ 			 mptcp_pm_should_add_signal_port(msk) ? " [port]" : "");
+ 
+-		lock_sock(ssk);
++		slow = lock_sock_fast(ssk);
+ 		tcp_send_ack(ssk);
+-		release_sock(ssk);
++		unlock_sock_fast(ssk, slow);
+ 		spin_lock_bh(&msk->pm.lock);
+ 	}
+ }
+@@ -566,6 +567,7 @@ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+ 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+ 		struct sock *sk = (struct sock *)msk;
+ 		struct mptcp_addr_info local;
++		bool slow;
+ 
+ 		local_address((struct sock_common *)ssk, &local);
+ 		if (!addresses_equal(&local, addr, addr->port))
+@@ -578,9 +580,9 @@ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+ 
+ 		spin_unlock_bh(&msk->pm.lock);
+ 		pr_debug("send ack for mp_prio");
+-		lock_sock(ssk);
++		slow = lock_sock_fast(ssk);
+ 		tcp_send_ack(ssk);
+-		release_sock(ssk);
++		unlock_sock_fast(ssk, slow);
+ 		spin_lock_bh(&msk->pm.lock);
+ 
+ 		return 0;
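
lock_sock_fast() lets a short critical section skip the heavyweight
socket-lock dance when no one else owns the lock, at the price of
carrying a flag to the matching unlock_sock_fast(). A very rough
pthread analogue of that carry-the-flag idiom (note the kernel's flag
has the opposite sense: true means the slow path was taken):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;

static bool lock_fast(void)
{
	if (pthread_mutex_trylock(&mu) == 0)
		return true;		/* fast path */
	pthread_mutex_lock(&mu);	/* slow path: full blocking lock */
	return false;
}

/* The flag must travel from lock to unlock, like the slow variable
 * threaded through the hunks above. */
static void unlock_fast(bool fast)
{
	(void)fast;	/* both paths release the same mutex in this toy */
	pthread_mutex_unlock(&mu);
}

int main(void)
{
	bool fast = lock_fast();

	printf("took the %s path\n", fast ? "fast" : "slow");
	unlock_fast(fast);
	return 0;
}
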
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 8ead550df8b1e..dde68da0c1f93 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -424,56 +424,55 @@ static void mptcp_send_ack(struct mptcp_sock *msk)
+ 
+ 	mptcp_for_each_subflow(msk, subflow) {
+ 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
++		bool slow;
+ 
+-		lock_sock(ssk);
++		slow = lock_sock_fast(ssk);
+ 		if (tcp_can_send_ack(ssk))
+ 			tcp_send_ack(ssk);
+-		release_sock(ssk);
++		unlock_sock_fast(ssk, slow);
+ 	}
+ }
+ 
+-static bool mptcp_subflow_cleanup_rbuf(struct sock *ssk)
++static void mptcp_subflow_cleanup_rbuf(struct sock *ssk)
+ {
+-	int ret;
++	bool slow;
+ 
+-	lock_sock(ssk);
+-	ret = tcp_can_send_ack(ssk);
+-	if (ret)
++	slow = lock_sock_fast(ssk);
++	if (tcp_can_send_ack(ssk))
+ 		tcp_cleanup_rbuf(ssk, 1);
+-	release_sock(ssk);
+-	return ret;
++	unlock_sock_fast(ssk, slow);
++}
++
++static bool mptcp_subflow_could_cleanup(const struct sock *ssk, bool rx_empty)
++{
++	const struct inet_connection_sock *icsk = inet_csk(ssk);
++	u8 ack_pending = READ_ONCE(icsk->icsk_ack.pending);
++	const struct tcp_sock *tp = tcp_sk(ssk);
++
++	return (ack_pending & ICSK_ACK_SCHED) &&
++		((READ_ONCE(tp->rcv_nxt) - READ_ONCE(tp->rcv_wup) >
++		  READ_ONCE(icsk->icsk_ack.rcv_mss)) ||
++		 (rx_empty && ack_pending &
++			      (ICSK_ACK_PUSHED2 | ICSK_ACK_PUSHED)));
+ }
+ 
+ static void mptcp_cleanup_rbuf(struct mptcp_sock *msk)
+ {
+-	struct sock *ack_hint = READ_ONCE(msk->ack_hint);
+ 	int old_space = READ_ONCE(msk->old_wspace);
+ 	struct mptcp_subflow_context *subflow;
+ 	struct sock *sk = (struct sock *)msk;
+-	bool cleanup;
++	int space =  __mptcp_space(sk);
++	bool cleanup, rx_empty;
+ 
+-	/* this is a simple superset of what tcp_cleanup_rbuf() implements
+-	 * so that we don't have to acquire the ssk socket lock most of the time
+-	 * to do actually nothing
+-	 */
+-	cleanup = __mptcp_space(sk) - old_space >= max(0, old_space);
+-	if (!cleanup)
+-		return;
++	cleanup = (space > 0) && (space >= (old_space << 1));
++	rx_empty = !__mptcp_rmem(sk);
+ 
+-	/* if the hinted ssk is still active, try to use it */
+-	if (likely(ack_hint)) {
+-		mptcp_for_each_subflow(msk, subflow) {
+-			struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
++	mptcp_for_each_subflow(msk, subflow) {
++		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+ 
+-			if (ack_hint == ssk && mptcp_subflow_cleanup_rbuf(ssk))
+-				return;
+-		}
++		if (cleanup || mptcp_subflow_could_cleanup(ssk, rx_empty))
++			mptcp_subflow_cleanup_rbuf(ssk);
+ 	}
+-
+-	/* otherwise pick the first active subflow */
+-	mptcp_for_each_subflow(msk, subflow)
+-		if (mptcp_subflow_cleanup_rbuf(mptcp_subflow_tcp_sock(subflow)))
+-			return;
+ }
+ 
+ static bool mptcp_check_data_fin(struct sock *sk)
+@@ -618,7 +617,6 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
+ 			break;
+ 		}
+ 	} while (more_data_avail);
+-	WRITE_ONCE(msk->ack_hint, ssk);
+ 
+ 	*bytes += moved;
+ 	return done;
+@@ -716,8 +714,10 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
+ 		sk_rbuf = ssk_rbuf;
+ 
+ 	/* over limit? can't append more skbs to msk, Also, no need to wake-up*/
+-	if (atomic_read(&sk->sk_rmem_alloc) > sk_rbuf)
++	if (__mptcp_rmem(sk) > sk_rbuf) {
++		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RCVPRUNED);
+ 		return;
++	}
+ 
+ 	/* Wake-up the reader only for in-sequence data */
+ 	mptcp_data_lock(sk);
+@@ -1801,7 +1801,7 @@ static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk,
+ 		if (!(flags & MSG_PEEK)) {
+ 			/* we will bulk release the skb memory later */
+ 			skb->destructor = NULL;
+-			msk->rmem_released += skb->truesize;
++			WRITE_ONCE(msk->rmem_released, msk->rmem_released + skb->truesize);
+ 			__skb_unlink(skb, &msk->receive_queue);
+ 			__kfree_skb(skb);
+ 		}
+@@ -1920,7 +1920,7 @@ static void __mptcp_update_rmem(struct sock *sk)
+ 
+ 	atomic_sub(msk->rmem_released, &sk->sk_rmem_alloc);
+ 	sk_mem_uncharge(sk, msk->rmem_released);
+-	msk->rmem_released = 0;
++	WRITE_ONCE(msk->rmem_released, 0);
+ }
+ 
+ static void __mptcp_splice_receive_queue(struct sock *sk)
+@@ -1953,7 +1953,6 @@ static bool __mptcp_move_skbs(struct mptcp_sock *msk)
+ 		__mptcp_update_rmem(sk);
+ 		done = __mptcp_move_skbs_from_subflow(msk, ssk, &moved);
+ 		mptcp_data_unlock(sk);
+-		tcp_cleanup_rbuf(ssk, moved);
+ 
+ 		if (unlikely(ssk->sk_err))
+ 			__mptcp_error_report(sk);
+@@ -1969,7 +1968,6 @@ static bool __mptcp_move_skbs(struct mptcp_sock *msk)
+ 		ret |= __mptcp_ofo_queue(msk);
+ 		__mptcp_splice_receive_queue(sk);
+ 		mptcp_data_unlock(sk);
+-		mptcp_cleanup_rbuf(msk);
+ 	}
+ 	if (ret)
+ 		mptcp_check_data_fin((struct sock *)msk);
+@@ -2214,9 +2212,6 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
+ 	if (ssk == msk->last_snd)
+ 		msk->last_snd = NULL;
+ 
+-	if (ssk == msk->ack_hint)
+-		msk->ack_hint = NULL;
+-
+ 	if (ssk == msk->first)
+ 		msk->first = NULL;
+ 
+@@ -2288,13 +2283,14 @@ static void mptcp_check_fastclose(struct mptcp_sock *msk)
+ 
+ 	list_for_each_entry_safe(subflow, tmp, &msk->conn_list, node) {
+ 		struct sock *tcp_sk = mptcp_subflow_tcp_sock(subflow);
++		bool slow;
+ 
+-		lock_sock(tcp_sk);
++		slow = lock_sock_fast(tcp_sk);
+ 		if (tcp_sk->sk_state != TCP_CLOSE) {
+ 			tcp_send_active_reset(tcp_sk, GFP_ATOMIC);
+ 			tcp_set_state(tcp_sk, TCP_CLOSE);
+ 		}
+-		release_sock(tcp_sk);
++		unlock_sock_fast(tcp_sk, slow);
+ 	}
+ 
+ 	inet_sk_state_store(sk, TCP_CLOSE);
+@@ -2426,11 +2422,10 @@ static int __mptcp_init_sock(struct sock *sk)
+ 	msk->out_of_order_queue = RB_ROOT;
+ 	msk->first_pending = NULL;
+ 	msk->wmem_reserved = 0;
+-	msk->rmem_released = 0;
++	WRITE_ONCE(msk->rmem_released, 0);
+ 	msk->tx_pending_data = 0;
+ 	msk->size_goal_cache = TCP_BASE_MSS;
+ 
+-	msk->ack_hint = NULL;
+ 	msk->first = NULL;
+ 	inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;
+ 
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 7b634568f49cf..dc5b71de0a9a6 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -236,7 +236,6 @@ struct mptcp_sock {
+ 	bool		rcv_fastclose;
+ 	bool		use_64bit_ack; /* Set when we received a 64-bit DSN */
+ 	spinlock_t	join_list_lock;
+-	struct sock	*ack_hint;
+ 	struct work_struct work;
+ 	struct sk_buff  *ooo_last_skb;
+ 	struct rb_root  out_of_order_queue;
+@@ -291,9 +290,17 @@ static inline struct mptcp_sock *mptcp_sk(const struct sock *sk)
+ 	return (struct mptcp_sock *)sk;
+ }
+ 
++/* the msk socket doesn't use the backlog; also account for the
++ * bulk-freed memory
++ */
++static inline int __mptcp_rmem(const struct sock *sk)
++{
++	return atomic_read(&sk->sk_rmem_alloc) - READ_ONCE(mptcp_sk(sk)->rmem_released);
++}
++
+ static inline int __mptcp_space(const struct sock *sk)
+ {
+-	return tcp_space(sk) + READ_ONCE(mptcp_sk(sk)->rmem_released);
++	return tcp_win_from_space(sk, READ_ONCE(sk->sk_rcvbuf) - __mptcp_rmem(sk));
+ }
+ 
+ static inline struct mptcp_data_frag *mptcp_send_head(const struct sock *sk)
+@@ -576,7 +583,8 @@ int __init mptcp_proto_v6_init(void);
+ struct sock *mptcp_sk_clone(const struct sock *sk,
+ 			    const struct mptcp_options_received *mp_opt,
+ 			    struct request_sock *req);
+-void mptcp_get_options(const struct sk_buff *skb,
++void mptcp_get_options(const struct sock *sk,
++		       const struct sk_buff *skb,
+ 		       struct mptcp_options_received *mp_opt);
+ 
+ void mptcp_finish_connect(struct sock *sk);
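
The new __mptcp_rmem() subtracts memory queued for bulk release from
what is still charged to the socket, and __mptcp_space() derives the
advertised space from the receive buffer instead of tcp_space(). The
arithmetic in isolation, with made-up numbers and a crude stand-in for
tcp_win_from_space():

#include <stdio.h>

static int rmem(int rmem_alloc, int rmem_released)
{
	return rmem_alloc - rmem_released;	/* memory really in use */
}

static int space(int rcvbuf, int rmem_alloc, int rmem_released)
{
	int free_bytes = rcvbuf - rmem(rmem_alloc, rmem_released);

	return free_bytes / 2;	/* placeholder for tcp_win_from_space() */
}

int main(void)
{
	printf("space=%d\n", space(256 * 1024, 96 * 1024, 32 * 1024));
	return 0;
}
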
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index cbc452d0901ec..78e787ef8fff0 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -150,7 +150,7 @@ static int subflow_check_req(struct request_sock *req,
+ 		return -EINVAL;
+ #endif
+ 
+-	mptcp_get_options(skb, &mp_opt);
++	mptcp_get_options(sk_listener, skb, &mp_opt);
+ 
+ 	if (mp_opt.mp_capable) {
+ 		SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_MPCAPABLEPASSIVE);
+@@ -212,11 +212,6 @@ again:
+ 				 ntohs(inet_sk(sk_listener)->inet_sport),
+ 				 ntohs(inet_sk((struct sock *)subflow_req->msk)->inet_sport));
+ 			if (!mptcp_pm_sport_in_anno_list(subflow_req->msk, sk_listener)) {
+-				sock_put((struct sock *)subflow_req->msk);
+-				mptcp_token_destroy_request(req);
+-				tcp_request_sock_ops.destructor(req);
+-				subflow_req->msk = NULL;
+-				subflow_req->mp_join = 0;
+ 				SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_MISMATCHPORTSYNRX);
+ 				return -EPERM;
+ 			}
+@@ -228,6 +223,8 @@ again:
+ 		if (unlikely(req->syncookie)) {
+ 			if (mptcp_can_accept_new_subflow(subflow_req->msk))
+ 				subflow_init_req_cookie_join_save(subflow_req, skb);
++			else
++				return -EPERM;
+ 		}
+ 
+ 		pr_debug("token=%u, remote_nonce=%u msk=%p", subflow_req->token,
+@@ -247,7 +244,7 @@ int mptcp_subflow_init_cookie_req(struct request_sock *req,
+ 	int err;
+ 
+ 	subflow_init_req(req, sk_listener);
+-	mptcp_get_options(skb, &mp_opt);
++	mptcp_get_options(sk_listener, skb, &mp_opt);
+ 
+ 	if (mp_opt.mp_capable && mp_opt.mp_join)
+ 		return -EINVAL;
+@@ -267,9 +264,7 @@ int mptcp_subflow_init_cookie_req(struct request_sock *req,
+ 		if (!mptcp_token_join_cookie_init_state(subflow_req, skb))
+ 			return -EINVAL;
+ 
+-		if (mptcp_can_accept_new_subflow(subflow_req->msk))
+-			subflow_req->mp_join = 1;
+-
++		subflow_req->mp_join = 1;
+ 		subflow_req->ssn_offset = TCP_SKB_CB(skb)->seq - 1;
+ 	}
+ 
+@@ -408,7 +403,7 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
+ 	subflow->ssn_offset = TCP_SKB_CB(skb)->seq;
+ 	pr_debug("subflow=%p synack seq=%x", subflow, subflow->ssn_offset);
+ 
+-	mptcp_get_options(skb, &mp_opt);
++	mptcp_get_options(sk, skb, &mp_opt);
+ 	if (subflow->request_mptcp) {
+ 		if (!mp_opt.mp_capable) {
+ 			MPTCP_INC_STATS(sock_net(sk),
+@@ -655,7 +650,7 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ 		 * reordered MPC will cause fallback, but we don't have other
+ 		 * options.
+ 		 */
+-		mptcp_get_options(skb, &mp_opt);
++		mptcp_get_options(sk, skb, &mp_opt);
+ 		if (!mp_opt.mp_capable) {
+ 			fallback = true;
+ 			goto create_child;
+@@ -665,7 +660,7 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ 		if (!new_msk)
+ 			fallback = true;
+ 	} else if (subflow_req->mp_join) {
+-		mptcp_get_options(skb, &mp_opt);
++		mptcp_get_options(sk, skb, &mp_opt);
+ 		if (!mp_opt.mp_join || !subflow_hmac_valid(req, &mp_opt) ||
+ 		    !mptcp_can_accept_new_subflow(subflow_req->msk)) {
+ 			SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINACKMAC);
+diff --git a/net/mptcp/syncookies.c b/net/mptcp/syncookies.c
+index abe0fd0997467..37127781aee98 100644
+--- a/net/mptcp/syncookies.c
++++ b/net/mptcp/syncookies.c
+@@ -37,7 +37,21 @@ static spinlock_t join_entry_locks[COOKIE_JOIN_SLOTS] __cacheline_aligned_in_smp
+ 
+ static u32 mptcp_join_entry_hash(struct sk_buff *skb, struct net *net)
+ {
+-	u32 i = skb_get_hash(skb) ^ net_hash_mix(net);
++	static u32 mptcp_join_hash_secret __read_mostly;
++	struct tcphdr *th = tcp_hdr(skb);
++	u32 seq, i;
++
++	net_get_random_once(&mptcp_join_hash_secret,
++			    sizeof(mptcp_join_hash_secret));
++
++	if (th->syn)
++		seq = TCP_SKB_CB(skb)->seq;
++	else
++		seq = TCP_SKB_CB(skb)->seq - 1;
++
++	i = jhash_3words(seq, net_hash_mix(net),
++			 (__force __u32)th->source << 16 | (__force __u32)th->dest,
++			 mptcp_join_hash_secret);
+ 
+ 	return i % ARRAY_SIZE(join_entries);
+ }
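
Hashing attacker-visible header fields together with a lazily
initialized random secret stops a remote peer from steering entries
into a bucket of its choosing. The idiom in a compact sketch; the
mixer below is a placeholder rather than jhash_3words(), and the lazy
init is not race-free the way net_get_random_once() is:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SLOTS 1024

static uint32_t mix(uint32_t a, uint32_t b, uint32_t c)
{
	a ^= b * 0x9e3779b9u;
	a ^= c + (a << 6) + (a >> 2);
	return a;
}

static uint32_t bucket(uint32_t seq, uint32_t ports)
{
	static uint32_t secret;

	if (!secret)	/* lazy init, like net_get_random_once() */
		secret = (uint32_t)random() | 1u;
	return mix(seq, ports, secret) % SLOTS;
}

int main(void)
{
	srandom((unsigned int)time(NULL));
	printf("bucket=%u\n", bucket(12345u, (80u << 16) | 443u));
	return 0;
}
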
+diff --git a/net/netrom/nr_timer.c b/net/netrom/nr_timer.c
+index 9115f8a7dd45b..a8da88db7893f 100644
+--- a/net/netrom/nr_timer.c
++++ b/net/netrom/nr_timer.c
+@@ -121,11 +121,9 @@ static void nr_heartbeat_expiry(struct timer_list *t)
+ 		   is accepted() it isn't 'dead' so doesn't get removed. */
+ 		if (sock_flag(sk, SOCK_DESTROY) ||
+ 		    (sk->sk_state == TCP_LISTEN && sock_flag(sk, SOCK_DEAD))) {
+-			sock_hold(sk);
+ 			bh_unlock_sock(sk);
+ 			nr_destroy_socket(sk);
+-			sock_put(sk);
+-			return;
++			goto out;
+ 		}
+ 		break;
+ 
+@@ -146,6 +144,8 @@ static void nr_heartbeat_expiry(struct timer_list *t)
+ 
+ 	nr_start_heartbeat(sk);
+ 	bh_unlock_sock(sk);
++out:
++	sock_put(sk);
+ }
+ 
+ static void nr_t2timer_expiry(struct timer_list *t)
+@@ -159,6 +159,7 @@ static void nr_t2timer_expiry(struct timer_list *t)
+ 		nr_enquiry_response(sk);
+ 	}
+ 	bh_unlock_sock(sk);
++	sock_put(sk);
+ }
+ 
+ static void nr_t4timer_expiry(struct timer_list *t)
+@@ -169,6 +170,7 @@ static void nr_t4timer_expiry(struct timer_list *t)
+ 	bh_lock_sock(sk);
+ 	nr_sk(sk)->condition &= ~NR_COND_PEER_RX_BUSY;
+ 	bh_unlock_sock(sk);
++	sock_put(sk);
+ }
+ 
+ static void nr_idletimer_expiry(struct timer_list *t)
+@@ -197,6 +199,7 @@ static void nr_idletimer_expiry(struct timer_list *t)
+ 		sock_set_flag(sk, SOCK_DEAD);
+ 	}
+ 	bh_unlock_sock(sk);
++	sock_put(sk);
+ }
+ 
+ static void nr_t1timer_expiry(struct timer_list *t)
+@@ -209,8 +212,7 @@ static void nr_t1timer_expiry(struct timer_list *t)
+ 	case NR_STATE_1:
+ 		if (nr->n2count == nr->n2) {
+ 			nr_disconnect(sk, ETIMEDOUT);
+-			bh_unlock_sock(sk);
+-			return;
++			goto out;
+ 		} else {
+ 			nr->n2count++;
+ 			nr_write_internal(sk, NR_CONNREQ);
+@@ -220,8 +222,7 @@ static void nr_t1timer_expiry(struct timer_list *t)
+ 	case NR_STATE_2:
+ 		if (nr->n2count == nr->n2) {
+ 			nr_disconnect(sk, ETIMEDOUT);
+-			bh_unlock_sock(sk);
+-			return;
++			goto out;
+ 		} else {
+ 			nr->n2count++;
+ 			nr_write_internal(sk, NR_DISCREQ);
+@@ -231,8 +232,7 @@ static void nr_t1timer_expiry(struct timer_list *t)
+ 	case NR_STATE_3:
+ 		if (nr->n2count == nr->n2) {
+ 			nr_disconnect(sk, ETIMEDOUT);
+-			bh_unlock_sock(sk);
+-			return;
++			goto out;
+ 		} else {
+ 			nr->n2count++;
+ 			nr_requeue_frames(sk);
+@@ -241,5 +241,7 @@ static void nr_t1timer_expiry(struct timer_list *t)
+ 	}
+ 
+ 	nr_start_t1timer(sk);
++out:
+ 	bh_unlock_sock(sk);
++	sock_put(sk);
+ }
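
With nr_start_t1timer() and friends holding a reference for the armed
timer, every expiry path, including the early bail-outs, must funnel
through a single exit that unlocks and drops it exactly once. A small
model of that goto-out shape, using plain counters instead of real
refcounting and locking:

#include <stdio.h>

struct sk { int refcnt; int locked; };

static void put(struct sk *s)    { s->refcnt--; }
static void lock(struct sk *s)   { s->locked = 1; }
static void unlock(struct sk *s) { s->locked = 0; }

static void timer_expiry(struct sk *s, int timed_out)
{
	lock(s);
	if (timed_out)
		goto out;	/* early exit still unlocks and puts */
	/* ... normal rearm work would go here ... */
out:
	unlock(s);
	put(s);		/* balances the hold taken when arming */
}

int main(void)
{
	struct sk s = { .refcnt = 1, .locked = 0 };

	timer_expiry(&s, 1);
	printf("refcnt=%d locked=%d\n", s.refcnt, s.locked);
	return 0;
}
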
+diff --git a/net/sched/act_skbmod.c b/net/sched/act_skbmod.c
+index 81a1c67335be6..8d17a543cc9fe 100644
+--- a/net/sched/act_skbmod.c
++++ b/net/sched/act_skbmod.c
+@@ -6,6 +6,7 @@
+ */
+ 
+ #include <linux/module.h>
++#include <linux/if_arp.h>
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/skbuff.h>
+@@ -33,6 +34,13 @@ static int tcf_skbmod_act(struct sk_buff *skb, const struct tc_action *a,
+ 	tcf_lastuse_update(&d->tcf_tm);
+ 	bstats_cpu_update(this_cpu_ptr(d->common.cpu_bstats), skb);
+ 
++	action = READ_ONCE(d->tcf_action);
++	if (unlikely(action == TC_ACT_SHOT))
++		goto drop;
++
++	if (!skb->dev || skb->dev->type != ARPHRD_ETHER)
++		return action;
++
+ 	/* XXX: if you are going to edit more fields beyond ethernet header
+ 	 * (example when you add IP header replacement or vlan swap)
+ 	 * then MAX_EDIT_LEN needs to change appropriately
+@@ -41,10 +49,6 @@ static int tcf_skbmod_act(struct sk_buff *skb, const struct tc_action *a,
+ 	if (unlikely(err)) /* best policy is to drop on the floor */
+ 		goto drop;
+ 
+-	action = READ_ONCE(d->tcf_action);
+-	if (unlikely(action == TC_ACT_SHOT))
+-		goto drop;
+-
+ 	p = rcu_dereference_bh(d->skbmod_p);
+ 	flags = p->flags;
+ 	if (flags & SKBMOD_F_DMAC)
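
The reordering in tcf_skbmod_act() runs the two cheap tests, the drop
verdict and the ARPHRD_ETHER check, before the potentially expensive
private copy. The same shape with invented packet and verdict types:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct pkt { char *data; int shared; };

static int act(struct pkt *p, int verdict_drop, int is_ether)
{
	if (verdict_drop)
		return -1;	/* cheap check first */
	if (!is_ether)
		return 0;	/* nothing we can edit, pass through */
	if (p->shared) {	/* expensive copy-on-write last */
		char *copy = strdup(p->data);

		if (!copy)
			return -1;
		p->data = copy;
		p->shared = 0;
	}
	p->data[0] = 'X';	/* the actual header edit */
	return 0;
}

int main(void)
{
	static char buf[] = "hdr";
	struct pkt p = { buf, 1 };

	printf("rc=%d data=%s\n", act(&p, 0, 1), p.data);
	if (!p.shared)
		free(p.data);
	return 0;
}
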
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index d73b5c5514a9f..e3e79e9bd7067 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -2904,7 +2904,7 @@ replay:
+ 		break;
+ 	case RTM_GETCHAIN:
+ 		err = tc_chain_notify(chain, skb, n->nlmsg_seq,
+-				      n->nlmsg_seq, n->nlmsg_type, true);
++				      n->nlmsg_flags, n->nlmsg_type, true);
+ 		if (err < 0)
+ 			NL_SET_ERR_MSG(extack, "Failed to send chain notify message");
+ 		break;
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index 5b274534264c2..e9a8a2c86bbdd 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -278,6 +278,8 @@ static int tcindex_filter_result_init(struct tcindex_filter_result *r,
+ 			     TCA_TCINDEX_POLICE);
+ }
+ 
++static void tcindex_free_perfect_hash(struct tcindex_data *cp);
++
+ static void tcindex_partial_destroy_work(struct work_struct *work)
+ {
+ 	struct tcindex_data *p = container_of(to_rcu_work(work),
+@@ -285,7 +287,8 @@ static void tcindex_partial_destroy_work(struct work_struct *work)
+ 					      rwork);
+ 
+ 	rtnl_lock();
+-	kfree(p->perfect);
++	if (p->perfect)
++		tcindex_free_perfect_hash(p);
+ 	kfree(p);
+ 	rtnl_unlock();
+ }
+diff --git a/net/sctp/auth.c b/net/sctp/auth.c
+index 6f8319b828b0d..fe74c5f956303 100644
+--- a/net/sctp/auth.c
++++ b/net/sctp/auth.c
+@@ -860,6 +860,8 @@ int sctp_auth_set_key(struct sctp_endpoint *ep,
+ 	if (replace) {
+ 		list_del_init(&shkey->key_list);
+ 		sctp_auth_shkey_release(shkey);
++		if (asoc && asoc->active_key_id == auth_key->sca_keynumber)
++			sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
+ 	}
+ 	list_add(&cur_key->key_list, sh_keys);
+ 
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index a79d193ff8720..dbd074f4d450a 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -4521,6 +4521,10 @@ static int sctp_setsockopt(struct sock *sk, int level, int optname,
+ 	}
+ 
+ 	if (optlen > 0) {
++		/* Trim it to the biggest size sctp sockopt may need if necessary */
++		optlen = min_t(unsigned int, optlen,
++			       PAGE_ALIGN(USHRT_MAX +
++					  sizeof(__u16) * sizeof(struct sctp_reset_streams)));
+ 		kopt = memdup_sockptr(optval, optlen);
+ 		if (IS_ERR(kopt))
+ 			return PTR_ERR(kopt);
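
Clamping optlen before memdup_sockptr() bounds the kernel allocation
no matter what length user space claims to pass. A user-space version
of the clamp-then-copy shape, with an arbitrary cap:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_OPT_LEN 4096u	/* made-up cap; the kernel computes its own */

static void *copy_opt(const void *user, unsigned int optlen)
{
	void *k;

	if (optlen > MAX_OPT_LEN)
		optlen = MAX_OPT_LEN;	/* hostile sizes get trimmed */
	k = malloc(optlen);
	if (k)
		memcpy(k, user, optlen);
	return k;
}

int main(void)
{
	char big[8192] = "option";
	char *k = copy_opt(big, sizeof(big));

	printf("%s\n", k ? k : "alloc failed");
	free(k);
	return 0;
}
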
+diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
+index 53e300f860bb4..33d0bdebbed81 100644
+--- a/samples/bpf/xdpsock_user.c
++++ b/samples/bpf/xdpsock_user.c
+@@ -96,6 +96,7 @@ static int opt_xsk_frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
+ static int opt_timeout = 1000;
+ static bool opt_need_wakeup = true;
+ static u32 opt_num_xsks = 1;
++static u32 prog_id;
+ static bool opt_busy_poll;
+ static bool opt_reduced_cap;
+ 
+@@ -461,6 +462,23 @@ static void *poller(void *arg)
+ 	return NULL;
+ }
+ 
++static void remove_xdp_program(void)
++{
++	u32 curr_prog_id = 0;
++
++	if (bpf_get_link_xdp_id(opt_ifindex, &curr_prog_id, opt_xdp_flags)) {
++		printf("bpf_get_link_xdp_id failed\n");
++		exit(EXIT_FAILURE);
++	}
++
++	if (prog_id == curr_prog_id)
++		bpf_set_link_xdp_fd(opt_ifindex, -1, opt_xdp_flags);
++	else if (!curr_prog_id)
++		printf("couldn't find a prog id on a given interface\n");
++	else
++		printf("program on interface changed, not removing\n");
++}
++
+ static void int_exit(int sig)
+ {
+ 	benchmark_done = true;
+@@ -471,6 +489,9 @@ static void __exit_with_error(int error, const char *file, const char *func,
+ {
+ 	fprintf(stderr, "%s:%s:%i: errno: %d/\"%s\"\n", file, func,
+ 		line, error, strerror(error));
++
++	if (opt_num_xsks > 1)
++		remove_xdp_program();
+ 	exit(EXIT_FAILURE);
+ }
+ 
+@@ -490,6 +511,9 @@ static void xdpsock_cleanup(void)
+ 		if (write(sock, &cmd, sizeof(int)) < 0)
+ 			exit_with_error(errno);
+ 	}
++
++	if (opt_num_xsks > 1)
++		remove_xdp_program();
+ }
+ 
+ static void swap_mac_addresses(void *data)
+@@ -857,6 +881,10 @@ static struct xsk_socket_info *xsk_configure_socket(struct xsk_umem_info *umem,
+ 	if (ret)
+ 		exit_with_error(-ret);
+ 
++	ret = bpf_get_link_xdp_id(opt_ifindex, &prog_id, opt_xdp_flags);
++	if (ret)
++		exit_with_error(-ret);
++
+ 	xsk->app_stats.rx_empty_polls = 0;
+ 	xsk->app_stats.fill_fail_polls = 0;
+ 	xsk->app_stats.copy_tx_sendtos = 0;
+diff --git a/scripts/Makefile.build b/scripts/Makefile.build
+index 34d257653fb47..c6bd62f518fff 100644
+--- a/scripts/Makefile.build
++++ b/scripts/Makefile.build
+@@ -388,7 +388,7 @@ ifeq ($(CONFIG_LTO_CLANG) $(CONFIG_MODVERSIONS),y y)
+       cmd_update_lto_symversions =					\
+ 	rm -f $@.symversions						\
+ 	$(foreach n, $(filter-out FORCE,$^),				\
+-		$(if $(wildcard $(n).symversions),			\
++		$(if $(shell test -s $(n).symversions && echo y),	\
+ 			; cat $(n).symversions >> $@.symversions))
+ else
+       cmd_update_lto_symversions = echo >/dev/null
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 8dbe86cf2e4f5..a9fd486089a7a 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -246,12 +246,18 @@ static bool hw_support_mmap(struct snd_pcm_substream *substream)
+ 	if (!(substream->runtime->hw.info & SNDRV_PCM_INFO_MMAP))
+ 		return false;
+ 
+-	if (substream->ops->mmap ||
+-	    (substream->dma_buffer.dev.type != SNDRV_DMA_TYPE_DEV &&
+-	     substream->dma_buffer.dev.type != SNDRV_DMA_TYPE_DEV_UC))
++	if (substream->ops->mmap)
+ 		return true;
+ 
+-	return dma_can_mmap(substream->dma_buffer.dev.dev);
++	switch (substream->dma_buffer.dev.type) {
++	case SNDRV_DMA_TYPE_UNKNOWN:
++		return false;
++	case SNDRV_DMA_TYPE_CONTINUOUS:
++	case SNDRV_DMA_TYPE_VMALLOC:
++		return true;
++	default:
++		return dma_can_mmap(substream->dma_buffer.dev.dev);
++	}
+ }
+ 
+ static int constrain_mask_params(struct snd_pcm_substream *substream,
+@@ -3057,9 +3063,14 @@ static int snd_pcm_ioctl_sync_ptr_compat(struct snd_pcm_substream *substream,
+ 		boundary = 0x7fffffff;
+ 	snd_pcm_stream_lock_irq(substream);
+ 	/* FIXME: we should consider the boundary for the sync from app */
+-	if (!(sflags & SNDRV_PCM_SYNC_PTR_APPL))
+-		control->appl_ptr = scontrol.appl_ptr;
+-	else
++	if (!(sflags & SNDRV_PCM_SYNC_PTR_APPL)) {
++		err = pcm_lib_apply_appl_ptr(substream,
++				scontrol.appl_ptr);
++		if (err < 0) {
++			snd_pcm_stream_unlock_irq(substream);
++			return err;
++		}
++	} else
+ 		scontrol.appl_ptr = control->appl_ptr % boundary;
+ 	if (!(sflags & SNDRV_PCM_SYNC_PTR_AVAIL_MIN))
+ 		control->avail_min = scontrol.avail_min;
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index d8be146793eee..c9d0ba353463b 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -319,6 +319,10 @@ static const struct config_entry config_table[] = {
+ 		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC,
+ 		.device = 0x4b55,
+ 	},
++	{
++		.flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC,
++		.device = 0x4b58,
++	},
+ #endif
+ 
+ /* Alder Lake */
+diff --git a/sound/isa/sb/sb16_csp.c b/sound/isa/sb/sb16_csp.c
+index c98ccd421a2ec..2cc30a004eb7b 100644
+--- a/sound/isa/sb/sb16_csp.c
++++ b/sound/isa/sb/sb16_csp.c
+@@ -814,6 +814,7 @@ static int snd_sb_csp_start(struct snd_sb_csp * p, int sample_width, int channel
+ 	mixR = snd_sbmixer_read(p->chip, SB_DSP4_PCM_DEV + 1);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV, mixL & 0x7);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV + 1, mixR & 0x7);
++	spin_unlock_irqrestore(&p->chip->mixer_lock, flags);
+ 
+ 	spin_lock(&p->chip->reg_lock);
+ 	set_mode_register(p->chip, 0xc0);	/* c0 = STOP */
+@@ -853,6 +854,7 @@ static int snd_sb_csp_start(struct snd_sb_csp * p, int sample_width, int channel
+ 	spin_unlock(&p->chip->reg_lock);
+ 
+ 	/* restore PCM volume */
++	spin_lock_irqsave(&p->chip->mixer_lock, flags);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV, mixL);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV + 1, mixR);
+ 	spin_unlock_irqrestore(&p->chip->mixer_lock, flags);
+@@ -878,6 +880,7 @@ static int snd_sb_csp_stop(struct snd_sb_csp * p)
+ 	mixR = snd_sbmixer_read(p->chip, SB_DSP4_PCM_DEV + 1);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV, mixL & 0x7);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV + 1, mixR & 0x7);
++	spin_unlock_irqrestore(&p->chip->mixer_lock, flags);
+ 
+ 	spin_lock(&p->chip->reg_lock);
+ 	if (p->running & SNDRV_SB_CSP_ST_QSOUND) {
+@@ -892,6 +895,7 @@ static int snd_sb_csp_stop(struct snd_sb_csp * p)
+ 	spin_unlock(&p->chip->reg_lock);
+ 
+ 	/* restore PCM volume */
++	spin_lock_irqsave(&p->chip->mixer_lock, flags);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV, mixL);
+ 	snd_sbmixer_write(p->chip, SB_DSP4_PCM_DEV + 1, mixR);
+ 	spin_unlock_irqrestore(&p->chip->mixer_lock, flags);
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 4b2cc8cb55c49..84c088912b3c0 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1940,6 +1940,7 @@ static int hdmi_add_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
+ static const struct snd_pci_quirk force_connect_list[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x870f, "HP", 1),
+ 	SND_PCI_QUIRK(0x103c, 0x871a, "HP", 1),
++	SND_PCI_QUIRK(0x1462, 0xec94, "MS-7C94", 1),
+ 	{}
+ };
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 1ca320fef670f..19a3ae79c0012 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8566,6 +8566,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3151, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
+ 	SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ 	SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP),
+diff --git a/sound/soc/codecs/rt5631.c b/sound/soc/codecs/rt5631.c
+index 3000bc128b5bc..38356ea2bd6ef 100644
+--- a/sound/soc/codecs/rt5631.c
++++ b/sound/soc/codecs/rt5631.c
+@@ -1695,6 +1695,8 @@ static const struct regmap_config rt5631_regmap_config = {
+ 	.reg_defaults = rt5631_reg,
+ 	.num_reg_defaults = ARRAY_SIZE(rt5631_reg),
+ 	.cache_type = REGCACHE_RBTREE,
++	.use_single_read = true,
++	.use_single_write = true,
+ };
+ 
+ static int rt5631_i2c_probe(struct i2c_client *i2c,
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 3dc119daf2f67..cef05d81c39b8 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -1213,7 +1213,7 @@ static int wm_coeff_tlv_get(struct snd_kcontrol *kctl,
+ 
+ 	mutex_lock(&ctl->dsp->pwr_lock);
+ 
+-	ret = wm_coeff_read_ctrl_raw(ctl, ctl->cache, size);
++	ret = wm_coeff_read_ctrl(ctl, ctl->cache, size);
+ 
+ 	if (!ret && copy_to_user(bytes, ctl->cache, size))
+ 		ret = -EFAULT;
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 46513bb979044..d1c570ca21ea7 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1015,6 +1015,7 @@ out:
+ 
+ static int soc_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
+ {
++	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
+ 	int ret = -EINVAL, _ret = 0;
+ 	int rollback = 0;
+ 
+@@ -1055,14 +1056,23 @@ start_err:
+ 	case SNDRV_PCM_TRIGGER_STOP:
+ 	case SNDRV_PCM_TRIGGER_SUSPEND:
+ 	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+-		ret = snd_soc_pcm_dai_trigger(substream, cmd, rollback);
+-		if (ret < 0)
+-			break;
++		if (rtd->dai_link->stop_dma_first) {
++			ret = snd_soc_pcm_component_trigger(substream, cmd, rollback);
++			if (ret < 0)
++				break;
+ 
+-		ret = snd_soc_pcm_component_trigger(substream, cmd, rollback);
+-		if (ret < 0)
+-			break;
++			ret = snd_soc_pcm_dai_trigger(substream, cmd, rollback);
++			if (ret < 0)
++				break;
++		} else {
++			ret = snd_soc_pcm_dai_trigger(substream, cmd, rollback);
++			if (ret < 0)
++				break;
+ 
++			ret = snd_soc_pcm_component_trigger(substream, cmd, rollback);
++			if (ret < 0)
++				break;
++		}
+ 		ret = snd_soc_link_trigger(substream, cmd, rollback);
+ 		break;
+ 	}
+diff --git a/sound/soc/sof/intel/pci-tgl.c b/sound/soc/sof/intel/pci-tgl.c
+index 88c3bf404dd7a..d1fd0a330554c 100644
+--- a/sound/soc/sof/intel/pci-tgl.c
++++ b/sound/soc/sof/intel/pci-tgl.c
+@@ -89,6 +89,7 @@ static const struct sof_dev_desc adls_desc = {
+ static const struct sof_dev_desc adl_desc = {
+ 	.machines               = snd_soc_acpi_intel_adl_machines,
+ 	.alt_machines           = snd_soc_acpi_intel_adl_sdw_machines,
++	.use_acpi_target_states = true,
+ 	.resindex_lpe_base      = 0,
+ 	.resindex_pcicfg_base   = -1,
+ 	.resindex_imr_base      = -1,
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 30b3e128e28d8..f4cdaf1ba44ac 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -3295,7 +3295,15 @@ static void snd_usb_mixer_dump_cval(struct snd_info_buffer *buffer,
+ {
+ 	struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list);
+ 	static const char * const val_types[] = {
+-		"BOOLEAN", "INV_BOOLEAN", "S8", "U8", "S16", "U16", "S32", "U32",
++		[USB_MIXER_BOOLEAN] = "BOOLEAN",
++		[USB_MIXER_INV_BOOLEAN] = "INV_BOOLEAN",
++		[USB_MIXER_S8] = "S8",
++		[USB_MIXER_U8] = "U8",
++		[USB_MIXER_S16] = "S16",
++		[USB_MIXER_U16] = "U16",
++		[USB_MIXER_S32] = "S32",
++		[USB_MIXER_U32] = "U32",
++		[USB_MIXER_BESPOKEN] = "BESPOKEN",
+ 	};
+ 	snd_iprintf(buffer, "    Info: id=%i, control=%i, cmask=0x%x, "
+ 			    "channels=%i, type=\"%s\"\n", cval->head.id,
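
Designated initializers tie each string to its enumerator, so the
table cannot skew the way the old positional list did when
USB_MIXER_BESPOKEN was added. The idiom in miniature, with a shortened
enum:

#include <stdio.h>

enum val_type { VT_BOOL, VT_S8, VT_U8, VT_MAX };

static const char *const names[] = {
	[VT_BOOL] = "BOOLEAN",
	[VT_S8]   = "S8",
	[VT_U8]   = "U8",
};

int main(void)
{
	for (int t = 0; t < VT_MAX; t++)
		printf("%d -> %s\n", t, names[t] ? names[t] : "?");
	return 0;
}
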
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 8b8bee3c3dd63..e7accd87e0632 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1897,6 +1897,9 @@ static const struct registration_quirk registration_quirks[] = {
+ 	REG_QUIRK_ENTRY(0x0951, 0x16d8, 2),	/* Kingston HyperX AMP */
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ed, 2),	/* Kingston HyperX Cloud Alpha S */
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ea, 2),	/* Kingston HyperX Cloud Flight S */
++	REG_QUIRK_ENTRY(0x0ecb, 0x1f46, 2),	/* JBL Quantum 600 */
++	REG_QUIRK_ENTRY(0x0ecb, 0x2039, 2),	/* JBL Quantum 400 */
++	REG_QUIRK_ENTRY(0x0ecb, 0x203e, 2),	/* JBL Quantum 800 */
+ 	{ 0 }					/* terminator */
+ };
+ 
+diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
+index 1828bba19020d..dc6daa193557a 100644
+--- a/tools/bpf/bpftool/common.c
++++ b/tools/bpf/bpftool/common.c
+@@ -222,6 +222,11 @@ int mount_bpffs_for_pin(const char *name)
+ 	int err = 0;
+ 
+ 	file = malloc(strlen(name) + 1);
++	if (!file) {
++		p_err("mem alloc failed");
++		return -1;
++	}
++
+ 	strcpy(file, name);
+ 	dir = dirname(file);
+ 
+diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
+index ddccc0eb73907..6b64cb280b2ec 100644
+--- a/tools/perf/builtin-inject.c
++++ b/tools/perf/builtin-inject.c
+@@ -358,9 +358,10 @@ static struct dso *findnew_dso(int pid, int tid, const char *filename,
+ 		dso = machine__findnew_dso_id(machine, filename, id);
+ 	}
+ 
+-	if (dso)
++	if (dso) {
++		nsinfo__put(dso->nsinfo);
+ 		dso->nsinfo = nsi;
+-	else
++	} else
+ 		nsinfo__put(nsi);
+ 
+ 	thread__put(thread);
+@@ -907,8 +908,10 @@ int cmd_inject(int argc, const char **argv)
+ 
+ 	data.path = inject.input_name;
+ 	inject.session = perf_session__new(&data, inject.output.is_pipe, &inject.tool);
+-	if (IS_ERR(inject.session))
+-		return PTR_ERR(inject.session);
++	if (IS_ERR(inject.session)) {
++		ret = PTR_ERR(inject.session);
++		goto out_close_output;
++	}
+ 
+ 	if (zstd_init(&(inject.session->zstd_data), 0) < 0)
+ 		pr_warning("Decompression initialization failed.\n");
+@@ -950,5 +953,7 @@ int cmd_inject(int argc, const char **argv)
+ out_delete:
+ 	zstd_fini(&(inject.session->zstd_data));
+ 	perf_session__delete(inject.session);
++out_close_output:
++	perf_data__close(&inject.output);
+ 	return ret;
+ }
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 36f9ccfeb38a8..ce420f910ff81 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -1167,6 +1167,8 @@ int cmd_report(int argc, const char **argv)
+ 		.annotation_opts	 = annotation__default_options,
+ 		.skip_empty		 = true,
+ 	};
++	char *sort_order_help = sort_help("sort by key(s):");
++	char *field_order_help = sort_help("output field(s): overhead period sample ");
+ 	const struct option options[] = {
+ 	OPT_STRING('i', "input", &input_name, "file",
+ 		    "input file name"),
+@@ -1201,9 +1203,9 @@ int cmd_report(int argc, const char **argv)
+ 	OPT_BOOLEAN(0, "header-only", &report.header_only,
+ 		    "Show only data header."),
+ 	OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
+-		   sort_help("sort by key(s):")),
++		   sort_order_help),
+ 	OPT_STRING('F', "fields", &field_order, "key[,keys...]",
+-		   sort_help("output field(s): overhead period sample ")),
++		   field_order_help),
+ 	OPT_BOOLEAN(0, "show-cpu-utilization", &symbol_conf.show_cpu_utilization,
+ 		    "Show sample percentage for different cpu modes"),
+ 	OPT_BOOLEAN_FLAG(0, "showcpuutilization", &symbol_conf.show_cpu_utilization,
+@@ -1336,11 +1338,11 @@ int cmd_report(int argc, const char **argv)
+ 	char sort_tmp[128];
+ 
+ 	if (ret < 0)
+-		return ret;
++		goto exit;
+ 
+ 	ret = perf_config(report__config, &report);
+ 	if (ret)
+-		return ret;
++		goto exit;
+ 
+ 	argc = parse_options(argc, argv, options, report_usage, 0);
+ 	if (argc) {
+@@ -1354,8 +1356,10 @@ int cmd_report(int argc, const char **argv)
+ 		report.symbol_filter_str = argv[0];
+ 	}
+ 
+-	if (annotate_check_args(&report.annotation_opts) < 0)
+-		return -EINVAL;
++	if (annotate_check_args(&report.annotation_opts) < 0) {
++		ret = -EINVAL;
++		goto exit;
++	}
+ 
+ 	if (report.mmaps_mode)
+ 		report.tasks_mode = true;
+@@ -1369,12 +1373,14 @@ int cmd_report(int argc, const char **argv)
+ 	if (symbol_conf.vmlinux_name &&
+ 	    access(symbol_conf.vmlinux_name, R_OK)) {
+ 		pr_err("Invalid file: %s\n", symbol_conf.vmlinux_name);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto exit;
+ 	}
+ 	if (symbol_conf.kallsyms_name &&
+ 	    access(symbol_conf.kallsyms_name, R_OK)) {
+ 		pr_err("Invalid file: %s\n", symbol_conf.kallsyms_name);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto exit;
+ 	}
+ 
+ 	if (report.inverted_callchain)
+@@ -1398,12 +1404,14 @@ int cmd_report(int argc, const char **argv)
+ 
+ repeat:
+ 	session = perf_session__new(&data, false, &report.tool);
+-	if (IS_ERR(session))
+-		return PTR_ERR(session);
++	if (IS_ERR(session)) {
++		ret = PTR_ERR(session);
++		goto exit;
++	}
+ 
+ 	ret = evswitch__init(&report.evswitch, session->evlist, stderr);
+ 	if (ret)
+-		return ret;
++		goto exit;
+ 
+ 	if (zstd_init(&(session->zstd_data), 0) < 0)
+ 		pr_warning("Decompression initialization failed. Reported data may be incomplete.\n");
+@@ -1638,5 +1646,8 @@ error:
+ 
+ 	zstd_fini(&(session->zstd_data));
+ 	perf_session__delete(session);
++exit:
++	free(sort_order_help);
++	free(field_order_help);
+ 	return ret;
+ }
+diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
+index 954ce2f594e96..3e5b7faf0c16e 100644
+--- a/tools/perf/builtin-sched.c
++++ b/tools/perf/builtin-sched.c
+@@ -3335,6 +3335,16 @@ static void setup_sorting(struct perf_sched *sched, const struct option *options
+ 	sort_dimension__add("pid", &sched->cmp_pid);
+ }
+ 
++static bool schedstat_events_exposed(void)
++{
++	/*
++	 * Select "sched:sched_stat_wait" event to check
++	 * whether schedstat tracepoints are exposed.
++	 */
++	return IS_ERR(trace_event__tp_format("sched", "sched_stat_wait")) ?
++		false : true;
++}
++
+ static int __cmd_record(int argc, const char **argv)
+ {
+ 	unsigned int rec_argc, i, j;
+@@ -3346,21 +3356,33 @@ static int __cmd_record(int argc, const char **argv)
+ 		"-m", "1024",
+ 		"-c", "1",
+ 		"-e", "sched:sched_switch",
+-		"-e", "sched:sched_stat_wait",
+-		"-e", "sched:sched_stat_sleep",
+-		"-e", "sched:sched_stat_iowait",
+ 		"-e", "sched:sched_stat_runtime",
+ 		"-e", "sched:sched_process_fork",
+ 		"-e", "sched:sched_wakeup_new",
+ 		"-e", "sched:sched_migrate_task",
+ 	};
++
++	/*
++	 * The tracepoints trace_sched_stat_{wait, sleep, iowait}
++	 * are not exposed to the user if CONFIG_SCHEDSTATS is not set;
++	 * to keep "perf sched record" from failing, decide whether to
++	 * record schedstat events based on whether they are exposed.
++	 */
++	const char * const schedstat_args[] = {
++		"-e", "sched:sched_stat_wait",
++		"-e", "sched:sched_stat_sleep",
++		"-e", "sched:sched_stat_iowait",
++	};
++	unsigned int schedstat_argc = schedstat_events_exposed() ?
++		ARRAY_SIZE(schedstat_args) : 0;
++
+ 	struct tep_event *waking_event;
+ 
+ 	/*
+ 	 * +2 for either "-e", "sched:sched_wakeup" or
+ 	 * "-e", "sched:sched_waking"
+ 	 */
+-	rec_argc = ARRAY_SIZE(record_args) + 2 + argc - 1;
++	rec_argc = ARRAY_SIZE(record_args) + 2 + schedstat_argc + argc - 1;
+ 	rec_argv = calloc(rec_argc + 1, sizeof(char *));
+ 
+ 	if (rec_argv == NULL)
+@@ -3376,6 +3398,9 @@ static int __cmd_record(int argc, const char **argv)
+ 	else
+ 		rec_argv[i++] = strdup("sched:sched_wakeup");
+ 
++	for (j = 0; j < schedstat_argc; j++)
++		rec_argv[i++] = strdup(schedstat_args[j]);
++
+ 	for (j = 1; j < (unsigned int)argc; j++, i++)
+ 		rec_argv[i] = argv[j];
+ 
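
The reworked record_args appends the schedstat events only when the
probe above says the tracepoints exist, so the final argv never names
events that cannot be opened. The append-optional-arguments shape with
the probe stubbed out:

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

int main(void)
{
	const char *base[]  = { "record", "-e", "sched:sched_switch" };
	const char *extra[] = { "-e", "sched:sched_stat_wait" };
	int have_schedstat = 0;		/* stand-in for the probe */
	const char *argv[8];
	unsigned int i, n = 0;

	for (i = 0; i < ARRAY_SIZE(base); i++)
		argv[n++] = base[i];
	if (have_schedstat)
		for (i = 0; i < ARRAY_SIZE(extra); i++)
			argv[n++] = extra[i];

	for (i = 0; i < n; i++)
		printf("%s ", argv[i]);
	printf("\n");
	return 0;
}
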
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index 1280cbfad4db5..c43c2963117d7 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -2534,6 +2534,12 @@ static void perf_script__exit_per_event_dump_stats(struct perf_script *script)
+ 	}
+ }
+ 
++static void perf_script__exit(struct perf_script *script)
++{
++	perf_thread_map__put(script->threads);
++	perf_cpu_map__put(script->cpus);
++}
++
+ static int __cmd_script(struct perf_script *script)
+ {
+ 	int ret;
+@@ -3991,8 +3997,10 @@ out_delete:
+ 		zfree(&script.ptime_range);
+ 	}
+ 
++	zstd_fini(&(session->zstd_data));
+ 	evlist__free_stats(session->evlist);
+ 	perf_session__delete(session);
++	perf_script__exit(&script);
+ 
+ 	if (script_started)
+ 		cleanup_scripting();
+diff --git a/tools/perf/tests/event_update.c b/tools/perf/tests/event_update.c
+index 656218179222c..44a50527f9d95 100644
+--- a/tools/perf/tests/event_update.c
++++ b/tools/perf/tests/event_update.c
+@@ -88,6 +88,7 @@ int test__event_update(struct test *test __maybe_unused, int subtest __maybe_unu
+ 	struct evsel *evsel;
+ 	struct event_name tmp;
+ 	struct evlist *evlist = evlist__new_default();
++	char *unit = strdup("KRAVA");
+ 
+ 	TEST_ASSERT_VAL("failed to get evlist", evlist);
+ 
+@@ -98,7 +99,7 @@ int test__event_update(struct test *test __maybe_unused, int subtest __maybe_unu
+ 
+ 	perf_evlist__id_add(&evlist->core, &evsel->core, 0, 0, 123);
+ 
+-	evsel->unit = strdup("KRAVA");
++	evsel->unit = unit;
+ 
+ 	TEST_ASSERT_VAL("failed to synthesize attr update unit",
+ 			!perf_event__synthesize_event_update_unit(NULL, evsel, process_event_unit));
+@@ -118,6 +119,7 @@ int test__event_update(struct test *test __maybe_unused, int subtest __maybe_unu
+ 	TEST_ASSERT_VAL("failed to synthesize attr update cpus",
+ 			!perf_event__synthesize_event_update_cpus(&tmp.tool, evsel, process_event_cpus));
+ 
+-	perf_cpu_map__put(evsel->core.own_cpus);
++	free(unit);
++	evlist__delete(evlist);
+ 	return 0;
+ }
+diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
+index edcbc70ff9d66..1ac72919fa358 100644
+--- a/tools/perf/tests/maps.c
++++ b/tools/perf/tests/maps.c
+@@ -116,5 +116,7 @@ int test__maps__merge_in(struct test *t __maybe_unused, int subtest __maybe_unus
+ 
+ 	ret = check_maps(merged3, ARRAY_SIZE(merged3), &maps);
+ 	TEST_ASSERT_VAL("merge check failed", !ret);
++
++	maps__exit(&maps);
+ 	return TEST_OK;
+ }
+diff --git a/tools/perf/tests/topology.c b/tools/perf/tests/topology.c
+index ec4e3b21b8311..b5efe675b3217 100644
+--- a/tools/perf/tests/topology.c
++++ b/tools/perf/tests/topology.c
+@@ -61,6 +61,7 @@ static int session_write_header(char *path)
+ 	TEST_ASSERT_VAL("failed to write header",
+ 			!perf_session__write_header(session, session->evlist, data.file.fd, true));
+ 
++	evlist__delete(session->evlist);
+ 	perf_session__delete(session);
+ 
+ 	return 0;
+diff --git a/tools/perf/util/data.c b/tools/perf/util/data.c
+index 8fca4779ae6a8..70b91ce35178c 100644
+--- a/tools/perf/util/data.c
++++ b/tools/perf/util/data.c
+@@ -20,7 +20,7 @@
+ 
+ static void close_dir(struct perf_data_file *files, int nr)
+ {
+-	while (--nr >= 1) {
++	while (--nr >= 0) {
+ 		close(files[nr].fd);
+ 		zfree(&files[nr].path);
+ 	}
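
The close_dir() change is a classic off-by-one: "--nr >= 1" never
visits index 0, leaking the first file's descriptor and path. The
corrected loop shape, with printf standing in for close() and zfree():

#include <stdio.h>

static void close_all(int *fds, int nr)
{
	while (--nr >= 0)	/* reaches index 0; ">= 1" did not */
		printf("closing fd[%d]=%d\n", nr, fds[nr]);
}

int main(void)
{
	int fds[] = { 3, 4, 5 };

	close_all(fds, 3);
	return 0;
}
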
+diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
+index d786cf6b0cfa6..ee15db2be2f43 100644
+--- a/tools/perf/util/dso.c
++++ b/tools/perf/util/dso.c
+@@ -1154,8 +1154,10 @@ struct map *dso__new_map(const char *name)
+ 	struct map *map = NULL;
+ 	struct dso *dso = dso__new(name);
+ 
+-	if (dso)
++	if (dso) {
+ 		map = map__new2(0, dso);
++		dso__put(dso);
++	}
+ 
+ 	return map;
+ }
+diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
+index bc5e4f294e9e9..16a111b62cc39 100644
+--- a/tools/perf/util/env.c
++++ b/tools/perf/util/env.c
+@@ -186,10 +186,12 @@ void perf_env__exit(struct perf_env *env)
+ 	zfree(&env->cpuid);
+ 	zfree(&env->cmdline);
+ 	zfree(&env->cmdline_argv);
++	zfree(&env->sibling_dies);
+ 	zfree(&env->sibling_cores);
+ 	zfree(&env->sibling_threads);
+ 	zfree(&env->pmu_mappings);
+ 	zfree(&env->cpu);
++	zfree(&env->cpu_pmu_caps);
+ 	zfree(&env->numa_map);
+ 
+ 	for (i = 0; i < env->nr_numa_nodes; i++)
+diff --git a/tools/perf/util/lzma.c b/tools/perf/util/lzma.c
+index 39062df026291..51424cdc3b682 100644
+--- a/tools/perf/util/lzma.c
++++ b/tools/perf/util/lzma.c
+@@ -69,7 +69,7 @@ int lzma_decompress_to_file(const char *input, int output_fd)
+ 
+ 			if (ferror(infile)) {
+ 				pr_err("lzma: read error: %s\n", strerror(errno));
+-				goto err_fclose;
++				goto err_lzma_end;
+ 			}
+ 
+ 			if (feof(infile))
+@@ -83,7 +83,7 @@ int lzma_decompress_to_file(const char *input, int output_fd)
+ 
+ 			if (writen(output_fd, buf_out, write_size) != write_size) {
+ 				pr_err("lzma: write error: %s\n", strerror(errno));
+-				goto err_fclose;
++				goto err_lzma_end;
+ 			}
+ 
+ 			strm.next_out  = buf_out;
+@@ -95,11 +95,13 @@ int lzma_decompress_to_file(const char *input, int output_fd)
+ 				break;
+ 
+ 			pr_err("lzma: failed %s\n", lzma_strerror(ret));
+-			goto err_fclose;
++			goto err_lzma_end;
+ 		}
+ 	}
+ 
+ 	err = 0;
++err_lzma_end:
++	lzma_end(&strm);
+ err_fclose:
+ 	fclose(infile);
+ 	return err;
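
The lzma fix extends the usual goto cleanup ladder so that mid-loop
failures still run lzma_end() before fclose(); the success path simply
falls through the same labels. The shape in a self-contained toy,
where malloc/free stand in for the lzma stream:

#include <stdio.h>
#include <stdlib.h>

static int decompress(const char *path)
{
	int err = -1;
	FILE *f = fopen(path, "r");
	void *stream;

	if (!f)
		return err;
	stream = malloc(64);		/* stand-in for stream init */
	if (!stream)
		goto err_fclose;
	if (fgetc(f) == EOF)
		goto err_stream;	/* mid-loop failures land here */
	err = 0;			/* success falls through below */
err_stream:
	free(stream);			/* analogous to lzma_end() */
err_fclose:
	fclose(f);
	return err;
}

int main(void)
{
	printf("rc=%d\n", decompress("/etc/hostname"));
	return 0;
}
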
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index 8af693d9678ce..72e7f3616157e 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -192,6 +192,8 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
+ 			if (!(prot & PROT_EXEC))
+ 				dso__set_loaded(dso);
+ 		}
++
++		nsinfo__put(dso->nsinfo);
+ 		dso->nsinfo = nsi;
+ 
+ 		if (build_id__is_defined(bid))
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index a78c8d59a555e..9cc89a047b15a 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -180,8 +180,10 @@ struct map *get_target_map(const char *target, struct nsinfo *nsi, bool user)
+ 		struct map *map;
+ 
+ 		map = dso__new_map(target);
+-		if (map && map->dso)
++		if (map && map->dso) {
++			nsinfo__put(map->dso->nsinfo);
+ 			map->dso->nsinfo = nsinfo__get(nsi);
++		}
+ 		return map;
+ 	} else {
+ 		return kernel_get_module_map(target);
+diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c
+index 52273542e6ef4..3f6de459ac2b6 100644
+--- a/tools/perf/util/probe-file.c
++++ b/tools/perf/util/probe-file.c
+@@ -342,11 +342,11 @@ int probe_file__del_events(int fd, struct strfilter *filter)
+ 
+ 	ret = probe_file__get_events(fd, filter, namelist);
+ 	if (ret < 0)
+-		return ret;
++		goto out;
+ 
+ 	ret = probe_file__del_strlist(fd, namelist);
++out:
+ 	strlist__delete(namelist);
+-
+ 	return ret;
+ }
+ 
+diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
+index 88ce47f2547e3..568a88c001c6c 100644
+--- a/tools/perf/util/sort.c
++++ b/tools/perf/util/sort.c
+@@ -3370,7 +3370,7 @@ static void add_hpp_sort_string(struct strbuf *sb, struct hpp_dimension *s, int
+ 		add_key(sb, s[i].name, llen);
+ }
+ 
+-const char *sort_help(const char *prefix)
++char *sort_help(const char *prefix)
+ {
+ 	struct strbuf sb;
+ 	char *s;
+diff --git a/tools/perf/util/sort.h b/tools/perf/util/sort.h
+index 87a092645aa72..b67c469aba795 100644
+--- a/tools/perf/util/sort.h
++++ b/tools/perf/util/sort.h
+@@ -302,7 +302,7 @@ void reset_output_field(void);
+ void sort__setup_elide(FILE *fp);
+ void perf_hpp__set_elide(int idx, bool elide);
+ 
+-const char *sort_help(const char *prefix);
++char *sort_help(const char *prefix);
+ 
+ int report_parse_ignore_callees_opt(const struct option *opt, const char *arg, int unset);
+ 
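
Changing sort_help() to return char * instead of const char *
documents that the buffer is heap-allocated and ownership moves to the
caller, which is why cmd_report() above now frees its two copies. A
minimal model of that ownership transfer:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *make_help(const char *prefix)
{
	const char *keys = " pid comm dso";	/* invented key list */
	char *s = malloc(strlen(prefix) + strlen(keys) + 1);

	if (s) {
		strcpy(s, prefix);
		strcat(s, keys);
	}
	return s;	/* caller owns the buffer */
}

int main(void)
{
	char *help = make_help("sort by key(s):");

	if (help)
		puts(help);
	free(help);	/* caller frees, as cmd_report() now does */
	return 0;
}
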
+diff --git a/tools/testing/selftests/net/icmp_redirect.sh b/tools/testing/selftests/net/icmp_redirect.sh
+index bf361f30d6ef9..104a7a5f13b1e 100755
+--- a/tools/testing/selftests/net/icmp_redirect.sh
++++ b/tools/testing/selftests/net/icmp_redirect.sh
+@@ -309,9 +309,10 @@ check_exception()
+ 	fi
+ 	log_test $? 0 "IPv4: ${desc}"
+ 
+-	if [ "$with_redirect" = "yes" ]; then
++	# No PMTU info for test "redirect" and "mtu exception plus redirect"
++	if [ "$with_redirect" = "yes" ] && [ "$desc" != "redirect exception plus mtu" ]; then
+ 		ip -netns h1 -6 ro get ${H1_VRF_ARG} ${H2_N2_IP6} | \
+-		grep -q "${H2_N2_IP6} from :: via ${R2_LLADDR} dev br0.*${mtu}"
++		grep -v "mtu" | grep -q "${H2_N2_IP6} .*via ${R2_LLADDR} dev br0"
+ 	elif [ -n "${mtu}" ]; then
+ 		ip -netns h1 -6 ro get ${H1_VRF_ARG} ${H2_N2_IP6} | \
+ 		grep -q "${mtu}"
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+index fd99485cf2a4a..e8ac852c6ff6d 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -1341,7 +1341,7 @@ syncookies_tests()
+ 	ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
+ 	ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags subflow
+ 	run_tests $ns1 $ns2 10.0.1.1
+-	chk_join_nr "subflows limited by server w cookies" 2 2 1
++	chk_join_nr "subflows limited by server w cookies" 2 1 1
+ 
+ 	# test signal address with cookies
+ 	reset_with_cookies
+diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
+index f5ab5e0312e7d..3e1fef3d81045 100644
+--- a/tools/testing/selftests/vm/userfaultfd.c
++++ b/tools/testing/selftests/vm/userfaultfd.c
+@@ -197,8 +197,10 @@ static int anon_release_pages(char *rel_area)
+ 
+ static void anon_allocate_area(void **alloc_area)
+ {
+-	if (posix_memalign(alloc_area, page_size, nr_pages * page_size)) {
+-		fprintf(stderr, "out of memory\n");
++	*alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
++			   MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
++	if (*alloc_area == MAP_FAILED) {
++		fprintf(stderr, "mmap of anonymous memory failed");
+ 		*alloc_area = NULL;
+ 	}
+ }
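
Switching the selftest from posix_memalign() to mmap() yields
page-aligned anonymous memory directly and makes the failure check
explicit. A standalone version of the new allocation path:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 16 * 4096;
	void *area = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

	if (area == MAP_FAILED) {
		perror("mmap of anonymous memory failed");
		return 1;
	}
	printf("mapped %zu bytes at %p\n", len, area);
	munmap(area, len);
	return 0;
}
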


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-07-31 10:28 Alice Ferrazzi
  0 siblings, 0 replies; 42+ messages in thread
From: Alice Ferrazzi @ 2021-07-31 10:28 UTC (permalink / raw
  To: gentoo-commits

commit:     853dad7e1b5cbd0cf31a9ff7de75724bace6cf20
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Jul 31 10:27:57 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Jul 31 10:28:11 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=853dad7e

Linux patch 5.13.7

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README             |   4 +
 1006_linux-5.13.7.patch | 781 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 785 insertions(+)

diff --git a/0000_README b/0000_README
index 87d634d..79d1b68 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-5.13.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.6
 
+Patch:  1006_linux-5.13.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-5.13.7.patch b/1006_linux-5.13.7.patch
new file mode 100644
index 0000000..840c600
--- /dev/null
+++ b/1006_linux-5.13.7.patch
@@ -0,0 +1,781 @@
+diff --git a/Makefile b/Makefile
+index 96967f8951933..614327400aea2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arm/boot/dts/versatile-ab.dts b/arch/arm/boot/dts/versatile-ab.dts
+index 37bd41ff8dffa..151c0220047dd 100644
+--- a/arch/arm/boot/dts/versatile-ab.dts
++++ b/arch/arm/boot/dts/versatile-ab.dts
+@@ -195,16 +195,15 @@
+ 		#size-cells = <1>;
+ 		ranges;
+ 
+-		vic: intc@10140000 {
++		vic: interrupt-controller@10140000 {
+ 			compatible = "arm,versatile-vic";
+ 			interrupt-controller;
+ 			#interrupt-cells = <1>;
+ 			reg = <0x10140000 0x1000>;
+-			clear-mask = <0xffffffff>;
+ 			valid-mask = <0xffffffff>;
+ 		};
+ 
+-		sic: intc@10003000 {
++		sic: interrupt-controller@10003000 {
+ 			compatible = "arm,versatile-sic";
+ 			interrupt-controller;
+ 			#interrupt-cells = <1>;
+diff --git a/arch/arm/boot/dts/versatile-pb.dts b/arch/arm/boot/dts/versatile-pb.dts
+index 06a0fdf24026c..e7e751a858d81 100644
+--- a/arch/arm/boot/dts/versatile-pb.dts
++++ b/arch/arm/boot/dts/versatile-pb.dts
+@@ -7,7 +7,7 @@
+ 
+ 	amba {
+ 		/* The Versatile PB is using more SIC IRQ lines than the AB */
+-		sic: intc@10003000 {
++		sic: interrupt-controller@10003000 {
+ 			clear-mask = <0xffffffff>;
+ 			/*
+ 			 * Valid interrupt lines mask according to
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index 74986bf96656f..c9fda6261c6b0 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -47,7 +47,6 @@ enum scmi_error_codes {
+ 	SCMI_ERR_GENERIC = -8,	/* Generic Error */
+ 	SCMI_ERR_HARDWARE = -9,	/* Hardware Error */
+ 	SCMI_ERR_PROTOCOL = -10,/* Protocol Error */
+-	SCMI_ERR_MAX
+ };
+ 
+ /* List of all SCMI devices active in system */
+@@ -166,8 +165,10 @@ static const int scmi_linux_errmap[] = {
+ 
+ static inline int scmi_to_linux_errno(int errno)
+ {
+-	if (errno < SCMI_SUCCESS && errno > SCMI_ERR_MAX)
+-		return scmi_linux_errmap[-errno];
++	int err_idx = -errno;
++
++	if (err_idx >= SCMI_SUCCESS && err_idx < ARRAY_SIZE(scmi_linux_errmap))
++		return scmi_linux_errmap[err_idx];
+ 	return -EIO;
+ }
+ 
+@@ -1029,8 +1030,9 @@ static int __scmi_xfer_info_init(struct scmi_info *sinfo,
+ 	const struct scmi_desc *desc = sinfo->desc;
+ 
+ 	/* Pre-allocated messages, no more than what hdr.seq can support */
+-	if (WARN_ON(desc->max_msg >= MSG_TOKEN_MAX)) {
+-		dev_err(dev, "Maximum message of %d exceeds supported %ld\n",
++	if (WARN_ON(!desc->max_msg || desc->max_msg > MSG_TOKEN_MAX)) {
++		dev_err(dev,
++			"Invalid maximum messages %d, not in range [1 - %lu]\n",
+ 			desc->max_msg, MSG_TOKEN_MAX);
+ 		return -EINVAL;
+ 	}
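
The scmi_to_linux_errno() fix negates first and then checks both ends
of the index against ARRAY_SIZE of the map itself, so the accepted
range can never drift out of sync with the table. The same guard in
isolation, with a made-up table:

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const int errmap[] = { 0, -1, -5, -22 };

static int to_errno(int code)
{
	int idx = -code;

	if (idx >= 0 && idx < (int)ARRAY_SIZE(errmap))
		return errmap[idx];
	return -5;	/* fallback, like the kernel's -EIO */
}

int main(void)
{
	printf("%d %d\n", to_errno(-3), to_errno(-99));
	return 0;
}
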
+diff --git a/drivers/gpu/drm/ttm/ttm_range_manager.c b/drivers/gpu/drm/ttm/ttm_range_manager.c
+index 707e5c1528967..ed053fd15c90b 100644
+--- a/drivers/gpu/drm/ttm/ttm_range_manager.c
++++ b/drivers/gpu/drm/ttm/ttm_range_manager.c
+@@ -146,6 +146,9 @@ int ttm_range_man_fini(struct ttm_device *bdev,
+ 	struct drm_mm *mm = &rman->mm;
+ 	int ret;
+ 
++	if (!man)
++		return 0;
++
+ 	ttm_resource_manager_set_used(man, false);
+ 
+ 	ret = ttm_resource_manager_evict_all(bdev, man);
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index fb1c5ae0da39d..d963f25fc7aed 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1562,6 +1562,28 @@ static void nvme_init_queue(struct nvme_queue *nvmeq, u16 qid)
+ 	wmb(); /* ensure the first interrupt sees the initialization */
+ }
+ 
++/*
++ * Try getting shutdown_lock while setting up IO queues.
++ */
++static int nvme_setup_io_queues_trylock(struct nvme_dev *dev)
++{
++	/*
++	 * Give up if the lock is being held by nvme_dev_disable.
++	 */
++	if (!mutex_trylock(&dev->shutdown_lock))
++		return -ENODEV;
++
++	/*
++	 * Controller is in the wrong state; fail early.
++	 */
++	if (dev->ctrl.state != NVME_CTRL_CONNECTING) {
++		mutex_unlock(&dev->shutdown_lock);
++		return -ENODEV;
++	}
++
++	return 0;
++}
++
+ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled)
+ {
+ 	struct nvme_dev *dev = nvmeq->dev;
+@@ -1590,8 +1612,11 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled)
+ 		goto release_cq;
+ 
+ 	nvmeq->cq_vector = vector;
+-	nvme_init_queue(nvmeq, qid);
+ 
++	result = nvme_setup_io_queues_trylock(dev);
++	if (result)
++		return result;
++	nvme_init_queue(nvmeq, qid);
+ 	if (!polled) {
+ 		result = queue_request_irq(nvmeq);
+ 		if (result < 0)
+@@ -1599,10 +1624,12 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled)
+ 	}
+ 
+ 	set_bit(NVMEQ_ENABLED, &nvmeq->flags);
++	mutex_unlock(&dev->shutdown_lock);
+ 	return result;
+ 
+ release_sq:
+ 	dev->online_queues--;
++	mutex_unlock(&dev->shutdown_lock);
+ 	adapter_delete_sq(dev, qid);
+ release_cq:
+ 	adapter_delete_cq(dev, qid);
+@@ -2176,7 +2203,18 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
+ 	if (nr_io_queues == 0)
+ 		return 0;
+ 
+-	clear_bit(NVMEQ_ENABLED, &adminq->flags);
++	/*
++	 * Free IRQ resources as soon as NVMEQ_ENABLED bit transitions
++	 * from set to unset. If there is a window where it is truly freed,
++	 * pci_free_irq_vectors() jumping into that window will crash.
++	 * And take lock to avoid racing with pci_free_irq_vectors() in
++	 * nvme_dev_disable() path.
++	 */
++	result = nvme_setup_io_queues_trylock(dev);
++	if (result)
++		return result;
++	if (test_and_clear_bit(NVMEQ_ENABLED, &adminq->flags))
++		pci_free_irq(pdev, 0, adminq);
+ 
+ 	if (dev->cmb_use_sqes) {
+ 		result = nvme_cmb_qdepth(dev, nr_io_queues,
+@@ -2192,14 +2230,17 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
+ 		result = nvme_remap_bar(dev, size);
+ 		if (!result)
+ 			break;
+-		if (!--nr_io_queues)
+-			return -ENOMEM;
++		if (!--nr_io_queues) {
++			result = -ENOMEM;
++			goto out_unlock;
++		}
+ 	} while (1);
+ 	adminq->q_db = dev->dbs;
+ 
+  retry:
+ 	/* Deregister the admin queue's interrupt */
+-	pci_free_irq(pdev, 0, adminq);
++	if (test_and_clear_bit(NVMEQ_ENABLED, &adminq->flags))
++		pci_free_irq(pdev, 0, adminq);
+ 
+ 	/*
+ 	 * If we enable msix early due to not intx, disable it again before
+@@ -2208,8 +2249,10 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
+ 	pci_free_irq_vectors(pdev);
+ 
+ 	result = nvme_setup_irqs(dev, nr_io_queues);
+-	if (result <= 0)
+-		return -EIO;
++	if (result <= 0) {
++		result = -EIO;
++		goto out_unlock;
++	}
+ 
+ 	dev->num_vecs = result;
+ 	result = max(result - 1, 1);
+@@ -2223,8 +2266,9 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
+ 	 */
+ 	result = queue_request_irq(adminq);
+ 	if (result)
+-		return result;
++		goto out_unlock;
+ 	set_bit(NVMEQ_ENABLED, &adminq->flags);
++	mutex_unlock(&dev->shutdown_lock);
+ 
+ 	result = nvme_create_io_queues(dev);
+ 	if (result || dev->online_queues < 2)
+@@ -2233,6 +2277,9 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
+ 	if (dev->online_queues - 1 < dev->max_qid) {
+ 		nr_io_queues = dev->online_queues - 1;
+ 		nvme_disable_io_queues(dev);
++		result = nvme_setup_io_queues_trylock(dev);
++		if (result)
++			return result;
+ 		nvme_suspend_io_queues(dev);
+ 		goto retry;
+ 	}
+@@ -2241,6 +2288,9 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
+ 					dev->io_queues[HCTX_TYPE_READ],
+ 					dev->io_queues[HCTX_TYPE_POLL]);
+ 	return 0;
++out_unlock:
++	mutex_unlock(&dev->shutdown_lock);
++	return result;
+ }
+ 
+ static void nvme_del_queue_end(struct request *req, blk_status_t error)
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 64cad843ce723..398c941e38974 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -555,8 +555,8 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+ 	p = buf;
+ 	while (bytes_left >= sizeof(*p)) {
+ 		info->speed = le64_to_cpu(p->LinkSpeed);
+-		info->rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE);
+-		info->rss_capable = le32_to_cpu(p->Capability & RSS_CAPABLE);
++		info->rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE) ? 1 : 0;
++		info->rss_capable = le32_to_cpu(p->Capability & RSS_CAPABLE) ? 1 : 0;
+ 
+ 		cifs_dbg(FYI, "%s: adding iface %zu\n", __func__, *iface_count);
+ 		cifs_dbg(FYI, "%s: speed %zu bps\n", __func__, info->speed);
+diff --git a/fs/hfs/bfind.c b/fs/hfs/bfind.c
+index 4af318fbda774..ef9498a6e88ac 100644
+--- a/fs/hfs/bfind.c
++++ b/fs/hfs/bfind.c
+@@ -25,7 +25,19 @@ int hfs_find_init(struct hfs_btree *tree, struct hfs_find_data *fd)
+ 	fd->key = ptr + tree->max_key_len + 2;
+ 	hfs_dbg(BNODE_REFS, "find_init: %d (%p)\n",
+ 		tree->cnid, __builtin_return_address(0));
+-	mutex_lock(&tree->tree_lock);
++	switch (tree->cnid) {
++	case HFS_CAT_CNID:
++		mutex_lock_nested(&tree->tree_lock, CATALOG_BTREE_MUTEX);
++		break;
++	case HFS_EXT_CNID:
++		mutex_lock_nested(&tree->tree_lock, EXTENTS_BTREE_MUTEX);
++		break;
++	case HFS_ATTR_CNID:
++		mutex_lock_nested(&tree->tree_lock, ATTR_BTREE_MUTEX);
++		break;
++	default:
++		return -EINVAL;
++	}
+ 	return 0;
+ }
+ 
+diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
+index b63a4df7327b6..c0a73a6ffb28b 100644
+--- a/fs/hfs/bnode.c
++++ b/fs/hfs/bnode.c
+@@ -15,16 +15,31 @@
+ 
+ #include "btree.h"
+ 
+-void hfs_bnode_read(struct hfs_bnode *node, void *buf,
+-		int off, int len)
++void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
+ {
+ 	struct page *page;
++	int pagenum;
++	int bytes_read;
++	int bytes_to_read;
++	void *vaddr;
+ 
+ 	off += node->page_offset;
+-	page = node->page[0];
++	pagenum = off >> PAGE_SHIFT;
++	off &= ~PAGE_MASK; /* compute page offset for the first page */
+ 
+-	memcpy(buf, kmap(page) + off, len);
+-	kunmap(page);
++	for (bytes_read = 0; bytes_read < len; bytes_read += bytes_to_read) {
++		if (pagenum >= node->tree->pages_per_bnode)
++			break;
++		page = node->page[pagenum];
++		bytes_to_read = min_t(int, len - bytes_read, PAGE_SIZE - off);
++
++		vaddr = kmap_atomic(page);
++		memcpy(buf + bytes_read, vaddr + off, bytes_to_read);
++		kunmap_atomic(vaddr);
++
++		pagenum++;
++		off = 0; /* page offset only applies to the first page */
++	}
+ }
+ 
+ u16 hfs_bnode_read_u16(struct hfs_bnode *node, int off)
+diff --git a/fs/hfs/btree.h b/fs/hfs/btree.h
+index 4ba45caf59392..0e6baee932453 100644
+--- a/fs/hfs/btree.h
++++ b/fs/hfs/btree.h
+@@ -13,6 +13,13 @@ typedef int (*btree_keycmp)(const btree_key *, const btree_key *);
+ 
+ #define NODE_HASH_SIZE  256
+ 
++/* B-tree mutex nested subclasses */
++enum hfs_btree_mutex_classes {
++	CATALOG_BTREE_MUTEX,
++	EXTENTS_BTREE_MUTEX,
++	ATTR_BTREE_MUTEX,
++};
++
+ /* A HFS BTree held in memory */
+ struct hfs_btree {
+ 	struct super_block *sb;
+diff --git a/fs/hfs/super.c b/fs/hfs/super.c
+index 44d07c9e3a7f0..12d9bae393631 100644
+--- a/fs/hfs/super.c
++++ b/fs/hfs/super.c
+@@ -420,14 +420,12 @@ static int hfs_fill_super(struct super_block *sb, void *data, int silent)
+ 	if (!res) {
+ 		if (fd.entrylength > sizeof(rec) || fd.entrylength < 0) {
+ 			res =  -EIO;
+-			goto bail;
++			goto bail_hfs_find;
+ 		}
+ 		hfs_bnode_read(fd.bnode, &rec, fd.entryoffset, fd.entrylength);
+ 	}
+-	if (res) {
+-		hfs_find_exit(&fd);
+-		goto bail_no_root;
+-	}
++	if (res)
++		goto bail_hfs_find;
+ 	res = -EINVAL;
+ 	root_inode = hfs_iget(sb, &fd.search_key->cat, &rec);
+ 	hfs_find_exit(&fd);
+@@ -443,6 +441,8 @@ static int hfs_fill_super(struct super_block *sb, void *data, int silent)
+ 	/* everything's okay */
+ 	return 0;
+ 
++bail_hfs_find:
++	hfs_find_exit(&fd);
+ bail_no_root:
+ 	pr_err("get root inode failed\n");
+ bail:
+diff --git a/fs/internal.h b/fs/internal.h
+index 6aeae7ef33803..728f8d70d7f1d 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -61,7 +61,6 @@ extern void __init chrdev_init(void);
+  */
+ extern const struct fs_context_operations legacy_fs_context_ops;
+ extern int parse_monolithic_mount_data(struct fs_context *, void *);
+-extern void fc_drop_locked(struct fs_context *);
+ extern void vfs_clean_context(struct fs_context *fc);
+ extern int finish_clean_context(struct fs_context *fc);
+ 
+diff --git a/fs/iomap/seek.c b/fs/iomap/seek.c
+index dab1b02eba5b7..ce6fb810854fe 100644
+--- a/fs/iomap/seek.c
++++ b/fs/iomap/seek.c
+@@ -35,23 +35,20 @@ loff_t
+ iomap_seek_hole(struct inode *inode, loff_t offset, const struct iomap_ops *ops)
+ {
+ 	loff_t size = i_size_read(inode);
+-	loff_t length = size - offset;
+ 	loff_t ret;
+ 
+ 	/* Nothing to be found before or beyond the end of the file. */
+ 	if (offset < 0 || offset >= size)
+ 		return -ENXIO;
+ 
+-	while (length > 0) {
+-		ret = iomap_apply(inode, offset, length, IOMAP_REPORT, ops,
+-				  &offset, iomap_seek_hole_actor);
++	while (offset < size) {
++		ret = iomap_apply(inode, offset, size - offset, IOMAP_REPORT,
++				  ops, &offset, iomap_seek_hole_actor);
+ 		if (ret < 0)
+ 			return ret;
+ 		if (ret == 0)
+ 			break;
+-
+ 		offset += ret;
+-		length -= ret;
+ 	}
+ 
+ 	return offset;
+@@ -83,27 +80,23 @@ loff_t
+ iomap_seek_data(struct inode *inode, loff_t offset, const struct iomap_ops *ops)
+ {
+ 	loff_t size = i_size_read(inode);
+-	loff_t length = size - offset;
+ 	loff_t ret;
+ 
+ 	/* Nothing to be found before or beyond the end of the file. */
+ 	if (offset < 0 || offset >= size)
+ 		return -ENXIO;
+ 
+-	while (length > 0) {
+-		ret = iomap_apply(inode, offset, length, IOMAP_REPORT, ops,
+-				  &offset, iomap_seek_data_actor);
++	while (offset < size) {
++		ret = iomap_apply(inode, offset, size - offset, IOMAP_REPORT,
++				  ops, &offset, iomap_seek_data_actor);
+ 		if (ret < 0)
+ 			return ret;
+ 		if (ret == 0)
+-			break;
+-
++			return offset;
+ 		offset += ret;
+-		length -= ret;
+ 	}
+ 
+-	if (length <= 0)
+-		return -ENXIO;
+-	return offset;
++	/* We've reached the end of the file without finding data */
++	return -ENXIO;
+ }
+ EXPORT_SYMBOL_GPL(iomap_seek_data);
+diff --git a/include/linux/fs_context.h b/include/linux/fs_context.h
+index 37e1e8f7f08da..5b44b0195a28a 100644
+--- a/include/linux/fs_context.h
++++ b/include/linux/fs_context.h
+@@ -139,6 +139,7 @@ extern int vfs_parse_fs_string(struct fs_context *fc, const char *key,
+ extern int generic_parse_monolithic(struct fs_context *fc, void *data);
+ extern int vfs_get_tree(struct fs_context *fc);
+ extern void put_fs_context(struct fs_context *fc);
++extern void fc_drop_locked(struct fs_context *fc);
+ 
+ /*
+  * sget() wrappers to be called from the ->get_tree() op.
+diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
+index 73af4a64a5999..40296ed976a97 100644
+--- a/include/net/busy_poll.h
++++ b/include/net/busy_poll.h
+@@ -38,7 +38,7 @@ static inline bool net_busy_loop_on(void)
+ 
+ static inline bool sk_can_busy_loop(const struct sock *sk)
+ {
+-	return sk->sk_ll_usec && !signal_pending(current);
++	return READ_ONCE(sk->sk_ll_usec) && !signal_pending(current);
+ }
+ 
+ bool sk_busy_loop_end(void *p, unsigned long start_time);
+diff --git a/include/net/sctp/constants.h b/include/net/sctp/constants.h
+index 14a0d22c91133..bf23a2ed92da8 100644
+--- a/include/net/sctp/constants.h
++++ b/include/net/sctp/constants.h
+@@ -342,8 +342,7 @@ enum {
+ #define SCTP_SCOPE_POLICY_MAX	SCTP_SCOPE_POLICY_LINK
+ 
+ /* Based on IPv4 scoping <draft-stewart-tsvwg-sctp-ipv4-00.txt>,
+- * SCTP IPv4 unusable addresses: 0.0.0.0/8, 224.0.0.0/4, 198.18.0.0/24,
+- * 192.88.99.0/24.
++ * SCTP IPv4 unusable addresses: 0.0.0.0/8, 224.0.0.0/4, 192.88.99.0/24.
+  * Also, RFC 8.4, non-unicast addresses are not considered valid SCTP
+  * addresses.
+  */
+@@ -351,7 +350,6 @@ enum {
+ 	((htonl(INADDR_BROADCAST) == a) ||  \
+ 	 ipv4_is_multicast(a) ||	    \
+ 	 ipv4_is_zeronet(a) ||		    \
+-	 ipv4_is_test_198(a) ||		    \
+ 	 ipv4_is_anycast_6to4(a))
+ 
+ /* Flags used for the bind address copy functions.  */
+diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
+index d189eff4c92ff..583790d2060ce 100644
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -1225,9 +1225,7 @@ int cgroup1_get_tree(struct fs_context *fc)
+ 		ret = cgroup_do_get_tree(fc);
+ 
+ 	if (!ret && percpu_ref_is_dying(&ctx->root->cgrp.self.refcnt)) {
+-		struct super_block *sb = fc->root->d_sb;
+-		dput(fc->root);
+-		deactivate_locked_super(sb);
++		fc_drop_locked(fc);
+ 		ret = 1;
+ 	}
+ 
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 350ebf5051f97..fcef5f0c60b8b 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -908,10 +908,9 @@ static bool trc_inspect_reader(struct task_struct *t, void *arg)
+ 		in_qs = likely(!t->trc_reader_nesting);
+ 	}
+ 
+-	// Mark as checked.  Because this is called from the grace-period
+-	// kthread, also remove the task from the holdout list.
++	// Mark as checked so that the grace-period kthread will
++	// remove it from the holdout list.
+ 	t->trc_reader_checked = true;
+-	trc_del_holdout(t);
+ 
+ 	if (in_qs)
+ 		return true;  // Already in quiescent state, done!!!
+@@ -938,7 +937,6 @@ static void trc_wait_for_one_reader(struct task_struct *t,
+ 	// The current task had better be in a quiescent state.
+ 	if (t == current) {
+ 		t->trc_reader_checked = true;
+-		trc_del_holdout(t);
+ 		WARN_ON_ONCE(t->trc_reader_nesting);
+ 		return;
+ 	}
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 50142fc08902d..f148eacda55a9 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3676,15 +3676,21 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
+ 						  unbound_release_work);
+ 	struct workqueue_struct *wq = pwq->wq;
+ 	struct worker_pool *pool = pwq->pool;
+-	bool is_last;
++	bool is_last = false;
+ 
+-	if (WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND)))
+-		return;
++	/*
++	 * When @pwq is not linked, it doesn't hold any reference to the
++	 * @wq, and @wq is invalid to access.
++	 */
++	if (!list_empty(&pwq->pwqs_node)) {
++		if (WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND)))
++			return;
+ 
+-	mutex_lock(&wq->mutex);
+-	list_del_rcu(&pwq->pwqs_node);
+-	is_last = list_empty(&wq->pwqs);
+-	mutex_unlock(&wq->mutex);
++		mutex_lock(&wq->mutex);
++		list_del_rcu(&pwq->pwqs_node);
++		is_last = list_empty(&wq->pwqs);
++		mutex_unlock(&wq->mutex);
++	}
+ 
+ 	mutex_lock(&wq_pool_mutex);
+ 	put_unbound_pool(pool);
+diff --git a/net/802/garp.c b/net/802/garp.c
+index 400bd857e5f57..f6012f8e59f00 100644
+--- a/net/802/garp.c
++++ b/net/802/garp.c
+@@ -203,6 +203,19 @@ static void garp_attr_destroy(struct garp_applicant *app, struct garp_attr *attr
+ 	kfree(attr);
+ }
+ 
++static void garp_attr_destroy_all(struct garp_applicant *app)
++{
++	struct rb_node *node, *next;
++	struct garp_attr *attr;
++
++	for (node = rb_first(&app->gid);
++	     next = node ? rb_next(node) : NULL, node != NULL;
++	     node = next) {
++		attr = rb_entry(node, struct garp_attr, node);
++		garp_attr_destroy(app, attr);
++	}
++}
++
+ static int garp_pdu_init(struct garp_applicant *app)
+ {
+ 	struct sk_buff *skb;
+@@ -609,6 +622,7 @@ void garp_uninit_applicant(struct net_device *dev, struct garp_application *appl
+ 
+ 	spin_lock_bh(&app->lock);
+ 	garp_gid_event(app, GARP_EVENT_TRANSMIT_PDU);
++	garp_attr_destroy_all(app);
+ 	garp_pdu_queue(app);
+ 	spin_unlock_bh(&app->lock);
+ 
+diff --git a/net/802/mrp.c b/net/802/mrp.c
+index bea6e43d45a0d..35e04cc5390c4 100644
+--- a/net/802/mrp.c
++++ b/net/802/mrp.c
+@@ -292,6 +292,19 @@ static void mrp_attr_destroy(struct mrp_applicant *app, struct mrp_attr *attr)
+ 	kfree(attr);
+ }
+ 
++static void mrp_attr_destroy_all(struct mrp_applicant *app)
++{
++	struct rb_node *node, *next;
++	struct mrp_attr *attr;
++
++	for (node = rb_first(&app->mad);
++	     next = node ? rb_next(node) : NULL, node != NULL;
++	     node = next) {
++		attr = rb_entry(node, struct mrp_attr, node);
++		mrp_attr_destroy(app, attr);
++	}
++}
++
+ static int mrp_pdu_init(struct mrp_applicant *app)
+ {
+ 	struct sk_buff *skb;
+@@ -895,6 +908,7 @@ void mrp_uninit_applicant(struct net_device *dev, struct mrp_application *appl)
+ 
+ 	spin_lock_bh(&app->lock);
+ 	mrp_mad_event(app, MRP_EVENT_TX);
++	mrp_attr_destroy_all(app);
+ 	mrp_pdu_queue(app);
+ 	spin_unlock_bh(&app->lock);
+ 
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 2003c5ebb4c2e..37d732fe3fcf9 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1172,7 +1172,7 @@ set_sndbuf:
+ 			if (val < 0)
+ 				ret = -EINVAL;
+ 			else
+-				sk->sk_ll_usec = val;
++				WRITE_ONCE(sk->sk_ll_usec, val);
+ 		}
+ 		break;
+ 	case SO_PREFER_BUSY_POLL:
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index b7ffb4f227a45..6062ad1d5b510 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -60,10 +60,38 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
+ {
+ 	struct dst_entry *dst = skb_dst(skb);
+ 	struct net_device *dev = dst->dev;
++	unsigned int hh_len = LL_RESERVED_SPACE(dev);
++	int delta = hh_len - skb_headroom(skb);
+ 	const struct in6_addr *nexthop;
+ 	struct neighbour *neigh;
+ 	int ret;
+ 
++	/* Be paranoid, rather than too clever. */
++	if (unlikely(delta > 0) && dev->header_ops) {
++		/* pskb_expand_head() might crash, if skb is shared */
++		if (skb_shared(skb)) {
++			struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC);
++
++			if (likely(nskb)) {
++				if (skb->sk)
++					skb_set_owner_w(nskb, skb->sk);
++				consume_skb(skb);
++			} else {
++				kfree_skb(skb);
++			}
++			skb = nskb;
++		}
++		if (skb &&
++		    pskb_expand_head(skb, SKB_DATA_ALIGN(delta), 0, GFP_ATOMIC)) {
++			kfree_skb(skb);
++			skb = NULL;
++		}
++		if (!skb) {
++			IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTDISCARDS);
++			return -ENOMEM;
++		}
++	}
++
+ 	if (ipv6_addr_is_multicast(&ipv6_hdr(skb)->daddr)) {
+ 		struct inet6_dev *idev = ip6_dst_idev(skb_dst(skb));
+ 
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 25192b378e2ec..9b444df5e53ee 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -398,7 +398,8 @@ static enum sctp_scope sctp_v4_scope(union sctp_addr *addr)
+ 		retval = SCTP_SCOPE_LINK;
+ 	} else if (ipv4_is_private_10(addr->v4.sin_addr.s_addr) ||
+ 		   ipv4_is_private_172(addr->v4.sin_addr.s_addr) ||
+-		   ipv4_is_private_192(addr->v4.sin_addr.s_addr)) {
++		   ipv4_is_private_192(addr->v4.sin_addr.s_addr) ||
++		   ipv4_is_test_198(addr->v4.sin_addr.s_addr)) {
+ 		retval = SCTP_SCOPE_PRIVATE;
+ 	} else {
+ 		retval = SCTP_SCOPE_GLOBAL;
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 5d1192ceb1397..68a9591d0144c 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -1522,6 +1522,53 @@ out:
+ 	return err;
+ }
+ 
++static void unix_peek_fds(struct scm_cookie *scm, struct sk_buff *skb)
++{
++	scm->fp = scm_fp_dup(UNIXCB(skb).fp);
++
++	/*
++	 * Garbage collection of unix sockets starts by selecting a set of
++	 * candidate sockets which have reference only from being in flight
++	 * (total_refs == inflight_refs).  This condition is checked once during
++	 * the candidate collection phase, and candidates are marked as such, so
++	 * that non-candidates can later be ignored.  While inflight_refs is
++	 * protected by unix_gc_lock, total_refs (file count) is not, hence this
++	 * is an instantaneous decision.
++	 *
++	 * Once a candidate, however, the socket must not be reinstalled into a
++	 * file descriptor while the garbage collection is in progress.
++	 *
++	 * If the above conditions are met, then the directed graph of
++	 * candidates (*) does not change while unix_gc_lock is held.
++	 *
++	 * Any operation that changes the file count through file descriptors
++	 * (dup, close, sendmsg) does not change the graph, since candidates are
++	 * not installed in fds.
++	 *
++	 * Dequeuing a candidate via recvmsg would install it into an fd, but
++	 * that takes unix_gc_lock to decrement the inflight count, so it's
++	 * serialized with garbage collection.
++	 *
++	 * MSG_PEEK is special in that it does not change the inflight count,
++	 * yet does install the socket into an fd.  The following lock/unlock
++	 * pair is to ensure serialization with garbage collection.  It must be
++	 * done between incrementing the file count and installing the file into
++	 * an fd.
++	 *
++	 * If garbage collection starts after the barrier provided by the
++	 * lock/unlock, then it will see the elevated refcount and not mark this
++	 * as a candidate.  If a garbage collection is already in progress
++	 * before the file count was incremented, then the lock/unlock pair will
++	 * ensure that garbage collection is finished before progressing to
++	 * installing the fd.
++	 *
++	 * (*) A -> B where B is on the queue of A or B is on the queue of C
++	 * which is on the queue of listening socket A.
++	 */
++	spin_lock(&unix_gc_lock);
++	spin_unlock(&unix_gc_lock);
++}
++
+ static int unix_scm_to_skb(struct scm_cookie *scm, struct sk_buff *skb, bool send_fds)
+ {
+ 	int err = 0;
+@@ -2171,7 +2218,7 @@ static int unix_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
+ 		sk_peek_offset_fwd(sk, size);
+ 
+ 		if (UNIXCB(skb).fp)
+-			scm.fp = scm_fp_dup(UNIXCB(skb).fp);
++			unix_peek_fds(&scm, skb);
+ 	}
+ 	err = (flags & MSG_TRUNC) ? skb->len - skip : size;
+ 
+@@ -2414,7 +2461,7 @@ unlock:
+ 			/* It is questionable, see note in unix_dgram_recvmsg.
+ 			 */
+ 			if (UNIXCB(skb).fp)
+-				scm.fp = scm_fp_dup(UNIXCB(skb).fp);
++				unix_peek_fds(&scm, skb);
+ 
+ 			sk_peek_offset_fwd(sk, chunk);
+ 

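Worth highlighting from the NVMe portion of the patch above: nvme_create_queue()
and nvme_setup_io_queues() now acquire dev->shutdown_lock with mutex_trylock(),
so that queue setup fails fast instead of racing with pci_free_irq_vectors() on
the nvme_dev_disable() teardown path. The sketch below illustrates that
trylock-guard idiom; it is a minimal stand-alone example, and the names
(my_dev, MY_STATE_SETUP, my_setup_trylock) are hypothetical rather than the
driver's own.

#include <linux/errno.h>
#include <linux/mutex.h>

struct my_dev {
	struct mutex teardown_lock;	/* stand-in for dev->shutdown_lock */
	int state;
};

#define MY_STATE_SETUP	1

/* Return 0 with the lock held, or -ENODEV if teardown owns it. */
static int my_setup_trylock(struct my_dev *dev)
{
	/* Never block: if teardown already holds the lock, give up. */
	if (!mutex_trylock(&dev->teardown_lock))
		return -ENODEV;

	/* Re-check state under the lock; teardown may have won the race. */
	if (dev->state != MY_STATE_SETUP) {
		mutex_unlock(&dev->teardown_lock);
		return -ENODEV;
	}

	return 0;	/* caller proceeds, then unlocks when done */
}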

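The af_unix MSG_PEEK fix above relies on a related idiom that also shows up in
__loop_clr_fd() in the 5.13.8 patch further down this thread: a lock is taken
and immediately released purely to serialize with any critical section already
in progress, without touching shared state. A minimal sketch, assuming a
hypothetical gc_lock in place of the real unix_gc_lock:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(gc_lock);	/* stand-in for unix_gc_lock */

static void gc_barrier(void)
{
	/*
	 * Acquiring the lock cannot succeed until a garbage-collection
	 * pass holding it has finished; releasing it right away is enough
	 * because only the ordering matters here, not the protection.
	 */
	spin_lock(&gc_lock);
	spin_unlock(&gc_lock);
}

Per the comment in unix_peek_fds(), the pair must sit between incrementing the
file count and installing the file into an fd for the ordering to hold.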

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-02 22:34 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-02 22:34 UTC (permalink / raw
  To: gentoo-commits

commit:     c72d1d6b55f7eb2d23edd4578a0f1f0a9490c35b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug  2 22:27:34 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Aug  2 22:34:23 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c72d1d6b

Select SECCOMP options only if supported

Thanks to Matt Turner for this patch

Some architectures (e.g., alpha, sparc) do not support SECCOMP.
Without this, kernel builds will show:

WARNING: unmet direct dependencies detected for SECCOMP
  Depends on [n]: HAVE_ARCH_SECCOMP [=n]
  Selected by [y]:
  - GENTOO_LINUX_INIT_SYSTEMD [=y] && GENTOO_LINUX [=y] && GENTOO_LINUX_UDEV [=y]

WARNING: unmet direct dependencies detected for SECCOMP_FILTER
  Depends on [n]: HAVE_ARCH_SECCOMP_FILTER [=n] && SECCOMP [=y] && NET [=y]
  Selected by [y]:
  - GENTOO_LINUX_INIT_SYSTEMD [=y] && GENTOO_LINUX [=y] && GENTOO_LINUX_UDEV [=y]

Signed-off-by: Matt Turner <mattst88 <AT> gentoo.org>
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index c063c6d..f875dba 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -138,8 +138,8 @@
 +	select NET
 +	select NET_NS
 +	select PROC_FS
-+	select SECCOMP
-+	select SECCOMP_FILTER
++	select SECCOMP if HAVE_ARCH_SECCOMP
++	select SECCOMP_FILTER HAVE_ARCH_SECCOMP_FILTER
 +	select SIGNALFD
 +	select SYSFS
 +	select TIMERFD
@@ -188,8 +188,8 @@
 +	select DEBUG_SG
 +	select BUG_ON_DATA_CORRUPTION
 +	select SCHED_STACK_END_CHECK
-+	select SECCOMP
-+	select SECCOMP_FILTER
++	select SECCOMP if HAVE_ARCH_SECCOMP
++	select SECCOMP_FILTER HAVE_ARCH_SECCOMP_FILTER
 +	select SECURITY_YAMA
 +	select SLAB_FREELIST_RANDOM
 +	select SLAB_FREELIST_HARDENED



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-03 11:03 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-03 11:03 UTC (permalink / raw
  To: gentoo-commits

commit:     f1aed8f6aaa147902088fac52a8183aafbda4dd4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug  3 11:00:25 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug  3 11:03:20 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f1aed8f6

Fix SECCOMP Patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index f875dba..fa005e6 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -139,7 +139,7 @@
 +	select NET_NS
 +	select PROC_FS
 +	select SECCOMP if HAVE_ARCH_SECCOMP
-+	select SECCOMP_FILTER HAVE_ARCH_SECCOMP_FILTER
++	select SECCOMP_FILTER if HAVE_ARCH_SECCOMP_FILTER
 +	select SIGNALFD
 +	select SYSFS
 +	select TIMERFD
@@ -189,7 +189,7 @@
 +	select BUG_ON_DATA_CORRUPTION
 +	select SCHED_STACK_END_CHECK
 +	select SECCOMP if HAVE_ARCH_SECCOMP
-+	select SECCOMP_FILTER HAVE_ARCH_SECCOMP_FILTER
++	select SECCOMP_FILTER if HAVE_ARCH_SECCOMP_FILTER
 +	select SECURITY_YAMA
 +	select SLAB_FREELIST_RANDOM
 +	select SLAB_FREELIST_HARDENED



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-04 11:50 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-04 11:50 UTC (permalink / raw
  To: gentoo-commits

commit:     3bbe4c15e73c8d29a1e3ffc272ac3a5e6bf17a65
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug  4 11:49:34 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug  4 11:49:34 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3bbe4c15

Linux patch 5.13.8

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1007_linux-5.13.8.patch | 4318 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4322 insertions(+)

diff --git a/0000_README b/0000_README
index 79d1b68..91c9a8e 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-5.13.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.7
 
+Patch:  1007_linux-5.13.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-5.13.8.patch b/1007_linux-5.13.8.patch
new file mode 100644
index 0000000..f6a05d4
--- /dev/null
+++ b/1007_linux-5.13.8.patch
@@ -0,0 +1,4318 @@
+diff --git a/Makefile b/Makefile
+index 614327400aea2..fdb4e2fd9d8f3 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/alpha/kernel/setup.c b/arch/alpha/kernel/setup.c
+index 03dda3beb3bd4..e5ec9b9b73a1d 100644
+--- a/arch/alpha/kernel/setup.c
++++ b/arch/alpha/kernel/setup.c
+@@ -325,18 +325,19 @@ setup_memory(void *kernel_end)
+ 		       i, cluster->usage, cluster->start_pfn,
+ 		       cluster->start_pfn + cluster->numpages);
+ 
+-		/* Bit 0 is console/PALcode reserved.  Bit 1 is
+-		   non-volatile memory -- we might want to mark
+-		   this for later.  */
+-		if (cluster->usage & 3)
+-			continue;
+-
+ 		end = cluster->start_pfn + cluster->numpages;
+ 		if (end > max_low_pfn)
+ 			max_low_pfn = end;
+ 
+ 		memblock_add(PFN_PHYS(cluster->start_pfn),
+ 			     cluster->numpages << PAGE_SHIFT);
++
++		/* Bit 0 is console/PALcode reserved.  Bit 1 is
++		   non-volatile memory -- we might want to mark
++		   this for later.  */
++		if (cluster->usage & 3)
++			memblock_reserve(PFN_PHYS(cluster->start_pfn),
++				         cluster->numpages << PAGE_SHIFT);
+ 	}
+ 
+ 	/*
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index 897634d0a67ca..a951276f05475 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -1602,6 +1602,9 @@ exit:
+ 		rn = arm_bpf_get_reg32(src_lo, tmp2[1], ctx);
+ 		emit_ldx_r(dst, rn, off, ctx, BPF_SIZE(code));
+ 		break;
++	/* speculation barrier */
++	case BPF_ST | BPF_NOSPEC:
++		break;
+ 	/* ST: *(size *)(dst + off) = imm */
+ 	case BPF_ST | BPF_MEM | BPF_W:
+ 	case BPF_ST | BPF_MEM | BPF_H:
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index f7b194878a99a..5a876af34230c 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -829,6 +829,19 @@ emit_cond_jmp:
+ 			return ret;
+ 		break;
+ 
++	/* speculation barrier */
++	case BPF_ST | BPF_NOSPEC:
++		/*
++		 * Nothing required here.
++		 *
++		 * In case of arm64, we rely on the firmware mitigation of
++		 * Speculative Store Bypass as controlled via the ssbd kernel
++		 * parameter. Whenever the mitigation is enabled, it works
++		 * for all of the kernel code with no need to provide any
++		 * additional instructions.
++		 */
++		break;
++
+ 	/* ST: *(size *)(dst + off) = imm */
+ 	case BPF_ST | BPF_MEM | BPF_W:
+ 	case BPF_ST | BPF_MEM | BPF_H:
+diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
+index 939dd06764bc9..3a73e93757121 100644
+--- a/arch/mips/net/ebpf_jit.c
++++ b/arch/mips/net/ebpf_jit.c
+@@ -1355,6 +1355,9 @@ jeq_common:
+ 		}
+ 		break;
+ 
++	case BPF_ST | BPF_NOSPEC: /* speculation barrier */
++		break;
++
+ 	case BPF_ST | BPF_B | BPF_MEM:
+ 	case BPF_ST | BPF_H | BPF_MEM:
+ 	case BPF_ST | BPF_W | BPF_MEM:
+diff --git a/arch/powerpc/kernel/vdso64/Makefile b/arch/powerpc/kernel/vdso64/Makefile
+index 2813e3f98db65..3c5baaa6f1e7f 100644
+--- a/arch/powerpc/kernel/vdso64/Makefile
++++ b/arch/powerpc/kernel/vdso64/Makefile
+@@ -27,6 +27,13 @@ KASAN_SANITIZE := n
+ 
+ ccflags-y := -shared -fno-common -fno-builtin -nostdlib \
+ 	-Wl,-soname=linux-vdso64.so.1 -Wl,--hash-style=both
++
++# Go prior to 1.16.x assumes r30 is not clobbered by any VDSO code. That used to be true
++# by accident when the VDSO was hand-written asm code, but may not be now that the VDSO is
++# compiler generated. To avoid breaking Go, tell GCC not to use r30. Impact on code
++# generation is minimal, it will just use r29 instead.
++ccflags-y += $(call cc-option, -ffixed-r30)
++
+ asflags-y := -D__VDSO64__ -s
+ 
+ targets += vdso64.lds
+diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
+index 68476780047ac..6c0915285b171 100644
+--- a/arch/powerpc/net/bpf_jit_comp32.c
++++ b/arch/powerpc/net/bpf_jit_comp32.c
+@@ -737,6 +737,12 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
+ 			}
+ 			break;
+ 
++		/*
++		 * BPF_ST NOSPEC (speculation barrier)
++		 */
++		case BPF_ST | BPF_NOSPEC:
++			break;
++
+ 		/*
+ 		 * BPF_ST(X)
+ 		 */
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 94411af24013f..d3ad8dfba1f69 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -627,6 +627,12 @@ emit_clear:
+ 			}
+ 			break;
+ 
++		/*
++		 * BPF_ST NOSPEC (speculation barrier)
++		 */
++		case BPF_ST | BPF_NOSPEC:
++			break;
++
+ 		/*
+ 		 * BPF_ST(X)
+ 		 */
+diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
+index 754e493b7c05b..0338f481c12bb 100644
+--- a/arch/powerpc/platforms/pseries/setup.c
++++ b/arch/powerpc/platforms/pseries/setup.c
+@@ -77,7 +77,7 @@
+ #include "../../../../drivers/pci/pci.h"
+ 
+ DEFINE_STATIC_KEY_FALSE(shared_processor);
+-EXPORT_SYMBOL_GPL(shared_processor);
++EXPORT_SYMBOL(shared_processor);
+ 
+ int CMO_PrPSP = -1;
+ int CMO_SecPSP = -1;
+diff --git a/arch/riscv/net/bpf_jit_comp32.c b/arch/riscv/net/bpf_jit_comp32.c
+index 81de865f4c7c3..e6497424cbf60 100644
+--- a/arch/riscv/net/bpf_jit_comp32.c
++++ b/arch/riscv/net/bpf_jit_comp32.c
+@@ -1251,6 +1251,10 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
+ 			return -1;
+ 		break;
+ 
++	/* speculation barrier */
++	case BPF_ST | BPF_NOSPEC:
++		break;
++
+ 	case BPF_ST | BPF_MEM | BPF_B:
+ 	case BPF_ST | BPF_MEM | BPF_H:
+ 	case BPF_ST | BPF_MEM | BPF_W:
+diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
+index 87e3bf5b9086d..3af4131c22c7a 100644
+--- a/arch/riscv/net/bpf_jit_comp64.c
++++ b/arch/riscv/net/bpf_jit_comp64.c
+@@ -939,6 +939,10 @@ out_be:
+ 		emit_ld(rd, 0, RV_REG_T1, ctx);
+ 		break;
+ 
++	/* speculation barrier */
++	case BPF_ST | BPF_NOSPEC:
++		break;
++
+ 	/* ST: *(size *)(dst + off) = imm */
+ 	case BPF_ST | BPF_MEM | BPF_B:
+ 		emit_imm(RV_REG_T1, imm, ctx);
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 2ae419f5115a5..88419263a89a9 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -1153,6 +1153,11 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
+ 			break;
+ 		}
+ 		break;
++	/*
++	 * BPF_NOSPEC (speculation barrier)
++	 */
++	case BPF_ST | BPF_NOSPEC:
++		break;
+ 	/*
+ 	 * BPF_ST(X)
+ 	 */
+diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
+index 4b8d3c65d2666..9a2f20cbd48b7 100644
+--- a/arch/sparc/net/bpf_jit_comp_64.c
++++ b/arch/sparc/net/bpf_jit_comp_64.c
+@@ -1287,6 +1287,9 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+ 			return 1;
+ 		break;
+ 	}
++	/* speculation barrier */
++	case BPF_ST | BPF_NOSPEC:
++		break;
+ 	/* ST: *(size *)(dst + off) = imm */
+ 	case BPF_ST | BPF_MEM | BPF_W:
+ 	case BPF_ST | BPF_MEM | BPF_H:
+diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
+index 698969e18fe35..ff005fe738a4c 100644
+--- a/arch/x86/kvm/ioapic.c
++++ b/arch/x86/kvm/ioapic.c
+@@ -96,7 +96,7 @@ static unsigned long ioapic_read_indirect(struct kvm_ioapic *ioapic,
+ static void rtc_irq_eoi_tracking_reset(struct kvm_ioapic *ioapic)
+ {
+ 	ioapic->rtc_status.pending_eoi = 0;
+-	bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID);
++	bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID + 1);
+ }
+ 
+ static void kvm_rtc_eoi_tracking_restore_all(struct kvm_ioapic *ioapic);
+diff --git a/arch/x86/kvm/ioapic.h b/arch/x86/kvm/ioapic.h
+index 660401700075d..11e4065e16176 100644
+--- a/arch/x86/kvm/ioapic.h
++++ b/arch/x86/kvm/ioapic.h
+@@ -43,13 +43,13 @@ struct kvm_vcpu;
+ 
+ struct dest_map {
+ 	/* vcpu bitmap where IRQ has been sent */
+-	DECLARE_BITMAP(map, KVM_MAX_VCPU_ID);
++	DECLARE_BITMAP(map, KVM_MAX_VCPU_ID + 1);
+ 
+ 	/*
+ 	 * Vector sent to a given vcpu, only valid when
+ 	 * the vcpu's bit in map is set
+ 	 */
+-	u8 vectors[KVM_MAX_VCPU_ID];
++	u8 vectors[KVM_MAX_VCPU_ID + 1];
+ };
+ 
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index b5a3de788b5fc..d6a9f05187849 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3314,7 +3314,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			return 1;
+ 		break;
+ 	case MSR_KVM_ASYNC_PF_ACK:
+-		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF))
++		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF_INT))
+ 			return 1;
+ 		if (data & 0x1) {
+ 			vcpu->arch.apf.pageready_pending = false;
+@@ -3646,7 +3646,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		msr_info->data = vcpu->arch.apf.msr_int_val;
+ 		break;
+ 	case MSR_KVM_ASYNC_PF_ACK:
+-		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF))
++		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF_INT))
+ 			return 1;
+ 
+ 		msr_info->data = 0;
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 66e304a84deb0..ee9971bbe034a 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -1235,6 +1235,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 			}
+ 			break;
+ 
++			/* speculation barrier */
++		case BPF_ST | BPF_NOSPEC:
++			if (boot_cpu_has(X86_FEATURE_XMM2))
++				/* Emit 'lfence' */
++				EMIT3(0x0F, 0xAE, 0xE8);
++			break;
++
+ 			/* ST: *(u8*)(dst_reg + off) = imm */
+ 		case BPF_ST | BPF_MEM | BPF_B:
+ 			if (is_ereg(dst_reg))
+diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
+index 3da88ded6ee39..3bfda5f502cb8 100644
+--- a/arch/x86/net/bpf_jit_comp32.c
++++ b/arch/x86/net/bpf_jit_comp32.c
+@@ -1886,6 +1886,12 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 			i++;
+ 			break;
+ 		}
++		/* speculation barrier */
++		case BPF_ST | BPF_NOSPEC:
++			if (boot_cpu_has(X86_FEATURE_XMM2))
++				/* Emit 'lfence' */
++				EMIT3(0x0F, 0xAE, 0xE8);
++			break;
+ 		/* ST: *(u8*)(dst_reg + off) = imm */
+ 		case BPF_ST | BPF_MEM | BPF_H:
+ 		case BPF_ST | BPF_MEM | BPF_B:
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index c2d6bc88d3f15..5fac3757e6e05 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1440,16 +1440,17 @@ static int iocg_wake_fn(struct wait_queue_entry *wq_entry, unsigned mode,
+ 		return -1;
+ 
+ 	iocg_commit_bio(ctx->iocg, wait->bio, wait->abs_cost, cost);
++	wait->committed = true;
+ 
+ 	/*
+ 	 * autoremove_wake_function() removes the wait entry only when it
+-	 * actually changed the task state.  We want the wait always
+-	 * removed.  Remove explicitly and use default_wake_function().
++	 * actually changed the task state. We want the wait always removed.
++	 * Remove explicitly and use default_wake_function(). Note that the
++	 * order of operations is important as finish_wait() tests whether
++	 * @wq_entry is removed without grabbing the lock.
+ 	 */
+-	list_del_init(&wq_entry->entry);
+-	wait->committed = true;
+-
+ 	default_wake_function(wq_entry, mode, flags, key);
++	list_del_init_careful(&wq_entry->entry);
+ 	return 0;
+ }
+ 
+diff --git a/block/genhd.c b/block/genhd.c
+index ad7436bd60c1b..e8968fd30b2bc 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -1124,10 +1124,9 @@ static void disk_release(struct device *dev)
+ 	disk_release_events(disk);
+ 	kfree(disk->random);
+ 	xa_destroy(&disk->part_tbl);
+-	bdput(disk->part0);
+ 	if (disk->queue)
+ 		blk_put_queue(disk->queue);
+-	kfree(disk);
++	bdput(disk->part0);	/* frees the disk */
+ }
+ struct class block_class = {
+ 	.name		= "block",
+diff --git a/drivers/acpi/dptf/dptf_pch_fivr.c b/drivers/acpi/dptf/dptf_pch_fivr.c
+index 5fca18296bf68..550b9081fcbc2 100644
+--- a/drivers/acpi/dptf/dptf_pch_fivr.c
++++ b/drivers/acpi/dptf/dptf_pch_fivr.c
+@@ -9,6 +9,42 @@
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
+ 
++struct pch_fivr_resp {
++	u64 status;
++	u64 result;
++};
++
++static int pch_fivr_read(acpi_handle handle, char *method, struct pch_fivr_resp *fivr_resp)
++{
++	struct acpi_buffer resp = { sizeof(struct pch_fivr_resp), fivr_resp};
++	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
++	struct acpi_buffer format = { sizeof("NN"), "NN" };
++	union acpi_object *obj;
++	acpi_status status;
++	int ret = -EFAULT;
++
++	status = acpi_evaluate_object(handle, method, NULL, &buffer);
++	if (ACPI_FAILURE(status))
++		return ret;
++
++	obj = buffer.pointer;
++	if (!obj || obj->type != ACPI_TYPE_PACKAGE)
++		goto release_buffer;
++
++	status = acpi_extract_package(obj, &format, &resp);
++	if (ACPI_FAILURE(status))
++		goto release_buffer;
++
++	if (fivr_resp->status)
++		goto release_buffer;
++
++	ret = 0;
++
++release_buffer:
++	kfree(buffer.pointer);
++	return ret;
++}
++
+ /*
+  * Presentation of attributes which are defined for INT1045
+  * They are:
+@@ -23,15 +59,14 @@ static ssize_t name##_show(struct device *dev,\
+ 			   char *buf)\
+ {\
+ 	struct acpi_device *acpi_dev = dev_get_drvdata(dev);\
+-	unsigned long long val;\
+-	acpi_status status;\
++	struct pch_fivr_resp fivr_resp;\
++	int status;\
+ \
+-	status = acpi_evaluate_integer(acpi_dev->handle, #method,\
+-				       NULL, &val);\
+-	if (ACPI_SUCCESS(status))\
+-		return sprintf(buf, "%d\n", (int)val);\
+-	else\
+-		return -EINVAL;\
++	status = pch_fivr_read(acpi_dev->handle, #method, &fivr_resp);\
++	if (status)\
++		return status;\
++\
++	return sprintf(buf, "%llu\n", fivr_resp.result);\
+ }
+ 
+ #define PCH_FIVR_STORE(name, method) \
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index dc01fb550b28d..ee78a210c6068 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -423,13 +423,6 @@ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
+ 	}
+ }
+ 
+-static bool irq_is_legacy(struct acpi_resource_irq *irq)
+-{
+-	return irq->triggering == ACPI_EDGE_SENSITIVE &&
+-		irq->polarity == ACPI_ACTIVE_HIGH &&
+-		irq->shareable == ACPI_EXCLUSIVE;
+-}
+-
+ /**
+  * acpi_dev_resource_interrupt - Extract ACPI interrupt resource information.
+  * @ares: Input ACPI resource object.
+@@ -468,7 +461,7 @@ bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
+ 		}
+ 		acpi_dev_get_irqresource(res, irq->interrupts[index],
+ 					 irq->triggering, irq->polarity,
+-					 irq->shareable, irq_is_legacy(irq));
++					 irq->shareable, true);
+ 		break;
+ 	case ACPI_RESOURCE_TYPE_EXTENDED_IRQ:
+ 		ext_irq = &ares->data.extended_irq;
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 8271df1251535..e81298b912270 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -86,6 +86,47 @@
+ 
+ static DEFINE_IDR(loop_index_idr);
+ static DEFINE_MUTEX(loop_ctl_mutex);
++static DEFINE_MUTEX(loop_validate_mutex);
++
++/**
++ * loop_global_lock_killable() - take locks for safe loop_validate_file() test
++ *
++ * @lo: struct loop_device
++ * @global: true if @lo is about to bind another "struct loop_device", false otherwise
++ *
++ * Returns 0 on success, -EINTR otherwise.
++ *
++ * Since loop_validate_file() traverses other "struct loop_device" instances if
++ * is_loop_device() is true, we need a global lock for serializing concurrent
++ * loop_configure()/loop_change_fd()/__loop_clr_fd() calls.
++ */
++static int loop_global_lock_killable(struct loop_device *lo, bool global)
++{
++	int err;
++
++	if (global) {
++		err = mutex_lock_killable(&loop_validate_mutex);
++		if (err)
++			return err;
++	}
++	err = mutex_lock_killable(&lo->lo_mutex);
++	if (err && global)
++		mutex_unlock(&loop_validate_mutex);
++	return err;
++}
++
++/**
++ * loop_global_unlock() - release locks taken by loop_global_lock_killable()
++ *
++ * @lo: struct loop_device
++ * @global: true if @lo was about to bind another "struct loop_device", false otherwise
++ */
++static void loop_global_unlock(struct loop_device *lo, bool global)
++{
++	mutex_unlock(&lo->lo_mutex);
++	if (global)
++		mutex_unlock(&loop_validate_mutex);
++}
+ 
+ static int max_part;
+ static int part_shift;
+@@ -676,13 +717,15 @@ static int loop_validate_file(struct file *file, struct block_device *bdev)
+ 	while (is_loop_device(f)) {
+ 		struct loop_device *l;
+ 
++		lockdep_assert_held(&loop_validate_mutex);
+ 		if (f->f_mapping->host->i_rdev == bdev->bd_dev)
+ 			return -EBADF;
+ 
+ 		l = I_BDEV(f->f_mapping->host)->bd_disk->private_data;
+-		if (l->lo_state != Lo_bound) {
++		if (l->lo_state != Lo_bound)
+ 			return -EINVAL;
+-		}
++		/* Order wrt setting lo->lo_backing_file in loop_configure(). */
++		rmb();
+ 		f = l->lo_backing_file;
+ 	}
+ 	if (!S_ISREG(inode->i_mode) && !S_ISBLK(inode->i_mode))
+@@ -701,13 +744,18 @@ static int loop_validate_file(struct file *file, struct block_device *bdev)
+ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
+ 			  unsigned int arg)
+ {
+-	struct file	*file = NULL, *old_file;
+-	int		error;
+-	bool		partscan;
++	struct file *file = fget(arg);
++	struct file *old_file;
++	int error;
++	bool partscan;
++	bool is_loop;
+ 
+-	error = mutex_lock_killable(&lo->lo_mutex);
++	if (!file)
++		return -EBADF;
++	is_loop = is_loop_device(file);
++	error = loop_global_lock_killable(lo, is_loop);
+ 	if (error)
+-		return error;
++		goto out_putf;
+ 	error = -ENXIO;
+ 	if (lo->lo_state != Lo_bound)
+ 		goto out_err;
+@@ -717,11 +765,6 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
+ 	if (!(lo->lo_flags & LO_FLAGS_READ_ONLY))
+ 		goto out_err;
+ 
+-	error = -EBADF;
+-	file = fget(arg);
+-	if (!file)
+-		goto out_err;
+-
+ 	error = loop_validate_file(file, bdev);
+ 	if (error)
+ 		goto out_err;
+@@ -744,7 +787,16 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
+ 	loop_update_dio(lo);
+ 	blk_mq_unfreeze_queue(lo->lo_queue);
+ 	partscan = lo->lo_flags & LO_FLAGS_PARTSCAN;
+-	mutex_unlock(&lo->lo_mutex);
++	loop_global_unlock(lo, is_loop);
++
++	/*
++	 * Flush loop_validate_file() before fput(), for l->lo_backing_file
++	 * might be pointing at old_file which might be the last reference.
++	 */
++	if (!is_loop) {
++		mutex_lock(&loop_validate_mutex);
++		mutex_unlock(&loop_validate_mutex);
++	}
+ 	/*
+ 	 * We must drop file reference outside of lo_mutex as dropping
+ 	 * the file ref can take bd_mutex which creates circular locking
+@@ -756,9 +808,9 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
+ 	return 0;
+ 
+ out_err:
+-	mutex_unlock(&lo->lo_mutex);
+-	if (file)
+-		fput(file);
++	loop_global_unlock(lo, is_loop);
++out_putf:
++	fput(file);
+ 	return error;
+ }
+ 
+@@ -1067,22 +1119,22 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
+ 			  struct block_device *bdev,
+ 			  const struct loop_config *config)
+ {
+-	struct file	*file;
+-	struct inode	*inode;
++	struct file *file = fget(config->fd);
++	struct inode *inode;
+ 	struct address_space *mapping;
+-	int		error;
+-	loff_t		size;
+-	bool		partscan;
+-	unsigned short  bsize;
++	int error;
++	loff_t size;
++	bool partscan;
++	unsigned short bsize;
++	bool is_loop;
++
++	if (!file)
++		return -EBADF;
++	is_loop = is_loop_device(file);
+ 
+ 	/* This is safe, since we have a reference from open(). */
+ 	__module_get(THIS_MODULE);
+ 
+-	error = -EBADF;
+-	file = fget(config->fd);
+-	if (!file)
+-		goto out;
+-
+ 	/*
+ 	 * If we don't hold exclusive handle for the device, upgrade to it
+ 	 * here to avoid changing device under exclusive owner.
+@@ -1093,7 +1145,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
+ 			goto out_putf;
+ 	}
+ 
+-	error = mutex_lock_killable(&lo->lo_mutex);
++	error = loop_global_lock_killable(lo, is_loop);
+ 	if (error)
+ 		goto out_bdev;
+ 
+@@ -1162,6 +1214,9 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
+ 	size = get_loop_size(lo, file);
+ 	loop_set_size(lo, size);
+ 
++	/* Order wrt reading lo_state in loop_validate_file(). */
++	wmb();
++
+ 	lo->lo_state = Lo_bound;
+ 	if (part_shift)
+ 		lo->lo_flags |= LO_FLAGS_PARTSCAN;
+@@ -1173,7 +1228,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
+ 	 * put /dev/loopXX inode. Later in __loop_clr_fd() we bdput(bdev).
+ 	 */
+ 	bdgrab(bdev);
+-	mutex_unlock(&lo->lo_mutex);
++	loop_global_unlock(lo, is_loop);
+ 	if (partscan)
+ 		loop_reread_partitions(lo, bdev);
+ 	if (!(mode & FMODE_EXCL))
+@@ -1181,13 +1236,12 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
+ 	return 0;
+ 
+ out_unlock:
+-	mutex_unlock(&lo->lo_mutex);
++	loop_global_unlock(lo, is_loop);
+ out_bdev:
+ 	if (!(mode & FMODE_EXCL))
+ 		bd_abort_claiming(bdev, loop_configure);
+ out_putf:
+ 	fput(file);
+-out:
+ 	/* This is safe: open() is still holding a reference. */
+ 	module_put(THIS_MODULE);
+ 	return error;
+@@ -1202,6 +1256,18 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
+ 	bool partscan = false;
+ 	int lo_number;
+ 
++	/*
++	 * Flush loop_configure() and loop_change_fd(). It is acceptable for
++	 * loop_validate_file() to succeed, because the actual clear operation
++	 * has not started yet.
++	 */
++	mutex_lock(&loop_validate_mutex);
++	mutex_unlock(&loop_validate_mutex);
++	/*
++	 * loop_validate_file() now fails because l->lo_state != Lo_bound
++	 * became visible.
++	 */
++
+ 	mutex_lock(&lo->lo_mutex);
+ 	if (WARN_ON_ONCE(lo->lo_state != Lo_rundown)) {
+ 		err = -ENXIO;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index 2e9b16fb3fcd1..355a6923849d3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -26,6 +26,7 @@
+ #include <linux/slab.h>
+ #include <linux/power_supply.h>
+ #include <linux/pm_runtime.h>
++#include <linux/suspend.h>
+ #include <acpi/video.h>
+ #include <acpi/actbl.h>
+ 
+@@ -906,7 +907,7 @@ bool amdgpu_acpi_is_s0ix_supported(struct amdgpu_device *adev)
+ #if defined(CONFIG_AMD_PMC) || defined(CONFIG_AMD_PMC_MODULE)
+ 	if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) {
+ 		if (adev->flags & AMD_IS_APU)
+-			return true;
++			return pm_suspend_target_state == PM_SUSPEND_TO_IDLE;
+ 	}
+ #endif
+ 	return false;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index d83f2ee150b86..cb3ad1395e13c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3412,13 +3412,13 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ 	r = amdgpu_device_get_job_timeout_settings(adev);
+ 	if (r) {
+ 		dev_err(adev->dev, "invalid lockup_timeout parameter syntax\n");
+-		goto failed_unmap;
++		return r;
+ 	}
+ 
+ 	/* early init functions */
+ 	r = amdgpu_device_ip_early_init(adev);
+ 	if (r)
+-		goto failed_unmap;
++		return r;
+ 
+ 	/* doorbell bar mapping and doorbell index init*/
+ 	amdgpu_device_doorbell_init(adev);
+@@ -3644,10 +3644,6 @@ release_ras_con:
+ failed:
+ 	amdgpu_vf_error_trans_all(adev);
+ 
+-failed_unmap:
+-	iounmap(adev->rmmio);
+-	adev->rmmio = NULL;
+-
+ 	return r;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+index c4828bd3264bc..b0ee77ee80b90 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+@@ -67,7 +67,7 @@ static int psp_v12_0_init_microcode(struct psp_context *psp)
+ 
+ 	err = psp_init_asd_microcode(psp, chip_name);
+ 	if (err)
+-		goto out;
++		return err;
+ 
+ 	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_ta.bin", chip_name);
+ 	err = request_firmware(&adev->psp.ta_fw, fw_name, adev->dev);
+@@ -80,7 +80,7 @@ static int psp_v12_0_init_microcode(struct psp_context *psp)
+ 	} else {
+ 		err = amdgpu_ucode_validate(adev->psp.ta_fw);
+ 		if (err)
+-			goto out2;
++			goto out;
+ 
+ 		ta_hdr = (const struct ta_firmware_header_v1_0 *)
+ 				 adev->psp.ta_fw->data;
+@@ -105,10 +105,9 @@ static int psp_v12_0_init_microcode(struct psp_context *psp)
+ 
+ 	return 0;
+ 
+-out2:
++out:
+ 	release_firmware(adev->psp.ta_fw);
+ 	adev->psp.ta_fw = NULL;
+-out:
+ 	if (err) {
+ 		dev_err(adev->dev,
+ 			"psp v12.0: Failed to load firmware \"%s\"\n",
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
+index 372d53b5a34d4..f47d469ee9149 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c
+@@ -135,7 +135,7 @@ void dcn20_update_clocks_update_dentist(struct clk_mgr_internal *clk_mgr)
+ 
+ 	REG_UPDATE(DENTIST_DISPCLK_CNTL,
+ 			DENTIST_DISPCLK_WDIVIDER, dispclk_wdivider);
+-//	REG_WAIT(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_CHG_DONE, 1, 5, 100);
++	REG_WAIT(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_CHG_DONE, 1, 50, 1000);
+ 	REG_UPDATE(DENTIST_DISPCLK_CNTL,
+ 			DENTIST_DPPCLK_WDIVIDER, dppclk_wdivider);
+ 	REG_WAIT(DENTIST_DISPCLK_CNTL, DENTIST_DPPCLK_CHG_DONE, 1, 5, 100);
+diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
+index 3d0c035b5e380..04c8d2ff78673 100644
+--- a/drivers/gpu/drm/i915/display/intel_bios.c
++++ b/drivers/gpu/drm/i915/display/intel_bios.c
+@@ -2130,7 +2130,8 @@ static void
+ init_vbt_missing_defaults(struct drm_i915_private *i915)
+ {
+ 	enum port port;
+-	int ports = PORT_A | PORT_B | PORT_C | PORT_D | PORT_E | PORT_F;
++	int ports = BIT(PORT_A) | BIT(PORT_B) | BIT(PORT_C) |
++		    BIT(PORT_D) | BIT(PORT_E) | BIT(PORT_F);
+ 
+ 	if (!HAS_DDI(i915) && !IS_CHERRYVIEW(i915))
+ 		return;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index b569030a0847b..2daf81f630764 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -268,7 +268,7 @@ static const struct dpu_mdp_cfg sc7180_mdp[] = {
+ static const struct dpu_mdp_cfg sm8250_mdp[] = {
+ 	{
+ 	.name = "top_0", .id = MDP_TOP,
+-	.base = 0x0, .len = 0x45C,
++	.base = 0x0, .len = 0x494,
+ 	.features = 0,
+ 	.highest_bank_bit = 0x3, /* TODO: 2 for LP_DDR4 */
+ 	.clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c
+index f4f53f23e331e..146a223a997ae 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
+@@ -762,6 +762,7 @@ int dp_catalog_panel_timing_cfg(struct dp_catalog *dp_catalog)
+ 	dp_write_link(catalog, REG_DP_HSYNC_VSYNC_WIDTH_POLARITY,
+ 				dp_catalog->width_blanking);
+ 	dp_write_link(catalog, REG_DP_ACTIVE_HOR_VER, dp_catalog->dp_active);
++	dp_write_p0(catalog, MMSS_DP_INTF_CONFIG, 0);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 2a8955ca70d1a..6856223e91e12 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1528,7 +1528,7 @@ static int dp_ctrl_process_phy_test_request(struct dp_ctrl_private *ctrl)
+ 	 * running. Add the global reset just before disabling the
+ 	 * link clocks and core clocks.
+ 	 */
+-	ret = dp_ctrl_off(&ctrl->dp_ctrl);
++	ret = dp_ctrl_off_link_stream(&ctrl->dp_ctrl);
+ 	if (ret) {
+ 		DRM_ERROR("failed to disable DP controller\n");
+ 		return ret;
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index be312b5c04dd9..1301d42cfffb4 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -4124,7 +4124,7 @@ static const struct drm_display_mode yes_optoelectronics_ytc700tlag_05_201c_mode
+ static const struct panel_desc yes_optoelectronics_ytc700tlag_05_201c = {
+ 	.modes = &yes_optoelectronics_ytc700tlag_05_201c_mode,
+ 	.num_modes = 1,
+-	.bpc = 6,
++	.bpc = 8,
+ 	.size = {
+ 		.width = 154,
+ 		.height = 90,
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 81d7d12bcf342..496a000ef862c 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -3831,7 +3831,7 @@ int wacom_setup_touch_input_capabilities(struct input_dev *input_dev,
+ 		    wacom_wac->shared->touch->product == 0xF6) {
+ 			input_dev->evbit[0] |= BIT_MASK(EV_SW);
+ 			__set_bit(SW_MUTE_DEVICE, input_dev->swbit);
+-			wacom_wac->shared->has_mute_touch_switch = true;
++			wacom_wac->has_mute_touch_switch = true;
+ 		}
+ 		fallthrough;
+ 
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 8bfbf0231a9ef..25550d982238c 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -120,6 +120,7 @@ static int bnxt_re_setup_chip_ctx(struct bnxt_re_dev *rdev, u8 wqe_mode)
+ 	if (!chip_ctx)
+ 		return -ENOMEM;
+ 	chip_ctx->chip_num = bp->chip_num;
++	chip_ctx->hw_stats_size = bp->hw_ring_stats_size;
+ 
+ 	rdev->chip_ctx = chip_ctx;
+ 	/* rest members to follow eventually */
+@@ -547,6 +548,7 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
+ 				       dma_addr_t dma_map,
+ 				       u32 *fw_stats_ctx_id)
+ {
++	struct bnxt_qplib_chip_ctx *chip_ctx = rdev->chip_ctx;
+ 	struct hwrm_stat_ctx_alloc_output resp = {0};
+ 	struct hwrm_stat_ctx_alloc_input req = {0};
+ 	struct bnxt_en_dev *en_dev = rdev->en_dev;
+@@ -563,7 +565,7 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_ALLOC, -1, -1);
+ 	req.update_period_ms = cpu_to_le32(1000);
+ 	req.stats_dma_addr = cpu_to_le64(dma_map);
+-	req.stats_dma_length = cpu_to_le16(sizeof(struct ctx_hw_stats_ext));
++	req.stats_dma_length = cpu_to_le16(chip_ctx->hw_stats_size);
+ 	req.stat_ctx_flags = STAT_CTX_ALLOC_REQ_STAT_CTX_FLAGS_ROCE;
+ 	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
+ 			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+index 3ca47004b7527..754dcebeb4ca1 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+@@ -56,6 +56,7 @@
+ static void bnxt_qplib_free_stats_ctx(struct pci_dev *pdev,
+ 				      struct bnxt_qplib_stats *stats);
+ static int bnxt_qplib_alloc_stats_ctx(struct pci_dev *pdev,
++				      struct bnxt_qplib_chip_ctx *cctx,
+ 				      struct bnxt_qplib_stats *stats);
+ 
+ /* PBL */
+@@ -559,7 +560,7 @@ int bnxt_qplib_alloc_ctx(struct bnxt_qplib_res *res,
+ 		goto fail;
+ stats_alloc:
+ 	/* Stats */
+-	rc = bnxt_qplib_alloc_stats_ctx(res->pdev, &ctx->stats);
++	rc = bnxt_qplib_alloc_stats_ctx(res->pdev, res->cctx, &ctx->stats);
+ 	if (rc)
+ 		goto fail;
+ 
+@@ -889,15 +890,12 @@ static void bnxt_qplib_free_stats_ctx(struct pci_dev *pdev,
+ }
+ 
+ static int bnxt_qplib_alloc_stats_ctx(struct pci_dev *pdev,
++				      struct bnxt_qplib_chip_ctx *cctx,
+ 				      struct bnxt_qplib_stats *stats)
+ {
+ 	memset(stats, 0, sizeof(*stats));
+ 	stats->fw_id = -1;
+-	/* 128 byte aligned context memory is required only for 57500.
+-	 * However making this unconditional, it does not harm previous
+-	 * generation.
+-	 */
+-	stats->size = ALIGN(sizeof(struct ctx_hw_stats), 128);
++	stats->size = cctx->hw_stats_size;
+ 	stats->dma = dma_alloc_coherent(&pdev->dev, stats->size,
+ 					&stats->dma_map, GFP_KERNEL);
+ 	if (!stats->dma) {
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.h b/drivers/infiniband/hw/bnxt_re/qplib_res.h
+index 7a1ab38b95da1..58bad6f784567 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.h
+@@ -60,6 +60,7 @@ struct bnxt_qplib_chip_ctx {
+ 	u16	chip_num;
+ 	u8	chip_rev;
+ 	u8	chip_metal;
++	u16	hw_stats_size;
+ 	struct bnxt_qplib_drv_modes modes;
+ };
+ 
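All of the bnxt_re hunks above serve one change: the stats DMA buffer is now
sized from the per-chip context (hw_stats_size, taken from the Ethernet
driver's bp->hw_ring_stats_size) instead of a hard-coded
ALIGN(sizeof(struct ctx_hw_stats), 128). A minimal sketch of the resulting
allocation pattern, with simplified stand-in structs rather than the real
driver types:

    /* Sketch only: the structs below are simplified stand-ins. */
    struct chip_ctx  { u16 hw_stats_size; };
    struct stats_buf { void *vaddr; dma_addr_t dma; u16 size; };

    static int stats_alloc(struct device *dev, struct chip_ctx *cctx,
                           struct stats_buf *st)
    {
            st->size  = cctx->hw_stats_size;  /* per-chip, not sizeof() */
            st->vaddr = dma_alloc_coherent(dev, st->size, &st->dma,
                                           GFP_KERNEL);
            return st->vaddr ? 0 : -ENOMEM;
    }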
+diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
+index fe2b7d223183f..fa3d29825ef67 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mr.c
++++ b/drivers/infiniband/sw/rxe/rxe_mr.c
+@@ -130,13 +130,14 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+ 	int			num_buf;
+ 	void			*vaddr;
+ 	int err;
++	int i;
+ 
+ 	umem = ib_umem_get(pd->ibpd.device, start, length, access);
+ 	if (IS_ERR(umem)) {
+-		pr_warn("err %d from rxe_umem_get\n",
+-			(int)PTR_ERR(umem));
++		pr_warn("%s: Unable to pin memory region err = %d\n",
++			__func__, (int)PTR_ERR(umem));
+ 		err = PTR_ERR(umem);
+-		goto err1;
++		goto err_out;
+ 	}
+ 
+ 	mr->umem = umem;
+@@ -146,9 +147,9 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+ 
+ 	err = rxe_mr_alloc(mr, num_buf);
+ 	if (err) {
+-		pr_warn("err %d from rxe_mr_alloc\n", err);
+-		ib_umem_release(umem);
+-		goto err1;
++		pr_warn("%s: Unable to allocate memory for map\n",
++				__func__);
++		goto err_release_umem;
+ 	}
+ 
+ 	mr->page_shift = PAGE_SHIFT;
+@@ -168,10 +169,10 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+ 
+ 			vaddr = page_address(sg_page_iter_page(&sg_iter));
+ 			if (!vaddr) {
+-				pr_warn("null vaddr\n");
+-				ib_umem_release(umem);
++				pr_warn("%s: Unable to get virtual address\n",
++						__func__);
+ 				err = -ENOMEM;
+-				goto err1;
++				goto err_cleanup_map;
+ 			}
+ 
+ 			buf->addr = (uintptr_t)vaddr;
+@@ -194,7 +195,13 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+ 
+ 	return 0;
+ 
+-err1:
++err_cleanup_map:
++	for (i = 0; i < mr->num_map; i++)
++		kfree(mr->map[i]);
++	kfree(mr->map);
++err_release_umem:
++	ib_umem_release(umem);
++err_out:
+ 	return err;
+ }
+ 
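The rxe_mr_init_user() rework replaces the catch-all err1 label with labels
named for what still has to be undone, so each failure point jumps into an
unwind ladder that releases exactly the resources acquired so far. The
skeleton of the idiom (resource names here are illustrative, not from the
driver):

    int setup(void)
    {
            int err;

            err = acquire_a();              /* e.g. ib_umem_get()  */
            if (err)
                    goto err_out;
            err = acquire_b();              /* e.g. rxe_mr_alloc() */
            if (err)
                    goto err_release_a;
            err = acquire_c();
            if (err)
                    goto err_release_b;
            return 0;

    err_release_b:
            release_b();
    err_release_a:
            release_a();
    err_out:
            return err;
    }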
+diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c
+index 6f5d6d04a8b96..c84a198776c7a 100644
+--- a/drivers/net/can/spi/hi311x.c
++++ b/drivers/net/can/spi/hi311x.c
+@@ -218,7 +218,7 @@ static int hi3110_spi_trans(struct spi_device *spi, int len)
+ 	return ret;
+ }
+ 
+-static u8 hi3110_cmd(struct spi_device *spi, u8 command)
++static int hi3110_cmd(struct spi_device *spi, u8 command)
+ {
+ 	struct hi3110_priv *priv = spi_get_drvdata(spi);
+ 
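The hi3110_cmd() change widens the return type from u8 to int because the
function propagates the result of the SPI transfer, and a negative errno
stored into a u8 is silently truncated to a small positive value. A
standalone C demonstration of the truncation (illustrative, not driver
code):

    #include <stdio.h>

    static unsigned char cmd_u8(void)  { return -5; }  /* -EIO, truncated */
    static int           cmd_int(void) { return -5; }  /* -EIO, preserved */

    int main(void)
    {
            /* prints "251 -5": the u8 version can never report failure */
            printf("%d %d\n", cmd_u8(), cmd_int());
            return 0;
    }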
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index e0ae00e34c7be..d371af7ab4969 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -2300,6 +2300,7 @@ static irqreturn_t mcp251xfd_irq(int irq, void *dev_id)
+ 		   err, priv->regs_status.intf);
+ 	mcp251xfd_dump(priv);
+ 	mcp251xfd_chip_interrupts_disable(priv);
++	mcp251xfd_timestamp_stop(priv);
+ 
+ 	return handled;
+ }
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index 0a37af4a3fa40..2b5302e724353 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -255,6 +255,8 @@ struct ems_usb {
+ 	unsigned int free_slots; /* remember number of available slots */
+ 
+ 	struct ems_cpc_msg active_params; /* active controller parameters */
++	void *rxbuf[MAX_RX_URBS];
++	dma_addr_t rxbuf_dma[MAX_RX_URBS];
+ };
+ 
+ static void ems_usb_read_interrupt_callback(struct urb *urb)
+@@ -587,6 +589,7 @@ static int ems_usb_start(struct ems_usb *dev)
+ 	for (i = 0; i < MAX_RX_URBS; i++) {
+ 		struct urb *urb = NULL;
+ 		u8 *buf = NULL;
++		dma_addr_t buf_dma;
+ 
+ 		/* create a URB, and a buffer for it */
+ 		urb = usb_alloc_urb(0, GFP_KERNEL);
+@@ -596,7 +599,7 @@ static int ems_usb_start(struct ems_usb *dev)
+ 		}
+ 
+ 		buf = usb_alloc_coherent(dev->udev, RX_BUFFER_SIZE, GFP_KERNEL,
+-					 &urb->transfer_dma);
++					 &buf_dma);
+ 		if (!buf) {
+ 			netdev_err(netdev, "No memory left for USB buffer\n");
+ 			usb_free_urb(urb);
+@@ -604,6 +607,8 @@ static int ems_usb_start(struct ems_usb *dev)
+ 			break;
+ 		}
+ 
++		urb->transfer_dma = buf_dma;
++
+ 		usb_fill_bulk_urb(urb, dev->udev, usb_rcvbulkpipe(dev->udev, 2),
+ 				  buf, RX_BUFFER_SIZE,
+ 				  ems_usb_read_bulk_callback, dev);
+@@ -619,6 +624,9 @@ static int ems_usb_start(struct ems_usb *dev)
+ 			break;
+ 		}
+ 
++		dev->rxbuf[i] = buf;
++		dev->rxbuf_dma[i] = buf_dma;
++
+ 		/* Drop reference, USB core will take care of freeing it */
+ 		usb_free_urb(urb);
+ 	}
+@@ -684,6 +692,10 @@ static void unlink_all_urbs(struct ems_usb *dev)
+ 
+ 	usb_kill_anchored_urbs(&dev->rx_submitted);
+ 
++	for (i = 0; i < MAX_RX_URBS; ++i)
++		usb_free_coherent(dev->udev, RX_BUFFER_SIZE,
++				  dev->rxbuf[i], dev->rxbuf_dma[i]);
++
+ 	usb_kill_anchored_urbs(&dev->tx_submitted);
+ 	atomic_set(&dev->active_tx_urbs, 0);
+ 
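This hunk and the matching esd_usb2, mcba_usb, and usb_8dev hunks below fix
the same leak: the coherent RX buffers allocated at start-up were never
freed at teardown. The fix allocates into a local dma_addr_t, assigns
urb->transfer_dma explicitly, and records each buffer and DMA handle in the
device structure so unlink_all_urbs() can release them. The pattern,
condensed (MAX_RX_URBS and RX_BUFFER_SIZE as defined by each driver):

    /* setup */
    for (i = 0; i < MAX_RX_URBS; i++) {
            dma_addr_t buf_dma;
            u8 *buf = usb_alloc_coherent(dev->udev, RX_BUFFER_SIZE,
                                         GFP_KERNEL, &buf_dma);
            if (!buf)
                    break;
            urb->transfer_dma = buf_dma;
            dev->rxbuf[i]     = buf;      /* remembered for teardown */
            dev->rxbuf_dma[i] = buf_dma;
    }

    /* teardown, in unlink_all_urbs() */
    for (i = 0; i < MAX_RX_URBS; i++)
            usb_free_coherent(dev->udev, RX_BUFFER_SIZE,
                              dev->rxbuf[i], dev->rxbuf_dma[i]);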
+diff --git a/drivers/net/can/usb/esd_usb2.c b/drivers/net/can/usb/esd_usb2.c
+index 65b58f8fc3287..66fa8b07c2e6f 100644
+--- a/drivers/net/can/usb/esd_usb2.c
++++ b/drivers/net/can/usb/esd_usb2.c
+@@ -195,6 +195,8 @@ struct esd_usb2 {
+ 	int net_count;
+ 	u32 version;
+ 	int rxinitdone;
++	void *rxbuf[MAX_RX_URBS];
++	dma_addr_t rxbuf_dma[MAX_RX_URBS];
+ };
+ 
+ struct esd_usb2_net_priv {
+@@ -545,6 +547,7 @@ static int esd_usb2_setup_rx_urbs(struct esd_usb2 *dev)
+ 	for (i = 0; i < MAX_RX_URBS; i++) {
+ 		struct urb *urb = NULL;
+ 		u8 *buf = NULL;
++		dma_addr_t buf_dma;
+ 
+ 		/* create a URB, and a buffer for it */
+ 		urb = usb_alloc_urb(0, GFP_KERNEL);
+@@ -554,7 +557,7 @@ static int esd_usb2_setup_rx_urbs(struct esd_usb2 *dev)
+ 		}
+ 
+ 		buf = usb_alloc_coherent(dev->udev, RX_BUFFER_SIZE, GFP_KERNEL,
+-					 &urb->transfer_dma);
++					 &buf_dma);
+ 		if (!buf) {
+ 			dev_warn(dev->udev->dev.parent,
+ 				 "No memory left for USB buffer\n");
+@@ -562,6 +565,8 @@ static int esd_usb2_setup_rx_urbs(struct esd_usb2 *dev)
+ 			goto freeurb;
+ 		}
+ 
++		urb->transfer_dma = buf_dma;
++
+ 		usb_fill_bulk_urb(urb, dev->udev,
+ 				  usb_rcvbulkpipe(dev->udev, 1),
+ 				  buf, RX_BUFFER_SIZE,
+@@ -574,8 +579,12 @@ static int esd_usb2_setup_rx_urbs(struct esd_usb2 *dev)
+ 			usb_unanchor_urb(urb);
+ 			usb_free_coherent(dev->udev, RX_BUFFER_SIZE, buf,
+ 					  urb->transfer_dma);
++			goto freeurb;
+ 		}
+ 
++		dev->rxbuf[i] = buf;
++		dev->rxbuf_dma[i] = buf_dma;
++
+ freeurb:
+ 		/* Drop reference, USB core will take care of freeing it */
+ 		usb_free_urb(urb);
+@@ -663,6 +672,11 @@ static void unlink_all_urbs(struct esd_usb2 *dev)
+ 	int i, j;
+ 
+ 	usb_kill_anchored_urbs(&dev->rx_submitted);
++
++	for (i = 0; i < MAX_RX_URBS; ++i)
++		usb_free_coherent(dev->udev, RX_BUFFER_SIZE,
++				  dev->rxbuf[i], dev->rxbuf_dma[i]);
++
+ 	for (i = 0; i < dev->net_count; i++) {
+ 		priv = dev->nets[i];
+ 		if (priv) {
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index a45865bd72546..a1a154c08b7f7 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -653,6 +653,8 @@ static int mcba_usb_start(struct mcba_priv *priv)
+ 			break;
+ 		}
+ 
++		urb->transfer_dma = buf_dma;
++
+ 		usb_fill_bulk_urb(urb, priv->udev,
+ 				  usb_rcvbulkpipe(priv->udev, MCBA_USB_EP_IN),
+ 				  buf, MCBA_USB_RX_BUFF_SIZE,
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb.c b/drivers/net/can/usb/peak_usb/pcan_usb.c
+index 1d6f77252f018..899a3d21b77f9 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb.c
+@@ -117,7 +117,8 @@
+ #define PCAN_USB_BERR_MASK	(PCAN_USB_ERR_RXERR | PCAN_USB_ERR_TXERR)
+ 
+ /* identify bus event packets with rx/tx error counters */
+-#define PCAN_USB_ERR_CNT		0x80
++#define PCAN_USB_ERR_CNT_DEC		0x00	/* counters are decreasing */
++#define PCAN_USB_ERR_CNT_INC		0x80	/* counters are increasing */
+ 
+ /* private to PCAN-USB adapter */
+ struct pcan_usb {
+@@ -608,11 +609,12 @@ static int pcan_usb_handle_bus_evt(struct pcan_usb_msg_context *mc, u8 ir)
+ 
+ 	/* according to the content of the packet */
+ 	switch (ir) {
+-	case PCAN_USB_ERR_CNT:
++	case PCAN_USB_ERR_CNT_DEC:
++	case PCAN_USB_ERR_CNT_INC:
+ 
+ 		/* save rx/tx error counters into the device context */
+-		pdev->bec.rxerr = mc->ptr[0];
+-		pdev->bec.txerr = mc->ptr[1];
++		pdev->bec.rxerr = mc->ptr[1];
++		pdev->bec.txerr = mc->ptr[2];
+ 		break;
+ 
+ 	default:
+diff --git a/drivers/net/can/usb/usb_8dev.c b/drivers/net/can/usb/usb_8dev.c
+index b6e7ef0d5bc69..d1b83bd1b3cb9 100644
+--- a/drivers/net/can/usb/usb_8dev.c
++++ b/drivers/net/can/usb/usb_8dev.c
+@@ -137,7 +137,8 @@ struct usb_8dev_priv {
+ 	u8 *cmd_msg_buffer;
+ 
+ 	struct mutex usb_8dev_cmd_lock;
+-
++	void *rxbuf[MAX_RX_URBS];
++	dma_addr_t rxbuf_dma[MAX_RX_URBS];
+ };
+ 
+ /* tx frame */
+@@ -733,6 +734,7 @@ static int usb_8dev_start(struct usb_8dev_priv *priv)
+ 	for (i = 0; i < MAX_RX_URBS; i++) {
+ 		struct urb *urb = NULL;
+ 		u8 *buf;
++		dma_addr_t buf_dma;
+ 
+ 		/* create a URB, and a buffer for it */
+ 		urb = usb_alloc_urb(0, GFP_KERNEL);
+@@ -742,7 +744,7 @@ static int usb_8dev_start(struct usb_8dev_priv *priv)
+ 		}
+ 
+ 		buf = usb_alloc_coherent(priv->udev, RX_BUFFER_SIZE, GFP_KERNEL,
+-					 &urb->transfer_dma);
++					 &buf_dma);
+ 		if (!buf) {
+ 			netdev_err(netdev, "No memory left for USB buffer\n");
+ 			usb_free_urb(urb);
+@@ -750,6 +752,8 @@ static int usb_8dev_start(struct usb_8dev_priv *priv)
+ 			break;
+ 		}
+ 
++		urb->transfer_dma = buf_dma;
++
+ 		usb_fill_bulk_urb(urb, priv->udev,
+ 				  usb_rcvbulkpipe(priv->udev,
+ 						  USB_8DEV_ENDP_DATA_RX),
+@@ -767,6 +771,9 @@ static int usb_8dev_start(struct usb_8dev_priv *priv)
+ 			break;
+ 		}
+ 
++		priv->rxbuf[i] = buf;
++		priv->rxbuf_dma[i] = buf_dma;
++
+ 		/* Drop reference, USB core will take care of freeing it */
+ 		usb_free_urb(urb);
+ 	}
+@@ -836,6 +843,10 @@ static void unlink_all_urbs(struct usb_8dev_priv *priv)
+ 
+ 	usb_kill_anchored_urbs(&priv->rx_submitted);
+ 
++	for (i = 0; i < MAX_RX_URBS; ++i)
++		usb_free_coherent(priv->udev, RX_BUFFER_SIZE,
++				  priv->rxbuf[i], priv->rxbuf_dma[i]);
++
+ 	usb_kill_anchored_urbs(&priv->tx_submitted);
+ 	atomic_set(&priv->active_tx_urbs, 0);
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index beb41572d04ea..272b0535d9461 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -2155,7 +2155,7 @@ static int mv88e6xxx_port_vlan_leave(struct mv88e6xxx_chip *chip,
+ 	int i, err;
+ 
+ 	if (!vid)
+-		return -EOPNOTSUPP;
++		return 0;
+ 
+ 	err = mv88e6xxx_vtu_get(chip, vid, &vlan);
+ 	if (err)
+diff --git a/drivers/net/ethernet/dec/tulip/winbond-840.c b/drivers/net/ethernet/dec/tulip/winbond-840.c
+index 514df170ec5df..c967e0e859e5e 100644
+--- a/drivers/net/ethernet/dec/tulip/winbond-840.c
++++ b/drivers/net/ethernet/dec/tulip/winbond-840.c
+@@ -357,7 +357,7 @@ static int w840_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	int i, option = find_cnt < MAX_UNITS ? options[find_cnt] : 0;
+ 	void __iomem *ioaddr;
+ 
+-	i = pci_enable_device(pdev);
++	i = pcim_enable_device(pdev);
+ 	if (i) return i;
+ 
+ 	pci_set_master(pdev);
+@@ -379,7 +379,7 @@ static int w840_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	ioaddr = pci_iomap(pdev, TULIP_BAR, netdev_res_size);
+ 	if (!ioaddr)
+-		goto err_out_free_res;
++		goto err_out_netdev;
+ 
+ 	for (i = 0; i < 3; i++)
+ 		((__le16 *)dev->dev_addr)[i] = cpu_to_le16(eeprom_read(ioaddr, i));
+@@ -458,8 +458,6 @@ static int w840_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ err_out_cleardev:
+ 	pci_iounmap(pdev, ioaddr);
+-err_out_free_res:
+-	pci_release_regions(pdev);
+ err_out_netdev:
+ 	free_netdev (dev);
+ 	return -ENODEV;
+@@ -1526,7 +1524,6 @@ static void w840_remove1(struct pci_dev *pdev)
+ 	if (dev) {
+ 		struct netdev_private *np = netdev_priv(dev);
+ 		unregister_netdev(dev);
+-		pci_release_regions(pdev);
+ 		pci_iounmap(pdev, np->base_addr);
+ 		free_netdev(dev);
+ 	}
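Both this driver and sis900 further down switch to pcim_enable_device(),
the device-managed (devres) variant of pci_enable_device(). Once a PCI
device is managed, subsequent region requests and iomaps are released
automatically on driver unbind, which is why the explicit
pci_release_regions() calls disappear from the error path and from
remove(). A minimal sketch of the managed probe shape:

    static int probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
            int err;

            err = pcim_enable_device(pdev);  /* devres-managed enable */
            if (err)
                    return err;
            pci_set_master(pdev);
            /*
             * No pci_release_regions()/pci_disable_device() needed on any
             * exit path: devres undoes both when probe fails or the
             * driver is unbound.
             */
            return 0;
    }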
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 3e822bad48513..2c9e4eeb7270d 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -980,7 +980,7 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw,
+ 	default:
+ 		/* if we got here and link is up something bad is afoot */
+ 		netdev_info(netdev,
+-			    "WARNING: Link is up but PHY type 0x%x is not recognized.\n",
++			    "WARNING: Link is up but PHY type 0x%x is not recognized, or incorrect cable is in use\n",
+ 			    hw_link_info->phy_type);
+ 	}
+ 
+@@ -5294,6 +5294,10 @@ flags_complete:
+ 					dev_warn(&pf->pdev->dev,
+ 						 "Device configuration forbids SW from starting the LLDP agent.\n");
+ 					return -EINVAL;
++				case I40E_AQ_RC_EAGAIN:
++					dev_warn(&pf->pdev->dev,
++						 "Stop FW LLDP agent command is still being processed, please try again in a second.\n");
++					return -EBUSY;
+ 				default:
+ 					dev_warn(&pf->pdev->dev,
+ 						 "Starting FW LLDP agent failed: error: %s, %s\n",
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index f9fe500d4ec44..4e5c53a6265ce 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -4454,11 +4454,10 @@ int i40e_control_wait_tx_q(int seid, struct i40e_pf *pf, int pf_q,
+ }
+ 
+ /**
+- * i40e_vsi_control_tx - Start or stop a VSI's rings
++ * i40e_vsi_enable_tx - Start a VSI's rings
+  * @vsi: the VSI being configured
+- * @enable: start or stop the rings
+  **/
+-static int i40e_vsi_control_tx(struct i40e_vsi *vsi, bool enable)
++static int i40e_vsi_enable_tx(struct i40e_vsi *vsi)
+ {
+ 	struct i40e_pf *pf = vsi->back;
+ 	int i, pf_q, ret = 0;
+@@ -4467,7 +4466,7 @@ static int i40e_vsi_control_tx(struct i40e_vsi *vsi, bool enable)
+ 	for (i = 0; i < vsi->num_queue_pairs; i++, pf_q++) {
+ 		ret = i40e_control_wait_tx_q(vsi->seid, pf,
+ 					     pf_q,
+-					     false /*is xdp*/, enable);
++					     false /*is xdp*/, true);
+ 		if (ret)
+ 			break;
+ 
+@@ -4476,7 +4475,7 @@ static int i40e_vsi_control_tx(struct i40e_vsi *vsi, bool enable)
+ 
+ 		ret = i40e_control_wait_tx_q(vsi->seid, pf,
+ 					     pf_q + vsi->alloc_queue_pairs,
+-					     true /*is xdp*/, enable);
++					     true /*is xdp*/, true);
+ 		if (ret)
+ 			break;
+ 	}
+@@ -4574,32 +4573,25 @@ int i40e_control_wait_rx_q(struct i40e_pf *pf, int pf_q, bool enable)
+ }
+ 
+ /**
+- * i40e_vsi_control_rx - Start or stop a VSI's rings
++ * i40e_vsi_enable_rx - Start a VSI's rings
+  * @vsi: the VSI being configured
+- * @enable: start or stop the rings
+  **/
+-static int i40e_vsi_control_rx(struct i40e_vsi *vsi, bool enable)
++static int i40e_vsi_enable_rx(struct i40e_vsi *vsi)
+ {
+ 	struct i40e_pf *pf = vsi->back;
+ 	int i, pf_q, ret = 0;
+ 
+ 	pf_q = vsi->base_queue;
+ 	for (i = 0; i < vsi->num_queue_pairs; i++, pf_q++) {
+-		ret = i40e_control_wait_rx_q(pf, pf_q, enable);
++		ret = i40e_control_wait_rx_q(pf, pf_q, true);
+ 		if (ret) {
+ 			dev_info(&pf->pdev->dev,
+-				 "VSI seid %d Rx ring %d %sable timeout\n",
+-				 vsi->seid, pf_q, (enable ? "en" : "dis"));
++				 "VSI seid %d Rx ring %d enable timeout\n",
++				 vsi->seid, pf_q);
+ 			break;
+ 		}
+ 	}
+ 
+-	/* Due to HW errata, on Rx disable only, the register can indicate done
+-	 * before it really is. Needs 50ms to be sure
+-	 */
+-	if (!enable)
+-		mdelay(50);
+-
+ 	return ret;
+ }
+ 
+@@ -4612,29 +4604,47 @@ int i40e_vsi_start_rings(struct i40e_vsi *vsi)
+ 	int ret = 0;
+ 
+ 	/* do rx first for enable and last for disable */
+-	ret = i40e_vsi_control_rx(vsi, true);
++	ret = i40e_vsi_enable_rx(vsi);
+ 	if (ret)
+ 		return ret;
+-	ret = i40e_vsi_control_tx(vsi, true);
++	ret = i40e_vsi_enable_tx(vsi);
+ 
+ 	return ret;
+ }
+ 
++#define I40E_DISABLE_TX_GAP_MSEC	50
++
+ /**
+  * i40e_vsi_stop_rings - Stop a VSI's rings
+  * @vsi: the VSI being configured
+  **/
+ void i40e_vsi_stop_rings(struct i40e_vsi *vsi)
+ {
++	struct i40e_pf *pf = vsi->back;
++	int pf_q, err, q_end;
++
+ 	/* When port TX is suspended, don't wait */
+ 	if (test_bit(__I40E_PORT_SUSPENDED, vsi->back->state))
+ 		return i40e_vsi_stop_rings_no_wait(vsi);
+ 
+-	/* do rx first for enable and last for disable
+-	 * Ignore return value, we need to shutdown whatever we can
+-	 */
+-	i40e_vsi_control_tx(vsi, false);
+-	i40e_vsi_control_rx(vsi, false);
++	q_end = vsi->base_queue + vsi->num_queue_pairs;
++	for (pf_q = vsi->base_queue; pf_q < q_end; pf_q++)
++		i40e_pre_tx_queue_cfg(&pf->hw, (u32)pf_q, false);
++
++	for (pf_q = vsi->base_queue; pf_q < q_end; pf_q++) {
++		err = i40e_control_wait_rx_q(pf, pf_q, false);
++		if (err)
++			dev_info(&pf->pdev->dev,
++				 "VSI seid %d Rx ring %d disable timeout\n",
++				 vsi->seid, pf_q);
++	}
++
++	msleep(I40E_DISABLE_TX_GAP_MSEC);
++	pf_q = vsi->base_queue;
++	for (pf_q = vsi->base_queue; pf_q < q_end; pf_q++)
++		wr32(&pf->hw, I40E_QTX_ENA(pf_q), 0);
++
++	i40e_vsi_wait_queues_disabled(vsi);
+ }
+ 
+ /**
+@@ -7280,6 +7290,8 @@ static int i40e_validate_mqprio_qopt(struct i40e_vsi *vsi,
+ 	}
+ 	if (vsi->num_queue_pairs <
+ 	    (mqprio_qopt->qopt.offset[i] + mqprio_qopt->qopt.count[i])) {
++		dev_err(&vsi->back->pdev->dev,
++			"Failed to create traffic channel, insufficient number of queues.\n");
+ 		return -EINVAL;
+ 	}
+ 	if (sum_max_rate > i40e_get_link_speed(vsi)) {
+@@ -13261,6 +13273,7 @@ static const struct net_device_ops i40e_netdev_ops = {
+ 	.ndo_poll_controller	= i40e_netpoll,
+ #endif
+ 	.ndo_setup_tc		= __i40e_setup_tc,
++	.ndo_select_queue	= i40e_lan_select_queue,
+ 	.ndo_set_features	= i40e_set_features,
+ 	.ndo_set_vf_mac		= i40e_ndo_set_vf_mac,
+ 	.ndo_set_vf_vlan	= i40e_ndo_set_vf_port_vlan,
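The rewritten i40e_vsi_stop_rings() encodes a fixed shutdown ordering in
place of the old control_tx/control_rx calls: pre-configure every Tx queue
for disable, disable and wait on every Rx queue, observe the 50 ms hardware
gap, clear the Tx queue enables, then wait for everything to report
disabled. Condensed, using the same helpers as the hunk above:

    for (pf_q = vsi->base_queue; pf_q < q_end; pf_q++)
            i40e_pre_tx_queue_cfg(&pf->hw, (u32)pf_q, false); /* 1: prep Tx */

    for (pf_q = vsi->base_queue; pf_q < q_end; pf_q++)
            i40e_control_wait_rx_q(pf, pf_q, false);          /* 2: stop Rx */

    msleep(I40E_DISABLE_TX_GAP_MSEC);                         /* 3: HW gap  */

    for (pf_q = vsi->base_queue; pf_q < q_end; pf_q++)
            wr32(&pf->hw, I40E_QTX_ENA(pf_q), 0);             /* 4: stop Tx */

    i40e_vsi_wait_queues_disabled(vsi);                       /* 5: settle  */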
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index b883ab809df30..107fb472319ee 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -3633,6 +3633,56 @@ dma_error:
+ 	return -1;
+ }
+ 
++static u16 i40e_swdcb_skb_tx_hash(struct net_device *dev,
++				  const struct sk_buff *skb,
++				  u16 num_tx_queues)
++{
++	u32 jhash_initval_salt = 0xd631614b;
++	u32 hash;
++
++	if (skb->sk && skb->sk->sk_hash)
++		hash = skb->sk->sk_hash;
++	else
++		hash = (__force u16)skb->protocol ^ skb->hash;
++
++	hash = jhash_1word(hash, jhash_initval_salt);
++
++	return (u16)(((u64)hash * num_tx_queues) >> 32);
++}
++
++u16 i40e_lan_select_queue(struct net_device *netdev,
++			  struct sk_buff *skb,
++			  struct net_device __always_unused *sb_dev)
++{
++	struct i40e_netdev_priv *np = netdev_priv(netdev);
++	struct i40e_vsi *vsi = np->vsi;
++	struct i40e_hw *hw;
++	u16 qoffset;
++	u16 qcount;
++	u8 tclass;
++	u16 hash;
++	u8 prio;
++
++	/* is DCB enabled at all? */
++	if (vsi->tc_config.numtc == 1)
++		return i40e_swdcb_skb_tx_hash(netdev, skb,
++					      netdev->real_num_tx_queues);
++
++	prio = skb->priority;
++	hw = &vsi->back->hw;
++	tclass = hw->local_dcbx_config.etscfg.prioritytable[prio];
++	/* sanity check */
++	if (unlikely(!(vsi->tc_config.enabled_tc & BIT(tclass))))
++		tclass = 0;
++
++	/* select a queue assigned for the given TC */
++	qcount = vsi->tc_config.tc_info[tclass].qcount;
++	hash = i40e_swdcb_skb_tx_hash(netdev, skb, qcount);
++
++	qoffset = vsi->tc_config.tc_info[tclass].qoffset;
++	return qoffset + hash;
++}
++
+ /**
+  * i40e_xmit_xdp_ring - transmits an XDP buffer to an XDP Tx ring
+  * @xdpf: data to transmit
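The last line of i40e_swdcb_skb_tx_hash() is the multiply-shift range
reduction: for a 32-bit hash h and queue count n, the 64-bit product h * n
has its top 32 bits spread over [0, n), which maps the hash onto a queue
without an integer division on the transmit hot path. Worked through on
small illustrative values:

    u32 h    = 0x80000000u;            /* a mid-range 32-bit hash   */
    u16 n    = 6;                      /* six tx queues             */
    u64 prod = (u64)h * n;             /* 0x300000000               */
    u16 q    = (u16)(prod >> 32);      /* 3, the middle of [0, 6)   */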
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+index 86fed05b4f193..bfc2845c99d1c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+@@ -451,6 +451,8 @@ static inline unsigned int i40e_rx_pg_order(struct i40e_ring *ring)
+ 
+ bool i40e_alloc_rx_buffers(struct i40e_ring *rxr, u16 cleaned_count);
+ netdev_tx_t i40e_lan_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
++u16 i40e_lan_select_queue(struct net_device *netdev, struct sk_buff *skb,
++			  struct net_device *sb_dev);
+ void i40e_clean_tx_ring(struct i40e_ring *tx_ring);
+ void i40e_clean_rx_ring(struct i40e_ring *rx_ring);
+ int i40e_setup_tx_descriptors(struct i40e_ring *tx_ring);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index fac6474ad694d..f43cb1407e8cd 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -1243,8 +1243,8 @@ static int cgx_lmac_init(struct cgx *cgx)
+ 
+ 		/* Add reference */
+ 		cgx->lmac_idmap[lmac->lmac_id] = lmac;
+-		cgx->mac_ops->mac_pause_frm_config(cgx, lmac->lmac_id, true);
+ 		set_bit(lmac->lmac_id, &cgx->lmac_bmap);
++		cgx->mac_ops->mac_pause_frm_config(cgx, lmac->lmac_id, true);
+ 	}
+ 
+ 	return cgx_lmac_verify_fwi_version(cgx);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+index 0a8bd667cb110..61ab4fdee73a3 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+@@ -3587,7 +3587,6 @@ static void rvu_nix_block_freemem(struct rvu *rvu, int blkaddr,
+ 		vlan = &nix_hw->txvlan;
+ 		kfree(vlan->rsrc.bmap);
+ 		mutex_destroy(&vlan->rsrc_lock);
+-		devm_kfree(rvu->dev, vlan->entry2pfvf_map);
+ 
+ 		mcast = &nix_hw->mcast;
+ 		qmem_free(rvu->dev, mcast->mce_ctx);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index cf7875d51d879..16ba457197a2b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -921,12 +921,14 @@ static int otx2_cq_init(struct otx2_nic *pfvf, u16 qidx)
+ 		aq->cq.drop = RQ_DROP_LVL_CQ(pfvf->hw.rq_skid, cq->cqe_cnt);
+ 		aq->cq.drop_ena = 1;
+ 
+-		/* Enable receive CQ backpressure */
+-		aq->cq.bp_ena = 1;
+-		aq->cq.bpid = pfvf->bpid[0];
++		if (!is_otx2_lbkvf(pfvf->pdev)) {
++			/* Enable receive CQ backpressure */
++			aq->cq.bp_ena = 1;
++			aq->cq.bpid = pfvf->bpid[0];
+ 
+-		/* Set backpressure level is same as cq pass level */
+-		aq->cq.bp = RQ_PASS_LVL_CQ(pfvf->hw.rq_skid, qset->rqe_cnt);
++			/* Set backpressure level same as cq pass level */
++			aq->cq.bp = RQ_PASS_LVL_CQ(pfvf->hw.rq_skid, qset->rqe_cnt);
++		}
+ 	}
+ 
+ 	/* Fill AQ info */
+@@ -1183,7 +1185,7 @@ static int otx2_aura_init(struct otx2_nic *pfvf, int aura_id,
+ 	aq->aura.fc_hyst_bits = 0; /* Store count on all updates */
+ 
+ 	/* Enable backpressure for RQ aura */
+-	if (aura_id < pfvf->hw.rqpool_cnt) {
++	if (aura_id < pfvf->hw.rqpool_cnt && !is_otx2_lbkvf(pfvf->pdev)) {
+ 		aq->aura.bp_ena = 0;
+ 		aq->aura.nix0_bpid = pfvf->bpid[0];
+ 		/* Set backpressure level for RQ's Aura */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+index 9d9a2e438acfc..ae06eeeb5a45d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+@@ -292,15 +292,14 @@ static int otx2_set_channels(struct net_device *dev,
+ 	err = otx2_set_real_num_queues(dev, channel->tx_count,
+ 				       channel->rx_count);
+ 	if (err)
+-		goto fail;
++		return err;
+ 
+ 	pfvf->hw.rx_queues = channel->rx_count;
+ 	pfvf->hw.tx_queues = channel->tx_count;
+ 	pfvf->qset.cq_cnt = pfvf->hw.tx_queues +  pfvf->hw.rx_queues;
+ 
+-fail:
+ 	if (if_up)
+-		dev->netdev_ops->ndo_open(dev);
++		err = dev->netdev_ops->ndo_open(dev);
+ 
+ 	netdev_info(dev, "Setting num Tx rings to %d, Rx rings to %d success\n",
+ 		    pfvf->hw.tx_queues, pfvf->hw.rx_queues);
+@@ -404,7 +403,7 @@ static int otx2_set_ringparam(struct net_device *netdev,
+ 	qs->rqe_cnt = rx_count;
+ 
+ 	if (if_up)
+-		netdev->netdev_ops->ndo_open(netdev);
++		return netdev->netdev_ops->ndo_open(netdev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 03004fdac0c6b..2af50250d13cc 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -1648,6 +1648,7 @@ int otx2_open(struct net_device *netdev)
+ err_tx_stop_queues:
+ 	netif_tx_stop_all_queues(netdev);
+ 	netif_carrier_off(netdev);
++	pf->flags |= OTX2_FLAG_INTF_DOWN;
+ err_free_cints:
+ 	otx2_free_cints(pf, qidx);
+ 	vec = pci_irq_vector(pf->pdev,
+@@ -1675,6 +1676,10 @@ int otx2_stop(struct net_device *netdev)
+ 	struct otx2_rss_info *rss;
+ 	int qidx, vec, wrk;
+ 
++	/* If the DOWN flag is set, resources are already freed */
++	if (pf->flags & OTX2_FLAG_INTF_DOWN)
++		return 0;
++
+ 	netif_carrier_off(netdev);
+ 	netif_tx_stop_all_queues(netdev);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
+index 00c84656b2e7e..28ac4693da3cf 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/main.c
++++ b/drivers/net/ethernet/mellanox/mlx4/main.c
+@@ -3535,6 +3535,7 @@ slave_start:
+ 
+ 		if (!SRIOV_VALID_STATE(dev->flags)) {
+ 			mlx4_err(dev, "Invalid SRIOV state\n");
++			err = -EINVAL;
+ 			goto err_close;
+ 		}
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+index ceebfc20f65e5..def2156e50eeb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -500,10 +500,7 @@ static int next_phys_dev(struct device *dev, const void *data)
+ 	return 1;
+ }
+ 
+-/* This function is called with two flows:
+- * 1. During initialization of mlx5_core_dev and we don't need to lock it.
+- * 2. During LAG configure stage and caller holds &mlx5_intf_mutex.
+- */
++/* Must be called with intf_mutex held */
+ struct mlx5_core_dev *mlx5_get_next_phys_dev(struct mlx5_core_dev *dev)
+ {
+ 	struct auxiliary_device *adev;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+index f410c12684225..133eb13facfd4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+@@ -471,6 +471,15 @@ static void mlx5e_build_rx_cq_param(struct mlx5_core_dev *mdev,
+ 	param->cq_period_mode = params->rx_cq_moderation.cq_period_mode;
+ }
+ 
++static u8 rq_end_pad_mode(struct mlx5_core_dev *mdev, struct mlx5e_params *params)
++{
++	bool ro = pcie_relaxed_ordering_enabled(mdev->pdev) &&
++		MLX5_CAP_GEN(mdev, relaxed_ordering_write);
++
++	return ro && params->lro_en ?
++		MLX5_WQ_END_PAD_MODE_NONE : MLX5_WQ_END_PAD_MODE_ALIGN;
++}
++
+ int mlx5e_build_rq_param(struct mlx5_core_dev *mdev,
+ 			 struct mlx5e_params *params,
+ 			 struct mlx5e_xsk_param *xsk,
+@@ -508,7 +517,7 @@ int mlx5e_build_rq_param(struct mlx5_core_dev *mdev,
+ 	}
+ 
+ 	MLX5_SET(wq, wq, wq_type,          params->rq_wq_type);
+-	MLX5_SET(wq, wq, end_padding_mode, MLX5_WQ_END_PAD_MODE_ALIGN);
++	MLX5_SET(wq, wq, end_padding_mode, rq_end_pad_mode(mdev, params));
+ 	MLX5_SET(wq, wq, log_wq_stride,
+ 		 mlx5e_get_rqwq_log_stride(params->rq_wq_type, ndsegs));
+ 	MLX5_SET(wq, wq, pd,               mdev->mlx5e_res.hw_objs.pdn);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+index 778e229310a93..0f6b3231ca1d7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+@@ -494,7 +494,7 @@ static int mlx5e_init_ptp_rq(struct mlx5e_ptp *c, struct mlx5e_params *params,
+ 	int err;
+ 
+ 	rq->wq_type      = params->rq_wq_type;
+-	rq->pdev         = mdev->device;
++	rq->pdev         = c->pdev;
+ 	rq->netdev       = priv->netdev;
+ 	rq->priv         = priv;
+ 	rq->clock        = &mdev->clock;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c
+index 86ab4e864fe6c..7f94508594fb6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c
+@@ -37,7 +37,7 @@ static void mlx5e_init_trap_rq(struct mlx5e_trap *t, struct mlx5e_params *params
+ 	struct mlx5e_priv *priv = t->priv;
+ 
+ 	rq->wq_type      = params->rq_wq_type;
+-	rq->pdev         = mdev->device;
++	rq->pdev         = t->pdev;
+ 	rq->netdev       = priv->netdev;
+ 	rq->priv         = priv;
+ 	rq->clock        = &mdev->clock;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index d26b8ed511959..d0d9acb172536 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3825,6 +3825,24 @@ int mlx5e_set_features(struct net_device *netdev, netdev_features_t features)
+ 	return 0;
+ }
+ 
++static netdev_features_t mlx5e_fix_uplink_rep_features(struct net_device *netdev,
++						       netdev_features_t features)
++{
++	features &= ~NETIF_F_HW_TLS_RX;
++	if (netdev->features & NETIF_F_HW_TLS_RX)
++		netdev_warn(netdev, "Disabling hw_tls_rx, not supported in switchdev mode\n");
++
++	features &= ~NETIF_F_HW_TLS_TX;
++	if (netdev->features & NETIF_F_HW_TLS_TX)
++		netdev_warn(netdev, "Disabling hw_tls_tx, not supported in switchdev mode\n");
++
++	features &= ~NETIF_F_NTUPLE;
++	if (netdev->features & NETIF_F_NTUPLE)
++		netdev_warn(netdev, "Disabling ntuple, not supported in switchdev mode\n");
++
++	return features;
++}
++
+ static netdev_features_t mlx5e_fix_features(struct net_device *netdev,
+ 					    netdev_features_t features)
+ {
+@@ -3856,15 +3874,8 @@ static netdev_features_t mlx5e_fix_features(struct net_device *netdev,
+ 			netdev_warn(netdev, "Disabling rxhash, not supported when CQE compress is active\n");
+ 	}
+ 
+-	if (mlx5e_is_uplink_rep(priv)) {
+-		features &= ~NETIF_F_HW_TLS_RX;
+-		if (netdev->features & NETIF_F_HW_TLS_RX)
+-			netdev_warn(netdev, "Disabling hw_tls_rx, not supported in switchdev mode\n");
+-
+-		features &= ~NETIF_F_HW_TLS_TX;
+-		if (netdev->features & NETIF_F_HW_TLS_TX)
+-			netdev_warn(netdev, "Disabling hw_tls_tx, not supported in switchdev mode\n");
+-	}
++	if (mlx5e_is_uplink_rep(priv))
++		features = mlx5e_fix_uplink_rep_features(netdev, features);
+ 
+ 	mutex_unlock(&priv->state_lock);
+ 
+@@ -4855,6 +4866,9 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ 	if (MLX5_CAP_ETH(mdev, scatter_fcs))
+ 		netdev->hw_features |= NETIF_F_RXFCS;
+ 
++	if (mlx5_qos_is_supported(mdev))
++		netdev->hw_features |= NETIF_F_HW_TC;
++
+ 	netdev->features          = netdev->hw_features;
+ 
+ 	/* Defaults */
+@@ -4875,8 +4889,6 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ 		netdev->hw_features	 |= NETIF_F_NTUPLE;
+ #endif
+ 	}
+-	if (mlx5_qos_is_supported(mdev))
+-		netdev->features |= NETIF_F_HW_TC;
+ 
+ 	netdev->features         |= NETIF_F_HIGHDMA;
+ 	netdev->features         |= NETIF_F_HW_VLAN_STAG_FILTER;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index d4b0f270b6bb8..47bd20ad81080 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -424,12 +424,32 @@ static void mlx5e_detach_mod_hdr(struct mlx5e_priv *priv,
+ static
+ struct mlx5_core_dev *mlx5e_hairpin_get_mdev(struct net *net, int ifindex)
+ {
++	struct mlx5_core_dev *mdev;
+ 	struct net_device *netdev;
+ 	struct mlx5e_priv *priv;
+ 
+-	netdev = __dev_get_by_index(net, ifindex);
++	netdev = dev_get_by_index(net, ifindex);
++	if (!netdev)
++		return ERR_PTR(-ENODEV);
++
+ 	priv = netdev_priv(netdev);
+-	return priv->mdev;
++	mdev = priv->mdev;
++	dev_put(netdev);
++
++	/* Mirred tc action holds a refcount on the ifindex net_device (see
++	 * net/sched/act_mirred.c:tcf_mirred_get_dev). So, it's okay to continue using mdev
++	 * after dev_put(netdev), while we're in the context of adding a tc flow.
++	 *
++	 * The mdev pointer corresponds to the peer/out net_device of a hairpin. It is then
++	 * stored in a hairpin object, which exists until all flows that refer
++	 * to it get removed.
++	 *
++	 * On the other hand, after a hairpin object has been created, the peer net_device may
++	 * be removed/unbound while there are still some hairpin flows that are using it. This
++	 * case is handled by mlx5e_tc_hairpin_update_dead_peer, which is hooked to
++	 * NETDEV_UNREGISTER event of the peer net_device.
++	 */
++	return mdev;
+ }
+ 
+ static int mlx5e_hairpin_create_transport(struct mlx5e_hairpin *hp)
+@@ -638,6 +658,10 @@ mlx5e_hairpin_create(struct mlx5e_priv *priv, struct mlx5_hairpin_params *params
+ 
+ 	func_mdev = priv->mdev;
+ 	peer_mdev = mlx5e_hairpin_get_mdev(dev_net(priv->netdev), peer_ifindex);
++	if (IS_ERR(peer_mdev)) {
++		err = PTR_ERR(peer_mdev);
++		goto create_pair_err;
++	}
+ 
+ 	pair = mlx5_core_hairpin_create(func_mdev, peer_mdev, params);
+ 	if (IS_ERR(pair)) {
+@@ -776,6 +800,11 @@ static int mlx5e_hairpin_flow_add(struct mlx5e_priv *priv,
+ 	int err;
+ 
+ 	peer_mdev = mlx5e_hairpin_get_mdev(dev_net(priv->netdev), peer_ifindex);
++	if (IS_ERR(peer_mdev)) {
++		NL_SET_ERR_MSG_MOD(extack, "invalid ifindex of mirred device");
++		return PTR_ERR(peer_mdev);
++	}
++
+ 	if (!MLX5_CAP_GEN(priv->mdev, hairpin) || !MLX5_CAP_GEN(peer_mdev, hairpin)) {
+ 		NL_SET_ERR_MSG_MOD(extack, "hairpin is not supported");
+ 		return -EOPNOTSUPP;
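mlx5e_hairpin_get_mdev() moves from __dev_get_by_index(), which returns an
unreferenced pointer and is only safe under RTNL, to dev_get_by_index(),
which takes a reference that the caller must drop with dev_put(); lookup
failure is now reported through ERR_PTR instead of risking a NULL
dereference. The lookup shape in isolation:

    netdev = dev_get_by_index(net, ifindex);   /* takes a reference */
    if (!netdev)
            return ERR_PTR(-ENODEV);

    priv = netdev_priv(netdev);
    mdev = priv->mdev;
    dev_put(netdev);    /* per the comment above: the mirred action
                         * keeps the netdev pinned, so mdev stays valid */
    return mdev;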
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+index 64ccb2bc0b58c..e0f6f75fd9d62 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+@@ -629,7 +629,7 @@ struct esw_vport_tbl_namespace {
+ };
+ 
+ struct mlx5_vport_tbl_attr {
+-	u16 chain;
++	u32 chain;
+ 	u16 prio;
+ 	u16 vport;
+ 	const struct esw_vport_tbl_namespace *vport_ns;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index d18a28a6e9a63..b66e12753f37f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -382,10 +382,11 @@ esw_setup_vport_dest(struct mlx5_flow_destination *dest, struct mlx5_flow_act *f
+ {
+ 	dest[dest_idx].type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
+ 	dest[dest_idx].vport.num = esw_attr->dests[attr_idx].rep->vport;
+-	dest[dest_idx].vport.vhca_id =
+-		MLX5_CAP_GEN(esw_attr->dests[attr_idx].mdev, vhca_id);
+-	if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
++	if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) {
++		dest[dest_idx].vport.vhca_id =
++			MLX5_CAP_GEN(esw_attr->dests[attr_idx].mdev, vhca_id);
+ 		dest[dest_idx].vport.flags |= MLX5_FLOW_DEST_VPORT_VHCA_ID;
++	}
+ 	if (esw_attr->dests[attr_idx].flags & MLX5_ESW_DEST_ENCAP) {
+ 		if (pkt_reformat) {
+ 			flow_act->action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+@@ -2350,6 +2351,9 @@ static int mlx5_esw_offloads_devcom_event(int event,
+ 
+ 	switch (event) {
+ 	case ESW_OFFLOADS_DEVCOM_PAIR:
++		if (mlx5_get_next_phys_dev(esw->dev) != peer_esw->dev)
++			break;
++
+ 		if (mlx5_eswitch_vport_match_metadata_enabled(esw) !=
+ 		    mlx5_eswitch_vport_match_metadata_enabled(peer_esw))
+ 			break;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index f74d2c834037f..48fc242e066f7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1024,17 +1024,19 @@ static int connect_fwd_rules(struct mlx5_core_dev *dev,
+ static int connect_flow_table(struct mlx5_core_dev *dev, struct mlx5_flow_table *ft,
+ 			      struct fs_prio *prio)
+ {
+-	struct mlx5_flow_table *next_ft;
++	struct mlx5_flow_table *next_ft, *first_ft;
+ 	int err = 0;
+ 
+ 	/* Connect_prev_fts and update_root_ft_create are mutually exclusive */
+ 
+-	if (list_empty(&prio->node.children)) {
++	first_ft = list_first_entry_or_null(&prio->node.children,
++					    struct mlx5_flow_table, node.list);
++	if (!first_ft || first_ft->level > ft->level) {
+ 		err = connect_prev_fts(dev, ft, prio);
+ 		if (err)
+ 			return err;
+ 
+-		next_ft = find_next_chained_ft(prio);
++		next_ft = first_ft ? first_ft : find_next_chained_ft(prio);
+ 		err = connect_fwd_rules(dev, ft, next_ft);
+ 		if (err)
+ 			return err;
+@@ -2109,7 +2111,7 @@ static int disconnect_flow_table(struct mlx5_flow_table *ft)
+ 				node.list) == ft))
+ 		return 0;
+ 
+-	next_ft = find_next_chained_ft(prio);
++	next_ft = find_next_ft(ft);
+ 	err = connect_fwd_rules(dev, next_ft, ft);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index 9ff163c5bcde8..9abeb80ffa316 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -626,8 +626,16 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work)
+ 	}
+ 	fw_reporter_ctx.err_synd = health->synd;
+ 	fw_reporter_ctx.miss_counter = health->miss_counter;
+-	devlink_health_report(health->fw_fatal_reporter,
+-			      "FW fatal error reported", &fw_reporter_ctx);
++	if (devlink_health_report(health->fw_fatal_reporter,
++				  "FW fatal error reported", &fw_reporter_ctx) == -ECANCELED) {
++		/* If recovery wasn't performed due to the grace period,
++		 * unload the driver. This ensures that the driver
++		 * closes all its resources and is not subjected to
++		 * requests from the kernel.
++		 */
++		mlx5_core_err(dev, "Driver is in error state. Unloading\n");
++		mlx5_unload_one(dev);
++	}
+ }
+ 
+ static const struct devlink_health_reporter_ops mlx5_fw_fatal_reporter_ops = {
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index af3a5368529cc..e795fa63ca12e 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -29,7 +29,7 @@ static const u8 ionic_qtype_versions[IONIC_QTYPE_MAX] = {
+ 				      */
+ };
+ 
+-static void ionic_lif_rx_mode(struct ionic_lif *lif, unsigned int rx_mode);
++static void ionic_lif_rx_mode(struct ionic_lif *lif);
+ static int ionic_lif_addr_add(struct ionic_lif *lif, const u8 *addr);
+ static int ionic_lif_addr_del(struct ionic_lif *lif, const u8 *addr);
+ static void ionic_link_status_check(struct ionic_lif *lif);
+@@ -53,7 +53,19 @@ static void ionic_dim_work(struct work_struct *work)
+ 	cur_moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
+ 	qcq = container_of(dim, struct ionic_qcq, dim);
+ 	new_coal = ionic_coal_usec_to_hw(qcq->q.lif->ionic, cur_moder.usec);
+-	qcq->intr.dim_coal_hw = new_coal ? new_coal : 1;
++	new_coal = new_coal ? new_coal : 1;
++
++	if (qcq->intr.dim_coal_hw != new_coal) {
++		unsigned int qi = qcq->cq.bound_q->index;
++		struct ionic_lif *lif = qcq->q.lif;
++
++		qcq->intr.dim_coal_hw = new_coal;
++
++		ionic_intr_coal_init(lif->ionic->idev.intr_ctrl,
++				     lif->rxqcqs[qi]->intr.index,
++				     qcq->intr.dim_coal_hw);
++	}
++
+ 	dim->state = DIM_START_MEASURE;
+ }
+ 
+@@ -77,7 +89,7 @@ static void ionic_lif_deferred_work(struct work_struct *work)
+ 
+ 		switch (w->type) {
+ 		case IONIC_DW_TYPE_RX_MODE:
+-			ionic_lif_rx_mode(lif, w->rx_mode);
++			ionic_lif_rx_mode(lif);
+ 			break;
+ 		case IONIC_DW_TYPE_RX_ADDR_ADD:
+ 			ionic_lif_addr_add(lif, w->addr);
+@@ -1301,10 +1313,8 @@ static int ionic_lif_addr_del(struct ionic_lif *lif, const u8 *addr)
+ 	return 0;
+ }
+ 
+-static int ionic_lif_addr(struct ionic_lif *lif, const u8 *addr, bool add,
+-			  bool can_sleep)
++static int ionic_lif_addr(struct ionic_lif *lif, const u8 *addr, bool add)
+ {
+-	struct ionic_deferred_work *work;
+ 	unsigned int nmfilters;
+ 	unsigned int nufilters;
+ 
+@@ -1330,97 +1340,46 @@ static int ionic_lif_addr(struct ionic_lif *lif, const u8 *addr, bool add,
+ 			lif->nucast--;
+ 	}
+ 
+-	if (!can_sleep) {
+-		work = kzalloc(sizeof(*work), GFP_ATOMIC);
+-		if (!work)
+-			return -ENOMEM;
+-		work->type = add ? IONIC_DW_TYPE_RX_ADDR_ADD :
+-				   IONIC_DW_TYPE_RX_ADDR_DEL;
+-		memcpy(work->addr, addr, ETH_ALEN);
+-		netdev_dbg(lif->netdev, "deferred: rx_filter %s %pM\n",
+-			   add ? "add" : "del", addr);
+-		ionic_lif_deferred_enqueue(&lif->deferred, work);
+-	} else {
+-		netdev_dbg(lif->netdev, "rx_filter %s %pM\n",
+-			   add ? "add" : "del", addr);
+-		if (add)
+-			return ionic_lif_addr_add(lif, addr);
+-		else
+-			return ionic_lif_addr_del(lif, addr);
+-	}
++	netdev_dbg(lif->netdev, "rx_filter %s %pM\n",
++		   add ? "add" : "del", addr);
++	if (add)
++		return ionic_lif_addr_add(lif, addr);
++	else
++		return ionic_lif_addr_del(lif, addr);
+ 
+ 	return 0;
+ }
+ 
+ static int ionic_addr_add(struct net_device *netdev, const u8 *addr)
+ {
+-	return ionic_lif_addr(netdev_priv(netdev), addr, ADD_ADDR, CAN_SLEEP);
+-}
+-
+-static int ionic_ndo_addr_add(struct net_device *netdev, const u8 *addr)
+-{
+-	return ionic_lif_addr(netdev_priv(netdev), addr, ADD_ADDR, CAN_NOT_SLEEP);
++	return ionic_lif_addr(netdev_priv(netdev), addr, ADD_ADDR);
+ }
+ 
+ static int ionic_addr_del(struct net_device *netdev, const u8 *addr)
+ {
+-	return ionic_lif_addr(netdev_priv(netdev), addr, DEL_ADDR, CAN_SLEEP);
++	return ionic_lif_addr(netdev_priv(netdev), addr, DEL_ADDR);
+ }
+ 
+-static int ionic_ndo_addr_del(struct net_device *netdev, const u8 *addr)
++static void ionic_lif_rx_mode(struct ionic_lif *lif)
+ {
+-	return ionic_lif_addr(netdev_priv(netdev), addr, DEL_ADDR, CAN_NOT_SLEEP);
+-}
+-
+-static void ionic_lif_rx_mode(struct ionic_lif *lif, unsigned int rx_mode)
+-{
+-	struct ionic_admin_ctx ctx = {
+-		.work = COMPLETION_INITIALIZER_ONSTACK(ctx.work),
+-		.cmd.rx_mode_set = {
+-			.opcode = IONIC_CMD_RX_MODE_SET,
+-			.lif_index = cpu_to_le16(lif->index),
+-			.rx_mode = cpu_to_le16(rx_mode),
+-		},
+-	};
++	struct net_device *netdev = lif->netdev;
++	unsigned int nfilters;
++	unsigned int nd_flags;
+ 	char buf[128];
+-	int err;
++	u16 rx_mode;
+ 	int i;
+ #define REMAIN(__x) (sizeof(buf) - (__x))
+ 
+-	i = scnprintf(buf, sizeof(buf), "rx_mode 0x%04x -> 0x%04x:",
+-		      lif->rx_mode, rx_mode);
+-	if (rx_mode & IONIC_RX_MODE_F_UNICAST)
+-		i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_UNICAST");
+-	if (rx_mode & IONIC_RX_MODE_F_MULTICAST)
+-		i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_MULTICAST");
+-	if (rx_mode & IONIC_RX_MODE_F_BROADCAST)
+-		i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_BROADCAST");
+-	if (rx_mode & IONIC_RX_MODE_F_PROMISC)
+-		i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_PROMISC");
+-	if (rx_mode & IONIC_RX_MODE_F_ALLMULTI)
+-		i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_ALLMULTI");
+-	netdev_dbg(lif->netdev, "lif%d %s\n", lif->index, buf);
+-
+-	err = ionic_adminq_post_wait(lif, &ctx);
+-	if (err)
+-		netdev_warn(lif->netdev, "set rx_mode 0x%04x failed: %d\n",
+-			    rx_mode, err);
+-	else
+-		lif->rx_mode = rx_mode;
+-}
++	mutex_lock(&lif->config_lock);
+ 
+-static void ionic_set_rx_mode(struct net_device *netdev, bool can_sleep)
+-{
+-	struct ionic_lif *lif = netdev_priv(netdev);
+-	struct ionic_deferred_work *work;
+-	unsigned int nfilters;
+-	unsigned int rx_mode;
++	/* grab the flags once for local use */
++	nd_flags = netdev->flags;
+ 
+ 	rx_mode = IONIC_RX_MODE_F_UNICAST;
+-	rx_mode |= (netdev->flags & IFF_MULTICAST) ? IONIC_RX_MODE_F_MULTICAST : 0;
+-	rx_mode |= (netdev->flags & IFF_BROADCAST) ? IONIC_RX_MODE_F_BROADCAST : 0;
+-	rx_mode |= (netdev->flags & IFF_PROMISC) ? IONIC_RX_MODE_F_PROMISC : 0;
+-	rx_mode |= (netdev->flags & IFF_ALLMULTI) ? IONIC_RX_MODE_F_ALLMULTI : 0;
++	rx_mode |= (nd_flags & IFF_MULTICAST) ? IONIC_RX_MODE_F_MULTICAST : 0;
++	rx_mode |= (nd_flags & IFF_BROADCAST) ? IONIC_RX_MODE_F_BROADCAST : 0;
++	rx_mode |= (nd_flags & IFF_PROMISC) ? IONIC_RX_MODE_F_PROMISC : 0;
++	rx_mode |= (nd_flags & IFF_ALLMULTI) ? IONIC_RX_MODE_F_ALLMULTI : 0;
+ 
+ 	/* sync unicast addresses
+ 	 * next check to see if we're in an overflow state
+@@ -1429,49 +1388,83 @@ static void ionic_set_rx_mode(struct net_device *netdev, bool can_sleep)
+ 	 *       we remove our overflow flag and check the netdev flags
+ 	 *       to see if we can disable NIC PROMISC
+ 	 */
+-	if (can_sleep)
+-		__dev_uc_sync(netdev, ionic_addr_add, ionic_addr_del);
+-	else
+-		__dev_uc_sync(netdev, ionic_ndo_addr_add, ionic_ndo_addr_del);
++	__dev_uc_sync(netdev, ionic_addr_add, ionic_addr_del);
+ 	nfilters = le32_to_cpu(lif->identity->eth.max_ucast_filters);
+ 	if (netdev_uc_count(netdev) + 1 > nfilters) {
+ 		rx_mode |= IONIC_RX_MODE_F_PROMISC;
+ 		lif->uc_overflow = true;
+ 	} else if (lif->uc_overflow) {
+ 		lif->uc_overflow = false;
+-		if (!(netdev->flags & IFF_PROMISC))
++		if (!(nd_flags & IFF_PROMISC))
+ 			rx_mode &= ~IONIC_RX_MODE_F_PROMISC;
+ 	}
+ 
+ 	/* same for multicast */
+-	if (can_sleep)
+-		__dev_mc_sync(netdev, ionic_addr_add, ionic_addr_del);
+-	else
+-		__dev_mc_sync(netdev, ionic_ndo_addr_add, ionic_ndo_addr_del);
++	__dev_mc_sync(netdev, ionic_addr_add, ionic_addr_del);
+ 	nfilters = le32_to_cpu(lif->identity->eth.max_mcast_filters);
+ 	if (netdev_mc_count(netdev) > nfilters) {
+ 		rx_mode |= IONIC_RX_MODE_F_ALLMULTI;
+ 		lif->mc_overflow = true;
+ 	} else if (lif->mc_overflow) {
+ 		lif->mc_overflow = false;
+-		if (!(netdev->flags & IFF_ALLMULTI))
++		if (!(nd_flags & IFF_ALLMULTI))
+ 			rx_mode &= ~IONIC_RX_MODE_F_ALLMULTI;
+ 	}
+ 
++	i = scnprintf(buf, sizeof(buf), "rx_mode 0x%04x -> 0x%04x:",
++		      lif->rx_mode, rx_mode);
++	if (rx_mode & IONIC_RX_MODE_F_UNICAST)
++		i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_UNICAST");
++	if (rx_mode & IONIC_RX_MODE_F_MULTICAST)
++		i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_MULTICAST");
++	if (rx_mode & IONIC_RX_MODE_F_BROADCAST)
++		i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_BROADCAST");
++	if (rx_mode & IONIC_RX_MODE_F_PROMISC)
++		i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_PROMISC");
++	if (rx_mode & IONIC_RX_MODE_F_ALLMULTI)
++		i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_ALLMULTI");
++	if (rx_mode & IONIC_RX_MODE_F_RDMA_SNIFFER)
++		i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_RDMA_SNIFFER");
++	netdev_dbg(netdev, "lif%d %s\n", lif->index, buf);
++
+ 	if (lif->rx_mode != rx_mode) {
+-		if (!can_sleep) {
+-			work = kzalloc(sizeof(*work), GFP_ATOMIC);
+-			if (!work) {
+-				netdev_err(lif->netdev, "rxmode change dropped\n");
+-				return;
+-			}
+-			work->type = IONIC_DW_TYPE_RX_MODE;
+-			work->rx_mode = rx_mode;
+-			netdev_dbg(lif->netdev, "deferred: rx_mode\n");
+-			ionic_lif_deferred_enqueue(&lif->deferred, work);
+-		} else {
+-			ionic_lif_rx_mode(lif, rx_mode);
++		struct ionic_admin_ctx ctx = {
++			.work = COMPLETION_INITIALIZER_ONSTACK(ctx.work),
++			.cmd.rx_mode_set = {
++				.opcode = IONIC_CMD_RX_MODE_SET,
++				.lif_index = cpu_to_le16(lif->index),
++			},
++		};
++		int err;
++
++		ctx.cmd.rx_mode_set.rx_mode = cpu_to_le16(rx_mode);
++		err = ionic_adminq_post_wait(lif, &ctx);
++		if (err)
++			netdev_warn(netdev, "set rx_mode 0x%04x failed: %d\n",
++				    rx_mode, err);
++		else
++			lif->rx_mode = rx_mode;
++	}
++
++	mutex_unlock(&lif->config_lock);
++}
++
++static void ionic_set_rx_mode(struct net_device *netdev, bool can_sleep)
++{
++	struct ionic_lif *lif = netdev_priv(netdev);
++	struct ionic_deferred_work *work;
++
++	if (!can_sleep) {
++		work = kzalloc(sizeof(*work), GFP_ATOMIC);
++		if (!work) {
++			netdev_err(lif->netdev, "rxmode change dropped\n");
++			return;
+ 		}
++		work->type = IONIC_DW_TYPE_RX_MODE;
++		netdev_dbg(lif->netdev, "deferred: rx_mode\n");
++		ionic_lif_deferred_enqueue(&lif->deferred, work);
++	} else {
++		ionic_lif_rx_mode(lif);
+ 	}
+ }
+ 
+@@ -3058,6 +3051,7 @@ void ionic_lif_deinit(struct ionic_lif *lif)
+ 	ionic_lif_qcq_deinit(lif, lif->notifyqcq);
+ 	ionic_lif_qcq_deinit(lif, lif->adminqcq);
+ 
++	mutex_destroy(&lif->config_lock);
+ 	mutex_destroy(&lif->queue_lock);
+ 	ionic_lif_reset(lif);
+ }
+@@ -3185,7 +3179,7 @@ static int ionic_station_set(struct ionic_lif *lif)
+ 		 */
+ 		if (!ether_addr_equal(ctx.comp.lif_getattr.mac,
+ 				      netdev->dev_addr))
+-			ionic_lif_addr(lif, netdev->dev_addr, ADD_ADDR, CAN_SLEEP);
++			ionic_lif_addr(lif, netdev->dev_addr, ADD_ADDR);
+ 	} else {
+ 		/* Update the netdev mac with the device's mac */
+ 		memcpy(addr.sa_data, ctx.comp.lif_getattr.mac, netdev->addr_len);
+@@ -3202,7 +3196,7 @@ static int ionic_station_set(struct ionic_lif *lif)
+ 
+ 	netdev_dbg(lif->netdev, "adding station MAC addr %pM\n",
+ 		   netdev->dev_addr);
+-	ionic_lif_addr(lif, netdev->dev_addr, ADD_ADDR, CAN_SLEEP);
++	ionic_lif_addr(lif, netdev->dev_addr, ADD_ADDR);
+ 
+ 	return 0;
+ }
+@@ -3225,6 +3219,7 @@ int ionic_lif_init(struct ionic_lif *lif)
+ 
+ 	lif->hw_index = le16_to_cpu(comp.hw_index);
+ 	mutex_init(&lif->queue_lock);
++	mutex_init(&lif->config_lock);
+ 
+ 	/* now that we have the hw_index we can figure out our doorbell page */
+ 	lif->dbid_count = le32_to_cpu(lif->ionic->ident.dev.ndbpgs_per_lif);
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.h b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
+index 346506f017153..69ab59fedb6c6 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.h
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
+@@ -108,7 +108,6 @@ struct ionic_deferred_work {
+ 	struct list_head list;
+ 	enum ionic_deferred_work_type type;
+ 	union {
+-		unsigned int rx_mode;
+ 		u8 addr[ETH_ALEN];
+ 		u8 fw_status;
+ 	};
+@@ -179,6 +178,7 @@ struct ionic_lif {
+ 	unsigned int index;
+ 	unsigned int hw_index;
+ 	struct mutex queue_lock;	/* lock for queue structures */
++	struct mutex config_lock;	/* lock for config actions */
+ 	spinlock_t adminq_lock;		/* lock for AdminQ operations */
+ 	struct ionic_qcq *adminqcq;
+ 	struct ionic_qcq *notifyqcq;
+@@ -199,7 +199,7 @@ struct ionic_lif {
+ 	unsigned int nrxq_descs;
+ 	u32 rx_copybreak;
+ 	u64 rxq_features;
+-	unsigned int rx_mode;
++	u16 rx_mode;
+ 	u64 hw_features;
+ 	bool registered;
+ 	bool mc_overflow;
+@@ -302,7 +302,7 @@ int ionic_lif_identify(struct ionic *ionic, u8 lif_type,
+ int ionic_lif_size(struct ionic *ionic);
+ 
+ #if IS_ENABLED(CONFIG_PTP_1588_CLOCK)
+-int ionic_lif_hwstamp_replay(struct ionic_lif *lif);
++void ionic_lif_hwstamp_replay(struct ionic_lif *lif);
+ int ionic_lif_hwstamp_set(struct ionic_lif *lif, struct ifreq *ifr);
+ int ionic_lif_hwstamp_get(struct ionic_lif *lif, struct ifreq *ifr);
+ ktime_t ionic_lif_phc_ktime(struct ionic_lif *lif, u64 counter);
+@@ -311,10 +311,7 @@ void ionic_lif_unregister_phc(struct ionic_lif *lif);
+ void ionic_lif_alloc_phc(struct ionic_lif *lif);
+ void ionic_lif_free_phc(struct ionic_lif *lif);
+ #else
+-static inline int ionic_lif_hwstamp_replay(struct ionic_lif *lif)
+-{
+-	return -EOPNOTSUPP;
+-}
++static inline void ionic_lif_hwstamp_replay(struct ionic_lif *lif) {}
+ 
+ static inline int ionic_lif_hwstamp_set(struct ionic_lif *lif, struct ifreq *ifr)
+ {
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_phc.c b/drivers/net/ethernet/pensando/ionic/ionic_phc.c
+index a87c87e86aef6..6e2403c716087 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_phc.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_phc.c
+@@ -188,6 +188,9 @@ int ionic_lif_hwstamp_set(struct ionic_lif *lif, struct ifreq *ifr)
+ 	struct hwtstamp_config config;
+ 	int err;
+ 
++	if (!lif->phc || !lif->phc->ptp)
++		return -EOPNOTSUPP;
++
+ 	if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
+ 		return -EFAULT;
+ 
+@@ -203,15 +206,16 @@ int ionic_lif_hwstamp_set(struct ionic_lif *lif, struct ifreq *ifr)
+ 	return 0;
+ }
+ 
+-int ionic_lif_hwstamp_replay(struct ionic_lif *lif)
++void ionic_lif_hwstamp_replay(struct ionic_lif *lif)
+ {
+ 	int err;
+ 
++	if (!lif->phc || !lif->phc->ptp)
++		return;
++
+ 	err = ionic_lif_hwstamp_set_ts_config(lif, NULL);
+ 	if (err)
+ 		netdev_info(lif->netdev, "hwstamp replay failed: %d\n", err);
+-
+-	return err;
+ }
+ 
+ int ionic_lif_hwstamp_get(struct ionic_lif *lif, struct ifreq *ifr)
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+index 08934888575ce..08870190e4d28 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+@@ -274,12 +274,11 @@ static void ionic_rx_clean(struct ionic_queue *q,
+ 		}
+ 	}
+ 
+-	if (likely(netdev->features & NETIF_F_RXCSUM)) {
+-		if (comp->csum_flags & IONIC_RXQ_COMP_CSUM_F_CALC) {
+-			skb->ip_summed = CHECKSUM_COMPLETE;
+-			skb->csum = (__force __wsum)le16_to_cpu(comp->csum);
+-			stats->csum_complete++;
+-		}
++	if (likely(netdev->features & NETIF_F_RXCSUM) &&
++	    (comp->csum_flags & IONIC_RXQ_COMP_CSUM_F_CALC)) {
++		skb->ip_summed = CHECKSUM_COMPLETE;
++		skb->csum = (__force __wsum)le16_to_cpu(comp->csum);
++		stats->csum_complete++;
+ 	} else {
+ 		stats->csum_none++;
+ 	}
+@@ -451,11 +450,12 @@ void ionic_rx_empty(struct ionic_queue *q)
+ 	q->tail_idx = 0;
+ }
+ 
+-static void ionic_dim_update(struct ionic_qcq *qcq)
++static void ionic_dim_update(struct ionic_qcq *qcq, int napi_mode)
+ {
+ 	struct dim_sample dim_sample;
+ 	struct ionic_lif *lif;
+ 	unsigned int qi;
++	u64 pkts, bytes;
+ 
+ 	if (!qcq->intr.dim_coal_hw)
+ 		return;
+@@ -463,14 +463,23 @@ static void ionic_dim_update(struct ionic_qcq *qcq)
+ 	lif = qcq->q.lif;
+ 	qi = qcq->cq.bound_q->index;
+ 
+-	ionic_intr_coal_init(lif->ionic->idev.intr_ctrl,
+-			     lif->rxqcqs[qi]->intr.index,
+-			     qcq->intr.dim_coal_hw);
++	switch (napi_mode) {
++	case IONIC_LIF_F_TX_DIM_INTR:
++		pkts = lif->txqstats[qi].pkts;
++		bytes = lif->txqstats[qi].bytes;
++		break;
++	case IONIC_LIF_F_RX_DIM_INTR:
++		pkts = lif->rxqstats[qi].pkts;
++		bytes = lif->rxqstats[qi].bytes;
++		break;
++	default:
++		pkts = lif->txqstats[qi].pkts + lif->rxqstats[qi].pkts;
++		bytes = lif->txqstats[qi].bytes + lif->rxqstats[qi].bytes;
++		break;
++	}
+ 
+ 	dim_update_sample(qcq->cq.bound_intr->rearm_count,
+-			  lif->txqstats[qi].pkts,
+-			  lif->txqstats[qi].bytes,
+-			  &dim_sample);
++			  pkts, bytes, &dim_sample);
+ 
+ 	net_dim(&qcq->dim, dim_sample);
+ }
+@@ -491,7 +500,7 @@ int ionic_tx_napi(struct napi_struct *napi, int budget)
+ 				     ionic_tx_service, NULL, NULL);
+ 
+ 	if (work_done < budget && napi_complete_done(napi, work_done)) {
+-		ionic_dim_update(qcq);
++		ionic_dim_update(qcq, IONIC_LIF_F_TX_DIM_INTR);
+ 		flags |= IONIC_INTR_CRED_UNMASK;
+ 		cq->bound_intr->rearm_count++;
+ 	}
+@@ -530,7 +539,7 @@ int ionic_rx_napi(struct napi_struct *napi, int budget)
+ 		ionic_rx_fill(cq->bound_q);
+ 
+ 	if (work_done < budget && napi_complete_done(napi, work_done)) {
+-		ionic_dim_update(qcq);
++		ionic_dim_update(qcq, IONIC_LIF_F_RX_DIM_INTR);
+ 		flags |= IONIC_INTR_CRED_UNMASK;
+ 		cq->bound_intr->rearm_count++;
+ 	}
+@@ -576,7 +585,7 @@ int ionic_txrx_napi(struct napi_struct *napi, int budget)
+ 		ionic_rx_fill(rxcq->bound_q);
+ 
+ 	if (rx_work_done < budget && napi_complete_done(napi, rx_work_done)) {
+-		ionic_dim_update(qcq);
++		ionic_dim_update(qcq, 0);
+ 		flags |= IONIC_INTR_CRED_UNMASK;
+ 		rxcq->bound_intr->rearm_count++;
+ 	}
+diff --git a/drivers/net/ethernet/sis/sis900.c b/drivers/net/ethernet/sis/sis900.c
+index 620c26f71be89..e267b7ce3a45e 100644
+--- a/drivers/net/ethernet/sis/sis900.c
++++ b/drivers/net/ethernet/sis/sis900.c
+@@ -443,7 +443,7 @@ static int sis900_probe(struct pci_dev *pci_dev,
+ #endif
+ 
+ 	/* setup various bits in PCI command register */
+-	ret = pci_enable_device(pci_dev);
++	ret = pcim_enable_device(pci_dev);
+ 	if(ret) return ret;
+ 
+ 	i = dma_set_mask(&pci_dev->dev, DMA_BIT_MASK(32));
+@@ -469,7 +469,7 @@ static int sis900_probe(struct pci_dev *pci_dev,
+ 	ioaddr = pci_iomap(pci_dev, 0, 0);
+ 	if (!ioaddr) {
+ 		ret = -ENOMEM;
+-		goto err_out_cleardev;
++		goto err_out;
+ 	}
+ 
+ 	sis_priv = netdev_priv(net_dev);
+@@ -581,8 +581,6 @@ err_unmap_tx:
+ 			  sis_priv->tx_ring_dma);
+ err_out_unmap:
+ 	pci_iounmap(pci_dev, ioaddr);
+-err_out_cleardev:
+-	pci_release_regions(pci_dev);
+  err_out:
+ 	free_netdev(net_dev);
+ 	return ret;
+@@ -2499,7 +2497,6 @@ static void sis900_remove(struct pci_dev *pci_dev)
+ 			  sis_priv->tx_ring_dma);
+ 	pci_iounmap(pci_dev, sis_priv->ioaddr);
+ 	free_netdev(net_dev);
+-	pci_release_regions(pci_dev);
+ }
+ 
+ static int __maybe_unused sis900_suspend(struct device *dev)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+index f35c03c9f91e3..2b03d970ca67a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+@@ -1249,6 +1249,7 @@ const struct stmmac_ops dwmac410_ops = {
+ 	.config_l3_filter = dwmac4_config_l3_filter,
+ 	.config_l4_filter = dwmac4_config_l4_filter,
+ 	.est_configure = dwmac5_est_configure,
++	.est_irq_status = dwmac5_est_irq_status,
+ 	.fpe_configure = dwmac5_fpe_configure,
+ 	.fpe_send_mpacket = dwmac5_fpe_send_mpacket,
+ 	.fpe_irq_status = dwmac5_fpe_irq_status,
+@@ -1300,6 +1301,7 @@ const struct stmmac_ops dwmac510_ops = {
+ 	.config_l3_filter = dwmac4_config_l3_filter,
+ 	.config_l4_filter = dwmac4_config_l4_filter,
+ 	.est_configure = dwmac5_est_configure,
++	.est_irq_status = dwmac5_est_irq_status,
+ 	.fpe_configure = dwmac5_fpe_configure,
+ 	.fpe_send_mpacket = dwmac5_fpe_send_mpacket,
+ 	.fpe_irq_status = dwmac5_fpe_irq_status,
+diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
+index 74e748662ec01..860644d182ab0 100644
+--- a/drivers/net/ethernet/sun/niu.c
++++ b/drivers/net/ethernet/sun/niu.c
+@@ -8191,8 +8191,9 @@ static int niu_pci_vpd_fetch(struct niu *np, u32 start)
+ 		err = niu_pci_vpd_scan_props(np, here, end);
+ 		if (err < 0)
+ 			return err;
++		/* err == 1 is not an error */
+ 		if (err == 1)
+-			return -EINVAL;
++			return 0;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index 7bf3011b8e777..83aea5c5cd03c 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -288,7 +288,7 @@ static void bcm54xx_adjust_rxrefclk(struct phy_device *phydev)
+ 	if (phydev->dev_flags & PHY_BRCM_DIS_TXCRXC_NOENRGY) {
+ 		if (BRCM_PHY_MODEL(phydev) == PHY_ID_BCM54210E ||
+ 		    BRCM_PHY_MODEL(phydev) == PHY_ID_BCM54810 ||
+-		    BRCM_PHY_MODEL(phydev) == PHY_ID_BCM54210E)
++		    BRCM_PHY_MODEL(phydev) == PHY_ID_BCM54811)
+ 			val |= BCM54XX_SHD_SCR3_RXCTXC_DIS;
+ 		else
+ 			val |= BCM54XX_SHD_SCR3_TRDDAPD;
+diff --git a/drivers/nfc/nfcsim.c b/drivers/nfc/nfcsim.c
+index a9864fcdfba6b..dd27c85190d34 100644
+--- a/drivers/nfc/nfcsim.c
++++ b/drivers/nfc/nfcsim.c
+@@ -192,8 +192,7 @@ static void nfcsim_recv_wq(struct work_struct *work)
+ 
+ 		if (!IS_ERR(skb))
+ 			dev_kfree_skb(skb);
+-
+-		skb = ERR_PTR(-ENODEV);
++		return;
+ 	}
+ 
+ 	dev->cb(dev->nfc_digital_dev, dev->arg, skb);
+diff --git a/drivers/platform/x86/amd-pmc.c b/drivers/platform/x86/amd-pmc.c
+index b9da58ee9b1e3..ca95c2a52e260 100644
+--- a/drivers/platform/x86/amd-pmc.c
++++ b/drivers/platform/x86/amd-pmc.c
+@@ -52,7 +52,6 @@
+ #define AMD_CPU_ID_PCO			AMD_CPU_ID_RV
+ #define AMD_CPU_ID_CZN			AMD_CPU_ID_RN
+ 
+-#define AMD_SMU_FW_VERSION		0x0
+ #define PMC_MSG_DELAY_MIN_US		100
+ #define RESPONSE_REGISTER_LOOP_MAX	200
+ 
+@@ -68,6 +67,7 @@ struct amd_pmc_dev {
+ 	u32 base_addr;
+ 	u32 cpu_id;
+ 	struct device *dev;
++	struct mutex lock; /* generic mutex lock */
+ #if IS_ENABLED(CONFIG_DEBUG_FS)
+ 	struct dentry *dbgfs_dir;
+ #endif /* CONFIG_DEBUG_FS */
+@@ -88,11 +88,6 @@ static inline void amd_pmc_reg_write(struct amd_pmc_dev *dev, int reg_offset, u3
+ #ifdef CONFIG_DEBUG_FS
+ static int smu_fw_info_show(struct seq_file *s, void *unused)
+ {
+-	struct amd_pmc_dev *dev = s->private;
+-	u32 value;
+-
+-	value = ioread32(dev->smu_base + AMD_SMU_FW_VERSION);
+-	seq_printf(s, "SMU FW Info: %x\n", value);
+ 	return 0;
+ }
+ DEFINE_SHOW_ATTRIBUTE(smu_fw_info);
+@@ -138,13 +133,14 @@ static int amd_pmc_send_cmd(struct amd_pmc_dev *dev, bool set)
+ 	u8 msg;
+ 	u32 val;
+ 
++	mutex_lock(&dev->lock);
+ 	/* Wait until we get a valid response */
+ 	rc = readx_poll_timeout(ioread32, dev->regbase + AMD_PMC_REGISTER_RESPONSE,
+-				val, val > 0, PMC_MSG_DELAY_MIN_US,
++				val, val != 0, PMC_MSG_DELAY_MIN_US,
+ 				PMC_MSG_DELAY_MIN_US * RESPONSE_REGISTER_LOOP_MAX);
+ 	if (rc) {
+ 		dev_err(dev->dev, "failed to talk to SMU\n");
+-		return rc;
++		goto out_unlock;
+ 	}
+ 
+ 	/* Write zero to response register */
+@@ -156,7 +152,37 @@ static int amd_pmc_send_cmd(struct amd_pmc_dev *dev, bool set)
+ 	/* Write message ID to message ID register */
+ 	msg = (dev->cpu_id == AMD_CPU_ID_RN) ? MSG_OS_HINT_RN : MSG_OS_HINT_PCO;
+ 	amd_pmc_reg_write(dev, AMD_PMC_REGISTER_MESSAGE, msg);
+-	return 0;
++	/* Wait until we get a valid response */
++	rc = readx_poll_timeout(ioread32, dev->regbase + AMD_PMC_REGISTER_RESPONSE,
++				val, val != 0, PMC_MSG_DELAY_MIN_US,
++				PMC_MSG_DELAY_MIN_US * RESPONSE_REGISTER_LOOP_MAX);
++	if (rc) {
++		dev_err(dev->dev, "SMU response timed out\n");
++		goto out_unlock;
++	}
++
++	switch (val) {
++	case AMD_PMC_RESULT_OK:
++		break;
++	case AMD_PMC_RESULT_CMD_REJECT_BUSY:
++		dev_err(dev->dev, "SMU not ready. err: 0x%x\n", val);
++		rc = -EBUSY;
++		goto out_unlock;
++	case AMD_PMC_RESULT_CMD_UNKNOWN:
++		dev_err(dev->dev, "SMU cmd unknown. err: 0x%x\n", val);
++		rc = -EINVAL;
++		goto out_unlock;
++	case AMD_PMC_RESULT_CMD_REJECT_PREREQ:
++	case AMD_PMC_RESULT_FAILED:
++	default:
++		dev_err(dev->dev, "SMU cmd failed. err: 0x%x\n", val);
++		rc = -EIO;
++		goto out_unlock;
++	}
++
++out_unlock:
++	mutex_unlock(&dev->lock);
++	return rc;
+ }
+ 
+ static int __maybe_unused amd_pmc_suspend(struct device *dev)
+@@ -248,10 +274,6 @@ static int amd_pmc_probe(struct platform_device *pdev)
+ 	pci_dev_put(rdev);
+ 	base_addr = ((u64)base_addr_hi << 32 | base_addr_lo);
+ 
+-	dev->smu_base = devm_ioremap(dev->dev, base_addr, AMD_PMC_MAPPING_SIZE);
+-	if (!dev->smu_base)
+-		return -ENOMEM;
+-
+ 	dev->regbase = devm_ioremap(dev->dev, base_addr + AMD_PMC_BASE_ADDR_OFFSET,
+ 				    AMD_PMC_MAPPING_SIZE);
+ 	if (!dev->regbase)
+@@ -259,6 +281,7 @@ static int amd_pmc_probe(struct platform_device *pdev)
+ 
+ 	amd_pmc_dump_registers(dev);
+ 
++	mutex_init(&dev->lock);
+ 	platform_set_drvdata(pdev, dev);
+ 	amd_pmc_dbgfs_register(dev);
+ 	return 0;
+@@ -269,6 +292,7 @@ static int amd_pmc_remove(struct platform_device *pdev)
+ 	struct amd_pmc_dev *dev = platform_get_drvdata(pdev);
+ 
+ 	amd_pmc_dbgfs_unregister(dev);
++	mutex_destroy(&dev->lock);
+ 	return 0;
+ }
+ 
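
The reworked amd_pmc_send_cmd() above turns the SMU mailbox into one serialized transaction: wait for the previous response, clear the response register, write the argument and message, then poll for and decode the new response, all under the new dev->lock. A minimal sketch of that sequence follows; poll_nonzero() and reg_write() are hypothetical stand-ins for the driver's readx_poll_timeout() call and amd_pmc_reg_write(), and AMD_PMC_REGISTER_ARGUMENT is assumed from the argument-write step that falls outside this hunk's context:

	static int mailbox_send(struct amd_pmc_dev *dev, u32 arg, u8 msg)
	{
		u32 val;
		int rc;

		mutex_lock(&dev->lock);		/* one transaction at a time */
		rc = poll_nonzero(dev, AMD_PMC_REGISTER_RESPONSE, &val);
		if (rc)
			goto out;		/* SMU never answered */
		reg_write(dev, AMD_PMC_REGISTER_RESPONSE, 0);
		reg_write(dev, AMD_PMC_REGISTER_ARGUMENT, arg);
		reg_write(dev, AMD_PMC_REGISTER_MESSAGE, msg);
		rc = poll_nonzero(dev, AMD_PMC_REGISTER_RESPONSE, &val);
		if (!rc && val != AMD_PMC_RESULT_OK)
			rc = -EIO;	/* see the switch above for finer codes */
	out:
		mutex_unlock(&dev->lock);
		return rc;
	}
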
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 6cc4d4cfe0c28..e4a80bd4ddf1f 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -812,6 +812,8 @@ static void bdev_free_inode(struct inode *inode)
+ 	free_percpu(bdev->bd_stats);
+ 	kfree(bdev->bd_meta_info);
+ 
++	if (!bdev_is_partition(bdev))
++		kfree(bdev->bd_disk);
+ 	kmem_cache_free(bdev_cachep, BDEV_I(inode));
+ }
+ 
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 1346d698463a6..f69b8d3325743 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -353,7 +353,7 @@ static void end_compressed_bio_write(struct bio *bio)
+ 	btrfs_record_physical_zoned(inode, cb->start, bio);
+ 	btrfs_writepage_endio_finish_ordered(cb->compressed_pages[0],
+ 			cb->start, cb->start + cb->len - 1,
+-			bio->bi_status == BLK_STS_OK);
++			!cb->errors);
+ 	cb->compressed_pages[0]->mapping = NULL;
+ 
+ 	end_compressed_writeback(inode, cb);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index f08375cb871ed..24555cc1f42d5 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -6492,8 +6492,8 @@ void btrfs_log_new_name(struct btrfs_trans_handle *trans,
+ 	 * if this inode hasn't been logged and directory we're renaming it
+ 	 * from hasn't been logged, we don't need to log it
+ 	 */
+-	if (inode->logged_trans < trans->transid &&
+-	    (!old_dir || old_dir->logged_trans < trans->transid))
++	if (!inode_logged(trans, inode) &&
++	    (!old_dir || !inode_logged(trans, old_dir)))
+ 		return;
+ 
+ 	/*
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 38633ab8108bb..9f723b744863f 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1078,6 +1078,7 @@ static void __btrfs_free_extra_devids(struct btrfs_fs_devices *fs_devices,
+ 		if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) {
+ 			list_del_init(&device->dev_alloc_list);
+ 			clear_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state);
++			fs_devices->rw_devices--;
+ 		}
+ 		list_del_init(&device->dev_list);
+ 		fs_devices->num_devices--;
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 379a427f3c2f1..ae4ce762f4fb4 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -4631,7 +4631,7 @@ read_complete:
+ 
+ static int cifs_readpage(struct file *file, struct page *page)
+ {
+-	loff_t offset = (loff_t)page->index << PAGE_SHIFT;
++	loff_t offset = page_file_offset(page);
+ 	int rc = -EACCES;
+ 	unsigned int xid;
+ 
+diff --git a/fs/ext2/dir.c b/fs/ext2/dir.c
+index 14292dba3a12d..2c2f179b69779 100644
+--- a/fs/ext2/dir.c
++++ b/fs/ext2/dir.c
+@@ -106,12 +106,11 @@ static int ext2_commit_chunk(struct page *page, loff_t pos, unsigned len)
+ 	return err;
+ }
+ 
+-static bool ext2_check_page(struct page *page, int quiet)
++static bool ext2_check_page(struct page *page, int quiet, char *kaddr)
+ {
+ 	struct inode *dir = page->mapping->host;
+ 	struct super_block *sb = dir->i_sb;
+ 	unsigned chunk_size = ext2_chunk_size(dir);
+-	char *kaddr = page_address(page);
+ 	u32 max_inumber = le32_to_cpu(EXT2_SB(sb)->s_es->s_inodes_count);
+ 	unsigned offs, rec_len;
+ 	unsigned limit = PAGE_SIZE;
+@@ -205,7 +204,8 @@ static struct page * ext2_get_page(struct inode *dir, unsigned long n,
+ 	if (!IS_ERR(page)) {
+ 		*page_addr = kmap_local_page(page);
+ 		if (unlikely(!PageChecked(page))) {
+-			if (PageError(page) || !ext2_check_page(page, quiet))
++			if (PageError(page) || !ext2_check_page(page, quiet,
++								*page_addr))
+ 				goto fail;
+ 		}
+ 	}
+@@ -584,10 +584,10 @@ out_unlock:
+  * ext2_delete_entry deletes a directory entry by merging it with the
+  * previous entry. Page is up-to-date.
+  */
+-int ext2_delete_entry (struct ext2_dir_entry_2 * dir, struct page * page )
++int ext2_delete_entry (struct ext2_dir_entry_2 *dir, struct page *page,
++			char *kaddr)
+ {
+ 	struct inode *inode = page->mapping->host;
+-	char *kaddr = page_address(page);
+ 	unsigned from = ((char*)dir - kaddr) & ~(ext2_chunk_size(inode)-1);
+ 	unsigned to = ((char *)dir - kaddr) +
+ 				ext2_rec_len_from_disk(dir->rec_len);
+@@ -607,7 +607,7 @@ int ext2_delete_entry (struct ext2_dir_entry_2 * dir, struct page * page )
+ 		de = ext2_next_entry(de);
+ 	}
+ 	if (pde)
+-		from = (char*)pde - (char*)page_address(page);
++		from = (char *)pde - kaddr;
+ 	pos = page_offset(page) + from;
+ 	lock_page(page);
+ 	err = ext2_prepare_chunk(page, pos, to - from);
+diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h
+index b0a694820cb7f..e512630cb63ed 100644
+--- a/fs/ext2/ext2.h
++++ b/fs/ext2/ext2.h
+@@ -740,7 +740,8 @@ extern int ext2_inode_by_name(struct inode *dir,
+ extern int ext2_make_empty(struct inode *, struct inode *);
+ extern struct ext2_dir_entry_2 *ext2_find_entry(struct inode *, const struct qstr *,
+ 						struct page **, void **res_page_addr);
+-extern int ext2_delete_entry (struct ext2_dir_entry_2 *, struct page *);
++extern int ext2_delete_entry(struct ext2_dir_entry_2 *dir, struct page *page,
++			     char *kaddr);
+ extern int ext2_empty_dir (struct inode *);
+ extern struct ext2_dir_entry_2 *ext2_dotdot(struct inode *dir, struct page **p, void **pa);
+ extern void ext2_set_link(struct inode *, struct ext2_dir_entry_2 *, struct page *, void *,
+diff --git a/fs/ext2/namei.c b/fs/ext2/namei.c
+index 1f69b81655b66..5f6b7560eb3f3 100644
+--- a/fs/ext2/namei.c
++++ b/fs/ext2/namei.c
+@@ -293,7 +293,7 @@ static int ext2_unlink(struct inode * dir, struct dentry *dentry)
+ 		goto out;
+ 	}
+ 
+-	err = ext2_delete_entry (de, page);
++	err = ext2_delete_entry (de, page, page_addr);
+ 	ext2_put_page(page, page_addr);
+ 	if (err)
+ 		goto out;
+@@ -397,7 +397,7 @@ static int ext2_rename (struct user_namespace * mnt_userns,
+ 	old_inode->i_ctime = current_time(old_inode);
+ 	mark_inode_dirty(old_inode);
+ 
+-	ext2_delete_entry(old_de, old_page);
++	ext2_delete_entry(old_de, old_page, old_page_addr);
+ 
+ 	if (dir_de) {
+ 		if (old_dir != new_dir)
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index df4288776815e..d465e99971574 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1258,8 +1258,17 @@ static void io_prep_async_link(struct io_kiocb *req)
+ {
+ 	struct io_kiocb *cur;
+ 
+-	io_for_each_link(cur, req)
+-		io_prep_async_work(cur);
++	if (req->flags & REQ_F_LINK_TIMEOUT) {
++		struct io_ring_ctx *ctx = req->ctx;
++
++		spin_lock_irq(&ctx->completion_lock);
++		io_for_each_link(cur, req)
++			io_prep_async_work(cur);
++		spin_unlock_irq(&ctx->completion_lock);
++	} else {
++		io_for_each_link(cur, req)
++			io_prep_async_work(cur);
++	}
+ }
+ 
+ static void io_queue_async_work(struct io_kiocb *req)
+@@ -1890,7 +1899,7 @@ static void tctx_task_work(struct callback_head *cb)
+ 
+ 	clear_bit(0, &tctx->task_state);
+ 
+-	while (!wq_list_empty(&tctx->task_list)) {
++	while (true) {
+ 		struct io_ring_ctx *ctx = NULL;
+ 		struct io_wq_work_list list;
+ 		struct io_wq_work_node *node;
+@@ -1900,6 +1909,9 @@ static void tctx_task_work(struct callback_head *cb)
+ 		INIT_WQ_LIST(&tctx->task_list);
+ 		spin_unlock_irq(&tctx->task_lock);
+ 
++		if (wq_list_empty(&list))
++			break;
++
+ 		node = list.first;
+ 		while (node) {
+ 			struct io_wq_work_node *next = node->next;
+@@ -2448,6 +2460,12 @@ static bool io_rw_should_reissue(struct io_kiocb *req)
+ 	 */
+ 	if (percpu_ref_is_dying(&ctx->refs))
+ 		return false;
++	/*
++	 * Play it safe and assume it is not safe to re-import and reissue if
++	 * we're not in the original thread group (or not in task context).
++	 */
++	if (!same_thread_group(req->task, current) || !in_task())
++		return false;
+ 	return true;
+ }
+ #else
+@@ -4909,7 +4927,6 @@ static bool io_poll_complete(struct io_kiocb *req, __poll_t mask)
+ 	if (req->poll.events & EPOLLONESHOT)
+ 		flags = 0;
+ 	if (!io_cqring_fill_event(ctx, req->user_data, error, flags)) {
+-		io_poll_remove_waitqs(req);
+ 		req->poll.done = true;
+ 		flags = 0;
+ 	}
+@@ -4933,6 +4950,7 @@ static void io_poll_task_func(struct callback_head *cb)
+ 
+ 		done = io_poll_complete(req, req->result);
+ 		if (done) {
++			io_poll_remove_double(req);
+ 			hash_del(&req->hash_node);
+ 		} else {
+ 			req->result = 0;
+@@ -5121,7 +5139,7 @@ static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
+ 		ipt->error = -EINVAL;
+ 
+ 	spin_lock_irq(&ctx->completion_lock);
+-	if (ipt->error)
++	if (ipt->error || (mask && (poll->events & EPOLLONESHOT)))
+ 		io_poll_remove_double(req);
+ 	if (likely(poll->head)) {
+ 		spin_lock(&poll->head->lock);
+@@ -5192,7 +5210,6 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
+ 	ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask,
+ 					io_async_wake);
+ 	if (ret || ipt.error) {
+-		io_poll_remove_double(req);
+ 		spin_unlock_irq(&ctx->completion_lock);
+ 		return false;
+ 	}
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index 7756579430578..54d7843c02114 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -1529,6 +1529,45 @@ static void ocfs2_truncate_cluster_pages(struct inode *inode, u64 byte_start,
+ 	}
+ }
+ 
++/*
++ * zero out partial blocks of one cluster.
++ *
++ * start: file offset where zero starts, will be made upper block aligned.
++ * len: it will be trimmed to the end of current cluster if "start + len"
++ *      is bigger than it.
++ */
++static int ocfs2_zeroout_partial_cluster(struct inode *inode,
++					u64 start, u64 len)
++{
++	int ret;
++	u64 start_block, end_block, nr_blocks;
++	u64 p_block, offset;
++	u32 cluster, p_cluster, nr_clusters;
++	struct super_block *sb = inode->i_sb;
++	u64 end = ocfs2_align_bytes_to_clusters(sb, start);
++
++	if (start + len < end)
++		end = start + len;
++
++	start_block = ocfs2_blocks_for_bytes(sb, start);
++	end_block = ocfs2_blocks_for_bytes(sb, end);
++	nr_blocks = end_block - start_block;
++	if (!nr_blocks)
++		return 0;
++
++	cluster = ocfs2_bytes_to_clusters(sb, start);
++	ret = ocfs2_get_clusters(inode, cluster, &p_cluster,
++				&nr_clusters, NULL);
++	if (ret)
++		return ret;
++	if (!p_cluster)
++		return 0;
++
++	offset = start_block - ocfs2_clusters_to_blocks(sb, cluster);
++	p_block = ocfs2_clusters_to_blocks(sb, p_cluster) + offset;
++	return sb_issue_zeroout(sb, p_block, nr_blocks, GFP_NOFS);
++}
++
+ static int ocfs2_zero_partial_clusters(struct inode *inode,
+ 				       u64 start, u64 len)
+ {
+@@ -1538,6 +1577,7 @@ static int ocfs2_zero_partial_clusters(struct inode *inode,
+ 	struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
+ 	unsigned int csize = osb->s_clustersize;
+ 	handle_t *handle;
++	loff_t isize = i_size_read(inode);
+ 
+ 	/*
+ 	 * The "start" and "end" values are NOT necessarily part of
+@@ -1558,6 +1598,26 @@ static int ocfs2_zero_partial_clusters(struct inode *inode,
+ 	if ((start & (csize - 1)) == 0 && (end & (csize - 1)) == 0)
+ 		goto out;
+ 
++	/* No page cache for EOF blocks, issue zero out to disk. */
++	if (end > isize) {
++		/*
++		 * Zero out EOF blocks in the last cluster starting
++		 * from "isize", even when "start" > "isize", because
++		 * zeroing exactly at "start" is complicated: "start"
++		 * may not be block aligned, a buffer write would be
++		 * needed for that, and buffered writes beyond EOF
++		 * are not supported.
++		 */
++		ret = ocfs2_zeroout_partial_cluster(inode, isize,
++					end - isize);
++		if (ret) {
++			mlog_errno(ret);
++			goto out;
++		}
++		if (start >= isize)
++			goto out;
++		end = isize;
++	}
+ 	handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS);
+ 	if (IS_ERR(handle)) {
+ 		ret = PTR_ERR(handle);
+@@ -1855,45 +1915,6 @@ out:
+ 	return ret;
+ }
+ 
+-/*
+- * zero out partial blocks of one cluster.
+- *
+- * start: file offset where zero starts, will be made upper block aligned.
+- * len: it will be trimmed to the end of current cluster if "start + len"
+- *      is bigger than it.
+- */
+-static int ocfs2_zeroout_partial_cluster(struct inode *inode,
+-					u64 start, u64 len)
+-{
+-	int ret;
+-	u64 start_block, end_block, nr_blocks;
+-	u64 p_block, offset;
+-	u32 cluster, p_cluster, nr_clusters;
+-	struct super_block *sb = inode->i_sb;
+-	u64 end = ocfs2_align_bytes_to_clusters(sb, start);
+-
+-	if (start + len < end)
+-		end = start + len;
+-
+-	start_block = ocfs2_blocks_for_bytes(sb, start);
+-	end_block = ocfs2_blocks_for_bytes(sb, end);
+-	nr_blocks = end_block - start_block;
+-	if (!nr_blocks)
+-		return 0;
+-
+-	cluster = ocfs2_bytes_to_clusters(sb, start);
+-	ret = ocfs2_get_clusters(inode, cluster, &p_cluster,
+-				&nr_clusters, NULL);
+-	if (ret)
+-		return ret;
+-	if (!p_cluster)
+-		return 0;
+-
+-	offset = start_block - ocfs2_clusters_to_blocks(sb, cluster);
+-	p_block = ocfs2_clusters_to_blocks(sb, p_cluster) + offset;
+-	return sb_issue_zeroout(sb, p_block, nr_blocks, GFP_NOFS);
+-}
+-
+ /*
+  * Parts of this function taken from xfs_change_file_space()
+  */
+@@ -1935,7 +1956,6 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 		goto out_inode_unlock;
+ 	}
+ 
+-	orig_isize = i_size_read(inode);
+ 	switch (sr->l_whence) {
+ 	case 0: /*SEEK_SET*/
+ 		break;
+@@ -1943,7 +1963,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 		sr->l_start += f_pos;
+ 		break;
+ 	case 2: /*SEEK_END*/
+-		sr->l_start += orig_isize;
++		sr->l_start += i_size_read(inode);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+@@ -1998,6 +2018,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 		ret = -EINVAL;
+ 	}
+ 
++	orig_isize = i_size_read(inode);
+ 	/* zeroout eof blocks in the cluster. */
+ 	if (!ret && change_size && orig_isize < size) {
+ 		ret = ocfs2_zeroout_partial_cluster(inode, orig_isize,
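
A worked example of ocfs2_zeroout_partial_cluster(), assuming 4 KiB blocks and 32 KiB clusters (sizes chosen only for illustration): with isize = 10000 and a zeroing range ending at 20000 inside the same cluster,

	end         = min(align_to_cluster(10000), 20000) = 20000
	start_block = blocks_for_bytes(10000) = 3	/* rounds up */
	end_block   = blocks_for_bytes(20000) = 5
	nr_blocks   = 2

so bytes 12288..20479 are zeroed on disk via sb_issue_zeroout(), while the partial block holding byte 10000 is left untouched. That is what the function's comment means by "start ... will be made upper block aligned".
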
+diff --git a/fs/pipe.c b/fs/pipe.c
+index bfd946a9ad01f..9ef4231cce61c 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -429,20 +429,20 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
+ #endif
+ 
+ 	/*
+-	 * Only wake up if the pipe started out empty, since
+-	 * otherwise there should be no readers waiting.
++	 * Epoll nonsensically wants a wakeup whether the pipe
++	 * was already empty or not.
+ 	 *
+ 	 * If it wasn't empty we try to merge new data into
+ 	 * the last buffer.
+ 	 *
+ 	 * That naturally merges small writes, but it also
+-	 * page-aligs the rest of the writes for large writes
++	 * page-aligns the rest of the writes for large writes
+ 	 * spanning multiple pages.
+ 	 */
+ 	head = pipe->head;
+-	was_empty = pipe_empty(head, pipe->tail);
++	was_empty = true;
+ 	chars = total_len & (PAGE_SIZE-1);
+-	if (chars && !was_empty) {
++	if (chars && !pipe_empty(head, pipe->tail)) {
+ 		unsigned int mask = pipe->ring_size - 1;
+ 		struct pipe_buffer *buf = &pipe->bufs[(head - 1) & mask];
+ 		int offset = buf->offset + buf->len;
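
The pipe_write() hunk above makes every write issue a reader-side wakeup by pinning was_empty to true; as the new comment says, epoll waiters expect a wakeup whether or not the pipe already held data. A small userspace illustration of that expectation (not kernel code):

	#include <sys/epoll.h>

	/* Blocks until the pipe read end is readable. With the change above,
	 * a write() into an already non-empty pipe still wakes this waiter,
	 * which some epoll-driven event loops depend on. */
	static int wait_pipe_readable(int epfd)
	{
		struct epoll_event ev;

		return epoll_wait(epfd, &ev, 1, -1);
	}
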
+diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
+index f883f01a5061f..def596a85752d 100644
+--- a/include/linux/bpf_types.h
++++ b/include/linux/bpf_types.h
+@@ -132,4 +132,5 @@ BPF_LINK_TYPE(BPF_LINK_TYPE_CGROUP, cgroup)
+ BPF_LINK_TYPE(BPF_LINK_TYPE_ITER, iter)
+ #ifdef CONFIG_NET
+ BPF_LINK_TYPE(BPF_LINK_TYPE_NETNS, netns)
++BPF_LINK_TYPE(BPF_LINK_TYPE_XDP, xdp)
+ #endif
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 06841517ab1ed..6b6b201b75bf9 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -215,6 +215,13 @@ struct bpf_idx_pair {
+ 	u32 idx;
+ };
+ 
++struct bpf_id_pair {
++	u32 old;
++	u32 cur;
++};
++
++/* Maximum number of register states that can exist at once */
++#define BPF_ID_MAP_SIZE (MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE)
+ #define MAX_CALL_FRAMES 8
+ struct bpf_verifier_state {
+ 	/* call stack tracking */
+@@ -333,8 +340,8 @@ struct bpf_insn_aux_data {
+ 	};
+ 	u64 map_key_state; /* constant (32 bit) key tracking for maps */
+ 	int ctx_field_size; /* the ctx field size for load insn, maybe 0 */
+-	int sanitize_stack_off; /* stack slot to be cleared */
+ 	u32 seen; /* this insn was processed by the verifier at env->pass_cnt */
++	bool sanitize_stack_spill; /* subject to Spectre v4 sanitation */
+ 	bool zext_dst; /* this insn zero extends dst reg */
+ 	u8 alu_state; /* used in combination with alu_limit */
+ 
+@@ -407,6 +414,7 @@ struct bpf_verifier_env {
+ 	u32 used_map_cnt;		/* number of used maps */
+ 	u32 used_btf_cnt;		/* number of used BTF objects */
+ 	u32 id_gen;			/* used to generate unique reg IDs */
++	bool explore_alu_limits;
+ 	bool allow_ptr_leaks;
+ 	bool allow_uninit_stack;
+ 	bool allow_ptr_to_map_access;
+@@ -418,6 +426,7 @@ struct bpf_verifier_env {
+ 	const struct bpf_line_info *prev_linfo;
+ 	struct bpf_verifier_log log;
+ 	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
++	struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
+ 	struct {
+ 		int *insn_state;
+ 		int *insn_stack;
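
For scale, with the 5.13 definitions (MAX_BPF_REG = 11 for r0..r10, MAX_BPF_STACK = 512 and BPF_REG_SIZE = 8), the new scratch table is

	BPF_ID_MAP_SIZE = 11 + 512 / 8 = 75

entries of eight bytes each: small enough to embed in bpf_verifier_env and memset() per state comparison, instead of the kcalloc()/kfree() pair the old func_states_equal() paid on every call.
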
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index 9a09547bc7bae..16e5cebea82ca 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -73,6 +73,11 @@ struct ctl_table_header;
+ /* unused opcode to mark call to interpreter with arguments */
+ #define BPF_CALL_ARGS	0xe0
+ 
++/* unused opcode to mark speculation barrier for mitigating
++ * Speculative Store Bypass
++ */
++#define BPF_NOSPEC	0xc0
++
+ /* As per nm, we expose JITed images as text (code) section for
+  * kallsyms. That way, tools like perf can find it to match
+  * addresses.
+@@ -390,6 +395,16 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
+ 		.off   = 0,					\
+ 		.imm   = 0 })
+ 
++/* Speculation barrier */
++
++#define BPF_ST_NOSPEC()						\
++	((struct bpf_insn) {					\
++		.code  = BPF_ST | BPF_NOSPEC,			\
++		.dst_reg = 0,					\
++		.src_reg = 0,					\
++		.off   = 0,					\
++		.imm   = 0 })
++
+ /* Internal classic blocks for direct assignment */
+ 
+ #define __BPF_STMT(CODE, K)					\
+diff --git a/include/net/llc_pdu.h b/include/net/llc_pdu.h
+index c0f0a13ed8183..49aa79c7b278a 100644
+--- a/include/net/llc_pdu.h
++++ b/include/net/llc_pdu.h
+@@ -15,9 +15,11 @@
+ #include <linux/if_ether.h>
+ 
+ /* Lengths of frame formats */
+-#define LLC_PDU_LEN_I	4       /* header and 2 control bytes */
+-#define LLC_PDU_LEN_S	4
+-#define LLC_PDU_LEN_U	3       /* header and 1 control byte */
++#define LLC_PDU_LEN_I		4       /* header and 2 control bytes */
++#define LLC_PDU_LEN_S		4
++#define LLC_PDU_LEN_U		3       /* header and 1 control byte */
++/* header and 1 control byte and XID info */
++#define LLC_PDU_LEN_U_XID	(LLC_PDU_LEN_U + sizeof(struct llc_xid_info))
+ /* Known SAP addresses */
+ #define LLC_GLOBAL_SAP	0xFF
+ #define LLC_NULL_SAP	0x00	/* not network-layer visible */
+@@ -50,9 +52,10 @@
+ #define LLC_PDU_TYPE_U_MASK    0x03	/* 8-bit control field */
+ #define LLC_PDU_TYPE_MASK      0x03
+ 
+-#define LLC_PDU_TYPE_I	0	/* first bit */
+-#define LLC_PDU_TYPE_S	1	/* first two bits */
+-#define LLC_PDU_TYPE_U	3	/* first two bits */
++#define LLC_PDU_TYPE_I		0	/* first bit */
++#define LLC_PDU_TYPE_S		1	/* first two bits */
++#define LLC_PDU_TYPE_U		3	/* first two bits */
++#define LLC_PDU_TYPE_U_XID	4	/* private type for detecting XID commands */
+ 
+ #define LLC_PDU_TYPE_IS_I(pdu) \
+ 	((!(pdu->ctrl_1 & LLC_PDU_TYPE_I_MASK)) ? 1 : 0)
+@@ -230,9 +233,18 @@ static inline struct llc_pdu_un *llc_pdu_un_hdr(struct sk_buff *skb)
+ static inline void llc_pdu_header_init(struct sk_buff *skb, u8 type,
+ 				       u8 ssap, u8 dsap, u8 cr)
+ {
+-	const int hlen = type == LLC_PDU_TYPE_U ? 3 : 4;
++	int hlen = 4; /* default value for I and S types */
+ 	struct llc_pdu_un *pdu;
+ 
++	switch (type) {
++	case LLC_PDU_TYPE_U:
++		hlen = 3;
++		break;
++	case LLC_PDU_TYPE_U_XID:
++		hlen = 6;
++		break;
++	}
++
+ 	skb_push(skb, hlen);
+ 	skb_reset_network_header(skb);
+ 	pdu = llc_pdu_un_hdr(skb);
+@@ -374,7 +386,10 @@ static inline void llc_pdu_init_as_xid_cmd(struct sk_buff *skb,
+ 	xid_info->fmt_id = LLC_XID_FMT_ID;	/* 0x81 */
+ 	xid_info->type	 = svcs_supported;
+ 	xid_info->rw	 = rx_window << 1;	/* size of receive window */
+-	skb_put(skb, sizeof(struct llc_xid_info));
++
++	/* no need to push/put since llc_pdu_header_init() has already
++	 * pushed 3 + 3 bytes
++	 */
+ }
+ 
+ /**
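
The new length falls out of the structures in this header: a U-format PDU is DSAP + SSAP + one control byte (LLC_PDU_LEN_U = 3), and struct llc_xid_info carries fmt_id, type and rw as one byte each, so

	LLC_PDU_LEN_U_XID = LLC_PDU_LEN_U + sizeof(struct llc_xid_info)
	                  = 3 + 3 = 6

which matches the hlen = 6 case added to llc_pdu_header_init() above and explains why llc_pdu_init_as_xid_cmd() no longer needs its own skb_put().
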
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 9b15774983738..b1a5fc04492bd 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -32,6 +32,8 @@
+ #include <linux/perf_event.h>
+ #include <linux/extable.h>
+ #include <linux/log2.h>
++
++#include <asm/barrier.h>
+ #include <asm/unaligned.h>
+ 
+ /* Registers */
+@@ -1377,6 +1379,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
+ 		/* Non-UAPI available opcodes. */
+ 		[BPF_JMP | BPF_CALL_ARGS] = &&JMP_CALL_ARGS,
+ 		[BPF_JMP | BPF_TAIL_CALL] = &&JMP_TAIL_CALL,
++		[BPF_ST  | BPF_NOSPEC] = &&ST_NOSPEC,
+ 		[BPF_LDX | BPF_PROBE_MEM | BPF_B] = &&LDX_PROBE_MEM_B,
+ 		[BPF_LDX | BPF_PROBE_MEM | BPF_H] = &&LDX_PROBE_MEM_H,
+ 		[BPF_LDX | BPF_PROBE_MEM | BPF_W] = &&LDX_PROBE_MEM_W,
+@@ -1621,7 +1624,21 @@ out:
+ 	COND_JMP(s, JSGE, >=)
+ 	COND_JMP(s, JSLE, <=)
+ #undef COND_JMP
+-	/* STX and ST and LDX*/
++	/* ST, STX and LDX*/
++	ST_NOSPEC:
++		/* Speculation barrier for mitigating Speculative Store Bypass.
++		 * In case of arm64, we rely on the firmware mitigation as
++		 * controlled via the ssbd kernel parameter. Whenever the
++		 * mitigation is enabled, it works for all of the kernel code
++		 * with no need to provide any additional instructions here.
++		 * In case of x86, we use 'lfence' insn for mitigation. We
++		 * reuse preexisting logic from Spectre v1 mitigation that
++		 * happens to produce the required code on x86 for v4 as well.
++		 */
++#ifdef CONFIG_X86
++		barrier_nospec();
++#endif
++		CONT;
+ #define LDST(SIZEOP, SIZE)						\
+ 	STX_MEM_##SIZEOP:						\
+ 		*(SIZE *)(unsigned long) (DST + insn->off) = SRC;	\
+diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
+index bbfc6bb792400..ca3cd9aaa6ced 100644
+--- a/kernel/bpf/disasm.c
++++ b/kernel/bpf/disasm.c
+@@ -206,15 +206,17 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
+ 			verbose(cbs->private_data, "BUG_%02x\n", insn->code);
+ 		}
+ 	} else if (class == BPF_ST) {
+-		if (BPF_MODE(insn->code) != BPF_MEM) {
++		if (BPF_MODE(insn->code) == BPF_MEM) {
++			verbose(cbs->private_data, "(%02x) *(%s *)(r%d %+d) = %d\n",
++				insn->code,
++				bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
++				insn->dst_reg,
++				insn->off, insn->imm);
++		} else if (BPF_MODE(insn->code) == 0xc0 /* BPF_NOSPEC, no UAPI */) {
++			verbose(cbs->private_data, "(%02x) nospec\n", insn->code);
++		} else {
+ 			verbose(cbs->private_data, "BUG_st_%02x\n", insn->code);
+-			return;
+ 		}
+-		verbose(cbs->private_data, "(%02x) *(%s *)(r%d %+d) = %d\n",
+-			insn->code,
+-			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+-			insn->dst_reg,
+-			insn->off, insn->imm);
+ 	} else if (class == BPF_LDX) {
+ 		if (BPF_MODE(insn->code) != BPF_MEM) {
+ 			verbose(cbs->private_data, "BUG_ldx_%02x\n", insn->code);
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index e6db39a00de25..eab48745231fb 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2607,6 +2607,19 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 	cur = env->cur_state->frame[env->cur_state->curframe];
+ 	if (value_regno >= 0)
+ 		reg = &cur->regs[value_regno];
++	if (!env->bypass_spec_v4) {
++		bool sanitize = reg && is_spillable_regtype(reg->type);
++
++		for (i = 0; i < size; i++) {
++			if (state->stack[spi].slot_type[i] == STACK_INVALID) {
++				sanitize = true;
++				break;
++			}
++		}
++
++		if (sanitize)
++			env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
++	}
+ 
+ 	if (reg && size == BPF_REG_SIZE && register_is_bounded(reg) &&
+ 	    !register_is_null(reg) && env->bpf_capable) {
+@@ -2629,47 +2642,10 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ 			verbose(env, "invalid size of register spill\n");
+ 			return -EACCES;
+ 		}
+-
+ 		if (state != cur && reg->type == PTR_TO_STACK) {
+ 			verbose(env, "cannot spill pointers to stack into stack frame of the caller\n");
+ 			return -EINVAL;
+ 		}
+-
+-		if (!env->bypass_spec_v4) {
+-			bool sanitize = false;
+-
+-			if (state->stack[spi].slot_type[0] == STACK_SPILL &&
+-			    register_is_const(&state->stack[spi].spilled_ptr))
+-				sanitize = true;
+-			for (i = 0; i < BPF_REG_SIZE; i++)
+-				if (state->stack[spi].slot_type[i] == STACK_MISC) {
+-					sanitize = true;
+-					break;
+-				}
+-			if (sanitize) {
+-				int *poff = &env->insn_aux_data[insn_idx].sanitize_stack_off;
+-				int soff = (-spi - 1) * BPF_REG_SIZE;
+-
+-				/* detected reuse of integer stack slot with a pointer
+-				 * which means either llvm is reusing stack slot or
+-				 * an attacker is trying to exploit CVE-2018-3639
+-				 * (speculative store bypass)
+-				 * Have to sanitize that slot with preemptive
+-				 * store of zero.
+-				 */
+-				if (*poff && *poff != soff) {
+-					/* disallow programs where single insn stores
+-					 * into two different stack slots, since verifier
+-					 * cannot sanitize them
+-					 */
+-					verbose(env,
+-						"insn %d cannot access two stack slots fp%d and fp%d",
+-						insn_idx, *poff, soff);
+-					return -EINVAL;
+-				}
+-				*poff = soff;
+-			}
+-		}
+ 		save_register_state(state, spi, reg);
+ 	} else {
+ 		u8 type = STACK_MISC;
+@@ -6559,6 +6535,12 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ 		alu_state |= off_is_imm ? BPF_ALU_IMMEDIATE : 0;
+ 		alu_state |= ptr_is_dst_reg ?
+ 			     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
++
++		/* Limit pruning on unknown scalars to enable deep search for
++		 * potential masking differences from other program paths.
++		 */
++		if (!off_is_imm)
++			env->explore_alu_limits = true;
+ 	}
+ 
+ 	err = update_alu_sanitation_state(aux, alu_state, alu_limit);
+@@ -9803,13 +9785,6 @@ static bool range_within(struct bpf_reg_state *old,
+ 	       old->s32_max_value >= cur->s32_max_value;
+ }
+ 
+-/* Maximum number of register states that can exist at once */
+-#define ID_MAP_SIZE	(MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE)
+-struct idpair {
+-	u32 old;
+-	u32 cur;
+-};
+-
+ /* If in the old state two registers had the same id, then they need to have
+  * the same id in the new state as well.  But that id could be different from
+  * the old state, so we need to track the mapping from old to new ids.
+@@ -9820,11 +9795,11 @@ struct idpair {
+  * So we look through our idmap to see if this old id has been seen before.  If
+  * so, we require the new id to match; otherwise, we add the id pair to the map.
+  */
+-static bool check_ids(u32 old_id, u32 cur_id, struct idpair *idmap)
++static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
+ {
+ 	unsigned int i;
+ 
+-	for (i = 0; i < ID_MAP_SIZE; i++) {
++	for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
+ 		if (!idmap[i].old) {
+ 			/* Reached an empty slot; haven't seen this id before */
+ 			idmap[i].old = old_id;
+@@ -9936,8 +9911,8 @@ next:
+ }
+ 
+ /* Returns true if (rold safe implies rcur safe) */
+-static bool regsafe(struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
+-		    struct idpair *idmap)
++static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
++		    struct bpf_reg_state *rcur, struct bpf_id_pair *idmap)
+ {
+ 	bool equal;
+ 
+@@ -9963,6 +9938,8 @@ static bool regsafe(struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
+ 		return false;
+ 	switch (rold->type) {
+ 	case SCALAR_VALUE:
++		if (env->explore_alu_limits)
++			return false;
+ 		if (rcur->type == SCALAR_VALUE) {
+ 			if (!rold->precise && !rcur->precise)
+ 				return true;
+@@ -10053,9 +10030,8 @@ static bool regsafe(struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
+ 	return false;
+ }
+ 
+-static bool stacksafe(struct bpf_func_state *old,
+-		      struct bpf_func_state *cur,
+-		      struct idpair *idmap)
++static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
++		      struct bpf_func_state *cur, struct bpf_id_pair *idmap)
+ {
+ 	int i, spi;
+ 
+@@ -10100,9 +10076,8 @@ static bool stacksafe(struct bpf_func_state *old,
+ 			continue;
+ 		if (old->stack[spi].slot_type[0] != STACK_SPILL)
+ 			continue;
+-		if (!regsafe(&old->stack[spi].spilled_ptr,
+-			     &cur->stack[spi].spilled_ptr,
+-			     idmap))
++		if (!regsafe(env, &old->stack[spi].spilled_ptr,
++			     &cur->stack[spi].spilled_ptr, idmap))
+ 			/* when explored and current stack slot are both storing
+ 			 * spilled registers, check that stored pointers types
+ 			 * are the same as well.
+@@ -10152,32 +10127,24 @@ static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur)
+  * whereas register type in current state is meaningful, it means that
+  * the current state will reach 'bpf_exit' instruction safely
+  */
+-static bool func_states_equal(struct bpf_func_state *old,
++static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_state *old,
+ 			      struct bpf_func_state *cur)
+ {
+-	struct idpair *idmap;
+-	bool ret = false;
+ 	int i;
+ 
+-	idmap = kcalloc(ID_MAP_SIZE, sizeof(struct idpair), GFP_KERNEL);
+-	/* If we failed to allocate the idmap, just say it's not safe */
+-	if (!idmap)
+-		return false;
+-
+-	for (i = 0; i < MAX_BPF_REG; i++) {
+-		if (!regsafe(&old->regs[i], &cur->regs[i], idmap))
+-			goto out_free;
+-	}
++	memset(env->idmap_scratch, 0, sizeof(env->idmap_scratch));
++	for (i = 0; i < MAX_BPF_REG; i++)
++		if (!regsafe(env, &old->regs[i], &cur->regs[i],
++			     env->idmap_scratch))
++			return false;
+ 
+-	if (!stacksafe(old, cur, idmap))
+-		goto out_free;
++	if (!stacksafe(env, old, cur, env->idmap_scratch))
++		return false;
+ 
+ 	if (!refsafe(old, cur))
+-		goto out_free;
+-	ret = true;
+-out_free:
+-	kfree(idmap);
+-	return ret;
++		return false;
++
++	return true;
+ }
+ 
+ static bool states_equal(struct bpf_verifier_env *env,
+@@ -10204,7 +10171,7 @@ static bool states_equal(struct bpf_verifier_env *env,
+ 	for (i = 0; i <= old->curframe; i++) {
+ 		if (old->frame[i]->callsite != cur->frame[i]->callsite)
+ 			return false;
+-		if (!func_states_equal(old->frame[i], cur->frame[i]))
++		if (!func_states_equal(env, old->frame[i], cur->frame[i]))
+ 			return false;
+ 	}
+ 	return true;
+@@ -11891,35 +11858,33 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ 
+ 	for (i = 0; i < insn_cnt; i++, insn++) {
+ 		bpf_convert_ctx_access_t convert_ctx_access;
++		bool ctx_access;
+ 
+ 		if (insn->code == (BPF_LDX | BPF_MEM | BPF_B) ||
+ 		    insn->code == (BPF_LDX | BPF_MEM | BPF_H) ||
+ 		    insn->code == (BPF_LDX | BPF_MEM | BPF_W) ||
+-		    insn->code == (BPF_LDX | BPF_MEM | BPF_DW))
++		    insn->code == (BPF_LDX | BPF_MEM | BPF_DW)) {
+ 			type = BPF_READ;
+-		else if (insn->code == (BPF_STX | BPF_MEM | BPF_B) ||
+-			 insn->code == (BPF_STX | BPF_MEM | BPF_H) ||
+-			 insn->code == (BPF_STX | BPF_MEM | BPF_W) ||
+-			 insn->code == (BPF_STX | BPF_MEM | BPF_DW))
++			ctx_access = true;
++		} else if (insn->code == (BPF_STX | BPF_MEM | BPF_B) ||
++			   insn->code == (BPF_STX | BPF_MEM | BPF_H) ||
++			   insn->code == (BPF_STX | BPF_MEM | BPF_W) ||
++			   insn->code == (BPF_STX | BPF_MEM | BPF_DW) ||
++			   insn->code == (BPF_ST | BPF_MEM | BPF_B) ||
++			   insn->code == (BPF_ST | BPF_MEM | BPF_H) ||
++			   insn->code == (BPF_ST | BPF_MEM | BPF_W) ||
++			   insn->code == (BPF_ST | BPF_MEM | BPF_DW)) {
+ 			type = BPF_WRITE;
+-		else
++			ctx_access = BPF_CLASS(insn->code) == BPF_STX;
++		} else {
+ 			continue;
++		}
+ 
+ 		if (type == BPF_WRITE &&
+-		    env->insn_aux_data[i + delta].sanitize_stack_off) {
++		    env->insn_aux_data[i + delta].sanitize_stack_spill) {
+ 			struct bpf_insn patch[] = {
+-				/* Sanitize suspicious stack slot with zero.
+-				 * There are no memory dependencies for this store,
+-				 * since it's only using frame pointer and immediate
+-				 * constant of zero
+-				 */
+-				BPF_ST_MEM(BPF_DW, BPF_REG_FP,
+-					   env->insn_aux_data[i + delta].sanitize_stack_off,
+-					   0),
+-				/* the original STX instruction will immediately
+-				 * overwrite the same stack slot with appropriate value
+-				 */
+ 				*insn,
++				BPF_ST_NOSPEC(),
+ 			};
+ 
+ 			cnt = ARRAY_SIZE(patch);
+@@ -11933,6 +11898,9 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ 			continue;
+ 		}
+ 
++		if (!ctx_access)
++			continue;
++
+ 		switch (env->insn_aux_data[i + delta].ptr_type) {
+ 		case PTR_TO_CTX:
+ 			if (!ops->convert_ctx_access)
+@@ -12737,37 +12705,6 @@ static void free_states(struct bpf_verifier_env *env)
+ 	}
+ }
+ 
+-/* The verifier is using insn_aux_data[] to store temporary data during
+- * verification and to store information for passes that run after the
+- * verification like dead code sanitization. do_check_common() for subprogram N
+- * may analyze many other subprograms. sanitize_insn_aux_data() clears all
+- * temporary data after do_check_common() finds that subprogram N cannot be
+- * verified independently. pass_cnt counts the number of times
+- * do_check_common() was run and insn->aux->seen tells the pass number
+- * insn_aux_data was touched. These variables are compared to clear temporary
+- * data from failed pass. For testing and experiments do_check_common() can be
+- * run multiple times even when prior attempt to verify is unsuccessful.
+- *
+- * Note that special handling is needed on !env->bypass_spec_v1 if this is
+- * ever called outside of error path with subsequent program rejection.
+- */
+-static void sanitize_insn_aux_data(struct bpf_verifier_env *env)
+-{
+-	struct bpf_insn *insn = env->prog->insnsi;
+-	struct bpf_insn_aux_data *aux;
+-	int i, class;
+-
+-	for (i = 0; i < env->prog->len; i++) {
+-		class = BPF_CLASS(insn[i].code);
+-		if (class != BPF_LDX && class != BPF_STX)
+-			continue;
+-		aux = &env->insn_aux_data[i];
+-		if (aux->seen != env->pass_cnt)
+-			continue;
+-		memset(aux, 0, offsetof(typeof(*aux), orig_idx));
+-	}
+-}
+-
+ static int do_check_common(struct bpf_verifier_env *env, int subprog)
+ {
+ 	bool pop_log = !(env->log.level & BPF_LOG_LEVEL2);
+@@ -12844,9 +12781,6 @@ out:
+ 	if (!ret && pop_log)
+ 		bpf_vlog_reset(&env->log, 0);
+ 	free_states(env);
+-	if (ret)
+-		/* clean aux data in case subprog was rejected */
+-		sanitize_insn_aux_data(env);
+ 	return ret;
+ }
+ 
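
The net effect of the new sanitize_stack_spill handling is a simpler instruction rewrite: instead of pre-zeroing the suspicious slot, convert_ctx_accesses() keeps the original store and appends a barrier. Conceptually (BPF assembly shown for illustration only):

	before:	*(u64 *)(r10 - 8) = r1
	after:	*(u64 *)(r10 - 8) = r1
		nospec		/* BPF_ST | BPF_NOSPEC; lfence on x86 */

This also removes the old single-slot restriction, since a barrier after the store works no matter how many stack slots the instruction touches.
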
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index f4f2d05c8c7ba..946d848156932 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -3394,7 +3394,8 @@ static unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
+ 	unsigned long val;
+ 
+ 	if (mem_cgroup_is_root(memcg)) {
+-		cgroup_rstat_flush(memcg->css.cgroup);
++		/* mem_cgroup_threshold() calls here from irqsafe context */
++		cgroup_rstat_flush_irqsafe(memcg->css.cgroup);
+ 		val = memcg_page_state(memcg, NR_FILE_PAGES) +
+ 			memcg_page_state(memcg, NR_ANON_MAPPED);
+ 		if (swap)
+diff --git a/mm/slab.h b/mm/slab.h
+index b3294712a6868..aed67dbc79659 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -350,7 +350,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s_orig,
+ 			continue;
+ 
+ 		page = virt_to_head_page(p[i]);
+-		objcgs = page_objcgs(page);
++		objcgs = page_objcgs_check(page);
+ 		if (!objcgs)
+ 			continue;
+ 
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index c3946c3558826..bdc95bd7a851f 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -1075,11 +1075,16 @@ static bool j1939_session_deactivate_locked(struct j1939_session *session)
+ 
+ static bool j1939_session_deactivate(struct j1939_session *session)
+ {
++	struct j1939_priv *priv = session->priv;
+ 	bool active;
+ 
+-	j1939_session_list_lock(session->priv);
++	j1939_session_list_lock(priv);
++	/* This function should be called with a session ref-count of at
++	 * least 2.
++	 */
++	WARN_ON_ONCE(kref_read(&session->kref) < 2);
+ 	active = j1939_session_deactivate_locked(session);
+-	j1939_session_list_unlock(session->priv);
++	j1939_session_list_unlock(priv);
+ 
+ 	return active;
+ }
+@@ -1869,7 +1874,7 @@ static void j1939_xtp_rx_dat_one(struct j1939_session *session,
+ 		if (!session->transmission)
+ 			j1939_tp_schedule_txtimer(session, 0);
+ 	} else {
+-		j1939_tp_set_rxtimeout(session, 250);
++		j1939_tp_set_rxtimeout(session, 750);
+ 	}
+ 	session->last_cmd = 0xff;
+ 	consume_skb(se_skb);
+diff --git a/net/can/raw.c b/net/can/raw.c
+index ac96fc2100253..5dca1e9e44cf5 100644
+--- a/net/can/raw.c
++++ b/net/can/raw.c
+@@ -546,10 +546,18 @@ static int raw_setsockopt(struct socket *sock, int level, int optname,
+ 				return -EFAULT;
+ 		}
+ 
++		rtnl_lock();
+ 		lock_sock(sk);
+ 
+-		if (ro->bound && ro->ifindex)
++		if (ro->bound && ro->ifindex) {
+ 			dev = dev_get_by_index(sock_net(sk), ro->ifindex);
++			if (!dev) {
++				if (count > 1)
++					kfree(filter);
++				err = -ENODEV;
++				goto out_fil;
++			}
++		}
+ 
+ 		if (ro->bound) {
+ 			/* (try to) register the new filters */
+@@ -588,6 +596,7 @@ static int raw_setsockopt(struct socket *sock, int level, int optname,
+ 			dev_put(dev);
+ 
+ 		release_sock(sk);
++		rtnl_unlock();
+ 
+ 		break;
+ 
+@@ -600,10 +609,16 @@ static int raw_setsockopt(struct socket *sock, int level, int optname,
+ 
+ 		err_mask &= CAN_ERR_MASK;
+ 
++		rtnl_lock();
+ 		lock_sock(sk);
+ 
+-		if (ro->bound && ro->ifindex)
++		if (ro->bound && ro->ifindex) {
+ 			dev = dev_get_by_index(sock_net(sk), ro->ifindex);
++			if (!dev) {
++				err = -ENODEV;
++				goto out_err;
++			}
++		}
+ 
+ 		/* remove current error mask */
+ 		if (ro->bound) {
+@@ -627,6 +642,7 @@ static int raw_setsockopt(struct socket *sock, int level, int optname,
+ 			dev_put(dev);
+ 
+ 		release_sock(sk);
++		rtnl_unlock();
+ 
+ 		break;
+ 
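
Both setsockopt branches above now follow the same ordering, which keeps the interface from being unregistered between the dev_get_by_index() lookup and the filter update. As a sketch:

	rtnl_lock();			/* pins netdevice registration state */
	lock_sock(sk);			/* protects ro->bound and ro->ifindex */
	dev = dev_get_by_index(...);	/* may now fail cleanly with -ENODEV */
	/* ...register or remove filters / error mask... */
	release_sock(sk);
	rtnl_unlock();
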
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index b2410a1bfa23d..45b3a3adc886f 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -790,8 +790,6 @@ static void sk_psock_destroy(struct work_struct *work)
+ 
+ void sk_psock_drop(struct sock *sk, struct sk_psock *psock)
+ {
+-	sk_psock_stop(psock, false);
+-
+ 	write_lock_bh(&sk->sk_callback_lock);
+ 	sk_psock_restore_proto(sk, psock);
+ 	rcu_assign_sk_user_data(sk, NULL);
+@@ -801,6 +799,8 @@ void sk_psock_drop(struct sock *sk, struct sk_psock *psock)
+ 		sk_psock_stop_verdict(sk, psock);
+ 	write_unlock_bh(&sk->sk_callback_lock);
+ 
++	sk_psock_stop(psock, false);
++
+ 	INIT_RCU_WORK(&psock->rwork, sk_psock_destroy);
+ 	queue_rcu_work(system_wq, &psock->rwork);
+ }
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 0dca00745ac3c..be75b409445c2 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -390,7 +390,7 @@ int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb,
+ 		tunnel->i_seqno = ntohl(tpi->seq) + 1;
+ 	}
+ 
+-	skb_reset_network_header(skb);
++	skb_set_network_header(skb, (tunnel->dev->type == ARPHRD_ETHER) ? ETH_HLEN : 0);
+ 
+ 	err = IP_ECN_decapsulate(iph, skb);
+ 	if (unlikely(err)) {
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 7180979114e49..ac5cadd02cfa8 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -98,8 +98,16 @@ static inline u8 llc_ui_header_len(struct sock *sk, struct sockaddr_llc *addr)
+ {
+ 	u8 rc = LLC_PDU_LEN_U;
+ 
+-	if (addr->sllc_test || addr->sllc_xid)
++	if (addr->sllc_test)
+ 		rc = LLC_PDU_LEN_U;
++	else if (addr->sllc_xid)
++		/* We need to expand the header by sizeof(struct llc_xid_info),
++		 * since llc_pdu_init_as_xid_cmd() fills bytes 4, 5 and 6 of the
++		 * LLC header with the XID PDU. llc_ui_sendmsg() reserves the
++		 * header and then fills all remaining space with user data, so
++		 * without these extra bytes the XID info would overwrite user data
++		 */
++		rc = LLC_PDU_LEN_U_XID;
+ 	else if (sk->sk_type == SOCK_STREAM)
+ 		rc = LLC_PDU_LEN_I;
+ 	return rc;
+diff --git a/net/llc/llc_s_ac.c b/net/llc/llc_s_ac.c
+index b554f26c68ee0..79d1cef8f15a9 100644
+--- a/net/llc/llc_s_ac.c
++++ b/net/llc/llc_s_ac.c
+@@ -79,7 +79,7 @@ int llc_sap_action_send_xid_c(struct llc_sap *sap, struct sk_buff *skb)
+ 	struct llc_sap_state_ev *ev = llc_sap_ev(skb);
+ 	int rc;
+ 
+-	llc_pdu_header_init(skb, LLC_PDU_TYPE_U, ev->saddr.lsap,
++	llc_pdu_header_init(skb, LLC_PDU_TYPE_U_XID, ev->saddr.lsap,
+ 			    ev->daddr.lsap, LLC_PDU_CMD);
+ 	llc_pdu_init_as_xid_cmd(skb, LLC_XID_NULL_CLASS_2, 0);
+ 	rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac);
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 7a99892e5aba4..9f14434598527 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -152,6 +152,8 @@ static int ieee80211_change_iface(struct wiphy *wiphy,
+ 				  struct vif_params *params)
+ {
+ 	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
++	struct ieee80211_local *local = sdata->local;
++	struct sta_info *sta;
+ 	int ret;
+ 
+ 	ret = ieee80211_if_change_type(sdata, type);
+@@ -162,7 +164,24 @@ static int ieee80211_change_iface(struct wiphy *wiphy,
+ 		RCU_INIT_POINTER(sdata->u.vlan.sta, NULL);
+ 		ieee80211_check_fast_rx_iface(sdata);
+ 	} else if (type == NL80211_IFTYPE_STATION && params->use_4addr >= 0) {
++		struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
++
++		if (params->use_4addr == ifmgd->use_4addr)
++			return 0;
++
+ 		sdata->u.mgd.use_4addr = params->use_4addr;
++		if (!ifmgd->associated)
++			return 0;
++
++		mutex_lock(&local->sta_mtx);
++		sta = sta_info_get(sdata, ifmgd->bssid);
++		if (sta)
++			drv_sta_set_4addr(local, sdata, &sta->sta,
++					  params->use_4addr);
++		mutex_unlock(&local->sta_mtx);
++
++		if (params->use_4addr)
++			ieee80211_send_4addr_nullfunc(local, sdata);
+ 	}
+ 
+ 	if (sdata->vif.type == NL80211_IFTYPE_MONITOR) {
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 648696b49f897..1e1d2e72de4a0 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -2045,6 +2045,8 @@ void ieee80211_dynamic_ps_timer(struct timer_list *t);
+ void ieee80211_send_nullfunc(struct ieee80211_local *local,
+ 			     struct ieee80211_sub_if_data *sdata,
+ 			     bool powersave);
++void ieee80211_send_4addr_nullfunc(struct ieee80211_local *local,
++				   struct ieee80211_sub_if_data *sdata);
+ void ieee80211_sta_tx_notify(struct ieee80211_sub_if_data *sdata,
+ 			     struct ieee80211_hdr *hdr, bool ack, u16 tx_time);
+ 
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index b1c44fa63a06f..9bed6464c5bd6 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1115,8 +1115,8 @@ void ieee80211_send_nullfunc(struct ieee80211_local *local,
+ 	ieee80211_tx_skb(sdata, skb);
+ }
+ 
+-static void ieee80211_send_4addr_nullfunc(struct ieee80211_local *local,
+-					  struct ieee80211_sub_if_data *sdata)
++void ieee80211_send_4addr_nullfunc(struct ieee80211_local *local,
++				   struct ieee80211_sub_if_data *sdata)
+ {
+ 	struct sk_buff *skb;
+ 	struct ieee80211_hdr *nullfunc;
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index e0befcf8113a9..69079a382d3a5 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -666,8 +666,13 @@ bool nf_ct_delete(struct nf_conn *ct, u32 portid, int report)
+ 		return false;
+ 
+ 	tstamp = nf_conn_tstamp_find(ct);
+-	if (tstamp && tstamp->stop == 0)
++	if (tstamp) {
++		s32 timeout = ct->timeout - nfct_time_stamp;
++
+ 		tstamp->stop = ktime_get_real_ns();
++		if (timeout < 0)
++			tstamp->stop -= jiffies_to_nsecs(-timeout);
++	}
+ 
+ 	if (nf_conntrack_event_report(IPCT_DESTROY, ct,
+ 				    portid, report) < 0) {
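
A worked example of the timestamp fix above, assuming the entry expired five seconds before nf_ct_delete() ran:

	timeout      = ct->timeout - nfct_time_stamp	/* = -5 * HZ */
	tstamp->stop = ktime_get_real_ns()
	             - jiffies_to_nsecs(5 * HZ)	/* backdated 5 s */

so a flow that timed out quietly is recorded as stopping when it actually expired, not when the entry was finally reaped.
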
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index a5db7c59ad4e4..7512bb819dff3 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -8479,6 +8479,16 @@ static int nf_tables_commit_audit_alloc(struct list_head *adl,
+ 	return 0;
+ }
+ 
++static void nf_tables_commit_audit_free(struct list_head *adl)
++{
++	struct nft_audit_data *adp, *adn;
++
++	list_for_each_entry_safe(adp, adn, adl, list) {
++		list_del(&adp->list);
++		kfree(adp);
++	}
++}
++
+ static void nf_tables_commit_audit_collect(struct list_head *adl,
+ 					   struct nft_table *table, u32 op)
+ {
+@@ -8543,6 +8553,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 		ret = nf_tables_commit_audit_alloc(&adl, trans->ctx.table);
+ 		if (ret) {
+ 			nf_tables_commit_chain_prepare_cancel(net);
++			nf_tables_commit_audit_free(&adl);
+ 			return ret;
+ 		}
+ 		if (trans->msg_type == NFT_MSG_NEWRULE ||
+@@ -8552,6 +8563,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ 			ret = nf_tables_commit_chain_prepare(net, chain);
+ 			if (ret < 0) {
+ 				nf_tables_commit_chain_prepare_cancel(net);
++				nf_tables_commit_audit_free(&adl);
+ 				return ret;
+ 			}
+ 		}
+diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c
+index 0840c635b752e..be1595d6979d8 100644
+--- a/net/netfilter/nft_nat.c
++++ b/net/netfilter/nft_nat.c
+@@ -201,7 +201,9 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 		alen = sizeof_field(struct nf_nat_range, min_addr.ip6);
+ 		break;
+ 	default:
+-		return -EAFNOSUPPORT;
++		if (tb[NFTA_NAT_REG_ADDR_MIN])
++			return -EAFNOSUPPORT;
++		break;
+ 	}
+ 	priv->family = family;
+ 
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index f2efaa4225f91..67993bcfecdea 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -518,8 +518,10 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ 		if (!ipc)
+ 			goto err;
+ 
+-		if (sock_queue_rcv_skb(&ipc->sk, skb))
++		if (sock_queue_rcv_skb(&ipc->sk, skb)) {
++			qrtr_port_put(ipc);
+ 			goto err;
++		}
+ 
+ 		qrtr_port_put(ipc);
+ 	}
+@@ -839,6 +841,8 @@ static int qrtr_local_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+ 
+ 	ipc = qrtr_port_lookup(to->sq_port);
+ 	if (!ipc || &ipc->sk == skb->sk) { /* do not send to self */
++		if (ipc)
++			qrtr_port_put(ipc);
+ 		kfree_skb(skb);
+ 		return -ENODEV;
+ 	}
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index f72bff93745c4..ddb5b5c2550ef 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -1175,7 +1175,7 @@ static struct sctp_association *__sctp_rcv_asconf_lookup(
+ 	if (unlikely(!af))
+ 		return NULL;
+ 
+-	if (af->from_addr_param(&paddr, param, peer_port, 0))
++	if (!af->from_addr_param(&paddr, param, peer_port, 0))
+ 		return NULL;
+ 
+ 	return __sctp_lookup_association(net, laddr, &paddr, transportp);
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index e5c43d4d5a75f..c9391d38de85c 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -898,16 +898,10 @@ static int tipc_aead_decrypt(struct net *net, struct tipc_aead *aead,
+ 	if (unlikely(!aead))
+ 		return -ENOKEY;
+ 
+-	/* Cow skb data if needed */
+-	if (likely(!skb_cloned(skb) &&
+-		   (!skb_is_nonlinear(skb) || !skb_has_frag_list(skb)))) {
+-		nsg = 1 + skb_shinfo(skb)->nr_frags;
+-	} else {
+-		nsg = skb_cow_data(skb, 0, &unused);
+-		if (unlikely(nsg < 0)) {
+-			pr_err("RX: skb_cow_data() returned %d\n", nsg);
+-			return nsg;
+-		}
++	nsg = skb_cow_data(skb, 0, &unused);
++	if (unlikely(nsg < 0)) {
++		pr_err("RX: skb_cow_data() returned %d\n", nsg);
++		return nsg;
+ 	}
+ 
+ 	/* Allocate memory for the AEAD operation */
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 53af72824c9ce..9bdc5147a65a1 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -160,6 +160,7 @@ static void tipc_sk_remove(struct tipc_sock *tsk);
+ static int __tipc_sendstream(struct socket *sock, struct msghdr *m, size_t dsz);
+ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dsz);
+ static void tipc_sk_push_backlog(struct tipc_sock *tsk, bool nagle_ack);
++static int tipc_wait_for_connect(struct socket *sock, long *timeo_p);
+ 
+ static const struct proto_ops packet_ops;
+ static const struct proto_ops stream_ops;
+@@ -1525,8 +1526,13 @@ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dlen)
+ 		rc = 0;
+ 	}
+ 
+-	if (unlikely(syn && !rc))
++	if (unlikely(syn && !rc)) {
+ 		tipc_set_sk_state(sk, TIPC_CONNECTING);
++		if (timeout) {
++			timeout = msecs_to_jiffies(timeout);
++			tipc_wait_for_connect(sock, &timeout);
++		}
++	}
+ 
+ 	return rc ? rc : dlen;
+ }
+@@ -1574,7 +1580,7 @@ static int __tipc_sendstream(struct socket *sock, struct msghdr *m, size_t dlen)
+ 		return -EMSGSIZE;
+ 
+ 	/* Handle implicit connection setup */
+-	if (unlikely(dest)) {
++	if (unlikely(dest && sk->sk_state == TIPC_OPEN)) {
+ 		rc = __tipc_sendmsg(sock, m, dlen);
+ 		if (dlen && dlen == rc) {
+ 			tsk->peer_caps = tipc_node_get_capabilities(net, dnode);
+@@ -2665,7 +2671,7 @@ static int tipc_listen(struct socket *sock, int len)
+ static int tipc_wait_for_accept(struct socket *sock, long timeo)
+ {
+ 	struct sock *sk = sock->sk;
+-	DEFINE_WAIT(wait);
++	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ 	int err;
+ 
+ 	/* True wake-one mechanism for incoming connections: only
+@@ -2674,12 +2680,12 @@ static int tipc_wait_for_accept(struct socket *sock, long timeo)
+ 	 * anymore, the common case will execute the loop only once.
+ 	*/
+ 	for (;;) {
+-		prepare_to_wait_exclusive(sk_sleep(sk), &wait,
+-					  TASK_INTERRUPTIBLE);
+ 		if (timeo && skb_queue_empty(&sk->sk_receive_queue)) {
++			add_wait_queue(sk_sleep(sk), &wait);
+ 			release_sock(sk);
+-			timeo = schedule_timeout(timeo);
++			timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
+ 			lock_sock(sk);
++			remove_wait_queue(sk_sleep(sk), &wait);
+ 		}
+ 		err = 0;
+ 		if (!skb_queue_empty(&sk->sk_receive_queue))
+@@ -2691,7 +2697,6 @@ static int tipc_wait_for_accept(struct socket *sock, long timeo)
+ 		if (signal_pending(current))
+ 			break;
+ 	}
+-	finish_wait(sk_sleep(sk), &wait);
+ 	return err;
+ }
+ 
+@@ -2708,9 +2713,10 @@ static int tipc_accept(struct socket *sock, struct socket *new_sock, int flags,
+ 		       bool kern)
+ {
+ 	struct sock *new_sk, *sk = sock->sk;
+-	struct sk_buff *buf;
+ 	struct tipc_sock *new_tsock;
++	struct msghdr m = {NULL,};
+ 	struct tipc_msg *msg;
++	struct sk_buff *buf;
+ 	long timeo;
+ 	int res;
+ 
+@@ -2755,19 +2761,17 @@ static int tipc_accept(struct socket *sock, struct socket *new_sock, int flags,
+ 	}
+ 
+ 	/*
+-	 * Respond to 'SYN-' by discarding it & returning 'ACK'-.
+-	 * Respond to 'SYN+' by queuing it on new socket.
++	 * Respond to 'SYN-' by discarding it & returning 'ACK'.
++	 * Respond to 'SYN+' by queuing it on new socket & returning 'ACK'.
+ 	 */
+ 	if (!msg_data_sz(msg)) {
+-		struct msghdr m = {NULL,};
+-
+ 		tsk_advance_rx_queue(sk);
+-		__tipc_sendstream(new_sock, &m, 0);
+ 	} else {
+ 		__skb_dequeue(&sk->sk_receive_queue);
+ 		__skb_queue_head(&new_sk->sk_receive_queue, buf);
+ 		skb_set_owner_r(buf, new_sk);
+ 	}
++	__tipc_sendstream(new_sock, &m, 0);
+ 	release_sock(new_sk);
+ exit:
+ 	release_sock(sk);
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 4f06c1825029f..dd76accab018b 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1744,16 +1744,14 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 			 * be grouped with this beacon for updates ...
+ 			 */
+ 			if (!cfg80211_combine_bsses(rdev, new)) {
+-				kfree(new);
++				bss_ref_put(rdev, new);
+ 				goto drop;
+ 			}
+ 		}
+ 
+ 		if (rdev->bss_entries >= bss_entries_limit &&
+ 		    !cfg80211_bss_expire_oldest(rdev)) {
+-			if (!list_empty(&new->hidden_list))
+-				list_del(&new->hidden_list);
+-			kfree(new);
++			bss_ref_put(rdev, new);
+ 			goto drop;
+ 		}
+ 
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index 72e7f3616157e..8af693d9678ce 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -192,8 +192,6 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
+ 			if (!(prot & PROT_EXEC))
+ 				dso__set_loaded(dso);
+ 		}
+-
+-		nsinfo__put(dso->nsinfo);
+ 		dso->nsinfo = nsi;
+ 
+ 		if (build_id__is_defined(bid))
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 44b90d638ad5f..538b8fa8a7109 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -742,9 +742,13 @@ struct pmu_events_map *__weak pmu_events_map__find(void)
+ 	return perf_pmu__find_map(NULL);
+ }
+ 
+-static bool perf_pmu__valid_suffix(char *pmu_name, char *tok)
++/*
++ * Suffix must be in form tok_{digits}, or tok{digits}, or same as pmu_name
++ * to be valid.
++ */
++static bool perf_pmu__valid_suffix(const char *pmu_name, char *tok)
+ {
+-	char *p;
++	const char *p;
+ 
+ 	if (strncmp(pmu_name, tok, strlen(tok)))
+ 		return false;
+@@ -753,12 +757,16 @@ static bool perf_pmu__valid_suffix(char *pmu_name, char *tok)
+ 	if (*p == 0)
+ 		return true;
+ 
+-	if (*p != '_')
+-		return false;
++	if (*p == '_')
++		++p;
+ 
+-	++p;
+-	if (*p == 0 || !isdigit(*p))
+-		return false;
++	/* Ensure we end in a number */
++	while (1) {
++		if (!isdigit(*p))
++			return false;
++		if (*(++p) == 0)
++			break;
++	}
+ 
+ 	return true;
+ }
+@@ -789,12 +797,19 @@ bool pmu_uncore_alias_match(const char *pmu_name, const char *name)
+ 	 *	    match "socket" in "socketX_pmunameY" and then "pmuname" in
+ 	 *	    "pmunameY".
+ 	 */
+-	for (; tok; name += strlen(tok), tok = strtok_r(NULL, ",", &tmp)) {
++	while (1) {
++		char *next_tok = strtok_r(NULL, ",", &tmp);
++
+ 		name = strstr(name, tok);
+-		if (!name || !perf_pmu__valid_suffix((char *)name, tok)) {
++		if (!name ||
++		    (!next_tok && !perf_pmu__valid_suffix(name, tok))) {
+ 			res = false;
+ 			goto out;
+ 		}
++		if (!next_tok)
++			break;
++		tok = next_tok;
++		name += strlen(tok);
+ 	}
+ 
+ 	res = true;
+diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
+index 04a2641261beb..80cbd3a748c02 100644
+--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
++++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
+@@ -312,6 +312,7 @@ int main(int argc, char *argv[])
+ 			break;
+ 		case 'o':
+ 			p.partition_vcpu_memory_access = false;
++			break;
+ 		case 's':
+ 			p.backing_src = parse_backing_src_type(optarg);
+ 			break;
+diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
+index 3e1fef3d81045..be6d64c90eadb 100644
+--- a/tools/testing/selftests/vm/userfaultfd.c
++++ b/tools/testing/selftests/vm/userfaultfd.c
+@@ -199,7 +199,7 @@ static void anon_allocate_area(void **alloc_area)
+ {
+ 	*alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
+ 			   MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+-	if (*alloc_area == MAP_FAILED)
++	if (*alloc_area == MAP_FAILED) {
+ 		fprintf(stderr, "mmap of anonymous memory failed");
+ 		*alloc_area = NULL;
+ 	}
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 46fb042837d20..0119466677b7d 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -4183,6 +4183,16 @@ struct compat_kvm_dirty_log {
+ 	};
+ };
+ 
++struct compat_kvm_clear_dirty_log {
++	__u32 slot;
++	__u32 num_pages;
++	__u64 first_page;
++	union {
++		compat_uptr_t dirty_bitmap; /* one bit per page */
++		__u64 padding2;
++	};
++};
++
+ static long kvm_vm_compat_ioctl(struct file *filp,
+ 			   unsigned int ioctl, unsigned long arg)
+ {
+@@ -4192,6 +4202,24 @@ static long kvm_vm_compat_ioctl(struct file *filp,
+ 	if (kvm->mm != current->mm)
+ 		return -EIO;
+ 	switch (ioctl) {
++#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
++	case KVM_CLEAR_DIRTY_LOG: {
++		struct compat_kvm_clear_dirty_log compat_log;
++		struct kvm_clear_dirty_log log;
++
++		if (copy_from_user(&compat_log, (void __user *)arg,
++				   sizeof(compat_log)))
++			return -EFAULT;
++		log.slot	 = compat_log.slot;
++		log.num_pages	 = compat_log.num_pages;
++		log.first_page	 = compat_log.first_page;
++		log.padding2	 = compat_log.padding2;
++		log.dirty_bitmap = compat_ptr(compat_log.dirty_bitmap);
++
++		r = kvm_vm_ioctl_clear_dirty_log(kvm, &log);
++		break;
++	}
++#endif
+ 	case KVM_GET_DIRTY_LOG: {
+ 		struct compat_kvm_dirty_log compat_log;
+ 		struct kvm_dirty_log log;



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-08 13:35 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-08 13:35 UTC (permalink / raw
  To: gentoo-commits

commit:     379a3098e34317927463529d75f11fe19a0b4315
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug  8 13:35:15 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Aug  8 13:35:15 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=379a3098

Linux patch 5.13.9

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1008_linux-5.13.9.patch | 2415 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2419 insertions(+)

diff --git a/0000_README b/0000_README
index 91c9a8e..fd05e40 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-5.13.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.8
 
+Patch:  1008_linux-5.13.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-5.13.9.patch b/1008_linux-5.13.9.patch
new file mode 100644
index 0000000..2058d37
--- /dev/null
+++ b/1008_linux-5.13.9.patch
@@ -0,0 +1,2415 @@
+diff --git a/Makefile b/Makefile
+index fdb4e2fd9d8f3..9d810e13a83f4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/drivers/firmware/efi/mokvar-table.c b/drivers/firmware/efi/mokvar-table.c
+index d8bc013406861..38722d2009e20 100644
+--- a/drivers/firmware/efi/mokvar-table.c
++++ b/drivers/firmware/efi/mokvar-table.c
+@@ -180,7 +180,10 @@ void __init efi_mokvar_table_init(void)
+ 		pr_err("EFI MOKvar config table is not valid\n");
+ 		return;
+ 	}
+-	efi_mem_reserve(efi.mokvar_table, map_size_needed);
++
++	if (md.type == EFI_BOOT_SERVICES_DATA)
++		efi_mem_reserve(efi.mokvar_table, map_size_needed);
++
+ 	efi_mokvar_table_size = map_size_needed;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index ae6830ff1cf7d..774e825e5aab8 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -1675,9 +1675,6 @@ static enum dp_panel_mode try_enable_assr(struct dc_stream_state *stream)
+ 	} else
+ 		panel_mode = DP_PANEL_MODE_DEFAULT;
+ 
+-#else
+-	/* turn off ASSR if the implementation is not compiled in */
+-	panel_mode = DP_PANEL_MODE_DEFAULT;
+ #endif
+ 	return panel_mode;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index d7d70b9bb3874..81f583733fa87 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -2093,8 +2093,10 @@ int dcn20_populate_dml_pipes_from_context(
+ 				- timing->v_border_bottom;
+ 		pipes[pipe_cnt].pipe.dest.htotal = timing->h_total;
+ 		pipes[pipe_cnt].pipe.dest.vtotal = v_total;
+-		pipes[pipe_cnt].pipe.dest.hactive = timing->h_addressable;
+-		pipes[pipe_cnt].pipe.dest.vactive = timing->v_addressable;
++		pipes[pipe_cnt].pipe.dest.hactive =
++			timing->h_addressable + timing->h_border_left + timing->h_border_right;
++		pipes[pipe_cnt].pipe.dest.vactive =
++			timing->v_addressable + timing->v_border_top + timing->v_border_bottom;
+ 		pipes[pipe_cnt].pipe.dest.interlaced = timing->flags.INTERLACE;
+ 		pipes[pipe_cnt].pipe.dest.pixel_rate_mhz = timing->pix_clk_100hz/10000.0;
+ 		if (timing->timing_3d_format == TIMING_3D_FORMAT_HW_FRAME_PACKING)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+index 398210d1af34f..f497439293572 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+@@ -4889,7 +4889,7 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
+ 				}
+ 			} while ((locals->PrefetchSupported[i][j] != true || locals->VRatioInPrefetchSupported[i][j] != true)
+ 					&& (mode_lib->vba.NextMaxVStartup != mode_lib->vba.MaxMaxVStartup[0][0]
+-						|| mode_lib->vba.NextPrefetchMode < mode_lib->vba.MaxPrefetchMode));
++						|| mode_lib->vba.NextPrefetchMode <= mode_lib->vba.MaxPrefetchMode));
+ 
+ 			if (locals->PrefetchSupported[i][j] == true && locals->VRatioInPrefetchSupported[i][j] == true) {
+ 				mode_lib->vba.BandwidthAvailableForImmediateFlip = locals->ReturnBWPerState[i][0];
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+index 5964e67c7d363..305c320f9a838 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+@@ -25,10 +25,8 @@
+ #include "i915_gem_clflush.h"
+ #include "i915_gem_context.h"
+ #include "i915_gem_ioctls.h"
+-#include "i915_sw_fence_work.h"
+ #include "i915_trace.h"
+ #include "i915_user_extensions.h"
+-#include "i915_memcpy.h"
+ 
+ struct eb_vma {
+ 	struct i915_vma *vma;
+@@ -1456,6 +1454,10 @@ static u32 *reloc_gpu(struct i915_execbuffer *eb,
+ 		int err;
+ 		struct intel_engine_cs *engine = eb->engine;
+ 
++		/* If we need to copy for the cmdparser, we will stall anyway */
++		if (eb_use_cmdparser(eb))
++			return ERR_PTR(-EWOULDBLOCK);
++
+ 		if (!reloc_can_use_engine(engine)) {
+ 			engine = engine->gt->engine_class[COPY_ENGINE_CLASS][0];
+ 			if (!engine)
+@@ -2372,217 +2374,6 @@ shadow_batch_pin(struct i915_execbuffer *eb,
+ 	return vma;
+ }
+ 
+-struct eb_parse_work {
+-	struct dma_fence_work base;
+-	struct intel_engine_cs *engine;
+-	struct i915_vma *batch;
+-	struct i915_vma *shadow;
+-	struct i915_vma *trampoline;
+-	unsigned long batch_offset;
+-	unsigned long batch_length;
+-	unsigned long *jump_whitelist;
+-	const void *batch_map;
+-	void *shadow_map;
+-};
+-
+-static int __eb_parse(struct dma_fence_work *work)
+-{
+-	struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
+-	int ret;
+-	bool cookie;
+-
+-	cookie = dma_fence_begin_signalling();
+-	ret = intel_engine_cmd_parser(pw->engine,
+-				      pw->batch,
+-				      pw->batch_offset,
+-				      pw->batch_length,
+-				      pw->shadow,
+-				      pw->jump_whitelist,
+-				      pw->shadow_map,
+-				      pw->batch_map);
+-	dma_fence_end_signalling(cookie);
+-
+-	return ret;
+-}
+-
+-static void __eb_parse_release(struct dma_fence_work *work)
+-{
+-	struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
+-
+-	if (!IS_ERR_OR_NULL(pw->jump_whitelist))
+-		kfree(pw->jump_whitelist);
+-
+-	if (pw->batch_map)
+-		i915_gem_object_unpin_map(pw->batch->obj);
+-	else
+-		i915_gem_object_unpin_pages(pw->batch->obj);
+-
+-	i915_gem_object_unpin_map(pw->shadow->obj);
+-
+-	if (pw->trampoline)
+-		i915_active_release(&pw->trampoline->active);
+-	i915_active_release(&pw->shadow->active);
+-	i915_active_release(&pw->batch->active);
+-}
+-
+-static const struct dma_fence_work_ops eb_parse_ops = {
+-	.name = "eb_parse",
+-	.work = __eb_parse,
+-	.release = __eb_parse_release,
+-};
+-
+-static inline int
+-__parser_mark_active(struct i915_vma *vma,
+-		     struct intel_timeline *tl,
+-		     struct dma_fence *fence)
+-{
+-	struct intel_gt_buffer_pool_node *node = vma->private;
+-
+-	return i915_active_ref(&node->active, tl->fence_context, fence);
+-}
+-
+-static int
+-parser_mark_active(struct eb_parse_work *pw, struct intel_timeline *tl)
+-{
+-	int err;
+-
+-	mutex_lock(&tl->mutex);
+-
+-	err = __parser_mark_active(pw->shadow, tl, &pw->base.dma);
+-	if (err)
+-		goto unlock;
+-
+-	if (pw->trampoline) {
+-		err = __parser_mark_active(pw->trampoline, tl, &pw->base.dma);
+-		if (err)
+-			goto unlock;
+-	}
+-
+-unlock:
+-	mutex_unlock(&tl->mutex);
+-	return err;
+-}
+-
+-static int eb_parse_pipeline(struct i915_execbuffer *eb,
+-			     struct i915_vma *shadow,
+-			     struct i915_vma *trampoline)
+-{
+-	struct eb_parse_work *pw;
+-	struct drm_i915_gem_object *batch = eb->batch->vma->obj;
+-	bool needs_clflush;
+-	int err;
+-
+-	GEM_BUG_ON(overflows_type(eb->batch_start_offset, pw->batch_offset));
+-	GEM_BUG_ON(overflows_type(eb->batch_len, pw->batch_length));
+-
+-	pw = kzalloc(sizeof(*pw), GFP_KERNEL);
+-	if (!pw)
+-		return -ENOMEM;
+-
+-	err = i915_active_acquire(&eb->batch->vma->active);
+-	if (err)
+-		goto err_free;
+-
+-	err = i915_active_acquire(&shadow->active);
+-	if (err)
+-		goto err_batch;
+-
+-	if (trampoline) {
+-		err = i915_active_acquire(&trampoline->active);
+-		if (err)
+-			goto err_shadow;
+-	}
+-
+-	pw->shadow_map = i915_gem_object_pin_map(shadow->obj, I915_MAP_WB);
+-	if (IS_ERR(pw->shadow_map)) {
+-		err = PTR_ERR(pw->shadow_map);
+-		goto err_trampoline;
+-	}
+-
+-	needs_clflush =
+-		!(batch->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ);
+-
+-	pw->batch_map = ERR_PTR(-ENODEV);
+-	if (needs_clflush && i915_has_memcpy_from_wc())
+-		pw->batch_map = i915_gem_object_pin_map(batch, I915_MAP_WC);
+-
+-	if (IS_ERR(pw->batch_map)) {
+-		err = i915_gem_object_pin_pages(batch);
+-		if (err)
+-			goto err_unmap_shadow;
+-		pw->batch_map = NULL;
+-	}
+-
+-	pw->jump_whitelist =
+-		intel_engine_cmd_parser_alloc_jump_whitelist(eb->batch_len,
+-							     trampoline);
+-	if (IS_ERR(pw->jump_whitelist)) {
+-		err = PTR_ERR(pw->jump_whitelist);
+-		goto err_unmap_batch;
+-	}
+-
+-	dma_fence_work_init(&pw->base, &eb_parse_ops);
+-
+-	pw->engine = eb->engine;
+-	pw->batch = eb->batch->vma;
+-	pw->batch_offset = eb->batch_start_offset;
+-	pw->batch_length = eb->batch_len;
+-	pw->shadow = shadow;
+-	pw->trampoline = trampoline;
+-
+-	/* Mark active refs early for this worker, in case we get interrupted */
+-	err = parser_mark_active(pw, eb->context->timeline);
+-	if (err)
+-		goto err_commit;
+-
+-	err = dma_resv_reserve_shared(pw->batch->resv, 1);
+-	if (err)
+-		goto err_commit;
+-
+-	err = dma_resv_reserve_shared(shadow->resv, 1);
+-	if (err)
+-		goto err_commit;
+-
+-	/* Wait for all writes (and relocs) into the batch to complete */
+-	err = i915_sw_fence_await_reservation(&pw->base.chain,
+-					      pw->batch->resv, NULL, false,
+-					      0, I915_FENCE_GFP);
+-	if (err < 0)
+-		goto err_commit;
+-
+-	/* Keep the batch alive and unwritten as we parse */
+-	dma_resv_add_shared_fence(pw->batch->resv, &pw->base.dma);
+-
+-	/* Force execution to wait for completion of the parser */
+-	dma_resv_add_excl_fence(shadow->resv, &pw->base.dma);
+-
+-	dma_fence_work_commit_imm(&pw->base);
+-	return 0;
+-
+-err_commit:
+-	i915_sw_fence_set_error_once(&pw->base.chain, err);
+-	dma_fence_work_commit_imm(&pw->base);
+-	return err;
+-
+-err_unmap_batch:
+-	if (pw->batch_map)
+-		i915_gem_object_unpin_map(batch);
+-	else
+-		i915_gem_object_unpin_pages(batch);
+-err_unmap_shadow:
+-	i915_gem_object_unpin_map(shadow->obj);
+-err_trampoline:
+-	if (trampoline)
+-		i915_active_release(&trampoline->active);
+-err_shadow:
+-	i915_active_release(&shadow->active);
+-err_batch:
+-	i915_active_release(&eb->batch->vma->active);
+-err_free:
+-	kfree(pw);
+-	return err;
+-}
+-
+ static struct i915_vma *eb_dispatch_secure(struct i915_execbuffer *eb, struct i915_vma *vma)
+ {
+ 	/*
+@@ -2672,7 +2463,15 @@ static int eb_parse(struct i915_execbuffer *eb)
+ 		goto err_trampoline;
+ 	}
+ 
+-	err = eb_parse_pipeline(eb, shadow, trampoline);
++	err = dma_resv_reserve_shared(shadow->resv, 1);
++	if (err)
++		goto err_trampoline;
++
++	err = intel_engine_cmd_parser(eb->engine,
++				      eb->batch->vma,
++				      eb->batch_start_offset,
++				      eb->batch_len,
++				      shadow, trampoline);
+ 	if (err)
+ 		goto err_unpin_batch;
+ 
+diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
+index 4df505e4c53ae..16162fc2782dc 100644
+--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_execbuffer.c
+@@ -125,6 +125,10 @@ static int igt_gpu_reloc(void *arg)
+ 	intel_gt_pm_get(&eb.i915->gt);
+ 
+ 	for_each_uabi_engine(eb.engine, eb.i915) {
++		if (intel_engine_requires_cmd_parser(eb.engine) ||
++		    intel_engine_using_cmd_parser(eb.engine))
++			continue;
++
+ 		reloc_cache_init(&eb.reloc_cache, eb.i915);
+ 		memset(map, POISON_INUSE, 4096);
+ 
+diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
+index e6f1e93abbbb3..ce61ea4506ea9 100644
+--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
++++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
+@@ -1145,19 +1145,41 @@ find_reg(const struct intel_engine_cs *engine, u32 addr)
+ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
+ 		       struct drm_i915_gem_object *src_obj,
+ 		       unsigned long offset, unsigned long length,
+-		       void *dst, const void *src)
++		       bool *needs_clflush_after)
+ {
+-	bool needs_clflush =
+-		!(src_obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ);
+-
+-	if (src) {
+-		GEM_BUG_ON(!needs_clflush);
+-		i915_unaligned_memcpy_from_wc(dst, src + offset, length);
+-	} else {
+-		struct scatterlist *sg;
++	unsigned int src_needs_clflush;
++	unsigned int dst_needs_clflush;
++	void *dst, *src;
++	int ret;
++
++	ret = i915_gem_object_prepare_write(dst_obj, &dst_needs_clflush);
++	if (ret)
++		return ERR_PTR(ret);
++
++	dst = i915_gem_object_pin_map(dst_obj, I915_MAP_WB);
++	i915_gem_object_finish_access(dst_obj);
++	if (IS_ERR(dst))
++		return dst;
++
++	ret = i915_gem_object_prepare_read(src_obj, &src_needs_clflush);
++	if (ret) {
++		i915_gem_object_unpin_map(dst_obj);
++		return ERR_PTR(ret);
++	}
++
++	src = ERR_PTR(-ENODEV);
++	if (src_needs_clflush && i915_has_memcpy_from_wc()) {
++		src = i915_gem_object_pin_map(src_obj, I915_MAP_WC);
++		if (!IS_ERR(src)) {
++			i915_unaligned_memcpy_from_wc(dst,
++						      src + offset,
++						      length);
++			i915_gem_object_unpin_map(src_obj);
++		}
++	}
++	if (IS_ERR(src)) {
++		unsigned long x, n, remain;
+ 		void *ptr;
+-		unsigned int x, sg_ofs;
+-		unsigned long remain;
+ 
+ 		/*
+ 		 * We can avoid clflushing partial cachelines before the write
+@@ -1168,40 +1190,34 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
+ 		 * validate up to the end of the batch.
+ 		 */
+ 		remain = length;
+-		if (!(dst_obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ))
++		if (dst_needs_clflush & CLFLUSH_BEFORE)
+ 			remain = round_up(remain,
+ 					  boot_cpu_data.x86_clflush_size);
+ 
+ 		ptr = dst;
+ 		x = offset_in_page(offset);
+-		sg = i915_gem_object_get_sg(src_obj, offset >> PAGE_SHIFT, &sg_ofs, false);
+-
+-		while (remain) {
+-			unsigned long sg_max = sg->length >> PAGE_SHIFT;
+-
+-			for (; remain && sg_ofs < sg_max; sg_ofs++) {
+-				unsigned long len = min(remain, PAGE_SIZE - x);
+-				void *map;
+-
+-				map = kmap_atomic(nth_page(sg_page(sg), sg_ofs));
+-				if (needs_clflush)
+-					drm_clflush_virt_range(map + x, len);
+-				memcpy(ptr, map + x, len);
+-				kunmap_atomic(map);
+-
+-				ptr += len;
+-				remain -= len;
+-				x = 0;
+-			}
+-
+-			sg_ofs = 0;
+-			sg = sg_next(sg);
++		for (n = offset >> PAGE_SHIFT; remain; n++) {
++			int len = min(remain, PAGE_SIZE - x);
++
++			src = kmap_atomic(i915_gem_object_get_page(src_obj, n));
++			if (src_needs_clflush)
++				drm_clflush_virt_range(src + x, len);
++			memcpy(ptr, src + x, len);
++			kunmap_atomic(src);
++
++			ptr += len;
++			remain -= len;
++			x = 0;
+ 		}
+ 	}
+ 
++	i915_gem_object_finish_access(src_obj);
++
+ 	memset32(dst + length, 0, (dst_obj->base.size - length) / sizeof(u32));
+ 
+ 	/* dst_obj is returned with vmap pinned */
++	*needs_clflush_after = dst_needs_clflush & CLFLUSH_AFTER;
++
+ 	return dst;
+ }
+ 
+@@ -1360,6 +1376,9 @@ static int check_bbstart(u32 *cmd, u32 offset, u32 length,
+ 	if (target_cmd_index == offset)
+ 		return 0;
+ 
++	if (IS_ERR(jump_whitelist))
++		return PTR_ERR(jump_whitelist);
++
+ 	if (!test_bit(target_cmd_index, jump_whitelist)) {
+ 		DRM_DEBUG("CMD: BB_START to 0x%llx not a previously executed cmd\n",
+ 			  jump_target);
+@@ -1369,14 +1388,10 @@ static int check_bbstart(u32 *cmd, u32 offset, u32 length,
+ 	return 0;
+ }
+ 
+-unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length,
+-							    bool trampoline)
++static unsigned long *alloc_whitelist(u32 batch_length)
+ {
+ 	unsigned long *jmp;
+ 
+-	if (trampoline)
+-		return NULL;
+-
+ 	/*
+ 	 * We expect batch_length to be less than 256KiB for known users,
+ 	 * i.e. we need at most an 8KiB bitmap allocation which should be
+@@ -1409,21 +1424,21 @@ unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length,
+  * Return: non-zero if the parser finds violations or otherwise fails; -EACCES
+  * if the batch appears legal but should use hardware parsing
+  */
++
+ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
+ 			    struct i915_vma *batch,
+ 			    unsigned long batch_offset,
+ 			    unsigned long batch_length,
+ 			    struct i915_vma *shadow,
+-			    unsigned long *jump_whitelist,
+-			    void *shadow_map,
+-			    const void *batch_map)
++			    bool trampoline)
+ {
+ 	u32 *cmd, *batch_end, offset = 0;
+ 	struct drm_i915_cmd_descriptor default_desc = noop_desc;
+ 	const struct drm_i915_cmd_descriptor *desc = &default_desc;
++	bool needs_clflush_after = false;
++	unsigned long *jump_whitelist;
+ 	u64 batch_addr, shadow_addr;
+ 	int ret = 0;
+-	bool trampoline = !jump_whitelist;
+ 
+ 	GEM_BUG_ON(!IS_ALIGNED(batch_offset, sizeof(*cmd)));
+ 	GEM_BUG_ON(!IS_ALIGNED(batch_length, sizeof(*cmd)));
+@@ -1431,8 +1446,18 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
+ 				     batch->size));
+ 	GEM_BUG_ON(!batch_length);
+ 
+-	cmd = copy_batch(shadow->obj, batch->obj, batch_offset, batch_length,
+-			 shadow_map, batch_map);
++	cmd = copy_batch(shadow->obj, batch->obj,
++			 batch_offset, batch_length,
++			 &needs_clflush_after);
++	if (IS_ERR(cmd)) {
++		DRM_DEBUG("CMD: Failed to copy batch\n");
++		return PTR_ERR(cmd);
++	}
++
++	jump_whitelist = NULL;
++	if (!trampoline)
++		/* Defer failure until attempted use */
++		jump_whitelist = alloc_whitelist(batch_length);
+ 
+ 	shadow_addr = gen8_canonical_addr(shadow->node.start);
+ 	batch_addr = gen8_canonical_addr(batch->node.start + batch_offset);
+@@ -1533,6 +1558,9 @@ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
+ 
+ 	i915_gem_object_flush_map(shadow->obj);
+ 
++	if (!IS_ERR_OR_NULL(jump_whitelist))
++		kfree(jump_whitelist);
++	i915_gem_object_unpin_map(shadow->obj);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 69e43bf91a153..4c041e670904e 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -1881,17 +1881,12 @@ const char *i915_cache_level_str(struct drm_i915_private *i915, int type);
+ int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv);
+ int intel_engine_init_cmd_parser(struct intel_engine_cs *engine);
+ void intel_engine_cleanup_cmd_parser(struct intel_engine_cs *engine);
+-unsigned long *intel_engine_cmd_parser_alloc_jump_whitelist(u32 batch_length,
+-							    bool trampoline);
+-
+ int intel_engine_cmd_parser(struct intel_engine_cs *engine,
+ 			    struct i915_vma *batch,
+ 			    unsigned long batch_offset,
+ 			    unsigned long batch_length,
+ 			    struct i915_vma *shadow,
+-			    unsigned long *jump_whitelist,
+-			    void *shadow_map,
+-			    const void *batch_map);
++			    bool trampoline);
+ #define I915_CMD_PARSER_TRAMPOLINE_SIZE 8
+ 
+ /* intel_device_info.c */
+diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
+index bec9c3652188b..59d48a6a83d24 100644
+--- a/drivers/gpu/drm/i915/i915_request.c
++++ b/drivers/gpu/drm/i915/i915_request.c
+@@ -1426,10 +1426,8 @@ i915_request_await_execution(struct i915_request *rq,
+ 
+ 	do {
+ 		fence = *child++;
+-		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+-			i915_sw_fence_set_error_once(&rq->submit, fence->error);
++		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+ 			continue;
+-		}
+ 
+ 		if (fence->context == rq->fence.context)
+ 			continue;
+@@ -1527,10 +1525,8 @@ i915_request_await_dma_fence(struct i915_request *rq, struct dma_fence *fence)
+ 
+ 	do {
+ 		fence = *child++;
+-		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+-			i915_sw_fence_set_error_once(&rq->submit, fence->error);
++		if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+ 			continue;
+-		}
+ 
+ 		/*
+ 		 * Requests on the same timeline are explicitly ordered, along
+diff --git a/drivers/net/dsa/sja1105/sja1105_clocking.c b/drivers/net/dsa/sja1105/sja1105_clocking.c
+index 2a9b8a6a5306f..f54b4d03a0024 100644
+--- a/drivers/net/dsa/sja1105/sja1105_clocking.c
++++ b/drivers/net/dsa/sja1105/sja1105_clocking.c
+@@ -721,9 +721,10 @@ int sja1105_clocking_setup_port(struct sja1105_private *priv, int port)
+ 
+ int sja1105_clocking_setup(struct sja1105_private *priv)
+ {
++	struct dsa_switch *ds = priv->ds;
+ 	int port, rc;
+ 
+-	for (port = 0; port < SJA1105_NUM_PORTS; port++) {
++	for (port = 0; port < ds->num_ports; port++) {
+ 		rc = sja1105_clocking_setup_port(priv, port);
+ 		if (rc < 0)
+ 			return rc;
+diff --git a/drivers/net/dsa/sja1105/sja1105_flower.c b/drivers/net/dsa/sja1105/sja1105_flower.c
+index 973761132fc35..77c54126b3fc8 100644
+--- a/drivers/net/dsa/sja1105/sja1105_flower.c
++++ b/drivers/net/dsa/sja1105/sja1105_flower.c
+@@ -35,6 +35,7 @@ static int sja1105_setup_bcast_policer(struct sja1105_private *priv,
+ {
+ 	struct sja1105_rule *rule = sja1105_rule_find(priv, cookie);
+ 	struct sja1105_l2_policing_entry *policing;
++	struct dsa_switch *ds = priv->ds;
+ 	bool new_rule = false;
+ 	unsigned long p;
+ 	int rc;
+@@ -59,7 +60,7 @@ static int sja1105_setup_bcast_policer(struct sja1105_private *priv,
+ 
+ 	policing = priv->static_config.tables[BLK_IDX_L2_POLICING].entries;
+ 
+-	if (policing[(SJA1105_NUM_PORTS * SJA1105_NUM_TC) + port].sharindx != port) {
++	if (policing[(ds->num_ports * SJA1105_NUM_TC) + port].sharindx != port) {
+ 		NL_SET_ERR_MSG_MOD(extack,
+ 				   "Port already has a broadcast policer");
+ 		rc = -EEXIST;
+@@ -72,7 +73,7 @@ static int sja1105_setup_bcast_policer(struct sja1105_private *priv,
+ 	 * point to the newly allocated policer
+ 	 */
+ 	for_each_set_bit(p, &rule->port_mask, SJA1105_NUM_PORTS) {
+-		int bcast = (SJA1105_NUM_PORTS * SJA1105_NUM_TC) + p;
++		int bcast = (ds->num_ports * SJA1105_NUM_TC) + p;
+ 
+ 		policing[bcast].sharindx = rule->bcast_pol.sharindx;
+ 	}
+@@ -435,7 +436,7 @@ int sja1105_cls_flower_del(struct dsa_switch *ds, int port,
+ 	policing = priv->static_config.tables[BLK_IDX_L2_POLICING].entries;
+ 
+ 	if (rule->type == SJA1105_RULE_BCAST_POLICER) {
+-		int bcast = (SJA1105_NUM_PORTS * SJA1105_NUM_TC) + port;
++		int bcast = (ds->num_ports * SJA1105_NUM_TC) + port;
+ 
+ 		old_sharindx = policing[bcast].sharindx;
+ 		policing[bcast].sharindx = port;
+@@ -486,7 +487,7 @@ void sja1105_flower_setup(struct dsa_switch *ds)
+ 
+ 	INIT_LIST_HEAD(&priv->flow_block.rules);
+ 
+-	for (port = 0; port < SJA1105_NUM_PORTS; port++)
++	for (port = 0; port < ds->num_ports; port++)
+ 		priv->flow_block.l2_policer_used[port] = true;
+ }
+ 
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index 6e5dbe9f3892e..5b7947832b877 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -107,6 +107,7 @@ static int sja1105_init_mac_settings(struct sja1105_private *priv)
+ 		.ingress = false,
+ 	};
+ 	struct sja1105_mac_config_entry *mac;
++	struct dsa_switch *ds = priv->ds;
+ 	struct sja1105_table *table;
+ 	int i;
+ 
+@@ -118,25 +119,23 @@ static int sja1105_init_mac_settings(struct sja1105_private *priv)
+ 		table->entry_count = 0;
+ 	}
+ 
+-	table->entries = kcalloc(SJA1105_NUM_PORTS,
++	table->entries = kcalloc(ds->num_ports,
+ 				 table->ops->unpacked_entry_size, GFP_KERNEL);
+ 	if (!table->entries)
+ 		return -ENOMEM;
+ 
+-	table->entry_count = SJA1105_NUM_PORTS;
++	table->entry_count = ds->num_ports;
+ 
+ 	mac = table->entries;
+ 
+-	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
++	for (i = 0; i < ds->num_ports; i++) {
+ 		mac[i] = default_mac;
+-		if (i == dsa_upstream_port(priv->ds, i)) {
+-			/* STP doesn't get called for CPU port, so we need to
+-			 * set the I/O parameters statically.
+-			 */
+-			mac[i].dyn_learn = true;
+-			mac[i].ingress = true;
+-			mac[i].egress = true;
+-		}
++
++		/* Let sja1105_bridge_stp_state_set() keep address learning
++		 * enabled for the CPU port.
++		 */
++		if (dsa_is_cpu_port(ds, i))
++			priv->learn_ena |= BIT(i);
+ 	}
+ 
+ 	return 0;
+@@ -162,6 +161,7 @@ static int sja1105_init_mii_settings(struct sja1105_private *priv,
+ {
+ 	struct device *dev = &priv->spidev->dev;
+ 	struct sja1105_xmii_params_entry *mii;
++	struct dsa_switch *ds = priv->ds;
+ 	struct sja1105_table *table;
+ 	int i;
+ 
+@@ -183,7 +183,7 @@ static int sja1105_init_mii_settings(struct sja1105_private *priv,
+ 
+ 	mii = table->entries;
+ 
+-	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
++	for (i = 0; i < ds->num_ports; i++) {
+ 		if (dsa_is_unused_port(priv->ds, i))
+ 			continue;
+ 
+@@ -267,8 +267,6 @@ static int sja1105_init_static_fdb(struct sja1105_private *priv)
+ 
+ static int sja1105_init_l2_lookup_params(struct sja1105_private *priv)
+ {
+-	struct sja1105_table *table;
+-	u64 max_fdb_entries = SJA1105_MAX_L2_LOOKUP_COUNT / SJA1105_NUM_PORTS;
+ 	struct sja1105_l2_lookup_params_entry default_l2_lookup_params = {
+ 		/* Learned FDB entries are forgotten after 300 seconds */
+ 		.maxage = SJA1105_AGEING_TIME_MS(300000),
+@@ -276,8 +274,6 @@ static int sja1105_init_l2_lookup_params(struct sja1105_private *priv)
+ 		.dyn_tbsz = SJA1105ET_FDB_BIN_SIZE,
+ 		/* And the P/Q/R/S equivalent setting: */
+ 		.start_dynspc = 0,
+-		.maxaddrp = {max_fdb_entries, max_fdb_entries, max_fdb_entries,
+-			     max_fdb_entries, max_fdb_entries, },
+ 		/* 2^8 + 2^5 + 2^3 + 2^2 + 2^1 + 1 in Koopman notation */
+ 		.poly = 0x97,
+ 		/* This selects between Independent VLAN Learning (IVL) and
+@@ -301,6 +297,15 @@ static int sja1105_init_l2_lookup_params(struct sja1105_private *priv)
+ 		.owr_dyn = true,
+ 		.drpnolearn = true,
+ 	};
++	struct dsa_switch *ds = priv->ds;
++	struct sja1105_table *table;
++	u64 max_fdb_entries;
++	int port;
++
++	max_fdb_entries = SJA1105_MAX_L2_LOOKUP_COUNT / ds->num_ports;
++
++	for (port = 0; port < ds->num_ports; port++)
++		default_l2_lookup_params.maxaddrp[port] = max_fdb_entries;
+ 
+ 	table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP_PARAMS];
+ 
+@@ -393,6 +398,7 @@ static int sja1105_init_static_vlan(struct sja1105_private *priv)
+ static int sja1105_init_l2_forwarding(struct sja1105_private *priv)
+ {
+ 	struct sja1105_l2_forwarding_entry *l2fwd;
++	struct dsa_switch *ds = priv->ds;
+ 	struct sja1105_table *table;
+ 	int i, j;
+ 
+@@ -413,7 +419,7 @@ static int sja1105_init_l2_forwarding(struct sja1105_private *priv)
+ 	l2fwd = table->entries;
+ 
+ 	/* First 5 entries define the forwarding rules */
+-	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
++	for (i = 0; i < ds->num_ports; i++) {
+ 		unsigned int upstream = dsa_upstream_port(priv->ds, i);
+ 
+ 		for (j = 0; j < SJA1105_NUM_TC; j++)
+@@ -441,8 +447,8 @@ static int sja1105_init_l2_forwarding(struct sja1105_private *priv)
+ 	 * Create a one-to-one mapping.
+ 	 */
+ 	for (i = 0; i < SJA1105_NUM_TC; i++)
+-		for (j = 0; j < SJA1105_NUM_PORTS; j++)
+-			l2fwd[SJA1105_NUM_PORTS + i].vlan_pmap[j] = i;
++		for (j = 0; j < ds->num_ports; j++)
++			l2fwd[ds->num_ports + i].vlan_pmap[j] = i;
+ 
+ 	return 0;
+ }
+@@ -538,7 +544,7 @@ static int sja1105_init_general_params(struct sja1105_private *priv)
+ 		 */
+ 		.host_port = dsa_upstream_port(priv->ds, 0),
+ 		/* Default to an invalid value */
+-		.mirr_port = SJA1105_NUM_PORTS,
++		.mirr_port = priv->ds->num_ports,
+ 		/* Link-local traffic received on casc_port will be forwarded
+ 		 * to host_port without embedding the source port and device ID
+ 		 * info in the destination MAC address (presumably because it
+@@ -546,7 +552,7 @@ static int sja1105_init_general_params(struct sja1105_private *priv)
+ 		 * that). Default to an invalid port (to disable the feature)
+ 		 * and overwrite this if we find any DSA (cascaded) ports.
+ 		 */
+-		.casc_port = SJA1105_NUM_PORTS,
++		.casc_port = priv->ds->num_ports,
+ 		/* No TTEthernet */
+ 		.vllupformat = SJA1105_VL_FORMAT_PSFP,
+ 		.vlmarker = 0,
+@@ -667,6 +673,7 @@ static int sja1105_init_avb_params(struct sja1105_private *priv)
+ static int sja1105_init_l2_policing(struct sja1105_private *priv)
+ {
+ 	struct sja1105_l2_policing_entry *policing;
++	struct dsa_switch *ds = priv->ds;
+ 	struct sja1105_table *table;
+ 	int port, tc;
+ 
+@@ -688,8 +695,8 @@ static int sja1105_init_l2_policing(struct sja1105_private *priv)
+ 	policing = table->entries;
+ 
+ 	/* Setup shared indices for the matchall policers */
+-	for (port = 0; port < SJA1105_NUM_PORTS; port++) {
+-		int bcast = (SJA1105_NUM_PORTS * SJA1105_NUM_TC) + port;
++	for (port = 0; port < ds->num_ports; port++) {
++		int bcast = (ds->num_ports * SJA1105_NUM_TC) + port;
+ 
+ 		for (tc = 0; tc < SJA1105_NUM_TC; tc++)
+ 			policing[port * SJA1105_NUM_TC + tc].sharindx = port;
+@@ -698,7 +705,7 @@ static int sja1105_init_l2_policing(struct sja1105_private *priv)
+ 	}
+ 
+ 	/* Setup the matchall policer parameters */
+-	for (port = 0; port < SJA1105_NUM_PORTS; port++) {
++	for (port = 0; port < ds->num_ports; port++) {
+ 		int mtu = VLAN_ETH_FRAME_LEN + ETH_FCS_LEN;
+ 
+ 		if (dsa_is_cpu_port(priv->ds, port))
+@@ -764,9 +771,10 @@ static int sja1105_static_config_load(struct sja1105_private *priv,
+ static int sja1105_parse_rgmii_delays(struct sja1105_private *priv,
+ 				      const struct sja1105_dt_port *ports)
+ {
++	struct dsa_switch *ds = priv->ds;
+ 	int i;
+ 
+-	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
++	for (i = 0; i < ds->num_ports; i++) {
+ 		if (ports[i].role == XMII_MAC)
+ 			continue;
+ 
+@@ -1641,7 +1649,7 @@ static int sja1105_bridge_member(struct dsa_switch *ds, int port,
+ 
+ 	l2_fwd = priv->static_config.tables[BLK_IDX_L2_FORWARDING].entries;
+ 
+-	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
++	for (i = 0; i < ds->num_ports; i++) {
+ 		/* Add this port to the forwarding matrix of the
+ 		 * other ports in the same bridge, and viceversa.
+ 		 */
+@@ -1863,7 +1871,7 @@ int sja1105_static_config_reload(struct sja1105_private *priv,
+ 	 * switch wants to see in the static config in order to allow us to
+ 	 * change it through the dynamic interface later.
+ 	 */
+-	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
++	for (i = 0; i < ds->num_ports; i++) {
+ 		speed_mbps[i] = sja1105_speed[mac[i].speed];
+ 		mac[i].speed = SJA1105_SPEED_AUTO;
+ 	}
+@@ -1915,7 +1923,7 @@ out_unlock_ptp:
+ 	if (rc < 0)
+ 		goto out;
+ 
+-	for (i = 0; i < SJA1105_NUM_PORTS; i++) {
++	for (i = 0; i < ds->num_ports; i++) {
+ 		rc = sja1105_adjust_port_config(priv, i, speed_mbps[i]);
+ 		if (rc < 0)
+ 			goto out;
+@@ -3055,7 +3063,7 @@ static void sja1105_teardown(struct dsa_switch *ds)
+ 	struct sja1105_bridge_vlan *v, *n;
+ 	int port;
+ 
+-	for (port = 0; port < SJA1105_NUM_PORTS; port++) {
++	for (port = 0; port < ds->num_ports; port++) {
+ 		struct sja1105_port *sp = &priv->ports[port];
+ 
+ 		if (!dsa_is_user_port(ds, port))
+@@ -3258,6 +3266,7 @@ static int sja1105_mirror_apply(struct sja1105_private *priv, int from, int to,
+ {
+ 	struct sja1105_general_params_entry *general_params;
+ 	struct sja1105_mac_config_entry *mac;
++	struct dsa_switch *ds = priv->ds;
+ 	struct sja1105_table *table;
+ 	bool already_enabled;
+ 	u64 new_mirr_port;
+@@ -3268,7 +3277,7 @@ static int sja1105_mirror_apply(struct sja1105_private *priv, int from, int to,
+ 
+ 	mac = priv->static_config.tables[BLK_IDX_MAC_CONFIG].entries;
+ 
+-	already_enabled = (general_params->mirr_port != SJA1105_NUM_PORTS);
++	already_enabled = (general_params->mirr_port != ds->num_ports);
+ 	if (already_enabled && enabled && general_params->mirr_port != to) {
+ 		dev_err(priv->ds->dev,
+ 			"Delete mirroring rules towards port %llu first\n",
+@@ -3282,7 +3291,7 @@ static int sja1105_mirror_apply(struct sja1105_private *priv, int from, int to,
+ 		int port;
+ 
+ 		/* Anybody still referencing mirr_port? */
+-		for (port = 0; port < SJA1105_NUM_PORTS; port++) {
++		for (port = 0; port < ds->num_ports; port++) {
+ 			if (mac[port].ing_mirr || mac[port].egr_mirr) {
+ 				keep = true;
+ 				break;
+@@ -3290,7 +3299,7 @@ static int sja1105_mirror_apply(struct sja1105_private *priv, int from, int to,
+ 		}
+ 		/* Unset already_enabled for next time */
+ 		if (!keep)
+-			new_mirr_port = SJA1105_NUM_PORTS;
++			new_mirr_port = ds->num_ports;
+ 	}
+ 	if (new_mirr_port != general_params->mirr_port) {
+ 		general_params->mirr_port = new_mirr_port;
+@@ -3686,7 +3695,7 @@ static int sja1105_probe(struct spi_device *spi)
+ 	}
+ 
+ 	/* Connections between dsa_port and sja1105_port */
+-	for (port = 0; port < SJA1105_NUM_PORTS; port++) {
++	for (port = 0; port < ds->num_ports; port++) {
+ 		struct sja1105_port *sp = &priv->ports[port];
+ 		struct dsa_port *dp = dsa_to_port(ds, port);
+ 		struct net_device *slave;
+diff --git a/drivers/net/dsa/sja1105/sja1105_spi.c b/drivers/net/dsa/sja1105/sja1105_spi.c
+index f7a1514f81e8d..923d617cbec69 100644
+--- a/drivers/net/dsa/sja1105/sja1105_spi.c
++++ b/drivers/net/dsa/sja1105/sja1105_spi.c
+@@ -339,10 +339,10 @@ int static_config_buf_prepare_for_upload(struct sja1105_private *priv,
+ 
+ int sja1105_static_config_upload(struct sja1105_private *priv)
+ {
+-	unsigned long port_bitmap = GENMASK_ULL(SJA1105_NUM_PORTS - 1, 0);
+ 	struct sja1105_static_config *config = &priv->static_config;
+ 	const struct sja1105_regs *regs = priv->info->regs;
+ 	struct device *dev = &priv->spidev->dev;
++	struct dsa_switch *ds = priv->ds;
+ 	struct sja1105_status status;
+ 	int rc, retries = RETRIES;
+ 	u8 *config_buf;
+@@ -363,7 +363,7 @@ int sja1105_static_config_upload(struct sja1105_private *priv)
+ 	 * Tx on all ports and waiting for current packet to drain.
+ 	 * Otherwise, the PHY will see an unterminated Ethernet packet.
+ 	 */
+-	rc = sja1105_inhibit_tx(priv, port_bitmap, true);
++	rc = sja1105_inhibit_tx(priv, GENMASK_ULL(ds->num_ports - 1, 0), true);
+ 	if (rc < 0) {
+ 		dev_err(dev, "Failed to inhibit Tx on ports\n");
+ 		rc = -ENXIO;
+diff --git a/drivers/net/dsa/sja1105/sja1105_tas.c b/drivers/net/dsa/sja1105/sja1105_tas.c
+index 31d8acff1f012..e6153848a9509 100644
+--- a/drivers/net/dsa/sja1105/sja1105_tas.c
++++ b/drivers/net/dsa/sja1105/sja1105_tas.c
+@@ -27,7 +27,7 @@ static int sja1105_tas_set_runtime_params(struct sja1105_private *priv)
+ 
+ 	tas_data->enabled = false;
+ 
+-	for (port = 0; port < SJA1105_NUM_PORTS; port++) {
++	for (port = 0; port < ds->num_ports; port++) {
+ 		const struct tc_taprio_qopt_offload *offload;
+ 
+ 		offload = tas_data->offload[port];
+@@ -164,6 +164,7 @@ int sja1105_init_scheduling(struct sja1105_private *priv)
+ 	struct sja1105_tas_data *tas_data = &priv->tas_data;
+ 	struct sja1105_gating_config *gating_cfg = &tas_data->gating_cfg;
+ 	struct sja1105_schedule_entry *schedule;
++	struct dsa_switch *ds = priv->ds;
+ 	struct sja1105_table *table;
+ 	int schedule_start_idx;
+ 	s64 entry_point_delta;
+@@ -207,7 +208,7 @@ int sja1105_init_scheduling(struct sja1105_private *priv)
+ 	}
+ 
+ 	/* Figure out the dimensioning of the problem */
+-	for (port = 0; port < SJA1105_NUM_PORTS; port++) {
++	for (port = 0; port < ds->num_ports; port++) {
+ 		if (tas_data->offload[port]) {
+ 			num_entries += tas_data->offload[port]->num_entries;
+ 			num_cycles++;
+@@ -269,7 +270,7 @@ int sja1105_init_scheduling(struct sja1105_private *priv)
+ 	schedule_entry_points_params->clksrc = SJA1105_TAS_CLKSRC_PTP;
+ 	schedule_entry_points_params->actsubsch = num_cycles - 1;
+ 
+-	for (port = 0; port < SJA1105_NUM_PORTS; port++) {
++	for (port = 0; port < ds->num_ports; port++) {
+ 		const struct tc_taprio_qopt_offload *offload;
+ 		/* Relative base time */
+ 		s64 rbt;
+@@ -468,6 +469,7 @@ bool sja1105_gating_check_conflicts(struct sja1105_private *priv, int port,
+ 	struct sja1105_gating_config *gating_cfg = &priv->tas_data.gating_cfg;
+ 	size_t num_entries = gating_cfg->num_entries;
+ 	struct tc_taprio_qopt_offload *dummy;
++	struct dsa_switch *ds = priv->ds;
+ 	struct sja1105_gate_entry *e;
+ 	bool conflict;
+ 	int i = 0;
+@@ -491,7 +493,7 @@ bool sja1105_gating_check_conflicts(struct sja1105_private *priv, int port,
+ 	if (port != -1) {
+ 		conflict = sja1105_tas_check_conflicts(priv, port, dummy);
+ 	} else {
+-		for (port = 0; port < SJA1105_NUM_PORTS; port++) {
++		for (port = 0; port < ds->num_ports; port++) {
+ 			conflict = sja1105_tas_check_conflicts(priv, port,
+ 							       dummy);
+ 			if (conflict)
+@@ -554,7 +556,7 @@ int sja1105_setup_tc_taprio(struct dsa_switch *ds, int port,
+ 		}
+ 	}
+ 
+-	for (other_port = 0; other_port < SJA1105_NUM_PORTS; other_port++) {
++	for (other_port = 0; other_port < ds->num_ports; other_port++) {
+ 		if (other_port == port)
+ 			continue;
+ 
+@@ -885,7 +887,7 @@ void sja1105_tas_teardown(struct dsa_switch *ds)
+ 
+ 	cancel_work_sync(&priv->tas_data.tas_work);
+ 
+-	for (port = 0; port < SJA1105_NUM_PORTS; port++) {
++	for (port = 0; port < ds->num_ports; port++) {
+ 		offload = priv->tas_data.offload[port];
+ 		if (!offload)
+ 			continue;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+index cd882c4533942..caeef25c89bb1 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+@@ -474,14 +474,18 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 
+ 		spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+-		if (!qed_mcp_has_pending_cmd(p_hwfn))
++		if (!qed_mcp_has_pending_cmd(p_hwfn)) {
++			spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 			break;
++		}
+ 
+ 		rc = qed_mcp_update_pending_cmd(p_hwfn, p_ptt);
+-		if (!rc)
++		if (!rc) {
++			spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 			break;
+-		else if (rc != -EAGAIN)
++		} else if (rc != -EAGAIN) {
+ 			goto err;
++		}
+ 
+ 		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+@@ -498,6 +502,8 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		return -EAGAIN;
+ 	}
+ 
++	spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
++
+ 	/* Send the mailbox command */
+ 	qed_mcp_reread_offsets(p_hwfn, p_ptt);
+ 	seq_num = ++p_hwfn->mcp_info->drv_mb_seq;
+@@ -524,14 +530,18 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 
+ 		spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+-		if (p_cmd_elem->b_is_completed)
++		if (p_cmd_elem->b_is_completed) {
++			spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 			break;
++		}
+ 
+ 		rc = qed_mcp_update_pending_cmd(p_hwfn, p_ptt);
+-		if (!rc)
++		if (!rc) {
++			spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 			break;
+-		else if (rc != -EAGAIN)
++		} else if (rc != -EAGAIN) {
+ 			goto err;
++		}
+ 
+ 		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 	} while (++cnt < max_retries);
+@@ -554,6 +564,7 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		return -EAGAIN;
+ 	}
+ 
++	spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 	qed_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
+ 	spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index e25bfb7021ed4..2cf763b4ea84f 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -1550,7 +1550,8 @@ static int
+ rtl8152_set_speed(struct r8152 *tp, u8 autoneg, u32 speed, u8 duplex,
+ 		  u32 advertising);
+ 
+-static int rtl8152_set_mac_address(struct net_device *netdev, void *p)
++static int __rtl8152_set_mac_address(struct net_device *netdev, void *p,
++				     bool in_resume)
+ {
+ 	struct r8152 *tp = netdev_priv(netdev);
+ 	struct sockaddr *addr = p;
+@@ -1559,9 +1560,11 @@ static int rtl8152_set_mac_address(struct net_device *netdev, void *p)
+ 	if (!is_valid_ether_addr(addr->sa_data))
+ 		goto out1;
+ 
+-	ret = usb_autopm_get_interface(tp->intf);
+-	if (ret < 0)
+-		goto out1;
++	if (!in_resume) {
++		ret = usb_autopm_get_interface(tp->intf);
++		if (ret < 0)
++			goto out1;
++	}
+ 
+ 	mutex_lock(&tp->control);
+ 
+@@ -1573,11 +1576,17 @@ static int rtl8152_set_mac_address(struct net_device *netdev, void *p)
+ 
+ 	mutex_unlock(&tp->control);
+ 
+-	usb_autopm_put_interface(tp->intf);
++	if (!in_resume)
++		usb_autopm_put_interface(tp->intf);
+ out1:
+ 	return ret;
+ }
+ 
++static int rtl8152_set_mac_address(struct net_device *netdev, void *p)
++{
++	return __rtl8152_set_mac_address(netdev, p, false);
++}
++
+ /* Devices containing proper chips can support a persistent
+  * host system provided MAC address.
+  * Examples of this are Dell TB15 and Dell WD15 docks
+@@ -1696,7 +1705,7 @@ static int determine_ethernet_addr(struct r8152 *tp, struct sockaddr *sa)
+ 	return ret;
+ }
+ 
+-static int set_ethernet_addr(struct r8152 *tp)
++static int set_ethernet_addr(struct r8152 *tp, bool in_resume)
+ {
+ 	struct net_device *dev = tp->netdev;
+ 	struct sockaddr sa;
+@@ -1709,7 +1718,7 @@ static int set_ethernet_addr(struct r8152 *tp)
+ 	if (tp->version == RTL_VER_01)
+ 		ether_addr_copy(dev->dev_addr, sa.sa_data);
+ 	else
+-		ret = rtl8152_set_mac_address(dev, &sa);
++		ret = __rtl8152_set_mac_address(dev, &sa, in_resume);
+ 
+ 	return ret;
+ }
+@@ -6761,9 +6770,10 @@ static int rtl8152_close(struct net_device *netdev)
+ 		tp->rtl_ops.down(tp);
+ 
+ 		mutex_unlock(&tp->control);
++	}
+ 
++	if (!res)
+ 		usb_autopm_put_interface(tp->intf);
+-	}
+ 
+ 	free_all_mem(tp);
+ 
+@@ -8441,7 +8451,7 @@ static int rtl8152_reset_resume(struct usb_interface *intf)
+ 	clear_bit(SELECTIVE_SUSPEND, &tp->flags);
+ 	tp->rtl_ops.init(tp);
+ 	queue_delayed_work(system_long_wq, &tp->hw_phy_work, 0);
+-	set_ethernet_addr(tp);
++	set_ethernet_addr(tp, true);
+ 	return rtl8152_resume(intf);
+ }
+ 
+@@ -9561,7 +9571,7 @@ static int rtl8152_probe(struct usb_interface *intf,
+ 	tp->rtl_fw.retry = true;
+ #endif
+ 	queue_delayed_work(system_long_wq, &tp->hw_phy_work, 0);
+-	set_ethernet_addr(tp);
++	set_ethernet_addr(tp, false);
+ 
+ 	usb_set_intfdata(intf, tp);
+ 
+diff --git a/drivers/nvme/host/trace.h b/drivers/nvme/host/trace.h
+index daaf700eae799..35bac7a254227 100644
+--- a/drivers/nvme/host/trace.h
++++ b/drivers/nvme/host/trace.h
+@@ -56,7 +56,7 @@ TRACE_EVENT(nvme_setup_cmd,
+ 		__field(u8, fctype)
+ 		__field(u16, cid)
+ 		__field(u32, nsid)
+-		__field(u64, metadata)
++		__field(bool, metadata)
+ 		__array(u8, cdw10, 24)
+ 	    ),
+ 	    TP_fast_assign(
+@@ -66,13 +66,13 @@ TRACE_EVENT(nvme_setup_cmd,
+ 		__entry->flags = cmd->common.flags;
+ 		__entry->cid = cmd->common.command_id;
+ 		__entry->nsid = le32_to_cpu(cmd->common.nsid);
+-		__entry->metadata = le64_to_cpu(cmd->common.metadata);
++		__entry->metadata = !!blk_integrity_rq(req);
+ 		__entry->fctype = cmd->fabrics.fctype;
+ 		__assign_disk_name(__entry->disk, req->rq_disk);
+ 		memcpy(__entry->cdw10, &cmd->common.cdw10,
+ 			sizeof(__entry->cdw10));
+ 	    ),
+-	    TP_printk("nvme%d: %sqid=%d, cmdid=%u, nsid=%u, flags=0x%x, meta=0x%llx, cmd=(%s %s)",
++	    TP_printk("nvme%d: %sqid=%d, cmdid=%u, nsid=%u, flags=0x%x, meta=0x%x, cmd=(%s %s)",
+ 		      __entry->ctrl_id, __print_disk_name(__entry->disk),
+ 		      __entry->qid, __entry->cid, __entry->nsid,
+ 		      __entry->flags, __entry->metadata,
+diff --git a/drivers/power/supply/ab8500_btemp.c b/drivers/power/supply/ab8500_btemp.c
+index 4df305c767c5c..dbdcff32f3539 100644
+--- a/drivers/power/supply/ab8500_btemp.c
++++ b/drivers/power/supply/ab8500_btemp.c
+@@ -983,7 +983,6 @@ static const struct component_ops ab8500_btemp_component_ops = {
+ 
+ static int ab8500_btemp_probe(struct platform_device *pdev)
+ {
+-	struct device_node *np = pdev->dev.of_node;
+ 	struct power_supply_config psy_cfg = {};
+ 	struct device *dev = &pdev->dev;
+ 	struct ab8500_btemp *di;
+@@ -996,12 +995,6 @@ static int ab8500_btemp_probe(struct platform_device *pdev)
+ 
+ 	di->bm = &ab8500_bm_data;
+ 
+-	ret = ab8500_bm_of_probe(dev, np, di->bm);
+-	if (ret) {
+-		dev_err(dev, "failed to get battery information\n");
+-		return ret;
+-	}
+-
+ 	/* get parent data */
+ 	di->dev = dev;
+ 	di->parent = dev_get_drvdata(pdev->dev.parent);
+diff --git a/drivers/power/supply/ab8500_fg.c b/drivers/power/supply/ab8500_fg.c
+index 46c718e9ebb7c..146a5f03818f2 100644
+--- a/drivers/power/supply/ab8500_fg.c
++++ b/drivers/power/supply/ab8500_fg.c
+@@ -3058,12 +3058,6 @@ static int ab8500_fg_probe(struct platform_device *pdev)
+ 
+ 	di->bm = &ab8500_bm_data;
+ 
+-	ret = ab8500_bm_of_probe(dev, np, di->bm);
+-	if (ret) {
+-		dev_err(dev, "failed to get battery information\n");
+-		return ret;
+-	}
+-
+ 	mutex_init(&di->cc_lock);
+ 
+ 	/* get parent data */
+diff --git a/drivers/power/supply/abx500_chargalg.c b/drivers/power/supply/abx500_chargalg.c
+index 599684ce0e4b0..a17849bfacbff 100644
+--- a/drivers/power/supply/abx500_chargalg.c
++++ b/drivers/power/supply/abx500_chargalg.c
+@@ -2002,7 +2002,6 @@ static const struct component_ops abx500_chargalg_component_ops = {
+ static int abx500_chargalg_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+-	struct device_node *np = dev->of_node;
+ 	struct power_supply_config psy_cfg = {};
+ 	struct abx500_chargalg *di;
+ 	int ret = 0;
+@@ -2013,12 +2012,6 @@ static int abx500_chargalg_probe(struct platform_device *pdev)
+ 
+ 	di->bm = &ab8500_bm_data;
+ 
+-	ret = ab8500_bm_of_probe(dev, np, di->bm);
+-	if (ret) {
+-		dev_err(dev, "failed to get battery information\n");
+-		return ret;
+-	}
+-
+ 	/* get device struct and parent */
+ 	di->dev = dev;
+ 	di->parent = dev_get_drvdata(pdev->dev.parent);
+diff --git a/drivers/regulator/mtk-dvfsrc-regulator.c b/drivers/regulator/mtk-dvfsrc-regulator.c
+index d3d876198d6ec..234af3a66c77d 100644
+--- a/drivers/regulator/mtk-dvfsrc-regulator.c
++++ b/drivers/regulator/mtk-dvfsrc-regulator.c
+@@ -179,8 +179,7 @@ static int dvfsrc_vcore_regulator_probe(struct platform_device *pdev)
+ 	for (i = 0; i < regulator_init_data->size; i++) {
+ 		config.dev = dev->parent;
+ 		config.driver_data = (mt_regulators + i);
+-		rdev = devm_regulator_register(dev->parent,
+-					       &(mt_regulators + i)->desc,
++		rdev = devm_regulator_register(dev, &(mt_regulators + i)->desc,
+ 					       &config);
+ 		if (IS_ERR(rdev)) {
+ 			dev_err(dev, "failed to register %s\n",
+diff --git a/drivers/regulator/rtmv20-regulator.c b/drivers/regulator/rtmv20-regulator.c
+index 4bca64de0f672..2ee334174e2b0 100644
+--- a/drivers/regulator/rtmv20-regulator.c
++++ b/drivers/regulator/rtmv20-regulator.c
+@@ -37,7 +37,7 @@
+ #define RTMV20_WIDTH2_MASK	GENMASK(7, 0)
+ #define RTMV20_LBPLVL_MASK	GENMASK(3, 0)
+ #define RTMV20_LBPEN_MASK	BIT(7)
+-#define RTMV20_STROBEPOL_MASK	BIT(1)
++#define RTMV20_STROBEPOL_MASK	BIT(0)
+ #define RTMV20_VSYNPOL_MASK	BIT(1)
+ #define RTMV20_FSINEN_MASK	BIT(7)
+ #define RTMV20_ESEN_MASK	BIT(6)
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 8d5fa7f1e5069..03510a744fa18 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -426,24 +426,15 @@ static int mtk_spi_fifo_transfer(struct spi_master *master,
+ 	mtk_spi_prepare_transfer(master, xfer);
+ 	mtk_spi_setup_packet(master);
+ 
+-	cnt = xfer->len / 4;
+-	if (xfer->tx_buf)
++	if (xfer->tx_buf) {
++		cnt = xfer->len / 4;
+ 		iowrite32_rep(mdata->base + SPI_TX_DATA_REG, xfer->tx_buf, cnt);
+-
+-	if (xfer->rx_buf)
+-		ioread32_rep(mdata->base + SPI_RX_DATA_REG, xfer->rx_buf, cnt);
+-
+-	remainder = xfer->len % 4;
+-	if (remainder > 0) {
+-		reg_val = 0;
+-		if (xfer->tx_buf) {
++		remainder = xfer->len % 4;
++		if (remainder > 0) {
++			reg_val = 0;
+ 			memcpy(&reg_val, xfer->tx_buf + (cnt * 4), remainder);
+ 			writel(reg_val, mdata->base + SPI_TX_DATA_REG);
+ 		}
+-		if (xfer->rx_buf) {
+-			reg_val = readl(mdata->base + SPI_RX_DATA_REG);
+-			memcpy(xfer->rx_buf + (cnt * 4), &reg_val, remainder);
+-		}
+ 	}
+ 
+ 	mtk_spi_enable_transfer(master);
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index a92a28933edbb..05618a618939c 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -884,15 +884,18 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
+ 	ier = readl_relaxed(spi->base + STM32H7_SPI_IER);
+ 
+ 	mask = ier;
+-	/* EOTIE is triggered on EOT, SUSP and TXC events. */
++	/*
++	 * EOTIE enables irq from EOT, SUSP and TXC events. We need to set
++	 * SUSP to acknowledge it later. TXC is automatically cleared
++	 */
++
+ 	mask |= STM32H7_SPI_SR_SUSP;
+ 	/*
+-	 * When TXTF is set, DXPIE and TXPIE are cleared. So in case of
+-	 * Full-Duplex, need to poll RXP event to know if there are remaining
+-	 * data, before disabling SPI.
++	 * DXPIE is set in Full-Duplex, one IT will be raised if TXP and RXP
++	 * are set. So in case of Full-Duplex, need to poll TXP and RXP event.
+ 	 */
+-	if (spi->rx_buf && !spi->cur_usedma)
+-		mask |= STM32H7_SPI_SR_RXP;
++	if ((spi->cur_comm == SPI_FULL_DUPLEX) && !spi->cur_usedma)
++		mask |= STM32H7_SPI_SR_TXP | STM32H7_SPI_SR_RXP;
+ 
+ 	if (!(sr & mask)) {
+ 		dev_warn(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
+diff --git a/drivers/watchdog/iTCO_wdt.c b/drivers/watchdog/iTCO_wdt.c
+index 3f1324871cfd6..bf31d7b67a697 100644
+--- a/drivers/watchdog/iTCO_wdt.c
++++ b/drivers/watchdog/iTCO_wdt.c
+@@ -71,8 +71,6 @@
+ #define TCOBASE(p)	((p)->tco_res->start)
+ /* SMI Control and Enable Register */
+ #define SMI_EN(p)	((p)->smi_res->start)
+-#define TCO_EN		(1 << 13)
+-#define GBL_SMI_EN	(1 << 0)
+ 
+ #define TCO_RLD(p)	(TCOBASE(p) + 0x00) /* TCO Timer Reload/Curr. Value */
+ #define TCOv1_TMR(p)	(TCOBASE(p) + 0x01) /* TCOv1 Timer Initial Value*/
+@@ -357,12 +355,8 @@ static int iTCO_wdt_set_timeout(struct watchdog_device *wd_dev, unsigned int t)
+ 
+ 	tmrval = seconds_to_ticks(p, t);
+ 
+-	/*
+-	 * If TCO SMIs are off, the timer counts down twice before rebooting.
+-	 * Otherwise, the BIOS generally reboots when the SMI triggers.
+-	 */
+-	if (p->smi_res &&
+-	    (SMI_EN(p) & (TCO_EN | GBL_SMI_EN)) != (TCO_EN | GBL_SMI_EN))
++	/* For TCO v1 the timer counts down twice before rebooting */
++	if (p->iTCO_version == 1)
+ 		tmrval /= 2;
+ 
+ 	/* from the specs: */
+@@ -527,7 +521,7 @@ static int iTCO_wdt_probe(struct platform_device *pdev)
+ 		 * Disables TCO logic generating an SMI#
+ 		 */
+ 		val32 = inl(SMI_EN(p));
+-		val32 &= ~TCO_EN;	/* Turn off SMI clearing watchdog */
++		val32 &= 0xffffdfff;	/* Turn off SMI clearing watchdog */
+ 		outl(val32, SMI_EN(p));
+ 	}
+ 
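
The restored magic constant is simply the complement of bit 13 (the TCO_EN
position in SMI_EN), so the named-constant and magic-number forms clear the
same bit. A quick plain-C check of that equivalence, outside the driver:

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t val32 = 0xffffffffu;
		uint32_t tco_en = 1u << 13;	/* the TCO_EN bit in SMI_EN */

		val32 &= ~tco_en;		/* turn off SMI clearing of the watchdog */
		assert(val32 == 0xffffdfffu);	/* matches the restored magic mask */
		printf("0x%08x\n", (unsigned int)val32);
		return 0;
	}
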
+diff --git a/fs/cifs/fs_context.c b/fs/cifs/fs_context.c
+index 92d4ab029c917..72742eb1df4a7 100644
+--- a/fs/cifs/fs_context.c
++++ b/fs/cifs/fs_context.c
+@@ -322,7 +322,6 @@ smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx
+ 	new_ctx->UNC = NULL;
+ 	new_ctx->source = NULL;
+ 	new_ctx->iocharset = NULL;
+-
+ 	/*
+ 	 * Make sure to stay in sync with smb3_cleanup_fs_context_contents()
+ 	 */
+@@ -792,6 +791,8 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 	int i, opt;
+ 	bool is_smb3 = !strcmp(fc->fs_type->name, "smb3");
+ 	bool skip_parsing = false;
++	kuid_t uid;
++	kgid_t gid;
+ 
+ 	cifs_dbg(FYI, "CIFS: parsing cifs mount option '%s'\n", param->key);
+ 
+@@ -904,18 +905,38 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 		}
+ 		break;
+ 	case Opt_uid:
+-		ctx->linux_uid.val = result.uint_32;
++		uid = make_kuid(current_user_ns(), result.uint_32);
++		if (!uid_valid(uid))
++			goto cifs_parse_mount_err;
++		ctx->linux_uid = uid;
+ 		ctx->uid_specified = true;
+ 		break;
+ 	case Opt_cruid:
+-		ctx->cred_uid.val = result.uint_32;
++		uid = make_kuid(current_user_ns(), result.uint_32);
++		if (!uid_valid(uid))
++			goto cifs_parse_mount_err;
++		ctx->cred_uid = uid;
++		ctx->cruid_specified = true;
++		break;
++	case Opt_backupuid:
++		uid = make_kuid(current_user_ns(), result.uint_32);
++		if (!uid_valid(uid))
++			goto cifs_parse_mount_err;
++		ctx->backupuid = uid;
++		ctx->backupuid_specified = true;
+ 		break;
+ 	case Opt_backupgid:
+-		ctx->backupgid.val = result.uint_32;
++		gid = make_kgid(current_user_ns(), result.uint_32);
++		if (!gid_valid(gid))
++			goto cifs_parse_mount_err;
++		ctx->backupgid = gid;
+ 		ctx->backupgid_specified = true;
+ 		break;
+ 	case Opt_gid:
+-		ctx->linux_gid.val = result.uint_32;
++		gid = make_kgid(current_user_ns(), result.uint_32);
++		if (!gid_valid(gid))
++			goto cifs_parse_mount_err;
++		ctx->linux_gid = gid;
+ 		ctx->gid_specified = true;
+ 		break;
+ 	case Opt_port:
+diff --git a/fs/cifs/fs_context.h b/fs/cifs/fs_context.h
+index 2a71c8e411ac1..b6243972edf3b 100644
+--- a/fs/cifs/fs_context.h
++++ b/fs/cifs/fs_context.h
+@@ -155,6 +155,7 @@ enum cifs_param {
+ 
+ struct smb3_fs_context {
+ 	bool uid_specified;
++	bool cruid_specified;
+ 	bool gid_specified;
+ 	bool sloppy;
+ 	bool got_ip;
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index 60f58efdb5f48..9efecdf025b9c 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -736,7 +736,12 @@ static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
+ 	int work_flags;
+ 	unsigned long flags;
+ 
+-	if (test_bit(IO_WQ_BIT_EXIT, &wqe->wq->state)) {
++	/*
++	 * If io-wq is exiting for this task, or if the request has explicitly
++	 * been marked as one that should not get executed, cancel it here.
++	 */
++	if (test_bit(IO_WQ_BIT_EXIT, &wqe->wq->state) ||
++	    (work->flags & IO_WQ_WORK_CANCEL)) {
+ 		io_run_cancel(work, wqe);
+ 		return;
+ 	}
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index d465e99971574..32f3df13a812d 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1282,6 +1282,17 @@ static void io_queue_async_work(struct io_kiocb *req)
+ 
+ 	/* init ->work of the whole link before punting */
+ 	io_prep_async_link(req);
++
++	/*
++	 * Not expected to happen, but if we do have a bug where this _can_
++	 * happen, catch it here and ensure the request is marked as
++	 * canceled. That will make io-wq go through the usual work cancel
++	 * procedure rather than attempt to run this request (or create a new
++	 * worker for it).
++	 */
++	if (WARN_ON_ONCE(!same_thread_group(req->task, current)))
++		req->work.flags |= IO_WQ_WORK_CANCEL;
++
+ 	trace_io_uring_queue_async_work(ctx, io_wq_is_hashed(&req->work), req,
+ 					&req->work, req->flags);
+ 	io_wq_enqueue(tctx->io_wq, &req->work);
+@@ -2252,7 +2263,7 @@ static inline bool io_run_task_work(void)
+  * Find and free completed poll iocbs
+  */
+ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+-			       struct list_head *done)
++			       struct list_head *done, bool resubmit)
+ {
+ 	struct req_batch rb;
+ 	struct io_kiocb *req;
+@@ -2267,7 +2278,7 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ 		req = list_first_entry(done, struct io_kiocb, inflight_entry);
+ 		list_del(&req->inflight_entry);
+ 
+-		if (READ_ONCE(req->result) == -EAGAIN &&
++		if (READ_ONCE(req->result) == -EAGAIN && resubmit &&
+ 		    !(req->flags & REQ_F_DONT_REISSUE)) {
+ 			req->iopoll_completed = 0;
+ 			req_ref_get(req);
+@@ -2291,7 +2302,7 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ }
+ 
+ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+-			long min)
++			long min, bool resubmit)
+ {
+ 	struct io_kiocb *req, *tmp;
+ 	LIST_HEAD(done);
+@@ -2334,7 +2345,7 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
+ 	}
+ 
+ 	if (!list_empty(&done))
+-		io_iopoll_complete(ctx, nr_events, &done);
++		io_iopoll_complete(ctx, nr_events, &done, resubmit);
+ 
+ 	return ret;
+ }
+@@ -2352,7 +2363,7 @@ static void io_iopoll_try_reap_events(struct io_ring_ctx *ctx)
+ 	while (!list_empty(&ctx->iopoll_list)) {
+ 		unsigned int nr_events = 0;
+ 
+-		io_do_iopoll(ctx, &nr_events, 0);
++		io_do_iopoll(ctx, &nr_events, 0, false);
+ 
+ 		/* let it sleep and repeat later if can't complete a request */
+ 		if (nr_events == 0)
+@@ -2410,7 +2421,7 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
+ 			if (list_empty(&ctx->iopoll_list))
+ 				break;
+ 		}
+-		ret = io_do_iopoll(ctx, &nr_events, min);
++		ret = io_do_iopoll(ctx, &nr_events, min, true);
+ 	} while (!ret && nr_events < min && !need_resched());
+ out:
+ 	mutex_unlock(&ctx->uring_lock);
+@@ -6804,7 +6815,7 @@ static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
+ 
+ 		mutex_lock(&ctx->uring_lock);
+ 		if (!list_empty(&ctx->iopoll_list))
+-			io_do_iopoll(ctx, &nr_events, 0);
++			io_do_iopoll(ctx, &nr_events, 0, true);
+ 
+ 		/*
+ 		 * Don't submit if refs are dying, good for io_uring_register(),
+diff --git a/include/linux/mfd/rt5033-private.h b/include/linux/mfd/rt5033-private.h
+index 2d1895c3efbf2..40a0c2dfb80ff 100644
+--- a/include/linux/mfd/rt5033-private.h
++++ b/include/linux/mfd/rt5033-private.h
+@@ -200,13 +200,13 @@ enum rt5033_reg {
+ #define RT5033_REGULATOR_BUCK_VOLTAGE_MIN		1000000U
+ #define RT5033_REGULATOR_BUCK_VOLTAGE_MAX		3000000U
+ #define RT5033_REGULATOR_BUCK_VOLTAGE_STEP		100000U
+-#define RT5033_REGULATOR_BUCK_VOLTAGE_STEP_NUM		32
++#define RT5033_REGULATOR_BUCK_VOLTAGE_STEP_NUM		21
+ 
+ /* RT5033 regulator LDO output voltage uV */
+ #define RT5033_REGULATOR_LDO_VOLTAGE_MIN		1200000U
+ #define RT5033_REGULATOR_LDO_VOLTAGE_MAX		3000000U
+ #define RT5033_REGULATOR_LDO_VOLTAGE_STEP		100000U
+-#define RT5033_REGULATOR_LDO_VOLTAGE_STEP_NUM		32
++#define RT5033_REGULATOR_LDO_VOLTAGE_STEP_NUM		19
+ 
+ /* RT5033 regulator SAFE LDO output voltage uV */
+ #define RT5033_REGULATOR_SAFE_LDO_VOLTAGE		4900000U
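
The corrected step counts follow directly from the voltage constants above:
n_steps = (max - min) / step + 1. A minimal check using only the values shown:

	#include <assert.h>
	#include <stdio.h>

	static unsigned int n_steps(unsigned int min_uv, unsigned int max_uv,
				    unsigned int step_uv)
	{
		return (max_uv - min_uv) / step_uv + 1;
	}

	int main(void)
	{
		/* BUCK: 1.0V..3.0V in 100mV steps */
		assert(n_steps(1000000u, 3000000u, 100000u) == 21);
		/* LDO: 1.2V..3.0V in 100mV steps */
		assert(n_steps(1200000u, 3000000u, 100000u) == 19);
		printf("BUCK = 21 steps, LDO = 19 steps\n");
		return 0;
	}
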
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index ded55f54d9c8e..7d71d104fdfda 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1721,6 +1721,14 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 
+ 	BT_DBG("%s %p", hdev->name, hdev);
+ 
++	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
++	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
++	    test_bit(HCI_UP, &hdev->flags)) {
++		/* Execute vendor specific shutdown routine */
++		if (hdev->shutdown)
++			hdev->shutdown(hdev);
++	}
++
+ 	cancel_delayed_work(&hdev->power_off);
+ 
+ 	hci_request_cancel_all(hdev);
+@@ -1797,14 +1805,6 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 		clear_bit(HCI_INIT, &hdev->flags);
+ 	}
+ 
+-	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
+-	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
+-	    test_bit(HCI_UP, &hdev->flags)) {
+-		/* Execute vendor specific shutdown routine */
+-		if (hdev->shutdown)
+-			hdev->shutdown(hdev);
+-	}
+-
+ 	/* flush cmd  work */
+ 	flush_work(&hdev->cmd_work);
+ 
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 30ca61d91b69a..17b93177a68f8 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3006,8 +3006,11 @@ skb_zerocopy_headlen(const struct sk_buff *from)
+ 
+ 	if (!from->head_frag ||
+ 	    skb_headlen(from) < L1_CACHE_BYTES ||
+-	    skb_shinfo(from)->nr_frags >= MAX_SKB_FRAGS)
++	    skb_shinfo(from)->nr_frags >= MAX_SKB_FRAGS) {
+ 		hlen = skb_headlen(from);
++		if (!hlen)
++			hlen = from->len;
++	}
+ 
+ 	if (skb_has_frag_list(from))
+ 		hlen = from->len;
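
The added fallback makes skb_zerocopy_headlen() return the full length instead
of a useless zero-byte head when all linear data lives in fragments. A
simplified model of just that branch (plain integers stand in for the skb
fields, and the frag-list override is omitted from this sketch):

	#include <assert.h>
	#include <stdio.h>

	#define L1_CACHE_BYTES	64	/* assumed cache-line size for the demo */

	static unsigned int zerocopy_headlen(unsigned int headlen, unsigned int len,
					     int head_frag, int frags_full)
	{
		unsigned int hlen = 0;

		if (!head_frag || headlen < L1_CACHE_BYTES || frags_full) {
			hlen = headlen;
			if (!hlen)	/* the fix: never hand back a 0-byte head */
				hlen = len;
		}
		return hlen;
	}

	int main(void)
	{
		/* Linear data lives only in frags: headlen 0, total len 1500. */
		assert(zerocopy_headlen(0, 1500, 1, 1) == 1500);
		/* Small linear head is still copied as-is. */
		assert(zerocopy_headlen(32, 1500, 1, 0) == 32);
		printf("ok\n");
		return 0;
	}
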
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 45b3a3adc886f..7e7205e93258b 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -607,23 +607,48 @@ static int sk_psock_handle_skb(struct sk_psock *psock, struct sk_buff *skb,
+ 	return sk_psock_skb_ingress(psock, skb);
+ }
+ 
++static void sock_drop(struct sock *sk, struct sk_buff *skb)
++{
++	sk_drops_add(sk, skb);
++	kfree_skb(skb);
++}
++
++static void sk_psock_skb_state(struct sk_psock *psock,
++			       struct sk_psock_work_state *state,
++			       struct sk_buff *skb,
++			       int len, int off)
++{
++	spin_lock_bh(&psock->ingress_lock);
++	if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
++		state->skb = skb;
++		state->len = len;
++		state->off = off;
++	} else {
++		sock_drop(psock->sk, skb);
++	}
++	spin_unlock_bh(&psock->ingress_lock);
++}
++
+ static void sk_psock_backlog(struct work_struct *work)
+ {
+ 	struct sk_psock *psock = container_of(work, struct sk_psock, work);
+ 	struct sk_psock_work_state *state = &psock->work_state;
+-	struct sk_buff *skb;
++	struct sk_buff *skb = NULL;
+ 	bool ingress;
+ 	u32 len, off;
+ 	int ret;
+ 
+ 	mutex_lock(&psock->work_mutex);
+-	if (state->skb) {
++	if (unlikely(state->skb)) {
++		spin_lock_bh(&psock->ingress_lock);
+ 		skb = state->skb;
+ 		len = state->len;
+ 		off = state->off;
+ 		state->skb = NULL;
+-		goto start;
++		spin_unlock_bh(&psock->ingress_lock);
+ 	}
++	if (skb)
++		goto start;
+ 
+ 	while ((skb = skb_dequeue(&psock->ingress_skb))) {
+ 		len = skb->len;
+@@ -638,15 +663,14 @@ start:
+ 							  len, ingress);
+ 			if (ret <= 0) {
+ 				if (ret == -EAGAIN) {
+-					state->skb = skb;
+-					state->len = len;
+-					state->off = off;
++					sk_psock_skb_state(psock, state, skb,
++							   len, off);
+ 					goto end;
+ 				}
+ 				/* Hard errors break pipe and stop xmit. */
+ 				sk_psock_report_error(psock, ret ? -ret : EPIPE);
+ 				sk_psock_clear_state(psock, SK_PSOCK_TX_ENABLED);
+-				kfree_skb(skb);
++				sock_drop(psock->sk, skb);
+ 				goto end;
+ 			}
+ 			off += ret;
+@@ -737,8 +761,13 @@ static void __sk_psock_zap_ingress(struct sk_psock *psock)
+ 
+ 	while ((skb = skb_dequeue(&psock->ingress_skb)) != NULL) {
+ 		skb_bpf_redirect_clear(skb);
+-		kfree_skb(skb);
++		sock_drop(psock->sk, skb);
+ 	}
++	kfree_skb(psock->work_state.skb);
++	/* We null the skb here to ensure that calls to sk_psock_backlog
++	 * do not pick up the free'd skb.
++	 */
++	psock->work_state.skb = NULL;
+ 	__sk_psock_purge_ingress_msg(psock);
+ }
+ 
+@@ -853,7 +882,7 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(sk_psock_msg_verdict);
+ 
+-static int sk_psock_skb_redirect(struct sk_buff *skb)
++static int sk_psock_skb_redirect(struct sk_psock *from, struct sk_buff *skb)
+ {
+ 	struct sk_psock *psock_other;
+ 	struct sock *sk_other;
+@@ -863,7 +892,7 @@ static int sk_psock_skb_redirect(struct sk_buff *skb)
+ 	 * return code, but then didn't set a redirect interface.
+ 	 */
+ 	if (unlikely(!sk_other)) {
+-		kfree_skb(skb);
++		sock_drop(from->sk, skb);
+ 		return -EIO;
+ 	}
+ 	psock_other = sk_psock(sk_other);
+@@ -873,14 +902,14 @@ static int sk_psock_skb_redirect(struct sk_buff *skb)
+ 	 */
+ 	if (!psock_other || sock_flag(sk_other, SOCK_DEAD)) {
+ 		skb_bpf_redirect_clear(skb);
+-		kfree_skb(skb);
++		sock_drop(from->sk, skb);
+ 		return -EIO;
+ 	}
+ 	spin_lock_bh(&psock_other->ingress_lock);
+ 	if (!sk_psock_test_state(psock_other, SK_PSOCK_TX_ENABLED)) {
+ 		spin_unlock_bh(&psock_other->ingress_lock);
+ 		skb_bpf_redirect_clear(skb);
+-		kfree_skb(skb);
++		sock_drop(from->sk, skb);
+ 		return -EIO;
+ 	}
+ 
+@@ -890,11 +919,12 @@ static int sk_psock_skb_redirect(struct sk_buff *skb)
+ 	return 0;
+ }
+ 
+-static void sk_psock_tls_verdict_apply(struct sk_buff *skb, struct sock *sk, int verdict)
++static void sk_psock_tls_verdict_apply(struct sk_buff *skb,
++				       struct sk_psock *from, int verdict)
+ {
+ 	switch (verdict) {
+ 	case __SK_REDIRECT:
+-		sk_psock_skb_redirect(skb);
++		sk_psock_skb_redirect(from, skb);
+ 		break;
+ 	case __SK_PASS:
+ 	case __SK_DROP:
+@@ -918,7 +948,7 @@ int sk_psock_tls_strp_read(struct sk_psock *psock, struct sk_buff *skb)
+ 		ret = sk_psock_map_verd(ret, skb_bpf_redirect_fetch(skb));
+ 		skb->sk = NULL;
+ 	}
+-	sk_psock_tls_verdict_apply(skb, psock->sk, ret);
++	sk_psock_tls_verdict_apply(skb, psock, ret);
+ 	rcu_read_unlock();
+ 	return ret;
+ }
+@@ -965,12 +995,12 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
+ 		}
+ 		break;
+ 	case __SK_REDIRECT:
+-		err = sk_psock_skb_redirect(skb);
++		err = sk_psock_skb_redirect(psock, skb);
+ 		break;
+ 	case __SK_DROP:
+ 	default:
+ out_free:
+-		kfree_skb(skb);
++		sock_drop(psock->sk, skb);
+ 	}
+ 
+ 	return err;
+@@ -1005,7 +1035,7 @@ static void sk_psock_strp_read(struct strparser *strp, struct sk_buff *skb)
+ 	sk = strp->sk;
+ 	psock = sk_psock(sk);
+ 	if (unlikely(!psock)) {
+-		kfree_skb(skb);
++		sock_drop(sk, skb);
+ 		goto out;
+ 	}
+ 	prog = READ_ONCE(psock->progs.stream_verdict);
+@@ -1126,7 +1156,7 @@ static int sk_psock_verdict_recv(read_descriptor_t *desc, struct sk_buff *skb,
+ 	psock = sk_psock(sk);
+ 	if (unlikely(!psock)) {
+ 		len = 0;
+-		kfree_skb(skb);
++		sock_drop(sk, skb);
+ 		goto out;
+ 	}
+ 	prog = READ_ONCE(psock->progs.stream_verdict);
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index e4c91571abaef..abcd6f4837888 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -973,10 +973,14 @@ int rt5682_headset_detect(struct snd_soc_component *component, int jack_insert)
+ 		rt5682_enable_push_button_irq(component, false);
+ 		snd_soc_component_update_bits(component, RT5682_CBJ_CTRL_1,
+ 			RT5682_TRIG_JD_MASK, RT5682_TRIG_JD_LOW);
+-		if (!snd_soc_dapm_get_pin_status(dapm, "MICBIAS"))
++		if (!snd_soc_dapm_get_pin_status(dapm, "MICBIAS") &&
++			!snd_soc_dapm_get_pin_status(dapm, "PLL1") &&
++			!snd_soc_dapm_get_pin_status(dapm, "PLL2B"))
+ 			snd_soc_component_update_bits(component,
+ 				RT5682_PWR_ANLG_1, RT5682_PWR_MB, 0);
+-		if (!snd_soc_dapm_get_pin_status(dapm, "Vref2"))
++		if (!snd_soc_dapm_get_pin_status(dapm, "Vref2") &&
++			!snd_soc_dapm_get_pin_status(dapm, "PLL1") &&
++			!snd_soc_dapm_get_pin_status(dapm, "PLL2B"))
+ 			snd_soc_component_update_bits(component,
+ 				RT5682_PWR_ANLG_1, RT5682_PWR_VREF2, 0);
+ 		snd_soc_component_update_bits(component, RT5682_PWR_ANLG_3,
+diff --git a/sound/soc/codecs/tlv320aic31xx.h b/sound/soc/codecs/tlv320aic31xx.h
+index 81952984613d2..2513922a02923 100644
+--- a/sound/soc/codecs/tlv320aic31xx.h
++++ b/sound/soc/codecs/tlv320aic31xx.h
+@@ -151,8 +151,8 @@ struct aic31xx_pdata {
+ #define AIC31XX_WORD_LEN_24BITS		0x02
+ #define AIC31XX_WORD_LEN_32BITS		0x03
+ #define AIC31XX_IFACE1_MASTER_MASK	GENMASK(3, 2)
+-#define AIC31XX_BCLK_MASTER		BIT(2)
+-#define AIC31XX_WCLK_MASTER		BIT(3)
++#define AIC31XX_BCLK_MASTER		BIT(3)
++#define AIC31XX_WCLK_MASTER		BIT(2)
+ 
+ /* AIC31XX_DATA_OFFSET */
+ #define AIC31XX_DATA_OFFSET_MASK	GENMASK(7, 0)
+diff --git a/sound/soc/intel/boards/Kconfig b/sound/soc/intel/boards/Kconfig
+index 58379393b8e4c..ceeb618bd9508 100644
+--- a/sound/soc/intel/boards/Kconfig
++++ b/sound/soc/intel/boards/Kconfig
+@@ -26,6 +26,12 @@ config SND_SOC_INTEL_USER_FRIENDLY_LONG_NAMES
+ 	  interface.
+ 	  If unsure select N.
+ 
++config SND_SOC_INTEL_HDA_DSP_COMMON
++	tristate
++
++config SND_SOC_INTEL_SOF_MAXIM_COMMON
++	tristate
++
+ if SND_SOC_INTEL_CATPT
+ 
+ config SND_SOC_INTEL_HASWELL_MACH
+@@ -278,6 +284,7 @@ config SND_SOC_INTEL_DA7219_MAX98357A_GENERIC
+ 	select SND_SOC_MAX98390
+ 	select SND_SOC_DMIC
+ 	select SND_SOC_HDAC_HDMI
++	select SND_SOC_INTEL_HDA_DSP_COMMON
+ 
+ config SND_SOC_INTEL_BXT_DA7219_MAX98357A_COMMON
+ 	tristate
+@@ -304,6 +311,7 @@ config SND_SOC_INTEL_BXT_RT298_MACH
+ 	select SND_SOC_RT298
+ 	select SND_SOC_DMIC
+ 	select SND_SOC_HDAC_HDMI
++	select SND_SOC_INTEL_HDA_DSP_COMMON
+ 	help
+ 	   This adds support for ASoC machine driver for Broxton platforms
+ 	   with RT286 I2S audio codec.
+@@ -422,6 +430,7 @@ config SND_SOC_INTEL_GLK_RT5682_MAX98357A_MACH
+ 	select SND_SOC_MAX98357A
+ 	select SND_SOC_DMIC
+ 	select SND_SOC_HDAC_HDMI
++	select SND_SOC_INTEL_HDA_DSP_COMMON
+ 	help
+ 	   This adds support for ASoC machine driver for Geminilake platforms
+ 	   with RT5682 + MAX98357A I2S audio codec.
+@@ -437,6 +446,7 @@ config SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH
+ 	depends on SND_HDA_CODEC_HDMI
+ 	depends on GPIOLIB
+ 	select SND_SOC_HDAC_HDMI
++	select SND_SOC_INTEL_HDA_DSP_COMMON
+ 	select SND_SOC_DMIC
+ 	# SND_SOC_HDAC_HDA is already selected
+ 	help
+@@ -461,6 +471,8 @@ config SND_SOC_INTEL_SOF_RT5682_MACH
+ 	select SND_SOC_RT5682_I2C
+ 	select SND_SOC_DMIC
+ 	select SND_SOC_HDAC_HDMI
++	select SND_SOC_INTEL_HDA_DSP_COMMON
++	select SND_SOC_INTEL_SOF_MAXIM_COMMON
+ 	help
+ 	   This adds support for ASoC machine driver for SOF platforms
+ 	   with rt5682 codec.
+@@ -473,6 +485,7 @@ config SND_SOC_INTEL_SOF_PCM512x_MACH
+ 	depends on (SND_SOC_SOF_HDA_AUDIO_CODEC && (MFD_INTEL_LPSS || COMPILE_TEST)) ||\
+ 		   (SND_SOC_SOF_BAYTRAIL && (X86_INTEL_LPSS || COMPILE_TEST))
+ 	depends on SND_HDA_CODEC_HDMI
++	select SND_SOC_INTEL_HDA_DSP_COMMON
+ 	select SND_SOC_PCM512x_I2C
+ 	help
+ 	  This adds support for ASoC machine driver for SOF platforms
+@@ -504,6 +517,7 @@ config SND_SOC_INTEL_SOF_CML_RT1011_RT5682_MACH
+ 	select SND_SOC_RT5682_I2C
+ 	select SND_SOC_DMIC
+ 	select SND_SOC_HDAC_HDMI
++	select SND_SOC_INTEL_HDA_DSP_COMMON
+ 	help
+ 	  This adds support for ASoC machine driver for SOF platform with
+ 	  RT1011 + RT5682 I2S codec.
+@@ -519,6 +533,7 @@ config SND_SOC_INTEL_SOF_DA7219_MAX98373_MACH
+ 	depends on I2C && ACPI && GPIOLIB
+ 	depends on MFD_INTEL_LPSS || COMPILE_TEST
+ 	depends on SND_HDA_CODEC_HDMI && SND_SOC_SOF_HDA_AUDIO_CODEC
++	select SND_SOC_INTEL_HDA_DSP_COMMON
+ 	select SND_SOC_DA7219
+ 	select SND_SOC_MAX98373_I2C
+ 	select SND_SOC_DMIC
+@@ -539,6 +554,7 @@ config SND_SOC_INTEL_EHL_RT5660_MACH
+ 	depends on SND_HDA_CODEC_HDMI && SND_SOC_SOF_HDA_AUDIO_CODEC
+ 	select SND_SOC_RT5660
+ 	select SND_SOC_DMIC
++	select SND_SOC_INTEL_HDA_DSP_COMMON
+ 	help
+ 	  This adds support for ASoC machine driver for Elkhart Lake
+ 	  platform with RT5660 I2S audio codec.
+@@ -566,6 +582,8 @@ config SND_SOC_INTEL_SOUNDWIRE_SOF_MACH
+ 	select SND_SOC_RT715_SDCA_SDW
+ 	select SND_SOC_RT5682_SDW
+ 	select SND_SOC_DMIC
++	select SND_SOC_INTEL_HDA_DSP_COMMON
++	select SND_SOC_INTEL_SOF_MAXIM_COMMON
+ 	help
+ 	  Add support for Intel SoundWire-based platforms connected to
+ 	  MAX98373, RT700, RT711, RT1308 and RT715
+diff --git a/sound/soc/intel/boards/Makefile b/sound/soc/intel/boards/Makefile
+index 616c5fbab7d5a..855296e8dfb83 100644
+--- a/sound/soc/intel/boards/Makefile
++++ b/sound/soc/intel/boards/Makefile
+@@ -3,11 +3,11 @@ snd-soc-sst-haswell-objs := haswell.o
+ snd-soc-sst-bdw-rt5650-mach-objs := bdw-rt5650.o
+ snd-soc-sst-bdw-rt5677-mach-objs := bdw-rt5677.o
+ snd-soc-sst-broadwell-objs := broadwell.o
+-snd-soc-sst-bxt-da7219_max98357a-objs := bxt_da7219_max98357a.o hda_dsp_common.o
+-snd-soc-sst-bxt-rt298-objs := bxt_rt298.o hda_dsp_common.o
+-snd-soc-sst-sof-pcm512x-objs := sof_pcm512x.o hda_dsp_common.o
++snd-soc-sst-bxt-da7219_max98357a-objs := bxt_da7219_max98357a.o
++snd-soc-sst-bxt-rt298-objs := bxt_rt298.o
++snd-soc-sst-sof-pcm512x-objs := sof_pcm512x.o
+ snd-soc-sst-sof-wm8804-objs := sof_wm8804.o
+-snd-soc-sst-glk-rt5682_max98357a-objs := glk_rt5682_max98357a.o hda_dsp_common.o
++snd-soc-sst-glk-rt5682_max98357a-objs := glk_rt5682_max98357a.o
+ snd-soc-sst-bytcr-rt5640-objs := bytcr_rt5640.o
+ snd-soc-sst-bytcr-rt5651-objs := bytcr_rt5651.o
+ snd-soc-sst-bytcr-wm5102-objs := bytcr_wm5102.o
+@@ -19,27 +19,26 @@ snd-soc-sst-byt-cht-cx2072x-objs := bytcht_cx2072x.o
+ snd-soc-sst-byt-cht-da7213-objs := bytcht_da7213.o
+ snd-soc-sst-byt-cht-es8316-objs := bytcht_es8316.o
+ snd-soc-sst-byt-cht-nocodec-objs := bytcht_nocodec.o
+-snd-soc-sof_rt5682-objs := sof_rt5682.o hda_dsp_common.o sof_maxim_common.o sof_realtek_common.o
+-snd-soc-cml_rt1011_rt5682-objs := cml_rt1011_rt5682.o hda_dsp_common.o
++snd-soc-sof_rt5682-objs := sof_rt5682.o sof_realtek_common.o
++snd-soc-cml_rt1011_rt5682-objs := cml_rt1011_rt5682.o
+ snd-soc-kbl_da7219_max98357a-objs := kbl_da7219_max98357a.o
+ snd-soc-kbl_da7219_max98927-objs := kbl_da7219_max98927.o
+ snd-soc-kbl_rt5663_max98927-objs := kbl_rt5663_max98927.o
+ snd-soc-kbl_rt5663_rt5514_max98927-objs := kbl_rt5663_rt5514_max98927.o
+ snd-soc-kbl_rt5660-objs := kbl_rt5660.o
+ snd-soc-skl_rt286-objs := skl_rt286.o
+-snd-soc-skl_hda_dsp-objs := skl_hda_dsp_generic.o skl_hda_dsp_common.o hda_dsp_common.o
++snd-soc-skl_hda_dsp-objs := skl_hda_dsp_generic.o skl_hda_dsp_common.o
+ snd-skl_nau88l25_max98357a-objs := skl_nau88l25_max98357a.o
+ snd-soc-skl_nau88l25_ssm4567-objs := skl_nau88l25_ssm4567.o
+-snd-soc-sof_da7219_max98373-objs := sof_da7219_max98373.o hda_dsp_common.o
+-snd-soc-ehl-rt5660-objs := ehl_rt5660.o hda_dsp_common.o
++snd-soc-sof_da7219_max98373-objs := sof_da7219_max98373.o
++snd-soc-ehl-rt5660-objs := ehl_rt5660.o
+ snd-soc-sof-sdw-objs += sof_sdw.o				\
+ 			sof_sdw_max98373.o			\
+ 			sof_sdw_rt1308.o sof_sdw_rt1316.o	\
+ 			sof_sdw_rt5682.o sof_sdw_rt700.o	\
+ 			sof_sdw_rt711.o sof_sdw_rt711_sdca.o 	\
+ 			sof_sdw_rt715.o	sof_sdw_rt715_sdca.o 	\
+-			sof_maxim_common.o                      \
+-			sof_sdw_dmic.o sof_sdw_hdmi.o hda_dsp_common.o
++			sof_sdw_dmic.o sof_sdw_hdmi.o
+ obj-$(CONFIG_SND_SOC_INTEL_SOF_RT5682_MACH) += snd-soc-sof_rt5682.o
+ obj-$(CONFIG_SND_SOC_INTEL_HASWELL_MACH) += snd-soc-sst-haswell.o
+ obj-$(CONFIG_SND_SOC_INTEL_BXT_DA7219_MAX98357A_COMMON) += snd-soc-sst-bxt-da7219_max98357a.o
+@@ -74,3 +73,10 @@ obj-$(CONFIG_SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH) += snd-soc-skl_hda_dsp.o
+ obj-$(CONFIG_SND_SOC_INTEL_SOF_DA7219_MAX98373_MACH) += snd-soc-sof_da7219_max98373.o
+ obj-$(CONFIG_SND_SOC_INTEL_EHL_RT5660_MACH) += snd-soc-ehl-rt5660.o
+ obj-$(CONFIG_SND_SOC_INTEL_SOUNDWIRE_SOF_MACH) += snd-soc-sof-sdw.o
++
++# common modules
++snd-soc-intel-hda-dsp-common-objs := hda_dsp_common.o
++obj-$(CONFIG_SND_SOC_INTEL_HDA_DSP_COMMON) += snd-soc-intel-hda-dsp-common.o
++
++snd-soc-intel-sof-maxim-common-objs += sof_maxim_common.o
++obj-$(CONFIG_SND_SOC_INTEL_SOF_MAXIM_COMMON) += snd-soc-intel-sof-maxim-common.o
+diff --git a/sound/soc/intel/boards/bxt_da7219_max98357a.c b/sound/soc/intel/boards/bxt_da7219_max98357a.c
+index 9ffef396f8f2d..07ae950b01277 100644
+--- a/sound/soc/intel/boards/bxt_da7219_max98357a.c
++++ b/sound/soc/intel/boards/bxt_da7219_max98357a.c
+@@ -869,3 +869,4 @@ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("platform:bxt_da7219_max98357a");
+ MODULE_ALIAS("platform:glk_da7219_max98357a");
+ MODULE_ALIAS("platform:cml_da7219_max98357a");
++MODULE_IMPORT_NS(SND_SOC_INTEL_HDA_DSP_COMMON);
+diff --git a/sound/soc/intel/boards/bxt_rt298.c b/sound/soc/intel/boards/bxt_rt298.c
+index 0f3157dfa8384..32a776fa0b865 100644
+--- a/sound/soc/intel/boards/bxt_rt298.c
++++ b/sound/soc/intel/boards/bxt_rt298.c
+@@ -667,3 +667,4 @@ MODULE_DESCRIPTION("Intel SST Audio for Broxton");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("platform:bxt_alc298s_i2s");
+ MODULE_ALIAS("platform:glk_alc298s_i2s");
++MODULE_IMPORT_NS(SND_SOC_INTEL_HDA_DSP_COMMON);
+diff --git a/sound/soc/intel/boards/cml_rt1011_rt5682.c b/sound/soc/intel/boards/cml_rt1011_rt5682.c
+index 14813beb33d16..27615acddacdf 100644
+--- a/sound/soc/intel/boards/cml_rt1011_rt5682.c
++++ b/sound/soc/intel/boards/cml_rt1011_rt5682.c
+@@ -594,3 +594,4 @@ MODULE_AUTHOR("Shuming Fan <shumingf@realtek.com>");
+ MODULE_AUTHOR("Mac Chiang <mac.chiang@intel.com>");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("platform:cml_rt1011_rt5682");
++MODULE_IMPORT_NS(SND_SOC_INTEL_HDA_DSP_COMMON);
+diff --git a/sound/soc/intel/boards/ehl_rt5660.c b/sound/soc/intel/boards/ehl_rt5660.c
+index 7c0d4e9154067..b9b72d05b3358 100644
+--- a/sound/soc/intel/boards/ehl_rt5660.c
++++ b/sound/soc/intel/boards/ehl_rt5660.c
+@@ -321,3 +321,4 @@ MODULE_DESCRIPTION("ASoC Intel(R) Elkhartlake + rt5660 Machine driver");
+ MODULE_AUTHOR("libin.yang@intel.com");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("platform:ehl_rt5660");
++MODULE_IMPORT_NS(SND_SOC_INTEL_HDA_DSP_COMMON);
+diff --git a/sound/soc/intel/boards/glk_rt5682_max98357a.c b/sound/soc/intel/boards/glk_rt5682_max98357a.c
+index 62cca511522ea..19e2ff90886a9 100644
+--- a/sound/soc/intel/boards/glk_rt5682_max98357a.c
++++ b/sound/soc/intel/boards/glk_rt5682_max98357a.c
+@@ -642,3 +642,4 @@ MODULE_AUTHOR("Naveen Manohar <naveen.m@intel.com>");
+ MODULE_AUTHOR("Harsha Priya <harshapriya.n@intel.com>");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("platform:glk_rt5682_max98357a");
++MODULE_IMPORT_NS(SND_SOC_INTEL_HDA_DSP_COMMON);
+diff --git a/sound/soc/intel/boards/hda_dsp_common.c b/sound/soc/intel/boards/hda_dsp_common.c
+index 91ad2a0ad1ce1..efdc4bc4bb1f9 100644
+--- a/sound/soc/intel/boards/hda_dsp_common.c
++++ b/sound/soc/intel/boards/hda_dsp_common.c
+@@ -2,6 +2,7 @@
+ //
+ // Copyright(c) 2019 Intel Corporation. All rights reserved.
+ 
++#include <linux/module.h>
+ #include <sound/pcm.h>
+ #include <sound/soc.h>
+ #include <sound/hda_codec.h>
+@@ -82,5 +83,9 @@ int hda_dsp_hdmi_build_controls(struct snd_soc_card *card,
+ 
+ 	return err;
+ }
++EXPORT_SYMBOL_NS(hda_dsp_hdmi_build_controls, SND_SOC_INTEL_HDA_DSP_COMMON);
+ 
+ #endif
++
++MODULE_DESCRIPTION("ASoC Intel HDMI helpers");
++MODULE_LICENSE("GPL");
+diff --git a/sound/soc/intel/boards/skl_hda_dsp_generic.c b/sound/soc/intel/boards/skl_hda_dsp_generic.c
+index bc50eda297ab7..f4b4eeca3e03c 100644
+--- a/sound/soc/intel/boards/skl_hda_dsp_generic.c
++++ b/sound/soc/intel/boards/skl_hda_dsp_generic.c
+@@ -258,3 +258,4 @@ MODULE_DESCRIPTION("SKL/KBL/BXT/APL HDA Generic Machine driver");
+ MODULE_AUTHOR("Rakesh Ughreja <rakesh.a.ughreja@intel.com>");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("platform:skl_hda_dsp_generic");
++MODULE_IMPORT_NS(SND_SOC_INTEL_HDA_DSP_COMMON);
+diff --git a/sound/soc/intel/boards/sof_da7219_max98373.c b/sound/soc/intel/boards/sof_da7219_max98373.c
+index 8d1ad892e86b6..2116d70d1ea8b 100644
+--- a/sound/soc/intel/boards/sof_da7219_max98373.c
++++ b/sound/soc/intel/boards/sof_da7219_max98373.c
+@@ -458,3 +458,4 @@ MODULE_AUTHOR("Yong Zhi <yong.zhi@intel.com>");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("platform:sof_da7219_max98360a");
+ MODULE_ALIAS("platform:sof_da7219_max98373");
++MODULE_IMPORT_NS(SND_SOC_INTEL_HDA_DSP_COMMON);
+diff --git a/sound/soc/intel/boards/sof_maxim_common.c b/sound/soc/intel/boards/sof_maxim_common.c
+index 437d205627539..7c4af6ec58e82 100644
+--- a/sound/soc/intel/boards/sof_maxim_common.c
++++ b/sound/soc/intel/boards/sof_maxim_common.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ //
+ // Copyright(c) 2020 Intel Corporation. All rights reserved.
++#include <linux/module.h>
+ #include <linux/string.h>
+ #include <sound/pcm.h>
+ #include <sound/soc.h>
+@@ -16,6 +17,7 @@ const struct snd_soc_dapm_route max_98373_dapm_routes[] = {
+ 	{ "Left Spk", NULL, "Left BE_OUT" },
+ 	{ "Right Spk", NULL, "Right BE_OUT" },
+ };
++EXPORT_SYMBOL_NS(max_98373_dapm_routes, SND_SOC_INTEL_SOF_MAXIM_COMMON);
+ 
+ static struct snd_soc_codec_conf max_98373_codec_conf[] = {
+ 	{
+@@ -38,9 +40,10 @@ struct snd_soc_dai_link_component max_98373_components[] = {
+ 		.dai_name = MAX_98373_CODEC_DAI,
+ 	},
+ };
++EXPORT_SYMBOL_NS(max_98373_components, SND_SOC_INTEL_SOF_MAXIM_COMMON);
+ 
+-static int max98373_hw_params(struct snd_pcm_substream *substream,
+-			      struct snd_pcm_hw_params *params)
++static int max_98373_hw_params(struct snd_pcm_substream *substream,
++			       struct snd_pcm_hw_params *params)
+ {
+ 	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
+ 	struct snd_soc_dai *codec_dai;
+@@ -59,7 +62,7 @@ static int max98373_hw_params(struct snd_pcm_substream *substream,
+ 	return 0;
+ }
+ 
+-int max98373_trigger(struct snd_pcm_substream *substream, int cmd)
++int max_98373_trigger(struct snd_pcm_substream *substream, int cmd)
+ {
+ 	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
+ 	struct snd_soc_dai *codec_dai;
+@@ -102,13 +105,15 @@ int max98373_trigger(struct snd_pcm_substream *substream, int cmd)
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL_NS(max_98373_trigger, SND_SOC_INTEL_SOF_MAXIM_COMMON);
+ 
+ struct snd_soc_ops max_98373_ops = {
+-	.hw_params = max98373_hw_params,
+-	.trigger = max98373_trigger,
++	.hw_params = max_98373_hw_params,
++	.trigger = max_98373_trigger,
+ };
++EXPORT_SYMBOL_NS(max_98373_ops, SND_SOC_INTEL_SOF_MAXIM_COMMON);
+ 
+-int max98373_spk_codec_init(struct snd_soc_pcm_runtime *rtd)
++int max_98373_spk_codec_init(struct snd_soc_pcm_runtime *rtd)
+ {
+ 	struct snd_soc_card *card = rtd->card;
+ 	int ret;
+@@ -119,9 +124,14 @@ int max98373_spk_codec_init(struct snd_soc_pcm_runtime *rtd)
+ 		dev_err(rtd->dev, "Speaker map addition failed: %d\n", ret);
+ 	return ret;
+ }
++EXPORT_SYMBOL_NS(max_98373_spk_codec_init, SND_SOC_INTEL_SOF_MAXIM_COMMON);
+ 
+-void sof_max98373_codec_conf(struct snd_soc_card *card)
++void max_98373_set_codec_conf(struct snd_soc_card *card)
+ {
+ 	card->codec_conf = max_98373_codec_conf;
+ 	card->num_configs = ARRAY_SIZE(max_98373_codec_conf);
+ }
++EXPORT_SYMBOL_NS(max_98373_set_codec_conf, SND_SOC_INTEL_SOF_MAXIM_COMMON);
++
++MODULE_DESCRIPTION("ASoC Intel SOF Maxim helpers");
++MODULE_LICENSE("GPL");
+diff --git a/sound/soc/intel/boards/sof_maxim_common.h b/sound/soc/intel/boards/sof_maxim_common.h
+index 5240b1c9d379b..566a664d5a636 100644
+--- a/sound/soc/intel/boards/sof_maxim_common.h
++++ b/sound/soc/intel/boards/sof_maxim_common.h
+@@ -20,8 +20,8 @@ extern struct snd_soc_dai_link_component max_98373_components[2];
+ extern struct snd_soc_ops max_98373_ops;
+ extern const struct snd_soc_dapm_route max_98373_dapm_routes[];
+ 
+-int max98373_spk_codec_init(struct snd_soc_pcm_runtime *rtd);
+-void sof_max98373_codec_conf(struct snd_soc_card *card);
+-int max98373_trigger(struct snd_pcm_substream *substream, int cmd);
++int max_98373_spk_codec_init(struct snd_soc_pcm_runtime *rtd);
++void max_98373_set_codec_conf(struct snd_soc_card *card);
++int max_98373_trigger(struct snd_pcm_substream *substream, int cmd);
+ 
+ #endif /* __SOF_MAXIM_COMMON_H */
+diff --git a/sound/soc/intel/boards/sof_pcm512x.c b/sound/soc/intel/boards/sof_pcm512x.c
+index d2b0456236c72..8620d4f38493a 100644
+--- a/sound/soc/intel/boards/sof_pcm512x.c
++++ b/sound/soc/intel/boards/sof_pcm512x.c
+@@ -437,3 +437,4 @@ MODULE_DESCRIPTION("ASoC Intel(R) SOF + PCM512x Machine driver");
+ MODULE_AUTHOR("Pierre-Louis Bossart");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("platform:sof_pcm512x");
++MODULE_IMPORT_NS(SND_SOC_INTEL_HDA_DSP_COMMON);
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index cf1d053733e22..78262c659983e 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -742,7 +742,7 @@ static struct snd_soc_dai_link *sof_card_dai_links_create(struct device *dev,
+ 				SOF_MAX98373_SPEAKER_AMP_PRESENT) {
+ 			links[id].codecs = max_98373_components;
+ 			links[id].num_codecs = ARRAY_SIZE(max_98373_components);
+-			links[id].init = max98373_spk_codec_init;
++			links[id].init = max_98373_spk_codec_init;
+ 			links[id].ops = &max_98373_ops;
+ 			/* feedback stream */
+ 			links[id].dpcm_capture = 1;
+@@ -863,7 +863,7 @@ static int sof_audio_probe(struct platform_device *pdev)
+ 		sof_audio_card_rt5682.num_links++;
+ 
+ 	if (sof_rt5682_quirk & SOF_MAX98373_SPEAKER_AMP_PRESENT)
+-		sof_max98373_codec_conf(&sof_audio_card_rt5682);
++		max_98373_set_codec_conf(&sof_audio_card_rt5682);
+ 	else if (sof_rt5682_quirk & SOF_RT1011_SPEAKER_AMP_PRESENT)
+ 		sof_rt1011_codec_conf(&sof_audio_card_rt5682);
+ 	else if (sof_rt5682_quirk & SOF_RT1015P_SPEAKER_AMP_PRESENT)
+@@ -994,3 +994,5 @@ MODULE_ALIAS("platform:jsl_rt5682_max98360a");
+ MODULE_ALIAS("platform:cml_rt1015_rt5682");
+ MODULE_ALIAS("platform:tgl_rt1011_rt5682");
+ MODULE_ALIAS("platform:jsl_rt5682_rt1015p");
++MODULE_IMPORT_NS(SND_SOC_INTEL_HDA_DSP_COMMON);
++MODULE_IMPORT_NS(SND_SOC_INTEL_SOF_MAXIM_COMMON);
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 5827a16773c90..3ca7e1ab48ab1 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -1318,3 +1318,5 @@ MODULE_AUTHOR("Rander Wang <rander.wang@linux.intel.com>");
+ MODULE_AUTHOR("Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("platform:sof_sdw");
++MODULE_IMPORT_NS(SND_SOC_INTEL_HDA_DSP_COMMON);
++MODULE_IMPORT_NS(SND_SOC_INTEL_SOF_MAXIM_COMMON);
+diff --git a/sound/soc/intel/boards/sof_sdw_max98373.c b/sound/soc/intel/boards/sof_sdw_max98373.c
+index cfdf970c5800f..25daef910aee1 100644
+--- a/sound/soc/intel/boards/sof_sdw_max98373.c
++++ b/sound/soc/intel/boards/sof_sdw_max98373.c
+@@ -55,43 +55,68 @@ static int spk_init(struct snd_soc_pcm_runtime *rtd)
+ 	return ret;
+ }
+ 
+-static int max98373_sdw_trigger(struct snd_pcm_substream *substream, int cmd)
++static int mx8373_enable_spk_pin(struct snd_pcm_substream *substream, bool enable)
+ {
++	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
++	struct snd_soc_dai *codec_dai;
++	struct snd_soc_dai *cpu_dai;
+ 	int ret;
++	int j;
+ 
+-	switch (cmd) {
+-	case SNDRV_PCM_TRIGGER_START:
+-	case SNDRV_PCM_TRIGGER_RESUME:
+-	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+-		/* enable max98373 first */
+-		ret = max98373_trigger(substream, cmd);
+-		if (ret < 0)
+-			break;
+-
+-		ret = sdw_trigger(substream, cmd);
+-		break;
+-	case SNDRV_PCM_TRIGGER_STOP:
+-	case SNDRV_PCM_TRIGGER_SUSPEND:
+-	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+-		ret = sdw_trigger(substream, cmd);
+-		if (ret < 0)
+-			break;
+-
+-		ret = max98373_trigger(substream, cmd);
+-		break;
+-	default:
+-		ret = -EINVAL;
+-		break;
++	/* set spk pin by playback only */
++	if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
++		return 0;
++
++	cpu_dai = asoc_rtd_to_cpu(rtd, 0);
++	for_each_rtd_codec_dais(rtd, j, codec_dai) {
++		struct snd_soc_dapm_context *dapm =
++				snd_soc_component_get_dapm(cpu_dai->component);
++		char pin_name[16];
++
++		snprintf(pin_name, ARRAY_SIZE(pin_name), "%s Spk",
++			 codec_dai->component->name_prefix);
++
++		if (enable)
++			ret = snd_soc_dapm_enable_pin(dapm, pin_name);
++		else
++			ret = snd_soc_dapm_disable_pin(dapm, pin_name);
++
++		if (!ret)
++			snd_soc_dapm_sync(dapm);
+ 	}
+ 
+-	return ret;
++	return 0;
++}
++
++static int mx8373_sdw_prepare(struct snd_pcm_substream *substream)
++{
++	int ret = 0;
++
++	/* according to soc_pcm_prepare dai link prepare is called first */
++	ret = sdw_prepare(substream);
++	if (ret < 0)
++		return ret;
++
++	return mx8373_enable_spk_pin(substream, true);
++}
++
++static int mx8373_sdw_hw_free(struct snd_pcm_substream *substream)
++{
++	int ret = 0;
++
++	/* according to soc_pcm_hw_free dai link free is called first */
++	ret = sdw_hw_free(substream);
++	if (ret < 0)
++		return ret;
++
++	return mx8373_enable_spk_pin(substream, false);
+ }
+ 
+ static const struct snd_soc_ops max_98373_sdw_ops = {
+ 	.startup = sdw_startup,
+-	.prepare = sdw_prepare,
+-	.trigger = max98373_sdw_trigger,
+-	.hw_free = sdw_hw_free,
++	.prepare = mx8373_sdw_prepare,
++	.trigger = sdw_trigger,
++	.hw_free = mx8373_sdw_hw_free,
+ 	.shutdown = sdw_shutdown,
+ };
+ 
+diff --git a/sound/soc/ti/j721e-evm.c b/sound/soc/ti/j721e-evm.c
+index a7c0484d44ec7..265bbc5a2f96a 100644
+--- a/sound/soc/ti/j721e-evm.c
++++ b/sound/soc/ti/j721e-evm.c
+@@ -197,7 +197,7 @@ static int j721e_configure_refclk(struct j721e_priv *priv,
+ 		return ret;
+ 	}
+ 
+-	if (priv->hsdiv_rates[domain->parent_clk_id] != scki) {
++	if (domain->parent_clk_id == -1 || priv->hsdiv_rates[domain->parent_clk_id] != scki) {
+ 		dev_dbg(priv->dev,
+ 			"%s configuration for %u Hz: %s, %dxFS (SCKI: %u Hz)\n",
+ 			audio_domain == J721E_AUDIO_DOMAIN_CPB ? "CPB" : "IVI",
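
The new `== -1` test short-circuits before hsdiv_rates[] is indexed, so the
"no parent yet" sentinel can never be used as an array subscript. The same
guard in isolation (the names here are illustrative, not the driver's):

	#include <stdio.h>

	#define NO_PARENT (-1)

	static const unsigned int rates[2] = { 19200000u, 24576000u };

	static int needs_reconfigure(int parent_clk_id, unsigned int scki)
	{
		/* Evaluate the sentinel first: never index rates[] with -1. */
		return parent_clk_id == NO_PARENT || rates[parent_clk_id] != scki;
	}

	int main(void)
	{
		printf("%d\n", needs_reconfigure(NO_PARENT, 19200000u)); /* 1 */
		printf("%d\n", needs_reconfigure(0, 19200000u));         /* 0 */
		return 0;
	}
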
+@@ -278,23 +278,29 @@ static int j721e_audio_startup(struct snd_pcm_substream *substream)
+ 					  j721e_rule_rate, &priv->rate_range,
+ 					  SNDRV_PCM_HW_PARAM_RATE, -1);
+ 
+-	mutex_unlock(&priv->mutex);
+ 
+ 	if (ret)
+-		return ret;
++		goto out;
+ 
+ 	/* Reset TDM slots to 32 */
+ 	ret = snd_soc_dai_set_tdm_slot(cpu_dai, 0x3, 0x3, 2, 32);
+ 	if (ret && ret != -ENOTSUPP)
+-		return ret;
++		goto out;
+ 
+ 	for_each_rtd_codec_dais(rtd, i, codec_dai) {
+ 		ret = snd_soc_dai_set_tdm_slot(codec_dai, 0x3, 0x3, 2, 32);
+ 		if (ret && ret != -ENOTSUPP)
+-			return ret;
++			goto out;
+ 	}
+ 
+-	return 0;
++	if (ret == -ENOTSUPP)
++		ret = 0;
++out:
++	if (ret)
++		domain->active--;
++	mutex_unlock(&priv->mutex);
++
++	return ret;
+ }
+ 
+ static int j721e_audio_hw_params(struct snd_pcm_substream *substream,



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-10 12:13 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-10 12:13 UTC (permalink / raw
  To: gentoo-commits

commit:     9e6a2ebc2c0534ddb6855d999bbf6cd98a7e3370
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug  9 23:18:23 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug 10 12:13:32 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9e6a2ebc

Fix GCC_PLUGINS depends

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 429e9d4..864f86a 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-08-03 06:44:27.767516067 -0400
-+++ b/distro/Kconfig	2021-08-03 18:43:33.303563865 -0400
-@@ -0,0 +1,268 @@
+--- /dev/null	2021-08-09 07:18:54.945580285 -0400
++++ b/distro/Kconfig	2021-08-09 19:15:34.418191114 -0400
+@@ -0,0 +1,267 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -170,7 +170,7 @@
 +	bool "Kernel Self Protection Project"
 +	depends on GENTOO_LINUX
 +	help
-+  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
++		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
 +		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
 +		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
 +		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_COMMON and search for 
@@ -183,7 +183,7 @@
 +config GENTOO_KERNEL_SELF_PROTECTION_COMMON
 +	bool "Enable Kernel Self Protection Project Recommendations"
 +
-+	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL
++	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL && GCC_PLUGINS
 +
 +	select BUG
 +	select STRICT_KERNEL_RWX
@@ -216,7 +216,6 @@
 +	select FORTIFY_SOURCE
 +	select SECURITY_DMESG_RESTRICT
 +	select PANIC_ON_OOPS
-+	select CONFIG_GCC_PLUGINS
 +	select GCC_PLUGIN_LATENT_ENTROPY
 +	select GCC_PLUGIN_STRUCTLEAK
 +	select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-10 12:13 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-10 12:13 UTC (permalink / raw
  To: gentoo-commits

commit:     99ef7f8a1cf389dd3c0cbf2fd7beb6c3404a0cea
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug  3 22:49:56 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug 10 12:13:01 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=99ef7f8a

Add CONFIG_RELOCATABLE when selecting RANDOMIZE_BASE

Redo menus to make them more user-friendly

Bug: https://bugs.gentoo.org/806300

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 51 ++++++++++++++++++++++------------------
 1 file changed, 28 insertions(+), 23 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index fa005e6..429e9d4 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-07-04 10:53:51.006624416 -0400
-+++ b/distro/Kconfig	2021-07-04 11:07:33.534248860 -0400
-@@ -0,0 +1,263 @@
+--- /dev/null	2021-08-03 06:44:27.767516067 -0400
++++ b/distro/Kconfig	2021-08-03 18:43:33.303563865 -0400
+@@ -0,0 +1,268 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -166,11 +166,22 @@
 +
 +endmenu
 +
-+menu "Enable Kernel Self Protection Project Recommendations"
-+	visible if GENTOO_LINUX
++menuconfig GENTOO_KERNEL_SELF_PROTECTION
++	bool "Kernel Self Protection Project"
++	depends on GENTOO_LINUX
++	help
++  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
++		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
++		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
++		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_COMMON and search for 
++		GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for dependency information on your 
++		specific architecture.
++		Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
++		for X86_64
 +
-+config GENTOO_KERNEL_SELF_PROTECTION
-+	bool "Architecture Independant Kernel Self Protection Project Recommendations"
++if GENTOO_KERNEL_SELF_PROTECTION
++config GENTOO_KERNEL_SELF_PROTECTION_COMMON
++	bool "Enable Kernel Self Protection Project Recommendations"
 +
 +	depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL
 +
@@ -214,26 +225,21 @@
 +	select GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
 +
 +	help
-+  		Recommended Kernel settings based on the suggestions from the Kernel Self Protection Project
-+		See: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
-+		Note, there may be additional settings for which the CONFIG_ setting is invisible in menuconfig due 
-+		to unmet dependencies. Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for 
-+		dependency information on your specific architecture.
-+		Note 2: Please see the URL above for numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 
-+		for X86_64
-+
-+menu "Architecture Specific Self Protection Project Recommendations"
++		Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for dependency 
++		information on your specific architecture.  Note 2: Please see the URL above for 
++		numeric settings, e.g. CONFIG_DEFAULT_MMAP_MIN_ADDR=65536 for X86_64
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_X86_64
-+	bool "X86_64 KSPP Settings"
++	bool "X86_64 KSPP Settings" if GENTOO_KERNEL_SELF_PROTECTION_COMMON
 +
-+	depends on !X86_MSR && X86_64
++	depends on !X86_MSR && X86_64 && GENTOO_KERNEL_SELF_PROTECTION
 +	default n
 +	
 +	select RANDOMIZE_BASE
 +	select RANDOMIZE_MEMORY
++	select RELOCATABLE
 +	select LEGACY_VSYSCALL_NONE
-+ select PAGE_TABLE_ISOLATION
++ 	select PAGE_TABLE_ISOLATION
 +
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_ARM64
@@ -243,6 +249,7 @@
 +	default n
 +
 +	select RANDOMIZE_BASE
++	select RELOCATABLE
 +	select ARM64_SW_TTBR0_PAN
 +	select CONFIG_UNMAP_KERNEL_AT_EL0
 +
@@ -255,6 +262,7 @@
 +	select HIGHMEM64G
 +	select X86_PAE
 +	select RANDOMIZE_BASE
++	select RELOCATABLE
 +	select PAGE_TABLE_ISOLATION
 +
 +config GENTOO_KERNEL_SELF_PROTECTION_ARM
@@ -267,10 +275,7 @@
 +	select STRICT_MEMORY_RWX
 +	select CPU_SW_DOMAIN_PAN
 +
-+endmenu
-+
-+endmenu
-+
++endif
 +endmenu
 diff --git a/security/Kconfig b/security/Kconfig
 index 7561f6f99..01f0bf73f 100644



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-12 11:54 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-12 11:54 UTC (permalink / raw
  To: gentoo-commits

commit:     4f6147ba258215799948704b0f8626c7b5611a11
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 12 11:54:36 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 12 11:54:36 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4f6147ba

Linux patch 5.13.10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1009_linux-5.13.10.patch | 6040 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6044 insertions(+)

diff --git a/0000_README b/0000_README
index fd05e40..deff891 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-5.13.9.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.9
 
+Patch:  1009_linux-5.13.10.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1009_linux-5.13.10.patch b/1009_linux-5.13.10.patch
new file mode 100644
index 0000000..612fced
--- /dev/null
+++ b/1009_linux-5.13.10.patch
@@ -0,0 +1,6040 @@
+diff --git a/Makefile b/Makefile
+index 9d810e13a83f4..4e9f877f513f9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+@@ -1366,6 +1366,15 @@ scripts_unifdef: scripts_basic
+ 	$(Q)$(MAKE) $(build)=scripts scripts/unifdef
+ 
+ # ---------------------------------------------------------------------------
++# Install
++
++# Many distributions have the custom install script, /sbin/installkernel.
++# If DKMS is installed, 'make install' will eventually recurse back
++# to this Makefile to build and install external modules.
++# Cancel sub_make_done so that options such as M=, V=, etc. are parsed.
++
++install: sub_make_done :=
++
+ # Kernel selftest
+ 
+ PHONY += kselftest
+diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
+index 4b2575f936d46..cb64e4797d2a8 100644
+--- a/arch/alpha/kernel/smp.c
++++ b/arch/alpha/kernel/smp.c
+@@ -582,7 +582,7 @@ void
+ smp_send_stop(void)
+ {
+ 	cpumask_t to_whom;
+-	cpumask_copy(&to_whom, cpu_possible_mask);
++	cpumask_copy(&to_whom, cpu_online_mask);
+ 	cpumask_clear_cpu(smp_processor_id(), &to_whom);
+ #ifdef DEBUG_IPI_MSG
+ 	if (hard_smp_processor_id() != boot_cpu_id)
+diff --git a/arch/arm/boot/dts/am437x-l4.dtsi b/arch/arm/boot/dts/am437x-l4.dtsi
+index a6f19ae7d3e6b..f73ecec1995a3 100644
+--- a/arch/arm/boot/dts/am437x-l4.dtsi
++++ b/arch/arm/boot/dts/am437x-l4.dtsi
+@@ -1595,7 +1595,7 @@
+ 				compatible = "ti,am4372-d_can", "ti,am3352-d_can";
+ 				reg = <0x0 0x2000>;
+ 				clocks = <&dcan1_fck>;
+-				clock-name = "fck";
++				clock-names = "fck";
+ 				syscon-raminit = <&scm_conf 0x644 1>;
+ 				interrupts = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>;
+ 				status = "disabled";
+diff --git a/arch/arm/boot/dts/imx53-m53menlo.dts b/arch/arm/boot/dts/imx53-m53menlo.dts
+index f98691ae4415b..d3082b9774e40 100644
+--- a/arch/arm/boot/dts/imx53-m53menlo.dts
++++ b/arch/arm/boot/dts/imx53-m53menlo.dts
+@@ -388,13 +388,13 @@
+ 
+ 		pinctrl_power_button: powerbutgrp {
+ 			fsl,pins = <
+-				MX53_PAD_SD2_DATA2__GPIO1_13		0x1e4
++				MX53_PAD_SD2_DATA0__GPIO1_15		0x1e4
+ 			>;
+ 		};
+ 
+ 		pinctrl_power_out: poweroutgrp {
+ 			fsl,pins = <
+-				MX53_PAD_SD2_DATA0__GPIO1_15		0x1e4
++				MX53_PAD_SD2_DATA2__GPIO1_13		0x1e4
+ 			>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-sr-som.dtsi b/arch/arm/boot/dts/imx6qdl-sr-som.dtsi
+index 0ad8ccde0cf87..f86efd0ccc404 100644
+--- a/arch/arm/boot/dts/imx6qdl-sr-som.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-sr-som.dtsi
+@@ -54,7 +54,13 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_microsom_enet_ar8035>;
+ 	phy-mode = "rgmii-id";
+-	phy-reset-duration = <2>;
++
++	/*
++	 * The PHY seems to require a long-enough reset duration to avoid
++	 * some rare issues where the PHY gets stuck in an inconsistent and
++	 * non-functional state at boot-up. 10ms proved to be fine.
++	 */
++	phy-reset-duration = <10>;
+ 	phy-reset-gpios = <&gpio4 15 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ 
+diff --git a/arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi b/arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi
+index a0545431b3dc3..9f1e38282bee7 100644
+--- a/arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi
++++ b/arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi
+@@ -43,6 +43,7 @@
+ 	assigned-clock-rates = <0>, <198000000>;
+ 	cap-power-off-card;
+ 	keep-power-in-suspend;
++	max-frequency = <25000000>;
+ 	mmc-pwrseq = <&wifi_pwrseq>;
+ 	no-1-8-v;
+ 	non-removable;
+diff --git a/arch/arm/boot/dts/omap5-board-common.dtsi b/arch/arm/boot/dts/omap5-board-common.dtsi
+index d8f13626cfd1b..3a8f102314758 100644
+--- a/arch/arm/boot/dts/omap5-board-common.dtsi
++++ b/arch/arm/boot/dts/omap5-board-common.dtsi
+@@ -30,14 +30,6 @@
+ 		regulator-max-microvolt = <5000000>;
+ 	};
+ 
+-	vdds_1v8_main: fixedregulator-vdds_1v8_main {
+-		compatible = "regulator-fixed";
+-		regulator-name = "vdds_1v8_main";
+-		vin-supply = <&smps7_reg>;
+-		regulator-min-microvolt = <1800000>;
+-		regulator-max-microvolt = <1800000>;
+-	};
+-
+ 	vmmcsd_fixed: fixedregulator-mmcsd {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vmmcsd_fixed";
+@@ -487,6 +479,7 @@
+ 					regulator-boot-on;
+ 				};
+ 
++				vdds_1v8_main:
+ 				smps7_reg: smps7 {
+ 					/* VDDS_1v8_OMAP over VDDS_1v8_MAIN */
+ 					regulator-name = "smps7";
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+index c5ea08fec535f..6cf1c8b4c6e28 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -37,7 +37,7 @@
+ 		poll-interval = <20>;
+ 
+ 		/*
+-		 * The EXTi IRQ line 3 is shared with touchscreen and ethernet,
++		 * The EXTi IRQ line 3 is shared with ethernet,
+ 		 * so mark this as polled GPIO key.
+ 		 */
+ 		button-0 {
+@@ -46,6 +46,16 @@
+ 			gpios = <&gpiof 3 GPIO_ACTIVE_LOW>;
+ 		};
+ 
++		/*
++		 * The EXTi IRQ line 6 is shared with touchscreen,
++		 * so mark this as polled GPIO key.
++		 */
++		button-1 {
++			label = "TA2-GPIO-B";
++			linux,code = <KEY_B>;
++			gpios = <&gpiod 6 GPIO_ACTIVE_LOW>;
++		};
++
+ 		/*
+ 		 * The EXTi IRQ line 0 is shared with PMIC,
+ 		 * so mark this as polled GPIO key.
+@@ -60,13 +70,6 @@
+ 	gpio-keys {
+ 		compatible = "gpio-keys";
+ 
+-		button-1 {
+-			label = "TA2-GPIO-B";
+-			linux,code = <KEY_B>;
+-			gpios = <&gpiod 6 GPIO_ACTIVE_LOW>;
+-			wakeup-source;
+-		};
+-
+ 		button-3 {
+ 			label = "TA4-GPIO-D";
+ 			linux,code = <KEY_D>;
+@@ -82,6 +85,7 @@
+ 			label = "green:led5";
+ 			gpios = <&gpioc 6 GPIO_ACTIVE_HIGH>;
+ 			default-state = "off";
++			status = "disabled";
+ 		};
+ 
+ 		led-1 {
+@@ -185,8 +189,8 @@
+ 	touchscreen@38 {
+ 		compatible = "edt,edt-ft5406";
+ 		reg = <0x38>;
+-		interrupt-parent = <&gpiog>;
+-		interrupts = <2 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */
++		interrupt-parent = <&gpioc>;
++		interrupts = <6 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+index 2af0a67526747..8c41f819f7769 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+@@ -12,6 +12,8 @@
+ 	aliases {
+ 		ethernet0 = &ethernet0;
+ 		ethernet1 = &ksz8851;
++		rtc0 = &hwrtc;
++		rtc1 = &rtc;
+ 	};
+ 
+ 	memory@c0000000 {
+@@ -138,6 +140,7 @@
+ 			reset-gpios = <&gpioh 3 GPIO_ACTIVE_LOW>;
+ 			reset-assert-us = <500>;
+ 			reset-deassert-us = <500>;
++			smsc,disable-energy-detect;
+ 			interrupt-parent = <&gpioi>;
+ 			interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
+ 		};
+@@ -248,7 +251,7 @@
+ 	/delete-property/dmas;
+ 	/delete-property/dma-names;
+ 
+-	rtc@32 {
++	hwrtc: rtc@32 {
+ 		compatible = "microcrystal,rv8803";
+ 		reg = <0x32>;
+ 	};
+diff --git a/arch/arm/mach-imx/mmdc.c b/arch/arm/mach-imx/mmdc.c
+index 0dfd0ae7a63dd..af12668d0bf51 100644
+--- a/arch/arm/mach-imx/mmdc.c
++++ b/arch/arm/mach-imx/mmdc.c
+@@ -103,6 +103,7 @@ struct mmdc_pmu {
+ 	struct perf_event *mmdc_events[MMDC_NUM_COUNTERS];
+ 	struct hlist_node node;
+ 	struct fsl_mmdc_devtype_data *devtype_data;
++	struct clk *mmdc_ipg_clk;
+ };
+ 
+ /*
+@@ -462,11 +463,14 @@ static int imx_mmdc_remove(struct platform_device *pdev)
+ 
+ 	cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
+ 	perf_pmu_unregister(&pmu_mmdc->pmu);
++	iounmap(pmu_mmdc->mmdc_base);
++	clk_disable_unprepare(pmu_mmdc->mmdc_ipg_clk);
+ 	kfree(pmu_mmdc);
+ 	return 0;
+ }
+ 
+-static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_base)
++static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_base,
++			      struct clk *mmdc_ipg_clk)
+ {
+ 	struct mmdc_pmu *pmu_mmdc;
+ 	char *name;
+@@ -494,6 +498,7 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
+ 	}
+ 
+ 	mmdc_num = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev);
++	pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
+ 	if (mmdc_num == 0)
+ 		name = "mmdc";
+ 	else
+@@ -529,7 +534,7 @@ pmu_free:
+ 
+ #else
+ #define imx_mmdc_remove NULL
+-#define imx_mmdc_perf_init(pdev, mmdc_base) 0
++#define imx_mmdc_perf_init(pdev, mmdc_base, mmdc_ipg_clk) 0
+ #endif
+ 
+ static int imx_mmdc_probe(struct platform_device *pdev)
+@@ -567,7 +572,13 @@ static int imx_mmdc_probe(struct platform_device *pdev)
+ 	val &= ~(1 << BP_MMDC_MAPSR_PSD);
+ 	writel_relaxed(val, reg);
+ 
+-	return imx_mmdc_perf_init(pdev, mmdc_base);
++	err = imx_mmdc_perf_init(pdev, mmdc_base, mmdc_ipg_clk);
++	if (err) {
++		iounmap(mmdc_base);
++		clk_disable_unprepare(mmdc_ipg_clk);
++	}
++
++	return err;
+ }
+ 
+ int imx_mmdc_get_ddr_type(void)
+diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
+index 65934b2924fb5..12b26e04686fa 100644
+--- a/arch/arm/mach-omap2/omap_hwmod.c
++++ b/arch/arm/mach-omap2/omap_hwmod.c
+@@ -3776,6 +3776,7 @@ struct powerdomain *omap_hwmod_get_pwrdm(struct omap_hwmod *oh)
+ 	struct omap_hwmod_ocp_if *oi;
+ 	struct clockdomain *clkdm;
+ 	struct clk_hw_omap *clk;
++	struct clk_hw *hw;
+ 
+ 	if (!oh)
+ 		return NULL;
+@@ -3792,7 +3793,14 @@ struct powerdomain *omap_hwmod_get_pwrdm(struct omap_hwmod *oh)
+ 		c = oi->_clk;
+ 	}
+ 
+-	clk = to_clk_hw_omap(__clk_get_hw(c));
++	hw = __clk_get_hw(c);
++	if (!hw)
++		return NULL;
++
++	clk = to_clk_hw_omap(hw);
++	if (!clk)
++		return NULL;
++
+ 	clkdm = clk->clkdm;
+ 	if (!clkdm)
+ 		return NULL;
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts b/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts
+index dd764b720fb0a..f6a79c8080d14 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts
+@@ -54,6 +54,7 @@
+ 
+ &mscc_felix_port0 {
+ 	label = "swp0";
++	managed = "in-band-status";
+ 	phy-handle = <&phy0>;
+ 	phy-mode = "sgmii";
+ 	status = "okay";
+@@ -61,6 +62,7 @@
+ 
+ &mscc_felix_port1 {
+ 	label = "swp1";
++	managed = "in-band-status";
+ 	phy-handle = <&phy1>;
+ 	phy-mode = "sgmii";
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+index a30249ebffa8c..a94cbd6dcce66 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
+@@ -66,7 +66,7 @@
+ 		};
+ 	};
+ 
+-	sysclk: clock-sysclk {
++	sysclk: sysclk {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <100000000>;
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index ce2bcddf396f8..a05b1ab2dd12c 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -19,6 +19,8 @@
+ 	aliases {
+ 		spi0 = &spi0;
+ 		ethernet1 = &eth1;
++		mmc0 = &sdhci0;
++		mmc1 = &sdhci1;
+ 	};
+ 
+ 	chosen {
+@@ -119,6 +121,7 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&i2c1_pins>;
+ 	clock-frequency = <100000>;
++	/delete-property/ mrvl,i2c-fast-mode;
+ 	status = "okay";
+ 
+ 	rtc@6f {
+diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
+index e58bca832dfff..41b332c054ab8 100644
+--- a/arch/arm64/include/asm/ptrace.h
++++ b/arch/arm64/include/asm/ptrace.h
+@@ -320,7 +320,17 @@ static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
+ 
+ static inline unsigned long regs_return_value(struct pt_regs *regs)
+ {
+-	return regs->regs[0];
++	unsigned long val = regs->regs[0];
++
++	/*
++	 * Audit currently uses regs_return_value() instead of
++	 * syscall_get_return_value(). Apply the same sign-extension here until
++	 * audit is updated to use syscall_get_return_value().
++	 */
++	if (compat_user_mode(regs))
++		val = sign_extend64(val, 31);
++
++	return val;
+ }
+ 
+ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
+diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
+index cfc0672013f67..03e20895453a7 100644
+--- a/arch/arm64/include/asm/syscall.h
++++ b/arch/arm64/include/asm/syscall.h
+@@ -29,22 +29,23 @@ static inline void syscall_rollback(struct task_struct *task,
+ 	regs->regs[0] = regs->orig_x0;
+ }
+ 
+-
+-static inline long syscall_get_error(struct task_struct *task,
+-				     struct pt_regs *regs)
++static inline long syscall_get_return_value(struct task_struct *task,
++					    struct pt_regs *regs)
+ {
+-	unsigned long error = regs->regs[0];
++	unsigned long val = regs->regs[0];
+ 
+ 	if (is_compat_thread(task_thread_info(task)))
+-		error = sign_extend64(error, 31);
++		val = sign_extend64(val, 31);
+ 
+-	return IS_ERR_VALUE(error) ? error : 0;
++	return val;
+ }
+ 
+-static inline long syscall_get_return_value(struct task_struct *task,
+-					    struct pt_regs *regs)
++static inline long syscall_get_error(struct task_struct *task,
++				     struct pt_regs *regs)
+ {
+-	return regs->regs[0];
++	unsigned long error = syscall_get_return_value(task, regs);
++
++	return IS_ERR_VALUE(error) ? error : 0;
+ }
+ 
+ static inline void syscall_set_return_value(struct task_struct *task,
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index eb2f73939b7bb..af3b64ca482d0 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -1862,7 +1862,7 @@ void syscall_trace_exit(struct pt_regs *regs)
+ 	audit_syscall_exit(regs);
+ 
+ 	if (flags & _TIF_SYSCALL_TRACEPOINT)
+-		trace_sys_exit(regs, regs_return_value(regs));
++		trace_sys_exit(regs, syscall_get_return_value(current, regs));
+ 
+ 	if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
+ 		tracehook_report_syscall(regs, PTRACE_SYSCALL_EXIT);
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index 6237486ff6bb7..22899c86711aa 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -29,6 +29,7 @@
+ #include <asm/unistd.h>
+ #include <asm/fpsimd.h>
+ #include <asm/ptrace.h>
++#include <asm/syscall.h>
+ #include <asm/signal32.h>
+ #include <asm/traps.h>
+ #include <asm/vdso.h>
+@@ -890,7 +891,7 @@ static void do_signal(struct pt_regs *regs)
+ 		     retval == -ERESTART_RESTARTBLOCK ||
+ 		     (retval == -ERESTARTSYS &&
+ 		      !(ksig.ka.sa.sa_flags & SA_RESTART)))) {
+-			regs->regs[0] = -EINTR;
++			syscall_set_return_value(current, regs, -EINTR, 0);
+ 			regs->pc = continue_addr;
+ 		}
+ 
+diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
+index de07147a79260..7ae41b35c923e 100644
+--- a/arch/arm64/kernel/stacktrace.c
++++ b/arch/arm64/kernel/stacktrace.c
+@@ -220,7 +220,7 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
+ 
+ #ifdef CONFIG_STACKTRACE
+ 
+-noinline void arch_stack_walk(stack_trace_consume_fn consume_entry,
++noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
+ 			      void *cookie, struct task_struct *task,
+ 			      struct pt_regs *regs)
+ {
+diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
+index 263d6c1a525f3..50a0f1a38e849 100644
+--- a/arch/arm64/kernel/syscall.c
++++ b/arch/arm64/kernel/syscall.c
+@@ -54,10 +54,7 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
+ 		ret = do_ni_syscall(regs, scno);
+ 	}
+ 
+-	if (is_compat_task())
+-		ret = lower_32_bits(ret);
+-
+-	regs->regs[0] = ret;
++	syscall_set_return_value(current, regs, 0, ret);
+ 
+ 	/*
+ 	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
+@@ -115,7 +112,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+ 		 * syscall. do_notify_resume() will send a signal to userspace
+ 		 * before the syscall is restarted.
+ 		 */
+-		regs->regs[0] = -ERESTARTNOINTR;
++		syscall_set_return_value(current, regs, -ERESTARTNOINTR, 0);
+ 		return;
+ 	}
+ 
+@@ -136,7 +133,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+ 		 * anyway.
+ 		 */
+ 		if (scno == NO_SYSCALL)
+-			regs->regs[0] = -ENOSYS;
++			syscall_set_return_value(current, regs, -ENOSYS, 0);
+ 		scno = syscall_trace_enter(regs);
+ 		if (scno == NO_SYSCALL)
+ 			goto trace_exit;
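The arm64 hunks above funnel syscall return values through a single
sign-extension point, so a compat (AArch32) task's negative 32-bit return
value is still recognized as an error once it sits in a 64-bit register. A
minimal user-space sketch of that sign_extend64() behaviour (illustrative
only; the sample value is made up, not part of the patch):

#include <stdio.h>
#include <stdint.h>

/* mirror of the kernel helper: treat bit 'index' as the sign bit */
static inline int64_t sign_extend64(uint64_t value, int index)
{
	int shift = 63 - index;

	return (int64_t)(value << shift) >> shift;
}

int main(void)
{
	/* a 32-bit -22 (-EINVAL) as seen in the low half of x0 */
	uint64_t raw = 0xffffffeaULL;

	printf("raw=%#llx extended=%lld\n",
	       (unsigned long long)raw, (long long)sign_extend64(raw, 31));
	return 0;
}

Without the extension, an IS_ERR_VALUE()-style check on the raw 0xffffffea
would not see a negative value, which is exactly the compat case these
hunks address.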
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index 258234c35a096..674f68d16a73f 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -321,7 +321,7 @@ KBUILD_LDFLAGS		+= -m $(ld-emul)
+ 
+ ifdef CONFIG_MIPS
+ CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
+-	egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \
++	egrep -vw '__GNUC_(MINOR_|PATCHLEVEL_)?_' | \
+ 	sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g')
+ endif
+ 
+diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
+index d0cf997b4ba84..139b4050259fa 100644
+--- a/arch/mips/include/asm/pgalloc.h
++++ b/arch/mips/include/asm/pgalloc.h
+@@ -59,15 +59,20 @@ do {							\
+ 
+ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
+ {
+-	pmd_t *pmd = NULL;
++	pmd_t *pmd;
+ 	struct page *pg;
+ 
+-	pg = alloc_pages(GFP_KERNEL | __GFP_ACCOUNT, PMD_ORDER);
+-	if (pg) {
+-		pgtable_pmd_page_ctor(pg);
+-		pmd = (pmd_t *)page_address(pg);
+-		pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
++	pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_ORDER);
++	if (!pg)
++		return NULL;
++
++	if (!pgtable_pmd_page_ctor(pg)) {
++		__free_pages(pg, PMD_ORDER);
++		return NULL;
+ 	}
++
++	pmd = (pmd_t *)page_address(pg);
++	pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
+ 	return pmd;
+ }
+ 
+diff --git a/arch/mips/mti-malta/malta-platform.c b/arch/mips/mti-malta/malta-platform.c
+index 11e9527c6e441..62ffac500eb52 100644
+--- a/arch/mips/mti-malta/malta-platform.c
++++ b/arch/mips/mti-malta/malta-platform.c
+@@ -47,7 +47,8 @@ static struct plat_serial8250_port uart8250_data[] = {
+ 		.mapbase	= 0x1f000900,	/* The CBUS UART */
+ 		.irq		= MIPS_CPU_IRQ_BASE + MIPSCPU_INT_MB2,
+ 		.uartclk	= 3686400,	/* Twice the usual clk! */
+-		.iotype		= UPIO_MEM32,
++		.iotype		= IS_ENABLED(CONFIG_CPU_BIG_ENDIAN) ?
++				  UPIO_MEM32BE : UPIO_MEM32,
+ 		.flags		= CBUS_UART_FLAGS,
+ 		.regshift	= 3,
+ 	},
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 18ec0f9bb8d5c..3c3647ac33cb8 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -489,6 +489,7 @@ config CC_HAVE_STACKPROTECTOR_TLS
+ 
+ config STACKPROTECTOR_PER_TASK
+ 	def_bool y
++	depends on !GCC_PLUGIN_RANDSTRUCT
+ 	depends on STACKPROTECTOR && CC_HAVE_STACKPROTECTOR_TLS
+ 
+ config PHYS_RAM_BASE_FIXED
+diff --git a/arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts b/arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts
+index b1c3c596578f1..2e4ea84f27e77 100644
+--- a/arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts
++++ b/arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts
+@@ -24,7 +24,7 @@
+ 
+ 	memory@80000000 {
+ 		device_type = "memory";
+-		reg = <0x0 0x80000000 0x2 0x00000000>;
++		reg = <0x0 0x80000000 0x4 0x00000000>;
+ 	};
+ 
+ 	soc {
+diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
+index bde85fc53357f..7bc8af75933a7 100644
+--- a/arch/riscv/kernel/stacktrace.c
++++ b/arch/riscv/kernel/stacktrace.c
+@@ -27,7 +27,7 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
+ 		fp = frame_pointer(regs);
+ 		sp = user_stack_pointer(regs);
+ 		pc = instruction_pointer(regs);
+-	} else if (task == current) {
++	} else if (task == NULL || task == current) {
+ 		fp = (unsigned long)__builtin_frame_address(1);
+ 		sp = (unsigned long)__builtin_frame_address(0);
+ 		pc = (unsigned long)__builtin_return_address(0);
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 2bf1c7ea2758d..2938c902ffbe4 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -1115,9 +1115,10 @@ void x86_pmu_stop(struct perf_event *event, int flags);
+ 
+ static inline void x86_pmu_disable_event(struct perf_event *event)
+ {
++	u64 disable_mask = __this_cpu_read(cpu_hw_events.perf_ctr_virt_mask);
+ 	struct hw_perf_event *hwc = &event->hw;
+ 
+-	wrmsrl(hwc->config_base, hwc->config);
++	wrmsrl(hwc->config_base, hwc->config & ~disable_mask);
+ 
+ 	if (is_counter_pair(hwc))
+ 		wrmsrl(x86_pmu_config_addr(hwc->idx + 1), 0);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 716266ab177f4..ed469ddf20741 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -1546,7 +1546,7 @@ static int is_empty_shadow_page(u64 *spt)
+  * aggregate version in order to make the slab shrinker
+  * faster
+  */
+-static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, unsigned long nr)
++static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
+ {
+ 	kvm->arch.n_used_mmu_pages += nr;
+ 	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 02d60d7f903da..7498c1384b938 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -188,7 +188,7 @@ static void sev_asid_free(struct kvm_sev_info *sev)
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		sd = per_cpu(svm_data, cpu);
+-		sd->sev_vmcbs[pos] = NULL;
++		sd->sev_vmcbs[sev->asid] = NULL;
+ 	}
+ 
+ 	mutex_unlock(&sev_bitmap_lock);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index d6a9f05187849..1e11198f89934 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4252,8 +4252,17 @@ static int kvm_cpu_accept_dm_intr(struct kvm_vcpu *vcpu)
+ 
+ static int kvm_vcpu_ready_for_interrupt_injection(struct kvm_vcpu *vcpu)
+ {
+-	return kvm_arch_interrupt_allowed(vcpu) &&
+-		kvm_cpu_accept_dm_intr(vcpu);
++	/*
++	 * Do not cause an interrupt window exit if an exception
++	 * is pending or an event needs reinjection; userspace
++	 * might want to inject the interrupt manually using KVM_SET_REGS
++	 * or KVM_SET_SREGS.  For that to work, we must be at an
++	 * instruction boundary and with no events half-injected.
++	 */
++	return (kvm_arch_interrupt_allowed(vcpu) &&
++		kvm_cpu_accept_dm_intr(vcpu) &&
++		!kvm_event_needs_reinjection(vcpu) &&
++		!vcpu->arch.exception.pending);
+ }
+ 
+ static int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
+diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
+index 04c5a44b96827..9ba700dc47de4 100644
+--- a/arch/x86/tools/relocs.c
++++ b/arch/x86/tools/relocs.c
+@@ -57,12 +57,12 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = {
+ 	[S_REL] =
+ 	"^(__init_(begin|end)|"
+ 	"__x86_cpu_dev_(start|end)|"
+-	"(__parainstructions|__alt_instructions)(|_end)|"
+-	"(__iommu_table|__apicdrivers|__smp_locks)(|_end)|"
++	"(__parainstructions|__alt_instructions)(_end)?|"
++	"(__iommu_table|__apicdrivers|__smp_locks)(_end)?|"
+ 	"__(start|end)_pci_.*|"
+ 	"__(start|end)_builtin_fw|"
+-	"__(start|stop)___ksymtab(|_gpl)|"
+-	"__(start|stop)___kcrctab(|_gpl)|"
++	"__(start|stop)___ksymtab(_gpl)?|"
++	"__(start|stop)___kcrctab(_gpl)?|"
+ 	"__(start|stop)___param|"
+ 	"__(start|stop)___modver|"
+ 	"__(start|stop)___bug_table|"
+diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
+index 81be0096411da..d8b0d8bd132bc 100644
+--- a/block/blk-iolatency.c
++++ b/block/blk-iolatency.c
+@@ -833,7 +833,11 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
+ 
+ 	enable = iolatency_set_min_lat_nsec(blkg, lat_val);
+ 	if (enable) {
+-		WARN_ON_ONCE(!blk_get_queue(blkg->q));
++		if (!blk_get_queue(blkg->q)) {
++			ret = -ENODEV;
++			goto out;
++		}
++
+ 		blkg_get(blkg);
+ 	}
+ 
+diff --git a/drivers/acpi/acpica/nsrepair2.c b/drivers/acpi/acpica/nsrepair2.c
+index 38e10ab976e67..14b71b41e8453 100644
+--- a/drivers/acpi/acpica/nsrepair2.c
++++ b/drivers/acpi/acpica/nsrepair2.c
+@@ -379,13 +379,6 @@ acpi_ns_repair_CID(struct acpi_evaluate_info *info,
+ 
+ 			(*element_ptr)->common.reference_count =
+ 			    original_ref_count;
+-
+-			/*
+-			 * The original_element holds a reference from the package object
+-			 * that represents _HID. Since a new element was created by _HID,
+-			 * remove the reference from the _CID package.
+-			 */
+-			acpi_ut_remove_reference(original_element);
+ 		}
+ 
+ 		element_ptr++;
+diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c
+index ae7189d1a5682..b71ea4a680b01 100644
+--- a/drivers/ata/libata-sff.c
++++ b/drivers/ata/libata-sff.c
+@@ -637,6 +637,20 @@ unsigned int ata_sff_data_xfer32(struct ata_queued_cmd *qc, unsigned char *buf,
+ }
+ EXPORT_SYMBOL_GPL(ata_sff_data_xfer32);
+ 
++static void ata_pio_xfer(struct ata_queued_cmd *qc, struct page *page,
++		unsigned int offset, size_t xfer_size)
++{
++	bool do_write = (qc->tf.flags & ATA_TFLAG_WRITE);
++	unsigned char *buf;
++
++	buf = kmap_atomic(page);
++	qc->ap->ops->sff_data_xfer(qc, buf + offset, xfer_size, do_write);
++	kunmap_atomic(buf);
++
++	if (!do_write && !PageSlab(page))
++		flush_dcache_page(page);
++}
++
+ /**
+  *	ata_pio_sector - Transfer a sector of data.
+  *	@qc: Command on going
+@@ -648,11 +662,9 @@ EXPORT_SYMBOL_GPL(ata_sff_data_xfer32);
+  */
+ static void ata_pio_sector(struct ata_queued_cmd *qc)
+ {
+-	int do_write = (qc->tf.flags & ATA_TFLAG_WRITE);
+ 	struct ata_port *ap = qc->ap;
+ 	struct page *page;
+ 	unsigned int offset;
+-	unsigned char *buf;
+ 
+ 	if (!qc->cursg) {
+ 		qc->curbytes = qc->nbytes;
+@@ -670,13 +682,20 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
+ 
+ 	DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read");
+ 
+-	/* do the actual data transfer */
+-	buf = kmap_atomic(page);
+-	ap->ops->sff_data_xfer(qc, buf + offset, qc->sect_size, do_write);
+-	kunmap_atomic(buf);
++	/*
++	 * Split the transfer when it crosses a page boundary.  Note that the
++	 * split still has to be dword aligned like all ATA data transfers.
++	 */
++	WARN_ON_ONCE(offset % 4);
++	if (offset + qc->sect_size > PAGE_SIZE) {
++		unsigned int split_len = PAGE_SIZE - offset;
+ 
+-	if (!do_write && !PageSlab(page))
+-		flush_dcache_page(page);
++		ata_pio_xfer(qc, page, offset, split_len);
++		ata_pio_xfer(qc, nth_page(page, 1), 0,
++			     qc->sect_size - split_len);
++	} else {
++		ata_pio_xfer(qc, page, offset, qc->sect_size);
++	}
+ 
+ 	qc->curbytes += qc->sect_size;
+ 	qc->cursg_ofs += qc->sect_size;
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index ecd7cf848daff..592b3955abe22 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -634,8 +634,6 @@ dev_groups_failed:
+ 	else if (drv->remove)
+ 		drv->remove(dev);
+ probe_failed:
+-	kfree(dev->dma_range_map);
+-	dev->dma_range_map = NULL;
+ 	if (dev->bus)
+ 		blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
+ 					     BUS_NOTIFY_DRIVER_NOT_BOUND, dev);
+@@ -643,6 +641,8 @@ pinctrl_bind_failed:
+ 	device_links_no_driver(dev);
+ 	devres_release_all(dev);
+ 	arch_teardown_dma_ops(dev);
++	kfree(dev->dma_range_map);
++	dev->dma_range_map = NULL;
+ 	driver_sysfs_remove(dev);
+ 	dev->driver = NULL;
+ 	dev_set_drvdata(dev, NULL);
+diff --git a/drivers/base/firmware_loader/fallback.c b/drivers/base/firmware_loader/fallback.c
+index 91899d185e311..d7d63c1aa993f 100644
+--- a/drivers/base/firmware_loader/fallback.c
++++ b/drivers/base/firmware_loader/fallback.c
+@@ -89,12 +89,11 @@ static void __fw_load_abort(struct fw_priv *fw_priv)
+ {
+ 	/*
+ 	 * There is a small window in which user can write to 'loading'
+-	 * between loading done and disappearance of 'loading'
++	 * between loading done/aborted and disappearance of 'loading'
+ 	 */
+-	if (fw_sysfs_done(fw_priv))
++	if (fw_state_is_aborted(fw_priv) || fw_sysfs_done(fw_priv))
+ 		return;
+ 
+-	list_del_init(&fw_priv->pending_list);
+ 	fw_state_aborted(fw_priv);
+ }
+ 
+@@ -280,7 +279,6 @@ static ssize_t firmware_loading_store(struct device *dev,
+ 			 * Same logic as fw_load_abort, only the DONE bit
+ 			 * is ignored and we set ABORT only on failure.
+ 			 */
+-			list_del_init(&fw_priv->pending_list);
+ 			if (rc) {
+ 				fw_state_aborted(fw_priv);
+ 				written = rc;
+@@ -513,6 +511,11 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, long timeout)
+ 	}
+ 
+ 	mutex_lock(&fw_lock);
++	if (fw_state_is_aborted(fw_priv)) {
++		mutex_unlock(&fw_lock);
++		retval = -EINTR;
++		goto out;
++	}
+ 	list_add(&fw_priv->pending_list, &pending_fw_head);
+ 	mutex_unlock(&fw_lock);
+ 
+@@ -535,11 +538,10 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, long timeout)
+ 	if (fw_state_is_aborted(fw_priv)) {
+ 		if (retval == -ERESTARTSYS)
+ 			retval = -EINTR;
+-		else
+-			retval = -EAGAIN;
+ 	} else if (fw_priv->is_paged_buf && !fw_priv->data)
+ 		retval = -ENOMEM;
+ 
++out:
+ 	device_del(f_dev);
+ err_put_dev:
+ 	put_device(f_dev);
+diff --git a/drivers/base/firmware_loader/firmware.h b/drivers/base/firmware_loader/firmware.h
+index 63bd29fdcb9c5..a3014e9e2c852 100644
+--- a/drivers/base/firmware_loader/firmware.h
++++ b/drivers/base/firmware_loader/firmware.h
+@@ -117,8 +117,16 @@ static inline void __fw_state_set(struct fw_priv *fw_priv,
+ 
+ 	WRITE_ONCE(fw_st->status, status);
+ 
+-	if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED)
++	if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED) {
++#ifdef CONFIG_FW_LOADER_USER_HELPER
++		/*
++		 * Doing this here ensures that the fw_priv is deleted from
++		 * the pending list in all abort/done paths.
++		 */
++		list_del_init(&fw_priv->pending_list);
++#endif
+ 		complete_all(&fw_st->completion);
++	}
+ }
+ 
+ static inline void fw_state_aborted(struct fw_priv *fw_priv)
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 4fdb8219cd083..68c549d712304 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -783,8 +783,10 @@ static void fw_abort_batch_reqs(struct firmware *fw)
+ 		return;
+ 
+ 	fw_priv = fw->priv;
++	mutex_lock(&fw_lock);
+ 	if (!fw_state_is_aborted(fw_priv))
+ 		fw_state_aborted(fw_priv);
++	mutex_unlock(&fw_lock);
+ }
+ 
+ /* called from request_firmware() and request_firmware_work_func() */
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 38cb116ed433f..0ef98e3ba3410 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -100,6 +100,7 @@ static const char * const clock_names[SYSC_MAX_CLOCKS] = {
+  * @cookie: data used by legacy platform callbacks
+  * @name: name if available
+  * @revision: interconnect target module revision
++ * @reserved: target module is reserved and already in use
+  * @enabled: sysc runtime enabled status
+  * @needs_resume: runtime resume needed on resume from suspend
+  * @child_needs_resume: runtime resume needed for child on resume from suspend
+@@ -130,6 +131,7 @@ struct sysc {
+ 	struct ti_sysc_cookie cookie;
+ 	const char *name;
+ 	u32 revision;
++	unsigned int reserved:1;
+ 	unsigned int enabled:1;
+ 	unsigned int needs_resume:1;
+ 	unsigned int child_needs_resume:1;
+@@ -2951,6 +2953,8 @@ static int sysc_init_soc(struct sysc *ddata)
+ 		case SOC_3430 ... SOC_3630:
+ 			sysc_add_disabled(0x48304000);	/* timer12 */
+ 			break;
++		case SOC_AM3:
++			sysc_add_disabled(0x48310000);  /* rng */
+ 		default:
+ 			break;
+ 		}
+@@ -3093,8 +3097,8 @@ static int sysc_probe(struct platform_device *pdev)
+ 		return error;
+ 
+ 	error = sysc_check_active_timer(ddata);
+-	if (error)
+-		return error;
++	if (error == -EBUSY)
++		ddata->reserved = true;
+ 
+ 	error = sysc_get_clocks(ddata);
+ 	if (error)
+@@ -3130,11 +3134,15 @@ static int sysc_probe(struct platform_device *pdev)
+ 	sysc_show_registers(ddata);
+ 
+ 	ddata->dev->type = &sysc_device_type;
+-	error = of_platform_populate(ddata->dev->of_node, sysc_match_table,
+-				     pdata ? pdata->auxdata : NULL,
+-				     ddata->dev);
+-	if (error)
+-		goto err;
++
++	if (!ddata->reserved) {
++		error = of_platform_populate(ddata->dev->of_node,
++					     sysc_match_table,
++					     pdata ? pdata->auxdata : NULL,
++					     ddata->dev);
++		if (error)
++			goto err;
++	}
+ 
+ 	INIT_DELAYED_WORK(&ddata->idle_work, ti_sysc_idle);
+ 
+diff --git a/drivers/char/tpm/tpm_ftpm_tee.c b/drivers/char/tpm/tpm_ftpm_tee.c
+index 2ccdf8ac69948..6e3235565a4d8 100644
+--- a/drivers/char/tpm/tpm_ftpm_tee.c
++++ b/drivers/char/tpm/tpm_ftpm_tee.c
+@@ -254,11 +254,11 @@ static int ftpm_tee_probe(struct device *dev)
+ 	pvt_data->session = sess_arg.session;
+ 
+ 	/* Allocate dynamic shared memory with fTPM TA */
+-	pvt_data->shm = tee_shm_alloc(pvt_data->ctx,
+-				      MAX_COMMAND_SIZE + MAX_RESPONSE_SIZE,
+-				      TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
++	pvt_data->shm = tee_shm_alloc_kernel_buf(pvt_data->ctx,
++						 MAX_COMMAND_SIZE +
++						 MAX_RESPONSE_SIZE);
+ 	if (IS_ERR(pvt_data->shm)) {
+-		dev_err(dev, "%s: tee_shm_alloc failed\n", __func__);
++		dev_err(dev, "%s: tee_shm_alloc_kernel_buf failed\n", __func__);
+ 		rc = -ENOMEM;
+ 		goto out_shm_alloc;
+ 	}
+diff --git a/drivers/clk/clk-devres.c b/drivers/clk/clk-devres.c
+index be160764911bf..f9d5b73343417 100644
+--- a/drivers/clk/clk-devres.c
++++ b/drivers/clk/clk-devres.c
+@@ -92,13 +92,20 @@ int __must_check devm_clk_bulk_get_optional(struct device *dev, int num_clks,
+ }
+ EXPORT_SYMBOL_GPL(devm_clk_bulk_get_optional);
+ 
++static void devm_clk_bulk_release_all(struct device *dev, void *res)
++{
++	struct clk_bulk_devres *devres = res;
++
++	clk_bulk_put_all(devres->num_clks, devres->clks);
++}
++
+ int __must_check devm_clk_bulk_get_all(struct device *dev,
+ 				       struct clk_bulk_data **clks)
+ {
+ 	struct clk_bulk_devres *devres;
+ 	int ret;
+ 
+-	devres = devres_alloc(devm_clk_bulk_release,
++	devres = devres_alloc(devm_clk_bulk_release_all,
+ 			      sizeof(*devres), GFP_KERNEL);
+ 	if (!devres)
+ 		return -ENOMEM;
+diff --git a/drivers/clk/clk-stm32f4.c b/drivers/clk/clk-stm32f4.c
+index 18117ce5ff85f..5c75e3d906c20 100644
+--- a/drivers/clk/clk-stm32f4.c
++++ b/drivers/clk/clk-stm32f4.c
+@@ -526,7 +526,7 @@ struct stm32f4_pll {
+ 
+ struct stm32f4_pll_post_div_data {
+ 	int idx;
+-	u8 pll_num;
++	int pll_idx;
+ 	const char *name;
+ 	const char *parent;
+ 	u8 flag;
+@@ -557,13 +557,13 @@ static const struct clk_div_table post_divr_table[] = {
+ 
+ #define MAX_POST_DIV 3
+ static const struct stm32f4_pll_post_div_data  post_div_data[MAX_POST_DIV] = {
+-	{ CLK_I2SQ_PDIV, PLL_I2S, "plli2s-q-div", "plli2s-q",
++	{ CLK_I2SQ_PDIV, PLL_VCO_I2S, "plli2s-q-div", "plli2s-q",
+ 		CLK_SET_RATE_PARENT, STM32F4_RCC_DCKCFGR, 0, 5, 0, NULL},
+ 
+-	{ CLK_SAIQ_PDIV, PLL_SAI, "pllsai-q-div", "pllsai-q",
++	{ CLK_SAIQ_PDIV, PLL_VCO_SAI, "pllsai-q-div", "pllsai-q",
+ 		CLK_SET_RATE_PARENT, STM32F4_RCC_DCKCFGR, 8, 5, 0, NULL },
+ 
+-	{ NO_IDX, PLL_SAI, "pllsai-r-div", "pllsai-r", CLK_SET_RATE_PARENT,
++	{ NO_IDX, PLL_VCO_SAI, "pllsai-r-div", "pllsai-r", CLK_SET_RATE_PARENT,
+ 		STM32F4_RCC_DCKCFGR, 16, 2, 0, post_divr_table },
+ };
+ 
+@@ -1774,7 +1774,7 @@ static void __init stm32f4_rcc_init(struct device_node *np)
+ 				post_div->width,
+ 				post_div->flag_div,
+ 				post_div->div_table,
+-				clks[post_div->pll_num],
++				clks[post_div->pll_idx],
+ 				&stm32f4_clk_lock);
+ 
+ 		if (post_div->idx != NO_IDX)
+diff --git a/drivers/clk/tegra/clk-sdmmc-mux.c b/drivers/clk/tegra/clk-sdmmc-mux.c
+index 316912d3b1a4f..4f2c3309eea4d 100644
+--- a/drivers/clk/tegra/clk-sdmmc-mux.c
++++ b/drivers/clk/tegra/clk-sdmmc-mux.c
+@@ -194,6 +194,15 @@ static void clk_sdmmc_mux_disable(struct clk_hw *hw)
+ 	gate_ops->disable(gate_hw);
+ }
+ 
++static void clk_sdmmc_mux_disable_unused(struct clk_hw *hw)
++{
++	struct tegra_sdmmc_mux *sdmmc_mux = to_clk_sdmmc_mux(hw);
++	const struct clk_ops *gate_ops = sdmmc_mux->gate_ops;
++	struct clk_hw *gate_hw = &sdmmc_mux->gate.hw;
++
++	gate_ops->disable_unused(gate_hw);
++}
++
+ static void clk_sdmmc_mux_restore_context(struct clk_hw *hw)
+ {
+ 	struct clk_hw *parent = clk_hw_get_parent(hw);
+@@ -218,6 +227,7 @@ static const struct clk_ops tegra_clk_sdmmc_mux_ops = {
+ 	.is_enabled = clk_sdmmc_mux_is_enabled,
+ 	.enable = clk_sdmmc_mux_enable,
+ 	.disable = clk_sdmmc_mux_disable,
++	.disable_unused = clk_sdmmc_mux_disable_unused,
+ 	.restore_context = clk_sdmmc_mux_restore_context,
+ };
+ 
+diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
+index 26482c7d4c3a3..fc708be7ad9a2 100644
+--- a/drivers/dma/idxd/idxd.h
++++ b/drivers/dma/idxd/idxd.h
+@@ -294,6 +294,14 @@ struct idxd_desc {
+ 	struct idxd_wq *wq;
+ };
+ 
++/*
++ * This is a software-defined error for the completion status. We overload an
++ * error code that will never appear in the completion status, only in the SWERR register.
++ */
++enum idxd_completion_status {
++	IDXD_COMP_DESC_ABORT = 0xff,
++};
++
+ #define confdev_to_idxd(dev) container_of(dev, struct idxd_device, conf_dev)
+ #define confdev_to_wq(dev) container_of(dev, struct idxd_wq, conf_dev)
+ 
+@@ -482,4 +490,10 @@ static inline void perfmon_init(void) {}
+ static inline void perfmon_exit(void) {}
+ #endif
+ 
++static inline void complete_desc(struct idxd_desc *desc, enum idxd_complete_type reason)
++{
++	idxd_dma_complete_txd(desc, reason);
++	idxd_free_desc(desc->wq, desc);
++}
++
+ #endif
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index 442d55c11a5f4..32cca6a0e66ac 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -102,6 +102,8 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
+ 		spin_lock_init(&idxd->irq_entries[i].list_lock);
+ 	}
+ 
++	idxd_msix_perm_setup(idxd);
++
+ 	irq_entry = &idxd->irq_entries[0];
+ 	rc = request_threaded_irq(irq_entry->vector, NULL, idxd_misc_thread,
+ 				  0, "idxd-misc", irq_entry);
+@@ -148,7 +150,6 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
+ 	}
+ 
+ 	idxd_unmask_error_interrupts(idxd);
+-	idxd_msix_perm_setup(idxd);
+ 	return 0;
+ 
+  err_wq_irqs:
+@@ -162,6 +163,7 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
+  err_misc_irq:
+ 	/* Disable error interrupt generation */
+ 	idxd_mask_error_interrupts(idxd);
++	idxd_msix_perm_clear(idxd);
+  err_irq_entries:
+ 	pci_free_irq_vectors(pdev);
+ 	dev_err(dev, "No usable interrupts\n");
+@@ -757,32 +759,40 @@ static void idxd_shutdown(struct pci_dev *pdev)
+ 	for (i = 0; i < msixcnt; i++) {
+ 		irq_entry = &idxd->irq_entries[i];
+ 		synchronize_irq(irq_entry->vector);
+-		free_irq(irq_entry->vector, irq_entry);
+ 		if (i == 0)
+ 			continue;
+ 		idxd_flush_pending_llist(irq_entry);
+ 		idxd_flush_work_list(irq_entry);
+ 	}
+-
+-	idxd_msix_perm_clear(idxd);
+-	idxd_release_int_handles(idxd);
+-	pci_free_irq_vectors(pdev);
+-	pci_iounmap(pdev, idxd->reg_base);
+-	pci_disable_device(pdev);
+-	destroy_workqueue(idxd->wq);
++	flush_workqueue(idxd->wq);
+ }
+ 
+ static void idxd_remove(struct pci_dev *pdev)
+ {
+ 	struct idxd_device *idxd = pci_get_drvdata(pdev);
++	struct idxd_irq_entry *irq_entry;
++	int msixcnt = pci_msix_vec_count(pdev);
++	int i;
+ 
+ 	dev_dbg(&pdev->dev, "%s called\n", __func__);
+ 	idxd_shutdown(pdev);
+ 	if (device_pasid_enabled(idxd))
+ 		idxd_disable_system_pasid(idxd);
+ 	idxd_unregister_devices(idxd);
+-	perfmon_pmu_remove(idxd);
++
++	for (i = 0; i < msixcnt; i++) {
++		irq_entry = &idxd->irq_entries[i];
++		free_irq(irq_entry->vector, irq_entry);
++	}
++	idxd_msix_perm_clear(idxd);
++	idxd_release_int_handles(idxd);
++	pci_free_irq_vectors(pdev);
++	pci_iounmap(pdev, idxd->reg_base);
+ 	iommu_dev_disable_feature(&pdev->dev, IOMMU_DEV_FEAT_SVA);
++	pci_disable_device(pdev);
++	destroy_workqueue(idxd->wq);
++	perfmon_pmu_remove(idxd);
++	device_unregister(&idxd->conf_dev);
+ }
+ 
+ static struct pci_driver idxd_pci_driver = {
+diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
+index ae68e1e5487a0..4e3a7198c0caf 100644
+--- a/drivers/dma/idxd/irq.c
++++ b/drivers/dma/idxd/irq.c
+@@ -245,12 +245,6 @@ static inline bool match_fault(struct idxd_desc *desc, u64 fault_addr)
+ 	return false;
+ }
+ 
+-static inline void complete_desc(struct idxd_desc *desc, enum idxd_complete_type reason)
+-{
+-	idxd_dma_complete_txd(desc, reason);
+-	idxd_free_desc(desc->wq, desc);
+-}
+-
+ static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry,
+ 				     enum irq_work_type wtype,
+ 				     int *processed, u64 data)
+@@ -272,8 +266,16 @@ static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry,
+ 		reason = IDXD_COMPLETE_DEV_FAIL;
+ 
+ 	llist_for_each_entry_safe(desc, t, head, llnode) {
+-		if (desc->completion->status) {
+-			if ((desc->completion->status & DSA_COMP_STATUS_MASK) != DSA_COMP_SUCCESS)
++		u8 status = desc->completion->status & DSA_COMP_STATUS_MASK;
++
++		if (status) {
++			if (unlikely(status == IDXD_COMP_DESC_ABORT)) {
++				complete_desc(desc, IDXD_COMPLETE_ABORT);
++				(*processed)++;
++				continue;
++			}
++
++			if (unlikely(status != DSA_COMP_SUCCESS))
+ 				match_fault(desc, data);
+ 			complete_desc(desc, reason);
+ 			(*processed)++;
+@@ -329,7 +331,14 @@ static int irq_process_work_list(struct idxd_irq_entry *irq_entry,
+ 	spin_unlock_irqrestore(&irq_entry->list_lock, flags);
+ 
+ 	list_for_each_entry(desc, &flist, list) {
+-		if ((desc->completion->status & DSA_COMP_STATUS_MASK) != DSA_COMP_SUCCESS)
++		u8 status = desc->completion->status & DSA_COMP_STATUS_MASK;
++
++		if (unlikely(status == IDXD_COMP_DESC_ABORT)) {
++			complete_desc(desc, IDXD_COMPLETE_ABORT);
++			continue;
++		}
++
++		if (unlikely(status != DSA_COMP_SUCCESS))
+ 			match_fault(desc, data);
+ 		complete_desc(desc, reason);
+ 	}
+diff --git a/drivers/dma/idxd/submit.c b/drivers/dma/idxd/submit.c
+index 19afb62abaffd..36c9c1a89b7e7 100644
+--- a/drivers/dma/idxd/submit.c
++++ b/drivers/dma/idxd/submit.c
+@@ -25,11 +25,10 @@ static struct idxd_desc *__get_desc(struct idxd_wq *wq, int idx, int cpu)
+ 	 * Descriptor completion vectors are 1...N for MSIX. We will round
+ 	 * robin through the N vectors.
+ 	 */
+-	wq->vec_ptr = (wq->vec_ptr % idxd->num_wq_irqs) + 1;
++	wq->vec_ptr = desc->vector = (wq->vec_ptr % idxd->num_wq_irqs) + 1;
+ 	if (!idxd->int_handles) {
+ 		desc->hw->int_handle = wq->vec_ptr;
+ 	} else {
+-		desc->vector = wq->vec_ptr;
+ 		/*
+ 		 * int_handles are only for descriptor completion. However for device
+ 		 * MSIX enumeration, vec 0 is used for misc interrupts. Therefore even
+@@ -88,9 +87,64 @@ void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc)
+ 	sbitmap_queue_clear(&wq->sbq, desc->id, cpu);
+ }
+ 
++static struct idxd_desc *list_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
++					 struct idxd_desc *desc)
++{
++	struct idxd_desc *d, *n;
++
++	lockdep_assert_held(&ie->list_lock);
++	list_for_each_entry_safe(d, n, &ie->work_list, list) {
++		if (d == desc) {
++			list_del(&d->list);
++			return d;
++		}
++	}
++
++	/*
++	 * At this point, the desc that needs to be aborted is held by the
++	 * completion handler, which has taken it off the pending list but has not
++	 * yet added it to the work list. It will be cleaned up by the interrupt
++	 * handler when it sees IDXD_COMP_DESC_ABORT as the completion status.
++	 */
++	return NULL;
++}
++
++static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
++			     struct idxd_desc *desc)
++{
++	struct idxd_desc *d, *t, *found = NULL;
++	struct llist_node *head;
++	unsigned long flags;
++
++	desc->completion->status = IDXD_COMP_DESC_ABORT;
++	/*
++	 * Grab the list lock so it will block the irq thread handler. This allows
++	 * the abort code to locate the descriptor that needs to be aborted.
++	 */
++	spin_lock_irqsave(&ie->list_lock, flags);
++	head = llist_del_all(&ie->pending_llist);
++	if (head) {
++		llist_for_each_entry_safe(d, t, head, llnode) {
++			if (d == desc) {
++				found = desc;
++				continue;
++			}
++			list_add_tail(&desc->list, &ie->work_list);
++		}
++	}
++
++	if (!found)
++		found = list_abort_desc(wq, ie, desc);
++	spin_unlock_irqrestore(&ie->list_lock, flags);
++
++	if (found)
++		complete_desc(found, IDXD_COMPLETE_ABORT);
++}
++
+ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
+ {
+ 	struct idxd_device *idxd = wq->idxd;
++	struct idxd_irq_entry *ie = NULL;
+ 	void __iomem *portal;
+ 	int rc;
+ 
+@@ -108,6 +162,16 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
+ 	 * even on UP because the recipient is a device.
+ 	 */
+ 	wmb();
++
++	/*
++	 * Add the descriptor to the lockless pending list for the irq_entry
++	 * that we designated the descriptor to.
++	 */
++	if (desc->hw->flags & IDXD_OP_FLAG_RCI) {
++		ie = &idxd->irq_entries[desc->vector];
++		llist_add(&desc->llnode, &ie->pending_llist);
++	}
++
+ 	if (wq_dedicated(wq)) {
+ 		iosubmit_cmds512(portal, desc->hw, 1);
+ 	} else {
+@@ -118,29 +182,13 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
+ 		 * device is not accepting descriptor at all.
+ 		 */
+ 		rc = enqcmds(portal, desc->hw);
+-		if (rc < 0)
++		if (rc < 0) {
++			if (ie)
++				llist_abort_desc(wq, ie, desc);
+ 			return rc;
++		}
+ 	}
+ 
+ 	percpu_ref_put(&wq->wq_active);
+-
+-	/*
+-	 * Pending the descriptor to the lockless list for the irq_entry
+-	 * that we designated the descriptor to.
+-	 */
+-	if (desc->hw->flags & IDXD_OP_FLAG_RCI) {
+-		int vec;
+-
+-		/*
+-		 * If the driver is on host kernel, it would be the value
+-		 * assigned to interrupt handle, which is index for MSIX
+-		 * vector. If it's guest then can't use the int_handle since
+-		 * that is the index to IMS for the entire device. The guest
+-		 * device local index will be used.
+-		 */
+-		vec = !idxd->int_handles ? desc->hw->int_handle : desc->vector;
+-		llist_add(&desc->llnode, &idxd->irq_entries[vec].pending_llist);
+-	}
+-
+ 	return 0;
+ }
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index 0460d58e3941f..bb4df63906a72 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -1744,8 +1744,6 @@ void idxd_unregister_devices(struct idxd_device *idxd)
+ 
+ 		device_unregister(&group->conf_dev);
+ 	}
+-
+-	device_unregister(&idxd->conf_dev);
+ }
+ 
+ int idxd_register_bus_type(void)
+diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c
+index 7f116bbcfad2a..2ddc31e64db03 100644
+--- a/drivers/dma/imx-dma.c
++++ b/drivers/dma/imx-dma.c
+@@ -812,6 +812,8 @@ static struct dma_async_tx_descriptor *imxdma_prep_slave_sg(
+ 		dma_length += sg_dma_len(sg);
+ 	}
+ 
++	imxdma_config_write(chan, &imxdmac->config, direction);
++
+ 	switch (imxdmac->word_size) {
+ 	case DMA_SLAVE_BUSWIDTH_4_BYTES:
+ 		if (sg_dma_len(sgl) & 3 || sgl->dma_address & 3)
+diff --git a/drivers/dma/stm32-dma.c b/drivers/dma/stm32-dma.c
+index f54ecb123a521..7dd1d3d0bf063 100644
+--- a/drivers/dma/stm32-dma.c
++++ b/drivers/dma/stm32-dma.c
+@@ -1200,7 +1200,7 @@ static int stm32_dma_alloc_chan_resources(struct dma_chan *c)
+ 
+ 	chan->config_init = false;
+ 
+-	ret = pm_runtime_get_sync(dmadev->ddev.dev);
++	ret = pm_runtime_resume_and_get(dmadev->ddev.dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1470,7 +1470,7 @@ static int stm32_dma_suspend(struct device *dev)
+ 	struct stm32_dma_device *dmadev = dev_get_drvdata(dev);
+ 	int id, ret, scr;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/dma/stm32-dmamux.c b/drivers/dma/stm32-dmamux.c
+index ef0d0555103d9..a42164389ebc2 100644
+--- a/drivers/dma/stm32-dmamux.c
++++ b/drivers/dma/stm32-dmamux.c
+@@ -137,7 +137,7 @@ static void *stm32_dmamux_route_allocate(struct of_phandle_args *dma_spec,
+ 
+ 	/* Set dma request */
+ 	spin_lock_irqsave(&dmamux->lock, flags);
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0) {
+ 		spin_unlock_irqrestore(&dmamux->lock, flags);
+ 		goto error;
+@@ -336,7 +336,7 @@ static int stm32_dmamux_suspend(struct device *dev)
+ 	struct stm32_dmamux_data *stm32_dmamux = platform_get_drvdata(pdev);
+ 	int i, ret;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -361,7 +361,7 @@ static int stm32_dmamux_resume(struct device *dev)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/dma/uniphier-xdmac.c b/drivers/dma/uniphier-xdmac.c
+index 16b19654873df..d6b8a202474f4 100644
+--- a/drivers/dma/uniphier-xdmac.c
++++ b/drivers/dma/uniphier-xdmac.c
+@@ -209,8 +209,8 @@ static int uniphier_xdmac_chan_stop(struct uniphier_xdmac_chan *xc)
+ 	writel(0, xc->reg_ch_base + XDMAC_TSS);
+ 
+ 	/* wait until transfer is stopped */
+-	return readl_poll_timeout(xc->reg_ch_base + XDMAC_STAT, val,
+-				  !(val & XDMAC_STAT_TENF), 100, 1000);
++	return readl_poll_timeout_atomic(xc->reg_ch_base + XDMAC_STAT, val,
++					 !(val & XDMAC_STAT_TENF), 100, 1000);
+ }
+ 
+ /* xc->vc.lock must be held by caller */
+diff --git a/drivers/fpga/dfl-fme-perf.c b/drivers/fpga/dfl-fme-perf.c
+index 4299145ef347e..587c82be12f7a 100644
+--- a/drivers/fpga/dfl-fme-perf.c
++++ b/drivers/fpga/dfl-fme-perf.c
+@@ -953,6 +953,8 @@ static int fme_perf_offline_cpu(unsigned int cpu, struct hlist_node *node)
+ 		return 0;
+ 
+ 	priv->cpu = target;
++	perf_pmu_migrate_context(&priv->pmu, cpu, target);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpio/gpio-mpc8xxx.c b/drivers/gpio/gpio-mpc8xxx.c
+index 4b9157a69fca0..50b321a1ab1b6 100644
+--- a/drivers/gpio/gpio-mpc8xxx.c
++++ b/drivers/gpio/gpio-mpc8xxx.c
+@@ -405,7 +405,7 @@ static int mpc8xxx_probe(struct platform_device *pdev)
+ 
+ 	ret = devm_request_irq(&pdev->dev, mpc8xxx_gc->irqn,
+ 			       mpc8xxx_gpio_irq_cascade,
+-			       IRQF_SHARED, "gpio-cascade",
++			       IRQF_NO_THREAD | IRQF_SHARED, "gpio-cascade",
+ 			       mpc8xxx_gc);
+ 	if (ret) {
+ 		dev_err(&pdev->dev,
+diff --git a/drivers/gpio/gpio-tqmx86.c b/drivers/gpio/gpio-tqmx86.c
+index 5022e0ad0faee..0f5d17f343f1e 100644
+--- a/drivers/gpio/gpio-tqmx86.c
++++ b/drivers/gpio/gpio-tqmx86.c
+@@ -238,8 +238,8 @@ static int tqmx86_gpio_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	int ret, irq;
+ 
+-	irq = platform_get_irq(pdev, 0);
+-	if (irq < 0)
++	irq = platform_get_irq_optional(pdev, 0);
++	if (irq < 0 && irq != -ENXIO)
+ 		return irq;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_IO, 0);
+@@ -278,7 +278,7 @@ static int tqmx86_gpio_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(&pdev->dev);
+ 
+-	if (irq) {
++	if (irq > 0) {
+ 		struct irq_chip *irq_chip = &gpio->irq_chip;
+ 		u8 irq_status;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index 355a6923849d3..b53eab384adb7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -904,7 +904,7 @@ void amdgpu_acpi_fini(struct amdgpu_device *adev)
+  */
+ bool amdgpu_acpi_is_s0ix_supported(struct amdgpu_device *adev)
+ {
+-#if defined(CONFIG_AMD_PMC) || defined(CONFIG_AMD_PMC_MODULE)
++#if IS_ENABLED(CONFIG_AMD_PMC) && IS_ENABLED(CONFIG_PM_SLEEP)
+ 	if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) {
+ 		if (adev->flags & AMD_IS_APU)
+ 			return pm_suspend_target_state == PM_SUSPEND_TO_IDLE;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index eeaae1cf2bc2b..0894cd505361a 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1493,6 +1493,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
+ 	}
+ 
+ 	hdr = (const struct dmcub_firmware_header_v1_0 *)adev->dm.dmub_fw->data;
++	adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version);
+ 
+ 	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+ 		adev->firmware.ucode[AMDGPU_UCODE_ID_DMCUB].ucode_id =
+@@ -1506,7 +1507,6 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
+ 			 adev->dm.dmcub_fw_version);
+ 	}
+ 
+-	adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version);
+ 
+ 	adev->dm.dmub_srv = kzalloc(sizeof(*adev->dm.dmub_srv), GFP_KERNEL);
+ 	dmub_srv = adev->dm.dmub_srv;
+@@ -2367,9 +2367,9 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
+ 	max_cll = conn_base->hdr_sink_metadata.hdmi_type1.max_cll;
+ 	min_cll = conn_base->hdr_sink_metadata.hdmi_type1.min_cll;
+ 
+-	if (caps->ext_caps->bits.oled == 1 ||
++	if (caps->ext_caps->bits.oled == 1 /*||
+ 	    caps->ext_caps->bits.sdr_aux_backlight_control == 1 ||
+-	    caps->ext_caps->bits.hdr_aux_backlight_control == 1)
++	    caps->ext_caps->bits.hdr_aux_backlight_control == 1*/)
+ 		caps->aux_support = true;
+ 
+ 	if (amdgpu_backlight == 0)
+diff --git a/drivers/gpu/drm/i915/i915_globals.c b/drivers/gpu/drm/i915/i915_globals.c
+index 3aa2136842935..57d2943884ab6 100644
+--- a/drivers/gpu/drm/i915/i915_globals.c
++++ b/drivers/gpu/drm/i915/i915_globals.c
+@@ -139,7 +139,7 @@ void i915_globals_unpark(void)
+ 	atomic_inc(&active);
+ }
+ 
+-static void __exit __i915_globals_flush(void)
++static void  __i915_globals_flush(void)
+ {
+ 	atomic_inc(&active); /* skip shrinking */
+ 
+@@ -149,7 +149,7 @@ static void __exit __i915_globals_flush(void)
+ 	atomic_dec(&active);
+ }
+ 
+-void __exit i915_globals_exit(void)
++void i915_globals_exit(void)
+ {
+ 	GEM_BUG_ON(atomic_read(&active));
+ 
+diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
+index 480553746794f..a6261a8103f42 100644
+--- a/drivers/gpu/drm/i915/i915_pci.c
++++ b/drivers/gpu/drm/i915/i915_pci.c
+@@ -1168,6 +1168,7 @@ static int __init i915_init(void)
+ 	err = pci_register_driver(&i915_pci_driver);
+ 	if (err) {
+ 		i915_pmu_exit();
++		i915_globals_exit();
+ 		return err;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index cbf7a60afe542..97fc7a51c1006 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -416,7 +416,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
+ #define GEN11_VECS_SFC_USAGE(engine)		_MMIO((engine)->mmio_base + 0x2014)
+ #define   GEN11_VECS_SFC_USAGE_BIT		(1 << 0)
+ 
+-#define GEN12_SFC_DONE(n)		_MMIO(0x1cc00 + (n) * 0x100)
++#define GEN12_SFC_DONE(n)		_MMIO(0x1cc000 + (n) * 0x1000)
+ #define GEN12_SFC_DONE_MAX		4
+ 
+ #define RING_PP_DIR_BASE(base)		_MMIO((base) + 0x228)
+diff --git a/drivers/gpu/drm/kmb/kmb_drv.c b/drivers/gpu/drm/kmb/kmb_drv.c
+index 96ea1a2c11dd6..c0b1c6f992496 100644
+--- a/drivers/gpu/drm/kmb/kmb_drv.c
++++ b/drivers/gpu/drm/kmb/kmb_drv.c
+@@ -203,6 +203,7 @@ static irqreturn_t handle_lcd_irq(struct drm_device *dev)
+ 	unsigned long status, val, val1;
+ 	int plane_id, dma0_state, dma1_state;
+ 	struct kmb_drm_private *kmb = to_kmb(dev);
++	u32 ctrl = 0;
+ 
+ 	status = kmb_read_lcd(kmb, LCD_INT_STATUS);
+ 
+@@ -227,6 +228,19 @@ static irqreturn_t handle_lcd_irq(struct drm_device *dev)
+ 				kmb_clr_bitmask_lcd(kmb, LCD_CONTROL,
+ 						    kmb->plane_status[plane_id].ctrl);
+ 
++				ctrl = kmb_read_lcd(kmb, LCD_CONTROL);
++				if (!(ctrl & (LCD_CTRL_VL1_ENABLE |
++				    LCD_CTRL_VL2_ENABLE |
++				    LCD_CTRL_GL1_ENABLE |
++				    LCD_CTRL_GL2_ENABLE))) {
++					/* If no LCD layers are using DMA,
++					 * then disable DMA pipelined AXI read
++					 * transactions.
++					 */
++					kmb_clr_bitmask_lcd(kmb, LCD_CONTROL,
++							    LCD_CTRL_PIPELINE_DMA);
++				}
++
+ 				kmb->plane_status[plane_id].disable = false;
+ 			}
+ 		}
+diff --git a/drivers/gpu/drm/kmb/kmb_plane.c b/drivers/gpu/drm/kmb/kmb_plane.c
+index d5b6195856d12..ecee6782612d8 100644
+--- a/drivers/gpu/drm/kmb/kmb_plane.c
++++ b/drivers/gpu/drm/kmb/kmb_plane.c
+@@ -427,8 +427,14 @@ static void kmb_plane_atomic_update(struct drm_plane *plane,
+ 
+ 	kmb_set_bitmask_lcd(kmb, LCD_CONTROL, ctrl);
+ 
+-	/* FIXME no doc on how to set output format,these values are
+-	 * taken from the Myriadx tests
++	/* Enable pipeline AXI read transactions for the DMA
++	 * after setting graphics layers. This must be done
++	 * in a separate write cycle.
++	 */
++	kmb_set_bitmask_lcd(kmb, LCD_CONTROL, LCD_CTRL_PIPELINE_DMA);
++
++	/* FIXME no doc on how to set output format, these values are taken
++	 * from the Myriadx tests
+ 	 */
+ 	out_format |= LCD_OUTF_FORMAT_RGB888;
+ 
+@@ -526,6 +532,11 @@ struct kmb_plane *kmb_plane_init(struct drm_device *drm)
+ 		plane->id = i;
+ 	}
+ 
++	/* Disable pipeline AXI read transactions for the DMA
++	 * prior to setting graphics layers
++	 */
++	kmb_clr_bitmask_lcd(kmb, LCD_CONTROL, LCD_CTRL_PIPELINE_DMA);
++
+ 	return primary;
+ cleanup:
+ 	drmm_kfree(drm, plane);
+diff --git a/drivers/hid/hid-ft260.c b/drivers/hid/hid-ft260.c
+index f43a8406cb9a9..e73776ae6976f 100644
+--- a/drivers/hid/hid-ft260.c
++++ b/drivers/hid/hid-ft260.c
+@@ -742,7 +742,7 @@ static int ft260_is_interface_enabled(struct hid_device *hdev)
+ 	int ret;
+ 
+ 	ret = ft260_get_system_config(hdev, &cfg);
+-	if (ret)
++	if (ret < 0)
+ 		return ret;
+ 
+ 	ft260_dbg("interface:  0x%02x\n", interface);
+@@ -754,23 +754,16 @@ static int ft260_is_interface_enabled(struct hid_device *hdev)
+ 	switch (cfg.chip_mode) {
+ 	case FT260_MODE_ALL:
+ 	case FT260_MODE_BOTH:
+-		if (interface == 1) {
++		if (interface == 1)
+ 			hid_info(hdev, "uart interface is not supported\n");
+-			return 0;
+-		}
+-		ret = 1;
++		else
++			ret = 1;
+ 		break;
+ 	case FT260_MODE_UART:
+-		if (interface == 0) {
+-			hid_info(hdev, "uart is unsupported on interface 0\n");
+-			ret = 0;
+-		}
++		hid_info(hdev, "uart interface is not supported\n");
+ 		break;
+ 	case FT260_MODE_I2C:
+-		if (interface == 1) {
+-			hid_info(hdev, "i2c is unsupported on interface 1\n");
+-			ret = 0;
+-		}
++		ret = 1;
+ 		break;
+ 	}
+ 	return ret;
+@@ -1004,11 +997,9 @@ err_hid_stop:
+ 
+ static void ft260_remove(struct hid_device *hdev)
+ {
+-	int ret;
+ 	struct ft260_device *dev = hid_get_drvdata(hdev);
+ 
+-	ret = ft260_is_interface_enabled(hdev);
+-	if (ret <= 0)
++	if (!dev)
+ 		return;
+ 
+ 	sysfs_remove_group(&hdev->dev.kobj, &ft260_attr_group);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c
+index 8f68cc3ff193f..84f3f2b5f0976 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
++++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
+@@ -213,8 +213,10 @@ int hns_roce_cmd_use_events(struct hns_roce_dev *hr_dev)
+ 
+ 	hr_cmd->context =
+ 		kcalloc(hr_cmd->max_cmds, sizeof(*hr_cmd->context), GFP_KERNEL);
+-	if (!hr_cmd->context)
++	if (!hr_cmd->context) {
++		hr_dev->cmd_mod = 0;
+ 		return -ENOMEM;
++	}
+ 
+ 	for (i = 0; i < hr_cmd->max_cmds; ++i) {
+ 		hr_cmd->context[i].token = i;
+@@ -228,7 +230,6 @@ int hns_roce_cmd_use_events(struct hns_roce_dev *hr_dev)
+ 	spin_lock_init(&hr_cmd->context_lock);
+ 
+ 	hr_cmd->use_events = 1;
+-	down(&hr_cmd->poll_sem);
+ 
+ 	return 0;
+ }
+@@ -239,8 +240,6 @@ void hns_roce_cmd_use_polling(struct hns_roce_dev *hr_dev)
+ 
+ 	kfree(hr_cmd->context);
+ 	hr_cmd->use_events = 0;
+-
+-	up(&hr_cmd->poll_sem);
+ }
+ 
+ struct hns_roce_cmd_mailbox *
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index 6c6e82b11d8bc..33b84f219d0d0 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -897,11 +897,9 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
+ 
+ 	if (hr_dev->cmd_mod) {
+ 		ret = hns_roce_cmd_use_events(hr_dev);
+-		if (ret) {
++		if (ret)
+ 			dev_warn(dev,
+ 				 "Cmd event  mode failed, set back to poll!\n");
+-			hns_roce_cmd_use_polling(hr_dev);
+-		}
+ 	}
+ 
+ 	ret = hns_roce_init_hem(hr_dev);
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 425423dfac724..fd113ddf6e862 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -530,8 +530,8 @@ static void __cache_work_func(struct mlx5_cache_ent *ent)
+ 		 */
+ 		spin_unlock_irq(&ent->lock);
+ 		need_delay = need_resched() || someone_adding(cache) ||
+-			     time_after(jiffies,
+-					READ_ONCE(cache->last_add) + 300 * HZ);
++			     !time_after(jiffies,
++					 READ_ONCE(cache->last_add) + 300 * HZ);
+ 		spin_lock_irq(&ent->lock);
+ 		if (ent->disabled)
+ 			goto out;
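The small-looking change above actually inverts the delay condition: the
cache work should back off while the last addition was recent, i.e. while
the 300 * HZ deadline has not yet passed. The kernel's time_after() makes
that comparison wrap-safe via signed subtraction; a stand-alone sketch
(simplified types, illustrative values):

#include <stdio.h>

/* wrap-safe comparison, same idea as include/linux/jiffies.h */
#define time_after(a, b)	((long)((b) - (a)) < 0)

int main(void)
{
	unsigned long deadline = 199;			/* wrapped past 0 */
	unsigned long now = (unsigned long)-50;		/* not wrapped yet */

	/* a naive 'now > deadline' wrongly reports the deadline as passed */
	printf("naive: %d, time_after: %d\n",
	       now > deadline, time_after(now, deadline));
	return 0;
}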
+diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
+index 8a1e70e008764..7887941730dbb 100644
+--- a/drivers/interconnect/core.c
++++ b/drivers/interconnect/core.c
+@@ -403,7 +403,7 @@ struct icc_path *devm_of_icc_get(struct device *dev, const char *name)
+ {
+ 	struct icc_path **ptr, *path;
+ 
+-	ptr = devres_alloc(devm_icc_release, sizeof(**ptr), GFP_KERNEL);
++	ptr = devres_alloc(devm_icc_release, sizeof(*ptr), GFP_KERNEL);
+ 	if (!ptr)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -973,9 +973,14 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider)
+ 	}
+ 	node->avg_bw = node->init_avg;
+ 	node->peak_bw = node->init_peak;
++
++	if (provider->pre_aggregate)
++		provider->pre_aggregate(node);
++
+ 	if (provider->aggregate)
+ 		provider->aggregate(node, 0, node->init_avg, node->init_peak,
+ 				    &node->avg_bw, &node->peak_bw);
++
+ 	provider->set(node, node);
+ 	node->avg_bw = 0;
+ 	node->peak_bw = 0;
+@@ -1106,6 +1111,8 @@ void icc_sync_state(struct device *dev)
+ 		dev_dbg(p->dev, "interconnect provider is in synced state\n");
+ 		list_for_each_entry(n, &p->nodes, node_list) {
+ 			if (n->init_avg || n->init_peak) {
++				n->init_avg = 0;
++				n->init_peak = 0;
+ 				aggregate_requests(n);
+ 				p->set(n, n);
+ 			}
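The devres_alloc() fix in the hunk above allocates room for the pointer
itself (sizeof(*ptr)) rather than for the pointed-to struct
(sizeof(**ptr)), since the devres slot only stores a struct icc_path
pointer. A tiny illustration of the semantic difference (the struct body
is a dummy stand-in):

#include <stdio.h>

struct icc_path { int dummy[16]; };	/* stand-in for the real struct */

int main(void)
{
	struct icc_path **ptr = NULL;

	printf("sizeof(*ptr)  = %zu (a pointer, what the slot stores)\n",
	       sizeof(*ptr));
	printf("sizeof(**ptr) = %zu (the pointed-to struct)\n",
	       sizeof(**ptr));
	return 0;
}

The icc_sync_state() hunk, by contrast, clears init_avg/init_peak before
re-aggregating so the initial-bandwidth floor is dropped exactly once.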
+diff --git a/drivers/interconnect/qcom/icc-rpmh.c b/drivers/interconnect/qcom/icc-rpmh.c
+index bf01d09dba6c4..f6fae64861ce8 100644
+--- a/drivers/interconnect/qcom/icc-rpmh.c
++++ b/drivers/interconnect/qcom/icc-rpmh.c
+@@ -57,6 +57,11 @@ int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+ 			qn->sum_avg[i] += avg_bw;
+ 			qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw);
+ 		}
++
++		if (node->init_avg || node->init_peak) {
++			qn->sum_avg[i] = max_t(u64, qn->sum_avg[i], node->init_avg);
++			qn->max_peak[i] = max_t(u64, qn->max_peak[i], node->init_peak);
++		}
+ 	}
+ 
+ 	*agg_avg += avg_bw;
+@@ -79,7 +84,6 @@ EXPORT_SYMBOL_GPL(qcom_icc_aggregate);
+ int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
+ {
+ 	struct qcom_icc_provider *qp;
+-	struct qcom_icc_node *qn;
+ 	struct icc_node *node;
+ 
+ 	if (!src)
+@@ -88,12 +92,6 @@ int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
+ 		node = src;
+ 
+ 	qp = to_qcom_provider(node->provider);
+-	qn = node->data;
+-
+-	qn->sum_avg[QCOM_ICC_BUCKET_AMC] = max_t(u64, qn->sum_avg[QCOM_ICC_BUCKET_AMC],
+-						 node->avg_bw);
+-	qn->max_peak[QCOM_ICC_BUCKET_AMC] = max_t(u64, qn->max_peak[QCOM_ICC_BUCKET_AMC],
+-						  node->peak_bw);
+ 
+ 	qcom_icc_bcm_voter_commit(qp->voter);
+ 
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index ced076ba560e1..753822ca96131 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -472,8 +472,6 @@ static void raid1_end_write_request(struct bio *bio)
+ 		/*
+ 		 * When the device is faulty, it is not necessary to
+ 		 * handle write error.
+-		 * For failfast, this is the only remaining device,
+-		 * We need to retry the write without FailFast.
+ 		 */
+ 		if (!test_bit(Faulty, &rdev->flags))
+ 			set_bit(R1BIO_WriteError, &r1_bio->state);
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 13f5e6b2a73d6..40e845fb97170 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -469,12 +469,12 @@ static void raid10_end_write_request(struct bio *bio)
+ 			/*
+ 			 * When the device is faulty, it is not necessary to
+ 			 * handle write error.
+-			 * For failfast, this is the only remaining device,
+-			 * We need to retry the write without FailFast.
+ 			 */
+ 			if (!test_bit(Faulty, &rdev->flags))
+ 				set_bit(R10BIO_WriteError, &r10_bio->state);
+ 			else {
++				/* Fail the request */
++				set_bit(R10BIO_Degraded, &r10_bio->state);
+ 				r10_bio->devs[slot].bio = NULL;
+ 				to_put = bio;
+ 				dec_rdev = 1;
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index 02281d13505f4..508ac295eb06e 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -1573,6 +1573,7 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb,
+ 		  struct media_request *req)
+ {
+ 	struct vb2_buffer *vb;
++	enum vb2_buffer_state orig_state;
+ 	int ret;
+ 
+ 	if (q->error) {
+@@ -1673,6 +1674,7 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb,
+ 	 * Add to the queued buffers list, a buffer will stay on it until
+ 	 * dequeued in dqbuf.
+ 	 */
++	orig_state = vb->state;
+ 	list_add_tail(&vb->queued_entry, &q->queued_list);
+ 	q->queued_count++;
+ 	q->waiting_for_buffers = false;
+@@ -1703,8 +1705,17 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb,
+ 	if (q->streaming && !q->start_streaming_called &&
+ 	    q->queued_count >= q->min_buffers_needed) {
+ 		ret = vb2_start_streaming(q);
+-		if (ret)
++		if (ret) {
++			/*
++			 * Since vb2_core_qbuf will return with an error,
++			 * we should return the buffer to state DEQUEUED, as
++			 * the error indicates that it wasn't queued.
++			 */
++			list_del(&vb->queued_entry);
++			q->queued_count--;
++			vb->state = orig_state;
+ 			return ret;
++		}
+ 	}
+ 
+ 	dprintk(q, 2, "qbuf of buffer %d succeeded\n", vb->index);
+diff --git a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
+index 97ed17a141bbf..a6124472cb06f 100644
+--- a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
++++ b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
+@@ -37,7 +37,16 @@ static int rtl28xxu_ctrl_msg(struct dvb_usb_device *d, struct rtl28xxu_req *req)
+ 	} else {
+ 		/* read */
+ 		requesttype = (USB_TYPE_VENDOR | USB_DIR_IN);
+-		pipe = usb_rcvctrlpipe(d->udev, 0);
++
++		/*
++		 * Zero-length transfers must use usb_sndctrlpipe() and
++		 * rtl28xxu_identify_state() uses a zero-length i2c read
++		 * command to determine the chip type.
++		 */
++		if (req->size)
++			pipe = usb_rcvctrlpipe(d->udev, 0);
++		else
++			pipe = usb_sndctrlpipe(d->udev, 0);
+ 	}
+ 
+ 	ret = usb_control_msg(d->udev, pipe, 0, requesttype, req->value,
+diff --git a/drivers/net/dsa/qca/ar9331.c b/drivers/net/dsa/qca/ar9331.c
+index ca2ad77b71f1c..6686192e1883e 100644
+--- a/drivers/net/dsa/qca/ar9331.c
++++ b/drivers/net/dsa/qca/ar9331.c
+@@ -837,16 +837,24 @@ static int ar9331_mdio_write(void *ctx, u32 reg, u32 val)
+ 		return 0;
+ 	}
+ 
+-	ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg, val);
++	/* On this switch we work with 32-bit registers on top of a 16-bit
++	 * bus. Some registers (for example, access to the forwarding
++	 * database) have a trigger bit in the first 16-bit half of the
++	 * request, and the result and configuration of the request in the
++	 * second half. To make it work properly, we should do the second
++	 * part of the transfer before the first one is done.
++	 */
++	ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg + 2,
++				  val >> 16);
+ 	if (ret < 0)
+ 		goto error;
+ 
+-	ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg + 2,
+-				  val >> 16);
++	ret = __ar9331_mdio_write(sbus, AR9331_SW_MDIO_PHY_MODE_REG, reg, val);
+ 	if (ret < 0)
+ 		goto error;
+ 
+ 	return 0;
++
+ error:
+ 	dev_err_ratelimited(&sbus->dev, "Bus error. Failed to write register.\n");
+ 	return ret;
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index 5b7947832b877..4b05a2424623c 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -1308,10 +1308,11 @@ static int sja1105et_is_fdb_entry_in_bin(struct sja1105_private *priv, int bin,
+ int sja1105et_fdb_add(struct dsa_switch *ds, int port,
+ 		      const unsigned char *addr, u16 vid)
+ {
+-	struct sja1105_l2_lookup_entry l2_lookup = {0};
++	struct sja1105_l2_lookup_entry l2_lookup = {0}, tmp;
+ 	struct sja1105_private *priv = ds->priv;
+ 	struct device *dev = ds->dev;
+ 	int last_unused = -1;
++	int start, end, i;
+ 	int bin, way, rc;
+ 
+ 	bin = sja1105et_fdb_hash(priv, addr, vid);
+@@ -1323,7 +1324,7 @@ int sja1105et_fdb_add(struct dsa_switch *ds, int port,
+ 		 * mask? If yes, we need to do nothing. If not, we need
+ 		 * to rewrite the entry by adding this port to it.
+ 		 */
+-		if (l2_lookup.destports & BIT(port))
++		if ((l2_lookup.destports & BIT(port)) && l2_lookup.lockeds)
+ 			return 0;
+ 		l2_lookup.destports |= BIT(port);
+ 	} else {
+@@ -1354,6 +1355,7 @@ int sja1105et_fdb_add(struct dsa_switch *ds, int port,
+ 						     index, NULL, false);
+ 		}
+ 	}
++	l2_lookup.lockeds = true;
+ 	l2_lookup.index = sja1105et_fdb_index(bin, way);
+ 
+ 	rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
+@@ -1362,6 +1364,29 @@ int sja1105et_fdb_add(struct dsa_switch *ds, int port,
+ 	if (rc < 0)
+ 		return rc;
+ 
++	/* Invalidate a dynamically learned entry if one exists */
++	start = sja1105et_fdb_index(bin, 0);
++	end = sja1105et_fdb_index(bin, way);
++
++	for (i = start; i < end; i++) {
++		rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
++						 i, &tmp);
++		if (rc == -ENOENT)
++			continue;
++		if (rc)
++			return rc;
++
++		if (tmp.macaddr != ether_addr_to_u64(addr) || tmp.vlanid != vid)
++			continue;
++
++		rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
++						  i, NULL, false);
++		if (rc)
++			return rc;
++
++		break;
++	}
++
+ 	return sja1105_static_fdb_change(priv, port, &l2_lookup, true);
+ }
+ 
+@@ -1403,32 +1428,30 @@ int sja1105et_fdb_del(struct dsa_switch *ds, int port,
+ int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port,
+ 			const unsigned char *addr, u16 vid)
+ {
+-	struct sja1105_l2_lookup_entry l2_lookup = {0};
++	struct sja1105_l2_lookup_entry l2_lookup = {0}, tmp;
+ 	struct sja1105_private *priv = ds->priv;
+ 	int rc, i;
+ 
+ 	/* Search for an existing entry in the FDB table */
+ 	l2_lookup.macaddr = ether_addr_to_u64(addr);
+ 	l2_lookup.vlanid = vid;
+-	l2_lookup.iotag = SJA1105_S_TAG;
+ 	l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
+-	if (priv->vlan_state != SJA1105_VLAN_UNAWARE) {
+-		l2_lookup.mask_vlanid = VLAN_VID_MASK;
+-		l2_lookup.mask_iotag = BIT(0);
+-	} else {
+-		l2_lookup.mask_vlanid = 0;
+-		l2_lookup.mask_iotag = 0;
+-	}
++	l2_lookup.mask_vlanid = VLAN_VID_MASK;
+ 	l2_lookup.destports = BIT(port);
+ 
++	tmp = l2_lookup;
++
+ 	rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
+-					 SJA1105_SEARCH, &l2_lookup);
+-	if (rc == 0) {
+-		/* Found and this port is already in the entry's
++					 SJA1105_SEARCH, &tmp);
++	if (rc == 0 && tmp.index != SJA1105_MAX_L2_LOOKUP_COUNT - 1) {
++		/* Found a static entry and this port is already in the entry's
+ 		 * port mask => job done
+ 		 */
+-		if (l2_lookup.destports & BIT(port))
++		if ((tmp.destports & BIT(port)) && tmp.lockeds)
+ 			return 0;
++
++		l2_lookup = tmp;
++
+ 		/* l2_lookup.index is populated by the switch in case it
+ 		 * found something.
+ 		 */
+@@ -1450,16 +1473,46 @@ int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port,
+ 		dev_err(ds->dev, "FDB is full, cannot add entry.\n");
+ 		return -EINVAL;
+ 	}
+-	l2_lookup.lockeds = true;
+ 	l2_lookup.index = i;
+ 
+ skip_finding_an_index:
++	l2_lookup.lockeds = true;
++
+ 	rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
+ 					  l2_lookup.index, &l2_lookup,
+ 					  true);
+ 	if (rc < 0)
+ 		return rc;
+ 
++	/* The switch learns dynamic entries and looks up the FDB left to
++	 * right. It is possible that our addition was concurrent with the
++	 * dynamic learning of the same address, so now that the static entry
++	 * has been installed, we are certain that address learning for this
++	 * particular address has been turned off, so the dynamic entry either
++	 * is in the FDB at an index smaller than the static one, or isn't (it
++	 * can also be at a larger index, but in that case it is inactive
++	 * because the static FDB entry will match first, and the dynamic one
++	 * will eventually age out). Search for a dynamically learned address
++	 * prior to our static one and invalidate it.
++	 */
++	tmp = l2_lookup;
++
++	rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
++					 SJA1105_SEARCH, &tmp);
++	if (rc < 0) {
++		dev_err(ds->dev,
++			"port %d failed to read back entry for %pM vid %d: %pe\n",
++			port, addr, vid, ERR_PTR(rc));
++		return rc;
++	}
++
++	if (tmp.index < l2_lookup.index) {
++		rc = sja1105_dynamic_config_write(priv, BLK_IDX_L2_LOOKUP,
++						  tmp.index, NULL, false);
++		if (rc < 0)
++			return rc;
++	}
++
+ 	return sja1105_static_fdb_change(priv, port, &l2_lookup, true);
+ }
+ 
+@@ -1473,15 +1526,8 @@ int sja1105pqrs_fdb_del(struct dsa_switch *ds, int port,
+ 
+ 	l2_lookup.macaddr = ether_addr_to_u64(addr);
+ 	l2_lookup.vlanid = vid;
+-	l2_lookup.iotag = SJA1105_S_TAG;
+ 	l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
+-	if (priv->vlan_state != SJA1105_VLAN_UNAWARE) {
+-		l2_lookup.mask_vlanid = VLAN_VID_MASK;
+-		l2_lookup.mask_iotag = BIT(0);
+-	} else {
+-		l2_lookup.mask_vlanid = 0;
+-		l2_lookup.mask_iotag = 0;
+-	}
++	l2_lookup.mask_vlanid = VLAN_VID_MASK;
+ 	l2_lookup.destports = BIT(port);
+ 
+ 	rc = sja1105_dynamic_config_read(priv, BLK_IDX_L2_LOOKUP,
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index 1a6ec1a12d531..b5d954cb409ae 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -2669,7 +2669,8 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
+ 	}
+ 
+ 	/* Allocated memory for FW statistics  */
+-	if (bnx2x_alloc_fw_stats_mem(bp))
++	rc = bnx2x_alloc_fw_stats_mem(bp);
++	if (rc)
+ 		LOAD_ERROR_EXIT(bp, load_error0);
+ 
+ 	/* request pf to initialize status blocks */
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 8aea707a65a77..7e4c4980ced79 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3843,13 +3843,13 @@ fec_drv_remove(struct platform_device *pdev)
+ 	if (of_phy_is_fixed_link(np))
+ 		of_phy_deregister_fixed_link(np);
+ 	of_node_put(fep->phy_node);
+-	free_netdev(ndev);
+ 
+ 	clk_disable_unprepare(fep->clk_ahb);
+ 	clk_disable_unprepare(fep->clk_ipg);
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 
++	free_netdev(ndev);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/natsemi/natsemi.c b/drivers/net/ethernet/natsemi/natsemi.c
+index b81e1487945c8..14a17ad730f03 100644
+--- a/drivers/net/ethernet/natsemi/natsemi.c
++++ b/drivers/net/ethernet/natsemi/natsemi.c
+@@ -819,7 +819,7 @@ static int natsemi_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		printk(version);
+ #endif
+ 
+-	i = pci_enable_device(pdev);
++	i = pcim_enable_device(pdev);
+ 	if (i) return i;
+ 
+ 	/* natsemi has a non-standard PM control register
+@@ -852,7 +852,7 @@ static int natsemi_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	ioaddr = ioremap(iostart, iosize);
+ 	if (!ioaddr) {
+ 		i = -ENOMEM;
+-		goto err_ioremap;
++		goto err_pci_request_regions;
+ 	}
+ 
+ 	/* Work around the dropped serial bit. */
+@@ -974,9 +974,6 @@ static int natsemi_probe1(struct pci_dev *pdev, const struct pci_device_id *ent)
+  err_register_netdev:
+ 	iounmap(ioaddr);
+ 
+- err_ioremap:
+-	pci_release_regions(pdev);
+-
+  err_pci_request_regions:
+ 	free_netdev(dev);
+ 	return i;
+@@ -3241,7 +3238,6 @@ static void natsemi_remove1(struct pci_dev *pdev)
+ 
+ 	NATSEMI_REMOVE_FILE(pdev, dspcfg_workaround);
+ 	unregister_netdev (dev);
+-	pci_release_regions (pdev);
+ 	iounmap(ioaddr);
+ 	free_netdev (dev);
+ }
+diff --git a/drivers/net/ethernet/neterion/vxge/vxge-main.c b/drivers/net/ethernet/neterion/vxge/vxge-main.c
+index 87892bd992b18..56556373548c4 100644
+--- a/drivers/net/ethernet/neterion/vxge/vxge-main.c
++++ b/drivers/net/ethernet/neterion/vxge/vxge-main.c
+@@ -3527,13 +3527,13 @@ static void vxge_device_unregister(struct __vxge_hw_device *hldev)
+ 
+ 	kfree(vdev->vpaths);
+ 
+-	/* we are safe to free it now */
+-	free_netdev(dev);
+-
+ 	vxge_debug_init(vdev->level_trace, "%s: ethernet device unregistered",
+ 			buf);
+ 	vxge_debug_entryexit(vdev->level_trace,	"%s: %s:%d  Exiting...", buf,
+ 			     __func__, __LINE__);
++
++	/* we are safe to free it now */
++	free_netdev(dev);
+ }
+ 
+ /*
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index 1b482446536dc..8803faadd3020 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -286,6 +286,8 @@ nfp_net_get_link_ksettings(struct net_device *netdev,
+ 
+ 	/* Init to unknowns */
+ 	ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
++	ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
++	ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
+ 	cmd->base.port = PORT_OTHER;
+ 	cmd->base.speed = SPEED_UNKNOWN;
+ 	cmd->base.duplex = DUPLEX_UNKNOWN;
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c
+index c59b72c902932..a2e4dfb5cb44e 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c
+@@ -831,7 +831,7 @@ int qede_configure_vlan_filters(struct qede_dev *edev)
+ int qede_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid)
+ {
+ 	struct qede_dev *edev = netdev_priv(dev);
+-	struct qede_vlan *vlan = NULL;
++	struct qede_vlan *vlan;
+ 	int rc = 0;
+ 
+ 	DP_VERBOSE(edev, NETIF_MSG_IFDOWN, "Removing vlan 0x%04x\n", vid);
+@@ -842,7 +842,7 @@ int qede_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid)
+ 		if (vlan->vid == vid)
+ 			break;
+ 
+-	if (!vlan || (vlan->vid != vid)) {
++	if (list_entry_is_head(vlan, &edev->vlan_list, list)) {
+ 		DP_VERBOSE(edev, (NETIF_MSG_IFUP | NETIF_MSG_IFDOWN),
+ 			   "Vlan isn't configured\n");
+ 		goto out;
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index 2376b2729633f..c00ad57575eab 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -154,7 +154,7 @@ static int ql_wait_for_drvr_lock(struct ql3_adapter *qdev)
+ 				      "driver lock acquired\n");
+ 			return 1;
+ 		}
+-		ssleep(1);
++		mdelay(1000);
+ 	} while (++i < 10);
+ 
+ 	netdev_err(qdev->ndev, "Timed out waiting for driver lock...\n");
+@@ -3274,7 +3274,7 @@ static int ql_adapter_reset(struct ql3_adapter *qdev)
+ 		if ((value & ISP_CONTROL_SR) == 0)
+ 			break;
+ 
+-		ssleep(1);
++		mdelay(1000);
+ 	} while ((--max_wait_time));
+ 
+ 	/*
+@@ -3310,7 +3310,7 @@ static int ql_adapter_reset(struct ql3_adapter *qdev)
+ 						   ispControlStatus);
+ 			if ((value & ISP_CONTROL_FSR) == 0)
+ 				break;
+-			ssleep(1);
++			mdelay(1000);
+ 		} while ((--max_wait_time));
+ 	}
+ 	if (max_wait_time == 0)
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 718539cdd2f2e..67a08cbba859d 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2060,8 +2060,12 @@ static void am65_cpsw_port_offload_fwd_mark_update(struct am65_cpsw_common *comm
+ 
+ 	for (i = 1; i <= common->port_num; i++) {
+ 		struct am65_cpsw_port *port = am65_common_get_port(common, i);
+-		struct am65_cpsw_ndev_priv *priv = am65_ndev_to_priv(port->ndev);
++		struct am65_cpsw_ndev_priv *priv;
+ 
++		if (!port->ndev)
++			continue;
++
++		priv = am65_ndev_to_priv(port->ndev);
+ 		priv->offload_fwd_mark = set_val;
+ 	}
+ }
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index a14a00328fa37..7afd9edaf2490 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -382,11 +382,11 @@ static int ksz8041_config_aneg(struct phy_device *phydev)
+ }
+ 
+ static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev,
+-					    const u32 ksz_phy_id)
++					    const bool ksz_8051)
+ {
+ 	int ret;
+ 
+-	if ((phydev->phy_id & MICREL_PHY_ID_MASK) != ksz_phy_id)
++	if ((phydev->phy_id & MICREL_PHY_ID_MASK) != PHY_ID_KSZ8051)
+ 		return 0;
+ 
+ 	ret = phy_read(phydev, MII_BMSR);
+@@ -399,7 +399,7 @@ static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev,
+ 	 * the switch does not.
+ 	 */
+ 	ret &= BMSR_ERCAP;
+-	if (ksz_phy_id == PHY_ID_KSZ8051)
++	if (ksz_8051)
+ 		return ret;
+ 	else
+ 		return !ret;
+@@ -407,7 +407,7 @@ static int ksz8051_ksz8795_match_phy_device(struct phy_device *phydev,
+ 
+ static int ksz8051_match_phy_device(struct phy_device *phydev)
+ {
+-	return ksz8051_ksz8795_match_phy_device(phydev, PHY_ID_KSZ8051);
++	return ksz8051_ksz8795_match_phy_device(phydev, true);
+ }
+ 
+ static int ksz8081_config_init(struct phy_device *phydev)
+@@ -435,7 +435,7 @@ static int ksz8061_config_init(struct phy_device *phydev)
+ 
+ static int ksz8795_match_phy_device(struct phy_device *phydev)
+ {
+-	return ksz8051_ksz8795_match_phy_device(phydev, PHY_ID_KSZ87XX);
++	return ksz8051_ksz8795_match_phy_device(phydev, false);
+ }
+ 
+ static int ksz9021_load_values_from_of(struct phy_device *phydev,
+diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c
+index 9a907182569cf..bc2dbf86496b5 100644
+--- a/drivers/net/usb/pegasus.c
++++ b/drivers/net/usb/pegasus.c
+@@ -735,12 +735,16 @@ static inline void disable_net_traffic(pegasus_t *pegasus)
+ 	set_registers(pegasus, EthCtrl0, sizeof(tmp), &tmp);
+ }
+ 
+-static inline void get_interrupt_interval(pegasus_t *pegasus)
++static inline int get_interrupt_interval(pegasus_t *pegasus)
+ {
+ 	u16 data;
+ 	u8 interval;
++	int ret;
++
++	ret = read_eprom_word(pegasus, 4, &data);
++	if (ret < 0)
++		return ret;
+ 
+-	read_eprom_word(pegasus, 4, &data);
+ 	interval = data >> 8;
+ 	if (pegasus->usb->speed != USB_SPEED_HIGH) {
+ 		if (interval < 0x80) {
+@@ -755,6 +759,8 @@ static inline void get_interrupt_interval(pegasus_t *pegasus)
+ 		}
+ 	}
+ 	pegasus->intr_interval = interval;
++
++	return 0;
+ }
+ 
+ static void set_carrier(struct net_device *net)
+@@ -1149,7 +1155,9 @@ static int pegasus_probe(struct usb_interface *intf,
+ 				| NETIF_MSG_PROBE | NETIF_MSG_LINK);
+ 
+ 	pegasus->features = usb_dev_id[dev_index].private;
+-	get_interrupt_interval(pegasus);
++	res = get_interrupt_interval(pegasus);
++	if (res)
++		goto out2;
+ 	if (reset_mac(pegasus)) {
+ 		dev_err(&intf->dev, "can't reset MAC\n");
+ 		res = -EIO;
+diff --git a/drivers/net/wireless/virt_wifi.c b/drivers/net/wireless/virt_wifi.c
+index 1df959532c7d3..514f2c1124b61 100644
+--- a/drivers/net/wireless/virt_wifi.c
++++ b/drivers/net/wireless/virt_wifi.c
+@@ -136,6 +136,29 @@ static struct ieee80211_supported_band band_5ghz = {
+ /* Assigned at module init. Guaranteed locally-administered and unicast. */
+ static u8 fake_router_bssid[ETH_ALEN] __ro_after_init = {};
+ 
++static void virt_wifi_inform_bss(struct wiphy *wiphy)
++{
++	u64 tsf = div_u64(ktime_get_boottime_ns(), 1000);
++	struct cfg80211_bss *informed_bss;
++	static const struct {
++		u8 tag;
++		u8 len;
++		u8 ssid[8];
++	} __packed ssid = {
++		.tag = WLAN_EID_SSID,
++		.len = 8,
++		.ssid = "VirtWifi",
++	};
++
++	informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz,
++					   CFG80211_BSS_FTYPE_PRESP,
++					   fake_router_bssid, tsf,
++					   WLAN_CAPABILITY_ESS, 0,
++					   (void *)&ssid, sizeof(ssid),
++					   DBM_TO_MBM(-50), GFP_KERNEL);
++	cfg80211_put_bss(wiphy, informed_bss);
++}
++
+ /* Called with the rtnl lock held. */
+ static int virt_wifi_scan(struct wiphy *wiphy,
+ 			  struct cfg80211_scan_request *request)
+@@ -156,28 +179,13 @@ static int virt_wifi_scan(struct wiphy *wiphy,
+ /* Acquires and releases the rdev BSS lock. */
+ static void virt_wifi_scan_result(struct work_struct *work)
+ {
+-	struct {
+-		u8 tag;
+-		u8 len;
+-		u8 ssid[8];
+-	} __packed ssid = {
+-		.tag = WLAN_EID_SSID, .len = 8, .ssid = "VirtWifi",
+-	};
+-	struct cfg80211_bss *informed_bss;
+ 	struct virt_wifi_wiphy_priv *priv =
+ 		container_of(work, struct virt_wifi_wiphy_priv,
+ 			     scan_result.work);
+ 	struct wiphy *wiphy = priv_to_wiphy(priv);
+ 	struct cfg80211_scan_info scan_info = { .aborted = false };
+-	u64 tsf = div_u64(ktime_get_boottime_ns(), 1000);
+ 
+-	informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz,
+-					   CFG80211_BSS_FTYPE_PRESP,
+-					   fake_router_bssid, tsf,
+-					   WLAN_CAPABILITY_ESS, 0,
+-					   (void *)&ssid, sizeof(ssid),
+-					   DBM_TO_MBM(-50), GFP_KERNEL);
+-	cfg80211_put_bss(wiphy, informed_bss);
++	virt_wifi_inform_bss(wiphy);
+ 
+ 	/* Schedules work which acquires and releases the rtnl lock. */
+ 	cfg80211_scan_done(priv->scan_request, &scan_info);
+@@ -225,10 +233,12 @@ static int virt_wifi_connect(struct wiphy *wiphy, struct net_device *netdev,
+ 	if (!could_schedule)
+ 		return -EBUSY;
+ 
+-	if (sme->bssid)
++	if (sme->bssid) {
+ 		ether_addr_copy(priv->connect_requested_bss, sme->bssid);
+-	else
++	} else {
++		virt_wifi_inform_bss(wiphy);
+ 		eth_zero_addr(priv->connect_requested_bss);
++	}
+ 
+ 	wiphy_debug(wiphy, "connect\n");
+ 
+@@ -241,11 +251,13 @@ static void virt_wifi_connect_complete(struct work_struct *work)
+ 	struct virt_wifi_netdev_priv *priv =
+ 		container_of(work, struct virt_wifi_netdev_priv, connect.work);
+ 	u8 *requested_bss = priv->connect_requested_bss;
+-	bool has_addr = !is_zero_ether_addr(requested_bss);
+ 	bool right_addr = ether_addr_equal(requested_bss, fake_router_bssid);
+ 	u16 status = WLAN_STATUS_SUCCESS;
+ 
+-	if (!priv->is_up || (has_addr && !right_addr))
++	if (is_zero_ether_addr(requested_bss))
++		requested_bss = NULL;
++
++	if (!priv->is_up || (requested_bss && !right_addr))
+ 		status = WLAN_STATUS_UNSPECIFIED_FAILURE;
+ 	else
+ 		priv->is_connected = true;
+diff --git a/drivers/pcmcia/i82092.c b/drivers/pcmcia/i82092.c
+index 85887d885b5f3..192c9049d654f 100644
+--- a/drivers/pcmcia/i82092.c
++++ b/drivers/pcmcia/i82092.c
+@@ -112,6 +112,7 @@ static int i82092aa_pci_probe(struct pci_dev *dev,
+ 	for (i = 0; i < socket_count; i++) {
+ 		sockets[i].card_state = 1; /* 1 = present but empty */
+ 		sockets[i].io_base = pci_resource_start(dev, 0);
++		sockets[i].dev = dev;
+ 		sockets[i].socket.features |= SS_CAP_PCCARD;
+ 		sockets[i].socket.map_size = 0x1000;
+ 		sockets[i].socket.irq_mask = 0;
+diff --git a/drivers/platform/x86/gigabyte-wmi.c b/drivers/platform/x86/gigabyte-wmi.c
+index 5529d7b0abea3..fbb224a82e34c 100644
+--- a/drivers/platform/x86/gigabyte-wmi.c
++++ b/drivers/platform/x86/gigabyte-wmi.c
+@@ -141,6 +141,7 @@ static u8 gigabyte_wmi_detect_sensor_usability(struct wmi_device *wdev)
+ 
+ static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = {
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE"),
++	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE V2"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 GAMING X V2"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M AORUS PRO-P"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M DS3H"),
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index a6ac505cbdd7d..701c1d0094ec6 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -1004,15 +1004,23 @@ static unsigned char dasd_eckd_path_access(void *conf_data, int conf_len)
+ static void dasd_eckd_store_conf_data(struct dasd_device *device,
+ 				      struct dasd_conf_data *conf_data, int chp)
+ {
++	struct dasd_eckd_private *private = device->private;
+ 	struct channel_path_desc_fmt0 *chp_desc;
+ 	struct subchannel_id sch_id;
++	void *cdp;
+ 
+-	ccw_device_get_schid(device->cdev, &sch_id);
+ 	/*
+ 	 * path handling and read_conf allocate data
+ 	 * free it before replacing the pointer
++	 * also replace the old private->conf_data pointer
++	 * with the new one if this points to the same data
+ 	 */
+-	kfree(device->path[chp].conf_data);
++	cdp = device->path[chp].conf_data;
++	if (private->conf_data == cdp) {
++		private->conf_data = (void *)conf_data;
++		dasd_eckd_identify_conf_parts(private);
++	}
++	ccw_device_get_schid(device->cdev, &sch_id);
+ 	device->path[chp].conf_data = conf_data;
+ 	device->path[chp].cssid = sch_id.cssid;
+ 	device->path[chp].ssid = sch_id.ssid;
+@@ -1020,6 +1028,7 @@ static void dasd_eckd_store_conf_data(struct dasd_device *device,
+ 	if (chp_desc)
+ 		device->path[chp].chpid = chp_desc->chpid;
+ 	kfree(chp_desc);
++	kfree(cdp);
+ }
+ 
+ static void dasd_eckd_clear_conf_data(struct dasd_device *device)
+diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
+index 6540d48eb0e8e..23fd361343b48 100644
+--- a/drivers/scsi/ibmvscsi/ibmvfc.c
++++ b/drivers/scsi/ibmvscsi/ibmvfc.c
+@@ -804,6 +804,13 @@ static int ibmvfc_init_event_pool(struct ibmvfc_host *vhost,
+ 	for (i = 0; i < size; ++i) {
+ 		struct ibmvfc_event *evt = &pool->events[i];
+ 
++		/*
++		 * evt->active states
++		 *  1 = in flight
++		 *  0 = being completed
++		 * -1 = free/freed
++		 */
++		atomic_set(&evt->active, -1);
+ 		atomic_set(&evt->free, 1);
+ 		evt->crq.valid = 0x80;
+ 		evt->crq.ioba = cpu_to_be64(pool->iu_token + (sizeof(*evt->xfer_iu) * i));
+@@ -1014,6 +1021,7 @@ static void ibmvfc_free_event(struct ibmvfc_event *evt)
+ 
+ 	BUG_ON(!ibmvfc_valid_event(pool, evt));
+ 	BUG_ON(atomic_inc_return(&evt->free) != 1);
++	BUG_ON(atomic_dec_and_test(&evt->active));
+ 
+ 	spin_lock_irqsave(&evt->queue->l_lock, flags);
+ 	list_add_tail(&evt->queue_list, &evt->queue->free);
+@@ -1069,6 +1077,12 @@ static void ibmvfc_complete_purge(struct list_head *purge_list)
+  **/
+ static void ibmvfc_fail_request(struct ibmvfc_event *evt, int error_code)
+ {
++	/*
++	 * Anything we are failing should still be active. Otherwise, it
++	 * implies we already got a response for the command and are doing
++	 * something bad like double completing it.
++	 */
++	BUG_ON(!atomic_dec_and_test(&evt->active));
+ 	if (evt->cmnd) {
+ 		evt->cmnd->result = (error_code << 16);
+ 		evt->done = ibmvfc_scsi_eh_done;
+@@ -1720,6 +1734,7 @@ static int ibmvfc_send_event(struct ibmvfc_event *evt,
+ 
+ 		evt->done(evt);
+ 	} else {
++		atomic_set(&evt->active, 1);
+ 		spin_unlock_irqrestore(&evt->queue->l_lock, flags);
+ 		ibmvfc_trc_start(evt);
+ 	}
+@@ -3248,7 +3263,7 @@ static void ibmvfc_handle_crq(struct ibmvfc_crq *crq, struct ibmvfc_host *vhost,
+ 		return;
+ 	}
+ 
+-	if (unlikely(atomic_read(&evt->free))) {
++	if (unlikely(atomic_dec_if_positive(&evt->active))) {
+ 		dev_err(vhost->dev, "Received duplicate correlation_token 0x%08llx!\n",
+ 			crq->ioba);
+ 		return;
+@@ -3775,7 +3790,7 @@ static void ibmvfc_handle_scrq(struct ibmvfc_crq *crq, struct ibmvfc_host *vhost
+ 		return;
+ 	}
+ 
+-	if (unlikely(atomic_read(&evt->free))) {
++	if (unlikely(atomic_dec_if_positive(&evt->active))) {
+ 		dev_err(vhost->dev, "Received duplicate correlation_token 0x%08llx!\n",
+ 			crq->ioba);
+ 		return;
+diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h
+index 19dcec3ae9ba7..994846ec64c6b 100644
+--- a/drivers/scsi/ibmvscsi/ibmvfc.h
++++ b/drivers/scsi/ibmvscsi/ibmvfc.h
+@@ -744,6 +744,7 @@ struct ibmvfc_event {
+ 	struct ibmvfc_target *tgt;
+ 	struct scsi_cmnd *cmnd;
+ 	atomic_t free;
++	atomic_t active;
+ 	union ibmvfc_iu *xfer_iu;
+ 	void (*done)(struct ibmvfc_event *evt);
+ 	void (*_done)(struct ibmvfc_event *evt);
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index 1a94c7b1de2df..261d3663cbb70 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -221,7 +221,7 @@ static unsigned int sr_get_events(struct scsi_device *sdev)
+ 	else if (med->media_event_code == 2)
+ 		return DISK_EVENT_MEDIA_CHANGE;
+ 	else if (med->media_event_code == 3)
+-		return DISK_EVENT_EJECT_REQUEST;
++		return DISK_EVENT_MEDIA_CHANGE;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/soc/imx/soc-imx8m.c b/drivers/soc/imx/soc-imx8m.c
+index 071e14496e4ba..cc57a384d74d2 100644
+--- a/drivers/soc/imx/soc-imx8m.c
++++ b/drivers/soc/imx/soc-imx8m.c
+@@ -5,8 +5,6 @@
+ 
+ #include <linux/init.h>
+ #include <linux/io.h>
+-#include <linux/module.h>
+-#include <linux/nvmem-consumer.h>
+ #include <linux/of_address.h>
+ #include <linux/slab.h>
+ #include <linux/sys_soc.h>
+@@ -31,7 +29,7 @@
+ 
+ struct imx8_soc_data {
+ 	char *name;
+-	u32 (*soc_revision)(struct device *dev);
++	u32 (*soc_revision)(void);
+ };
+ 
+ static u64 soc_uid;
+@@ -52,7 +50,7 @@ static u32 imx8mq_soc_revision_from_atf(void)
+ static inline u32 imx8mq_soc_revision_from_atf(void) { return 0; };
+ #endif
+ 
+-static u32 __init imx8mq_soc_revision(struct device *dev)
++static u32 __init imx8mq_soc_revision(void)
+ {
+ 	struct device_node *np;
+ 	void __iomem *ocotp_base;
+@@ -77,20 +75,9 @@ static u32 __init imx8mq_soc_revision(struct device *dev)
+ 			rev = REV_B1;
+ 	}
+ 
+-	if (dev) {
+-		int ret;
+-
+-		ret = nvmem_cell_read_u64(dev, "soc_unique_id", &soc_uid);
+-		if (ret) {
+-			iounmap(ocotp_base);
+-			of_node_put(np);
+-			return ret;
+-		}
+-	} else {
+-		soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH);
+-		soc_uid <<= 32;
+-		soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
+-	}
++	soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH);
++	soc_uid <<= 32;
++	soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
+ 
+ 	iounmap(ocotp_base);
+ 	of_node_put(np);
+@@ -120,7 +107,7 @@ static void __init imx8mm_soc_uid(void)
+ 	of_node_put(np);
+ }
+ 
+-static u32 __init imx8mm_soc_revision(struct device *dev)
++static u32 __init imx8mm_soc_revision(void)
+ {
+ 	struct device_node *np;
+ 	void __iomem *anatop_base;
+@@ -138,15 +125,7 @@ static u32 __init imx8mm_soc_revision(struct device *dev)
+ 	iounmap(anatop_base);
+ 	of_node_put(np);
+ 
+-	if (dev) {
+-		int ret;
+-
+-		ret = nvmem_cell_read_u64(dev, "soc_unique_id", &soc_uid);
+-		if (ret)
+-			return ret;
+-	} else {
+-		imx8mm_soc_uid();
+-	}
++	imx8mm_soc_uid();
+ 
+ 	return rev;
+ }
+@@ -171,7 +150,7 @@ static const struct imx8_soc_data imx8mp_soc_data = {
+ 	.soc_revision = imx8mm_soc_revision,
+ };
+ 
+-static __maybe_unused const struct of_device_id imx8_machine_match[] = {
++static __maybe_unused const struct of_device_id imx8_soc_match[] = {
+ 	{ .compatible = "fsl,imx8mq", .data = &imx8mq_soc_data, },
+ 	{ .compatible = "fsl,imx8mm", .data = &imx8mm_soc_data, },
+ 	{ .compatible = "fsl,imx8mn", .data = &imx8mn_soc_data, },
+@@ -179,20 +158,12 @@ static __maybe_unused const struct of_device_id imx8_machine_match[] = {
+ 	{ }
+ };
+ 
+-static __maybe_unused const struct of_device_id imx8_soc_match[] = {
+-	{ .compatible = "fsl,imx8mq-soc", .data = &imx8mq_soc_data, },
+-	{ .compatible = "fsl,imx8mm-soc", .data = &imx8mm_soc_data, },
+-	{ .compatible = "fsl,imx8mn-soc", .data = &imx8mn_soc_data, },
+-	{ .compatible = "fsl,imx8mp-soc", .data = &imx8mp_soc_data, },
+-	{ }
+-};
+-
+ #define imx8_revision(soc_rev) \
+ 	soc_rev ? \
+ 	kasprintf(GFP_KERNEL, "%d.%d", (soc_rev >> 4) & 0xf,  soc_rev & 0xf) : \
+ 	"unknown"
+ 
+-static int imx8_soc_info(struct platform_device *pdev)
++static int __init imx8_soc_init(void)
+ {
+ 	struct soc_device_attribute *soc_dev_attr;
+ 	struct soc_device *soc_dev;
+@@ -211,10 +182,7 @@ static int imx8_soc_info(struct platform_device *pdev)
+ 	if (ret)
+ 		goto free_soc;
+ 
+-	if (pdev)
+-		id = of_match_node(imx8_soc_match, pdev->dev.of_node);
+-	else
+-		id = of_match_node(imx8_machine_match, of_root);
++	id = of_match_node(imx8_soc_match, of_root);
+ 	if (!id) {
+ 		ret = -ENODEV;
+ 		goto free_soc;
+@@ -223,16 +191,8 @@ static int imx8_soc_info(struct platform_device *pdev)
+ 	data = id->data;
+ 	if (data) {
+ 		soc_dev_attr->soc_id = data->name;
+-		if (data->soc_revision) {
+-			if (pdev) {
+-				soc_rev = data->soc_revision(&pdev->dev);
+-				ret = soc_rev;
+-				if (ret < 0)
+-					goto free_soc;
+-			} else {
+-				soc_rev = data->soc_revision(NULL);
+-			}
+-		}
++		if (data->soc_revision)
++			soc_rev = data->soc_revision();
+ 	}
+ 
+ 	soc_dev_attr->revision = imx8_revision(soc_rev);
+@@ -270,24 +230,4 @@ free_soc:
+ 	kfree(soc_dev_attr);
+ 	return ret;
+ }
+-
+-/* Retain device_initcall is for backward compatibility with DTS. */
+-static int __init imx8_soc_init(void)
+-{
+-	if (of_find_matching_node_and_match(NULL, imx8_soc_match, NULL))
+-		return 0;
+-
+-	return imx8_soc_info(NULL);
+-}
+ device_initcall(imx8_soc_init);
+-
+-static struct platform_driver imx8_soc_info_driver = {
+-	.probe = imx8_soc_info,
+-	.driver = {
+-		.name = "imx8_soc_info",
+-		.of_match_table = imx8_soc_match,
+-	},
+-};
+-
+-module_platform_driver(imx8_soc_info_driver);
+-MODULE_LICENSE("GPL v2");
+diff --git a/drivers/soc/ixp4xx/ixp4xx-npe.c b/drivers/soc/ixp4xx/ixp4xx-npe.c
+index ec90b44fa0cd3..6065aaab67403 100644
+--- a/drivers/soc/ixp4xx/ixp4xx-npe.c
++++ b/drivers/soc/ixp4xx/ixp4xx-npe.c
+@@ -690,8 +690,8 @@ static int ixp4xx_npe_probe(struct platform_device *pdev)
+ 
+ 		if (!(ixp4xx_read_feature_bits() &
+ 		      (IXP4XX_FEATURE_RESET_NPEA << i))) {
+-			dev_info(dev, "NPE%d at 0x%08x-0x%08x not available\n",
+-				 i, res->start, res->end);
++			dev_info(dev, "NPE%d at %pR not available\n",
++				 i, res);
+ 			continue; /* NPE already disabled or not present */
+ 		}
+ 		npe->regs = devm_ioremap_resource(dev, res);
+@@ -699,13 +699,12 @@ static int ixp4xx_npe_probe(struct platform_device *pdev)
+ 			return PTR_ERR(npe->regs);
+ 
+ 		if (npe_reset(npe)) {
+-			dev_info(dev, "NPE%d at 0x%08x-0x%08x does not reset\n",
+-				 i, res->start, res->end);
++			dev_info(dev, "NPE%d at %pR does not reset\n",
++				 i, res);
+ 			continue;
+ 		}
+ 		npe->valid = 1;
+-		dev_info(dev, "NPE%d at 0x%08x-0x%08x registered\n",
+-			 i, res->start, res->end);
++		dev_info(dev, "NPE%d at %pR registered\n", i, res);
+ 		found++;
+ 	}
+ 
+diff --git a/drivers/soc/ixp4xx/ixp4xx-qmgr.c b/drivers/soc/ixp4xx/ixp4xx-qmgr.c
+index 8c968382cea76..065a800717bd5 100644
+--- a/drivers/soc/ixp4xx/ixp4xx-qmgr.c
++++ b/drivers/soc/ixp4xx/ixp4xx-qmgr.c
+@@ -145,12 +145,12 @@ static irqreturn_t qmgr_irq1_a0(int irq, void *pdev)
+ 	/* ACK - it may clear any bits so don't rely on it */
+ 	__raw_writel(0xFFFFFFFF, &qmgr_regs->irqstat[0]);
+ 
+-	en_bitmap = qmgr_regs->irqen[0];
++	en_bitmap = __raw_readl(&qmgr_regs->irqen[0]);
+ 	while (en_bitmap) {
+ 		i = __fls(en_bitmap); /* number of the last "low" queue */
+ 		en_bitmap &= ~BIT(i);
+-		src = qmgr_regs->irqsrc[i >> 3];
+-		stat = qmgr_regs->stat1[i >> 3];
++		src = __raw_readl(&qmgr_regs->irqsrc[i >> 3]);
++		stat = __raw_readl(&qmgr_regs->stat1[i >> 3]);
+ 		if (src & 4) /* the IRQ condition is inverted */
+ 			stat = ~stat;
+ 		if (stat & BIT(src & 3)) {
+@@ -170,7 +170,8 @@ static irqreturn_t qmgr_irq2_a0(int irq, void *pdev)
+ 	/* ACK - it may clear any bits so don't rely on it */
+ 	__raw_writel(0xFFFFFFFF, &qmgr_regs->irqstat[1]);
+ 
+-	req_bitmap = qmgr_regs->irqen[1] & qmgr_regs->statne_h;
++	req_bitmap = __raw_readl(&qmgr_regs->irqen[1]) &
++		     __raw_readl(&qmgr_regs->statne_h);
+ 	while (req_bitmap) {
+ 		i = __fls(req_bitmap); /* number of the last "high" queue */
+ 		req_bitmap &= ~BIT(i);
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 39dc02e366f4b..2872993550bd5 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -505,8 +505,10 @@ static int mx51_ecspi_prepare_message(struct spi_imx_data *spi_imx,
+ 				      struct spi_message *msg)
+ {
+ 	struct spi_device *spi = msg->spi;
++	struct spi_transfer *xfer;
+ 	u32 ctrl = MX51_ECSPI_CTRL_ENABLE;
+-	u32 testreg;
++	u32 min_speed_hz = ~0U;
++	u32 testreg, delay;
+ 	u32 cfg = readl(spi_imx->base + MX51_ECSPI_CONFIG);
+ 
+ 	/* set Master or Slave mode */
+@@ -567,6 +569,35 @@ static int mx51_ecspi_prepare_message(struct spi_imx_data *spi_imx,
+ 
+ 	writel(cfg, spi_imx->base + MX51_ECSPI_CONFIG);
+ 
++	/*
++	 * Wait until the changes in the configuration register CONFIGREG
++	 * propagate into the hardware. It takes exactly one tick of the
++	 * SCLK clock, but we will wait two SCLK clocks just to be sure. The
++	 * effect of the delay it takes for the hardware to apply changes
++	 * is noticeable if the SCLK clock runs very slowly. In such a case, if
++	 * the polarity of SCLK should be inverted, the GPIO ChipSelect might
++	 * be asserted before the SCLK polarity changes, which would disrupt
++	 * the SPI communication as the device on the other end would consider
++	 * the change of SCLK polarity as a clock tick already.
++	 *
++	 * Because spi_imx->spi_bus_clk is only set in the bitbang
++	 * prepare_message callback, iterate over all the transfers in
++	 * spi_message, find the one with the lowest bus frequency, and use
++	 * that bus frequency for the delay calculation. If all transfers
++	 * have speed_hz == 0, min_speed_hz remains ~0 and the resulting
++	 * delay is zero.
++	 */
++	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
++		if (!xfer->speed_hz)
++			continue;
++		min_speed_hz = min(xfer->speed_hz, min_speed_hz);
++	}
++
++	delay = (2 * 1000000) / min_speed_hz;
++	if (likely(delay < 10))	/* SCLK is faster than 100 kHz */
++		udelay(delay);
++	else			/* SCLK is _very_ slow */
++		usleep_range(delay, delay + 10);
++
+ 	return 0;
+ }
+ 
+@@ -574,7 +605,7 @@ static int mx51_ecspi_prepare_transfer(struct spi_imx_data *spi_imx,
+ 				       struct spi_device *spi)
+ {
+ 	u32 ctrl = readl(spi_imx->base + MX51_ECSPI_CTRL);
+-	u32 clk, delay;
++	u32 clk;
+ 
+ 	/* Clear BL field and set the right value */
+ 	ctrl &= ~MX51_ECSPI_CTRL_BL_MASK;
+@@ -596,23 +627,6 @@ static int mx51_ecspi_prepare_transfer(struct spi_imx_data *spi_imx,
+ 
+ 	writel(ctrl, spi_imx->base + MX51_ECSPI_CTRL);
+ 
+-	/*
+-	 * Wait until the changes in the configuration register CONFIGREG
+-	 * propagate into the hardware. It takes exactly one tick of the
+-	 * SCLK clock, but we will wait two SCLK clock just to be sure. The
+-	 * effect of the delay it takes for the hardware to apply changes
+-	 * is noticable if the SCLK clock run very slow. In such a case, if
+-	 * the polarity of SCLK should be inverted, the GPIO ChipSelect might
+-	 * be asserted before the SCLK polarity changes, which would disrupt
+-	 * the SPI communication as the device on the other end would consider
+-	 * the change of SCLK polarity as a clock tick already.
+-	 */
+-	delay = (2 * 1000000) / clk;
+-	if (likely(delay < 10))	/* SCLK is faster than 100 kHz */
+-		udelay(delay);
+-	else			/* SCLK is _very_ slow */
+-		usleep_range(delay, delay + 10);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index b2c4621db34d7..c208efeadd184 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -785,6 +785,8 @@ static int meson_spicc_remove(struct platform_device *pdev)
+ 	clk_disable_unprepare(spicc->core);
+ 	clk_disable_unprepare(spicc->pclk);
+ 
++	spi_master_put(spicc->master);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/staging/rtl8712/hal_init.c b/drivers/staging/rtl8712/hal_init.c
+index 22974277afa08..4eff3fdecdb8a 100644
+--- a/drivers/staging/rtl8712/hal_init.c
++++ b/drivers/staging/rtl8712/hal_init.c
+@@ -29,21 +29,31 @@
+ #define FWBUFF_ALIGN_SZ 512
+ #define MAX_DUMP_FWSZ (48 * 1024)
+ 
++static void rtl871x_load_fw_fail(struct _adapter *adapter)
++{
++	struct usb_device *udev = adapter->dvobjpriv.pusbdev;
++	struct device *dev = &udev->dev;
++	struct device *parent = dev->parent;
++
++	complete(&adapter->rtl8712_fw_ready);
++
++	dev_err(&udev->dev, "r8712u: Firmware request failed\n");
++
++	if (parent)
++		device_lock(parent);
++
++	device_release_driver(dev);
++
++	if (parent)
++		device_unlock(parent);
++}
++
+ static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context)
+ {
+ 	struct _adapter *adapter = context;
+ 
+ 	if (!firmware) {
+-		struct usb_device *udev = adapter->dvobjpriv.pusbdev;
+-		struct usb_interface *usb_intf = adapter->pusb_intf;
+-
+-		dev_err(&udev->dev, "r8712u: Firmware request failed\n");
+-		usb_put_dev(udev);
+-		usb_set_intfdata(usb_intf, NULL);
+-		r8712_free_drv_sw(adapter);
+-		adapter->dvobj_deinit(adapter);
+-		complete(&adapter->rtl8712_fw_ready);
+-		free_netdev(adapter->pnetdev);
++		rtl871x_load_fw_fail(adapter);
+ 		return;
+ 	}
+ 	adapter->fw = firmware;
+diff --git a/drivers/staging/rtl8712/rtl8712_led.c b/drivers/staging/rtl8712/rtl8712_led.c
+index 5901026949f25..d5fc9026b036e 100644
+--- a/drivers/staging/rtl8712/rtl8712_led.c
++++ b/drivers/staging/rtl8712/rtl8712_led.c
+@@ -1820,3 +1820,11 @@ void LedControl871x(struct _adapter *padapter, enum LED_CTL_MODE LedAction)
+ 		break;
+ 	}
+ }
++
++void r8712_flush_led_works(struct _adapter *padapter)
++{
++	struct led_priv *pledpriv = &padapter->ledpriv;
++
++	flush_work(&pledpriv->SwLed0.BlinkWorkItem);
++	flush_work(&pledpriv->SwLed1.BlinkWorkItem);
++}
+diff --git a/drivers/staging/rtl8712/rtl871x_led.h b/drivers/staging/rtl8712/rtl871x_led.h
+index ee19c873cf010..2f0768132ad8f 100644
+--- a/drivers/staging/rtl8712/rtl871x_led.h
++++ b/drivers/staging/rtl8712/rtl871x_led.h
+@@ -112,6 +112,7 @@ struct led_priv {
+ void r8712_InitSwLeds(struct _adapter *padapter);
+ void r8712_DeInitSwLeds(struct _adapter *padapter);
+ void LedControl871x(struct _adapter *padapter, enum LED_CTL_MODE LedAction);
++void r8712_flush_led_works(struct _adapter *padapter);
+ 
+ #endif
+ 
+diff --git a/drivers/staging/rtl8712/rtl871x_pwrctrl.c b/drivers/staging/rtl8712/rtl871x_pwrctrl.c
+index 23cff43437e21..cd6d9ff0bebca 100644
+--- a/drivers/staging/rtl8712/rtl871x_pwrctrl.c
++++ b/drivers/staging/rtl8712/rtl871x_pwrctrl.c
+@@ -224,3 +224,11 @@ void r8712_unregister_cmd_alive(struct _adapter *padapter)
+ 	}
+ 	mutex_unlock(&pwrctrl->mutex_lock);
+ }
++
++void r8712_flush_rwctrl_works(struct _adapter *padapter)
++{
++	struct pwrctrl_priv *pwrctrl = &padapter->pwrctrlpriv;
++
++	flush_work(&pwrctrl->SetPSModeWorkItem);
++	flush_work(&pwrctrl->rpwm_workitem);
++}
+diff --git a/drivers/staging/rtl8712/rtl871x_pwrctrl.h b/drivers/staging/rtl8712/rtl871x_pwrctrl.h
+index bf6623cfaf27b..b35b9c7920ebb 100644
+--- a/drivers/staging/rtl8712/rtl871x_pwrctrl.h
++++ b/drivers/staging/rtl8712/rtl871x_pwrctrl.h
+@@ -108,5 +108,6 @@ void r8712_cpwm_int_hdl(struct _adapter *padapter,
+ void r8712_set_ps_mode(struct _adapter *padapter, uint ps_mode,
+ 			uint smart_ps);
+ void r8712_set_rpwm(struct _adapter *padapter, u8 val8);
++void r8712_flush_rwctrl_works(struct _adapter *padapter);
+ 
+ #endif  /* __RTL871X_PWRCTRL_H_ */
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index b760bc3559373..17d28af0d0867 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -594,35 +594,30 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
+ {
+ 	struct net_device *pnetdev = usb_get_intfdata(pusb_intf);
+ 	struct usb_device *udev = interface_to_usbdev(pusb_intf);
++	struct _adapter *padapter = netdev_priv(pnetdev);
++
++	/* never exit with a firmware callback pending */
++	wait_for_completion(&padapter->rtl8712_fw_ready);
++	usb_set_intfdata(pusb_intf, NULL);
++	release_firmware(padapter->fw);
++	if (drvpriv.drv_registered)
++		padapter->surprise_removed = true;
++	if (pnetdev->reg_state != NETREG_UNINITIALIZED)
++		unregister_netdev(pnetdev); /* will call netdev_close() */
++	r8712_flush_rwctrl_works(padapter);
++	r8712_flush_led_works(padapter);
++	udelay(1);
++	/* Stop driver mlme relation timer */
++	r8712_stop_drv_timers(padapter);
++	r871x_dev_unload(padapter);
++	r8712_free_drv_sw(padapter);
++	free_netdev(pnetdev);
++
++	/* decrease the reference count of the usb device structure
++	 * on disconnect
++	 */
++	usb_put_dev(udev);
+ 
+-	if (pnetdev) {
+-		struct _adapter *padapter = netdev_priv(pnetdev);
+-
+-		/* never exit with a firmware callback pending */
+-		wait_for_completion(&padapter->rtl8712_fw_ready);
+-		pnetdev = usb_get_intfdata(pusb_intf);
+-		usb_set_intfdata(pusb_intf, NULL);
+-		if (!pnetdev)
+-			goto firmware_load_fail;
+-		release_firmware(padapter->fw);
+-		if (drvpriv.drv_registered)
+-			padapter->surprise_removed = true;
+-		if (pnetdev->reg_state != NETREG_UNINITIALIZED)
+-			unregister_netdev(pnetdev); /* will call netdev_close() */
+-		flush_scheduled_work();
+-		udelay(1);
+-		/* Stop driver mlme relation timer */
+-		r8712_stop_drv_timers(padapter);
+-		r871x_dev_unload(padapter);
+-		r8712_free_drv_sw(padapter);
+-		free_netdev(pnetdev);
+-
+-		/* decrease the reference count of the usb device structure
+-		 * when disconnect
+-		 */
+-		usb_put_dev(udev);
+-	}
+-firmware_load_fail:
+ 	/* If we didn't unplug usb dongle and remove/insert module, driver
+ 	 * fails on sitesurvey for the first time when device is up.
+ 	 * Reset usb port for sitesurvey fail issue.
+diff --git a/drivers/staging/rtl8723bs/hal/sdio_ops.c b/drivers/staging/rtl8723bs/hal/sdio_ops.c
+index a31694525bc1e..eaf7d689b0c55 100644
+--- a/drivers/staging/rtl8723bs/hal/sdio_ops.c
++++ b/drivers/staging/rtl8723bs/hal/sdio_ops.c
+@@ -921,6 +921,8 @@ void sd_int_dpc(struct adapter *adapter)
+ 				} else {
+ 					rtw_c2h_wk_cmd(adapter, (u8 *)c2h_evt);
+ 				}
++			} else {
++				kfree(c2h_evt);
+ 			}
+ 		} else {
+ 			/* Error handling for malloc fail */
+diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c
+index 6e6eb836e9b62..945f03da02237 100644
+--- a/drivers/tee/optee/call.c
++++ b/drivers/tee/optee/call.c
+@@ -184,7 +184,7 @@ static struct tee_shm *get_msg_arg(struct tee_context *ctx, size_t num_params,
+ 	struct optee_msg_arg *ma;
+ 
+ 	shm = tee_shm_alloc(ctx, OPTEE_MSG_GET_ARG_SIZE(num_params),
+-			    TEE_SHM_MAPPED);
++			    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 	if (IS_ERR(shm))
+ 		return shm;
+ 
+@@ -416,11 +416,13 @@ void optee_enable_shm_cache(struct optee *optee)
+ }
+ 
+ /**
+- * optee_disable_shm_cache() - Disables caching of some shared memory allocation
+- *			      in OP-TEE
++ * __optee_disable_shm_cache() - Disables caching of some shared memory
++ *                               allocation in OP-TEE
+  * @optee:	main service struct
++ * @is_mapped:	true if the cached shared memory addresses were mapped by this
++ *		kernel, are safe to dereference, and should be freed
+  */
+-void optee_disable_shm_cache(struct optee *optee)
++static void __optee_disable_shm_cache(struct optee *optee, bool is_mapped)
+ {
+ 	struct optee_call_waiter w;
+ 
+@@ -439,6 +441,13 @@ void optee_disable_shm_cache(struct optee *optee)
+ 		if (res.result.status == OPTEE_SMC_RETURN_OK) {
+ 			struct tee_shm *shm;
+ 
++			/*
++			 * Shared memory references that were not mapped by
++			 * this kernel must be ignored to prevent a crash.
++			 */
++			if (!is_mapped)
++				continue;
++
+ 			shm = reg_pair_to_ptr(res.result.shm_upper32,
+ 					      res.result.shm_lower32);
+ 			tee_shm_free(shm);
+@@ -449,6 +458,27 @@ void optee_disable_shm_cache(struct optee *optee)
+ 	optee_cq_wait_final(&optee->call_queue, &w);
+ }
+ 
++/**
++ * optee_disable_shm_cache() - Disables caching of mapped shared memory
++ *                             allocations in OP-TEE
++ * @optee:	main service struct
++ */
++void optee_disable_shm_cache(struct optee *optee)
++{
++	return __optee_disable_shm_cache(optee, true);
++}
++
++/**
++ * optee_disable_unmapped_shm_cache() - Disables caching of shared memory
++ *                                      allocations in OP-TEE which are not
++ *                                      currently mapped
++ * @optee:	main service struct
++ */
++void optee_disable_unmapped_shm_cache(struct optee *optee)
++{
++	return __optee_disable_shm_cache(optee, false);
++}
++
+ #define PAGELIST_ENTRIES_PER_PAGE				\
+ 	((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1)
+ 
+diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c
+index ddb8f9ecf3078..5ce13b099d7dc 100644
+--- a/drivers/tee/optee/core.c
++++ b/drivers/tee/optee/core.c
+@@ -6,6 +6,7 @@
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+ 
+ #include <linux/arm-smccc.h>
++#include <linux/crash_dump.h>
+ #include <linux/errno.h>
+ #include <linux/io.h>
+ #include <linux/module.h>
+@@ -277,7 +278,8 @@ static void optee_release(struct tee_context *ctx)
+ 	if (!ctxdata)
+ 		return;
+ 
+-	shm = tee_shm_alloc(ctx, sizeof(struct optee_msg_arg), TEE_SHM_MAPPED);
++	shm = tee_shm_alloc(ctx, sizeof(struct optee_msg_arg),
++			    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 	if (!IS_ERR(shm)) {
+ 		arg = tee_shm_get_va(shm, 0);
+ 		/*
+@@ -572,6 +574,13 @@ static optee_invoke_fn *get_invoke_func(struct device *dev)
+ 	return ERR_PTR(-EINVAL);
+ }
+ 
++/* optee_remove - Device Removal Routine
++ * @pdev: platform device information struct
++ *
++ * optee_remove is called by the platform subsystem to alert the driver
++ * that it should release the device.
++ */
++
+ static int optee_remove(struct platform_device *pdev)
+ {
+ 	struct optee *optee = platform_get_drvdata(pdev);
+@@ -602,6 +611,18 @@ static int optee_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++/* optee_shutdown - Device Shutdown Routine
++ * @pdev: platform device information struct
++ *
++ * optee_shutdown is called by the platform subsystem to alert
++ * the driver that a shutdown, reboot, or kexec is happening and the
++ * device must be disabled.
++ */
++static void optee_shutdown(struct platform_device *pdev)
++{
++	optee_disable_shm_cache(platform_get_drvdata(pdev));
++}
++
+ static int optee_probe(struct platform_device *pdev)
+ {
+ 	optee_invoke_fn *invoke_fn;
+@@ -612,6 +633,16 @@ static int optee_probe(struct platform_device *pdev)
+ 	u32 sec_caps;
+ 	int rc;
+ 
++	/*
++	 * The kernel may have crashed at the same time that all available
++	 * secure world threads were suspended and we cannot reschedule the
++	 * suspended threads without access to the crashed kernel's wait_queue.
++	 * Therefore, we cannot reliably initialize the OP-TEE driver in the
++	 * kdump kernel.
++	 */
++	if (is_kdump_kernel())
++		return -ENODEV;
++
+ 	invoke_fn = get_invoke_func(&pdev->dev);
+ 	if (IS_ERR(invoke_fn))
+ 		return PTR_ERR(invoke_fn);
+@@ -686,6 +717,15 @@ static int optee_probe(struct platform_device *pdev)
+ 	optee->memremaped_shm = memremaped_shm;
+ 	optee->pool = pool;
+ 
++	/*
++	 * Ensure that there are no pre-existing shm objects before enabling
++	 * the shm cache so that there's no chance of receiving an invalid
++	 * address during shutdown. This could occur, for example, if we're
++	 * kexec booting from an older kernel that did not properly clean up the
++	 * shm cache.
++	 */
++	optee_disable_unmapped_shm_cache(optee);
++
+ 	optee_enable_shm_cache(optee);
+ 
+ 	if (optee->sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
+@@ -728,6 +768,7 @@ MODULE_DEVICE_TABLE(of, optee_dt_match);
+ static struct platform_driver optee_driver = {
+ 	.probe  = optee_probe,
+ 	.remove = optee_remove,
++	.shutdown = optee_shutdown,
+ 	.driver = {
+ 		.name = "optee",
+ 		.of_match_table = optee_dt_match,
+diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h
+index e25b216a14ef8..dbdd367be1568 100644
+--- a/drivers/tee/optee/optee_private.h
++++ b/drivers/tee/optee/optee_private.h
+@@ -159,6 +159,7 @@ int optee_cancel_req(struct tee_context *ctx, u32 cancel_id, u32 session);
+ 
+ void optee_enable_shm_cache(struct optee *optee);
+ void optee_disable_shm_cache(struct optee *optee);
++void optee_disable_unmapped_shm_cache(struct optee *optee);
+ 
+ int optee_shm_register(struct tee_context *ctx, struct tee_shm *shm,
+ 		       struct page **pages, size_t num_pages,
+diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c
+index 1849180b0278b..efbaff7ad7e59 100644
+--- a/drivers/tee/optee/rpc.c
++++ b/drivers/tee/optee/rpc.c
+@@ -314,7 +314,7 @@ static void handle_rpc_func_cmd_shm_alloc(struct tee_context *ctx,
+ 		shm = cmd_alloc_suppl(ctx, sz);
+ 		break;
+ 	case OPTEE_RPC_SHM_TYPE_KERNEL:
+-		shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED);
++		shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 		break;
+ 	default:
+ 		arg->ret = TEEC_ERROR_BAD_PARAMETERS;
+@@ -502,7 +502,8 @@ void optee_handle_rpc(struct tee_context *ctx, struct optee_rpc_param *param,
+ 
+ 	switch (OPTEE_SMC_RETURN_GET_RPC_FUNC(param->a0)) {
+ 	case OPTEE_SMC_RPC_FUNC_ALLOC:
+-		shm = tee_shm_alloc(ctx, param->a1, TEE_SHM_MAPPED);
++		shm = tee_shm_alloc(ctx, param->a1,
++				    TEE_SHM_MAPPED | TEE_SHM_PRIV);
+ 		if (!IS_ERR(shm) && !tee_shm_get_pa(shm, 0, &pa)) {
+ 			reg_pair_from_64(&param->a1, &param->a2, pa);
+ 			reg_pair_from_64(&param->a4, &param->a5,
+diff --git a/drivers/tee/optee/shm_pool.c b/drivers/tee/optee/shm_pool.c
+index d767eebf30bdd..c41a9a501a6e9 100644
+--- a/drivers/tee/optee/shm_pool.c
++++ b/drivers/tee/optee/shm_pool.c
+@@ -27,13 +27,19 @@ static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
+ 	shm->paddr = page_to_phys(page);
+ 	shm->size = PAGE_SIZE << order;
+ 
+-	if (shm->flags & TEE_SHM_DMA_BUF) {
++	/*
++	 * Shared memory private to the OP-TEE driver doesn't need
++	 * to be registered with OP-TEE.
++	 */
++	if (!(shm->flags & TEE_SHM_PRIV)) {
+ 		unsigned int nr_pages = 1 << order, i;
+ 		struct page **pages;
+ 
+ 		pages = kcalloc(nr_pages, sizeof(pages), GFP_KERNEL);
+-		if (!pages)
+-			return -ENOMEM;
++		if (!pages) {
++			rc = -ENOMEM;
++			goto err;
++		}
+ 
+ 		for (i = 0; i < nr_pages; i++) {
+ 			pages[i] = page;
+@@ -44,15 +50,21 @@ static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
+ 		rc = optee_shm_register(shm->ctx, shm, pages, nr_pages,
+ 					(unsigned long)shm->kaddr);
+ 		kfree(pages);
++		if (rc)
++			goto err;
+ 	}
+ 
++	return 0;
++
++err:
++	__free_pages(page, order);
+ 	return rc;
+ }
+ 
+ static void pool_op_free(struct tee_shm_pool_mgr *poolm,
+ 			 struct tee_shm *shm)
+ {
+-	if (shm->flags & TEE_SHM_DMA_BUF)
++	if (!(shm->flags & TEE_SHM_PRIV))
+ 		optee_shm_unregister(shm->ctx, shm);
+ 
+ 	free_pages((unsigned long)shm->kaddr, get_order(shm->size));
+diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
+index 00472f5ce22e4..8a9384a64f3e2 100644
+--- a/drivers/tee/tee_shm.c
++++ b/drivers/tee/tee_shm.c
+@@ -117,7 +117,7 @@ struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
+ 		return ERR_PTR(-EINVAL);
+ 	}
+ 
+-	if ((flags & ~(TEE_SHM_MAPPED | TEE_SHM_DMA_BUF))) {
++	if ((flags & ~(TEE_SHM_MAPPED | TEE_SHM_DMA_BUF | TEE_SHM_PRIV))) {
+ 		dev_err(teedev->dev.parent, "invalid shm flags 0x%x", flags);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+@@ -193,6 +193,24 @@ err_dev_put:
+ }
+ EXPORT_SYMBOL_GPL(tee_shm_alloc);
+ 
++/**
++ * tee_shm_alloc_kernel_buf() - Allocate shared memory for kernel buffer
++ * @ctx:	Context that allocates the shared memory
++ * @size:	Requested size of shared memory
++ *
++ * The returned memory is registered in secure world and is suitable to
++ * be passed as a memory buffer in a parameter argument to
++ * tee_client_invoke_func(). The allocated memory is later freed with a
++ * call to tee_shm_free().
++ *
++ * @returns a pointer to 'struct tee_shm'
++ */
++struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
++{
++	return tee_shm_alloc(ctx, size, TEE_SHM_MAPPED);
++}
++EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf);
++
+ struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
+ 				 size_t length, u32 flags)
+ {
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index e73cd296db7ef..a82032c081e83 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -1740,18 +1740,6 @@ static struct attribute *switch_attrs[] = {
+ 	NULL,
+ };
+ 
+-static bool has_port(const struct tb_switch *sw, enum tb_port_type type)
+-{
+-	const struct tb_port *port;
+-
+-	tb_switch_for_each_port(sw, port) {
+-		if (!port->disabled && port->config.type == type)
+-			return true;
+-	}
+-
+-	return false;
+-}
+-
+ static umode_t switch_attr_is_visible(struct kobject *kobj,
+ 				      struct attribute *attr, int n)
+ {
+@@ -1760,8 +1748,7 @@ static umode_t switch_attr_is_visible(struct kobject *kobj,
+ 
+ 	if (attr == &dev_attr_authorized.attr) {
+ 		if (sw->tb->security_level == TB_SECURITY_NOPCIE ||
+-		    sw->tb->security_level == TB_SECURITY_DPONLY ||
+-		    !has_port(sw, TB_TYPE_PCIE_UP))
++		    sw->tb->security_level == TB_SECURITY_DPONLY)
+ 			return 0;
+ 	} else if (attr == &dev_attr_device.attr) {
+ 		if (!sw->device)
+diff --git a/drivers/tty/serial/8250/8250_aspeed_vuart.c b/drivers/tty/serial/8250/8250_aspeed_vuart.c
+index d035d08cb9871..60dfd1aa4ad22 100644
+--- a/drivers/tty/serial/8250/8250_aspeed_vuart.c
++++ b/drivers/tty/serial/8250/8250_aspeed_vuart.c
+@@ -320,6 +320,7 @@ static int aspeed_vuart_handle_irq(struct uart_port *port)
+ {
+ 	struct uart_8250_port *up = up_to_u8250p(port);
+ 	unsigned int iir, lsr;
++	unsigned long flags;
+ 	int space, count;
+ 
+ 	iir = serial_port_in(port, UART_IIR);
+@@ -327,7 +328,7 @@ static int aspeed_vuart_handle_irq(struct uart_port *port)
+ 	if (iir & UART_IIR_NO_INT)
+ 		return 0;
+ 
+-	spin_lock(&port->lock);
++	spin_lock_irqsave(&port->lock, flags);
+ 
+ 	lsr = serial_port_in(port, UART_LSR);
+ 
+@@ -363,7 +364,7 @@ static int aspeed_vuart_handle_irq(struct uart_port *port)
+ 	if (lsr & UART_LSR_THRE)
+ 		serial8250_tx_chars(up);
+ 
+-	uart_unlock_and_check_sysrq(port);
++	uart_unlock_and_check_sysrq_irqrestore(port, flags);
+ 
+ 	return 1;
+ }
+diff --git a/drivers/tty/serial/8250/8250_fsl.c b/drivers/tty/serial/8250/8250_fsl.c
+index 4e75d2e4f87cb..fc65a2293ce9e 100644
+--- a/drivers/tty/serial/8250/8250_fsl.c
++++ b/drivers/tty/serial/8250/8250_fsl.c
+@@ -30,10 +30,11 @@ struct fsl8250_data {
+ int fsl8250_handle_irq(struct uart_port *port)
+ {
+ 	unsigned char lsr, orig_lsr;
++	unsigned long flags;
+ 	unsigned int iir;
+ 	struct uart_8250_port *up = up_to_u8250p(port);
+ 
+-	spin_lock(&up->port.lock);
++	spin_lock_irqsave(&up->port.lock, flags);
+ 
+ 	iir = port->serial_in(port, UART_IIR);
+ 	if (iir & UART_IIR_NO_INT) {
+@@ -82,7 +83,7 @@ int fsl8250_handle_irq(struct uart_port *port)
+ 
+ 	up->lsr_saved_flags = orig_lsr;
+ 
+-	uart_unlock_and_check_sysrq(&up->port);
++	uart_unlock_and_check_sysrq_irqrestore(&up->port, flags);
+ 
+ 	return 1;
+ }
+diff --git a/drivers/tty/serial/8250/8250_mtk.c b/drivers/tty/serial/8250/8250_mtk.c
+index f7d3023f860f0..fb65dc601b237 100644
+--- a/drivers/tty/serial/8250/8250_mtk.c
++++ b/drivers/tty/serial/8250/8250_mtk.c
+@@ -93,10 +93,13 @@ static void mtk8250_dma_rx_complete(void *param)
+ 	struct dma_tx_state state;
+ 	int copied, total, cnt;
+ 	unsigned char *ptr;
++	unsigned long flags;
+ 
+ 	if (data->rx_status == DMA_RX_SHUTDOWN)
+ 		return;
+ 
++	spin_lock_irqsave(&up->port.lock, flags);
++
+ 	dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
+ 	total = dma->rx_size - state.residue;
+ 	cnt = total;
+@@ -120,6 +123,8 @@ static void mtk8250_dma_rx_complete(void *param)
+ 	tty_flip_buffer_push(tty_port);
+ 
+ 	mtk8250_rx_dma(up);
++
++	spin_unlock_irqrestore(&up->port.lock, flags);
+ }
+ 
+ static void mtk8250_rx_dma(struct uart_8250_port *up)
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 780cc99732b62..1934940b96170 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -3804,6 +3804,12 @@ static const struct pci_device_id blacklist[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x0f0c), },
+ 	{ PCI_VDEVICE(INTEL, 0x228a), },
+ 	{ PCI_VDEVICE(INTEL, 0x228c), },
++	{ PCI_VDEVICE(INTEL, 0x4b96), },
++	{ PCI_VDEVICE(INTEL, 0x4b97), },
++	{ PCI_VDEVICE(INTEL, 0x4b98), },
++	{ PCI_VDEVICE(INTEL, 0x4b99), },
++	{ PCI_VDEVICE(INTEL, 0x4b9a), },
++	{ PCI_VDEVICE(INTEL, 0x4b9b), },
+ 	{ PCI_VDEVICE(INTEL, 0x9ce3), },
+ 	{ PCI_VDEVICE(INTEL, 0x9ce4), },
+ 
+@@ -3964,6 +3970,7 @@ pciserial_init_ports(struct pci_dev *dev, const struct pciserial_board *board)
+ 		if (pci_match_id(pci_use_msi, dev)) {
+ 			dev_dbg(&dev->dev, "Using MSI(-X) interrupts\n");
+ 			pci_set_master(dev);
++			uart.port.flags &= ~UPF_SHARE_IRQ;
+ 			rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES);
+ 		} else {
+ 			dev_dbg(&dev->dev, "Using legacy interrupts\n");
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index ff3f13693def7..9422284bb3f33 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -311,7 +311,11 @@ static const struct serial8250_config uart_config[] = {
+ /* Uart divisor latch read */
+ static int default_serial_dl_read(struct uart_8250_port *up)
+ {
+-	return serial_in(up, UART_DLL) | serial_in(up, UART_DLM) << 8;
++	/* Assign these in pieces to truncate any bits above 7.  */
++	unsigned char dll = serial_in(up, UART_DLL);
++	unsigned char dlm = serial_in(up, UART_DLM);
++
++	return dll | dlm << 8;
+ }
+ 
+ /* Uart divisor latch write */
+@@ -1297,9 +1301,11 @@ static void autoconfig(struct uart_8250_port *up)
+ 	serial_out(up, UART_LCR, 0);
+ 
+ 	serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
+-	scratch = serial_in(up, UART_IIR) >> 6;
+ 
+-	switch (scratch) {
++	/* Assign this as it is to truncate any bits above 7.  */
++	scratch = serial_in(up, UART_IIR);
++
++	switch (scratch >> 6) {
+ 	case 0:
+ 		autoconfig_8250(up);
+ 		break;
+@@ -1893,11 +1899,12 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+ 	unsigned char status;
+ 	struct uart_8250_port *up = up_to_u8250p(port);
+ 	bool skip_rx = false;
++	unsigned long flags;
+ 
+ 	if (iir & UART_IIR_NO_INT)
+ 		return 0;
+ 
+-	spin_lock(&port->lock);
++	spin_lock_irqsave(&port->lock, flags);
+ 
+ 	status = serial_port_in(port, UART_LSR);
+ 
+@@ -1923,7 +1930,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+ 		(up->ier & UART_IER_THRI))
+ 		serial8250_tx_chars(up);
+ 
+-	uart_unlock_and_check_sysrq(port);
++	uart_unlock_and_check_sysrq_irqrestore(port, flags);
+ 
+ 	return 1;
+ }
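
The default_serial_dl_read() change above is about register reads that can return stray bits above bit 7 on some memory-mapped implementations: assigning each byte through an unsigned char truncates it before the two halves are combined. A standalone illustration of the difference (the fake register values below are made up for the demo):

    #include <stdio.h>

    /* Pretend reads that leak high bits, as some 8250 bus glue does. */
    static unsigned int read_dll(void) { return 0x1a5; } /* only 0xa5 valid */
    static unsigned int read_dlm(void) { return 0x102; } /* only 0x02 valid */

    int main(void)
    {
            /* Old form: high bits of each read corrupt the divisor. */
            unsigned int bad = read_dll() | read_dlm() << 8;

            /* Fixed form: truncate each byte first. */
            unsigned char dll = read_dll();
            unsigned char dlm = read_dlm();
            unsigned int good = dll | dlm << 8;

            printf("bad=0x%x good=0x%x (expect 0x2a5)\n", bad, good);
            return 0;
    }
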
+diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
+index 222032792d6c2..eba5b9ecba348 100644
+--- a/drivers/tty/serial/serial-tegra.c
++++ b/drivers/tty/serial/serial-tegra.c
+@@ -1045,9 +1045,11 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup)
+ 
+ 	if (tup->cdata->fifo_mode_enable_status) {
+ 		ret = tegra_uart_wait_fifo_mode_enabled(tup);
+-		dev_err(tup->uport.dev, "FIFO mode not enabled\n");
+-		if (ret < 0)
++		if (ret < 0) {
++			dev_err(tup->uport.dev,
++				"Failed to enable FIFO mode: %d\n", ret);
+ 			return ret;
++		}
+ 	} else {
+ 		/*
+ 		 * For all tegra devices (up to t210), there is a hardware
+diff --git a/drivers/usb/cdns3/cdns3-ep0.c b/drivers/usb/cdns3/cdns3-ep0.c
+index 9a17802275d51..ec5bfd8944c36 100644
+--- a/drivers/usb/cdns3/cdns3-ep0.c
++++ b/drivers/usb/cdns3/cdns3-ep0.c
+@@ -731,6 +731,7 @@ static int cdns3_gadget_ep0_queue(struct usb_ep *ep,
+ 		request->actual = 0;
+ 		priv_dev->status_completion_no_call = true;
+ 		priv_dev->pending_status_request = request;
++		usb_gadget_set_state(&priv_dev->gadget, USB_STATE_CONFIGURED);
+ 		spin_unlock_irqrestore(&priv_dev->lock, flags);
+ 
+ 		/*
+diff --git a/drivers/usb/cdns3/cdnsp-gadget.c b/drivers/usb/cdns3/cdnsp-gadget.c
+index c083985e387b2..cf03ba79a3553 100644
+--- a/drivers/usb/cdns3/cdnsp-gadget.c
++++ b/drivers/usb/cdns3/cdnsp-gadget.c
+@@ -1881,7 +1881,7 @@ static int __cdnsp_gadget_init(struct cdns *cdns)
+ 	pdev->gadget.name = "cdnsp-gadget";
+ 	pdev->gadget.speed = USB_SPEED_UNKNOWN;
+ 	pdev->gadget.sg_supported = 1;
+-	pdev->gadget.max_speed = USB_SPEED_SUPER_PLUS;
++	pdev->gadget.max_speed = max_speed;
+ 	pdev->gadget.lpm_capable = 1;
+ 
+ 	pdev->setup_buf = kzalloc(CDNSP_EP0_SETUP_SIZE, GFP_KERNEL);
+diff --git a/drivers/usb/cdns3/cdnsp-gadget.h b/drivers/usb/cdns3/cdnsp-gadget.h
+index 783ca8ffde007..f740fa6089d85 100644
+--- a/drivers/usb/cdns3/cdnsp-gadget.h
++++ b/drivers/usb/cdns3/cdnsp-gadget.h
+@@ -383,8 +383,8 @@ struct cdnsp_intr_reg {
+ #define IMAN_IE			BIT(1)
+ #define IMAN_IP			BIT(0)
+ /* bits 2:31 need to be preserved */
+-#define IMAN_IE_SET(p)		(((p) & IMAN_IE) | 0x2)
+-#define IMAN_IE_CLEAR(p)	(((p) & IMAN_IE) & ~(0x2))
++#define IMAN_IE_SET(p)		((p) | IMAN_IE)
++#define IMAN_IE_CLEAR(p)	((p) & ~IMAN_IE)
+ 
+ /* IMOD - Interrupter Moderation Register - irq_control bitmasks. */
+ /*
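
The IMAN_IE_SET/CLEAR rewrite above matters because the comment over the macros requires bits 2:31 to be preserved on write-back, yet the old definitions masked everything except the IE bit away. A compilable before/after comparison:

    #include <stdio.h>

    #define BIT(n)          (1u << (n))
    #define IMAN_IE         BIT(1)

    /* Old macro: discards every bit except IE, then ORs in 0x2. */
    #define IMAN_IE_SET_OLD(p)      (((p) & IMAN_IE) | 0x2)
    /* Fixed macros: touch only IE, preserve bits 2:31. */
    #define IMAN_IE_SET(p)          ((p) | IMAN_IE)
    #define IMAN_IE_CLEAR(p)        ((p) & ~IMAN_IE)

    int main(void)
    {
            unsigned int reg = 0xdeadbe00;  /* bits 2:31 must survive */

            printf("old set:   0x%08x\n", IMAN_IE_SET_OLD(reg)); /* 0x00000002 */
            printf("fixed set: 0x%08x\n", IMAN_IE_SET(reg));     /* 0xdeadbe02 */
            printf("fixed clr: 0x%08x\n", IMAN_IE_CLEAR(reg));   /* 0xdeadbe00 */
            return 0;
    }
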
+diff --git a/drivers/usb/cdns3/cdnsp-ring.c b/drivers/usb/cdns3/cdnsp-ring.c
+index 68972746e3636..1b1438457fb04 100644
+--- a/drivers/usb/cdns3/cdnsp-ring.c
++++ b/drivers/usb/cdns3/cdnsp-ring.c
+@@ -1932,15 +1932,13 @@ int cdnsp_queue_bulk_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq)
+ 		}
+ 
+ 		if (enqd_len + trb_buff_len >= full_len) {
+-			if (need_zero_pkt && zero_len_trb) {
+-				zero_len_trb = true;
+-			} else {
+-				field &= ~TRB_CHAIN;
+-				field |= TRB_IOC;
+-				more_trbs_coming = false;
+-				need_zero_pkt = false;
+-				preq->td.last_trb = ring->enqueue;
+-			}
++			if (need_zero_pkt)
++				zero_len_trb = !zero_len_trb;
++
++			field &= ~TRB_CHAIN;
++			field |= TRB_IOC;
++			more_trbs_coming = false;
++			preq->td.last_trb = ring->enqueue;
+ 		}
+ 
+ 		/* Only set interrupt on short packet for OUT endpoints. */
+@@ -1955,7 +1953,7 @@ int cdnsp_queue_bulk_tx(struct cdnsp_device *pdev, struct cdnsp_request *preq)
+ 		length_field = TRB_LEN(trb_buff_len) | TRB_TD_SIZE(remainder) |
+ 			TRB_INTR_TARGET(0);
+ 
+-		cdnsp_queue_trb(pdev, ring, more_trbs_coming | need_zero_pkt,
++		cdnsp_queue_trb(pdev, ring, more_trbs_coming | zero_len_trb,
+ 				lower_32_bits(send_addr),
+ 				upper_32_bits(send_addr),
+ 				length_field,
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index 74d5a9c5238af..73f419adce610 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -2324,17 +2324,10 @@ static void usbtmc_interrupt(struct urb *urb)
+ 		dev_err(dev, "overflow with length %d, actual length is %d\n",
+ 			data->iin_wMaxPacketSize, urb->actual_length);
+ 		fallthrough;
+-	case -ECONNRESET:
+-	case -ENOENT:
+-	case -ESHUTDOWN:
+-	case -EILSEQ:
+-	case -ETIME:
+-	case -EPIPE:
++	default:
+ 		/* urb terminated, clean up */
+ 		dev_dbg(dev, "urb terminated, status: %d\n", status);
+ 		return;
+-	default:
+-		dev_err(dev, "unknown status received: %d\n", status);
+ 	}
+ exit:
+ 	rv = usb_submit_urb(urb, GFP_ATOMIC);
+diff --git a/drivers/usb/common/usb-otg-fsm.c b/drivers/usb/common/usb-otg-fsm.c
+index 3740cf95560e9..0697fde51d00f 100644
+--- a/drivers/usb/common/usb-otg-fsm.c
++++ b/drivers/usb/common/usb-otg-fsm.c
+@@ -193,7 +193,11 @@ static void otg_start_hnp_polling(struct otg_fsm *fsm)
+ 	if (!fsm->host_req_flag)
+ 		return;
+ 
+-	INIT_DELAYED_WORK(&fsm->hnp_polling_work, otg_hnp_polling_work);
++	if (!fsm->hnp_work_inited) {
++		INIT_DELAYED_WORK(&fsm->hnp_polling_work, otg_hnp_polling_work);
++		fsm->hnp_work_inited = true;
++	}
++
+ 	schedule_delayed_work(&fsm->hnp_polling_work,
+ 					msecs_to_jiffies(T_HOST_REQ_POLL));
+ }
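
The otg-fsm fix above is the classic run-once initialization guard: re-initializing a delayed work item that may already be queued corrupts its internals, so a bool latches the first INIT_DELAYED_WORK() call while the (re)arming stays unconditional. A trivial sketch of the shape, with hypothetical names:

    #include <stdbool.h>
    #include <stdio.h>

    struct poller {
            bool inited;
            int  armed;     /* stands in for the delayed_work state */
    };

    static void start_polling(struct poller *p)
    {
            if (!p->inited) {       /* initialize exactly once */
                    p->armed = 0;
                    p->inited = true;
                    puts("initialized");
            }
            p->armed++;             /* safe to (re)schedule every time */
    }

    int main(void)
    {
            struct poller p = { 0 };

            start_polling(&p);
            start_polling(&p);      /* must not re-init */
            printf("armed=%d\n", p.armed);
            return 0;
    }
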
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index f14c2aa837598..b9ff414a0e9d2 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1741,9 +1741,13 @@ static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep)
+ {
+ 	struct dwc3_request		*req;
+ 	struct dwc3_request		*tmp;
++	struct list_head		local;
+ 	struct dwc3			*dwc = dep->dwc;
+ 
+-	list_for_each_entry_safe(req, tmp, &dep->cancelled_list, list) {
++restart:
++	list_replace_init(&dep->cancelled_list, &local);
++
++	list_for_each_entry_safe(req, tmp, &local, list) {
+ 		dwc3_gadget_ep_skip_trbs(dep, req);
+ 		switch (req->status) {
+ 		case DWC3_REQUEST_STATUS_DISCONNECTED:
+@@ -1761,6 +1765,9 @@ static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep)
+ 			break;
+ 		}
+ 	}
++
++	if (!list_empty(&dep->cancelled_list))
++		goto restart;
+ }
+ 
+ static int dwc3_gadget_ep_dequeue(struct usb_ep *ep,
+@@ -2249,6 +2256,17 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 		}
+ 	}
+ 
++	/*
++	 * Avoid issuing a runtime resume if the device is already in the
++	 * suspended state during gadget disconnect.  DWC3 gadget was already
++	 * halted/stopped during runtime suspend.
++	 */
++	if (!is_on) {
++		pm_runtime_barrier(dwc->dev);
++		if (pm_runtime_suspended(dwc->dev))
++			return 0;
++	}
++
+ 	/*
+ 	 * Check the return value for successful resume, or error.  For a
+ 	 * successful resume, the DWC3 runtime PM resume routine will handle
+@@ -2945,8 +2963,12 @@ static void dwc3_gadget_ep_cleanup_completed_requests(struct dwc3_ep *dep,
+ {
+ 	struct dwc3_request	*req;
+ 	struct dwc3_request	*tmp;
++	struct list_head	local;
+ 
+-	list_for_each_entry_safe(req, tmp, &dep->started_list, list) {
++restart:
++	list_replace_init(&dep->started_list, &local);
++
++	list_for_each_entry_safe(req, tmp, &local, list) {
+ 		int ret;
+ 
+ 		ret = dwc3_gadget_ep_cleanup_completed_request(dep, event,
+@@ -2954,6 +2976,9 @@ static void dwc3_gadget_ep_cleanup_completed_requests(struct dwc3_ep *dep,
+ 		if (ret)
+ 			break;
+ 	}
++
++	if (!list_empty(&dep->started_list))
++		goto restart;
+ }
+ 
+ static bool dwc3_gadget_ep_should_continue(struct dwc3_ep *dep)
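
Both dwc3 hunks above apply the same pattern: splice the whole request list onto a local head, walk that private copy, and restart if callbacks queued more entries in the meantime — so a completion handler that re-adds a request can never corrupt the iteration. A userspace sketch of the idea on a minimal singly-linked list (the list code below is illustrative, not the kernel's list.h):

    #include <stdio.h>
    #include <stdlib.h>

    struct node { int id; struct node *next; };

    static struct node *pending;    /* the shared "started" list */

    /* Completion handler: finishing request 1 queues a new request 3. */
    static void complete(struct node *n)
    {
            printf("completed %d\n", n->id);
            if (n->id == 1) {
                    struct node *extra = malloc(sizeof(*extra));

                    extra->id = 3;
                    extra->next = pending;
                    pending = extra;
            }
            free(n);
    }

    int main(void)
    {
            struct node *b = malloc(sizeof(*b));
            struct node *a = malloc(sizeof(*a));

            b->id = 2; b->next = NULL;
            a->id = 1; a->next = b;
            pending = a;

    restart:
            {
                    /* Detach everything, then walk the private copy. */
                    struct node *local = pending;

                    pending = NULL;
                    while (local) {
                            struct node *next = local->next;

                            complete(local);
                            local = next;
                    }
            }
            if (pending)            /* callbacks queued more work */
                    goto restart;
            return 0;
    }
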
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index a82b3de1a54be..6742271cd6e6a 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -41,6 +41,7 @@ struct f_hidg {
+ 	unsigned char			bInterfaceSubClass;
+ 	unsigned char			bInterfaceProtocol;
+ 	unsigned char			protocol;
++	unsigned char			idle;
+ 	unsigned short			report_desc_length;
+ 	char				*report_desc;
+ 	unsigned short			report_length;
+@@ -338,6 +339,11 @@ static ssize_t f_hidg_write(struct file *file, const char __user *buffer,
+ 
+ 	spin_lock_irqsave(&hidg->write_spinlock, flags);
+ 
++	if (!hidg->req) {
++		spin_unlock_irqrestore(&hidg->write_spinlock, flags);
++		return -ESHUTDOWN;
++	}
++
+ #define WRITE_COND (!hidg->write_pending)
+ try_again:
+ 	/* write queue */
+@@ -358,8 +364,14 @@ try_again:
+ 	count  = min_t(unsigned, count, hidg->report_length);
+ 
+ 	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+-	status = copy_from_user(req->buf, buffer, count);
+ 
++	if (!req) {
++		ERROR(hidg->func.config->cdev, "hidg->req is NULL\n");
++		status = -ESHUTDOWN;
++		goto release_write_pending;
++	}
++
++	status = copy_from_user(req->buf, buffer, count);
+ 	if (status != 0) {
+ 		ERROR(hidg->func.config->cdev,
+ 			"copy_from_user error\n");
+@@ -387,14 +399,17 @@ try_again:
+ 
+ 	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+ 
++	if (!hidg->in_ep->enabled) {
++		ERROR(hidg->func.config->cdev, "in_ep is disabled\n");
++		status = -ESHUTDOWN;
++		goto release_write_pending;
++	}
++
+ 	status = usb_ep_queue(hidg->in_ep, req, GFP_ATOMIC);
+-	if (status < 0) {
+-		ERROR(hidg->func.config->cdev,
+-			"usb_ep_queue error on int endpoint %zd\n", status);
++	if (status < 0)
+ 		goto release_write_pending;
+-	} else {
++	else
+ 		status = count;
+-	}
+ 
+ 	return status;
+ release_write_pending:
+@@ -523,6 +538,14 @@ static int hidg_setup(struct usb_function *f,
+ 		goto respond;
+ 		break;
+ 
++	case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
++		  | HID_REQ_GET_IDLE):
++		VDBG(cdev, "get_idle\n");
++		length = min_t(unsigned int, length, 1);
++		((u8 *) req->buf)[0] = hidg->idle;
++		goto respond;
++		break;
++
+ 	case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
+ 		  | HID_REQ_SET_REPORT):
+ 		VDBG(cdev, "set_report | wLength=%d\n", ctrl->wLength);
+@@ -546,6 +569,14 @@ static int hidg_setup(struct usb_function *f,
+ 		goto stall;
+ 		break;
+ 
++	case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
++		  | HID_REQ_SET_IDLE):
++		VDBG(cdev, "set_idle\n");
++		length = 0;
++		hidg->idle = value >> 8;
++		goto respond;
++		break;
++
+ 	case ((USB_DIR_IN | USB_TYPE_STANDARD | USB_RECIP_INTERFACE) << 8
+ 		  | USB_REQ_GET_DESCRIPTOR):
+ 		switch (value >> 8) {
+@@ -773,6 +804,7 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ 	hidg_interface_desc.bInterfaceSubClass = hidg->bInterfaceSubClass;
+ 	hidg_interface_desc.bInterfaceProtocol = hidg->bInterfaceProtocol;
+ 	hidg->protocol = HID_REPORT_PROTOCOL;
++	hidg->idle = 1;
+ 	hidg_ss_in_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length);
+ 	hidg_ss_in_comp_desc.wBytesPerInterval =
+ 				cpu_to_le16(hidg->report_length);
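
The f_hid additions implement the HID class GET_IDLE/SET_IDLE requests; the detail worth noting is that SET_IDLE carries the idle rate in the high byte of wValue (the low byte is the report ID), which is why the handler stores value >> 8. A tiny demonstration of the unpacking (the sample wValue is made up):

    #include <stdio.h>

    int main(void)
    {
            unsigned short wValue = 0x7d01; /* idle=0x7d, report id=1 */
            unsigned char idle = wValue >> 8;
            unsigned char report_id = wValue & 0xff;

            printf("idle=%u report_id=%u\n", idle, report_id);
            return 0;
    }
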
+diff --git a/drivers/usb/gadget/udc/max3420_udc.c b/drivers/usb/gadget/udc/max3420_udc.c
+index 35179543c3272..91c9e9057cff3 100644
+--- a/drivers/usb/gadget/udc/max3420_udc.c
++++ b/drivers/usb/gadget/udc/max3420_udc.c
+@@ -1260,12 +1260,14 @@ static int max3420_probe(struct spi_device *spi)
+ 	err = devm_request_irq(&spi->dev, irq, max3420_irq_handler, 0,
+ 			       "max3420", udc);
+ 	if (err < 0)
+-		return err;
++		goto del_gadget;
+ 
+ 	udc->thread_task = kthread_create(max3420_thread, udc,
+ 					  "max3420-thread");
+-	if (IS_ERR(udc->thread_task))
+-		return PTR_ERR(udc->thread_task);
++	if (IS_ERR(udc->thread_task)) {
++		err = PTR_ERR(udc->thread_task);
++		goto del_gadget;
++	}
+ 
+ 	irq = of_irq_get_byname(spi->dev.of_node, "vbus");
+ 	if (irq <= 0) { /* no vbus irq implies self-powered design */
+@@ -1285,10 +1287,14 @@ static int max3420_probe(struct spi_device *spi)
+ 		err = devm_request_irq(&spi->dev, irq,
+ 				       max3420_vbus_handler, 0, "vbus", udc);
+ 		if (err < 0)
+-			return err;
++			goto del_gadget;
+ 	}
+ 
+ 	return 0;
++
++del_gadget:
++	usb_del_gadget_udc(&udc->gadget);
++	return err;
+ }
+ 
+ static int max3420_remove(struct spi_device *spi)
+diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c
+index 9bbd7ddd0003e..a24aea3d2759e 100644
+--- a/drivers/usb/host/ohci-at91.c
++++ b/drivers/usb/host/ohci-at91.c
+@@ -611,8 +611,6 @@ ohci_hcd_at91_drv_suspend(struct device *dev)
+ 	if (ohci_at91->wakeup)
+ 		enable_irq_wake(hcd->irq);
+ 
+-	ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1);
+-
+ 	ret = ohci_suspend(hcd, ohci_at91->wakeup);
+ 	if (ret) {
+ 		if (ohci_at91->wakeup)
+@@ -632,7 +630,10 @@ ohci_hcd_at91_drv_suspend(struct device *dev)
+ 		/* flush the writes */
+ 		(void) ohci_readl (ohci, &ohci->regs->control);
+ 		msleep(1);
++		ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1);
+ 		at91_stop_clock(ohci_at91);
++	} else {
++		ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1);
+ 	}
+ 
+ 	return ret;
+@@ -644,6 +645,8 @@ ohci_hcd_at91_drv_resume(struct device *dev)
+ 	struct usb_hcd	*hcd = dev_get_drvdata(dev);
+ 	struct ohci_at91_priv *ohci_at91 = hcd_to_ohci_at91_priv(hcd);
+ 
++	ohci_at91_port_suspend(ohci_at91->sfr_regmap, 0);
++
+ 	if (ohci_at91->wakeup)
+ 		disable_irq_wake(hcd->irq);
+ 	else
+@@ -651,8 +654,6 @@ ohci_hcd_at91_drv_resume(struct device *dev)
+ 
+ 	ohci_resume(hcd, false);
+ 
+-	ohci_at91_port_suspend(ohci_at91->sfr_regmap, 0);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index 2db917eab7995..8a521b5ea769e 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -851,6 +851,7 @@ static struct usb_serial_driver ch341_device = {
+ 		.owner	= THIS_MODULE,
+ 		.name	= "ch341-uart",
+ 	},
++	.bulk_in_size      = 512,
+ 	.id_table          = id_table,
+ 	.num_ports         = 1,
+ 	.open              = ch341_open,
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 4a1f3a95d0177..33bbb3470ca3b 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -219,6 +219,7 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(FTDI_VID, FTDI_MTXORB_6_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_R2000KU_TRUE_RNG) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_VARDAAN_PID) },
++	{ USB_DEVICE(FTDI_VID, FTDI_AUTO_M3_OP_COM_V2_PID) },
+ 	{ USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0100_PID) },
+ 	{ USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0101_PID) },
+ 	{ USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0102_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index add602bebd820..755858ca20bac 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -159,6 +159,9 @@
+ /* Vardaan Enterprises Serial Interface VEUSB422R3 */
+ #define FTDI_VARDAAN_PID	0xF070
+ 
++/* Auto-M3 Ltd. - OP-COM USB V2 - OBD interface Adapter */
++#define FTDI_AUTO_M3_OP_COM_V2_PID	0x4f50
++
+ /*
+  * Xsens Technologies BV products (http://www.xsens.com).
+  */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 0fbe253dc570b..039450069ca45 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1203,6 +1203,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1055, 0xff),	/* Telit FN980 (PCIe) */
+ 	  .driver_info = NCTRL(0) | RSVD(1) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1056, 0xff),	/* Telit FD980 */
++	  .driver_info = NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 940050c314822..2ce9cbf49e974 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -418,24 +418,34 @@ static int pl2303_detect_type(struct usb_serial *serial)
+ 	bcdDevice = le16_to_cpu(desc->bcdDevice);
+ 	bcdUSB = le16_to_cpu(desc->bcdUSB);
+ 
+-	switch (bcdDevice) {
+-	case 0x100:
+-		/*
+-		 * Assume it's an HXN-type if the device doesn't support the old read
+-		 * request value.
+-		 */
+-		if (bcdUSB == 0x200 && !pl2303_supports_hx_status(serial))
+-			return TYPE_HXN;
++	switch (bcdUSB) {
++	case 0x110:
++		switch (bcdDevice) {
++		case 0x300:
++			return TYPE_HX;
++		case 0x400:
++			return TYPE_HXD;
++		default:
++			return TYPE_HX;
++		}
+ 		break;
+-	case 0x300:
+-		if (bcdUSB == 0x200)
++	case 0x200:
++		switch (bcdDevice) {
++		case 0x100:
++		case 0x305:
++			/*
++			 * Assume it's an HXN-type if the device doesn't
++			 * support the old read request value.
++			 */
++			if (!pl2303_supports_hx_status(serial))
++				return TYPE_HXN;
++			break;
++		case 0x300:
+ 			return TYPE_TA;
+-
+-		return TYPE_HX;
+-	case 0x400:
+-		return TYPE_HXD;
+-	case 0x500:
+-		return TYPE_TB;
++		case 0x500:
++			return TYPE_TB;
++		}
++		break;
+ 	}
+ 
+ 	dev_err(&serial->interface->dev,
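
The pl2303 rework keys detection on bcdUSB first and only then on bcdDevice, because the same bcdDevice values mean different chips on USB 1.1 and USB 2.0 parts. A self-contained sketch of the resulting decision table (supports_hx_status stands in for the probe call; the values come straight from the hunk above):

    #include <stdio.h>

    enum pl2303_type { TYPE_UNKNOWN, TYPE_HX, TYPE_HXD, TYPE_TA, TYPE_TB, TYPE_HXN };

    static enum pl2303_type detect(unsigned bcdUSB, unsigned bcdDevice,
                                   int supports_hx_status)
    {
            switch (bcdUSB) {
            case 0x110:
                    switch (bcdDevice) {
                    case 0x400: return TYPE_HXD;
                    default:    return TYPE_HX;
                    }
            case 0x200:
                    switch (bcdDevice) {
                    case 0x100:
                    case 0x305:
                            if (!supports_hx_status)
                                    return TYPE_HXN;
                            break;
                    case 0x300: return TYPE_TA;
                    case 0x500: return TYPE_TB;
                    }
                    break;
            }
            return TYPE_UNKNOWN;    /* falls through to the dev_err() */
    }

    int main(void)
    {
            printf("%d %d %d\n",
                   detect(0x110, 0x300, 1),     /* TYPE_HX  */
                   detect(0x200, 0x100, 0),     /* TYPE_HXN */
                   detect(0x200, 0x500, 1));    /* TYPE_TB  */
            return 0;
    }
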
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 1b7f18d35df45..426e37a1e78c5 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -5355,7 +5355,7 @@ EXPORT_SYMBOL_GPL(tcpm_pd_hard_reset);
+ void tcpm_sink_frs(struct tcpm_port *port)
+ {
+ 	spin_lock(&port->pd_event_lock);
+-	port->pd_events = TCPM_FRS_EVENT;
++	port->pd_events |= TCPM_FRS_EVENT;
+ 	spin_unlock(&port->pd_event_lock);
+ 	kthread_queue_work(port->wq, &port->event_work);
+ }
+@@ -5364,7 +5364,7 @@ EXPORT_SYMBOL_GPL(tcpm_sink_frs);
+ void tcpm_sourcing_vbus(struct tcpm_port *port)
+ {
+ 	spin_lock(&port->pd_event_lock);
+-	port->pd_events = TCPM_SOURCING_VBUS;
++	port->pd_events |= TCPM_SOURCING_VBUS;
+ 	spin_unlock(&port->pd_event_lock);
+ 	kthread_queue_work(port->wq, &port->event_work);
+ }
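
Both tcpm hunks fix the same one-character bug: plain assignment to the pd_events bitmask silently drops any event already pending, while |= accumulates. A minimal demonstration:

    #include <stdio.h>

    #define TCPM_FRS_EVENT          (1u << 0)
    #define TCPM_SOURCING_VBUS      (1u << 1)

    int main(void)
    {
            unsigned int events = TCPM_FRS_EVENT;   /* already pending */
            unsigned int clobbered = events;

            clobbered = TCPM_SOURCING_VBUS;         /* '=' loses FRS */
            events |= TCPM_SOURCING_VBUS;           /* '|=' keeps both */

            printf("clobbered=0x%x accumulated=0x%x\n", clobbered, events);
            return 0;
    }
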
+diff --git a/drivers/virt/acrn/vm.c b/drivers/virt/acrn/vm.c
+index 0d002a355a936..fbc9f1042000c 100644
+--- a/drivers/virt/acrn/vm.c
++++ b/drivers/virt/acrn/vm.c
+@@ -64,6 +64,14 @@ int acrn_vm_destroy(struct acrn_vm *vm)
+ 	    test_and_set_bit(ACRN_VM_FLAG_DESTROYED, &vm->flags))
+ 		return 0;
+ 
++	ret = hcall_destroy_vm(vm->vmid);
++	if (ret < 0) {
++		dev_err(acrn_dev.this_device,
++			"Failed to destroy VM %u\n", vm->vmid);
++		clear_bit(ACRN_VM_FLAG_DESTROYED, &vm->flags);
++		return ret;
++	}
++
+ 	/* Remove from global VM list */
+ 	write_lock_bh(&acrn_vm_list_lock);
+ 	list_del_init(&vm->list);
+@@ -78,14 +86,6 @@ int acrn_vm_destroy(struct acrn_vm *vm)
+ 		vm->monitor_page = NULL;
+ 	}
+ 
+-	ret = hcall_destroy_vm(vm->vmid);
+-	if (ret < 0) {
+-		dev_err(acrn_dev.this_device,
+-			"Failed to destroy VM %u\n", vm->vmid);
+-		clear_bit(ACRN_VM_FLAG_DESTROYED, &vm->flags);
+-		return ret;
+-	}
+-
+ 	acrn_vm_all_ram_unmap(vm);
+ 
+ 	dev_dbg(acrn_dev.this_device, "VM %u destroyed.\n", vm->vmid);
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 398c941e38974..f77156187a0ae 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -3613,7 +3613,8 @@ static int smb3_simple_fallocate_write_range(unsigned int xid,
+ 					     char *buf)
+ {
+ 	struct cifs_io_parms io_parms = {0};
+-	int rc, nbytes;
++	int nbytes;
++	int rc = 0;
+ 	struct kvec iov[2];
+ 
+ 	io_parms.netfid = cfile->fid.netfid;
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index bc364c119af6a..cebea4270817e 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -138,7 +138,7 @@ static int kmmpd(void *data)
+ 	unsigned mmp_check_interval;
+ 	unsigned long last_update_time;
+ 	unsigned long diff;
+-	int retval;
++	int retval = 0;
+ 
+ 	mmp_block = le64_to_cpu(es->s_mmp_block);
+ 	mmp = (struct mmp_struct *)(bh->b_data);
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index a4af26d4459a3..18332550b4464 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2517,7 +2517,7 @@ again:
+ 				goto journal_error;
+ 			err = ext4_handle_dirty_dx_node(handle, dir,
+ 							frame->bh);
+-			if (err)
++			if (restart || err)
+ 				goto journal_error;
+ 		} else {
+ 			struct dx_root *dxroot;
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index 9efecdf025b9c..77026d42cb799 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -131,6 +131,7 @@ struct io_cb_cancel_data {
+ };
+ 
+ static void create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index);
++static void io_wqe_dec_running(struct io_worker *worker);
+ 
+ static bool io_worker_get(struct io_worker *worker)
+ {
+@@ -169,26 +170,21 @@ static void io_worker_exit(struct io_worker *worker)
+ {
+ 	struct io_wqe *wqe = worker->wqe;
+ 	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
+-	unsigned flags;
+ 
+ 	if (refcount_dec_and_test(&worker->ref))
+ 		complete(&worker->ref_done);
+ 	wait_for_completion(&worker->ref_done);
+ 
+-	preempt_disable();
+-	current->flags &= ~PF_IO_WORKER;
+-	flags = worker->flags;
+-	worker->flags = 0;
+-	if (flags & IO_WORKER_F_RUNNING)
+-		atomic_dec(&acct->nr_running);
+-	worker->flags = 0;
+-	preempt_enable();
+-
+ 	raw_spin_lock_irq(&wqe->lock);
+-	if (flags & IO_WORKER_F_FREE)
++	if (worker->flags & IO_WORKER_F_FREE)
+ 		hlist_nulls_del_rcu(&worker->nulls_node);
+ 	list_del_rcu(&worker->all_list);
+ 	acct->nr_workers--;
++	preempt_disable();
++	io_wqe_dec_running(worker);
++	worker->flags = 0;
++	current->flags &= ~PF_IO_WORKER;
++	preempt_enable();
+ 	raw_spin_unlock_irq(&wqe->lock);
+ 
+ 	kfree_rcu(worker, rcu);
+@@ -215,15 +211,19 @@ static bool io_wqe_activate_free_worker(struct io_wqe *wqe)
+ 	struct hlist_nulls_node *n;
+ 	struct io_worker *worker;
+ 
+-	n = rcu_dereference(hlist_nulls_first_rcu(&wqe->free_list));
+-	if (is_a_nulls(n))
+-		return false;
+-
+-	worker = hlist_nulls_entry(n, struct io_worker, nulls_node);
+-	if (io_worker_get(worker)) {
+-		wake_up_process(worker->task);
++	/*
++	 * Iterate free_list and see if we can find an idle worker to
++	 * activate. If a given worker is on the free_list but in the process
++	 * of exiting, keep trying.
++	 */
++	hlist_nulls_for_each_entry_rcu(worker, n, &wqe->free_list, nulls_node) {
++		if (!io_worker_get(worker))
++			continue;
++		if (wake_up_process(worker->task)) {
++			io_worker_release(worker);
++			return true;
++		}
+ 		io_worker_release(worker);
+-		return true;
+ 	}
+ 
+ 	return false;
+@@ -248,10 +248,19 @@ static void io_wqe_wake_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
+ 	ret = io_wqe_activate_free_worker(wqe);
+ 	rcu_read_unlock();
+ 
+-	if (!ret && acct->nr_workers < acct->max_workers) {
+-		atomic_inc(&acct->nr_running);
+-		atomic_inc(&wqe->wq->worker_refs);
+-		create_io_worker(wqe->wq, wqe, acct->index);
++	if (!ret) {
++		bool do_create = false;
++
++		raw_spin_lock_irq(&wqe->lock);
++		if (acct->nr_workers < acct->max_workers) {
++			atomic_inc(&acct->nr_running);
++			atomic_inc(&wqe->wq->worker_refs);
++			acct->nr_workers++;
++			do_create = true;
++		}
++		raw_spin_unlock_irq(&wqe->lock);
++		if (do_create)
++			create_io_worker(wqe->wq, wqe, acct->index);
+ 	}
+ }
+ 
+@@ -272,9 +281,17 @@ static void create_worker_cb(struct callback_head *cb)
+ {
+ 	struct create_worker_data *cwd;
+ 	struct io_wq *wq;
++	struct io_wqe *wqe;
++	struct io_wqe_acct *acct;
+ 
+ 	cwd = container_of(cb, struct create_worker_data, work);
+-	wq = cwd->wqe->wq;
++	wqe = cwd->wqe;
++	wq = wqe->wq;
++	acct = &wqe->acct[cwd->index];
++	raw_spin_lock_irq(&wqe->lock);
++	if (acct->nr_workers < acct->max_workers)
++		acct->nr_workers++;
++	raw_spin_unlock_irq(&wqe->lock);
+ 	create_io_worker(wq, cwd->wqe, cwd->index);
+ 	kfree(cwd);
+ }
+@@ -640,6 +657,9 @@ static void create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
+ 		kfree(worker);
+ fail:
+ 		atomic_dec(&acct->nr_running);
++		raw_spin_lock_irq(&wqe->lock);
++		acct->nr_workers--;
++		raw_spin_unlock_irq(&wqe->lock);
+ 		io_worker_ref_put(wq);
+ 		return;
+ 	}
+@@ -655,9 +675,8 @@ fail:
+ 	worker->flags |= IO_WORKER_F_FREE;
+ 	if (index == IO_WQ_ACCT_BOUND)
+ 		worker->flags |= IO_WORKER_F_BOUND;
+-	if (!acct->nr_workers && (worker->flags & IO_WORKER_F_BOUND))
++	if ((acct->nr_workers == 1) && (worker->flags & IO_WORKER_F_BOUND))
+ 		worker->flags |= IO_WORKER_F_FIXED;
+-	acct->nr_workers++;
+ 	raw_spin_unlock_irq(&wqe->lock);
+ 	wake_up_new_task(tsk);
+ }
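
The io-wq changes move the nr_workers accounting under wqe->lock and, crucially, reserve the slot before spawning the worker (rolling it back on failure), so two concurrent wakeups can no longer both pass the max_workers check. A userspace sketch of reserve-then-create with rollback (pthread mutex in place of the raw spinlock; the names are illustrative):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int nr_workers, max_workers = 2;

    static int reserve_worker(void)
    {
            int ok = 0;

            pthread_mutex_lock(&lock);
            if (nr_workers < max_workers) {
                    nr_workers++;           /* claim the slot first */
                    ok = 1;
            }
            pthread_mutex_unlock(&lock);
            return ok;
    }

    static void unreserve_worker(void)      /* creation failed: roll back */
    {
            pthread_mutex_lock(&lock);
            nr_workers--;
            pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
            printf("1st: %d\n", reserve_worker());  /* 1 */
            printf("2nd: %d\n", reserve_worker());  /* 1 */
            printf("3rd: %d\n", reserve_worker());  /* 0: at limit */
            unreserve_worker();
            printf("4th: %d\n", reserve_worker());  /* 1 again */
            return 0;
    }
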
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 9ef4231cce61c..8e6ef62aeb1c6 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -31,6 +31,21 @@
+ 
+ #include "internal.h"
+ 
++/*
++ * New pipe buffers will be restricted to this size while the user is exceeding
++ * their pipe buffer quota. The general pipe use case needs at least two
++ * buffers: one for data yet to be read, and one for new data. If this is less
++ * than two, then a write to a non-empty pipe may block even if the pipe is not
++ * full. This can occur with GNU make jobserver or similar uses of pipes as
++ * semaphores: multiple processes may be waiting to write tokens back to the
++ * pipe before reading tokens: https://lore.kernel.org/lkml/1628086770.5rn8p04n6j.none@localhost/.
++ *
++ * Users can reduce their pipe buffers with F_SETPIPE_SZ below this at their
++ * own risk, namely: pipe writes to non-full pipes may block until the pipe is
++ * emptied.
++ */
++#define PIPE_MIN_DEF_BUFFERS 2
++
+ /*
+  * The max size that a non-root user is allowed to grow the pipe. Can
+  * be set by root in /proc/sys/fs/pipe-max-size
+@@ -781,8 +796,8 @@ struct pipe_inode_info *alloc_pipe_info(void)
+ 	user_bufs = account_pipe_buffers(user, 0, pipe_bufs);
+ 
+ 	if (too_many_pipe_buffers_soft(user_bufs) && pipe_is_unprivileged_user()) {
+-		user_bufs = account_pipe_buffers(user, pipe_bufs, 1);
+-		pipe_bufs = 1;
++		user_bufs = account_pipe_buffers(user, pipe_bufs, PIPE_MIN_DEF_BUFFERS);
++		pipe_bufs = PIPE_MIN_DEF_BUFFERS;
+ 	}
+ 
+ 	if (too_many_pipe_buffers_hard(user_bufs) && pipe_is_unprivileged_user())
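
The new PIPE_MIN_DEF_BUFFERS comment describes the GNU make jobserver pattern: a pipe used as a counting semaphore, where tokens sit unread in the buffer and a writer must be able to return a token while another is still queued — hence at least two buffers even under quota pressure. A small POSIX illustration of that usage (userspace, not kernel code):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            int fds[2];
            char token = '+', got;

            if (pipe(fds))
                    return 1;

            /* "Release" two tokens: both must buffer without blocking
             * even though neither has been read yet. */
            write(fds[1], &token, 1);
            write(fds[1], &token, 1);

            /* "Acquire" them back. */
            read(fds[0], &got, 1);
            read(fds[0], &got, 1);
            puts("two tokens buffered and drained");
            return 0;
    }
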
+diff --git a/fs/reiserfs/stree.c b/fs/reiserfs/stree.c
+index 476a7ff494822..ef42729216d1f 100644
+--- a/fs/reiserfs/stree.c
++++ b/fs/reiserfs/stree.c
+@@ -387,6 +387,24 @@ void pathrelse(struct treepath *search_path)
+ 	search_path->path_length = ILLEGAL_PATH_ELEMENT_OFFSET;
+ }
+ 
++static int has_valid_deh_location(struct buffer_head *bh, struct item_head *ih)
++{
++	struct reiserfs_de_head *deh;
++	int i;
++
++	deh = B_I_DEH(bh, ih);
++	for (i = 0; i < ih_entry_count(ih); i++) {
++		if (deh_location(&deh[i]) > ih_item_len(ih)) {
++			reiserfs_warning(NULL, "reiserfs-5094",
++					 "directory entry location seems wrong %h",
++					 &deh[i]);
++			return 0;
++		}
++	}
++
++	return 1;
++}
++
+ static int is_leaf(char *buf, int blocksize, struct buffer_head *bh)
+ {
+ 	struct block_head *blkh;
+@@ -454,11 +472,14 @@ static int is_leaf(char *buf, int blocksize, struct buffer_head *bh)
+ 					 "(second one): %h", ih);
+ 			return 0;
+ 		}
+-		if (is_direntry_le_ih(ih) && (ih_item_len(ih) < (ih_entry_count(ih) * IH_SIZE))) {
+-			reiserfs_warning(NULL, "reiserfs-5093",
+-					 "item entry count seems wrong %h",
+-					 ih);
+-			return 0;
++		if (is_direntry_le_ih(ih)) {
++			if (ih_item_len(ih) < (ih_entry_count(ih) * IH_SIZE)) {
++				reiserfs_warning(NULL, "reiserfs-5093",
++						 "item entry count seems wrong %h",
++						 ih);
++				return 0;
++			}
++			return has_valid_deh_location(bh, ih);
+ 		}
+ 		prev_location = ih_location(ih);
+ 	}
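
has_valid_deh_location() adds the bounds check the reiserfs-5093 test missed: not only must the entry count fit the item, every directory-entry header must also point inside the item body. A simplified, compilable version of the same validation loop (the struct layout below is made up for the demo, not reiserfs' on-disk format):

    #include <stdio.h>

    struct de_head { unsigned short location; };

    static int has_valid_locations(const struct de_head *deh, int count,
                                   unsigned short item_len)
    {
            for (int i = 0; i < count; i++) {
                    if (deh[i].location > item_len) {
                            fprintf(stderr, "entry %d: location %u > item len %u\n",
                                    i, deh[i].location, item_len);
                            return 0;       /* reject the leaf as corrupt */
                    }
            }
            return 1;
    }

    int main(void)
    {
            struct de_head ok[]  = { { 16 }, { 48 } };
            struct de_head bad[] = { { 16 }, { 9000 } };

            printf("ok:  %d\n", has_valid_locations(ok, 2, 128));
            printf("bad: %d\n", has_valid_locations(bad, 2, 128));
            return 0;
    }
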
+diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
+index 3ffafc73acf02..58481f8d63d5b 100644
+--- a/fs/reiserfs/super.c
++++ b/fs/reiserfs/super.c
+@@ -2082,6 +2082,14 @@ static int reiserfs_fill_super(struct super_block *s, void *data, int silent)
+ 		unlock_new_inode(root_inode);
+ 	}
+ 
++	if (!S_ISDIR(root_inode->i_mode) || !inode_get_bytes(root_inode) ||
++	    !root_inode->i_size) {
++		SWARN(silent, s, "", "corrupt root inode, run fsck");
++		iput(root_inode);
++		errval = -EUCLEAN;
++		goto error;
++	}
++
+ 	s->s_root = d_make_root(root_inode);
+ 	if (!s->s_root)
+ 		goto error;
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index d7ed00f1594ef..a11067cebdabc 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -517,6 +517,25 @@ static inline void uart_unlock_and_check_sysrq(struct uart_port *port)
+ 	if (sysrq_ch)
+ 		handle_sysrq(sysrq_ch);
+ }
++
++static inline void uart_unlock_and_check_sysrq_irqrestore(struct uart_port *port,
++		unsigned long flags)
++{
++	int sysrq_ch;
++
++	if (!port->has_sysrq) {
++		spin_unlock_irqrestore(&port->lock, flags);
++		return;
++	}
++
++	sysrq_ch = port->sysrq_ch;
++	port->sysrq_ch = 0;
++
++	spin_unlock_irqrestore(&port->lock, flags);
++
++	if (sysrq_ch)
++		handle_sysrq(sysrq_ch);
++}
+ #else	/* CONFIG_MAGIC_SYSRQ_SERIAL */
+ static inline int uart_handle_sysrq_char(struct uart_port *port, unsigned int ch)
+ {
+@@ -530,6 +549,11 @@ static inline void uart_unlock_and_check_sysrq(struct uart_port *port)
+ {
+ 	spin_unlock(&port->lock);
+ }
++static inline void uart_unlock_and_check_sysrq_irqrestore(struct uart_port *port,
++		unsigned long flags)
++{
++	spin_unlock_irqrestore(&port->lock, flags);
++}
+ #endif	/* CONFIG_MAGIC_SYSRQ_SERIAL */
+ 
+ /*
+diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
+index 54269e47ac9a3..3ebfea0781f10 100644
+--- a/include/linux/tee_drv.h
++++ b/include/linux/tee_drv.h
+@@ -27,6 +27,7 @@
+ #define TEE_SHM_USER_MAPPED	BIT(4)  /* Memory mapped in user space */
+ #define TEE_SHM_POOL		BIT(5)  /* Memory allocated from pool */
+ #define TEE_SHM_KERNEL_MAPPED	BIT(6)  /* Memory mapped in kernel space */
++#define TEE_SHM_PRIV		BIT(7)  /* Memory private to TEE driver */
+ 
+ struct device;
+ struct tee_device;
+@@ -332,6 +333,7 @@ void *tee_get_drvdata(struct tee_device *teedev);
+  * @returns a pointer to 'struct tee_shm'
+  */
+ struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags);
++struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size);
+ 
+ /**
+  * tee_shm_register() - Register shared memory buffer
+diff --git a/include/linux/usb/otg-fsm.h b/include/linux/usb/otg-fsm.h
+index e78eb577d0fa1..8ef7d148c1493 100644
+--- a/include/linux/usb/otg-fsm.h
++++ b/include/linux/usb/otg-fsm.h
+@@ -196,6 +196,7 @@ struct otg_fsm {
+ 	struct mutex lock;
+ 	u8 *host_req_flag;
+ 	struct delayed_work hnp_polling_work;
++	bool hnp_work_inited;
+ 	bool state_changed;
+ };
+ 
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 89c8406dddb4a..34a92d5ed12b5 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1229,6 +1229,7 @@ struct hci_dev *hci_alloc_dev(void);
+ void hci_free_dev(struct hci_dev *hdev);
+ int hci_register_dev(struct hci_dev *hdev);
+ void hci_unregister_dev(struct hci_dev *hdev);
++void hci_cleanup_dev(struct hci_dev *hdev);
+ int hci_suspend_dev(struct hci_dev *hdev);
+ int hci_resume_dev(struct hci_dev *hdev);
+ int hci_reset_dev(struct hci_dev *hdev);
+diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h
+index 625a38ccb5d94..0bf09a9bca4e0 100644
+--- a/include/net/ip6_route.h
++++ b/include/net/ip6_route.h
+@@ -265,7 +265,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 
+ static inline unsigned int ip6_skb_dst_mtu(struct sk_buff *skb)
+ {
+-	int mtu;
++	unsigned int mtu;
+ 
+ 	struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ?
+ 				inet6_sk(skb->sk) : NULL;
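
The int → unsigned int change in ip6_skb_dst_mtu() avoids a signed/unsigned trap: an MTU that is unsigned everywhere else becomes negative when funneled through an int, and later comparisons go the wrong way. A minimal demonstration of the class of bug (1280 is IPv6's minimum MTU; the flow here is illustrative, not the exact caller):

    #include <stdio.h>

    int main(void)
    {
            unsigned int dst_mtu = 0xffffffffu;     /* bogus/unset MTU */
            int mtu_signed = dst_mtu;               /* becomes -1 */
            unsigned int mtu_unsigned = dst_mtu;

            printf("signed   < 1280? %d\n", mtu_signed < 1280);     /* 1 */
            printf("unsigned < 1280? %d\n", mtu_unsigned < 1280);   /* 0 */
            return 0;
    }
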
+diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h
+index e816b6a3ef2b0..9b376b87bd543 100644
+--- a/include/net/netns/xfrm.h
++++ b/include/net/netns/xfrm.h
+@@ -74,6 +74,7 @@ struct netns_xfrm {
+ #endif
+ 	spinlock_t		xfrm_state_lock;
+ 	seqcount_spinlock_t	xfrm_state_hash_generation;
++	seqcount_spinlock_t	xfrm_policy_hash_generation;
+ 
+ 	spinlock_t xfrm_policy_lock;
+ 	struct mutex xfrm_cfg_mutex;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 9ebac2a794679..49a5678750fbf 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -12159,10 +12159,33 @@ SYSCALL_DEFINE5(perf_event_open,
+ 	}
+ 
+ 	if (task) {
++		unsigned int ptrace_mode = PTRACE_MODE_READ_REALCREDS;
++		bool is_capable;
++
+ 		err = down_read_interruptible(&task->signal->exec_update_lock);
+ 		if (err)
+ 			goto err_file;
+ 
++		is_capable = perfmon_capable();
++		if (attr.sigtrap) {
++			/*
++			 * perf_event_attr::sigtrap sends signals to the other
++			 * task. Require the current task to also have
++			 * CAP_KILL.
++			 */
++			rcu_read_lock();
++			is_capable &= ns_capable(__task_cred(task)->user_ns, CAP_KILL);
++			rcu_read_unlock();
++
++			/*
++			 * If the required capabilities aren't available, checks
++			 * for ptrace permissions: upgrade to ATTACH, since
++			 * sending signals can effectively change the target
++			 * task.
++			 */
++			ptrace_mode = PTRACE_MODE_ATTACH_REALCREDS;
++		}
++
+ 		/*
+ 		 * Preserve ptrace permission check for backwards compatibility.
+ 		 *
+@@ -12172,7 +12195,7 @@ SYSCALL_DEFINE5(perf_event_open,
+ 		 * perf_event_exit_task() that could imply).
+ 		 */
+ 		err = -EACCES;
+-		if (!perfmon_capable() && !ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS))
++		if (!is_capable && !ptrace_may_access(task, ptrace_mode))
+ 			goto err_cred;
+ 	}
+ 
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index e5858999b54de..15b4d2fb6be38 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1632,12 +1632,18 @@ void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
+ 	dequeue_task(rq, p, flags);
+ }
+ 
+-/*
+- * __normal_prio - return the priority that is based on the static prio
+- */
+-static inline int __normal_prio(struct task_struct *p)
++static inline int __normal_prio(int policy, int rt_prio, int nice)
+ {
+-	return p->static_prio;
++	int prio;
++
++	if (dl_policy(policy))
++		prio = MAX_DL_PRIO - 1;
++	else if (rt_policy(policy))
++		prio = MAX_RT_PRIO - 1 - rt_prio;
++	else
++		prio = NICE_TO_PRIO(nice);
++
++	return prio;
+ }
+ 
+ /*
+@@ -1649,15 +1655,7 @@ static inline int __normal_prio(struct task_struct *p)
+  */
+ static inline int normal_prio(struct task_struct *p)
+ {
+-	int prio;
+-
+-	if (task_has_dl_policy(p))
+-		prio = MAX_DL_PRIO-1;
+-	else if (task_has_rt_policy(p))
+-		prio = MAX_RT_PRIO-1 - p->rt_priority;
+-	else
+-		prio = __normal_prio(p);
+-	return prio;
++	return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio));
+ }
+ 
+ /*
+@@ -3759,7 +3757,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ 		} else if (PRIO_TO_NICE(p->static_prio) < 0)
+ 			p->static_prio = NICE_TO_PRIO(0);
+ 
+-		p->prio = p->normal_prio = __normal_prio(p);
++		p->prio = p->normal_prio = p->static_prio;
+ 		set_load_weight(p, false);
+ 
+ 		/*
+@@ -5552,6 +5550,18 @@ int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flag
+ }
+ EXPORT_SYMBOL(default_wake_function);
+ 
++static void __setscheduler_prio(struct task_struct *p, int prio)
++{
++	if (dl_prio(prio))
++		p->sched_class = &dl_sched_class;
++	else if (rt_prio(prio))
++		p->sched_class = &rt_sched_class;
++	else
++		p->sched_class = &fair_sched_class;
++
++	p->prio = prio;
++}
++
+ #ifdef CONFIG_RT_MUTEXES
+ 
+ static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
+@@ -5667,22 +5677,19 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
+ 		} else {
+ 			p->dl.pi_se = &p->dl;
+ 		}
+-		p->sched_class = &dl_sched_class;
+ 	} else if (rt_prio(prio)) {
+ 		if (dl_prio(oldprio))
+ 			p->dl.pi_se = &p->dl;
+ 		if (oldprio < prio)
+ 			queue_flag |= ENQUEUE_HEAD;
+-		p->sched_class = &rt_sched_class;
+ 	} else {
+ 		if (dl_prio(oldprio))
+ 			p->dl.pi_se = &p->dl;
+ 		if (rt_prio(oldprio))
+ 			p->rt.timeout = 0;
+-		p->sched_class = &fair_sched_class;
+ 	}
+ 
+-	p->prio = prio;
++	__setscheduler_prio(p, prio);
+ 
+ 	if (queued)
+ 		enqueue_task(rq, p, queue_flag);
+@@ -6035,35 +6042,6 @@ static void __setscheduler_params(struct task_struct *p,
+ 	set_load_weight(p, true);
+ }
+ 
+-/* Actually do priority change: must hold pi & rq lock. */
+-static void __setscheduler(struct rq *rq, struct task_struct *p,
+-			   const struct sched_attr *attr, bool keep_boost)
+-{
+-	/*
+-	 * If params can't change scheduling class changes aren't allowed
+-	 * either.
+-	 */
+-	if (attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)
+-		return;
+-
+-	__setscheduler_params(p, attr);
+-
+-	/*
+-	 * Keep a potential priority boosting if called from
+-	 * sched_setscheduler().
+-	 */
+-	p->prio = normal_prio(p);
+-	if (keep_boost)
+-		p->prio = rt_effective_prio(p, p->prio);
+-
+-	if (dl_prio(p->prio))
+-		p->sched_class = &dl_sched_class;
+-	else if (rt_prio(p->prio))
+-		p->sched_class = &rt_sched_class;
+-	else
+-		p->sched_class = &fair_sched_class;
+-}
+-
+ /*
+  * Check the target process has a UID that matches the current process's:
+  */
+@@ -6084,10 +6062,8 @@ static int __sched_setscheduler(struct task_struct *p,
+ 				const struct sched_attr *attr,
+ 				bool user, bool pi)
+ {
+-	int newprio = dl_policy(attr->sched_policy) ? MAX_DL_PRIO - 1 :
+-		      MAX_RT_PRIO - 1 - attr->sched_priority;
+-	int retval, oldprio, oldpolicy = -1, queued, running;
+-	int new_effective_prio, policy = attr->sched_policy;
++	int oldpolicy = -1, policy = attr->sched_policy;
++	int retval, oldprio, newprio, queued, running;
+ 	const struct sched_class *prev_class;
+ 	struct callback_head *head;
+ 	struct rq_flags rf;
+@@ -6285,6 +6261,7 @@ change:
+ 	p->sched_reset_on_fork = reset_on_fork;
+ 	oldprio = p->prio;
+ 
++	newprio = __normal_prio(policy, attr->sched_priority, attr->sched_nice);
+ 	if (pi) {
+ 		/*
+ 		 * Take priority boosted tasks into account. If the new
+@@ -6293,8 +6270,8 @@ change:
+ 		 * the runqueue. This will be done when the task deboost
+ 		 * itself.
+ 		 */
+-		new_effective_prio = rt_effective_prio(p, newprio);
+-		if (new_effective_prio == oldprio)
++		newprio = rt_effective_prio(p, newprio);
++		if (newprio == oldprio)
+ 			queue_flags &= ~DEQUEUE_MOVE;
+ 	}
+ 
+@@ -6307,7 +6284,10 @@ change:
+ 
+ 	prev_class = p->sched_class;
+ 
+-	__setscheduler(rq, p, attr, pi);
++	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++		__setscheduler_params(p, attr);
++		__setscheduler_prio(p, newprio);
++	}
+ 	__setscheduler_uclamp(p, attr);
+ 
+ 	if (queued) {
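
The scheduler refactor above replaces the task_struct-reading __normal_prio(p) with a pure function of the requested policy/rt_priority/nice triple, so __sched_setscheduler() can compute the new priority before any fields are written back. A standalone rendering of the mapping (the constants mirror the kernel's: MAX_DL_PRIO=0, MAX_RT_PRIO=100, NICE_TO_PRIO(n)=n+120):

    #include <stdio.h>

    #define MAX_DL_PRIO     0
    #define MAX_RT_PRIO     100
    #define NICE_TO_PRIO(n) ((n) + 120)

    enum { POLICY_NORMAL, POLICY_FIFO, POLICY_DEADLINE };

    static int normal_prio(int policy, int rt_prio, int nice)
    {
            if (policy == POLICY_DEADLINE)
                    return MAX_DL_PRIO - 1;             /* always -1 */
            if (policy == POLICY_FIFO)
                    return MAX_RT_PRIO - 1 - rt_prio;   /* 99 .. 0 */
            return NICE_TO_PRIO(nice);                  /* 100 .. 139 */
    }

    int main(void)
    {
            printf("deadline: %d\n", normal_prio(POLICY_DEADLINE, 0, 0));
            printf("rt 50:    %d\n", normal_prio(POLICY_FIFO, 50, 0));
            printf("nice 0:   %d\n", normal_prio(POLICY_NORMAL, 0, 0));
            return 0;
    }
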
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index 99b97ccefdbdf..2870a7a51638c 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -1279,8 +1279,10 @@ static inline void timer_base_unlock_expiry(struct timer_base *base)
+ static void timer_sync_wait_running(struct timer_base *base)
+ {
+ 	if (atomic_read(&base->timer_waiters)) {
++		raw_spin_unlock_irq(&base->lock);
+ 		spin_unlock(&base->expiry_lock);
+ 		spin_lock(&base->expiry_lock);
++		raw_spin_lock_irq(&base->lock);
+ 	}
+ }
+ 
+@@ -1471,14 +1473,14 @@ static void expire_timers(struct timer_base *base, struct hlist_head *head)
+ 		if (timer->flags & TIMER_IRQSAFE) {
+ 			raw_spin_unlock(&base->lock);
+ 			call_timer_fn(timer, fn, baseclk);
+-			base->running_timer = NULL;
+ 			raw_spin_lock(&base->lock);
++			base->running_timer = NULL;
+ 		} else {
+ 			raw_spin_unlock_irq(&base->lock);
+ 			call_timer_fn(timer, fn, baseclk);
++			raw_spin_lock_irq(&base->lock);
+ 			base->running_timer = NULL;
+ 			timer_sync_wait_running(base);
+-			raw_spin_lock_irq(&base->lock);
+ 		}
+ 	}
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 20196380fc545..018067e379f2b 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -9006,8 +9006,10 @@ static int trace_array_create_dir(struct trace_array *tr)
+ 		return -EINVAL;
+ 
+ 	ret = event_trace_add_tracer(tr->dir, tr);
+-	if (ret)
++	if (ret) {
+ 		tracefs_remove(tr->dir);
++		return ret;
++	}
+ 
+ 	init_tracer_tracefs(tr, tr->dir);
+ 	__update_tracer_options(tr);
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 023777a4a04d7..c59793ffd59ce 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -65,7 +65,8 @@
+ 	C(INVALID_SORT_MODIFIER,"Invalid sort modifier"),		\
+ 	C(EMPTY_SORT_FIELD,	"Empty sort field"),			\
+ 	C(TOO_MANY_SORT_FIELDS,	"Too many sort fields (Max = 2)"),	\
+-	C(INVALID_SORT_FIELD,	"Sort field must be a key or a val"),
++	C(INVALID_SORT_FIELD,	"Sort field must be a key or a val"),	\
++	C(INVALID_STR_OPERAND,	"String type can not be an operand in expression"),
+ 
+ #undef C
+ #define C(a, b)		HIST_ERR_##a
+@@ -2156,6 +2157,13 @@ static struct hist_field *parse_unary(struct hist_trigger_data *hist_data,
+ 		ret = PTR_ERR(operand1);
+ 		goto free;
+ 	}
++	if (operand1->flags & HIST_FIELD_FL_STRING) {
++		/* String type can not be the operand of unary operator. */
++		hist_err(file->tr, HIST_ERR_INVALID_STR_OPERAND, errpos(str));
++		destroy_hist_field(operand1, 0);
++		ret = -EINVAL;
++		goto free;
++	}
+ 
+ 	expr->flags |= operand1->flags &
+ 		(HIST_FIELD_FL_TIMESTAMP | HIST_FIELD_FL_TIMESTAMP_USECS);
+@@ -2257,6 +2265,11 @@ static struct hist_field *parse_expr(struct hist_trigger_data *hist_data,
+ 		operand1 = NULL;
+ 		goto free;
+ 	}
++	if (operand1->flags & HIST_FIELD_FL_STRING) {
++		hist_err(file->tr, HIST_ERR_INVALID_STR_OPERAND, errpos(operand1_str));
++		ret = -EINVAL;
++		goto free;
++	}
+ 
+ 	/* rest of string could be another expression e.g. b+c in a+b+c */
+ 	operand_flags = 0;
+@@ -2266,6 +2279,11 @@ static struct hist_field *parse_expr(struct hist_trigger_data *hist_data,
+ 		operand2 = NULL;
+ 		goto free;
+ 	}
++	if (operand2->flags & HIST_FIELD_FL_STRING) {
++		hist_err(file->tr, HIST_ERR_INVALID_STR_OPERAND, errpos(str));
++		ret = -EINVAL;
++		goto free;
++	}
+ 
+ 	ret = check_expr_operands(file->tr, operand1, operand2);
+ 	if (ret)
+@@ -2287,6 +2305,10 @@ static struct hist_field *parse_expr(struct hist_trigger_data *hist_data,
+ 
+ 	expr->operands[0] = operand1;
+ 	expr->operands[1] = operand2;
++
++	/* The operand sizes should be the same, so just pick one */
++	expr->size = operand1->size;
++
+ 	expr->operator = field_op;
+ 	expr->name = expr_str(expr, 0);
+ 	expr->type = kstrdup(operand1->type, GFP_KERNEL);
+diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
+index fc32821f8240b..efd14c79fab41 100644
+--- a/kernel/tracepoint.c
++++ b/kernel/tracepoint.c
+@@ -15,12 +15,57 @@
+ #include <linux/sched/task.h>
+ #include <linux/static_key.h>
+ 
++enum tp_func_state {
++	TP_FUNC_0,
++	TP_FUNC_1,
++	TP_FUNC_2,
++	TP_FUNC_N,
++};
++
+ extern tracepoint_ptr_t __start___tracepoints_ptrs[];
+ extern tracepoint_ptr_t __stop___tracepoints_ptrs[];
+ 
+ DEFINE_SRCU(tracepoint_srcu);
+ EXPORT_SYMBOL_GPL(tracepoint_srcu);
+ 
++enum tp_transition_sync {
++	TP_TRANSITION_SYNC_1_0_1,
++	TP_TRANSITION_SYNC_N_2_1,
++
++	_NR_TP_TRANSITION_SYNC,
++};
++
++struct tp_transition_snapshot {
++	unsigned long rcu;
++	unsigned long srcu;
++	bool ongoing;
++};
++
++/* Protected by tracepoints_mutex */
++static struct tp_transition_snapshot tp_transition_snapshot[_NR_TP_TRANSITION_SYNC];
++
++static void tp_rcu_get_state(enum tp_transition_sync sync)
++{
++	struct tp_transition_snapshot *snapshot = &tp_transition_snapshot[sync];
++
++	/* Keep the latest get_state snapshot. */
++	snapshot->rcu = get_state_synchronize_rcu();
++	snapshot->srcu = start_poll_synchronize_srcu(&tracepoint_srcu);
++	snapshot->ongoing = true;
++}
++
++static void tp_rcu_cond_sync(enum tp_transition_sync sync)
++{
++	struct tp_transition_snapshot *snapshot = &tp_transition_snapshot[sync];
++
++	if (!snapshot->ongoing)
++		return;
++	cond_synchronize_rcu(snapshot->rcu);
++	if (!poll_state_synchronize_srcu(&tracepoint_srcu, snapshot->srcu))
++		synchronize_srcu(&tracepoint_srcu);
++	snapshot->ongoing = false;
++}
++
+ /* Set to 1 to enable tracepoint debug output */
+ static const int tracepoint_debug;
+ 
+@@ -246,26 +291,29 @@ static void *func_remove(struct tracepoint_func **funcs,
+ 	return old;
+ }
+ 
+-static void tracepoint_update_call(struct tracepoint *tp, struct tracepoint_func *tp_funcs, bool sync)
++/*
++ * Count the number of functions (enum tp_func_state) in a tp_funcs array.
++ */
++static enum tp_func_state nr_func_state(const struct tracepoint_func *tp_funcs)
++{
++	if (!tp_funcs)
++		return TP_FUNC_0;
++	if (!tp_funcs[1].func)
++		return TP_FUNC_1;
++	if (!tp_funcs[2].func)
++		return TP_FUNC_2;
++	return TP_FUNC_N;	/* 3 or more */
++}
++
++static void tracepoint_update_call(struct tracepoint *tp, struct tracepoint_func *tp_funcs)
+ {
+ 	void *func = tp->iterator;
+ 
+ 	/* Synthetic events do not have static call sites */
+ 	if (!tp->static_call_key)
+ 		return;
+-
+-	if (!tp_funcs[1].func) {
++	if (nr_func_state(tp_funcs) == TP_FUNC_1)
+ 		func = tp_funcs[0].func;
+-		/*
+-		 * If going from the iterator back to a single caller,
+-		 * we need to synchronize with __DO_TRACE to make sure
+-		 * that the data passed to the callback is the one that
+-		 * belongs to that callback.
+-		 */
+-		if (sync)
+-			tracepoint_synchronize_unregister();
+-	}
+-
+ 	__static_call_update(tp->static_call_key, tp->static_call_tramp, func);
+ }
+ 
+@@ -299,9 +347,41 @@ static int tracepoint_add_func(struct tracepoint *tp,
+ 	 * a pointer to it.  This array is referenced by __DO_TRACE from
+ 	 * include/linux/tracepoint.h using rcu_dereference_sched().
+ 	 */
+-	tracepoint_update_call(tp, tp_funcs, false);
+-	rcu_assign_pointer(tp->funcs, tp_funcs);
+-	static_key_enable(&tp->key);
++	switch (nr_func_state(tp_funcs)) {
++	case TP_FUNC_1:		/* 0->1 */
++		/*
++		 * Make sure new static func never uses old data after a
++		 * 1->0->1 transition sequence.
++		 */
++		tp_rcu_cond_sync(TP_TRANSITION_SYNC_1_0_1);
++		/* Set static call to first function */
++		tracepoint_update_call(tp, tp_funcs);
++		/* Both iterator and static call handle NULL tp->funcs */
++		rcu_assign_pointer(tp->funcs, tp_funcs);
++		static_key_enable(&tp->key);
++		break;
++	case TP_FUNC_2:		/* 1->2 */
++		/* Set iterator static call */
++		tracepoint_update_call(tp, tp_funcs);
++		/*
++		 * Iterator callback installed before updating tp->funcs.
++		 * Requires ordering between RCU assign/dereference and
++		 * static call update/call.
++		 */
++		fallthrough;
++	case TP_FUNC_N:		/* N->N+1 (N>1) */
++		rcu_assign_pointer(tp->funcs, tp_funcs);
++		/*
++		 * Make sure static func never uses incorrect data after a
++		 * N->...->2->1 (N>1) transition sequence.
++		 */
++		if (tp_funcs[0].data != old[0].data)
++			tp_rcu_get_state(TP_TRANSITION_SYNC_N_2_1);
++		break;
++	default:
++		WARN_ON_ONCE(1);
++		break;
++	}
+ 
+ 	release_probes(old);
+ 	return 0;
+@@ -328,17 +408,52 @@ static int tracepoint_remove_func(struct tracepoint *tp,
+ 		/* Failed allocating new tp_funcs, replaced func with stub */
+ 		return 0;
+ 
+-	if (!tp_funcs) {
++	switch (nr_func_state(tp_funcs)) {
++	case TP_FUNC_0:		/* 1->0 */
+ 		/* Removed last function */
+ 		if (tp->unregfunc && static_key_enabled(&tp->key))
+ 			tp->unregfunc();
+ 
+ 		static_key_disable(&tp->key);
++		/* Set iterator static call */
++		tracepoint_update_call(tp, tp_funcs);
++		/* Both iterator and static call handle NULL tp->funcs */
++		rcu_assign_pointer(tp->funcs, NULL);
++		/*
++		 * Make sure new static func never uses old data after a
++		 * 1->0->1 transition sequence.
++		 */
++		tp_rcu_get_state(TP_TRANSITION_SYNC_1_0_1);
++		break;
++	case TP_FUNC_1:		/* 2->1 */
+ 		rcu_assign_pointer(tp->funcs, tp_funcs);
+-	} else {
++		/*
++		 * Make sure static func never uses incorrect data after a
++		 * N->...->2->1 (N>2) transition sequence. If the first
++		 * element's data has changed, then force the synchronization
++		 * to prevent current readers that have loaded the old data
++		 * from calling the new function.
++		 */
++		if (tp_funcs[0].data != old[0].data)
++			tp_rcu_get_state(TP_TRANSITION_SYNC_N_2_1);
++		tp_rcu_cond_sync(TP_TRANSITION_SYNC_N_2_1);
++		/* Set static call to first function */
++		tracepoint_update_call(tp, tp_funcs);
++		break;
++	case TP_FUNC_2:		/* N->N-1 (N>2) */
++		fallthrough;
++	case TP_FUNC_N:
+ 		rcu_assign_pointer(tp->funcs, tp_funcs);
+-		tracepoint_update_call(tp, tp_funcs,
+-				       tp_funcs[0].func != old[0].func);
++		/*
++		 * Make sure static func never uses incorrect data after a
++		 * N->...->2->1 (N>2) transition sequence.
++		 */
++		if (tp_funcs[0].data != old[0].data)
++			tp_rcu_get_state(TP_TRANSITION_SYNC_N_2_1);
++		break;
++	default:
++		WARN_ON_ONCE(1);
++		break;
+ 	}
+ 	release_probes(old);
+ 	return 0;
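
The tracepoint rework hinges on nr_func_state(): the RCU synchronization a transition needs depends only on whether the callback array holds zero, one, two, or more entries, so the helper classifies the NULL-terminated array into exactly those buckets. A compilable mirror of that helper:

    #include <stdio.h>

    enum tp_func_state { TP_FUNC_0, TP_FUNC_1, TP_FUNC_2, TP_FUNC_N };

    struct tracepoint_func { void (*func)(void); void *data; };

    static enum tp_func_state nr_func_state(const struct tracepoint_func *f)
    {
            if (!f)
                    return TP_FUNC_0;
            if (!f[1].func)
                    return TP_FUNC_1;       /* exactly one callback */
            if (!f[2].func)
                    return TP_FUNC_2;       /* exactly two */
            return TP_FUNC_N;               /* three or more */
    }

    static void a(void) {}
    static void b(void) {}

    int main(void)
    {
            struct tracepoint_func one[] = { { a, 0 }, { 0, 0 } };
            struct tracepoint_func two[] = { { a, 0 }, { b, 0 }, { 0, 0 } };

            printf("%d %d %d\n", nr_func_state(0), nr_func_state(one),
                   nr_func_state(two));     /* prints: 0 1 2 */
            return 0;
    }
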
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 7d71d104fdfda..ee59d1c7f1f6c 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3976,14 +3976,10 @@ EXPORT_SYMBOL(hci_register_dev);
+ /* Unregister HCI device */
+ void hci_unregister_dev(struct hci_dev *hdev)
+ {
+-	int id;
+-
+ 	BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus);
+ 
+ 	hci_dev_set_flag(hdev, HCI_UNREGISTER);
+ 
+-	id = hdev->id;
+-
+ 	write_lock(&hci_dev_list_lock);
+ 	list_del(&hdev->list);
+ 	write_unlock(&hci_dev_list_lock);
+@@ -4018,7 +4014,14 @@ void hci_unregister_dev(struct hci_dev *hdev)
+ 	}
+ 
+ 	device_del(&hdev->dev);
++	/* Actual cleanup is deferred until hci_cleanup_dev(). */
++	hci_dev_put(hdev);
++}
++EXPORT_SYMBOL(hci_unregister_dev);
+ 
++/* Cleanup HCI device */
++void hci_cleanup_dev(struct hci_dev *hdev)
++{
+ 	debugfs_remove_recursive(hdev->debugfs);
+ 	kfree_const(hdev->hw_info);
+ 	kfree_const(hdev->fw_info);
+@@ -4043,11 +4046,8 @@ void hci_unregister_dev(struct hci_dev *hdev)
+ 	hci_blocked_keys_clear(hdev);
+ 	hci_dev_unlock(hdev);
+ 
+-	hci_dev_put(hdev);
+-
+-	ida_simple_remove(&hci_index_ida, id);
++	ida_simple_remove(&hci_index_ida, hdev->id);
+ }
+-EXPORT_SYMBOL(hci_unregister_dev);
+ 
+ /* Suspend HCI device */
+ int hci_suspend_dev(struct hci_dev *hdev)
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index eed0dd066e12c..53f85d7c5f9e5 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -59,6 +59,17 @@ struct hci_pinfo {
+ 	char              comm[TASK_COMM_LEN];
+ };
+ 
++static struct hci_dev *hci_hdev_from_sock(struct sock *sk)
++{
++	struct hci_dev *hdev = hci_pi(sk)->hdev;
++
++	if (!hdev)
++		return ERR_PTR(-EBADFD);
++	if (hci_dev_test_flag(hdev, HCI_UNREGISTER))
++		return ERR_PTR(-EPIPE);
++	return hdev;
++}
++
+ void hci_sock_set_flag(struct sock *sk, int nr)
+ {
+ 	set_bit(nr, &hci_pi(sk)->flags);
+@@ -759,19 +770,13 @@ void hci_sock_dev_event(struct hci_dev *hdev, int event)
+ 	if (event == HCI_DEV_UNREG) {
+ 		struct sock *sk;
+ 
+-		/* Detach sockets from device */
++		/* Wake up sockets using this dead device */
+ 		read_lock(&hci_sk_list.lock);
+ 		sk_for_each(sk, &hci_sk_list.head) {
+-			lock_sock(sk);
+ 			if (hci_pi(sk)->hdev == hdev) {
+-				hci_pi(sk)->hdev = NULL;
+ 				sk->sk_err = EPIPE;
+-				sk->sk_state = BT_OPEN;
+ 				sk->sk_state_change(sk);
+-
+-				hci_dev_put(hdev);
+ 			}
+-			release_sock(sk);
+ 		}
+ 		read_unlock(&hci_sk_list.lock);
+ 	}
+@@ -930,10 +935,10 @@ static int hci_sock_blacklist_del(struct hci_dev *hdev, void __user *arg)
+ static int hci_sock_bound_ioctl(struct sock *sk, unsigned int cmd,
+ 				unsigned long arg)
+ {
+-	struct hci_dev *hdev = hci_pi(sk)->hdev;
++	struct hci_dev *hdev = hci_hdev_from_sock(sk);
+ 
+-	if (!hdev)
+-		return -EBADFD;
++	if (IS_ERR(hdev))
++		return PTR_ERR(hdev);
+ 
+ 	if (hci_dev_test_flag(hdev, HCI_USER_CHANNEL))
+ 		return -EBUSY;
+@@ -1103,6 +1108,18 @@ static int hci_sock_bind(struct socket *sock, struct sockaddr *addr,
+ 
+ 	lock_sock(sk);
+ 
++	/* Allow detaching from dead device and attaching to alive device, if
++	 * the caller wants to re-bind (instead of close) this socket in
++	 * response to hci_sock_dev_event(HCI_DEV_UNREG) notification.
++	 */
++	hdev = hci_pi(sk)->hdev;
++	if (hdev && hci_dev_test_flag(hdev, HCI_UNREGISTER)) {
++		hci_pi(sk)->hdev = NULL;
++		sk->sk_state = BT_OPEN;
++		hci_dev_put(hdev);
++	}
++	hdev = NULL;
++
+ 	if (sk->sk_state == BT_BOUND) {
+ 		err = -EALREADY;
+ 		goto done;
+@@ -1379,9 +1396,9 @@ static int hci_sock_getname(struct socket *sock, struct sockaddr *addr,
+ 
+ 	lock_sock(sk);
+ 
+-	hdev = hci_pi(sk)->hdev;
+-	if (!hdev) {
+-		err = -EBADFD;
++	hdev = hci_hdev_from_sock(sk);
++	if (IS_ERR(hdev)) {
++		err = PTR_ERR(hdev);
+ 		goto done;
+ 	}
+ 
+@@ -1743,9 +1760,9 @@ static int hci_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ 		goto done;
+ 	}
+ 
+-	hdev = hci_pi(sk)->hdev;
+-	if (!hdev) {
+-		err = -EBADFD;
++	hdev = hci_hdev_from_sock(sk);
++	if (IS_ERR(hdev)) {
++		err = PTR_ERR(hdev);
+ 		goto done;
+ 	}
+ 
+diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c
+index 9874844a95a98..b69d88b88d2e4 100644
+--- a/net/bluetooth/hci_sysfs.c
++++ b/net/bluetooth/hci_sysfs.c
+@@ -83,6 +83,9 @@ void hci_conn_del_sysfs(struct hci_conn *conn)
+ static void bt_host_release(struct device *dev)
+ {
+ 	struct hci_dev *hdev = to_hci_dev(dev);
++
++	if (hci_dev_test_flag(hdev, HCI_UNREGISTER))
++		hci_cleanup_dev(hdev);
+ 	kfree(hdev);
+ 	module_put(THIS_MODULE);
+ }
+diff --git a/net/bridge/br.c b/net/bridge/br.c
+index ef743f94254d7..bbab9984f24e5 100644
+--- a/net/bridge/br.c
++++ b/net/bridge/br.c
+@@ -166,7 +166,8 @@ static int br_switchdev_event(struct notifier_block *unused,
+ 	case SWITCHDEV_FDB_ADD_TO_BRIDGE:
+ 		fdb_info = ptr;
+ 		err = br_fdb_external_learn_add(br, p, fdb_info->addr,
+-						fdb_info->vid, false);
++						fdb_info->vid,
++						fdb_info->is_local, false);
+ 		if (err) {
+ 			err = notifier_from_errno(err);
+ 			break;
+diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
+index 698b79747d32e..87ce52bba6498 100644
+--- a/net/bridge/br_fdb.c
++++ b/net/bridge/br_fdb.c
+@@ -1001,7 +1001,8 @@ static int fdb_add_entry(struct net_bridge *br, struct net_bridge_port *source,
+ 
+ static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br,
+ 			struct net_bridge_port *p, const unsigned char *addr,
+-			u16 nlh_flags, u16 vid, struct nlattr *nfea_tb[])
++			u16 nlh_flags, u16 vid, struct nlattr *nfea_tb[],
++			struct netlink_ext_ack *extack)
+ {
+ 	int err = 0;
+ 
+@@ -1020,7 +1021,15 @@ static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br,
+ 		rcu_read_unlock();
+ 		local_bh_enable();
+ 	} else if (ndm->ndm_flags & NTF_EXT_LEARNED) {
+-		err = br_fdb_external_learn_add(br, p, addr, vid, true);
++		if (!p && !(ndm->ndm_state & NUD_PERMANENT)) {
++			NL_SET_ERR_MSG_MOD(extack,
++					   "FDB entry towards bridge must be permanent");
++			return -EINVAL;
++		}
++
++		err = br_fdb_external_learn_add(br, p, addr, vid,
++						ndm->ndm_state & NUD_PERMANENT,
++						true);
+ 	} else {
+ 		spin_lock_bh(&br->hash_lock);
+ 		err = fdb_add_entry(br, p, addr, ndm, nlh_flags, vid, nfea_tb);
+@@ -1092,9 +1101,11 @@ int br_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
+ 		}
+ 
+ 		/* VID was specified, so use it. */
+-		err = __br_fdb_add(ndm, br, p, addr, nlh_flags, vid, nfea_tb);
++		err = __br_fdb_add(ndm, br, p, addr, nlh_flags, vid, nfea_tb,
++				   extack);
+ 	} else {
+-		err = __br_fdb_add(ndm, br, p, addr, nlh_flags, 0, nfea_tb);
++		err = __br_fdb_add(ndm, br, p, addr, nlh_flags, 0, nfea_tb,
++				   extack);
+ 		if (err || !vg || !vg->num_vlans)
+ 			goto out;
+ 
+@@ -1106,7 +1117,7 @@ int br_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
+ 			if (!br_vlan_should_use(v))
+ 				continue;
+ 			err = __br_fdb_add(ndm, br, p, addr, nlh_flags, v->vid,
+-					   nfea_tb);
++					   nfea_tb, extack);
+ 			if (err)
+ 				goto out;
+ 		}
+@@ -1246,7 +1257,7 @@ void br_fdb_unsync_static(struct net_bridge *br, struct net_bridge_port *p)
+ }
+ 
+ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+-			      const unsigned char *addr, u16 vid,
++			      const unsigned char *addr, u16 vid, bool is_local,
+ 			      bool swdev_notify)
+ {
+ 	struct net_bridge_fdb_entry *fdb;
+@@ -1263,6 +1274,10 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+ 
+ 		if (swdev_notify)
+ 			flags |= BIT(BR_FDB_ADDED_BY_USER);
++
++		if (is_local)
++			flags |= BIT(BR_FDB_LOCAL);
++
+ 		fdb = fdb_create(br, p, addr, vid, flags);
+ 		if (!fdb) {
+ 			err = -ENOMEM;
+@@ -1289,6 +1304,9 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+ 		if (swdev_notify)
+ 			set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags);
+ 
++		if (is_local)
++			set_bit(BR_FDB_LOCAL, &fdb->flags);
++
+ 		if (modified)
+ 			fdb_notify(br, fdb, RTM_NEWNEIGH, swdev_notify);
+ 	}
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index e013d33f1c7ca..4e3d26e0a2d11 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -707,7 +707,7 @@ int br_fdb_get(struct sk_buff *skb, struct nlattr *tb[], struct net_device *dev,
+ int br_fdb_sync_static(struct net_bridge *br, struct net_bridge_port *p);
+ void br_fdb_unsync_static(struct net_bridge *br, struct net_bridge_port *p);
+ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+-			      const unsigned char *addr, u16 vid,
++			      const unsigned char *addr, u16 vid, bool is_local,
+ 			      bool swdev_notify);
+ int br_fdb_external_learn_del(struct net_bridge *br, struct net_bridge_port *p,
+ 			      const unsigned char *addr, u16 vid,
+diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
+index e09147ac9a990..fc61cd3fea652 100644
+--- a/net/ipv4/tcp_offload.c
++++ b/net/ipv4/tcp_offload.c
+@@ -298,6 +298,9 @@ int tcp_gro_complete(struct sk_buff *skb)
+ 	if (th->cwr)
+ 		skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;
+ 
++	if (skb->encapsulation)
++		skb->inner_transport_header = skb->transport_header;
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(tcp_gro_complete);
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 9dde1e5fb449b..1380a6b6f4ff4 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -624,6 +624,10 @@ static int udp_gro_complete_segment(struct sk_buff *skb)
+ 
+ 	skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
+ 	skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_L4;
++
++	if (skb->encapsulation)
++		skb->inner_transport_header = skb->transport_header;
++
+ 	return 0;
+ }
+ 
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index fc8b56bcabf39..1ee96a5c5ee0a 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -886,7 +886,7 @@ struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue,
+ 
+ 	/* seqlock has the same scope of busylock, for NOLOCK qdisc */
+ 	spin_lock_init(&sch->seqlock);
+-	lockdep_set_class(&sch->busylock,
++	lockdep_set_class(&sch->seqlock,
+ 			  dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
+ 
+ 	seqcount_init(&sch->running);
+diff --git a/net/sctp/auth.c b/net/sctp/auth.c
+index fe74c5f956303..db6b7373d16c3 100644
+--- a/net/sctp/auth.c
++++ b/net/sctp/auth.c
+@@ -857,14 +857,18 @@ int sctp_auth_set_key(struct sctp_endpoint *ep,
+ 	memcpy(key->data, &auth_key->sca_key[0], auth_key->sca_keylength);
+ 	cur_key->key = key;
+ 
+-	if (replace) {
+-		list_del_init(&shkey->key_list);
+-		sctp_auth_shkey_release(shkey);
+-		if (asoc && asoc->active_key_id == auth_key->sca_keynumber)
+-			sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
++	if (!replace) {
++		list_add(&cur_key->key_list, sh_keys);
++		return 0;
+ 	}
++
++	list_del_init(&shkey->key_list);
++	sctp_auth_shkey_release(shkey);
+ 	list_add(&cur_key->key_list, sh_keys);
+ 
++	if (asoc && asoc->active_key_id == auth_key->sca_keynumber)
++		sctp_auth_asoc_init_active_key(asoc, GFP_KERNEL);
++
+ 	return 0;
+ }
+ 
+diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c
+index a20aec9d73933..2bf2693901631 100644
+--- a/net/xfrm/xfrm_compat.c
++++ b/net/xfrm/xfrm_compat.c
+@@ -298,8 +298,16 @@ static int xfrm_xlate64(struct sk_buff *dst, const struct nlmsghdr *nlh_src)
+ 	len = nlmsg_attrlen(nlh_src, xfrm_msg_min[type]);
+ 
+ 	nla_for_each_attr(nla, attrs, len, remaining) {
+-		int err = xfrm_xlate64_attr(dst, nla);
++		int err;
+ 
++		switch (type) {
++		case XFRM_MSG_NEWSPDINFO:
++			err = xfrm_nla_cpy(dst, nla, nla_len(nla));
++			break;
++		default:
++			err = xfrm_xlate64_attr(dst, nla);
++			break;
++		}
+ 		if (err)
+ 			return err;
+ 	}
+@@ -341,7 +349,8 @@ static int xfrm_alloc_compat(struct sk_buff *skb, const struct nlmsghdr *nlh_src
+ 
+ /* Calculates len of translated 64-bit message. */
+ static size_t xfrm_user_rcv_calculate_len64(const struct nlmsghdr *src,
+-					    struct nlattr *attrs[XFRMA_MAX+1])
++					    struct nlattr *attrs[XFRMA_MAX + 1],
++					    int maxtype)
+ {
+ 	size_t len = nlmsg_len(src);
+ 
+@@ -358,10 +367,20 @@ static size_t xfrm_user_rcv_calculate_len64(const struct nlmsghdr *src,
+ 	case XFRM_MSG_POLEXPIRE:
+ 		len += 8;
+ 		break;
++	case XFRM_MSG_NEWSPDINFO:
++		/* attributes are xfrm_spdattr_type_t, not xfrm_attr_type_t */
++		return len;
+ 	default:
+ 		break;
+ 	}
+ 
++	/* Unexpected for anything but XFRM_MSG_NEWSPDINFO; please
++	 * correct both 64=>32-bit and 32=>64-bit translators to copy
++	 * new attributes.
++	 */
++	if (WARN_ON_ONCE(maxtype))
++		return len;
++
+ 	if (attrs[XFRMA_SA])
+ 		len += 4;
+ 	if (attrs[XFRMA_POLICY])
+@@ -440,7 +459,8 @@ static int xfrm_xlate32_attr(void *dst, const struct nlattr *nla,
+ 
+ static int xfrm_xlate32(struct nlmsghdr *dst, const struct nlmsghdr *src,
+ 			struct nlattr *attrs[XFRMA_MAX+1],
+-			size_t size, u8 type, struct netlink_ext_ack *extack)
++			size_t size, u8 type, int maxtype,
++			struct netlink_ext_ack *extack)
+ {
+ 	size_t pos;
+ 	int i;
+@@ -520,6 +540,25 @@ static int xfrm_xlate32(struct nlmsghdr *dst, const struct nlmsghdr *src,
+ 	}
+ 	pos = dst->nlmsg_len;
+ 
++	if (maxtype) {
++		/* attributes are xfrm_spdattr_type_t, not xfrm_attr_type_t */
++		WARN_ON_ONCE(src->nlmsg_type != XFRM_MSG_NEWSPDINFO);
++
++		for (i = 1; i <= maxtype; i++) {
++			int err;
++
++			if (!attrs[i])
++				continue;
++
++			/* just copy - no need for translation */
++			err = xfrm_attr_cpy32(dst, &pos, attrs[i], size,
++					nla_len(attrs[i]), nla_len(attrs[i]));
++			if (err)
++				return err;
++		}
++		return 0;
++	}
++
+ 	for (i = 1; i < XFRMA_MAX + 1; i++) {
+ 		int err;
+ 
+@@ -564,7 +603,7 @@ static struct nlmsghdr *xfrm_user_rcv_msg_compat(const struct nlmsghdr *h32,
+ 	if (err < 0)
+ 		return ERR_PTR(err);
+ 
+-	len = xfrm_user_rcv_calculate_len64(h32, attrs);
++	len = xfrm_user_rcv_calculate_len64(h32, attrs, maxtype);
+ 	/* The message doesn't need translation */
+ 	if (len == nlmsg_len(h32))
+ 		return NULL;
+@@ -574,7 +613,7 @@ static struct nlmsghdr *xfrm_user_rcv_msg_compat(const struct nlmsghdr *h32,
+ 	if (!h64)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	err = xfrm_xlate32(h64, h32, attrs, len, type, extack);
++	err = xfrm_xlate32(h64, h32, attrs, len, type, maxtype, extack);
+ 	if (err < 0) {
+ 		kvfree(h64);
+ 		return ERR_PTR(err);
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index ce500f847b991..46a6d15b66d6f 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -155,7 +155,6 @@ static struct xfrm_policy_afinfo const __rcu *xfrm_policy_afinfo[AF_INET6 + 1]
+ 						__read_mostly;
+ 
+ static struct kmem_cache *xfrm_dst_cache __ro_after_init;
+-static __read_mostly seqcount_mutex_t xfrm_policy_hash_generation;
+ 
+ static struct rhashtable xfrm_policy_inexact_table;
+ static const struct rhashtable_params xfrm_pol_inexact_params;
+@@ -585,7 +584,7 @@ static void xfrm_bydst_resize(struct net *net, int dir)
+ 		return;
+ 
+ 	spin_lock_bh(&net->xfrm.xfrm_policy_lock);
+-	write_seqcount_begin(&xfrm_policy_hash_generation);
++	write_seqcount_begin(&net->xfrm.xfrm_policy_hash_generation);
+ 
+ 	odst = rcu_dereference_protected(net->xfrm.policy_bydst[dir].table,
+ 				lockdep_is_held(&net->xfrm.xfrm_policy_lock));
+@@ -596,7 +595,7 @@ static void xfrm_bydst_resize(struct net *net, int dir)
+ 	rcu_assign_pointer(net->xfrm.policy_bydst[dir].table, ndst);
+ 	net->xfrm.policy_bydst[dir].hmask = nhashmask;
+ 
+-	write_seqcount_end(&xfrm_policy_hash_generation);
++	write_seqcount_end(&net->xfrm.xfrm_policy_hash_generation);
+ 	spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+ 
+ 	synchronize_rcu();
+@@ -1245,7 +1244,7 @@ static void xfrm_hash_rebuild(struct work_struct *work)
+ 	} while (read_seqretry(&net->xfrm.policy_hthresh.lock, seq));
+ 
+ 	spin_lock_bh(&net->xfrm.xfrm_policy_lock);
+-	write_seqcount_begin(&xfrm_policy_hash_generation);
++	write_seqcount_begin(&net->xfrm.xfrm_policy_hash_generation);
+ 
+ 	/* make sure that we can insert the indirect policies again before
+ 	 * we start with destructive action.
+@@ -1354,7 +1353,7 @@ static void xfrm_hash_rebuild(struct work_struct *work)
+ 
+ out_unlock:
+ 	__xfrm_policy_inexact_flush(net);
+-	write_seqcount_end(&xfrm_policy_hash_generation);
++	write_seqcount_end(&net->xfrm.xfrm_policy_hash_generation);
+ 	spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+ 
+ 	mutex_unlock(&hash_resize_mutex);
+@@ -2095,9 +2094,9 @@ static struct xfrm_policy *xfrm_policy_lookup_bytype(struct net *net, u8 type,
+ 	rcu_read_lock();
+  retry:
+ 	do {
+-		sequence = read_seqcount_begin(&xfrm_policy_hash_generation);
++		sequence = read_seqcount_begin(&net->xfrm.xfrm_policy_hash_generation);
+ 		chain = policy_hash_direct(net, daddr, saddr, family, dir);
+-	} while (read_seqcount_retry(&xfrm_policy_hash_generation, sequence));
++	} while (read_seqcount_retry(&net->xfrm.xfrm_policy_hash_generation, sequence));
+ 
+ 	ret = NULL;
+ 	hlist_for_each_entry_rcu(pol, chain, bydst) {
+@@ -2128,7 +2127,7 @@ static struct xfrm_policy *xfrm_policy_lookup_bytype(struct net *net, u8 type,
+ 	}
+ 
+ skip_inexact:
+-	if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence))
++	if (read_seqcount_retry(&net->xfrm.xfrm_policy_hash_generation, sequence))
+ 		goto retry;
+ 
+ 	if (ret && !xfrm_pol_hold_rcu(ret))
+@@ -4084,6 +4083,7 @@ static int __net_init xfrm_net_init(struct net *net)
+ 	/* Initialize the per-net locks here */
+ 	spin_lock_init(&net->xfrm.xfrm_state_lock);
+ 	spin_lock_init(&net->xfrm.xfrm_policy_lock);
++	seqcount_spinlock_init(&net->xfrm.xfrm_policy_hash_generation, &net->xfrm.xfrm_policy_lock);
+ 	mutex_init(&net->xfrm.xfrm_cfg_mutex);
+ 
+ 	rv = xfrm_statistics_init(net);
+@@ -4128,7 +4128,6 @@ void __init xfrm_init(void)
+ {
+ 	register_pernet_subsys(&xfrm_net_ops);
+ 	xfrm_dev_init();
+-	seqcount_mutex_init(&xfrm_policy_hash_generation, &hash_resize_mutex);
+ 	xfrm_input_init();
+ 
+ #ifdef CONFIG_XFRM_ESPINTCP
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index b47d613409b70..7aff641c717d7 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -2811,6 +2811,16 @@ static int xfrm_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 
+ 	err = link->doit(skb, nlh, attrs);
+ 
++	/* We need to free skb allocated in xfrm_alloc_compat() before
++	 * returning from this function, because consume_skb() won't take
++	 * care of frag_list since netlink destructor sets
++	 * skb->head to NULL. (see netlink_skb_destructor())
++	 */
++	if (skb_has_frag_list(skb)) {
++		kfree_skb(skb_shinfo(skb)->frag_list);
++		skb_shinfo(skb)->frag_list = NULL;
++	}
++
+ err:
+ 	kvfree(nlh64);
+ 	return err;
+diff --git a/scripts/tracing/draw_functrace.py b/scripts/tracing/draw_functrace.py
+index 74f8aadfd4cbc..7011fbe003ff2 100755
+--- a/scripts/tracing/draw_functrace.py
++++ b/scripts/tracing/draw_functrace.py
+@@ -17,7 +17,7 @@ Usage:
+ 	$ cat /sys/kernel/debug/tracing/trace_pipe > ~/raw_trace_func
+ 	Wait some time, but not too much; the script is a bit slow.
+ 	Break the pipe (Ctrl + Z)
+-	$ scripts/draw_functrace.py < raw_trace_func > draw_functrace
++	$ scripts/tracing/draw_functrace.py < ~/raw_trace_func > draw_functrace
+ 	Then you have your drawn trace in draw_functrace
+ """
+ 
+@@ -103,10 +103,10 @@ def parseLine(line):
+ 	line = line.strip()
+ 	if line.startswith("#"):
+ 		raise CommentLineException
+-	m = re.match("[^]]+?\\] +([0-9.]+): (\\w+) <-(\\w+)", line)
++	m = re.match("[^]]+?\\] +([a-z.]+) +([0-9.]+): (\\w+) <-(\\w+)", line)
+ 	if m is None:
+ 		raise BrokenLineException
+-	return (m.group(1), m.group(2), m.group(3))
++	return (m.group(2), m.group(3), m.group(4))
+ 
+ 
+ def main():
+diff --git a/security/selinux/ss/policydb.c b/security/selinux/ss/policydb.c
+index 9fccf417006b0..6a04de21343f8 100644
+--- a/security/selinux/ss/policydb.c
++++ b/security/selinux/ss/policydb.c
+@@ -874,7 +874,7 @@ int policydb_load_isids(struct policydb *p, struct sidtab *s)
+ 	rc = sidtab_init(s);
+ 	if (rc) {
+ 		pr_err("SELinux:  out of memory on SID table init\n");
+-		goto out;
++		return rc;
+ 	}
+ 
+ 	head = p->ocontexts[OCON_ISID];
+@@ -885,7 +885,7 @@ int policydb_load_isids(struct policydb *p, struct sidtab *s)
+ 		if (sid == SECSID_NULL) {
+ 			pr_err("SELinux:  SID 0 was assigned a context.\n");
+ 			sidtab_destroy(s);
+-			goto out;
++			return -EINVAL;
+ 		}
+ 
+ 		/* Ignore initial SIDs unused by this kernel. */
+@@ -897,12 +897,10 @@ int policydb_load_isids(struct policydb *p, struct sidtab *s)
+ 			pr_err("SELinux:  unable to load initial SID %s.\n",
+ 			       name);
+ 			sidtab_destroy(s);
+-			goto out;
++			return rc;
+ 		}
+ 	}
+-	rc = 0;
+-out:
+-	return rc;
++	return 0;
+ }
+ 
+ int policydb_class_isvalid(struct policydb *p, unsigned int class)
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index a9fd486089a7a..2b3c164d21f17 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -246,7 +246,7 @@ static bool hw_support_mmap(struct snd_pcm_substream *substream)
+ 	if (!(substream->runtime->hw.info & SNDRV_PCM_INFO_MMAP))
+ 		return false;
+ 
+-	if (substream->ops->mmap)
++	if (substream->ops->mmap || substream->ops->page)
+ 		return true;
+ 
+ 	switch (substream->dma_buffer.dev.type) {
+diff --git a/sound/core/seq/seq_ports.c b/sound/core/seq/seq_ports.c
+index b9c2ce2b8d5a3..84d78630463e4 100644
+--- a/sound/core/seq/seq_ports.c
++++ b/sound/core/seq/seq_ports.c
+@@ -514,10 +514,11 @@ static int check_and_subscribe_port(struct snd_seq_client *client,
+ 	return err;
+ }
+ 
+-static void delete_and_unsubscribe_port(struct snd_seq_client *client,
+-					struct snd_seq_client_port *port,
+-					struct snd_seq_subscribers *subs,
+-					bool is_src, bool ack)
++/* called with grp->list_mutex held */
++static void __delete_and_unsubscribe_port(struct snd_seq_client *client,
++					  struct snd_seq_client_port *port,
++					  struct snd_seq_subscribers *subs,
++					  bool is_src, bool ack)
+ {
+ 	struct snd_seq_port_subs_info *grp;
+ 	struct list_head *list;
+@@ -525,7 +526,6 @@ static void delete_and_unsubscribe_port(struct snd_seq_client *client,
+ 
+ 	grp = is_src ? &port->c_src : &port->c_dest;
+ 	list = is_src ? &subs->src_list : &subs->dest_list;
+-	down_write(&grp->list_mutex);
+ 	write_lock_irq(&grp->list_lock);
+ 	empty = list_empty(list);
+ 	if (!empty)
+@@ -535,6 +535,18 @@ static void delete_and_unsubscribe_port(struct snd_seq_client *client,
+ 
+ 	if (!empty)
+ 		unsubscribe_port(client, port, grp, &subs->info, ack);
++}
++
++static void delete_and_unsubscribe_port(struct snd_seq_client *client,
++					struct snd_seq_client_port *port,
++					struct snd_seq_subscribers *subs,
++					bool is_src, bool ack)
++{
++	struct snd_seq_port_subs_info *grp;
++
++	grp = is_src ? &port->c_src : &port->c_dest;
++	down_write(&grp->list_mutex);
++	__delete_and_unsubscribe_port(client, port, subs, is_src, ack);
+ 	up_write(&grp->list_mutex);
+ }
+ 
+@@ -590,27 +602,30 @@ int snd_seq_port_disconnect(struct snd_seq_client *connector,
+ 			    struct snd_seq_client_port *dest_port,
+ 			    struct snd_seq_port_subscribe *info)
+ {
+-	struct snd_seq_port_subs_info *src = &src_port->c_src;
++	struct snd_seq_port_subs_info *dest = &dest_port->c_dest;
+ 	struct snd_seq_subscribers *subs;
+ 	int err = -ENOENT;
+ 
+-	down_write(&src->list_mutex);
++	/* always start by deleting the dest port to avoid concurrent
++	 * deletions
++	 */
++	down_write(&dest->list_mutex);
+ 	/* look for the connection */
+-	list_for_each_entry(subs, &src->list_head, src_list) {
++	list_for_each_entry(subs, &dest->list_head, dest_list) {
+ 		if (match_subs_info(info, &subs->info)) {
+-			atomic_dec(&subs->ref_count); /* mark as not ready */
++			__delete_and_unsubscribe_port(dest_client, dest_port,
++						      subs, false,
++						      connector->number != dest_client->number);
+ 			err = 0;
+ 			break;
+ 		}
+ 	}
+-	up_write(&src->list_mutex);
++	up_write(&dest->list_mutex);
+ 	if (err < 0)
+ 		return err;
+ 
+ 	delete_and_unsubscribe_port(src_client, src_port, subs, true,
+ 				    connector->number != src_client->number);
+-	delete_and_unsubscribe_port(dest_client, dest_port, subs, false,
+-				    connector->number != dest_client->number);
+ 	kfree(subs);
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 19a3ae79c0012..c92d9b9cf9441 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8214,9 +8214,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x129c, "Acer SWIFT SF314-55", ALC256_FIXUP_ACER_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x1300, "Acer SWIFT SF314-56", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x142b, "Acer Swift SF314-42", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 2f6a62416c057..a1f8c3a026f57 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -907,7 +907,7 @@ static void usb_audio_disconnect(struct usb_interface *intf)
+ 		}
+ 	}
+ 
+-	if (chip->quirk_type & QUIRK_SETUP_DISABLE_AUTOSUSPEND)
++	if (chip->quirk_type == QUIRK_SETUP_DISABLE_AUTOSUSPEND)
+ 		usb_enable_autosuspend(interface_to_usbdev(intf));
+ 
+ 	chip->num_interfaces--;
+diff --git a/sound/usb/clock.c b/sound/usb/clock.c
+index 17bbde73d4d15..14772209194bc 100644
+--- a/sound/usb/clock.c
++++ b/sound/usb/clock.c
+@@ -325,6 +325,12 @@ static int __uac_clock_find_source(struct snd_usb_audio *chip,
+ 					      selector->baCSourceID[ret - 1],
+ 					      visited, validate);
+ 		if (ret > 0) {
++			/*
++			 * For Samsung USBC Headset (AKG), setting clock selector again
++			 * will result in incorrect default clock setting problems
++			 */
++			if (chip->usb_id == USB_ID(0x04e8, 0xa051))
++				return ret;
+ 			err = uac_clock_selector_set_val(chip, entity_id, cur);
+ 			if (err < 0)
+ 				return err;
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index f4cdaf1ba44ac..9b713b4a5ec4c 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1816,6 +1816,15 @@ static void get_connector_control_name(struct usb_mixer_interface *mixer,
+ 		strlcat(name, " - Output Jack", name_size);
+ }
+ 
++/* get connector value to "wake up" the USB audio */
++static int connector_mixer_resume(struct usb_mixer_elem_list *list)
++{
++	struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list);
++
++	get_connector_value(cval, NULL, NULL);
++	return 0;
++}
++
+ /* Build a mixer control for a UAC connector control (jack-detect) */
+ static void build_connector_control(struct usb_mixer_interface *mixer,
+ 				    const struct usbmix_name_map *imap,
+@@ -1833,6 +1842,10 @@ static void build_connector_control(struct usb_mixer_interface *mixer,
+ 	if (!cval)
+ 		return;
+ 	snd_usb_mixer_elem_init_std(&cval->head, mixer, term->id);
++
++	/* set up a specific resume callback */
++	cval->head.resume = connector_mixer_resume;
++
+ 	/*
+ 	 * UAC2: The first byte from reading the UAC2_TE_CONNECTOR control returns the
+ 	 * number of channels connected.
+@@ -3642,23 +3655,15 @@ static int restore_mixer_value(struct usb_mixer_elem_list *list)
+ 	return 0;
+ }
+ 
+-static int default_mixer_resume(struct usb_mixer_elem_list *list)
+-{
+-	struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list);
+-
+-	/* get connector value to "wake up" the USB audio */
+-	if (cval->val_type == USB_MIXER_BOOLEAN && cval->channels == 1)
+-		get_connector_value(cval, NULL, NULL);
+-
+-	return 0;
+-}
+-
+ static int default_mixer_reset_resume(struct usb_mixer_elem_list *list)
+ {
+-	int err = default_mixer_resume(list);
++	int err;
+ 
+-	if (err < 0)
+-		return err;
++	if (list->resume) {
++		err = list->resume(list);
++		if (err < 0)
++			return err;
++	}
+ 	return restore_mixer_value(list);
+ }
+ 
+@@ -3697,7 +3702,7 @@ void snd_usb_mixer_elem_init_std(struct usb_mixer_elem_list *list,
+ 	list->id = unitid;
+ 	list->dump = snd_usb_mixer_dump_cval;
+ #ifdef CONFIG_PM
+-	list->resume = default_mixer_resume;
++	list->resume = NULL;
+ 	list->reset_resume = default_mixer_reset_resume;
+ #endif
+ }
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index e7accd87e0632..326d1b0ea5e69 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1899,6 +1899,7 @@ static const struct registration_quirk registration_quirks[] = {
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ea, 2),	/* Kingston HyperX Cloud Flight S */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x1f46, 2),	/* JBL Quantum 600 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x2039, 2),	/* JBL Quantum 400 */
++	REG_QUIRK_ENTRY(0x0ecb, 0x203c, 2),	/* JBL Quantum 600 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x203e, 2),	/* JBL Quantum 800 */
+ 	{ 0 }					/* terminator */
+ };
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 0119466677b7d..1dcc66060a19a 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -845,6 +845,8 @@ static void kvm_destroy_vm_debugfs(struct kvm *kvm)
+ 
+ static int kvm_create_vm_debugfs(struct kvm *kvm, int fd)
+ {
++	static DEFINE_MUTEX(kvm_debugfs_lock);
++	struct dentry *dent;
+ 	char dir_name[ITOA_MAX_LEN * 2];
+ 	struct kvm_stat_data *stat_data;
+ 	struct kvm_stats_debugfs_item *p;
+@@ -853,8 +855,20 @@ static int kvm_create_vm_debugfs(struct kvm *kvm, int fd)
+ 		return 0;
+ 
+ 	snprintf(dir_name, sizeof(dir_name), "%d-%d", task_pid_nr(current), fd);
+-	kvm->debugfs_dentry = debugfs_create_dir(dir_name, kvm_debugfs_dir);
++	mutex_lock(&kvm_debugfs_lock);
++	dent = debugfs_lookup(dir_name, kvm_debugfs_dir);
++	if (dent) {
++		pr_warn_ratelimited("KVM: debugfs: duplicate directory %s\n", dir_name);
++		dput(dent);
++		mutex_unlock(&kvm_debugfs_lock);
++		return 0;
++	}
++	dent = debugfs_create_dir(dir_name, kvm_debugfs_dir);
++	mutex_unlock(&kvm_debugfs_lock);
++	if (IS_ERR(dent))
++		return 0;
+ 
++	kvm->debugfs_dentry = dent;
+ 	kvm->debugfs_stat_data = kcalloc(kvm_debugfs_num_entries,
+ 					 sizeof(*kvm->debugfs_stat_data),
+ 					 GFP_KERNEL_ACCOUNT);
+@@ -4993,7 +5007,7 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm)
+ 	}
+ 	add_uevent_var(env, "PID=%d", kvm->userspace_pid);
+ 
+-	if (!IS_ERR_OR_NULL(kvm->debugfs_dentry)) {
++	if (kvm->debugfs_dentry) {
+ 		char *tmp, *p = kmalloc(PATH_MAX, GFP_KERNEL_ACCOUNT);
+ 
+ 		if (p) {


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-13 14:30 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-13 14:30 UTC (permalink / raw
  To: gentoo-commits

commit:     ffb49f890b00cb9a26645c14a4ec7cdd68ed7769
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 13 14:30:02 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Aug 13 14:30:02 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ffb49f89

Bump BMQ(BitMap Queue) Scheduler patch to r2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   2 +-
 ...=> 5020_BMQ-and-PDS-io-scheduler-v5.13-r2.patch | 202 +++++++--------------
 2 files changed, 66 insertions(+), 138 deletions(-)

diff --git a/0000_README b/0000_README
index deff891..e72cd88 100644
--- a/0000_README
+++ b/0000_README
@@ -111,7 +111,7 @@ Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.8 patch enables gcc = v9+ optimizations for additional CPUs.
 
-Patch:  5020_BMQ-and-PDS-io-scheduler-v5.13-r1.patch
+Patch:  5020_BMQ-and-PDS-io-scheduler-v5.13-r2.patch
 From:   https://gitlab.com/alfredchen/linux-prjc
 Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
 

diff --git a/5020_BMQ-and-PDS-io-scheduler-v5.13-r1.patch b/5020_BMQ-and-PDS-io-scheduler-v5.13-r2.patch
similarity index 98%
rename from 5020_BMQ-and-PDS-io-scheduler-v5.13-r1.patch
rename to 5020_BMQ-and-PDS-io-scheduler-v5.13-r2.patch
index 82d7f5a..72533b6 100644
--- a/5020_BMQ-and-PDS-io-scheduler-v5.13-r1.patch
+++ b/5020_BMQ-and-PDS-io-scheduler-v5.13-r2.patch
@@ -1,5 +1,5 @@
 diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
-index cb89dbdedc46..37192ffbd3f8 100644
+index cb89dbdedc46..11e17f2f3a26 100644
 --- a/Documentation/admin-guide/kernel-parameters.txt
 +++ b/Documentation/admin-guide/kernel-parameters.txt
 @@ -4878,6 +4878,12 @@
@@ -7,9 +7,9 @@ index cb89dbdedc46..37192ffbd3f8 100644
  	sbni=		[NET] Granch SBNI12 leased line adapter
  
 +	sched_timeslice=
-+			[KNL] Time slice in us for BMQ/PDS scheduler.
-+			Format: <int> (must be >= 1000)
-+			Default: 4000
++			[KNL] Time slice in ms for Project C BMQ/PDS scheduler.
++			Format: integer 2, 4
++			Default: 4
 +			See Documentation/scheduler/sched-BMQ.txt
 +
  	sched_verbose	[KNL] Enables verbose scheduler debug messages.
@@ -647,10 +647,10 @@ index 5fc9c9b70862..06b60d612535 100644
  obj-$(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) += cpufreq_schedutil.o
 diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
 new file mode 100644
-index 000000000000..b65b12c6014f
+index 000000000000..e296d56e85f0
 --- /dev/null
 +++ b/kernel/sched/alt_core.c
-@@ -0,0 +1,7249 @@
+@@ -0,0 +1,7227 @@
 +/*
 + *  kernel/sched/alt_core.c
 + *
@@ -720,7 +720,7 @@ index 000000000000..b65b12c6014f
 +#define sched_feat(x)	(0)
 +#endif /* CONFIG_SCHED_DEBUG */
 +
-+#define ALT_SCHED_VERSION "v5.13-r1"
++#define ALT_SCHED_VERSION "v5.13-r2"
 +
 +/* rt_prio(prio) defined in include/linux/sched/rt.h */
 +#define rt_task(p)		rt_prio((p)->prio)
@@ -769,11 +769,9 @@ index 000000000000..b65b12c6014f
 +#ifdef CONFIG_SMP
 +static cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
 +
-+DEFINE_PER_CPU(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_affinity_masks);
-+DEFINE_PER_CPU(cpumask_t *, sched_cpu_affinity_end_mask);
-+
 +DEFINE_PER_CPU(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
 +DEFINE_PER_CPU(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU(cpumask_t *, sched_cpu_topo_end_mask);
 +
 +#ifdef CONFIG_SCHED_SMT
 +DEFINE_STATIC_KEY_FALSE(sched_smt_present);
@@ -799,8 +797,6 @@ index 000000000000..b65b12c6014f
 +# define finish_arch_post_lock_switch()	do { } while (0)
 +#endif
 +
-+#define IDLE_WM	(IDLE_TASK_SCHED_PRIO)
-+
 +#ifdef CONFIG_SCHED_SMT
 +static cpumask_t sched_sg_idle_mask ____cacheline_aligned_in_smp;
 +#endif
@@ -842,28 +838,28 @@ index 000000000000..b65b12c6014f
 +	rq->watermark = watermark;
 +	cpu = cpu_of(rq);
 +	if (watermark < last_wm) {
-+		for (i = watermark + 1; i <= last_wm; i++)
-+			cpumask_andnot(&sched_rq_watermark[i],
-+				       &sched_rq_watermark[i], cpumask_of(cpu));
++		for (i = last_wm; i > watermark; i--)
++			cpumask_clear_cpu(cpu, sched_rq_watermark + SCHED_BITS - 1 - i);
 +#ifdef CONFIG_SCHED_SMT
 +		if (static_branch_likely(&sched_smt_present) &&
-+		    IDLE_WM == last_wm)
++		    IDLE_TASK_SCHED_PRIO == last_wm)
 +			cpumask_andnot(&sched_sg_idle_mask,
 +				       &sched_sg_idle_mask, cpu_smt_mask(cpu));
 +#endif
 +		return;
 +	}
 +	/* last_wm < watermark */
-+	for (i = last_wm + 1; i <= watermark; i++)
-+		cpumask_set_cpu(cpu, &sched_rq_watermark[i]);
++	for (i = watermark; i > last_wm; i--)
++		cpumask_set_cpu(cpu, sched_rq_watermark + SCHED_BITS - 1 - i);
 +#ifdef CONFIG_SCHED_SMT
-+	if (static_branch_likely(&sched_smt_present) && IDLE_WM == watermark) {
++	if (static_branch_likely(&sched_smt_present) &&
++	    IDLE_TASK_SCHED_PRIO == watermark) {
 +		cpumask_t tmp;
 +
-+		cpumask_and(&tmp, cpu_smt_mask(cpu), &sched_rq_watermark[IDLE_WM]);
++		cpumask_and(&tmp, cpu_smt_mask(cpu), sched_rq_watermark);
 +		if (cpumask_equal(&tmp, cpu_smt_mask(cpu)))
-+			cpumask_or(&sched_sg_idle_mask, cpu_smt_mask(cpu),
-+				   &sched_sg_idle_mask);
++			cpumask_or(&sched_sg_idle_mask,
++				   &sched_sg_idle_mask, cpu_smt_mask(cpu));
 +	}
 +#endif
 +}
@@ -1546,8 +1542,8 @@ index 000000000000..b65b12c6014f
 +		default_cpu = cpu;
 +	}
 +
-+	for (mask = per_cpu(sched_cpu_affinity_masks, cpu) + 1;
-+	     mask < per_cpu(sched_cpu_affinity_end_mask, cpu); mask++)
++	for (mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++	     mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
 +		for_each_cpu_and(i, mask, housekeeping_cpumask(HK_FLAG_TIMER))
 +			if (!idle_cpu(i))
 +				return i;
@@ -2389,9 +2385,9 @@ index 000000000000..b65b12c6014f
 +#ifdef CONFIG_SCHED_SMT
 +	    cpumask_and(&tmp, &chk_mask, &sched_sg_idle_mask) ||
 +#endif
-+	    cpumask_and(&tmp, &chk_mask, &sched_rq_watermark[IDLE_WM]) ||
++	    cpumask_and(&tmp, &chk_mask, sched_rq_watermark) ||
 +	    cpumask_and(&tmp, &chk_mask,
-+			&sched_rq_watermark[task_sched_prio(p) + 1]))
++			sched_rq_watermark + SCHED_BITS - task_sched_prio(p)))
 +		return best_mask_cpu(task_cpu(p), &tmp);
 +
 +	return best_mask_cpu(task_cpu(p), &chk_mask);
@@ -4183,8 +4179,7 @@ index 000000000000..b65b12c6014f
 +	    cpumask_and(&tmp, p->cpus_ptr, &sched_sg_idle_mask) &&
 +	    !is_migration_disabled(p)) {
 +		int cpu = cpu_of(rq);
-+		int dcpu = __best_mask_cpu(cpu, &tmp,
-+					   per_cpu(sched_cpu_llc_mask, cpu));
++		int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu));
 +		rq = move_queued_task(rq, p, dcpu);
 +	}
 +
@@ -4228,34 +4223,25 @@ index 000000000000..b65b12c6014f
 +static inline void sg_balance_check(struct rq *rq)
 +{
 +	cpumask_t chk;
-+	int cpu;
-+
-+	/* exit when no sg in idle */
-+	if (cpumask_empty(&sched_sg_idle_mask))
-+		return;
++	int cpu = cpu_of(rq);
 +
 +	/* exit when cpu is offline */
 +	if (unlikely(!rq->online))
 +		return;
 +
-+	cpu = cpu_of(rq);
 +	/*
 +	 * Only cpu in sibling idle group will do the checking and then
 +	 * find potential cpus which can migrate the current running task
 +	 */
 +	if (cpumask_test_cpu(cpu, &sched_sg_idle_mask) &&
-+	    cpumask_andnot(&chk, cpu_online_mask, &sched_rq_pending_mask) &&
-+	    cpumask_andnot(&chk, &chk, &sched_rq_watermark[IDLE_WM])) {
-+		int i, tried = 0;
++	    cpumask_andnot(&chk, cpu_online_mask, sched_rq_watermark) &&
++	    cpumask_andnot(&chk, &chk, &sched_rq_pending_mask)) {
++		int i;
 +
 +		for_each_cpu_wrap(i, &chk, cpu) {
-+			if (cpumask_subset(cpu_smt_mask(i), &chk)) {
-+				if (sg_balance_trigger(i))
-+					return;
-+				if (tried)
-+					return;
-+				tried++;
-+			}
++			if (cpumask_subset(cpu_smt_mask(i), &chk) &&
++			    sg_balance_trigger(i))
++				return;
 +		}
 +	}
 +}
@@ -4558,7 +4544,7 @@ index 000000000000..b65b12c6014f
 +{
 +	printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx\n",
 +	       sched_rq_pending_mask.bits[0],
-+	       sched_rq_watermark[IDLE_WM].bits[0],
++	       sched_rq_watermark[0].bits[0],
 +	       sched_sg_idle_mask.bits[0]);
 +}
 +#else
@@ -4597,7 +4583,7 @@ index 000000000000..b65b12c6014f
 +
 +static inline int take_other_rq_tasks(struct rq *rq, int cpu)
 +{
-+	struct cpumask *affinity_mask, *end_mask;
++	struct cpumask *topo_mask, *end_mask;
 +
 +	if (unlikely(!rq->online))
 +		return 0;
@@ -4605,11 +4591,11 @@ index 000000000000..b65b12c6014f
 +	if (cpumask_empty(&sched_rq_pending_mask))
 +		return 0;
 +
-+	affinity_mask = per_cpu(sched_cpu_affinity_masks, cpu) + 1;
-+	end_mask = per_cpu(sched_cpu_affinity_end_mask, cpu);
++	topo_mask = per_cpu(sched_cpu_topo_masks, cpu) + 1;
++	end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
 +	do {
 +		int i;
-+		for_each_cpu_and(i, &sched_rq_pending_mask, affinity_mask) {
++		for_each_cpu_and(i, &sched_rq_pending_mask, topo_mask) {
 +			int nr_migrated;
 +			struct rq *src_rq;
 +
@@ -4640,7 +4626,7 @@ index 000000000000..b65b12c6014f
 +			spin_release(&src_rq->lock.dep_map, _RET_IP_);
 +			do_raw_spin_unlock(&src_rq->lock);
 +		}
-+	} while (++affinity_mask < end_mask);
++	} while (++topo_mask < end_mask);
 +
 +	return 0;
 +}
@@ -7302,14 +7288,6 @@ index 000000000000..b65b12c6014f
 +	cpumask_t *tmp;
 +
 +	for_each_possible_cpu(cpu) {
-+		/* init affinity masks */
-+		tmp = per_cpu(sched_cpu_affinity_masks, cpu);
-+
-+		cpumask_copy(tmp, cpumask_of(cpu));
-+		tmp++;
-+		cpumask_copy(tmp, cpu_possible_mask);
-+		cpumask_clear_cpu(cpu, tmp);
-+		per_cpu(sched_cpu_affinity_end_mask, cpu) = ++tmp;
 +		/* init topo masks */
 +		tmp = per_cpu(sched_cpu_topo_masks, cpu);
 +
@@ -7317,32 +7295,32 @@ index 000000000000..b65b12c6014f
 +		tmp++;
 +		cpumask_copy(tmp, cpu_possible_mask);
 +		per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++		per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
 +		/*per_cpu(sd_llc_id, cpu) = cpu;*/
 +	}
 +}
 +
-+#define TOPOLOGY_CPUMASK(name, mask, last) \
-+	if (cpumask_and(chk, chk, mask)) {					\
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++	if (cpumask_and(topo, topo, mask)) {					\
 +		cpumask_copy(topo, mask);					\
-+		printk(KERN_INFO "sched: cpu#%02d affinity: 0x%08lx topo: 0x%08lx - "#name,\
-+		       cpu, (chk++)->bits[0], (topo++)->bits[0]);		\
++		printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name,	\
++		       cpu, (topo++)->bits[0]);					\
 +	}									\
 +	if (!last)								\
-+		cpumask_complement(chk, mask)
++		cpumask_complement(topo, mask)
 +
 +static void sched_init_topology_cpumask(void)
 +{
 +	int cpu;
-+	cpumask_t *chk, *topo;
++	cpumask_t *topo;
 +
 +	for_each_online_cpu(cpu) {
 +		/* take chance to reset time slice for idle tasks */
 +		cpu_rq(cpu)->idle->time_slice = sched_timeslice_ns;
 +
-+		chk = per_cpu(sched_cpu_affinity_masks, cpu) + 1;
 +		topo = per_cpu(sched_cpu_topo_masks, cpu) + 1;
 +
-+		cpumask_complement(chk, cpumask_of(cpu));
++		cpumask_complement(topo, cpumask_of(cpu));
 +#ifdef CONFIG_SCHED_SMT
 +		TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
 +#endif
@@ -7354,7 +7332,7 @@ index 000000000000..b65b12c6014f
 +
 +		TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
 +
-+		per_cpu(sched_cpu_affinity_end_mask, cpu) = chk;
++		per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
 +		printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
 +		       cpu, per_cpu(sd_llc_id, cpu),
 +		       (int) (per_cpu(sched_cpu_llc_mask, cpu) -
@@ -7425,7 +7403,7 @@ index 000000000000..b65b12c6014f
 +
 +#ifdef CONFIG_SMP
 +	for (i = 0; i < SCHED_BITS; i++)
-+		cpumask_copy(&sched_rq_watermark[i], cpu_present_mask);
++		cpumask_copy(sched_rq_watermark + i, cpu_present_mask);
 +#endif
 +
 +#ifdef CONFIG_CGROUP_SCHED
@@ -7439,7 +7417,7 @@ index 000000000000..b65b12c6014f
 +		rq = cpu_rq(i);
 +
 +		sched_queue_init(&rq->queue);
-+		rq->watermark = IDLE_WM;
++		rq->watermark = IDLE_TASK_SCHED_PRIO;
 +		rq->skip = NULL;
 +
 +		raw_spin_lock_init(&rq->lock);
@@ -7939,10 +7917,10 @@ index 000000000000..1212a031700e
 +{}
 diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
 new file mode 100644
-index 000000000000..f9f79422bf0e
+index 000000000000..7a48809550bf
 --- /dev/null
 +++ b/kernel/sched/alt_sched.h
-@@ -0,0 +1,710 @@
+@@ -0,0 +1,662 @@
 +#ifndef ALT_SCHED_H
 +#define ALT_SCHED_H
 +
@@ -8247,68 +8225,20 @@ index 000000000000..f9f79422bf0e
 +DECLARE_PER_CPU(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
 +DECLARE_PER_CPU(cpumask_t *, sched_cpu_llc_mask);
 +
-+static inline int __best_mask_cpu(int cpu, const cpumask_t *cpumask,
-+				  const cpumask_t *mask)
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
 +{
-+#if NR_CPUS <= 64
-+	unsigned long t;
-+
-+	while ((t = cpumask->bits[0] & mask->bits[0]) == 0UL)
-+		mask++;
++	int cpu;
 +
-+	return __ffs(t);
-+#else
 +	while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
 +		mask++;
++
 +	return cpu;
-+#endif
 +}
 +
 +static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
 +{
-+#if NR_CPUS <= 64
-+	unsigned long llc_match;
-+	cpumask_t *chk = per_cpu(sched_cpu_llc_mask, cpu);
-+
-+	if ((llc_match = mask->bits[0] & chk->bits[0])) {
-+		unsigned long match;
-+
-+		chk = per_cpu(sched_cpu_topo_masks, cpu);
-+		if (mask->bits[0] & chk->bits[0])
-+			return cpu;
-+
-+#ifdef CONFIG_SCHED_SMT
-+		chk++;
-+		if ((match = mask->bits[0] & chk->bits[0]))
-+			return __ffs(match);
-+#endif
-+
-+		return __ffs(llc_match);
-+	}
-+
-+	return __best_mask_cpu(cpu, mask, chk + 1);
-+#else
-+	cpumask_t llc_match;
-+	cpumask_t *chk = per_cpu(sched_cpu_llc_mask, cpu);
-+
-+	if (cpumask_and(&llc_match, mask, chk)) {
-+		cpumask_t tmp;
-+
-+		chk = per_cpu(sched_cpu_topo_masks, cpu);
-+		if (cpumask_test_cpu(cpu, mask))
-+			return cpu;
-+
-+#ifdef CONFIG_SCHED_SMT
-+		chk++;
-+		if (cpumask_and(&tmp, mask, chk))
-+			return cpumask_any(&tmp);
-+#endif
-+
-+		return cpumask_any(&llc_match);
-+	}
-+
-+	return __best_mask_cpu(cpu, mask, chk + 1);
-+#endif
++	return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
 +}
 +
 +extern void flush_smp_call_function_from_idle(void);
@@ -8655,7 +8585,7 @@ index 000000000000..f9f79422bf0e
 +#endif /* ALT_SCHED_H */
 diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
 new file mode 100644
-index 000000000000..7635c00dde7f
+index 000000000000..be3ee4a553ca
 --- /dev/null
 +++ b/kernel/sched/bmq.h
 @@ -0,0 +1,111 @@
@@ -8750,20 +8680,20 @@ index 000000000000..7635c00dde7f
 +		p->boost_prio + MAX_PRIORITY_ADJ : MAX_PRIORITY_ADJ;
 +}
 +
-+static void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
 +{
 +	p->boost_prio = MAX_PRIORITY_ADJ;
 +}
 +
 +#ifdef CONFIG_SMP
-+static void sched_task_ttwu(struct task_struct *p)
++static inline void sched_task_ttwu(struct task_struct *p)
 +{
 +	if(this_rq()->clock_task - p->last_ran > sched_timeslice_ns)
 +		boost_task(p);
 +}
 +#endif
 +
-+static void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
 +{
 +	if (rq_switch_time(rq) < boost_threshold(p))
 +		boost_task(p);
@@ -9043,10 +8973,10 @@ index 7ca3d3d86c2a..23e890141939 100644
 +#endif
 diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
 new file mode 100644
-index 000000000000..06d88e72b543
+index 000000000000..0f1f0d708b77
 --- /dev/null
 +++ b/kernel/sched/pds.h
-@@ -0,0 +1,129 @@
+@@ -0,0 +1,127 @@
 +#define ALT_SCHED_VERSION_MSG "sched/pds: PDS CPU Scheduler "ALT_SCHED_VERSION" by Alfred Chen.\n"
 +
 +static int sched_timeslice_shift = 22;
@@ -9067,11 +8997,9 @@ index 000000000000..06d88e72b543
 +{
 +	s64 delta = p->deadline - rq->time_edge + NORMAL_PRIO_NUM - NICE_WIDTH;
 +
-+	if (unlikely(delta > NORMAL_PRIO_NUM - 1)) {
-+		pr_info("pds: task_sched_prio_normal delta %lld, deadline %llu, time_edge %llu\n",
-+			delta, p->deadline, rq->time_edge);
++	if (WARN_ONCE(delta > NORMAL_PRIO_NUM - 1,
++		      "pds: task_sched_prio_normal() delta %lld\n", delta))
 +		return NORMAL_PRIO_NUM - 1;
-+	}
 +
 +	return (delta < 0) ? 0 : delta;
 +}
@@ -9167,15 +9095,15 @@ index 000000000000..06d88e72b543
 +	sched_renew_deadline(p, rq);
 +}
 +
-+static void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
 +{
 +	time_slice_expired(p, rq);
 +}
 +
 +#ifdef CONFIG_SMP
-+static void sched_task_ttwu(struct task_struct *p) {}
++static inline void sched_task_ttwu(struct task_struct *p) {}
 +#endif
-+static void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
 diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
 index a554e3bbab2b..3e56f5e6ff5c 100644
 --- a/kernel/sched/pelt.c


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-15 20:04 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-15 20:04 UTC (permalink / raw
  To: gentoo-commits

commit:     b9c2b0217ef78ff90c95324dd69e7946f661de57
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 15 20:04:07 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Aug 15 20:04:07 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b9c2b021

Linux patch 5.13.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |   4 +
 1010_linux-5.13.11.patch | 257 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 261 insertions(+)

diff --git a/0000_README b/0000_README
index e72cd88..75d23bb 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-5.13.10.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.10
 
+Patch:  1010_linux-5.13.11.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.11
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-5.13.11.patch b/1010_linux-5.13.11.patch
new file mode 100644
index 0000000..0ca1f85
--- /dev/null
+++ b/1010_linux-5.13.11.patch
@@ -0,0 +1,257 @@
+diff --git a/Makefile b/Makefile
+index 4e9f877f513f9..eaf1df54ad123 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/drivers/firmware/broadcom/tee_bnxt_fw.c b/drivers/firmware/broadcom/tee_bnxt_fw.c
+index ed10da5313e86..a5bf4c3f6dc74 100644
+--- a/drivers/firmware/broadcom/tee_bnxt_fw.c
++++ b/drivers/firmware/broadcom/tee_bnxt_fw.c
+@@ -212,10 +212,9 @@ static int tee_bnxt_fw_probe(struct device *dev)
+ 
+ 	pvt_data.dev = dev;
+ 
+-	fw_shm_pool = tee_shm_alloc(pvt_data.ctx, MAX_SHM_MEM_SZ,
+-				    TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
++	fw_shm_pool = tee_shm_alloc_kernel_buf(pvt_data.ctx, MAX_SHM_MEM_SZ);
+ 	if (IS_ERR(fw_shm_pool)) {
+-		dev_err(pvt_data.dev, "tee_shm_alloc failed\n");
++		dev_err(pvt_data.dev, "tee_shm_alloc_kernel_buf failed\n");
+ 		err = PTR_ERR(fw_shm_pool);
+ 		goto out_sess;
+ 	}
+@@ -242,6 +241,14 @@ static int tee_bnxt_fw_remove(struct device *dev)
+ 	return 0;
+ }
+ 
++static void tee_bnxt_fw_shutdown(struct device *dev)
++{
++	tee_shm_free(pvt_data.fw_shm_pool);
++	tee_client_close_session(pvt_data.ctx, pvt_data.session_id);
++	tee_client_close_context(pvt_data.ctx);
++	pvt_data.ctx = NULL;
++}
++
+ static const struct tee_client_device_id tee_bnxt_fw_id_table[] = {
+ 	{UUID_INIT(0x6272636D, 0x2019, 0x0716,
+ 		    0x42, 0x43, 0x4D, 0x5F, 0x53, 0x43, 0x48, 0x49)},
+@@ -257,6 +264,7 @@ static struct tee_client_driver tee_bnxt_fw_driver = {
+ 		.bus		= &tee_bus_type,
+ 		.probe		= tee_bnxt_fw_probe,
+ 		.remove		= tee_bnxt_fw_remove,
++		.shutdown	= tee_bnxt_fw_shutdown,
+ 	},
+ };
+ 
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index 930e49ef15f6a..b9dd47bd597ff 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -284,7 +284,7 @@ static struct channel *ppp_find_channel(struct ppp_net *pn, int unit);
+ static int ppp_connect_channel(struct channel *pch, int unit);
+ static int ppp_disconnect_channel(struct channel *pch);
+ static void ppp_destroy_channel(struct channel *pch);
+-static int unit_get(struct idr *p, void *ptr);
++static int unit_get(struct idr *p, void *ptr, int min);
+ static int unit_set(struct idr *p, void *ptr, int n);
+ static void unit_put(struct idr *p, int n);
+ static void *unit_find(struct idr *p, int n);
+@@ -1155,9 +1155,20 @@ static int ppp_unit_register(struct ppp *ppp, int unit, bool ifname_is_set)
+ 	mutex_lock(&pn->all_ppp_mutex);
+ 
+ 	if (unit < 0) {
+-		ret = unit_get(&pn->units_idr, ppp);
++		ret = unit_get(&pn->units_idr, ppp, 0);
+ 		if (ret < 0)
+ 			goto err;
++		if (!ifname_is_set) {
++			while (1) {
++				snprintf(ppp->dev->name, IFNAMSIZ, "ppp%i", ret);
++				if (!__dev_get_by_name(ppp->ppp_net, ppp->dev->name))
++					break;
++				unit_put(&pn->units_idr, ret);
++				ret = unit_get(&pn->units_idr, ppp, ret + 1);
++				if (ret < 0)
++					goto err;
++			}
++		}
+ 	} else {
+ 		/* Caller asked for a specific unit number. Fail with -EEXIST
+ 		 * if unavailable. For backward compatibility, return -EEXIST
+@@ -3552,9 +3563,9 @@ static int unit_set(struct idr *p, void *ptr, int n)
+ }
+ 
+ /* get new free unit number and associate pointer with it */
+-static int unit_get(struct idr *p, void *ptr)
++static int unit_get(struct idr *p, void *ptr, int min)
+ {
+-	return idr_alloc(p, ptr, 0, 0, GFP_KERNEL);
++	return idr_alloc(p, ptr, min, 0, GFP_KERNEL);
+ }
+ 
+ /* put unit number back to a pool */
+diff --git a/fs/namespace.c b/fs/namespace.c
+index c3f1a78ba3697..caad091fb204d 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1938,6 +1938,20 @@ void drop_collected_mounts(struct vfsmount *mnt)
+ 	namespace_unlock();
+ }
+ 
++static bool has_locked_children(struct mount *mnt, struct dentry *dentry)
++{
++	struct mount *child;
++
++	list_for_each_entry(child, &mnt->mnt_mounts, mnt_child) {
++		if (!is_subdir(child->mnt_mountpoint, dentry))
++			continue;
++
++		if (child->mnt.mnt_flags & MNT_LOCKED)
++			return true;
++	}
++	return false;
++}
++
+ /**
+  * clone_private_mount - create a private clone of a path
+  * @path: path to clone
+@@ -1953,10 +1967,19 @@ struct vfsmount *clone_private_mount(const struct path *path)
+ 	struct mount *old_mnt = real_mount(path->mnt);
+ 	struct mount *new_mnt;
+ 
++	down_read(&namespace_sem);
+ 	if (IS_MNT_UNBINDABLE(old_mnt))
+-		return ERR_PTR(-EINVAL);
++		goto invalid;
++
++	if (!check_mnt(old_mnt))
++		goto invalid;
++
++	if (has_locked_children(old_mnt, path->dentry))
++		goto invalid;
+ 
+ 	new_mnt = clone_mnt(old_mnt, path->dentry, CL_PRIVATE);
++	up_read(&namespace_sem);
++
+ 	if (IS_ERR(new_mnt))
+ 		return ERR_CAST(new_mnt);
+ 
+@@ -1964,6 +1987,10 @@ struct vfsmount *clone_private_mount(const struct path *path)
+ 	new_mnt->mnt_ns = MNT_NS_INTERNAL;
+ 
+ 	return &new_mnt->mnt;
++
++invalid:
++	up_read(&namespace_sem);
++	return ERR_PTR(-EINVAL);
+ }
+ EXPORT_SYMBOL_GPL(clone_private_mount);
+ 
+@@ -2315,19 +2342,6 @@ static int do_change_type(struct path *path, int ms_flags)
+ 	return err;
+ }
+ 
+-static bool has_locked_children(struct mount *mnt, struct dentry *dentry)
+-{
+-	struct mount *child;
+-	list_for_each_entry(child, &mnt->mnt_mounts, mnt_child) {
+-		if (!is_subdir(child->mnt_mountpoint, dentry))
+-			continue;
+-
+-		if (child->mnt.mnt_flags & MNT_LOCKED)
+-			return true;
+-	}
+-	return false;
+-}
+-
+ static struct mount *__do_loopback(struct path *old_path, int recurse)
+ {
+ 	struct mount *mnt = ERR_PTR(-EINVAL), *old = real_mount(old_path->mnt);
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 06f7c50ce77f9..0acd1b68bf301 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -120,6 +120,7 @@ enum lockdown_reason {
+ 	LOCKDOWN_MMIOTRACE,
+ 	LOCKDOWN_DEBUGFS,
+ 	LOCKDOWN_XMON_WR,
++	LOCKDOWN_BPF_WRITE_USER,
+ 	LOCKDOWN_INTEGRITY_MAX,
+ 	LOCKDOWN_KCORE,
+ 	LOCKDOWN_KPROBES,
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index f0568b3d6bd1e..77a0d0fb97a99 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -990,12 +990,13 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+ 		return &bpf_get_numa_node_id_proto;
+ 	case BPF_FUNC_perf_event_read:
+ 		return &bpf_perf_event_read_proto;
+-	case BPF_FUNC_probe_write_user:
+-		return bpf_get_probe_write_proto();
+ 	case BPF_FUNC_current_task_under_cgroup:
+ 		return &bpf_current_task_under_cgroup_proto;
+ 	case BPF_FUNC_get_prandom_u32:
+ 		return &bpf_get_prandom_u32_proto;
++	case BPF_FUNC_probe_write_user:
++		return security_locked_down(LOCKDOWN_BPF_WRITE_USER) < 0 ?
++		       NULL : bpf_get_probe_write_proto();
+ 	case BPF_FUNC_probe_read_user:
+ 		return &bpf_probe_read_user_proto;
+ 	case BPF_FUNC_probe_read_kernel:
+diff --git a/security/security.c b/security/security.c
+index b38155b2de83f..0d626c0dafccd 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -58,6 +58,7 @@ const char *const lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX+1] = {
+ 	[LOCKDOWN_MMIOTRACE] = "unsafe mmio",
+ 	[LOCKDOWN_DEBUGFS] = "debugfs access",
+ 	[LOCKDOWN_XMON_WR] = "xmon write access",
++	[LOCKDOWN_BPF_WRITE_USER] = "use of bpf to write user RAM",
+ 	[LOCKDOWN_INTEGRITY_MAX] = "integrity",
+ 	[LOCKDOWN_KCORE] = "/proc/kcore access",
+ 	[LOCKDOWN_KPROBES] = "use of kprobes",
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 2b3c164d21f17..cb795135ffc27 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -251,7 +251,10 @@ static bool hw_support_mmap(struct snd_pcm_substream *substream)
+ 
+ 	switch (substream->dma_buffer.dev.type) {
+ 	case SNDRV_DMA_TYPE_UNKNOWN:
+-		return false;
++		/* we can't know the device, so just assume that the driver does
++		 * everything right
++		 */
++		return true;
+ 	case SNDRV_DMA_TYPE_CONTINUOUS:
+ 	case SNDRV_DMA_TYPE_VMALLOC:
+ 		return true;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index c92d9b9cf9441..6d8c4dedfe0fe 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8371,6 +8371,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
++	SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+@@ -8405,6 +8406,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS),
+ 	SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
++	SND_PCI_QUIRK(0x1043, 0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-18 12:45 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-18 12:45 UTC (permalink / raw
  To: gentoo-commits

commit:     fc20684f72af5a3d140212a120a3e6282e27bcfc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 18 12:44:54 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 18 12:44:54 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fc20684f

Linux patch 5.13.12

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1011_linux-5.13.12.patch | 5823 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5827 insertions(+)

diff --git a/0000_README b/0000_README
index 75d23bb..2eae665 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-5.13.11.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.11
 
+Patch:  1011_linux-5.13.12.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-5.13.12.patch b/1011_linux-5.13.12.patch
new file mode 100644
index 0000000..f5a37d9
--- /dev/null
+++ b/1011_linux-5.13.12.patch
@@ -0,0 +1,5823 @@
+diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
+index 1fc860c007a3c..69b090fdd2e72 100644
+--- a/Documentation/virt/kvm/locking.rst
++++ b/Documentation/virt/kvm/locking.rst
+@@ -20,10 +20,10 @@ On x86:
+ 
+ - vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock
+ 
+-- kvm->arch.mmu_lock is an rwlock.  kvm->arch.tdp_mmu_pages_lock is
+-  taken inside kvm->arch.mmu_lock, and cannot be taken without already
+-  holding kvm->arch.mmu_lock (typically with ``read_lock``, otherwise
+-  there's no need to take kvm->arch.tdp_mmu_pages_lock at all).
++- kvm->arch.mmu_lock is an rwlock.  kvm->arch.tdp_mmu_pages_lock and
++  kvm->arch.mmu_unsync_pages_lock are taken inside kvm->arch.mmu_lock, and
++  cannot be taken without already holding kvm->arch.mmu_lock (typically with
++  ``read_lock`` for the TDP MMU, thus the need for additional spinlocks).
+ 
+ Everything else is a leaf: no other lock is taken inside the critical
+ sections.
+diff --git a/Makefile b/Makefile
+index eaf1df54ad123..2458a4abcbccd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arc/kernel/fpu.c b/arch/arc/kernel/fpu.c
+index c67c0f0f5f778..ec640219d989f 100644
+--- a/arch/arc/kernel/fpu.c
++++ b/arch/arc/kernel/fpu.c
+@@ -57,23 +57,26 @@ void fpu_save_restore(struct task_struct *prev, struct task_struct *next)
+ 
+ void fpu_init_task(struct pt_regs *regs)
+ {
++	const unsigned int fwe = 0x80000000;
++
+ 	/* default rounding mode */
+ 	write_aux_reg(ARC_REG_FPU_CTRL, 0x100);
+ 
+-	/* set "Write enable" to allow explicit write to exception flags */
+-	write_aux_reg(ARC_REG_FPU_STATUS, 0x80000000);
++	/* Initialize to zero: setting requires FWE be set */
++	write_aux_reg(ARC_REG_FPU_STATUS, fwe);
+ }
+ 
+ void fpu_save_restore(struct task_struct *prev, struct task_struct *next)
+ {
+ 	struct arc_fpu *save = &prev->thread.fpu;
+ 	struct arc_fpu *restore = &next->thread.fpu;
++	const unsigned int fwe = 0x80000000;
+ 
+ 	save->ctrl = read_aux_reg(ARC_REG_FPU_CTRL);
+ 	save->status = read_aux_reg(ARC_REG_FPU_STATUS);
+ 
+ 	write_aux_reg(ARC_REG_FPU_CTRL, restore->ctrl);
+-	write_aux_reg(ARC_REG_FPU_STATUS, restore->status);
++	write_aux_reg(ARC_REG_FPU_STATUS, (fwe | restore->status));
+ }
+ 
+ #endif
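
The ARC fix above names the FWE (write-enable) bit and ORs it into every FPU_STATUS write, because writes with FWE clear are ignored by the hardware. A standalone sketch of the same masking discipline, with the register modeled as a plain variable:

#include <stdint.h>
#include <stdio.h>

#define FPU_STATUS_FWE	0x80000000u	/* write-enable bit from the patch */

static uint32_t fpu_status;	/* stand-in for the ARC aux register */

/* Every restore must carry FWE, otherwise the write is silently
 * dropped and the saved exception flags are lost. */
static void restore_status(uint32_t saved)
{
	fpu_status = FPU_STATUS_FWE | saved;
}

int main(void)
{
	restore_status(0x1fu);
	printf("FPU_STATUS = 0x%08x\n", fpu_status);
	return 0;
}
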
+diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+index 4b60c0056c041..fa1b77fe629dc 100644
+--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
++++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+@@ -190,7 +190,7 @@ static bool range_is_memory(u64 start, u64 end)
+ {
+ 	struct kvm_mem_range r1, r2;
+ 
+-	if (!find_mem_range(start, &r1) || !find_mem_range(end, &r2))
++	if (!find_mem_range(start, &r1) || !find_mem_range(end - 1, &r2))
+ 		return false;
+ 	if (r1.start != r2.start)
+ 		return false;
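
The mem_protect.c change queries end - 1 because the range end is exclusive: probing end itself can resolve to the next region (or none) and wrongly fail the same-region test. A sketch of the corrected containment check, with simplified types:

#include <stdbool.h>
#include <stdint.h>

struct mem_range {
	uint64_t start;	/* inclusive */
	uint64_t end;	/* exclusive */
};

static bool in_range(const struct mem_range *r, uint64_t addr)
{
	return addr >= r->start && addr < r->end;
}

/* The last byte of [start, end) is end - 1; that is the address
 * that must fall in the same region as start. */
static bool range_is_one_region(const struct mem_range *r,
				uint64_t start, uint64_t end)
{
	return in_range(r, start) && in_range(r, end - 1);
}

int main(void)
{
	struct mem_range r = { 0x1000, 0x2000 };
	return range_is_one_region(&r, 0x1000, 0x2000) ? 0 : 1;
}
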
+diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
+index a26aad41ef3e7..bea3e2528aab1 100644
+--- a/arch/powerpc/include/asm/interrupt.h
++++ b/arch/powerpc/include/asm/interrupt.h
+@@ -531,6 +531,9 @@ DECLARE_INTERRUPT_HANDLER_NMI(hmi_exception_realmode);
+ 
+ DECLARE_INTERRUPT_HANDLER_ASYNC(TAUException);
+ 
++/* irq.c */
++DECLARE_INTERRUPT_HANDLER_ASYNC(do_IRQ);
++
+ void __noreturn unrecoverable_exception(struct pt_regs *regs);
+ 
+ void replay_system_reset(void);
+diff --git a/arch/powerpc/include/asm/irq.h b/arch/powerpc/include/asm/irq.h
+index b2bd588304300..b95ddc72bc0fa 100644
+--- a/arch/powerpc/include/asm/irq.h
++++ b/arch/powerpc/include/asm/irq.h
+@@ -53,7 +53,7 @@ extern void *mcheckirq_ctx[NR_CPUS];
+ extern void *hardirq_ctx[NR_CPUS];
+ extern void *softirq_ctx[NR_CPUS];
+ 
+-extern void do_IRQ(struct pt_regs *regs);
++void __do_IRQ(struct pt_regs *regs);
+ extern void __init init_IRQ(void);
+ extern void __do_irq(struct pt_regs *regs);
+ 
+diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
+index b476a685f066e..379f18169764d 100644
+--- a/arch/powerpc/include/asm/ptrace.h
++++ b/arch/powerpc/include/asm/ptrace.h
+@@ -68,6 +68,22 @@ struct pt_regs
+ 		};
+ 		unsigned long __pad[4];	/* Maintain 16 byte interrupt stack alignment */
+ 	};
++#if defined(CONFIG_PPC32) && defined(CONFIG_BOOKE)
++	struct { /* Must be a multiple of 16 bytes */
++		unsigned long mas0;
++		unsigned long mas1;
++		unsigned long mas2;
++		unsigned long mas3;
++		unsigned long mas6;
++		unsigned long mas7;
++		unsigned long srr0;
++		unsigned long srr1;
++		unsigned long csrr0;
++		unsigned long csrr1;
++		unsigned long dsrr0;
++		unsigned long dsrr1;
++	};
++#endif
+ };
+ #endif
+ 
+diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
+index 28af4efb45870..f633f09dc9129 100644
+--- a/arch/powerpc/kernel/asm-offsets.c
++++ b/arch/powerpc/kernel/asm-offsets.c
+@@ -348,24 +348,21 @@ int main(void)
+ #endif
+ 
+ 
+-#if defined(CONFIG_PPC32)
+-#if defined(CONFIG_BOOKE) || defined(CONFIG_40x)
+-	DEFINE(EXC_LVL_SIZE, STACK_EXC_LVL_FRAME_SIZE);
+-	DEFINE(MAS0, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, mas0));
++#if defined(CONFIG_PPC32) && defined(CONFIG_BOOKE)
++	STACK_PT_REGS_OFFSET(MAS0, mas0);
+ 	/* we overload MMUCR for 44x on MAS0 since they are mutually exclusive */
+-	DEFINE(MMUCR, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, mas0));
+-	DEFINE(MAS1, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, mas1));
+-	DEFINE(MAS2, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, mas2));
+-	DEFINE(MAS3, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, mas3));
+-	DEFINE(MAS6, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, mas6));
+-	DEFINE(MAS7, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, mas7));
+-	DEFINE(_SRR0, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, srr0));
+-	DEFINE(_SRR1, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, srr1));
+-	DEFINE(_CSRR0, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, csrr0));
+-	DEFINE(_CSRR1, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, csrr1));
+-	DEFINE(_DSRR0, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, dsrr0));
+-	DEFINE(_DSRR1, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, dsrr1));
+-#endif
++	STACK_PT_REGS_OFFSET(MMUCR, mas0);
++	STACK_PT_REGS_OFFSET(MAS1, mas1);
++	STACK_PT_REGS_OFFSET(MAS2, mas2);
++	STACK_PT_REGS_OFFSET(MAS3, mas3);
++	STACK_PT_REGS_OFFSET(MAS6, mas6);
++	STACK_PT_REGS_OFFSET(MAS7, mas7);
++	STACK_PT_REGS_OFFSET(_SRR0, srr0);
++	STACK_PT_REGS_OFFSET(_SRR1, srr1);
++	STACK_PT_REGS_OFFSET(_CSRR0, csrr0);
++	STACK_PT_REGS_OFFSET(_CSRR1, csrr1);
++	STACK_PT_REGS_OFFSET(_DSRR0, dsrr0);
++	STACK_PT_REGS_OFFSET(_DSRR1, dsrr1);
+ #endif
+ 
+ #ifndef CONFIG_PPC64
+diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
+index 065178f19a3d6..8dc65abadb1e0 100644
+--- a/arch/powerpc/kernel/head_book3s_32.S
++++ b/arch/powerpc/kernel/head_book3s_32.S
+@@ -300,7 +300,7 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
+ 	EXCEPTION_PROLOG_1
+ 	EXCEPTION_PROLOG_2 INTERRUPT_DATA_STORAGE DataAccess handle_dar_dsisr=1
+ 	prepare_transfer_to_handler
+-	lwz	r5, _DSISR(r11)
++	lwz	r5, _DSISR(r1)
+ 	andis.	r0, r5, DSISR_DABRMATCH@h
+ 	bne-	1f
+ 	bl	do_page_fault
+diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
+index f824700916970..2355e7d4787fe 100644
+--- a/arch/powerpc/kernel/head_booke.h
++++ b/arch/powerpc/kernel/head_booke.h
+@@ -185,20 +185,18 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
+ /* only on e500mc */
+ #define DBG_STACK_BASE		dbgirq_ctx
+ 
+-#define EXC_LVL_FRAME_OVERHEAD	(THREAD_SIZE - INT_FRAME_SIZE - EXC_LVL_SIZE)
+-
+ #ifdef CONFIG_SMP
+ #define BOOKE_LOAD_EXC_LEVEL_STACK(level)		\
+ 	mfspr	r8,SPRN_PIR;				\
+ 	slwi	r8,r8,2;				\
+ 	addis	r8,r8,level##_STACK_BASE@ha;		\
+ 	lwz	r8,level##_STACK_BASE@l(r8);		\
+-	addi	r8,r8,EXC_LVL_FRAME_OVERHEAD;
++	addi	r8,r8,THREAD_SIZE - INT_FRAME_SIZE;
+ #else
+ #define BOOKE_LOAD_EXC_LEVEL_STACK(level)		\
+ 	lis	r8,level##_STACK_BASE@ha;		\
+ 	lwz	r8,level##_STACK_BASE@l(r8);		\
+-	addi	r8,r8,EXC_LVL_FRAME_OVERHEAD;
++	addi	r8,r8,THREAD_SIZE - INT_FRAME_SIZE;
+ #endif
+ 
+ /*
+@@ -225,7 +223,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_EMB_HV)
+ 	mtmsr	r11;							\
+ 	mfspr	r11,SPRN_SPRG_THREAD;	/* if from user, start at top of   */\
+ 	lwz	r11, TASK_STACK - THREAD(r11); /* this thread's kernel stack */\
+-	addi	r11,r11,EXC_LVL_FRAME_OVERHEAD;	/* allocate stack frame    */\
++	addi	r11,r11,THREAD_SIZE - INT_FRAME_SIZE;	/* allocate stack frame    */\
+ 	beq	1f;							     \
+ 	/* COMING FROM USER MODE */					     \
+ 	stw	r9,_CCR(r11);		/* save CR			   */\
+@@ -533,24 +531,5 @@ label:
+ 	bl	kernel_fp_unavailable_exception;			      \
+ 	b	interrupt_return
+ 
+-#else /* __ASSEMBLY__ */
+-struct exception_regs {
+-	unsigned long mas0;
+-	unsigned long mas1;
+-	unsigned long mas2;
+-	unsigned long mas3;
+-	unsigned long mas6;
+-	unsigned long mas7;
+-	unsigned long srr0;
+-	unsigned long srr1;
+-	unsigned long csrr0;
+-	unsigned long csrr1;
+-	unsigned long dsrr0;
+-	unsigned long dsrr1;
+-};
+-
+-/* ensure this structure is always sized to a multiple of the stack alignment */
+-#define STACK_EXC_LVL_FRAME_SIZE	ALIGN(sizeof (struct exception_regs), 16)
+-
+ #endif /* __ASSEMBLY__ */
+ #endif /* __HEAD_BOOKE_H__ */
+diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
+index 72cb45393ef29..ca58ad3b06da2 100644
+--- a/arch/powerpc/kernel/irq.c
++++ b/arch/powerpc/kernel/irq.c
+@@ -654,7 +654,7 @@ void __do_irq(struct pt_regs *regs)
+ 	trace_irq_exit(regs);
+ }
+ 
+-DEFINE_INTERRUPT_HANDLER_ASYNC(do_IRQ)
++void __do_IRQ(struct pt_regs *regs)
+ {
+ 	struct pt_regs *old_regs = set_irq_regs(regs);
+ 	void *cursp, *irqsp, *sirqsp;
+@@ -678,6 +678,11 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(do_IRQ)
+ 	set_irq_regs(old_regs);
+ }
+ 
++DEFINE_INTERRUPT_HANDLER_ASYNC(do_IRQ)
++{
++	__do_IRQ(regs);
++}
++
+ static void *__init alloc_vm_stack(void)
+ {
+ 	return __vmalloc_node(THREAD_SIZE, THREAD_ALIGN, THREADINFO_GFP,
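
The irq.c hunk splits the handler into a plain __do_IRQ() body plus a thin DEFINE_INTERRUPT_HANDLER_ASYNC wrapper, so callers such as timer_interrupt() can reuse the body without re-running the interrupt entry/exit accounting. The shape of that split in standalone form (names illustrative):

#include <stdio.h>

struct regs { int trap; };

/* The reusable body: safe to call from another handler that has
 * already done the interrupt-entry accounting. */
static void do_irq_body(struct regs *regs)
{
	printf("handling irq, trap=%d\n", regs->trap);
}

/* The registered entry point: bookkeeping around the body, which
 * is what the DEFINE_INTERRUPT_HANDLER_ASYNC wrapper provides. */
static void do_irq_entry(struct regs *regs)
{
	/* interrupt-entry accounting would run here */
	do_irq_body(regs);
	/* interrupt-exit accounting would run here */
}

int main(void)
{
	struct regs r = { 5 };
	do_irq_entry(&r);
	return 0;
}
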
+diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
+index e8c2a6373157d..00fafc8b249eb 100644
+--- a/arch/powerpc/kernel/kprobes.c
++++ b/arch/powerpc/kernel/kprobes.c
+@@ -276,7 +276,8 @@ int kprobe_handler(struct pt_regs *regs)
+ 	if (user_mode(regs))
+ 		return 0;
+ 
+-	if (!(regs->msr & MSR_IR) || !(regs->msr & MSR_DR))
++	if (!IS_ENABLED(CONFIG_BOOKE) &&
++	    (!(regs->msr & MSR_IR) || !(regs->msr & MSR_DR)))
+ 		return 0;
+ 
+ 	/*
+diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
+index 2e08640bb3b4b..d36e71ba002c1 100644
+--- a/arch/powerpc/kernel/sysfs.c
++++ b/arch/powerpc/kernel/sysfs.c
+@@ -1167,7 +1167,7 @@ static int __init topology_init(void)
+ 		 * CPU.  For instance, the boot cpu might never be valid
+ 		 * for hotplugging.
+ 		 */
+-		if (smp_ops->cpu_offline_self)
++		if (smp_ops && smp_ops->cpu_offline_self)
+ 			c->hotpluggable = 1;
+ #endif
+ 
+diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
+index b67d93a609a2d..aea62a438cc31 100644
+--- a/arch/powerpc/kernel/time.c
++++ b/arch/powerpc/kernel/time.c
+@@ -607,7 +607,7 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt)
+ 
+ #if defined(CONFIG_PPC32) && defined(CONFIG_PPC_PMAC)
+ 	if (atomic_read(&ppc_n_lost_interrupts) != 0)
+-		do_IRQ(regs);
++		__do_IRQ(regs);
+ #endif
+ 
+ 	old_regs = set_irq_regs(regs);
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index b4ab95c9e94a8..04090fde27c81 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -1103,7 +1103,7 @@ DEFINE_INTERRUPT_HANDLER(RunModeException)
+ 	_exception(SIGTRAP, regs, TRAP_UNK, 0);
+ }
+ 
+-DEFINE_INTERRUPT_HANDLER(single_step_exception)
++static void __single_step_exception(struct pt_regs *regs)
+ {
+ 	clear_single_step(regs);
+ 	clear_br_trace(regs);
+@@ -1120,6 +1120,11 @@ DEFINE_INTERRUPT_HANDLER(single_step_exception)
+ 	_exception(SIGTRAP, regs, TRAP_TRACE, regs->nip);
+ }
+ 
++DEFINE_INTERRUPT_HANDLER(single_step_exception)
++{
++	__single_step_exception(regs);
++}
++
+ /*
+  * After we have successfully emulated an instruction, we have to
+  * check if the instruction was being single-stepped, and if so,
+@@ -1129,7 +1134,7 @@ DEFINE_INTERRUPT_HANDLER(single_step_exception)
+ static void emulate_single_step(struct pt_regs *regs)
+ {
+ 	if (single_stepping(regs))
+-		single_step_exception(regs);
++		__single_step_exception(regs);
+ }
+ 
+ static inline int __parse_fpscr(unsigned long fpscr)
+diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
+index 0338f481c12bb..7b48af047cec9 100644
+--- a/arch/powerpc/platforms/pseries/setup.c
++++ b/arch/powerpc/platforms/pseries/setup.c
+@@ -539,9 +539,10 @@ static void init_cpu_char_feature_flags(struct h_cpu_char_result *result)
+ 	 * H_CPU_BEHAV_FAVOUR_SECURITY_H could be set only if
+ 	 * H_CPU_BEHAV_FAVOUR_SECURITY is.
+ 	 */
+-	if (!(result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
++	if (!(result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY)) {
+ 		security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
+-	else if (result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY_H)
++		pseries_security_flavor = 0;
++	} else if (result->behaviour & H_CPU_BEHAV_FAVOUR_SECURITY_H)
+ 		pseries_security_flavor = 1;
+ 	else
+ 		pseries_security_flavor = 2;
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index dbdbbc2f1dc51..943fd30095af4 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -67,6 +67,7 @@ static struct irq_domain *xive_irq_domain;
+ static struct xive_ipi_desc {
+ 	unsigned int irq;
+ 	char name[16];
++	atomic_t started;
+ } *xive_ipis;
+ 
+ /*
+@@ -1120,7 +1121,7 @@ static const struct irq_domain_ops xive_ipi_irq_domain_ops = {
+ 	.alloc  = xive_ipi_irq_domain_alloc,
+ };
+ 
+-static int __init xive_request_ipi(void)
++static int __init xive_init_ipis(void)
+ {
+ 	struct fwnode_handle *fwnode;
+ 	struct irq_domain *ipi_domain;
+@@ -1144,10 +1145,6 @@ static int __init xive_request_ipi(void)
+ 		struct xive_ipi_desc *xid = &xive_ipis[node];
+ 		struct xive_ipi_alloc_info info = { node };
+ 
+-		/* Skip nodes without CPUs */
+-		if (cpumask_empty(cpumask_of_node(node)))
+-			continue;
+-
+ 		/*
+ 		 * Map one IPI interrupt per node for all cpus of that node.
+ 		 * Since the HW interrupt number doesn't have any meaning,
+@@ -1159,11 +1156,6 @@ static int __init xive_request_ipi(void)
+ 		xid->irq = ret;
+ 
+ 		snprintf(xid->name, sizeof(xid->name), "IPI-%d", node);
+-
+-		ret = request_irq(xid->irq, xive_muxed_ipi_action,
+-				  IRQF_PERCPU | IRQF_NO_THREAD, xid->name, NULL);
+-
+-		WARN(ret < 0, "Failed to request IPI %d: %d\n", xid->irq, ret);
+ 	}
+ 
+ 	return ret;
+@@ -1178,6 +1170,22 @@ out:
+ 	return ret;
+ }
+ 
++static int __init xive_request_ipi(unsigned int cpu)
++{
++	struct xive_ipi_desc *xid = &xive_ipis[early_cpu_to_node(cpu)];
++	int ret;
++
++	if (atomic_inc_return(&xid->started) > 1)
++		return 0;
++
++	ret = request_irq(xid->irq, xive_muxed_ipi_action,
++			  IRQF_PERCPU | IRQF_NO_THREAD,
++			  xid->name, NULL);
++
++	WARN(ret < 0, "Failed to request IPI %d: %d\n", xid->irq, ret);
++	return ret;
++}
++
+ static int xive_setup_cpu_ipi(unsigned int cpu)
+ {
+ 	unsigned int xive_ipi_irq = xive_ipi_cpu_to_irq(cpu);
+@@ -1192,6 +1200,9 @@ static int xive_setup_cpu_ipi(unsigned int cpu)
+ 	if (xc->hw_ipi != XIVE_BAD_IRQ)
+ 		return 0;
+ 
++	/* Register the IPI */
++	xive_request_ipi(cpu);
++
+ 	/* Grab an IPI from the backend, this will populate xc->hw_ipi */
+ 	if (xive_ops->get_ipi(cpu, xc))
+ 		return -EIO;
+@@ -1231,6 +1242,8 @@ static void xive_cleanup_cpu_ipi(unsigned int cpu, struct xive_cpu *xc)
+ 	if (xc->hw_ipi == XIVE_BAD_IRQ)
+ 		return;
+ 
++	/* TODO: clear IPI mapping */
++
+ 	/* Mask the IPI */
+ 	xive_do_source_set_mask(&xc->ipi_data, true);
+ 
+@@ -1253,7 +1266,7 @@ void __init xive_smp_probe(void)
+ 	smp_ops->cause_ipi = xive_cause_ipi;
+ 
+ 	/* Register the IPI */
+-	xive_request_ipi();
++	xive_init_ipis();
+ 
+ 	/* Allocate and setup IPI for the boot CPU */
+ 	xive_setup_cpu_ipi(smp_processor_id());
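
In the reworked xive code, xive_request_ipi() gates the registration on atomic_inc_return() of a per-node flag, so only the first CPU of each node requests the shared IPI handler. The same first-caller-wins pattern with C11 atomics:

#include <stdatomic.h>
#include <stdio.h>

static atomic_int started;	/* one flag per node in the real driver */

/* atomic_fetch_add returns the old value, so exactly one caller
 * sees 0 and performs the one-time registration. */
static int request_ipi_once(void)
{
	if (atomic_fetch_add(&started, 1) > 0)
		return 0;	/* another CPU already did it */
	puts("registering muxed IPI handler");
	return 0;
}

int main(void)
{
	request_ipi_once();
	request_ipi_once();	/* no-op on the second call */
	return 0;
}
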
+diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
+index d3081e4d96006..3397ddac1a30c 100644
+--- a/arch/riscv/kernel/Makefile
++++ b/arch/riscv/kernel/Makefile
+@@ -11,7 +11,7 @@ endif
+ CFLAGS_syscall_table.o	+= $(call cc-option,-Wno-override-init,)
+ 
+ ifdef CONFIG_KEXEC
+-AFLAGS_kexec_relocate.o := -mcmodel=medany -mno-relax
++AFLAGS_kexec_relocate.o := -mcmodel=medany $(call cc-option,-mno-relax)
+ endif
+ 
+ extra-y += head.o
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index d76be3bba11e4..511d1f9a9bf80 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -2904,24 +2904,28 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
+  */
+ static int intel_pmu_handle_irq(struct pt_regs *regs)
+ {
+-	struct cpu_hw_events *cpuc;
++	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++	bool late_ack = hybrid_bit(cpuc->pmu, late_ack);
++	bool mid_ack = hybrid_bit(cpuc->pmu, mid_ack);
+ 	int loops;
+ 	u64 status;
+ 	int handled;
+ 	int pmu_enabled;
+ 
+-	cpuc = this_cpu_ptr(&cpu_hw_events);
+-
+ 	/*
+ 	 * Save the PMU state.
+ 	 * It needs to be restored when leaving the handler.
+ 	 */
+ 	pmu_enabled = cpuc->enabled;
+ 	/*
+-	 * No known reason to not always do late ACK,
+-	 * but just in case do it opt-in.
++	 * In general, the early ACK is only applied for old platforms.
++	 * For big cores starting with Haswell, the late ACK should be
++	 * applied.
++	 * For small cores after Tremont, we have to do the ACK right
++	 * before re-enabling counters, which is in the middle of the
++	 * NMI handler.
+ 	 */
+-	if (!x86_pmu.late_ack)
++	if (!late_ack && !mid_ack)
+ 		apic_write(APIC_LVTPC, APIC_DM_NMI);
+ 	intel_bts_disable_local();
+ 	cpuc->enabled = 0;
+@@ -2958,6 +2962,8 @@ again:
+ 		goto again;
+ 
+ done:
++	if (mid_ack)
++		apic_write(APIC_LVTPC, APIC_DM_NMI);
+ 	/* Only restore PMU state when it's active. See x86_pmu_disable(). */
+ 	cpuc->enabled = pmu_enabled;
+ 	if (pmu_enabled)
+@@ -2969,7 +2975,7 @@ done:
+ 	 * have been reset. This avoids spurious NMIs on
+ 	 * Haswell CPUs.
+ 	 */
+-	if (x86_pmu.late_ack)
++	if (late_ack)
+ 		apic_write(APIC_LVTPC, APIC_DM_NMI);
+ 	return handled;
+ }
+@@ -6123,7 +6129,6 @@ __init int intel_pmu_init(void)
+ 		static_branch_enable(&perf_is_hybrid);
+ 		x86_pmu.num_hybrid_pmus = X86_HYBRID_NUM_PMUS;
+ 
+-		x86_pmu.late_ack = true;
+ 		x86_pmu.pebs_aliases = NULL;
+ 		x86_pmu.pebs_prec_dist = true;
+ 		x86_pmu.pebs_block = true;
+@@ -6161,6 +6166,7 @@ __init int intel_pmu_init(void)
+ 		pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
+ 		pmu->name = "cpu_core";
+ 		pmu->cpu_type = hybrid_big;
++		pmu->late_ack = true;
+ 		if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) {
+ 			pmu->num_counters = x86_pmu.num_counters + 2;
+ 			pmu->num_counters_fixed = x86_pmu.num_counters_fixed + 1;
+@@ -6186,6 +6192,7 @@ __init int intel_pmu_init(void)
+ 		pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX];
+ 		pmu->name = "cpu_atom";
+ 		pmu->cpu_type = hybrid_small;
++		pmu->mid_ack = true;
+ 		pmu->num_counters = x86_pmu.num_counters;
+ 		pmu->num_counters_fixed = x86_pmu.num_counters_fixed;
+ 		pmu->max_pebs_events = x86_pmu.max_pebs_events;
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 2938c902ffbe4..e3ac05c97b5e5 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -656,6 +656,10 @@ struct x86_hybrid_pmu {
+ 	struct event_constraint		*event_constraints;
+ 	struct event_constraint		*pebs_constraints;
+ 	struct extra_reg		*extra_regs;
++
++	unsigned int			late_ack	:1,
++					mid_ack		:1,
++					enabled_ack	:1;
+ };
+ 
+ static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
+@@ -686,6 +690,16 @@ extern struct static_key_false perf_is_hybrid;
+ 	__Fp;						\
+ }))
+ 
++#define hybrid_bit(_pmu, _field)			\
++({							\
++	bool __Fp = x86_pmu._field;			\
++							\
++	if (is_hybrid() && (_pmu))			\
++		__Fp = hybrid_pmu(_pmu)->_field;	\
++							\
++	__Fp;						\
++})
++
+ enum hybrid_pmu_type {
+ 	hybrid_big		= 0x40,
+ 	hybrid_small		= 0x20,
+@@ -755,6 +769,7 @@ struct x86_pmu {
+ 
+ 	/* PMI handler bits */
+ 	unsigned int	late_ack		:1,
++			mid_ack			:1,
+ 			enabled_ack		:1;
+ 	/*
+ 	 * sysfs attrs
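
The new hybrid_bit() macro reads a capability bit from the per-PMU structure on hybrid parts and falls back to the global x86_pmu otherwise. A plain-C rendering of that lookup-with-fallback (struct and names are illustrative):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct pmu_caps {
	bool late_ack;
	bool mid_ack;
};

static struct pmu_caps global_caps = { .late_ack = true };
static bool is_hybrid;	/* stand-in for the perf_is_hybrid static key */

/* Per-PMU override, global fallback -- the logic hybrid_bit()
 * expands to for each bitfield. */
static bool pmu_late_ack(const struct pmu_caps *pmu)
{
	if (is_hybrid && pmu)
		return pmu->late_ack;
	return global_caps.late_ack;
}

int main(void)
{
	struct pmu_caps atom = { .mid_ack = true };

	is_hybrid = true;
	printf("late_ack=%d\n", pmu_late_ack(&atom));
	return 0;
}
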
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index fbd55c682d5e7..5e5f06acba55f 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -987,6 +987,13 @@ struct kvm_arch {
+ 	struct list_head lpage_disallowed_mmu_pages;
+ 	struct kvm_page_track_notifier_node mmu_sp_tracker;
+ 	struct kvm_page_track_notifier_head track_notifier_head;
++	/*
++	 * Protects marking pages unsync during page faults, as TDP MMU page
++	 * faults only take mmu_lock for read.  For simplicity, the unsync
++	 * pages lock is always taken when marking pages unsync regardless of
++	 * whether mmu_lock is held for read or write.
++	 */
++	spinlock_t mmu_unsync_pages_lock;
+ 
+ 	struct list_head assigned_dev_head;
+ 	struct iommu_domain *iommu_domain;
+diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
+index 772e60efe243a..b6c445ced0e5e 100644
+--- a/arch/x86/include/asm/svm.h
++++ b/arch/x86/include/asm/svm.h
+@@ -178,6 +178,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
+ #define V_IGN_TPR_SHIFT 20
+ #define V_IGN_TPR_MASK (1 << V_IGN_TPR_SHIFT)
+ 
++#define V_IRQ_INJECTION_BITS_MASK (V_IRQ_MASK | V_INTR_PRIO_MASK | V_IGN_TPR_MASK)
++
+ #define V_INTR_MASKING_SHIFT 24
+ #define V_INTR_MASKING_MASK (1 << V_INTR_MASKING_SHIFT)
+ 
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index d5c691a3208b6..39224e035e475 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -1986,7 +1986,8 @@ static struct irq_chip ioapic_chip __read_mostly = {
+ 	.irq_set_affinity	= ioapic_set_affinity,
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+ 	.irq_get_irqchip_state	= ioapic_irq_get_chip_state,
+-	.flags			= IRQCHIP_SKIP_SET_WAKE,
++	.flags			= IRQCHIP_SKIP_SET_WAKE |
++				  IRQCHIP_AFFINITY_PRE_STARTUP,
+ };
+ 
+ static struct irq_chip ioapic_ir_chip __read_mostly = {
+@@ -1999,7 +2000,8 @@ static struct irq_chip ioapic_ir_chip __read_mostly = {
+ 	.irq_set_affinity	= ioapic_set_affinity,
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+ 	.irq_get_irqchip_state	= ioapic_irq_get_chip_state,
+-	.flags			= IRQCHIP_SKIP_SET_WAKE,
++	.flags			= IRQCHIP_SKIP_SET_WAKE |
++				  IRQCHIP_AFFINITY_PRE_STARTUP,
+ };
+ 
+ static inline void init_IO_APIC_traps(void)
+diff --git a/arch/x86/kernel/apic/msi.c b/arch/x86/kernel/apic/msi.c
+index 44ebe25e77036..dbacb9ec8843a 100644
+--- a/arch/x86/kernel/apic/msi.c
++++ b/arch/x86/kernel/apic/msi.c
+@@ -58,11 +58,13 @@ msi_set_affinity(struct irq_data *irqd, const struct cpumask *mask, bool force)
+ 	 *   The quirk bit is not set in this case.
+ 	 * - The new vector is the same as the old vector
+ 	 * - The old vector is MANAGED_IRQ_SHUTDOWN_VECTOR (interrupt starts up)
++	 * - The interrupt is not yet started up
+ 	 * - The new destination CPU is the same as the old destination CPU
+ 	 */
+ 	if (!irqd_msi_nomask_quirk(irqd) ||
+ 	    cfg->vector == old_cfg.vector ||
+ 	    old_cfg.vector == MANAGED_IRQ_SHUTDOWN_VECTOR ||
++	    !irqd_is_started(irqd) ||
+ 	    cfg->dest_apicid == old_cfg.dest_apicid) {
+ 		irq_msi_update_msg(irqd, cfg);
+ 		return ret;
+@@ -150,7 +152,8 @@ static struct irq_chip pci_msi_controller = {
+ 	.irq_ack		= irq_chip_ack_parent,
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+ 	.irq_set_affinity	= msi_set_affinity,
+-	.flags			= IRQCHIP_SKIP_SET_WAKE,
++	.flags			= IRQCHIP_SKIP_SET_WAKE |
++				  IRQCHIP_AFFINITY_PRE_STARTUP,
+ };
+ 
+ int pci_msi_prepare(struct irq_domain *domain, struct device *dev, int nvec,
+@@ -219,7 +222,8 @@ static struct irq_chip pci_msi_ir_controller = {
+ 	.irq_mask		= pci_msi_mask_irq,
+ 	.irq_ack		= irq_chip_ack_parent,
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+-	.flags			= IRQCHIP_SKIP_SET_WAKE,
++	.flags			= IRQCHIP_SKIP_SET_WAKE |
++				  IRQCHIP_AFFINITY_PRE_STARTUP,
+ };
+ 
+ static struct msi_domain_info pci_msi_ir_domain_info = {
+@@ -273,7 +277,8 @@ static struct irq_chip dmar_msi_controller = {
+ 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+ 	.irq_compose_msi_msg	= dmar_msi_compose_msg,
+ 	.irq_write_msi_msg	= dmar_msi_write_msg,
+-	.flags			= IRQCHIP_SKIP_SET_WAKE,
++	.flags			= IRQCHIP_SKIP_SET_WAKE |
++				  IRQCHIP_AFFINITY_PRE_STARTUP,
+ };
+ 
+ static int dmar_msi_init(struct irq_domain *domain,
+diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
+index f07c10b87a873..57e4bb695ff96 100644
+--- a/arch/x86/kernel/cpu/resctrl/monitor.c
++++ b/arch/x86/kernel/cpu/resctrl/monitor.c
+@@ -285,15 +285,14 @@ static u64 mbm_overflow_count(u64 prev_msr, u64 cur_msr, unsigned int width)
+ 	return chunks >>= shift;
+ }
+ 
+-static int __mon_event_count(u32 rmid, struct rmid_read *rr)
++static u64 __mon_event_count(u32 rmid, struct rmid_read *rr)
+ {
+ 	struct mbm_state *m;
+ 	u64 chunks, tval;
+ 
+ 	tval = __rmid_read(rmid, rr->evtid);
+ 	if (tval & (RMID_VAL_ERROR | RMID_VAL_UNAVAIL)) {
+-		rr->val = tval;
+-		return -EINVAL;
++		return tval;
+ 	}
+ 	switch (rr->evtid) {
+ 	case QOS_L3_OCCUP_EVENT_ID:
+@@ -305,12 +304,6 @@ static int __mon_event_count(u32 rmid, struct rmid_read *rr)
+ 	case QOS_L3_MBM_LOCAL_EVENT_ID:
+ 		m = &rr->d->mbm_local[rmid];
+ 		break;
+-	default:
+-		/*
+-		 * Code would never reach here because
+-		 * an invalid event id would fail the __rmid_read.
+-		 */
+-		return -EINVAL;
+ 	}
+ 
+ 	if (rr->first) {
+@@ -361,23 +354,29 @@ void mon_event_count(void *info)
+ 	struct rdtgroup *rdtgrp, *entry;
+ 	struct rmid_read *rr = info;
+ 	struct list_head *head;
++	u64 ret_val;
+ 
+ 	rdtgrp = rr->rgrp;
+ 
+-	if (__mon_event_count(rdtgrp->mon.rmid, rr))
+-		return;
++	ret_val = __mon_event_count(rdtgrp->mon.rmid, rr);
+ 
+ 	/*
+-	 * For Ctrl groups read data from child monitor groups.
++	 * For Ctrl groups, read data from child monitor groups and
++	 * add them together. Count the events that are read successfully
++	 * and discard the rmid_reads that report errors.
+ 	 */
+ 	head = &rdtgrp->mon.crdtgrp_list;
+ 
+ 	if (rdtgrp->type == RDTCTRL_GROUP) {
+ 		list_for_each_entry(entry, head, mon.crdtgrp_list) {
+-			if (__mon_event_count(entry->mon.rmid, rr))
+-				return;
++			if (__mon_event_count(entry->mon.rmid, rr) == 0)
++				ret_val = 0;
+ 		}
+ 	}
++
++	/* Report an error if none of the rmid_reads succeeded */
++	if (ret_val)
++		rr->val = ret_val;
+ }
+ 
+ /*
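
After this change, mon_event_count() treats the whole read as successful if any child rmid_read succeeds, and only reports an error when every read failed. A compact model of that aggregation (error encoding and stub reader are illustrative):

#include <stdint.h>
#include <stdio.h>

#define RMID_VAL_ERROR	(1ull << 63)	/* illustrative error flag */

/* Stub counter read: odd rmids "fail" for demonstration. */
static uint64_t read_rmid(int rmid)
{
	return (rmid & 1) ? RMID_VAL_ERROR : (uint64_t)rmid * 100;
}

/* Sum the children that read cleanly; return an error only if
 * none of them did. */
static uint64_t count_group(const int *rmids, int n, uint64_t *val)
{
	uint64_t ret = RMID_VAL_ERROR;

	for (int i = 0; i < n; i++) {
		uint64_t v = read_rmid(rmids[i]);

		if (!(v & RMID_VAL_ERROR)) {
			*val += v;
			ret = 0;
		}
	}
	return ret;
}

int main(void)
{
	int rmids[] = { 1, 2, 3 };
	uint64_t val = 0;

	if (!count_group(rmids, 3, &val))
		printf("total = %llu\n", (unsigned long long)val);
	return 0;
}
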
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 08651a4e6aa0f..42fc41dd0e1f1 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -508,7 +508,7 @@ static struct irq_chip hpet_msi_controller __ro_after_init = {
+ 	.irq_set_affinity = msi_domain_set_affinity,
+ 	.irq_retrigger = irq_chip_retrigger_hierarchy,
+ 	.irq_write_msi_msg = hpet_msi_write_msg,
+-	.flags = IRQCHIP_SKIP_SET_WAKE,
++	.flags = IRQCHIP_SKIP_SET_WAKE | IRQCHIP_AFFINITY_PRE_STARTUP,
+ };
+ 
+ static int hpet_msi_init(struct irq_domain *domain,
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index ed469ddf20741..9d3783800c8ce 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -2454,6 +2454,7 @@ bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
+ 			    bool can_unsync)
+ {
+ 	struct kvm_mmu_page *sp;
++	bool locked = false;
+ 
+ 	if (kvm_page_track_is_active(vcpu, gfn, KVM_PAGE_TRACK_WRITE))
+ 		return true;
+@@ -2465,9 +2466,34 @@ bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
+ 		if (sp->unsync)
+ 			continue;
+ 
++		/*
++		 * TDP MMU page faults require an additional spinlock as they
++		 * run with mmu_lock held for read, not write, and the unsync
++		 * logic is not thread-safe.  Take the spinlock regardless of
++		 * the MMU type to avoid extra conditionals/parameters; there's
++		 * no meaningful penalty if mmu_lock is held for write.
++		 */
++		if (!locked) {
++			locked = true;
++			spin_lock(&vcpu->kvm->arch.mmu_unsync_pages_lock);
++
++			/*
++			 * Recheck after taking the spinlock, a different vCPU
++			 * may have since marked the page unsync.  A false
++			 * positive on the unprotected check above is not
++			 * possible as clearing sp->unsync _must_ hold mmu_lock
++			 * for write, i.e. unsync cannot transition from 0->1
++			 * while this CPU holds mmu_lock for read (or write).
++			 */
++			if (READ_ONCE(sp->unsync))
++				continue;
++		}
++
+ 		WARN_ON(sp->role.level != PG_LEVEL_4K);
+ 		kvm_unsync_page(vcpu, sp);
+ 	}
++	if (locked)
++		spin_unlock(&vcpu->kvm->arch.mmu_unsync_pages_lock);
+ 
+ 	/*
+ 	 * We need to ensure that the marking of unsync pages is visible
+@@ -5514,6 +5540,8 @@ void kvm_mmu_init_vm(struct kvm *kvm)
+ {
+ 	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
+ 
++	spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
++
+ 	kvm_mmu_init_tdp_mmu(kvm);
+ 
+ 	node->track_write = kvm_mmu_pte_write;
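
The mmu.c hunk above is a check-lock-recheck: the unlocked sp->unsync test cheaply skips pages that are already unsync, and the READ_ONCE() recheck under the new spinlock closes the window where another vCPU marked the page first. The pattern in isolation, with a pthread mutex standing in for the spinlock:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t unsync_lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic bool unsync;	/* the flag being published */

static void mark_unsync_once(void)
{
	if (unsync)		/* cheap unlocked fast path */
		return;

	pthread_mutex_lock(&unsync_lock);
	if (!unsync) {		/* recheck: another thread may have won */
		/* ... do the real marking work here ... */
		unsync = true;
	}
	pthread_mutex_unlock(&unsync_lock);
}

int main(void)
{
	mark_unsync_once();
	mark_unsync_once();
	return 0;
}
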
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index 8773bd5287da8..41ef3ed5349f1 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -41,6 +41,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
+ 	if (!kvm->arch.tdp_mmu_enabled)
+ 		return;
+ 
++	WARN_ON(!list_empty(&kvm->arch.tdp_mmu_pages));
+ 	WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
+ 
+ 	/*
+@@ -79,8 +80,6 @@ static void tdp_mmu_free_sp_rcu_callback(struct rcu_head *head)
+ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
+ 			  bool shared)
+ {
+-	gfn_t max_gfn = 1ULL << (shadow_phys_bits - PAGE_SHIFT);
+-
+ 	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
+ 
+ 	if (!refcount_dec_and_test(&root->tdp_mmu_root_count))
+@@ -92,7 +91,7 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
+ 	list_del_rcu(&root->link);
+ 	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+ 
+-	zap_gfn_range(kvm, root, 0, max_gfn, false, false, shared);
++	zap_gfn_range(kvm, root, 0, -1ull, false, false, shared);
+ 
+ 	call_rcu(&root->rcu_head, tdp_mmu_free_sp_rcu_callback);
+ }
+@@ -722,8 +721,17 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+ 			  gfn_t start, gfn_t end, bool can_yield, bool flush,
+ 			  bool shared)
+ {
++	gfn_t max_gfn_host = 1ULL << (shadow_phys_bits - PAGE_SHIFT);
++	bool zap_all = (start == 0 && end >= max_gfn_host);
+ 	struct tdp_iter iter;
+ 
++	/*
++	 * Bound the walk at host.MAXPHYADDR; guest accesses beyond that will
++	 * hit a #PF(RSVD) and never get to an EPT Violation/Misconfig / #NPF,
++	 * and so KVM will never install a SPTE for such addresses.
++	 */
++	end = min(end, max_gfn_host);
++
+ 	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
+ 
+ 	rcu_read_lock();
+@@ -742,9 +750,10 @@ retry:
+ 		/*
+ 		 * If this is a non-last-level SPTE that covers a larger range
+ 		 * than should be zapped, continue, and zap the mappings at a
+-		 * lower level.
++		 * lower level, except when zapping all SPTEs.
+ 		 */
+-		if ((iter.gfn < start ||
++		if (!zap_all &&
++		    (iter.gfn < start ||
+ 		     iter.gfn + KVM_PAGES_PER_HPAGE(iter.level) > end) &&
+ 		    !is_last_spte(iter.old_spte, iter.level))
+ 			continue;
+@@ -792,12 +801,11 @@ bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, int as_id, gfn_t start,
+ 
+ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
+ {
+-	gfn_t max_gfn = 1ULL << (shadow_phys_bits - PAGE_SHIFT);
+ 	bool flush = false;
+ 	int i;
+ 
+ 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
+-		flush = kvm_tdp_mmu_zap_gfn_range(kvm, i, 0, max_gfn,
++		flush = kvm_tdp_mmu_zap_gfn_range(kvm, i, 0, -1ull,
+ 						  flush, false);
+ 
+ 	if (flush)
+@@ -836,7 +844,6 @@ static struct kvm_mmu_page *next_invalidated_root(struct kvm *kvm,
+  */
+ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
+ {
+-	gfn_t max_gfn = 1ULL << (shadow_phys_bits - PAGE_SHIFT);
+ 	struct kvm_mmu_page *next_root;
+ 	struct kvm_mmu_page *root;
+ 	bool flush = false;
+@@ -852,8 +859,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
+ 
+ 		rcu_read_unlock();
+ 
+-		flush = zap_gfn_range(kvm, root, 0, max_gfn, true, flush,
+-				      true);
++		flush = zap_gfn_range(kvm, root, 0, -1ull, true, flush, true);
+ 
+ 		/*
+ 		 * Put the reference acquired in
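
zap_gfn_range() now clamps the requested end to the highest GFN the host can map and detects the zap-everything case up front, which is what lets callers pass -1ull instead of recomputing max_gfn at every site. The clamping logic on its own (values illustrative):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Clamp [start, end) to what the host can map and flag the
 * "zap everything" case so per-level range checks can be skipped. */
static uint64_t clamp_zap_range(uint64_t start, uint64_t end,
				uint64_t max_gfn_host, bool *zap_all)
{
	*zap_all = (start == 0 && end >= max_gfn_host);
	return end < max_gfn_host ? end : max_gfn_host;
}

int main(void)
{
	bool zap_all;
	uint64_t end = clamp_zap_range(0, -1ull, 1ull << 40, &zap_all);

	printf("end=%#llx zap_all=%d\n", (unsigned long long)end, zap_all);
	return 0;
}
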
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index 5e8d8443154e8..61f4186442352 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -149,6 +149,9 @@ void recalc_intercepts(struct vcpu_svm *svm)
+ 
+ 	for (i = 0; i < MAX_INTERCEPT; i++)
+ 		c->intercepts[i] |= g->intercepts[i];
++
++	vmcb_set_intercept(c, INTERCEPT_VMLOAD);
++	vmcb_set_intercept(c, INTERCEPT_VMSAVE);
+ }
+ 
+ static void copy_vmcb_control_area(struct vmcb_control_area *dst,
+@@ -480,7 +483,10 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
+ 
+ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
+ {
+-	const u32 mask = V_INTR_MASKING_MASK | V_GIF_ENABLE_MASK | V_GIF_MASK;
++	const u32 int_ctl_vmcb01_bits =
++		V_INTR_MASKING_MASK | V_GIF_MASK | V_GIF_ENABLE_MASK;
++
++	const u32 int_ctl_vmcb12_bits = V_TPR_MASK | V_IRQ_INJECTION_BITS_MASK;
+ 
+ 	/*
+ 	 * Filled at exit: exit_code, exit_code_hi, exit_info_1, exit_info_2,
+@@ -511,8 +517,8 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
+ 		svm->vcpu.arch.l1_tsc_offset + svm->nested.ctl.tsc_offset;
+ 
+ 	svm->vmcb->control.int_ctl             =
+-		(svm->nested.ctl.int_ctl & ~mask) |
+-		(svm->vmcb01.ptr->control.int_ctl & mask);
++		(svm->nested.ctl.int_ctl & int_ctl_vmcb12_bits) |
++		(svm->vmcb01.ptr->control.int_ctl & int_ctl_vmcb01_bits);
+ 
+ 	svm->vmcb->control.virt_ext            = svm->nested.ctl.virt_ext;
+ 	svm->vmcb->control.int_vector          = svm->nested.ctl.int_vector;
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 62f1f06fe1970..0a49a0db54e1e 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1552,17 +1552,18 @@ static void svm_set_vintr(struct vcpu_svm *svm)
+ 
+ static void svm_clear_vintr(struct vcpu_svm *svm)
+ {
+-	const u32 mask = V_TPR_MASK | V_GIF_ENABLE_MASK | V_GIF_MASK | V_INTR_MASKING_MASK;
+ 	svm_clr_intercept(svm, INTERCEPT_VINTR);
+ 
+ 	/* Drop int_ctl fields related to VINTR injection.  */
+-	svm->vmcb->control.int_ctl &= mask;
++	svm->vmcb->control.int_ctl &= ~V_IRQ_INJECTION_BITS_MASK;
+ 	if (is_guest_mode(&svm->vcpu)) {
+-		svm->vmcb01.ptr->control.int_ctl &= mask;
++		svm->vmcb01.ptr->control.int_ctl &= ~V_IRQ_INJECTION_BITS_MASK;
+ 
+ 		WARN_ON((svm->vmcb->control.int_ctl & V_TPR_MASK) !=
+ 			(svm->nested.ctl.int_ctl & V_TPR_MASK));
+-		svm->vmcb->control.int_ctl |= svm->nested.ctl.int_ctl & ~mask;
++
++		svm->vmcb->control.int_ctl |= svm->nested.ctl.int_ctl &
++			V_IRQ_INJECTION_BITS_MASK;
+ 	}
+ 
+ 	vmcb_mark_dirty(svm->vmcb, VMCB_INTR);
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 2e63171864a74..df3b7e5644169 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -5798,7 +5798,8 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
+ 		if (is_nmi(intr_info))
+ 			return true;
+ 		else if (is_page_fault(intr_info))
+-			return vcpu->arch.apf.host_apf_flags || !enable_ept;
++			return vcpu->arch.apf.host_apf_flags ||
++			       vmx_need_pf_intercept(vcpu);
+ 		else if (is_debug(intr_info) &&
+ 			 vcpu->guest_debug &
+ 			 (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP))
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index d91869c8c1fc2..35368ad313ad2 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -538,7 +538,7 @@ static inline void decache_tsc_multiplier(struct vcpu_vmx *vmx)
+ 
+ static inline bool vmx_has_waitpkg(struct vcpu_vmx *vmx)
+ {
+-	return vmx->secondary_exec_control &
++	return secondary_exec_controls_get(vmx) &
+ 		SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE;
+ }
+ 
+diff --git a/arch/x86/tools/chkobjdump.awk b/arch/x86/tools/chkobjdump.awk
+index fd1ab80be0dec..a4cf678cf5c80 100644
+--- a/arch/x86/tools/chkobjdump.awk
++++ b/arch/x86/tools/chkobjdump.awk
+@@ -10,6 +10,7 @@ BEGIN {
+ 
+ /^GNU objdump/ {
+ 	verstr = ""
++	gsub(/\(.*\)/, "");
+ 	for (i = 3; i <= NF; i++)
+ 		if (match($(i), "^[0-9]")) {
+ 			verstr = $(i);
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 582d2f18717ee..9e6981d28595c 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -774,6 +774,7 @@ static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
+ 		struct blkcg_gq *parent = blkg->parent;
+ 		struct blkg_iostat_set *bisc = per_cpu_ptr(blkg->iostat_cpu, cpu);
+ 		struct blkg_iostat cur, delta;
++		unsigned long flags;
+ 		unsigned int seq;
+ 
+ 		/* fetch the current per-cpu values */
+@@ -783,21 +784,21 @@ static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
+ 		} while (u64_stats_fetch_retry(&bisc->sync, seq));
+ 
+ 		/* propagate percpu delta to global */
+-		u64_stats_update_begin(&blkg->iostat.sync);
++		flags = u64_stats_update_begin_irqsave(&blkg->iostat.sync);
+ 		blkg_iostat_set(&delta, &cur);
+ 		blkg_iostat_sub(&delta, &bisc->last);
+ 		blkg_iostat_add(&blkg->iostat.cur, &delta);
+ 		blkg_iostat_add(&bisc->last, &delta);
+-		u64_stats_update_end(&blkg->iostat.sync);
++		u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
+ 
+ 		/* propagate global delta to parent (unless that's root) */
+ 		if (parent && parent->parent) {
+-			u64_stats_update_begin(&parent->iostat.sync);
++			flags = u64_stats_update_begin_irqsave(&parent->iostat.sync);
+ 			blkg_iostat_set(&delta, &blkg->iostat.cur);
+ 			blkg_iostat_sub(&delta, &blkg->iostat.last);
+ 			blkg_iostat_add(&parent->iostat.cur, &delta);
+ 			blkg_iostat_add(&blkg->iostat.last, &delta);
+-			u64_stats_update_end(&parent->iostat.sync);
++			u64_stats_update_end_irqrestore(&parent->iostat.sync, flags);
+ 		}
+ 	}
+ 
+@@ -832,6 +833,7 @@ static void blkcg_fill_root_iostats(void)
+ 		memset(&tmp, 0, sizeof(tmp));
+ 		for_each_possible_cpu(cpu) {
+ 			struct disk_stats *cpu_dkstats;
++			unsigned long flags;
+ 
+ 			cpu_dkstats = per_cpu_ptr(bdev->bd_stats, cpu);
+ 			tmp.ios[BLKG_IOSTAT_READ] +=
+@@ -848,9 +850,9 @@ static void blkcg_fill_root_iostats(void)
+ 			tmp.bytes[BLKG_IOSTAT_DISCARD] +=
+ 				cpu_dkstats->sectors[STAT_DISCARD] << 9;
+ 
+-			u64_stats_update_begin(&blkg->iostat.sync);
++			flags = u64_stats_update_begin_irqsave(&blkg->iostat.sync);
+ 			blkg_iostat_set(&blkg->iostat.cur, &tmp);
+-			u64_stats_update_end(&blkg->iostat.sync);
++			u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
+ 		}
+ 	}
+ }
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 23d9a09d70604..a3ef6cce644cc 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -3021,6 +3021,9 @@ static int acpi_nfit_register_region(struct acpi_nfit_desc *acpi_desc,
+ 		struct acpi_nfit_memory_map *memdev = nfit_memdev->memdev;
+ 		struct nd_mapping_desc *mapping;
+ 
++		/* range index 0 == unmapped in SPA or invalid-SPA */
++		if (memdev->range_index == 0 || spa->range_index == 0)
++			continue;
+ 		if (memdev->range_index != spa->range_index)
+ 			continue;
+ 		if (count >= ND_MAX_MAPPINGS) {
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 042b13d88f179..923cccc8fcfcc 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -2809,6 +2809,7 @@ void device_initialize(struct device *dev)
+ 	device_pm_init(dev);
+ 	set_dev_node(dev, -1);
+ #ifdef CONFIG_GENERIC_MSI_IRQ
++	raw_spin_lock_init(&dev->msi_lock);
+ 	INIT_LIST_HEAD(&dev->msi_list);
+ #endif
+ 	INIT_LIST_HEAD(&dev->links.consumers);
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 45d2c28c8fc83..1061894a55df2 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -805,6 +805,10 @@ static bool nbd_clear_req(struct request *req, void *data, bool reserved)
+ {
+ 	struct nbd_cmd *cmd = blk_mq_rq_to_pdu(req);
+ 
++	/* don't abort a request that has already completed */
++	if (blk_mq_request_completed(req))
++		return true;
++
+ 	mutex_lock(&cmd->lock);
+ 	cmd->status = BLK_STS_IOERR;
+ 	mutex_unlock(&cmd->lock);
+@@ -1973,15 +1977,19 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd)
+ {
+ 	mutex_lock(&nbd->config_lock);
+ 	nbd_disconnect(nbd);
+-	nbd_clear_sock(nbd);
+-	mutex_unlock(&nbd->config_lock);
++	sock_shutdown(nbd);
+ 	/*
+ 	 * Make sure recv thread has finished, so it does not drop the last
+ 	 * config ref and try to destroy the workqueue from inside the work
+-	 * queue.
++	 * queue. This also ensures that we can safely call nbd_clear_que()
++	 * to cancel the inflight I/Os.
+ 	 */
+ 	if (nbd->recv_workq)
+ 		flush_workqueue(nbd->recv_workq);
++	nbd_clear_que(nbd);
++	nbd->task_setup = NULL;
++	mutex_unlock(&nbd->config_lock);
++
+ 	if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF,
+ 			       &nbd->config->runtime_flags))
+ 		nbd_config_put(nbd);
+diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
+index 7bf0a7acae5e6..da0252a4c3695 100644
+--- a/drivers/firmware/efi/libstub/arm64-stub.c
++++ b/drivers/firmware/efi/libstub/arm64-stub.c
+@@ -35,15 +35,48 @@ efi_status_t check_platform_features(void)
+ }
+ 
+ /*
+- * Although relocatable kernels can fix up the misalignment with respect to
+- * MIN_KIMG_ALIGN, the resulting virtual text addresses are subtly out of
+- * sync with those recorded in the vmlinux when kaslr is disabled but the
+- * image required relocation anyway. Therefore retain 2M alignment unless
+- * KASLR is in use.
++ * Distro versions of GRUB may ignore the BSS allocation entirely (i.e., fail
++ * to provide space, and fail to zero it). Check for this condition by double
++ * checking that the first and the last byte of the image are covered by the
++ * same EFI memory map entry.
+  */
+-static u64 min_kimg_align(void)
++static bool check_image_region(u64 base, u64 size)
+ {
+-	return efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
++	unsigned long map_size, desc_size, buff_size;
++	efi_memory_desc_t *memory_map;
++	struct efi_boot_memmap map;
++	efi_status_t status;
++	bool ret = false;
++	int map_offset;
++
++	map.map =	&memory_map;
++	map.map_size =	&map_size;
++	map.desc_size =	&desc_size;
++	map.desc_ver =	NULL;
++	map.key_ptr =	NULL;
++	map.buff_size =	&buff_size;
++
++	status = efi_get_memory_map(&map);
++	if (status != EFI_SUCCESS)
++		return false;
++
++	for (map_offset = 0; map_offset < map_size; map_offset += desc_size) {
++		efi_memory_desc_t *md = (void *)memory_map + map_offset;
++		u64 end = md->phys_addr + md->num_pages * EFI_PAGE_SIZE;
++
++		/*
++		 * Find the region that covers base, and return whether
++		 * it covers base+size bytes.
++		 */
++		if (base >= md->phys_addr && base < end) {
++			ret = (base + size) <= end;
++			break;
++		}
++	}
++
++	efi_bs_call(free_pool, memory_map);
++
++	return ret;
+ }
+ 
+ efi_status_t handle_kernel_image(unsigned long *image_addr,
+@@ -56,6 +89,16 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
+ 	unsigned long kernel_size, kernel_memsize = 0;
+ 	u32 phys_seed = 0;
+ 
++	/*
++	 * Although relocatable kernels can fix up the misalignment with
++	 * respect to MIN_KIMG_ALIGN, the resulting virtual text addresses are
++	 * subtly out of sync with those recorded in the vmlinux when kaslr is
++	 * disabled but the image required relocation anyway. Therefore retain
++	 * 2M alignment if KASLR was explicitly disabled, even if it was not
++	 * going to be activated to begin with.
++	 */
++	u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
++
+ 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+ 		if (!efi_nokaslr) {
+ 			status = efi_get_random_bytes(sizeof(phys_seed),
+@@ -76,6 +119,10 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
+ 	if (image->image_base != _text)
+ 		efi_err("FIRMWARE BUG: efi_loaded_image_t::image_base has bogus value\n");
+ 
++	if (!IS_ALIGNED((u64)_text, EFI_KIMG_ALIGN))
++		efi_err("FIRMWARE BUG: kernel image not aligned on %ldk boundary\n",
++			EFI_KIMG_ALIGN >> 10);
++
+ 	kernel_size = _edata - _text;
+ 	kernel_memsize = kernel_size + (_end - _edata);
+ 	*reserve_size = kernel_memsize;
+@@ -85,14 +132,16 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
+ 		 * If KASLR is enabled, and we have some randomness available,
+ 		 * locate the kernel at a randomized offset in physical memory.
+ 		 */
+-		status = efi_random_alloc(*reserve_size, min_kimg_align(),
++		status = efi_random_alloc(*reserve_size, min_kimg_align,
+ 					  reserve_addr, phys_seed);
+ 	} else {
+ 		status = EFI_OUT_OF_RESOURCES;
+ 	}
+ 
+ 	if (status != EFI_SUCCESS) {
+-		if (IS_ALIGNED((u64)_text, min_kimg_align())) {
++		if (!check_image_region((u64)_text, kernel_memsize)) {
++			efi_err("FIRMWARE BUG: Image BSS overlaps adjacent EFI memory region\n");
++		} else if (IS_ALIGNED((u64)_text, min_kimg_align)) {
+ 			/*
+ 			 * Just execute from wherever we were loaded by the
+ 			 * UEFI PE/COFF loader if the alignment is suitable.
+@@ -103,7 +152,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
+ 		}
+ 
+ 		status = efi_allocate_pages_aligned(*reserve_size, reserve_addr,
+-						    ULONG_MAX, min_kimg_align());
++						    ULONG_MAX, min_kimg_align);
+ 
+ 		if (status != EFI_SUCCESS) {
+ 			efi_err("Failed to relocate kernel\n");
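
check_image_region() walks the EFI memory map and accepts the image only if its first and last bytes land in the same descriptor, catching loaders that failed to reserve (and zero) the BSS. The containment walk with simplified types:

#include <stdbool.h>
#include <stdint.h>

#define EFI_PAGE_SZ	4096ull	/* illustrative page size */

struct mem_desc {
	uint64_t phys_addr;
	uint64_t num_pages;
};

/* Find the entry covering base and report whether it also covers
 * base + size, mirroring the patch's check. */
static bool image_region_ok(const struct mem_desc *map, int n,
			    uint64_t base, uint64_t size)
{
	for (int i = 0; i < n; i++) {
		uint64_t start = map[i].phys_addr;
		uint64_t end = start + map[i].num_pages * EFI_PAGE_SZ;

		if (base >= start && base < end)
			return base + size <= end;
	}
	return false;
}

int main(void)
{
	struct mem_desc map[] = { { 0x80000000ull, 0x8000 } };

	return image_region_ok(map, 1, 0x80000000ull, 0x2000000ull) ? 0 : 1;
}
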
+diff --git a/drivers/firmware/efi/libstub/randomalloc.c b/drivers/firmware/efi/libstub/randomalloc.c
+index a408df474d837..724155b9e10dc 100644
+--- a/drivers/firmware/efi/libstub/randomalloc.c
++++ b/drivers/firmware/efi/libstub/randomalloc.c
+@@ -30,6 +30,8 @@ static unsigned long get_entry_num_slots(efi_memory_desc_t *md,
+ 
+ 	region_end = min(md->phys_addr + md->num_pages * EFI_PAGE_SIZE - 1,
+ 			 (u64)ULONG_MAX);
++	if (region_end < size)
++		return 0;
+ 
+ 	first_slot = round_up(md->phys_addr, align);
+ 	last_slot = round_down(region_end - size + 1, align);
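
The randomalloc.c guard exists because region_end - size + 1 underflows for regions smaller than the allocation, which previously produced a huge bogus slot count. A sketch of the guarded slot computation, with the rounding helpers inlined and region_end inclusive as in the driver:

#include <stdint.h>
#include <stdio.h>

/* Count aligned placements of `size` bytes ending at or before
 * region_end; bail out before the subtraction can wrap. */
static uint64_t slots_in_region(uint64_t region_start, uint64_t region_end,
				uint64_t size, uint64_t align)
{
	uint64_t first, last;

	if (region_end < size)
		return 0;	/* the guard added by the patch */

	first = (region_start + align - 1) / align * align; /* round up */
	last = (region_end - size + 1) / align * align;	     /* round down */
	if (last < first)
		return 0;
	return (last - first) / align + 1;
}

int main(void)
{
	printf("%llu\n", (unsigned long long)
	       slots_in_region(0, 0x1000, 0x10000, 0x1000));
	return 0;
}
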
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index e1b6f58917599..73f45f2e7fc4e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -299,6 +299,9 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
+ 				  ip->major, ip->minor,
+ 				  ip->revision);
+ 
++			if (le16_to_cpu(ip->hw_id) == VCN_HWID)
++				adev->vcn.num_vcn_inst++;
++
+ 			for (k = 0; k < num_base_address; k++) {
+ 				/*
+ 				 * convert the endianness of base addresses in place,
+@@ -377,7 +380,7 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
+ {
+ 	struct binary_header *bhdr;
+ 	struct harvest_table *harvest_info;
+-	int i;
++	int i, vcn_harvest_count = 0;
+ 
+ 	bhdr = (struct binary_header *)adev->mman.discovery_bin;
+ 	harvest_info = (struct harvest_table *)(adev->mman.discovery_bin +
+@@ -389,8 +392,7 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
+ 
+ 		switch (le32_to_cpu(harvest_info->list[i].hw_id)) {
+ 		case VCN_HWID:
+-			adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
+-			adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
++			vcn_harvest_count++;
+ 			break;
+ 		case DMU_HWID:
+ 			adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK;
+@@ -399,6 +401,10 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
+ 			break;
+ 		}
+ 	}
++	if (vcn_harvest_count == adev->vcn.num_vcn_inst) {
++		adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
++		adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
++	}
+ }
+ 
+ int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 9dfed9d560dc4..7571154689b75 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1537,6 +1537,8 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
+ 		pci_ignore_hotplug(pdev);
+ 		pci_set_power_state(pdev, PCI_D3cold);
+ 		drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
++	} else if (amdgpu_device_supports_boco(drm_dev)) {
++		/* nothing to do */
+ 	} else if (amdgpu_device_supports_baco(drm_dev)) {
+ 		amdgpu_device_baco_enter(drm_dev);
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0894cd505361a..ed221f815a1fa 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -9410,7 +9410,12 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
+ 		} else if (amdgpu_freesync_vid_mode && aconnector &&
+ 			   is_freesync_video_mode(&new_crtc_state->mode,
+ 						  aconnector)) {
+-			set_freesync_fixed_config(dm_new_crtc_state);
++			struct drm_display_mode *high_mode;
++
++			high_mode = get_highest_refresh_rate_mode(aconnector, false);
++			if (!drm_mode_equal(&new_crtc_state->mode, high_mode)) {
++				set_freesync_fixed_config(dm_new_crtc_state);
++			}
+ 		}
+ 
+ 		ret = dm_atomic_get_state(state, &dm_state);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+index b3ed7e7777204..61ee83c32ee70 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+@@ -584,7 +584,7 @@ static void amdgpu_dm_irq_schedule_work(struct amdgpu_device *adev,
+ 		handler_data = container_of(handler_list->next, struct amdgpu_dm_irq_handler_data, list);
+ 
+ 		/*allocate a new amdgpu_dm_irq_handler_data*/
+-		handler_data_add = kzalloc(sizeof(*handler_data), GFP_KERNEL);
++		handler_data_add = kzalloc(sizeof(*handler_data), GFP_ATOMIC);
+ 		if (!handler_data_add) {
+ 			DRM_ERROR("DM_IRQ: failed to allocate irq handler!\n");
+ 			return;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+index 5fcc2e64305d5..a5a1cb62f967f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+@@ -1788,7 +1788,6 @@ static bool dcn30_split_stream_for_mpc_or_odm(
+ 		}
+ 		pri_pipe->next_odm_pipe = sec_pipe;
+ 		sec_pipe->prev_odm_pipe = pri_pipe;
+-		ASSERT(sec_pipe->top_pipe == NULL);
+ 
+ 		if (!sec_pipe->top_pipe)
+ 			sec_pipe->stream_res.opp = pool->opps[pipe_idx];
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+index 77f532a49e37f..bacef9120b8dc 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+@@ -242,7 +242,7 @@ static int vangogh_tables_init(struct smu_context *smu)
+ 	return 0;
+ 
+ err3_out:
+-	kfree(smu_table->clocks_table);
++	kfree(smu_table->watermarks_table);
+ err2_out:
+ 	kfree(smu_table->gpu_metrics_table);
+ err1_out:
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index 64e9107d70f7e..a1d4c09f6d918 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -5424,16 +5424,18 @@ static void bdw_set_pipemisc(const struct intel_crtc_state *crtc_state)
+ 
+ 	switch (crtc_state->pipe_bpp) {
+ 	case 18:
+-		val |= PIPEMISC_DITHER_6_BPC;
++		val |= PIPEMISC_6_BPC;
+ 		break;
+ 	case 24:
+-		val |= PIPEMISC_DITHER_8_BPC;
++		val |= PIPEMISC_8_BPC;
+ 		break;
+ 	case 30:
+-		val |= PIPEMISC_DITHER_10_BPC;
++		val |= PIPEMISC_10_BPC;
+ 		break;
+ 	case 36:
+-		val |= PIPEMISC_DITHER_12_BPC;
++		/* Port output 12BPC defined for ADLP+ */
++		if (DISPLAY_VER(dev_priv) > 12)
++			val |= PIPEMISC_12_BPC_ADLP;
+ 		break;
+ 	default:
+ 		MISSING_CASE(crtc_state->pipe_bpp);
+@@ -5469,15 +5471,27 @@ int bdw_get_pipemisc_bpp(struct intel_crtc *crtc)
+ 
+ 	tmp = intel_de_read(dev_priv, PIPEMISC(crtc->pipe));
+ 
+-	switch (tmp & PIPEMISC_DITHER_BPC_MASK) {
+-	case PIPEMISC_DITHER_6_BPC:
++	switch (tmp & PIPEMISC_BPC_MASK) {
++	case PIPEMISC_6_BPC:
+ 		return 18;
+-	case PIPEMISC_DITHER_8_BPC:
++	case PIPEMISC_8_BPC:
+ 		return 24;
+-	case PIPEMISC_DITHER_10_BPC:
++	case PIPEMISC_10_BPC:
+ 		return 30;
+-	case PIPEMISC_DITHER_12_BPC:
+-		return 36;
++	/*
++	 * PORT OUTPUT 12 BPC is defined for ADLP+.
++	 *
++	 * TODO:
++	 * For previous platforms with DSI interface, bits 5:7
++	 * are used for storing pipe_bpp irrespective of dithering.
++	 * Since the value of 12 BPC is not defined for these bits
++	 * on older platforms, we need to find a workaround for 12 BPC
++	 * MIPI DSI HW readout.
++	 */
++	case PIPEMISC_12_BPC_ADLP:
++		if (DISPLAY_VER(dev_priv) > 12)
++			return 36;
++		fallthrough;
+ 	default:
+ 		MISSING_CASE(tmp);
+ 		return 0;
+diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
+index 2358c92733b0d..55611c7dbcec8 100644
+--- a/drivers/gpu/drm/i915/gvt/handlers.c
++++ b/drivers/gpu/drm/i915/gvt/handlers.c
+@@ -3149,6 +3149,7 @@ static int init_bdw_mmio_info(struct intel_gvt *gvt)
+ 	MMIO_DFH(_MMIO(0xb100), D_BDW, F_CMD_ACCESS, NULL, NULL);
+ 	MMIO_DFH(_MMIO(0xb10c), D_BDW, F_CMD_ACCESS, NULL, NULL);
+ 	MMIO_D(_MMIO(0xb110), D_BDW);
++	MMIO_D(GEN9_SCRATCH_LNCF1, D_BDW_PLUS);
+ 
+ 	MMIO_F(_MMIO(0x24d0), 48, F_CMD_ACCESS | F_CMD_WRITE_PATCH, 0, 0,
+ 		D_BDW_PLUS, NULL, force_nonpriv_write);
+diff --git a/drivers/gpu/drm/i915/gvt/mmio_context.c b/drivers/gpu/drm/i915/gvt/mmio_context.c
+index c9589e26af936..ac24adc320d33 100644
+--- a/drivers/gpu/drm/i915/gvt/mmio_context.c
++++ b/drivers/gpu/drm/i915/gvt/mmio_context.c
+@@ -105,6 +105,8 @@ static struct engine_mmio gen9_engine_mmio_list[] __cacheline_aligned = {
+ 	{RCS0, COMMON_SLICE_CHICKEN2, 0xffff, true}, /* 0x7014 */
+ 	{RCS0, GEN9_CS_DEBUG_MODE1, 0xffff, false}, /* 0x20ec */
+ 	{RCS0, GEN8_L3SQCREG4, 0, false}, /* 0xb118 */
++	{RCS0, GEN9_SCRATCH1, 0, false}, /* 0xb11c */
++	{RCS0, GEN9_SCRATCH_LNCF1, 0, false}, /* 0xb008 */
+ 	{RCS0, GEN7_HALF_SLICE_CHICKEN1, 0xffff, true}, /* 0xe100 */
+ 	{RCS0, HALF_SLICE_CHICKEN2, 0xffff, true}, /* 0xe180 */
+ 	{RCS0, HALF_SLICE_CHICKEN3, 0xffff, true}, /* 0xe184 */
+diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
+index bb181fe5d47ee..725f241a428c0 100644
+--- a/drivers/gpu/drm/i915/i915_gpu_error.c
++++ b/drivers/gpu/drm/i915/i915_gpu_error.c
+@@ -728,9 +728,18 @@ static void err_print_gt(struct drm_i915_error_state_buf *m,
+ 	if (INTEL_GEN(m->i915) >= 12) {
+ 		int i;
+ 
+-		for (i = 0; i < GEN12_SFC_DONE_MAX; i++)
++		for (i = 0; i < GEN12_SFC_DONE_MAX; i++) {
++			/*
++			 * SFC_DONE resides in the VD forcewake domain, so it
++			 * only exists if the corresponding VCS engine is
++			 * present.
++			 */
++			if (!HAS_ENGINE(gt->_gt, _VCS(i * 2)))
++				continue;
++
+ 			err_printf(m, "  SFC_DONE[%d]: 0x%08x\n", i,
+ 				   gt->sfc_done[i]);
++		}
+ 
+ 		err_printf(m, "  GAM_DONE: 0x%08x\n", gt->gam_done);
+ 	}
+@@ -1586,6 +1595,14 @@ static void gt_record_regs(struct intel_gt_coredump *gt)
+ 
+ 	if (INTEL_GEN(i915) >= 12) {
+ 		for (i = 0; i < GEN12_SFC_DONE_MAX; i++) {
++			/*
++			 * SFC_DONE resides in the VD forcewake domain, so it
++			 * only exists if the corresponding VCS engine is
++			 * present.
++			 */
++			if (!HAS_ENGINE(gt->_gt, _VCS(i * 2)))
++				continue;
++
+ 			gt->sfc_done[i] =
+ 				intel_uncore_read(uncore, GEN12_SFC_DONE(i));
+ 		}
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 97fc7a51c1006..dfccba962dc10 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -6134,11 +6134,17 @@ enum {
+ #define   PIPEMISC_HDR_MODE_PRECISION	(1 << 23) /* icl+ */
+ #define   PIPEMISC_OUTPUT_COLORSPACE_YUV  (1 << 11)
+ #define   PIPEMISC_PIXEL_ROUNDING_TRUNC	REG_BIT(8) /* tgl+ */
+-#define   PIPEMISC_DITHER_BPC_MASK	(7 << 5)
+-#define   PIPEMISC_DITHER_8_BPC		(0 << 5)
+-#define   PIPEMISC_DITHER_10_BPC	(1 << 5)
+-#define   PIPEMISC_DITHER_6_BPC		(2 << 5)
+-#define   PIPEMISC_DITHER_12_BPC	(3 << 5)
++/*
++ * For Display < 13, bits 5-7 of PIPE MISC represent DITHER BPC with
++ * valid values of: 6, 8, 10 BPC.
++ * On ADLP+, bits 5-7 represent PORT OUTPUT BPC with valid values of:
++ * 6, 8, 10, 12 BPC.
++ */
++#define   PIPEMISC_BPC_MASK		(7 << 5)
++#define   PIPEMISC_8_BPC		(0 << 5)
++#define   PIPEMISC_10_BPC		(1 << 5)
++#define   PIPEMISC_6_BPC		(2 << 5)
++#define   PIPEMISC_12_BPC_ADLP		(4 << 5) /* adlp+ */
+ #define   PIPEMISC_DITHER_ENABLE	(1 << 4)
+ #define   PIPEMISC_DITHER_TYPE_MASK	(3 << 2)
+ #define   PIPEMISC_DITHER_TYPE_SP	(0 << 2)
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index 474efb8442493..735efe79f0759 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -532,13 +532,10 @@ void mtk_drm_crtc_async_update(struct drm_crtc *crtc, struct drm_plane *plane,
+ 			       struct drm_atomic_state *state)
+ {
+ 	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
+-	const struct drm_plane_helper_funcs *plane_helper_funcs =
+-			plane->helper_private;
+ 
+ 	if (!mtk_crtc->enabled)
+ 		return;
+ 
+-	plane_helper_funcs->atomic_update(plane, state);
+ 	mtk_drm_crtc_update_config(mtk_crtc, false);
+ }
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_plane.c b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
+index b5582dcf564ce..e6dcb34d30522 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_plane.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_plane.c
+@@ -110,6 +110,35 @@ static int mtk_plane_atomic_async_check(struct drm_plane *plane,
+ 						   true, true);
+ }
+ 
++static void mtk_plane_update_new_state(struct drm_plane_state *new_state,
++				       struct mtk_plane_state *mtk_plane_state)
++{
++	struct drm_framebuffer *fb = new_state->fb;
++	struct drm_gem_object *gem;
++	struct mtk_drm_gem_obj *mtk_gem;
++	unsigned int pitch, format;
++	dma_addr_t addr;
++
++	gem = fb->obj[0];
++	mtk_gem = to_mtk_gem_obj(gem);
++	addr = mtk_gem->dma_addr;
++	pitch = fb->pitches[0];
++	format = fb->format->format;
++
++	addr += (new_state->src.x1 >> 16) * fb->format->cpp[0];
++	addr += (new_state->src.y1 >> 16) * pitch;
++
++	mtk_plane_state->pending.enable = true;
++	mtk_plane_state->pending.pitch = pitch;
++	mtk_plane_state->pending.format = format;
++	mtk_plane_state->pending.addr = addr;
++	mtk_plane_state->pending.x = new_state->dst.x1;
++	mtk_plane_state->pending.y = new_state->dst.y1;
++	mtk_plane_state->pending.width = drm_rect_width(&new_state->dst);
++	mtk_plane_state->pending.height = drm_rect_height(&new_state->dst);
++	mtk_plane_state->pending.rotation = new_state->rotation;
++}
++
+ static void mtk_plane_atomic_async_update(struct drm_plane *plane,
+ 					  struct drm_atomic_state *state)
+ {
+@@ -126,8 +155,10 @@ static void mtk_plane_atomic_async_update(struct drm_plane *plane,
+ 	plane->state->src_h = new_state->src_h;
+ 	plane->state->src_w = new_state->src_w;
+ 	swap(plane->state->fb, new_state->fb);
+-	new_plane_state->pending.async_dirty = true;
+ 
++	mtk_plane_update_new_state(new_state, new_plane_state);
++	wmb(); /* Make sure the above parameters are set before update */
++	new_plane_state->pending.async_dirty = true;
+ 	mtk_drm_crtc_async_update(new_state->crtc, plane, state);
+ }
+ 
+@@ -189,14 +220,8 @@ static void mtk_plane_atomic_update(struct drm_plane *plane,
+ 	struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state,
+ 									   plane);
+ 	struct mtk_plane_state *mtk_plane_state = to_mtk_plane_state(new_state);
+-	struct drm_crtc *crtc = new_state->crtc;
+-	struct drm_framebuffer *fb = new_state->fb;
+-	struct drm_gem_object *gem;
+-	struct mtk_drm_gem_obj *mtk_gem;
+-	unsigned int pitch, format;
+-	dma_addr_t addr;
+ 
+-	if (!crtc || WARN_ON(!fb))
++	if (!new_state->crtc || WARN_ON(!new_state->fb))
+ 		return;
+ 
+ 	if (!new_state->visible) {
+@@ -204,24 +229,7 @@ static void mtk_plane_atomic_update(struct drm_plane *plane,
+ 		return;
+ 	}
+ 
+-	gem = fb->obj[0];
+-	mtk_gem = to_mtk_gem_obj(gem);
+-	addr = mtk_gem->dma_addr;
+-	pitch = fb->pitches[0];
+-	format = fb->format->format;
+-
+-	addr += (new_state->src.x1 >> 16) * fb->format->cpp[0];
+-	addr += (new_state->src.y1 >> 16) * pitch;
+-
+-	mtk_plane_state->pending.enable = true;
+-	mtk_plane_state->pending.pitch = pitch;
+-	mtk_plane_state->pending.format = format;
+-	mtk_plane_state->pending.addr = addr;
+-	mtk_plane_state->pending.x = new_state->dst.x1;
+-	mtk_plane_state->pending.y = new_state->dst.y1;
+-	mtk_plane_state->pending.width = drm_rect_width(&new_state->dst);
+-	mtk_plane_state->pending.height = drm_rect_height(&new_state->dst);
+-	mtk_plane_state->pending.rotation = new_state->rotation;
++	mtk_plane_update_new_state(new_state, mtk_plane_state);
+ 	wmb(); /* Make sure the above parameters are set before update */
+ 	mtk_plane_state->pending.dirty = true;
+ }
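
Both update paths now share the same publish protocol: fill in every `pending` field, issue wmb(), and only then raise the flag the consumer polls. In isolation (a sketch; `state` stands for the mtk_plane_state, and the consumer side is assumed to pair the barrier in the driver's commit path):

	/* Producer: publish all payload fields before the flag. */
	state->pending.addr  = addr;
	state->pending.pitch = pitch;
	/* ... remaining pending fields ... */
	wmb();			/* order payload before the flag */
	state->pending.dirty = true;

	/* Consumer (sketch): test the flag, then a read barrier,
	 * then consume the payload:
	 *	if (state->pending.dirty) { rmb(); use payload; }
	 */
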
+diff --git a/drivers/gpu/drm/meson/meson_registers.h b/drivers/gpu/drm/meson/meson_registers.h
+index 446e7961da486..0f3cafab88600 100644
+--- a/drivers/gpu/drm/meson/meson_registers.h
++++ b/drivers/gpu/drm/meson/meson_registers.h
+@@ -634,6 +634,11 @@
+ #define VPP_WRAP_OSD3_MATRIX_PRE_OFFSET2 0x3dbc
+ #define VPP_WRAP_OSD3_MATRIX_EN_CTRL 0x3dbd
+ 
++/* osd1 HDR */
++#define OSD1_HDR2_CTRL 0x38a0
++#define OSD1_HDR2_CTRL_VDIN0_HDR2_TOP_EN       BIT(13)
++#define OSD1_HDR2_CTRL_REG_ONLY_MAT            BIT(16)
++
+ /* osd2 scaler */
+ #define OSD2_VSC_PHASE_STEP 0x3d00
+ #define OSD2_VSC_INI_PHASE 0x3d01
+diff --git a/drivers/gpu/drm/meson/meson_viu.c b/drivers/gpu/drm/meson/meson_viu.c
+index aede0c67a57f0..259f3e6bec90a 100644
+--- a/drivers/gpu/drm/meson/meson_viu.c
++++ b/drivers/gpu/drm/meson/meson_viu.c
+@@ -425,9 +425,14 @@ void meson_viu_init(struct meson_drm *priv)
+ 	if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXM) ||
+ 	    meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXL))
+ 		meson_viu_load_matrix(priv);
+-	else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A))
++	else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
+ 		meson_viu_set_g12a_osd1_matrix(priv, RGB709_to_YUV709l_coeff,
+ 					       true);
++		/* fix green/pink color distortion from vendor u-boot */
++		writel_bits_relaxed(OSD1_HDR2_CTRL_REG_ONLY_MAT |
++				OSD1_HDR2_CTRL_VDIN0_HDR2_TOP_EN, 0,
++				priv->io_base + _REG(OSD1_HDR2_CTRL));
++	}
+ 
+ 	/* Initialize OSD1 fifo control register */
+ 	reg = VIU_OSD_DDR_PRIORITY_URGENT |
+diff --git a/drivers/i2c/busses/i2c-bcm-iproc.c b/drivers/i2c/busses/i2c-bcm-iproc.c
+index cceaf69279a94..6304d1dd2dd6f 100644
+--- a/drivers/i2c/busses/i2c-bcm-iproc.c
++++ b/drivers/i2c/busses/i2c-bcm-iproc.c
+@@ -1224,14 +1224,14 @@ static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave)
+ 
+ 	disable_irq(iproc_i2c->irq);
+ 
++	tasklet_kill(&iproc_i2c->slave_rx_tasklet);
++
+ 	/* disable all slave interrupts */
+ 	tmp = iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET);
+ 	tmp &= ~(IE_S_ALL_INTERRUPT_MASK <<
+ 			IE_S_ALL_INTERRUPT_SHIFT);
+ 	iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, tmp);
+ 
+-	tasklet_kill(&iproc_i2c->slave_rx_tasklet);
+-
+ 	/* Erase the slave address programmed */
+ 	tmp = iproc_i2c_rd_reg(iproc_i2c, S_CFG_SMBUS_ADDR_OFFSET);
+ 	tmp &= ~BIT(S_CFG_EN_NIC_SMB_ADDR3_SHIFT);
+diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
+index cb64fe649390e..77f576e516522 100644
+--- a/drivers/i2c/i2c-dev.c
++++ b/drivers/i2c/i2c-dev.c
+@@ -141,7 +141,7 @@ static ssize_t i2cdev_read(struct file *file, char __user *buf, size_t count,
+ 	if (count > 8192)
+ 		count = 8192;
+ 
+-	tmp = kmalloc(count, GFP_KERNEL);
++	tmp = kzalloc(count, GFP_KERNEL);
+ 	if (tmp == NULL)
+ 		return -ENOMEM;
+ 
+@@ -150,7 +150,8 @@ static ssize_t i2cdev_read(struct file *file, char __user *buf, size_t count,
+ 
+ 	ret = i2c_master_recv(client, tmp, count);
+ 	if (ret >= 0)
+-		ret = copy_to_user(buf, tmp, count) ? -EFAULT : ret;
++		if (copy_to_user(buf, tmp, ret))
++			ret = -EFAULT;
+ 	kfree(tmp);
+ 	return ret;
+ }
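
The old code zeroed nothing and copied `count` bytes even when the I2C transfer returned fewer, exposing uninitialized kernel heap to userspace. The corrected shape as a self-contained sketch; `safe_i2c_read` is an illustrative name, not a kernel API:

	static ssize_t safe_i2c_read(struct i2c_client *client,
				     char __user *buf, size_t count)
	{
		ssize_t ret;
		char *tmp = kzalloc(count, GFP_KERNEL);	/* zeroed buffer */

		if (!tmp)
			return -ENOMEM;

		ret = i2c_master_recv(client, tmp, count);
		if (ret >= 0 && copy_to_user(buf, tmp, ret))
			ret = -EFAULT;	/* copy only the bytes received */
		kfree(tmp);
		return ret;
	}
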
+diff --git a/drivers/iio/adc/palmas_gpadc.c b/drivers/iio/adc/palmas_gpadc.c
+index 6ef09609be9fe..f9c8385c72d3d 100644
+--- a/drivers/iio/adc/palmas_gpadc.c
++++ b/drivers/iio/adc/palmas_gpadc.c
+@@ -664,8 +664,8 @@ static int palmas_adc_wakeup_configure(struct palmas_gpadc *adc)
+ 
+ 	adc_period = adc->auto_conversion_period;
+ 	for (i = 0; i < 16; ++i) {
+-		if (((1000 * (1 << i)) / 32) < adc_period)
+-			continue;
++		if (((1000 * (1 << i)) / 32) >= adc_period)
++			break;
+ 	}
+ 	if (i > 0)
+ 		i--;
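
With `continue` and an otherwise empty body, the old loop always ran to i = 16, so the chosen period ignored adc_period entirely; `break` stops at the first exponent whose period (1000 << i) / 32 reaches the request. Worked example: adc_period = 1000 exits the loop at i = 5 (since (1000 << 5) / 32 = 1000), and the decrement that follows programs i = 4.
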
+diff --git a/drivers/iio/adc/ti-ads7950.c b/drivers/iio/adc/ti-ads7950.c
+index 2383eacada87d..a2b83f0bd5260 100644
+--- a/drivers/iio/adc/ti-ads7950.c
++++ b/drivers/iio/adc/ti-ads7950.c
+@@ -568,7 +568,6 @@ static int ti_ads7950_probe(struct spi_device *spi)
+ 	st->ring_xfer.tx_buf = &st->tx_buf[0];
+ 	st->ring_xfer.rx_buf = &st->rx_buf[0];
+ 	/* len will be set later */
+-	st->ring_xfer.cs_change = true;
+ 
+ 	spi_message_add_tail(&st->ring_xfer, &st->ring_msg);
+ 
+diff --git a/drivers/iio/humidity/hdc100x.c b/drivers/iio/humidity/hdc100x.c
+index 2a957f19048ee..9e0fce917ce4c 100644
+--- a/drivers/iio/humidity/hdc100x.c
++++ b/drivers/iio/humidity/hdc100x.c
+@@ -25,6 +25,8 @@
+ #include <linux/iio/trigger_consumer.h>
+ #include <linux/iio/triggered_buffer.h>
+ 
++#include <linux/time.h>
++
+ #define HDC100X_REG_TEMP			0x00
+ #define HDC100X_REG_HUMIDITY			0x01
+ 
+@@ -166,7 +168,7 @@ static int hdc100x_get_measurement(struct hdc100x_data *data,
+ 				   struct iio_chan_spec const *chan)
+ {
+ 	struct i2c_client *client = data->client;
+-	int delay = data->adc_int_us[chan->address];
++	int delay = data->adc_int_us[chan->address] + 1*USEC_PER_MSEC;
+ 	int ret;
+ 	__be16 val;
+ 
+@@ -316,7 +318,7 @@ static irqreturn_t hdc100x_trigger_handler(int irq, void *p)
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct hdc100x_data *data = iio_priv(indio_dev);
+ 	struct i2c_client *client = data->client;
+-	int delay = data->adc_int_us[0] + data->adc_int_us[1];
++	int delay = data->adc_int_us[0] + data->adc_int_us[1] + 2*USEC_PER_MSEC;
+ 	int ret;
+ 
+ 	/* dual read starts at temp register */
+diff --git a/drivers/iio/imu/adis.c b/drivers/iio/imu/adis.c
+index 319b64b2fd887..f8b7837d8b8f6 100644
+--- a/drivers/iio/imu/adis.c
++++ b/drivers/iio/imu/adis.c
+@@ -415,12 +415,11 @@ int __adis_initial_startup(struct adis *adis)
+ 	int ret;
+ 
+ 	/* check if the device has rst pin low */
+-	gpio = devm_gpiod_get_optional(&adis->spi->dev, "reset", GPIOD_ASIS);
++	gpio = devm_gpiod_get_optional(&adis->spi->dev, "reset", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(gpio))
+ 		return PTR_ERR(gpio);
+ 
+ 	if (gpio) {
+-		gpiod_set_value_cansleep(gpio, 1);
+ 		msleep(10);
+ 		/* bring device out of reset */
+ 		gpiod_set_value_cansleep(gpio, 0);
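
Requesting the line with GPIOD_OUT_HIGH makes gpiolib drive it asserted atomically at request time, closing the window in which an ASIS request left the pin in an unknown state before the explicit set. The resulting reset sequence, sketched with an assumed start-up delay:

	gpio = devm_gpiod_get_optional(&spi->dev, "reset", GPIOD_OUT_HIGH);
	if (IS_ERR(gpio))
		return PTR_ERR(gpio);

	if (gpio) {
		msleep(10);				/* hold reset asserted */
		gpiod_set_value_cansleep(gpio, 0);	/* release reset */
		msleep(boot_delay_ms);			/* assumed boot time */
	}
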
+diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
+index 9ce01f7296739..e14a14b634a5f 100644
+--- a/drivers/infiniband/hw/mlx5/cq.c
++++ b/drivers/infiniband/hw/mlx5/cq.c
+@@ -941,7 +941,6 @@ int mlx5_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ 	u32 *cqb = NULL;
+ 	void *cqc;
+ 	int cqe_size;
+-	unsigned int irqn;
+ 	int eqn;
+ 	int err;
+ 
+@@ -980,7 +979,7 @@ int mlx5_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ 		INIT_WORK(&cq->notify_work, notify_soft_wc_handler);
+ 	}
+ 
+-	err = mlx5_vector2eqn(dev->mdev, vector, &eqn, &irqn);
++	err = mlx5_vector2eqn(dev->mdev, vector, &eqn);
+ 	if (err)
+ 		goto err_cqb;
+ 
+@@ -1003,7 +1002,6 @@ int mlx5_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ 		goto err_cqb;
+ 
+ 	mlx5_ib_dbg(dev, "cqn 0x%x\n", cq->mcq.cqn);
+-	cq->mcq.irqn = irqn;
+ 	if (udata)
+ 		cq->mcq.tasklet_ctx.comp = mlx5_ib_cq_comp;
+ 	else
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index eb9b0a2707f80..c869b2a91a289 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -975,7 +975,6 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_EQN)(
+ 	struct mlx5_ib_dev *dev;
+ 	int user_vector;
+ 	int dev_eqn;
+-	unsigned int irqn;
+ 	int err;
+ 
+ 	if (uverbs_copy_from(&user_vector, attrs,
+@@ -987,7 +986,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_DEVX_QUERY_EQN)(
+ 		return PTR_ERR(c);
+ 	dev = to_mdev(c->ibucontext.device);
+ 
+-	err = mlx5_vector2eqn(dev->mdev, user_vector, &dev_eqn, &irqn);
++	err = mlx5_vector2eqn(dev->mdev, user_vector, &dev_eqn);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index edfad93e7b686..22e26458a86e7 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -71,12 +71,18 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ 		family = AF_INET6;
+ 
+ 	if (bareudp->ethertype == htons(ETH_P_IP)) {
+-		struct iphdr *iphdr;
++		__u8 ipversion;
+ 
+-		iphdr = (struct iphdr *)(skb->data + BAREUDP_BASE_HLEN);
+-		if (iphdr->version == 4) {
+-			proto = bareudp->ethertype;
+-		} else if (bareudp->multi_proto_mode && (iphdr->version == 6)) {
++		if (skb_copy_bits(skb, BAREUDP_BASE_HLEN, &ipversion,
++				  sizeof(ipversion))) {
++			bareudp->dev->stats.rx_dropped++;
++			goto drop;
++		}
++		ipversion >>= 4;
++
++		if (ipversion == 4) {
++			proto = htons(ETH_P_IP);
++		} else if (ipversion == 6 && bareudp->multi_proto_mode) {
+ 			proto = htons(ETH_P_IPV6);
+ 		} else {
+ 			bareudp->dev->stats.rx_dropped++;
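
skb->data is only guaranteed to cover the linear head of the buffer, so dereferencing an iphdr at a fixed offset can read past it on non-linear skbs; skb_copy_bits() copies through fragments and reports short packets. Only the top nibble is needed, as in this sketch (`off` marks the end of the encapsulation header):

	__u8 ipversion;

	if (skb_copy_bits(skb, off, &ipversion, sizeof(ipversion)))
		goto drop;		/* packet too short */
	ipversion >>= 4;		/* version is the high nibble */

	if (ipversion == 4)
		proto = htons(ETH_P_IP);
	else if (ipversion == 6)
		proto = htons(ETH_P_IPV6);
	else
		goto drop;
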
+diff --git a/drivers/net/dsa/hirschmann/hellcreek.c b/drivers/net/dsa/hirschmann/hellcreek.c
+index 4d78219da2530..50109218baadd 100644
+--- a/drivers/net/dsa/hirschmann/hellcreek.c
++++ b/drivers/net/dsa/hirschmann/hellcreek.c
+@@ -912,6 +912,7 @@ static int hellcreek_fdb_dump(struct dsa_switch *ds, int port,
+ {
+ 	struct hellcreek *hellcreek = ds->priv;
+ 	u16 entries;
++	int ret = 0;
+ 	size_t i;
+ 
+ 	mutex_lock(&hellcreek->reg_lock);
+@@ -944,12 +945,14 @@ static int hellcreek_fdb_dump(struct dsa_switch *ds, int port,
+ 		if (!(entry.portmask & BIT(port)))
+ 			continue;
+ 
+-		cb(entry.mac, 0, entry.is_static, data);
++		ret = cb(entry.mac, 0, entry.is_static, data);
++		if (ret)
++			break;
+ 	}
+ 
+ 	mutex_unlock(&hellcreek->reg_lock);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int hellcreek_vlan_filtering(struct dsa_switch *ds, int port,
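
A dsa_fdb_dump_cb_t can fail (typically when the netlink dump skb fills up), so .port_fdb_dump implementations must stop iterating and propagate that error rather than report success over a truncated dump. The same conversion is applied to lan9303, gswip and sja1105 below. The shared shape, with a made-up `for_each_fdb_entry` iterator standing in for each driver's table walk:

	int ret = 0;

	for_each_fdb_entry(dev, entry) {	/* illustrative iterator */
		if (!(entry->portmask & BIT(port)))
			continue;
		ret = cb(entry->mac, entry->vid, entry->is_static, data);
		if (ret)
			break;			/* e.g. dump buffer full */
	}
	return ret;
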
+diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
+index 3443740254261..d7ce281570b54 100644
+--- a/drivers/net/dsa/lan9303-core.c
++++ b/drivers/net/dsa/lan9303-core.c
+@@ -557,12 +557,12 @@ static int lan9303_alr_make_entry_raw(struct lan9303 *chip, u32 dat0, u32 dat1)
+ 	return 0;
+ }
+ 
+-typedef void alr_loop_cb_t(struct lan9303 *chip, u32 dat0, u32 dat1,
+-			   int portmap, void *ctx);
++typedef int alr_loop_cb_t(struct lan9303 *chip, u32 dat0, u32 dat1,
++			  int portmap, void *ctx);
+ 
+-static void lan9303_alr_loop(struct lan9303 *chip, alr_loop_cb_t *cb, void *ctx)
++static int lan9303_alr_loop(struct lan9303 *chip, alr_loop_cb_t *cb, void *ctx)
+ {
+-	int i;
++	int ret = 0, i;
+ 
+ 	mutex_lock(&chip->alr_mutex);
+ 	lan9303_write_switch_reg(chip, LAN9303_SWE_ALR_CMD,
+@@ -582,13 +582,17 @@ static void lan9303_alr_loop(struct lan9303 *chip, alr_loop_cb_t *cb, void *ctx)
+ 						LAN9303_ALR_DAT1_PORT_BITOFFS;
+ 		portmap = alrport_2_portmap[alrport];
+ 
+-		cb(chip, dat0, dat1, portmap, ctx);
++		ret = cb(chip, dat0, dat1, portmap, ctx);
++		if (ret)
++			break;
+ 
+ 		lan9303_write_switch_reg(chip, LAN9303_SWE_ALR_CMD,
+ 					 LAN9303_ALR_CMD_GET_NEXT);
+ 		lan9303_write_switch_reg(chip, LAN9303_SWE_ALR_CMD, 0);
+ 	}
+ 	mutex_unlock(&chip->alr_mutex);
++
++	return ret;
+ }
+ 
+ static void alr_reg_to_mac(u32 dat0, u32 dat1, u8 mac[6])
+@@ -606,18 +610,20 @@ struct del_port_learned_ctx {
+ };
+ 
+ /* Clear learned (non-static) entry on given port */
+-static void alr_loop_cb_del_port_learned(struct lan9303 *chip, u32 dat0,
+-					 u32 dat1, int portmap, void *ctx)
++static int alr_loop_cb_del_port_learned(struct lan9303 *chip, u32 dat0,
++					u32 dat1, int portmap, void *ctx)
+ {
+ 	struct del_port_learned_ctx *del_ctx = ctx;
+ 	int port = del_ctx->port;
+ 
+ 	if (((BIT(port) & portmap) == 0) || (dat1 & LAN9303_ALR_DAT1_STATIC))
+-		return;
++		return 0;
+ 
+ 	/* learned entries have only one port, so we can just delete */
+ 	dat1 &= ~LAN9303_ALR_DAT1_VALID; /* delete entry */
+ 	lan9303_alr_make_entry_raw(chip, dat0, dat1);
++
++	return 0;
+ }
+ 
+ struct port_fdb_dump_ctx {
+@@ -626,19 +632,19 @@ struct port_fdb_dump_ctx {
+ 	dsa_fdb_dump_cb_t *cb;
+ };
+ 
+-static void alr_loop_cb_fdb_port_dump(struct lan9303 *chip, u32 dat0,
+-				      u32 dat1, int portmap, void *ctx)
++static int alr_loop_cb_fdb_port_dump(struct lan9303 *chip, u32 dat0,
++				     u32 dat1, int portmap, void *ctx)
+ {
+ 	struct port_fdb_dump_ctx *dump_ctx = ctx;
+ 	u8 mac[ETH_ALEN];
+ 	bool is_static;
+ 
+ 	if ((BIT(dump_ctx->port) & portmap) == 0)
+-		return;
++		return 0;
+ 
+ 	alr_reg_to_mac(dat0, dat1, mac);
+ 	is_static = !!(dat1 & LAN9303_ALR_DAT1_STATIC);
+-	dump_ctx->cb(mac, 0, is_static, dump_ctx->data);
++	return dump_ctx->cb(mac, 0, is_static, dump_ctx->data);
+ }
+ 
+ /* Set a static ALR entry. Delete entry if port_map is zero */
+@@ -1210,9 +1216,7 @@ static int lan9303_port_fdb_dump(struct dsa_switch *ds, int port,
+ 	};
+ 
+ 	dev_dbg(chip->dev, "%s(%d)\n", __func__, port);
+-	lan9303_alr_loop(chip, alr_loop_cb_fdb_port_dump, &dump_ctx);
+-
+-	return 0;
++	return lan9303_alr_loop(chip, alr_loop_cb_fdb_port_dump, &dump_ctx);
+ }
+ 
+ static int lan9303_port_mdb_prepare(struct dsa_switch *ds, int port,
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 314ae78bbdd63..e78026ef6d8cc 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1404,11 +1404,17 @@ static int gswip_port_fdb_dump(struct dsa_switch *ds, int port,
+ 		addr[1] = mac_bridge.key[2] & 0xff;
+ 		addr[0] = (mac_bridge.key[2] >> 8) & 0xff;
+ 		if (mac_bridge.val[1] & GSWIP_TABLE_MAC_BRIDGE_STATIC) {
+-			if (mac_bridge.val[0] & BIT(port))
+-				cb(addr, 0, true, data);
++			if (mac_bridge.val[0] & BIT(port)) {
++				err = cb(addr, 0, true, data);
++				if (err)
++					return err;
++			}
+ 		} else {
+-			if (((mac_bridge.val[0] & GENMASK(7, 4)) >> 4) == port)
+-				cb(addr, 0, false, data);
++			if (((mac_bridge.val[0] & GENMASK(7, 4)) >> 4) == port) {
++				err = cb(addr, 0, false, data);
++				if (err)
++					return err;
++			}
+ 		}
+ 	}
+ 	return 0;
+diff --git a/drivers/net/dsa/microchip/ksz8795.c b/drivers/net/dsa/microchip/ksz8795.c
+index ad509a57a9457..8eb9a45c98cfb 100644
+--- a/drivers/net/dsa/microchip/ksz8795.c
++++ b/drivers/net/dsa/microchip/ksz8795.c
+@@ -684,8 +684,8 @@ static void ksz8_r_vlan_entries(struct ksz_device *dev, u16 addr)
+ 	shifts = ksz8->shifts;
+ 
+ 	ksz8_r_table(dev, TABLE_VLAN, addr, &data);
+-	addr *= dev->phy_port_cnt;
+-	for (i = 0; i < dev->phy_port_cnt; i++) {
++	addr *= 4;
++	for (i = 0; i < 4; i++) {
+ 		dev->vlan_cache[addr + i].table[0] = (u16)data;
+ 		data >>= shifts[VLAN_TABLE];
+ 	}
+@@ -699,7 +699,7 @@ static void ksz8_r_vlan_table(struct ksz_device *dev, u16 vid, u16 *vlan)
+ 	u64 buf;
+ 
+ 	data = (u16 *)&buf;
+-	addr = vid / dev->phy_port_cnt;
++	addr = vid / 4;
+ 	index = vid & 3;
+ 	ksz8_r_table(dev, TABLE_VLAN, addr, &buf);
+ 	*vlan = data[index];
+@@ -713,7 +713,7 @@ static void ksz8_w_vlan_table(struct ksz_device *dev, u16 vid, u16 vlan)
+ 	u64 buf;
+ 
+ 	data = (u16 *)&buf;
+-	addr = vid / dev->phy_port_cnt;
++	addr = vid / 4;
+ 	index = vid & 3;
+ 	ksz8_r_table(dev, TABLE_VLAN, addr, &buf);
+ 	data[index] = vlan;
+@@ -1078,24 +1078,67 @@ static int ksz8_port_vlan_filtering(struct dsa_switch *ds, int port, bool flag,
+ 	if (ksz_is_ksz88x3(dev))
+ 		return -ENOTSUPP;
+ 
++	/* Discard packets with VID not enabled on the switch */
+ 	ksz_cfg(dev, S_MIRROR_CTRL, SW_VLAN_ENABLE, flag);
+ 
++	/* Discard packets with VID not enabled on the ingress port */
++	for (port = 0; port < dev->phy_port_cnt; ++port)
++		ksz_port_cfg(dev, port, REG_PORT_CTRL_2, PORT_INGRESS_FILTER,
++			     flag);
++
+ 	return 0;
+ }
+ 
++static void ksz8_port_enable_pvid(struct ksz_device *dev, int port, bool state)
++{
++	if (ksz_is_ksz88x3(dev)) {
++		ksz_cfg(dev, REG_SW_INSERT_SRC_PVID,
++			0x03 << (4 - 2 * port), state);
++	} else {
++		ksz_pwrite8(dev, port, REG_PORT_CTRL_12, state ? 0x0f : 0x00);
++	}
++}
++
+ static int ksz8_port_vlan_add(struct dsa_switch *ds, int port,
+ 			      const struct switchdev_obj_port_vlan *vlan,
+ 			      struct netlink_ext_ack *extack)
+ {
+ 	bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
+ 	struct ksz_device *dev = ds->priv;
++	struct ksz_port *p = &dev->ports[port];
+ 	u16 data, new_pvid = 0;
+ 	u8 fid, member, valid;
+ 
+ 	if (ksz_is_ksz88x3(dev))
+ 		return -ENOTSUPP;
+ 
+-	ksz_port_cfg(dev, port, P_TAG_CTRL, PORT_REMOVE_TAG, untagged);
++	/* If a VLAN is added with untagged flag different from the
++	 * port's Remove Tag flag, we need to change the latter.
++	 * Ignore VID 0, which is always untagged.
++	 * Ignore CPU port, which will always be tagged.
++	 */
++	if (untagged != p->remove_tag && vlan->vid != 0 &&
++	    port != dev->cpu_port) {
++		unsigned int vid;
++
++		/* Reject attempts to add a VLAN that requires the
++		 * Remove Tag flag to be changed, unless there are no
++		 * other VLANs currently configured.
++		 */
++		for (vid = 1; vid < dev->num_vlans; ++vid) {
++			/* Skip the VID we are going to add or reconfigure */
++			if (vid == vlan->vid)
++				continue;
++
++			ksz8_from_vlan(dev, dev->vlan_cache[vid].table[0],
++				       &fid, &member, &valid);
++			if (valid && (member & BIT(port)))
++				return -EINVAL;
++		}
++
++		ksz_port_cfg(dev, port, P_TAG_CTRL, PORT_REMOVE_TAG, untagged);
++		p->remove_tag = untagged;
++	}
+ 
+ 	ksz8_r_vlan_table(dev, vlan->vid, &data);
+ 	ksz8_from_vlan(dev, data, &fid, &member, &valid);
+@@ -1119,9 +1162,11 @@ static int ksz8_port_vlan_add(struct dsa_switch *ds, int port,
+ 		u16 vid;
+ 
+ 		ksz_pread16(dev, port, REG_PORT_CTRL_VID, &vid);
+-		vid &= 0xfff;
++		vid &= ~VLAN_VID_MASK;
+ 		vid |= new_pvid;
+ 		ksz_pwrite16(dev, port, REG_PORT_CTRL_VID, vid);
++
++		ksz8_port_enable_pvid(dev, port, true);
+ 	}
+ 
+ 	return 0;
+@@ -1130,9 +1175,8 @@ static int ksz8_port_vlan_add(struct dsa_switch *ds, int port,
+ static int ksz8_port_vlan_del(struct dsa_switch *ds, int port,
+ 			      const struct switchdev_obj_port_vlan *vlan)
+ {
+-	bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
+ 	struct ksz_device *dev = ds->priv;
+-	u16 data, pvid, new_pvid = 0;
++	u16 data, pvid;
+ 	u8 fid, member, valid;
+ 
+ 	if (ksz_is_ksz88x3(dev))
+@@ -1141,8 +1185,6 @@ static int ksz8_port_vlan_del(struct dsa_switch *ds, int port,
+ 	ksz_pread16(dev, port, REG_PORT_CTRL_VID, &pvid);
+ 	pvid = pvid & 0xFFF;
+ 
+-	ksz_port_cfg(dev, port, P_TAG_CTRL, PORT_REMOVE_TAG, untagged);
+-
+ 	ksz8_r_vlan_table(dev, vlan->vid, &data);
+ 	ksz8_from_vlan(dev, data, &fid, &member, &valid);
+ 
+@@ -1154,14 +1196,11 @@ static int ksz8_port_vlan_del(struct dsa_switch *ds, int port,
+ 		valid = 0;
+ 	}
+ 
+-	if (pvid == vlan->vid)
+-		new_pvid = 1;
+-
+ 	ksz8_to_vlan(dev, fid, member, valid, &data);
+ 	ksz8_w_vlan_table(dev, vlan->vid, data);
+ 
+-	if (new_pvid != pvid)
+-		ksz_pwrite16(dev, port, REG_PORT_CTRL_VID, pvid);
++	if (pvid == vlan->vid)
++		ksz8_port_enable_pvid(dev, port, false);
+ 
+ 	return 0;
+ }
+@@ -1394,6 +1433,9 @@ static int ksz8_setup(struct dsa_switch *ds)
+ 
+ 	ksz_cfg(dev, S_MIRROR_CTRL, SW_MIRROR_RX_TX, false);
+ 
++	if (!ksz_is_ksz88x3(dev))
++		ksz_cfg(dev, REG_SW_CTRL_19, SW_INS_TAG_ENABLE, true);
++
+ 	/* set broadcast storm protection 10% rate */
+ 	regmap_update_bits(dev->regmap[1], S_REPLACE_VID_CTRL,
+ 			   BROADCAST_STORM_RATE,
+@@ -1621,6 +1663,16 @@ static int ksz8_switch_init(struct ksz_device *dev)
+ 	/* set the real number of ports */
+ 	dev->ds->num_ports = dev->port_cnt;
+ 
++	/* We rely on software untagging on the CPU port, so that we
++	 * can support both tagged and untagged VLANs
++	 */
++	dev->ds->untag_bridge_pvid = true;
++
++	/* VLAN filtering is partly controlled by the global VLAN
++	 * Enable flag
++	 */
++	dev->ds->vlan_filtering_is_global = true;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/dsa/microchip/ksz8795_reg.h b/drivers/net/dsa/microchip/ksz8795_reg.h
+index c2e52c40a54c5..383ba7a90f9cb 100644
+--- a/drivers/net/dsa/microchip/ksz8795_reg.h
++++ b/drivers/net/dsa/microchip/ksz8795_reg.h
+@@ -631,6 +631,10 @@
+ #define REG_PORT_4_OUT_RATE_3		0xEE
+ #define REG_PORT_5_OUT_RATE_3		0xFE
+ 
++/* 88x3 specific */
++
++#define REG_SW_INSERT_SRC_PVID		0xC2
++
+ /* PME */
+ 
+ #define SW_PME_OUTPUT_ENABLE		BIT(1)
+diff --git a/drivers/net/dsa/microchip/ksz_common.h b/drivers/net/dsa/microchip/ksz_common.h
+index 2e6bfd333f504..1597c63988b4e 100644
+--- a/drivers/net/dsa/microchip/ksz_common.h
++++ b/drivers/net/dsa/microchip/ksz_common.h
+@@ -27,6 +27,7 @@ struct ksz_port_mib {
+ struct ksz_port {
+ 	u16 member;
+ 	u16 vid_member;
++	bool remove_tag;		/* Remove Tag flag set, for ksz8795 only */
+ 	int stp_state;
+ 	struct phy_device phydev;
+ 
+@@ -205,12 +206,8 @@ static inline int ksz_read64(struct ksz_device *dev, u32 reg, u64 *val)
+ 	int ret;
+ 
+ 	ret = regmap_bulk_read(dev->regmap[2], reg, value, 2);
+-	if (!ret) {
+-		/* Ick! ToDo: Add 64bit R/W to regmap on 32bit systems */
+-		value[0] = swab32(value[0]);
+-		value[1] = swab32(value[1]);
+-		*val = swab64((u64)*value);
+-	}
++	if (!ret)
++		*val = (u64)value[0] << 32 | value[1];
+ 
+ 	return ret;
+ }
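
regmap_bulk_read() already returns the two words in CPU byte order, high word first, so a shift-or replaces the old triple-swab dance (`map` and `reg` stand for the regmap handle and register offset above). Quick check with assumed values: value[0] = 0x01020304 and value[1] = 0x05060708 combine to 0x0102030405060708:

	u32 value[2];
	u64 val;
	int ret = regmap_bulk_read(map, reg, value, 2);

	if (!ret)
		val = (u64)value[0] << 32 | value[1];	/* high | low */
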
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 9b90f3d3a8f50..167c599a81a55 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -46,6 +46,7 @@ static const struct mt7530_mib_desc mt7530_mib[] = {
+ 	MIB_DESC(2, 0x48, "TxBytes"),
+ 	MIB_DESC(1, 0x60, "RxDrop"),
+ 	MIB_DESC(1, 0x64, "RxFiltering"),
++	MIB_DESC(1, 0x68, "RxUnicast"),
+ 	MIB_DESC(1, 0x6c, "RxMulticast"),
+ 	MIB_DESC(1, 0x70, "RxBroadcast"),
+ 	MIB_DESC(1, 0x74, "RxAlignErr"),
+diff --git a/drivers/net/dsa/qca/ar9331.c b/drivers/net/dsa/qca/ar9331.c
+index 6686192e1883e..563d8a2790306 100644
+--- a/drivers/net/dsa/qca/ar9331.c
++++ b/drivers/net/dsa/qca/ar9331.c
+@@ -101,6 +101,23 @@
+ 	 AR9331_SW_PORT_STATUS_RX_FLOW_EN | AR9331_SW_PORT_STATUS_TX_FLOW_EN | \
+ 	 AR9331_SW_PORT_STATUS_SPEED_M)
+ 
++#define AR9331_SW_REG_PORT_CTRL(_port)			(0x104 + (_port) * 0x100)
++#define AR9331_SW_PORT_CTRL_HEAD_EN			BIT(11)
++#define AR9331_SW_PORT_CTRL_PORT_STATE			GENMASK(2, 0)
++#define AR9331_SW_PORT_CTRL_PORT_STATE_DISABLED		0
++#define AR9331_SW_PORT_CTRL_PORT_STATE_BLOCKING		1
++#define AR9331_SW_PORT_CTRL_PORT_STATE_LISTENING	2
++#define AR9331_SW_PORT_CTRL_PORT_STATE_LEARNING		3
++#define AR9331_SW_PORT_CTRL_PORT_STATE_FORWARD		4
++
++#define AR9331_SW_REG_PORT_VLAN(_port)			(0x108 + (_port) * 0x100)
++#define AR9331_SW_PORT_VLAN_8021Q_MODE			GENMASK(31, 30)
++#define AR9331_SW_8021Q_MODE_SECURE			3
++#define AR9331_SW_8021Q_MODE_CHECK			2
++#define AR9331_SW_8021Q_MODE_FALLBACK			1
++#define AR9331_SW_8021Q_MODE_NONE			0
++#define AR9331_SW_PORT_VLAN_PORT_VID_MEMBER		GENMASK(25, 16)
++
+ /* MIB registers */
+ #define AR9331_MIB_COUNTER(x)			(0x20000 + ((x) * 0x100))
+ 
+@@ -371,12 +388,60 @@ static int ar9331_sw_mbus_init(struct ar9331_sw_priv *priv)
+ 	return 0;
+ }
+ 
+-static int ar9331_sw_setup(struct dsa_switch *ds)
++static int ar9331_sw_setup_port(struct dsa_switch *ds, int port)
+ {
+ 	struct ar9331_sw_priv *priv = (struct ar9331_sw_priv *)ds->priv;
+ 	struct regmap *regmap = priv->regmap;
++	u32 port_mask, port_ctrl, val;
+ 	int ret;
+ 
++	/* Generate default port settings */
++	port_ctrl = FIELD_PREP(AR9331_SW_PORT_CTRL_PORT_STATE,
++			       AR9331_SW_PORT_CTRL_PORT_STATE_FORWARD);
++
++	if (dsa_is_cpu_port(ds, port)) {
++		/* CPU port should be allowed to communicate with all user
++		 * ports.
++		 */
++		port_mask = dsa_user_ports(ds);
++		/* Enable Atheros header on the CPU port. This will allow us
++		 * to communicate with each port separately.
++		 */
++		port_ctrl |= AR9331_SW_PORT_CTRL_HEAD_EN;
++	} else if (dsa_is_user_port(ds, port)) {
++		/* User ports should communicate only with the CPU port.
++		 */
++		port_mask = BIT(dsa_upstream_port(ds, port));
++	} else {
++		/* Other ports do not need to communicate at all */
++		port_mask = 0;
++	}
++
++	val = FIELD_PREP(AR9331_SW_PORT_VLAN_8021Q_MODE,
++			 AR9331_SW_8021Q_MODE_NONE) |
++		FIELD_PREP(AR9331_SW_PORT_VLAN_PORT_VID_MEMBER, port_mask);
++
++	ret = regmap_write(regmap, AR9331_SW_REG_PORT_VLAN(port), val);
++	if (ret)
++		goto error;
++
++	ret = regmap_write(regmap, AR9331_SW_REG_PORT_CTRL(port), port_ctrl);
++	if (ret)
++		goto error;
++
++	return 0;
++error:
++	dev_err(priv->dev, "%s: error: %i\n", __func__, ret);
++
++	return ret;
++}
++
++static int ar9331_sw_setup(struct dsa_switch *ds)
++{
++	struct ar9331_sw_priv *priv = (struct ar9331_sw_priv *)ds->priv;
++	struct regmap *regmap = priv->regmap;
++	int ret, i;
++
+ 	ret = ar9331_sw_reset(priv);
+ 	if (ret)
+ 		return ret;
+@@ -402,6 +467,12 @@ static int ar9331_sw_setup(struct dsa_switch *ds)
+ 	if (ret)
+ 		goto error;
+ 
++	for (i = 0; i < ds->num_ports; i++) {
++		ret = ar9331_sw_setup_port(ds, i);
++		if (ret)
++			goto error;
++	}
++
+ 	ds->configure_vlan_while_not_filtering = false;
+ 
+ 	return 0;
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index 4b05a2424623c..0aaf599119cd5 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -1625,7 +1625,9 @@ static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
+ 		/* We need to hide the dsa_8021q VLANs from the user. */
+ 		if (priv->vlan_state == SJA1105_VLAN_UNAWARE)
+ 			l2_lookup.vlanid = 0;
+-		cb(macaddr, l2_lookup.vlanid, l2_lookup.lockeds, data);
++		rc = cb(macaddr, l2_lookup.vlanid, l2_lookup.lockeds, data);
++		if (rc)
++			return rc;
+ 	}
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 44bafedd09f28..244ec74ceca76 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -1506,11 +1506,6 @@ static int iavf_reinit_interrupt_scheme(struct iavf_adapter *adapter)
+ 	set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+ 
+ 	iavf_map_rings_to_vectors(adapter);
+-
+-	if (RSS_AQ(adapter))
+-		adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_RSS;
+-	else
+-		err = iavf_init_rss(adapter);
+ err:
+ 	return err;
+ }
+@@ -2200,6 +2195,14 @@ continue_reset:
+ 			goto reset_err;
+ 	}
+ 
++	if (RSS_AQ(adapter)) {
++		adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_RSS;
++	} else {
++		err = iavf_init_rss(adapter);
++		if (err)
++			goto reset_err;
++	}
++
+ 	adapter->aq_required |= IAVF_FLAG_AQ_GET_CONFIG;
+ 	adapter->aq_required |= IAVF_FLAG_AQ_MAP_VECTORS;
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 2924c67567b8a..13ffa3f6a5216 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -226,6 +226,7 @@ enum ice_pf_state {
+ 	ICE_VFLR_EVENT_PENDING,
+ 	ICE_FLTR_OVERFLOW_PROMISC,
+ 	ICE_VF_DIS,
++	ICE_VF_DEINIT_IN_PROGRESS,
+ 	ICE_CFG_BUSY,
+ 	ICE_SERVICE_SCHED,
+ 	ICE_SERVICE_DIS,
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 0eb2307325d3b..a7f2f5c490e30 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -183,6 +183,14 @@ static int ice_add_mac_to_unsync_list(struct net_device *netdev, const u8 *addr)
+ 	struct ice_netdev_priv *np = netdev_priv(netdev);
+ 	struct ice_vsi *vsi = np->vsi;
+ 
++	/* Under some circumstances, we might receive a request to delete our
++	 * own device address from our uc list. Because we store the device
++	 * address in the VSI's MAC filter list, we need to ignore such
++	 * requests and not delete our device address from this list.
++	 */
++	if (ether_addr_equal(addr, netdev->dev_addr))
++		return 0;
++
+ 	if (ice_fltr_add_mac_to_list(vsi, &vsi->tmp_unsync_list, addr,
+ 				     ICE_FWD_TO_VSI))
+ 		return -EINVAL;
+@@ -4014,6 +4022,11 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
+ 	struct ice_hw *hw;
+ 	int i, err;
+ 
++	if (pdev->is_virtfn) {
++		dev_err(dev, "can't probe a virtual function\n");
++		return -EINVAL;
++	}
++
+ 	/* this driver uses devres, see
+ 	 * Documentation/driver-api/driver-model/devres.rst
+ 	 */
+@@ -4908,7 +4921,7 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 		return -EADDRNOTAVAIL;
+ 
+ 	if (ether_addr_equal(netdev->dev_addr, mac)) {
+-		netdev_warn(netdev, "already using mac %pM\n", mac);
++		netdev_dbg(netdev, "already using mac %pM\n", mac);
+ 		return 0;
+ 	}
+ 
+@@ -4919,6 +4932,7 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 		return -EBUSY;
+ 	}
+ 
++	netif_addr_lock_bh(netdev);
+ 	/* Clean up old MAC filter. Not an error if old filter doesn't exist */
+ 	status = ice_fltr_remove_mac(vsi, netdev->dev_addr, ICE_FWD_TO_VSI);
+ 	if (status && status != ICE_ERR_DOES_NOT_EXIST) {
+@@ -4928,30 +4942,28 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 
+ 	/* Add filter for new MAC. If filter exists, return success */
+ 	status = ice_fltr_add_mac(vsi, mac, ICE_FWD_TO_VSI);
+-	if (status == ICE_ERR_ALREADY_EXISTS) {
++	if (status == ICE_ERR_ALREADY_EXISTS)
+ 		/* Although this MAC filter is already present in hardware it's
+ 		 * possible in some cases (e.g. bonding) that dev_addr was
+ 		 * modified outside of the driver and needs to be restored back
+ 		 * to this value.
+ 		 */
+-		memcpy(netdev->dev_addr, mac, netdev->addr_len);
+ 		netdev_dbg(netdev, "filter for MAC %pM already exists\n", mac);
+-		return 0;
+-	}
+-
+-	/* error if the new filter addition failed */
+-	if (status)
++	else if (status)
++		/* error if the new filter addition failed */
+ 		err = -EADDRNOTAVAIL;
+ 
+ err_update_filters:
+ 	if (err) {
+ 		netdev_err(netdev, "can't set MAC %pM. filter update failed\n",
+ 			   mac);
++		netif_addr_unlock_bh(netdev);
+ 		return err;
+ 	}
+ 
+ 	/* change the netdev's MAC address */
+ 	memcpy(netdev->dev_addr, mac, netdev->addr_len);
++	netif_addr_unlock_bh(netdev);
+ 	netdev_dbg(vsi->netdev, "updated MAC address to %pM\n",
+ 		   netdev->dev_addr);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index 97a46c616aca7..671902d9fc353 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -615,6 +615,8 @@ void ice_free_vfs(struct ice_pf *pf)
+ 	struct ice_hw *hw = &pf->hw;
+ 	unsigned int tmp, i;
+ 
++	set_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state);
++
+ 	if (!pf->vf)
+ 		return;
+ 
+@@ -680,6 +682,7 @@ void ice_free_vfs(struct ice_pf *pf)
+ 				i);
+ 
+ 	clear_bit(ICE_VF_DIS, pf->state);
++	clear_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state);
+ 	clear_bit(ICE_FLAG_SRIOV_ENA, pf->flags);
+ }
+ 
+@@ -4292,6 +4295,10 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event)
+ 	struct device *dev;
+ 	int err = 0;
+ 
++	/* if de-init is underway, don't process messages from VF */
++	if (test_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state))
++		return;
++
+ 	dev = ice_pf_to_dev(pf);
+ 	if (ice_validate_vf_id(pf, vf_id)) {
+ 		err = -EINVAL;
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+index 4a61c90003b5e..722209a14f538 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+@@ -938,7 +938,7 @@ enum mvpp22_ptp_packet_format {
+ #define MVPP2_BM_COOKIE_POOL_OFFS	8
+ #define MVPP2_BM_COOKIE_CPU_OFFS	24
+ 
+-#define MVPP2_BM_SHORT_FRAME_SIZE	704	/* frame size 128 */
++#define MVPP2_BM_SHORT_FRAME_SIZE	736	/* frame size 128 */
+ #define MVPP2_BM_LONG_FRAME_SIZE	2240	/* frame size 1664 */
+ #define MVPP2_BM_JUMBO_FRAME_SIZE	10432	/* frame size 9856 */
+ /* BM short pool packet size
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cq.c b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+index df3e4938ecdd9..360e093874d4f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+@@ -134,6 +134,7 @@ int mlx5_core_create_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq,
+ 			      cq->cqn);
+ 
+ 	cq->uar = dev->priv.uar;
++	cq->irqn = eq->core.irqn;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index 01a1d02dcf15d..3f8a98093f8cb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -1019,12 +1019,19 @@ int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer)
+ 	MLX5_NB_INIT(&tracer->nb, fw_tracer_event, DEVICE_TRACER);
+ 	mlx5_eq_notifier_register(dev, &tracer->nb);
+ 
+-	mlx5_fw_tracer_start(tracer);
+-
++	err = mlx5_fw_tracer_start(tracer);
++	if (err) {
++		mlx5_core_warn(dev, "FWTracer: Failed to start tracer %d\n", err);
++		goto err_notifier_unregister;
++	}
+ 	return 0;
+ 
++err_notifier_unregister:
++	mlx5_eq_notifier_unregister(dev, &tracer->nb);
++	mlx5_core_destroy_mkey(dev, &tracer->buff.mkey);
+ err_dealloc_pd:
+ 	mlx5_core_dealloc_pd(dev, tracer->buff.pdn);
++	cancel_work_sync(&tracer->read_fw_strings_work);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index 172e0474f2e6e..3980a39050848 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -124,6 +124,11 @@ static int mlx5e_route_lookup_ipv4_get(struct mlx5e_priv *priv,
+ 	if (IS_ERR(rt))
+ 		return PTR_ERR(rt);
+ 
++	if (rt->rt_type != RTN_UNICAST) {
++		ret = -ENETUNREACH;
++		goto err_rt_release;
++	}
++
+ 	if (mlx5_lag_is_multipath(mdev) && rt->rt_gw_family != AF_INET) {
+ 		ret = -ENETUNREACH;
+ 		goto err_rt_release;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index d0d9acb172536..779a4abead01b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -1531,15 +1531,9 @@ static int mlx5e_alloc_cq_common(struct mlx5e_priv *priv,
+ {
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 	struct mlx5_core_cq *mcq = &cq->mcq;
+-	int eqn_not_used;
+-	unsigned int irqn;
+ 	int err;
+ 	u32 i;
+ 
+-	err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn_not_used, &irqn);
+-	if (err)
+-		return err;
+-
+ 	err = mlx5_cqwq_create(mdev, &param->wq, param->cqc, &cq->wq,
+ 			       &cq->wq_ctrl);
+ 	if (err)
+@@ -1553,7 +1547,6 @@ static int mlx5e_alloc_cq_common(struct mlx5e_priv *priv,
+ 	mcq->vector     = param->eq_ix;
+ 	mcq->comp       = mlx5e_completion_event;
+ 	mcq->event      = mlx5e_cq_error_event;
+-	mcq->irqn       = irqn;
+ 
+ 	for (i = 0; i < mlx5_cqwq_get_size(&cq->wq); i++) {
+ 		struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(&cq->wq, i);
+@@ -1601,11 +1594,10 @@ static int mlx5e_create_cq(struct mlx5e_cq *cq, struct mlx5e_cq_param *param)
+ 	void *in;
+ 	void *cqc;
+ 	int inlen;
+-	unsigned int irqn_not_used;
+ 	int eqn;
+ 	int err;
+ 
+-	err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn, &irqn_not_used);
++	err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn);
+ 	if (err)
+ 		return err;
+ 
+@@ -1887,30 +1879,30 @@ static int mlx5e_open_queues(struct mlx5e_channel *c,
+ 	if (err)
+ 		goto err_close_icosq;
+ 
++	err = mlx5e_open_rxq_rq(c, params, &cparam->rq);
++	if (err)
++		goto err_close_sqs;
++
+ 	if (c->xdp) {
+ 		err = mlx5e_open_xdpsq(c, params, &cparam->xdp_sq, NULL,
+ 				       &c->rq_xdpsq, false);
+ 		if (err)
+-			goto err_close_sqs;
++			goto err_close_rq;
+ 	}
+ 
+-	err = mlx5e_open_rxq_rq(c, params, &cparam->rq);
+-	if (err)
+-		goto err_close_xdp_sq;
+-
+ 	err = mlx5e_open_xdpsq(c, params, &cparam->xdp_sq, NULL, &c->xdpsq, true);
+ 	if (err)
+-		goto err_close_rq;
++		goto err_close_xdp_sq;
+ 
+ 	return 0;
+ 
+-err_close_rq:
+-	mlx5e_close_rq(&c->rq);
+-
+ err_close_xdp_sq:
+ 	if (c->xdp)
+ 		mlx5e_close_xdpsq(&c->rq_xdpsq);
+ 
++err_close_rq:
++	mlx5e_close_rq(&c->rq);
++
+ err_close_sqs:
+ 	mlx5e_close_sqs(c);
+ 
+@@ -1945,9 +1937,9 @@ err_close_async_icosq_cq:
+ static void mlx5e_close_queues(struct mlx5e_channel *c)
+ {
+ 	mlx5e_close_xdpsq(&c->xdpsq);
+-	mlx5e_close_rq(&c->rq);
+ 	if (c->xdp)
+ 		mlx5e_close_xdpsq(&c->rq_xdpsq);
++	mlx5e_close_rq(&c->rq);
+ 	mlx5e_close_sqs(c);
+ 	mlx5e_close_icosq(&c->icosq);
+ 	mlx5e_close_icosq(&c->async_icosq);
+@@ -1979,9 +1971,8 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+ 	struct mlx5e_channel *c;
+ 	unsigned int irq;
+ 	int err;
+-	int eqn;
+ 
+-	err = mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq);
++	err = mlx5_vector2irqn(priv->mdev, ix, &irq);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+index 9403334102675..0879551161d27 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+@@ -871,8 +871,8 @@ clean:
+ 	return err;
+ }
+ 
+-int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn,
+-		    unsigned int *irqn)
++static int vector2eqnirqn(struct mlx5_core_dev *dev, int vector, int *eqn,
++			  unsigned int *irqn)
+ {
+ 	struct mlx5_eq_table *table = dev->priv.eq_table;
+ 	struct mlx5_eq_comp *eq, *n;
+@@ -881,8 +881,10 @@ int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn,
+ 
+ 	list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) {
+ 		if (i++ == vector) {
+-			*eqn = eq->core.eqn;
+-			*irqn = eq->core.irqn;
++			if (irqn)
++				*irqn = eq->core.irqn;
++			if (eqn)
++				*eqn = eq->core.eqn;
+ 			err = 0;
+ 			break;
+ 		}
+@@ -890,8 +892,18 @@ int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn,
+ 
+ 	return err;
+ }
++
++int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn)
++{
++	return vector2eqnirqn(dev, vector, eqn, NULL);
++}
+ EXPORT_SYMBOL(mlx5_vector2eqn);
+ 
++int mlx5_vector2irqn(struct mlx5_core_dev *dev, int vector, unsigned int *irqn)
++{
++	return vector2eqnirqn(dev, vector, NULL, irqn);
++}
++
+ unsigned int mlx5_comp_vectors_count(struct mlx5_core_dev *dev)
+ {
+ 	return dev->priv.eq_table->num_comp_eqs;
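
The lookup is now written once with both out-parameters optional, NULL meaning "not requested", and the exported EQN variant plus the driver-internal IRQ variant become one-line wrappers. The pattern in isolation (`find_comp_eq` is a stand-in for the comp-EQ list walk above):

	static int vector2eqnirqn(struct mlx5_core_dev *dev, int vector,
				  int *eqn, unsigned int *irqn)
	{
		struct mlx5_eq_comp *eq = find_comp_eq(dev, vector);

		if (!eq)
			return -ENOENT;
		if (eqn)
			*eqn = eq->core.eqn;
		if (irqn)
			*irqn = eq->core.irqn;
		return 0;
	}

	int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn)
	{
		return vector2eqnirqn(dev, vector, eqn, NULL);
	}
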
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/sample.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/sample.c
+index 794012c5c4765..d3ad78aa9d450 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/sample.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/sample.c
+@@ -501,6 +501,7 @@ err_sampler:
+ err_offload_rule:
+ 	mlx5_esw_vporttbl_put(esw, &per_vport_tbl_attr);
+ err_default_tbl:
++	kfree(sample_flow);
+ 	return ERR_PTR(err);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index b66e12753f37f..d0e4daa55a4a1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -48,6 +48,7 @@
+ #include "lib/fs_chains.h"
+ #include "en_tc.h"
+ #include "en/mapping.h"
++#include "devlink.h"
+ 
+ #define mlx5_esw_for_each_rep(esw, i, rep) \
+ 	xa_for_each(&((esw)->offloads.vport_reps), i, rep)
+@@ -2984,12 +2985,19 @@ int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,
+ 	if (cur_mlx5_mode == mlx5_mode)
+ 		goto unlock;
+ 
+-	if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV)
++	if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV) {
++		if (mlx5_devlink_trap_get_num_active(esw->dev)) {
++			NL_SET_ERR_MSG_MOD(extack,
++					   "Can't change mode while devlink traps are active");
++			err = -EOPNOTSUPP;
++			goto unlock;
++		}
+ 		err = esw_offloads_start(esw, extack);
+-	else if (mode == DEVLINK_ESWITCH_MODE_LEGACY)
++	} else if (mode == DEVLINK_ESWITCH_MODE_LEGACY) {
+ 		err = esw_offloads_stop(esw, extack);
+-	else
++	} else {
+ 		err = -EINVAL;
++	}
+ 
+ unlock:
+ 	mlx5_esw_unlock(esw);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
+index bd66ab2af5b54..d5da4ab65766d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
+@@ -417,7 +417,6 @@ static int mlx5_fpga_conn_create_cq(struct mlx5_fpga_conn *conn, int cq_size)
+ 	struct mlx5_wq_param wqp;
+ 	struct mlx5_cqe64 *cqe;
+ 	int inlen, err, eqn;
+-	unsigned int irqn;
+ 	void *cqc, *in;
+ 	__be64 *pas;
+ 	u32 i;
+@@ -446,7 +445,7 @@ static int mlx5_fpga_conn_create_cq(struct mlx5_fpga_conn *conn, int cq_size)
+ 		goto err_cqwq;
+ 	}
+ 
+-	err = mlx5_vector2eqn(mdev, smp_processor_id(), &eqn, &irqn);
++	err = mlx5_vector2eqn(mdev, smp_processor_id(), &eqn);
+ 	if (err) {
+ 		kvfree(in);
+ 		goto err_cqwq;
+@@ -476,7 +475,6 @@ static int mlx5_fpga_conn_create_cq(struct mlx5_fpga_conn *conn, int cq_size)
+ 	*conn->cq.mcq.arm_db    = 0;
+ 	conn->cq.mcq.vector     = 0;
+ 	conn->cq.mcq.comp       = mlx5_fpga_conn_cq_complete;
+-	conn->cq.mcq.irqn       = irqn;
+ 	conn->cq.mcq.uar        = fdev->conn_res.uar;
+ 	tasklet_setup(&conn->cq.tasklet, mlx5_fpga_conn_cq_tasklet);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
+index f607a3858ef56..bd3ed86604839 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
+@@ -103,4 +103,6 @@ void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev);
+ struct cpu_rmap *mlx5_eq_table_get_rmap(struct mlx5_core_dev *dev);
+ #endif
+ 
++int mlx5_vector2irqn(struct mlx5_core_dev *dev, int vector, unsigned int *irqn);
++
+ #endif
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 0d0f63a27aba8..8c6d7f70e783a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1781,16 +1781,14 @@ static int __init init(void)
+ 	if (err)
+ 		goto err_sf;
+ 
+-#ifdef CONFIG_MLX5_CORE_EN
+ 	err = mlx5e_init();
+-	if (err) {
+-		pci_unregister_driver(&mlx5_core_driver);
+-		goto err_debug;
+-	}
+-#endif
++	if (err)
++		goto err_en;
+ 
+ 	return 0;
+ 
++err_en:
++	mlx5_sf_driver_unregister();
+ err_sf:
+ 	pci_unregister_driver(&mlx5_core_driver);
+ err_debug:
+@@ -1800,9 +1798,7 @@ err_debug:
+ 
+ static void __exit cleanup(void)
+ {
+-#ifdef CONFIG_MLX5_CORE_EN
+ 	mlx5e_cleanup();
+-#endif
+ 	mlx5_sf_driver_unregister();
+ 	pci_unregister_driver(&mlx5_core_driver);
+ 	mlx5_unregister_debugfs();
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+index a22b706eebd39..1824eb0b0e9a4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+@@ -223,8 +223,13 @@ int mlx5_firmware_flash(struct mlx5_core_dev *dev, const struct firmware *fw,
+ int mlx5_fw_version_query(struct mlx5_core_dev *dev,
+ 			  u32 *running_ver, u32 *stored_ver);
+ 
++#ifdef CONFIG_MLX5_CORE_EN
+ int mlx5e_init(void);
+ void mlx5e_cleanup(void);
++#else
++static inline int mlx5e_init(void) { return 0; }
++static inline void mlx5e_cleanup(void) {}
++#endif
+ 
+ static inline bool mlx5_sriov_is_enabled(struct mlx5_core_dev *dev)
+ {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+index 12cf323a59430..9df0e73d1c358 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+@@ -749,7 +749,6 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ 	struct mlx5_cqe64 *cqe;
+ 	struct mlx5dr_cq *cq;
+ 	int inlen, err, eqn;
+-	unsigned int irqn;
+ 	void *cqc, *in;
+ 	__be64 *pas;
+ 	int vector;
+@@ -782,7 +781,7 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ 		goto err_cqwq;
+ 
+ 	vector = raw_smp_processor_id() % mlx5_comp_vectors_count(mdev);
+-	err = mlx5_vector2eqn(mdev, vector, &eqn, &irqn);
++	err = mlx5_vector2eqn(mdev, vector, &eqn);
+ 	if (err) {
+ 		kvfree(in);
+ 		goto err_cqwq;
+@@ -818,7 +817,6 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ 	*cq->mcq.arm_db = cpu_to_be32(2 << 28);
+ 
+ 	cq->mcq.vector = 0;
+-	cq->mcq.irqn = irqn;
+ 	cq->mcq.uar = uar;
+ 
+ 	return cq;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v0.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v0.c
+index 0757a4e8540e2..42446e92aa38d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v0.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v0.c
+@@ -352,6 +352,7 @@ static void dr_ste_v0_set_rx_decap(u8 *hw_ste_p)
+ {
+ 	MLX5_SET(ste_rx_steering_mult, hw_ste_p, tunneling_action,
+ 		 DR_STE_TUNL_ACTION_DECAP);
++	MLX5_SET(ste_rx_steering_mult, hw_ste_p, fail_on_error, 1);
+ }
+ 
+ static void dr_ste_v0_set_rx_pop_vlan(u8 *hw_ste_p)
+@@ -365,6 +366,7 @@ static void dr_ste_v0_set_rx_decap_l3(u8 *hw_ste_p, bool vlan)
+ 	MLX5_SET(ste_rx_steering_mult, hw_ste_p, tunneling_action,
+ 		 DR_STE_TUNL_ACTION_L3_DECAP);
+ 	MLX5_SET(ste_modify_packet, hw_ste_p, action_description, vlan ? 1 : 0);
++	MLX5_SET(ste_rx_steering_mult, hw_ste_p, fail_on_error, 1);
+ }
+ 
+ static void dr_ste_v0_set_rewrite_actions(u8 *hw_ste_p, u16 num_of_actions,
+diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
+index 69b7a4e0220aa..5c26ab1d1fc73 100644
+--- a/drivers/net/ethernet/ti/cpsw_new.c
++++ b/drivers/net/ethernet/ti/cpsw_new.c
+@@ -920,7 +920,7 @@ static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff *skb,
+ 	struct cpdma_chan *txch;
+ 	int ret, q_idx;
+ 
+-	if (skb_padto(skb, CPSW_MIN_PACKET_SIZE)) {
++	if (skb_put_padto(skb, READ_ONCE(priv->tx_packet_min))) {
+ 		cpsw_err(priv, tx_err, "packet pad failed\n");
+ 		ndev->stats.tx_dropped++;
+ 		return NET_XMIT_DROP;
+@@ -1100,7 +1100,7 @@ static int cpsw_ndo_xdp_xmit(struct net_device *ndev, int n,
+ 
+ 	for (i = 0; i < n; i++) {
+ 		xdpf = frames[i];
+-		if (xdpf->len < CPSW_MIN_PACKET_SIZE)
++		if (xdpf->len < READ_ONCE(priv->tx_packet_min))
+ 			break;
+ 
+ 		if (cpsw_xdp_tx_frame(priv, xdpf, NULL, priv->emac_port))
+@@ -1389,6 +1389,7 @@ static int cpsw_create_ports(struct cpsw_common *cpsw)
+ 		priv->dev  = dev;
+ 		priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG);
+ 		priv->emac_port = i + 1;
++		priv->tx_packet_min = CPSW_MIN_PACKET_SIZE;
+ 
+ 		if (is_valid_ether_addr(slave_data->mac_addr)) {
+ 			ether_addr_copy(priv->mac_addr, slave_data->mac_addr);
+@@ -1686,6 +1687,7 @@ static int cpsw_dl_switch_mode_set(struct devlink *dl, u32 id,
+ 
+ 			priv = netdev_priv(sl_ndev);
+ 			slave->port_vlan = vlan;
++			WRITE_ONCE(priv->tx_packet_min, CPSW_MIN_PACKET_SIZE_VLAN);
+ 			if (netif_running(sl_ndev))
+ 				cpsw_port_add_switch_def_ale_entries(priv,
+ 								     slave);
+@@ -1714,6 +1716,7 @@ static int cpsw_dl_switch_mode_set(struct devlink *dl, u32 id,
+ 
+ 			priv = netdev_priv(slave->ndev);
+ 			slave->port_vlan = slave->data->dual_emac_res_vlan;
++			WRITE_ONCE(priv->tx_packet_min, CPSW_MIN_PACKET_SIZE);
+ 			cpsw_port_add_dual_emac_def_ale_entries(priv, slave);
+ 		}
+ 
+diff --git a/drivers/net/ethernet/ti/cpsw_priv.h b/drivers/net/ethernet/ti/cpsw_priv.h
+index a323bea54faa2..2951fb7b9dae7 100644
+--- a/drivers/net/ethernet/ti/cpsw_priv.h
++++ b/drivers/net/ethernet/ti/cpsw_priv.h
+@@ -89,7 +89,8 @@ do {								\
+ 
+ #define CPSW_POLL_WEIGHT	64
+ #define CPSW_RX_VLAN_ENCAP_HDR_SIZE		4
+-#define CPSW_MIN_PACKET_SIZE	(VLAN_ETH_ZLEN)
++#define CPSW_MIN_PACKET_SIZE_VLAN	(VLAN_ETH_ZLEN)
++#define CPSW_MIN_PACKET_SIZE	(ETH_ZLEN)
+ #define CPSW_MAX_PACKET_SIZE	(VLAN_ETH_FRAME_LEN +\
+ 				 ETH_FCS_LEN +\
+ 				 CPSW_RX_VLAN_ENCAP_HDR_SIZE)
+@@ -380,6 +381,7 @@ struct cpsw_priv {
+ 	u32 emac_port;
+ 	struct cpsw_common *cpsw;
+ 	int offload_fwd_mark;
++	u32 tx_packet_min;
+ };
+ 
+ #define ndev_to_cpsw(ndev) (((struct cpsw_priv *)netdev_priv(ndev))->cpsw)
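
tx_packet_min is rewritten at runtime when the devlink switch-mode knob toggles between dual-EMAC and switch mode, while the xmit hot path reads it without any lock; the WRITE_ONCE()/READ_ONCE() pair keeps those loads and stores tear-free. Sketched:

	/* Writer: devlink mode change. */
	WRITE_ONCE(priv->tx_packet_min, CPSW_MIN_PACKET_SIZE_VLAN);

	/* Reader: lockless xmit path. */
	if (skb_put_padto(skb, READ_ONCE(priv->tx_packet_min)))
		return NET_XMIT_DROP;	/* pad failed, skb consumed */
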
+diff --git a/drivers/net/ieee802154/mac802154_hwsim.c b/drivers/net/ieee802154/mac802154_hwsim.c
+index ebc976b7fcc2a..8caa61ec718f5 100644
+--- a/drivers/net/ieee802154/mac802154_hwsim.c
++++ b/drivers/net/ieee802154/mac802154_hwsim.c
+@@ -418,7 +418,7 @@ static int hwsim_new_edge_nl(struct sk_buff *msg, struct genl_info *info)
+ 	struct hwsim_edge *e;
+ 	u32 v0, v1;
+ 
+-	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] &&
++	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] ||
+ 	    !info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE])
+ 		return -EINVAL;
+ 
+@@ -528,14 +528,14 @@ static int hwsim_set_edge_lqi(struct sk_buff *msg, struct genl_info *info)
+ 	u32 v0, v1;
+ 	u8 lqi;
+ 
+-	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] &&
++	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] ||
+ 	    !info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE])
+ 		return -EINVAL;
+ 
+ 	if (nla_parse_nested_deprecated(edge_attrs, MAC802154_HWSIM_EDGE_ATTR_MAX, info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE], hwsim_edge_policy, NULL))
+ 		return -EINVAL;
+ 
+-	if (!edge_attrs[MAC802154_HWSIM_EDGE_ATTR_ENDPOINT_ID] &&
++	if (!edge_attrs[MAC802154_HWSIM_EDGE_ATTR_ENDPOINT_ID] ||
+ 	    !edge_attrs[MAC802154_HWSIM_EDGE_ATTR_LQI])
+ 		return -EINVAL;
+ 
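
The three hunks above are the same De Morgan fix: both attributes are mandatory, so the guard must reject when either is missing. With the old `&&`, a request carrying only RADIO_ID evaluated `false && true` = false and slipped past the check into a NULL attribute dereference; the `||` form evaluates `false || true` = true and returns -EINVAL:

	/* "not (A present and B present)" is "!A || !B" */
	if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] ||
	    !info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE])
		return -EINVAL;
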
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 7afd9edaf2490..22ca29cc9ad7d 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -1406,8 +1406,6 @@ static struct phy_driver ksphy_driver[] = {
+ 	.name		= "Micrel KSZ87XX Switch",
+ 	/* PHY_BASIC_FEATURES */
+ 	.config_init	= kszphy_config_init,
+-	.config_aneg	= ksz8873mll_config_aneg,
+-	.read_status	= ksz8873mll_read_status,
+ 	.match_phy_device = ksz8795_match_phy_device,
+ 	.suspend	= genphy_suspend,
+ 	.resume		= genphy_resume,
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index b9dd47bd597ff..7a099c37527f0 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -1317,7 +1317,7 @@ static int ppp_nl_newlink(struct net *src_net, struct net_device *dev,
+ 	 * the PPP unit identifier as suffix (i.e. ppp<unit_id>). This allows
+ 	 * userspace to infer the device name using the PPPIOCGUNIT ioctl.
+ 	 */
+-	if (!tb[IFLA_IFNAME])
++	if (!tb[IFLA_IFNAME] || !nla_len(tb[IFLA_IFNAME]) || !*(char *)nla_data(tb[IFLA_IFNAME]))
+ 		conf.ifname_is_set = false;
+ 
+ 	err = ppp_dev_configure(src_net, dev, &conf);
+diff --git a/drivers/net/wwan/mhi_wwan_ctrl.c b/drivers/net/wwan/mhi_wwan_ctrl.c
+index 1bc6b69aa5302..e4d0f696687f2 100644
+--- a/drivers/net/wwan/mhi_wwan_ctrl.c
++++ b/drivers/net/wwan/mhi_wwan_ctrl.c
+@@ -41,14 +41,14 @@ struct mhi_wwan_dev {
+ /* Increment RX budget and schedule RX refill if necessary */
+ static void mhi_wwan_rx_budget_inc(struct mhi_wwan_dev *mhiwwan)
+ {
+-	spin_lock(&mhiwwan->rx_lock);
++	spin_lock_bh(&mhiwwan->rx_lock);
+ 
+ 	mhiwwan->rx_budget++;
+ 
+ 	if (test_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags))
+ 		schedule_work(&mhiwwan->rx_refill);
+ 
+-	spin_unlock(&mhiwwan->rx_lock);
++	spin_unlock_bh(&mhiwwan->rx_lock);
+ }
+ 
+ /* Decrement RX budget if non-zero and return true on success */
+@@ -56,7 +56,7 @@ static bool mhi_wwan_rx_budget_dec(struct mhi_wwan_dev *mhiwwan)
+ {
+ 	bool ret = false;
+ 
+-	spin_lock(&mhiwwan->rx_lock);
++	spin_lock_bh(&mhiwwan->rx_lock);
+ 
+ 	if (mhiwwan->rx_budget) {
+ 		mhiwwan->rx_budget--;
+@@ -64,7 +64,7 @@ static bool mhi_wwan_rx_budget_dec(struct mhi_wwan_dev *mhiwwan)
+ 			ret = true;
+ 	}
+ 
+-	spin_unlock(&mhiwwan->rx_lock);
++	spin_unlock_bh(&mhiwwan->rx_lock);
+ 
+ 	return ret;
+ }
+@@ -130,9 +130,9 @@ static void mhi_wwan_ctrl_stop(struct wwan_port *port)
+ {
+ 	struct mhi_wwan_dev *mhiwwan = wwan_port_get_drvdata(port);
+ 
+-	spin_lock(&mhiwwan->rx_lock);
++	spin_lock_bh(&mhiwwan->rx_lock);
+ 	clear_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags);
+-	spin_unlock(&mhiwwan->rx_lock);
++	spin_unlock_bh(&mhiwwan->rx_lock);
+ 
+ 	cancel_work_sync(&mhiwwan->rx_refill);
+ 
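
rx_lock is taken both from process context (the port open/write paths) and from MHI completion callbacks that can run in bottom-half context, so the plain spin_lock() variants could deadlock if a bottom half interrupted a holder on the same CPU. The _bh variants disable bottom halves across the critical section, e.g.:

	spin_lock_bh(&mhiwwan->rx_lock);	/* BHs off on this CPU */
	mhiwwan->rx_budget++;
	if (test_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags))
		schedule_work(&mhiwwan->rx_refill);
	spin_unlock_bh(&mhiwwan->rx_lock);
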
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index 2403b71b601e9..745478213ff21 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -2527,7 +2527,7 @@ static void deactivate_labels(void *region)
+ 
+ static int init_active_labels(struct nd_region *nd_region)
+ {
+-	int i;
++	int i, rc = 0;
+ 
+ 	for (i = 0; i < nd_region->ndr_mappings; i++) {
+ 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+@@ -2546,13 +2546,14 @@ static int init_active_labels(struct nd_region *nd_region)
+ 			else if (test_bit(NDD_LABELING, &nvdimm->flags))
+ 				/* fail, labels needed to disambiguate dpa */;
+ 			else
+-				return 0;
++				continue;
+ 
+ 			dev_err(&nd_region->dev, "%s: is %s, failing probe\n",
+ 					dev_name(&nd_mapping->nvdimm->dev),
+ 					test_bit(NDD_LOCKED, &nvdimm->flags)
+ 					? "locked" : "disabled");
+-			return -ENXIO;
++			rc = -ENXIO;
++			goto out;
+ 		}
+ 		nd_mapping->ndd = ndd;
+ 		atomic_inc(&nvdimm->busy);
+@@ -2586,13 +2587,17 @@ static int init_active_labels(struct nd_region *nd_region)
+ 			break;
+ 	}
+ 
+-	if (i < nd_region->ndr_mappings) {
++	if (i < nd_region->ndr_mappings)
++		rc = -ENOMEM;
++
++out:
++	if (rc) {
+ 		deactivate_labels(nd_region);
+-		return -ENOMEM;
++		return rc;
+ 	}
+ 
+ 	return devm_add_action_or_reset(&nd_region->dev, deactivate_labels,
+-			nd_region);
++					nd_region);
+ }
+ 
+ int nd_region_register_namespaces(struct nd_region *nd_region, int *err)
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index 217dc9f0231fd..5516647e53a87 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -143,24 +143,25 @@ static inline __attribute_const__ u32 msi_mask(unsigned x)
+  * reliably as devices without an INTx disable bit will then generate a
+  * level IRQ which will never be cleared.
+  */
+-u32 __pci_msi_desc_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
++void __pci_msi_desc_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
+ {
+-	u32 mask_bits = desc->masked;
++	raw_spinlock_t *lock = &desc->dev->msi_lock;
++	unsigned long flags;
+ 
+ 	if (pci_msi_ignore_mask || !desc->msi_attrib.maskbit)
+-		return 0;
++		return;
+ 
+-	mask_bits &= ~mask;
+-	mask_bits |= flag;
++	raw_spin_lock_irqsave(lock, flags);
++	desc->masked &= ~mask;
++	desc->masked |= flag;
+ 	pci_write_config_dword(msi_desc_to_pci_dev(desc), desc->mask_pos,
+-			       mask_bits);
+-
+-	return mask_bits;
++			       desc->masked);
++	raw_spin_unlock_irqrestore(lock, flags);
+ }
+ 
+ static void msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
+ {
+-	desc->masked = __pci_msi_desc_mask_irq(desc, mask, flag);
++	__pci_msi_desc_mask_irq(desc, mask, flag);
+ }
+ 
+ static void __iomem *pci_msix_desc_addr(struct msi_desc *desc)
+@@ -289,13 +290,31 @@ void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
+ 		/* Don't touch the hardware now */
+ 	} else if (entry->msi_attrib.is_msix) {
+ 		void __iomem *base = pci_msix_desc_addr(entry);
++		bool unmasked = !(entry->masked & PCI_MSIX_ENTRY_CTRL_MASKBIT);
+ 
+ 		if (!base)
+ 			goto skip;
+ 
++		/*
++		 * The specification mandates that the entry is masked
++		 * when the message is modified:
++		 *
++		 * "If software changes the Address or Data value of an
++		 * entry while the entry is unmasked, the result is
++		 * undefined."
++		 */
++		if (unmasked)
++			__pci_msix_desc_mask_irq(entry, PCI_MSIX_ENTRY_CTRL_MASKBIT);
++
+ 		writel(msg->address_lo, base + PCI_MSIX_ENTRY_LOWER_ADDR);
+ 		writel(msg->address_hi, base + PCI_MSIX_ENTRY_UPPER_ADDR);
+ 		writel(msg->data, base + PCI_MSIX_ENTRY_DATA);
++
++		if (unmasked)
++			__pci_msix_desc_mask_irq(entry, 0);
++
++		/* Ensure that the writes are visible in the device */
++		readl(base + PCI_MSIX_ENTRY_DATA);
+ 	} else {
+ 		int pos = dev->msi_cap;
+ 		u16 msgctl;
+@@ -316,6 +335,8 @@ void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
+ 			pci_write_config_word(dev, pos + PCI_MSI_DATA_32,
+ 					      msg->data);
+ 		}
++		/* Ensure that the writes are visible in the device */
++		pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &msgctl);
+ 	}
+ 
+ skip:
+@@ -636,21 +657,21 @@ static int msi_capability_init(struct pci_dev *dev, int nvec,
+ 	/* Configure MSI capability structure */
+ 	ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSI);
+ 	if (ret) {
+-		msi_mask_irq(entry, mask, ~mask);
++		msi_mask_irq(entry, mask, 0);
+ 		free_msi_irqs(dev);
+ 		return ret;
+ 	}
+ 
+ 	ret = msi_verify_entries(dev);
+ 	if (ret) {
+-		msi_mask_irq(entry, mask, ~mask);
++		msi_mask_irq(entry, mask, 0);
+ 		free_msi_irqs(dev);
+ 		return ret;
+ 	}
+ 
+ 	ret = populate_msi_sysfs(dev);
+ 	if (ret) {
+-		msi_mask_irq(entry, mask, ~mask);
++		msi_mask_irq(entry, mask, 0);
+ 		free_msi_irqs(dev);
+ 		return ret;
+ 	}
+@@ -691,6 +712,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
+ {
+ 	struct irq_affinity_desc *curmsk, *masks = NULL;
+ 	struct msi_desc *entry;
++	void __iomem *addr;
+ 	int ret, i;
+ 	int vec_count = pci_msix_vec_count(dev);
+ 
+@@ -711,6 +733,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
+ 
+ 		entry->msi_attrib.is_msix	= 1;
+ 		entry->msi_attrib.is_64		= 1;
++
+ 		if (entries)
+ 			entry->msi_attrib.entry_nr = entries[i].entry;
+ 		else
+@@ -722,6 +745,10 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
+ 		entry->msi_attrib.default_irq	= dev->irq;
+ 		entry->mask_base		= base;
+ 
++		addr = pci_msix_desc_addr(entry);
++		if (addr)
++			entry->masked = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
++
+ 		list_add_tail(&entry->list, dev_to_msi_list(&dev->dev));
+ 		if (masks)
+ 			curmsk++;
+@@ -732,26 +759,25 @@ out:
+ 	return ret;
+ }
+ 
+-static void msix_program_entries(struct pci_dev *dev,
+-				 struct msix_entry *entries)
++static void msix_update_entries(struct pci_dev *dev, struct msix_entry *entries)
+ {
+ 	struct msi_desc *entry;
+-	int i = 0;
+-	void __iomem *desc_addr;
+ 
+ 	for_each_pci_msi_entry(entry, dev) {
+-		if (entries)
+-			entries[i++].vector = entry->irq;
++		if (entries) {
++			entries->vector = entry->irq;
++			entries++;
++		}
++	}
++}
+ 
+-		desc_addr = pci_msix_desc_addr(entry);
+-		if (desc_addr)
+-			entry->masked = readl(desc_addr +
+-					      PCI_MSIX_ENTRY_VECTOR_CTRL);
+-		else
+-			entry->masked = 0;
++static void msix_mask_all(void __iomem *base, int tsize)
++{
++	u32 ctrl = PCI_MSIX_ENTRY_CTRL_MASKBIT;
++	int i;
+ 
+-		msix_mask_irq(entry, 1);
+-	}
++	for (i = 0; i < tsize; i++, base += PCI_MSIX_ENTRY_SIZE)
++		writel(ctrl, base + PCI_MSIX_ENTRY_VECTOR_CTRL);
+ }
+ 
+ /**
+@@ -768,22 +794,33 @@ static void msix_program_entries(struct pci_dev *dev,
+ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
+ 				int nvec, struct irq_affinity *affd)
+ {
+-	int ret;
+-	u16 control;
+ 	void __iomem *base;
++	int ret, tsize;
++	u16 control;
+ 
+-	/* Ensure MSI-X is disabled while it is set up */
+-	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
++	/*
++	 * Some devices require MSI-X to be enabled before the MSI-X
++	 * registers can be accessed.  Mask all the vectors to prevent
++	 * interrupts coming in before they're fully set up.
++	 */
++	pci_msix_clear_and_set_ctrl(dev, 0, PCI_MSIX_FLAGS_MASKALL |
++				    PCI_MSIX_FLAGS_ENABLE);
+ 
+ 	pci_read_config_word(dev, dev->msix_cap + PCI_MSIX_FLAGS, &control);
+ 	/* Request & Map MSI-X table region */
+-	base = msix_map_region(dev, msix_table_size(control));
+-	if (!base)
+-		return -ENOMEM;
++	tsize = msix_table_size(control);
++	base = msix_map_region(dev, tsize);
++	if (!base) {
++		ret = -ENOMEM;
++		goto out_disable;
++	}
++
++	/* Ensure that all table entries are masked. */
++	msix_mask_all(base, tsize);
+ 
+ 	ret = msix_setup_entries(dev, base, entries, nvec, affd);
+ 	if (ret)
+-		return ret;
++		goto out_disable;
+ 
+ 	ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX);
+ 	if (ret)
+@@ -794,15 +831,7 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
+ 	if (ret)
+ 		goto out_free;
+ 
+-	/*
+-	 * Some devices require MSI-X to be enabled before we can touch the
+-	 * MSI-X registers.  We need to mask all the vectors to prevent
+-	 * interrupts coming in before they're fully set up.
+-	 */
+-	pci_msix_clear_and_set_ctrl(dev, 0,
+-				PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE);
+-
+-	msix_program_entries(dev, entries);
++	msix_update_entries(dev, entries);
+ 
+ 	ret = populate_msi_sysfs(dev);
+ 	if (ret)
+@@ -836,6 +865,9 @@ out_avail:
+ out_free:
+ 	free_msi_irqs(dev);
+ 
++out_disable:
++	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
++
+ 	return ret;
+ }
+ 
+@@ -930,8 +962,7 @@ static void pci_msi_shutdown(struct pci_dev *dev)
+ 
+ 	/* Return the device with MSI unmasked as initial states */
+ 	mask = msi_mask(desc->msi_attrib.multi_cap);
+-	/* Keep cached state to be restored */
+-	__pci_msi_desc_mask_irq(desc, mask, ~mask);
++	msi_mask_irq(desc, mask, 0);
+ 
+ 	/* Restore dev->irq to its default pin-assertion IRQ */
+ 	dev->irq = desc->msi_attrib.default_irq;
+@@ -1016,10 +1047,8 @@ static void pci_msix_shutdown(struct pci_dev *dev)
+ 	}
+ 
+ 	/* Return the device with MSI-X masked as initial states */
+-	for_each_pci_msi_entry(entry, dev) {
+-		/* Keep cached states to be restored */
++	for_each_pci_msi_entry(entry, dev)
+ 		__pci_msix_desc_mask_irq(entry, 1);
+-	}
+ 
+ 	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
+ 	pci_intx_for_msi(dev, 1);
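
The core of the MSI changes above is that desc->masked is a software cache of a config-space register, so the read-modify-write of the cache plus the hardware write must be serialized; the patch adds a per-device raw spinlock for that. A minimal user-space analogue of the fixed __pci_msi_desc_mask_irq(), with a pthread mutex standing in for the raw spinlock (names illustrative):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* Software cache of a hardware mask register, as in struct msi_desc. */
static uint32_t cached_mask;
static pthread_mutex_t mask_lock = PTHREAD_MUTEX_INITIALIZER;

/* Pretend config-space write; the kernel uses pci_write_config_dword(). */
static void write_hw_mask(uint32_t val) { (void)val; }

/* The cache update and the device write happen under one lock, so two
 * CPUs masking different vectors cannot lose each other's update. */
static void mask_irq(uint32_t mask, uint32_t flag)
{
	pthread_mutex_lock(&mask_lock);
	cached_mask &= ~mask;
	cached_mask |= flag;
	write_hw_mask(cached_mask);
	pthread_mutex_unlock(&mask_lock);
}

int main(void)
{
	mask_irq(0x1, 0x1);   /* mask vector 0 */
	mask_irq(0x2, 0x0);   /* unmask vector 1 */
	printf("cached mask: 0x%x\n", cached_mask);
	return 0;
}
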
+diff --git a/drivers/pinctrl/intel/pinctrl-tigerlake.c b/drivers/pinctrl/intel/pinctrl-tigerlake.c
+index 75b6d66955bfb..3ddaeffc04150 100644
+--- a/drivers/pinctrl/intel/pinctrl-tigerlake.c
++++ b/drivers/pinctrl/intel/pinctrl-tigerlake.c
+@@ -701,32 +701,32 @@ static const struct pinctrl_pin_desc tglh_pins[] = {
+ 
+ static const struct intel_padgroup tglh_community0_gpps[] = {
+ 	TGL_GPP(0, 0, 24, 0),				/* GPP_A */
+-	TGL_GPP(1, 25, 44, 128),			/* GPP_R */
+-	TGL_GPP(2, 45, 70, 32),				/* GPP_B */
+-	TGL_GPP(3, 71, 78, INTEL_GPIO_BASE_NOMAP),	/* vGPIO_0 */
++	TGL_GPP(1, 25, 44, 32),				/* GPP_R */
++	TGL_GPP(2, 45, 70, 64),				/* GPP_B */
++	TGL_GPP(3, 71, 78, 96),				/* vGPIO_0 */
+ };
+ 
+ static const struct intel_padgroup tglh_community1_gpps[] = {
+-	TGL_GPP(0, 79, 104, 96),			/* GPP_D */
+-	TGL_GPP(1, 105, 128, 64),			/* GPP_C */
+-	TGL_GPP(2, 129, 136, 160),			/* GPP_S */
+-	TGL_GPP(3, 137, 153, 192),			/* GPP_G */
+-	TGL_GPP(4, 154, 180, 224),			/* vGPIO */
++	TGL_GPP(0, 79, 104, 128),			/* GPP_D */
++	TGL_GPP(1, 105, 128, 160),			/* GPP_C */
++	TGL_GPP(2, 129, 136, 192),			/* GPP_S */
++	TGL_GPP(3, 137, 153, 224),			/* GPP_G */
++	TGL_GPP(4, 154, 180, 256),			/* vGPIO */
+ };
+ 
+ static const struct intel_padgroup tglh_community3_gpps[] = {
+-	TGL_GPP(0, 181, 193, 256),			/* GPP_E */
+-	TGL_GPP(1, 194, 217, 288),			/* GPP_F */
++	TGL_GPP(0, 181, 193, 288),			/* GPP_E */
++	TGL_GPP(1, 194, 217, 320),			/* GPP_F */
+ };
+ 
+ static const struct intel_padgroup tglh_community4_gpps[] = {
+-	TGL_GPP(0, 218, 241, 320),			/* GPP_H */
++	TGL_GPP(0, 218, 241, 352),			/* GPP_H */
+ 	TGL_GPP(1, 242, 251, 384),			/* GPP_J */
+-	TGL_GPP(2, 252, 266, 352),			/* GPP_K */
++	TGL_GPP(2, 252, 266, 416),			/* GPP_K */
+ };
+ 
+ static const struct intel_padgroup tglh_community5_gpps[] = {
+-	TGL_GPP(0, 267, 281, 416),			/* GPP_I */
++	TGL_GPP(0, 267, 281, 448),			/* GPP_I */
+ 	TGL_GPP(1, 282, 290, INTEL_GPIO_BASE_NOMAP),	/* JTAG */
+ };
+ 
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+index 5b3b048725cc8..45ebdeba985ae 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+@@ -925,12 +925,10 @@ int mtk_pinconf_adv_pull_set(struct mtk_pinctrl *hw,
+ 			err = hw->soc->bias_set(hw, desc, pullup);
+ 			if (err)
+ 				return err;
+-		} else if (hw->soc->bias_set_combo) {
+-			err = hw->soc->bias_set_combo(hw, desc, pullup, arg);
+-			if (err)
+-				return err;
+ 		} else {
+-			return -ENOTSUPP;
++			err = mtk_pinconf_bias_set_rev1(hw, desc, pullup);
++			if (err)
++				err = mtk_pinconf_bias_set(hw, desc, pullup);
+ 		}
+ 	}
+ 
+diff --git a/drivers/pinctrl/pinctrl-k210.c b/drivers/pinctrl/pinctrl-k210.c
+index f831526d06ff6..49e32684dbb25 100644
+--- a/drivers/pinctrl/pinctrl-k210.c
++++ b/drivers/pinctrl/pinctrl-k210.c
+@@ -950,23 +950,37 @@ static int k210_fpioa_probe(struct platform_device *pdev)
+ 		return ret;
+ 
+ 	pdata->pclk = devm_clk_get_optional(dev, "pclk");
+-	if (!IS_ERR(pdata->pclk))
+-		clk_prepare_enable(pdata->pclk);
++	if (!IS_ERR(pdata->pclk)) {
++		ret = clk_prepare_enable(pdata->pclk);
++		if (ret)
++			goto disable_clk;
++	}
+ 
+ 	pdata->sysctl_map =
+ 		syscon_regmap_lookup_by_phandle_args(np,
+ 						"canaan,k210-sysctl-power",
+ 						1, &pdata->power_offset);
+-	if (IS_ERR(pdata->sysctl_map))
+-		return PTR_ERR(pdata->sysctl_map);
++	if (IS_ERR(pdata->sysctl_map)) {
++		ret = PTR_ERR(pdata->sysctl_map);
++		goto disable_pclk;
++	}
+ 
+ 	k210_fpioa_init_ties(pdata);
+ 
+ 	pdata->pctl = pinctrl_register(&k210_pinctrl_desc, dev, (void *)pdata);
+-	if (IS_ERR(pdata->pctl))
+-		return PTR_ERR(pdata->pctl);
++	if (IS_ERR(pdata->pctl)) {
++		ret = PTR_ERR(pdata->pctl);
++		goto disable_pclk;
++	}
+ 
+ 	return 0;
++
++disable_pclk:
++	clk_disable_unprepare(pdata->pclk);
++disable_clk:
++	clk_disable_unprepare(pdata->clk);
++
++	return ret;
+ }
+ 
+ static const struct of_device_id k210_fpioa_dt_ids[] = {
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index dc8d39ae045b2..9c7679c06dcad 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -1219,10 +1219,12 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
+ 	}
+ 
+ 	/*
+-	 * We suppose that we won't have any more functions than pins,
+-	 * we'll reallocate that later anyway
++	 * Find an upper bound for the maximum number of functions: in
++	 * the worst case we have gpio_in, gpio_out, irq and up to four
++	 * special functions per pin, plus one entry for the sentinel.
++	 * We'll reallocate that later anyway.
+ 	 */
+-	pctl->functions = kcalloc(pctl->ngroups,
++	pctl->functions = kcalloc(4 * pctl->ngroups + 4,
+ 				  sizeof(*pctl->functions),
+ 				  GFP_KERNEL);
+ 	if (!pctl->functions)
+diff --git a/drivers/platform/x86/pcengines-apuv2.c b/drivers/platform/x86/pcengines-apuv2.c
+index c37349f97bb80..d063d91db9bcb 100644
+--- a/drivers/platform/x86/pcengines-apuv2.c
++++ b/drivers/platform/x86/pcengines-apuv2.c
+@@ -94,6 +94,7 @@ static struct gpiod_lookup_table gpios_led_table = {
+ 				NULL, 1, GPIO_ACTIVE_LOW),
+ 		GPIO_LOOKUP_IDX(AMD_FCH_GPIO_DRIVER_NAME, APU2_GPIO_LINE_LED3,
+ 				NULL, 2, GPIO_ACTIVE_LOW),
++		{} /* Terminating entry */
+ 	}
+ };
+ 
+@@ -123,6 +124,7 @@ static struct gpiod_lookup_table gpios_key_table = {
+ 	.table = {
+ 		GPIO_LOOKUP_IDX(AMD_FCH_GPIO_DRIVER_NAME, APU2_GPIO_LINE_MODESW,
+ 				NULL, 0, GPIO_ACTIVE_LOW),
++		{} /* Terminating entry */
+ 	}
+ };
+ 
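
The two pcengines-apuv2 hunks only add {} sentinels, but they matter: gpiod_lookup_table walkers stop at the first empty entry, and without one the walk runs off the end of the array. The idiom in plain C (hypothetical names; the kernel spells the sentinel {}):

#include <stdio.h>

struct lookup { const char *name; int idx; };

static const struct lookup table[] = {
	{ "led1", 0 },
	{ "led2", 1 },
	{ "led3", 2 },
	{ 0 }          /* terminating entry: NULL name stops the loop */
};

int main(void)
{
	for (const struct lookup *p = table; p->name; p++)
		printf("%s -> %d\n", p->name, p->idx);
	return 0;
}
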
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index f81dfa3cb0a1e..35a75b89bb642 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -13091,6 +13091,8 @@ lpfc_pci_probe_one_s4(struct pci_dev *pdev, const struct pci_device_id *pid)
+ 	if (!phba)
+ 		return -ENOMEM;
+ 
++	INIT_LIST_HEAD(&phba->poll_list);
++
+ 	/* Perform generic PCI device enabling operation */
+ 	error = lpfc_enable_pci_dev(phba);
+ 	if (error)
+@@ -13225,7 +13227,6 @@ lpfc_pci_probe_one_s4(struct pci_dev *pdev, const struct pci_device_id *pid)
+ 	/* Enable RAS FW log support */
+ 	lpfc_sli4_ras_setup(phba);
+ 
+-	INIT_LIST_HEAD(&phba->poll_list);
+ 	timer_setup(&phba->cpuhp_poll_timer, lpfc_sli4_poll_hbtimer, 0);
+ 	cpuhp_state_add_instance_nocalls(lpfc_cpuhp_state, &phba->cpuhp);
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index b9ff414a0e9d2..8e3e0c30f420e 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1741,13 +1741,9 @@ static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep)
+ {
+ 	struct dwc3_request		*req;
+ 	struct dwc3_request		*tmp;
+-	struct list_head		local;
+ 	struct dwc3			*dwc = dep->dwc;
+ 
+-restart:
+-	list_replace_init(&dep->cancelled_list, &local);
+-
+-	list_for_each_entry_safe(req, tmp, &local, list) {
++	list_for_each_entry_safe(req, tmp, &dep->cancelled_list, list) {
+ 		dwc3_gadget_ep_skip_trbs(dep, req);
+ 		switch (req->status) {
+ 		case DWC3_REQUEST_STATUS_DISCONNECTED:
+@@ -1765,9 +1761,6 @@ restart:
+ 			break;
+ 		}
+ 	}
+-
+-	if (!list_empty(&dep->cancelled_list))
+-		goto restart;
+ }
+ 
+ static int dwc3_gadget_ep_dequeue(struct usb_ep *ep,
+@@ -2963,12 +2956,8 @@ static void dwc3_gadget_ep_cleanup_completed_requests(struct dwc3_ep *dep,
+ {
+ 	struct dwc3_request	*req;
+ 	struct dwc3_request	*tmp;
+-	struct list_head	local;
+ 
+-restart:
+-	list_replace_init(&dep->started_list, &local);
+-
+-	list_for_each_entry_safe(req, tmp, &local, list) {
++	list_for_each_entry_safe(req, tmp, &dep->started_list, list) {
+ 		int ret;
+ 
+ 		ret = dwc3_gadget_ep_cleanup_completed_request(dep, event,
+@@ -2976,9 +2965,6 @@ restart:
+ 		if (ret)
+ 			break;
+ 	}
+-
+-	if (!list_empty(&dep->started_list))
+-		goto restart;
+ }
+ 
+ static bool dwc3_gadget_ep_should_continue(struct dwc3_ep *dep)
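
The dwc3 hunks revert a detach-and-restart scheme back to a plain safe-iteration walk: list_for_each_entry_safe() caches the next pointer so the current request can be given back (and freed) mid-walk. A user-space sketch of the same safe walk on a singly linked list (illustrative, not the kernel list API):

#include <stdio.h>
#include <stdlib.h>

struct req {
	struct req *next;
	int id;
};

/* Cache "next" before touching the current entry, so freeing it is safe. */
static void cleanup(struct req **head)
{
	struct req *req = *head, *tmp;

	while (req) {
		tmp = req->next;           /* the _safe part of the walk */
		printf("giving back request %d\n", req->id);
		free(req);
		req = tmp;
	}
	*head = NULL;
}

int main(void)
{
	struct req *head = NULL;

	for (int i = 3; i > 0; i--) {
		struct req *r = malloc(sizeof(*r));
		if (!r)
			break;
		r->id = i;
		r->next = head;
		head = r;
	}
	cleanup(&head);
	return 0;
}
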
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 32dd5ed712cb4..f3495386698a2 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -526,7 +526,6 @@ static int cq_create(struct mlx5_vdpa_net *ndev, u16 idx, u32 num_ent)
+ 	void __iomem *uar_page = ndev->mvdev.res.uar->map;
+ 	u32 out[MLX5_ST_SZ_DW(create_cq_out)];
+ 	struct mlx5_vdpa_cq *vcq = &mvq->cq;
+-	unsigned int irqn;
+ 	__be64 *pas;
+ 	int inlen;
+ 	void *cqc;
+@@ -566,7 +565,7 @@ static int cq_create(struct mlx5_vdpa_net *ndev, u16 idx, u32 num_ent)
+ 	/* Use vector 0 by default. Consider adding code to choose least used
+ 	 * vector.
+ 	 */
+-	err = mlx5_vector2eqn(mdev, 0, &eqn, &irqn);
++	err = mlx5_vector2eqn(mdev, 0, &eqn);
+ 	if (err)
+ 		goto err_vec;
+ 
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index d7e361fb05482..0e44098f39773 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -198,12 +198,12 @@ static void disable_dynirq(struct irq_data *data);
+ 
+ static DEFINE_PER_CPU(unsigned int, irq_epoch);
+ 
+-static void clear_evtchn_to_irq_row(unsigned row)
++static void clear_evtchn_to_irq_row(int *evtchn_row)
+ {
+ 	unsigned col;
+ 
+ 	for (col = 0; col < EVTCHN_PER_ROW; col++)
+-		WRITE_ONCE(evtchn_to_irq[row][col], -1);
++		WRITE_ONCE(evtchn_row[col], -1);
+ }
+ 
+ static void clear_evtchn_to_irq_all(void)
+@@ -213,7 +213,7 @@ static void clear_evtchn_to_irq_all(void)
+ 	for (row = 0; row < EVTCHN_ROW(xen_evtchn_max_channels()); row++) {
+ 		if (evtchn_to_irq[row] == NULL)
+ 			continue;
+-		clear_evtchn_to_irq_row(row);
++		clear_evtchn_to_irq_row(evtchn_to_irq[row]);
+ 	}
+ }
+ 
+@@ -221,6 +221,7 @@ static int set_evtchn_to_irq(evtchn_port_t evtchn, unsigned int irq)
+ {
+ 	unsigned row;
+ 	unsigned col;
++	int *evtchn_row;
+ 
+ 	if (evtchn >= xen_evtchn_max_channels())
+ 		return -EINVAL;
+@@ -233,11 +234,18 @@ static int set_evtchn_to_irq(evtchn_port_t evtchn, unsigned int irq)
+ 		if (irq == -1)
+ 			return 0;
+ 
+-		evtchn_to_irq[row] = (int *)get_zeroed_page(GFP_KERNEL);
+-		if (evtchn_to_irq[row] == NULL)
++		evtchn_row = (int *) __get_free_pages(GFP_KERNEL, 0);
++		if (evtchn_row == NULL)
+ 			return -ENOMEM;
+ 
+-		clear_evtchn_to_irq_row(row);
++		clear_evtchn_to_irq_row(evtchn_row);
++
++		/*
++		 * We've prepared an empty row for the mapping. If a different
++		 * thread was faster inserting it, we can drop ours.
++		 */
++		if (cmpxchg(&evtchn_to_irq[row], NULL, evtchn_row) != NULL)
++			free_page((unsigned long) evtchn_row);
+ 	}
+ 
+ 	WRITE_ONCE(evtchn_to_irq[row][col], irq);
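
The comment added in set_evtchn_to_irq() describes the whole fix: the row is prepared off to the side and published with cmpxchg(), and the loser of the race frees its copy rather than clobbering the winner's. The same lock-free publish pattern in portable C11 (names are stand-ins):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

#define COLS 64

static _Atomic(int *) row;   /* analogue of evtchn_to_irq[row] */

/* Prepare a filled row privately, then publish it with a compare-and-
 * swap.  If another thread published first, drop ours and use theirs. */
static int *get_row(void)
{
	int *r = atomic_load(&row);
	int *fresh, *expected = NULL;

	if (r)
		return r;

	fresh = malloc(COLS * sizeof(*fresh));
	if (!fresh)
		return NULL;
	for (int i = 0; i < COLS; i++)
		fresh[i] = -1;          /* clear_evtchn_to_irq_row() */

	if (!atomic_compare_exchange_strong(&row, &expected, fresh)) {
		free(fresh);            /* somebody else was faster */
		return expected;
	}
	return fresh;
}

int main(void)
{
	int *r = get_row();
	printf("row[0] = %d\n", r ? r[0] : 0);
	free(atomic_load(&row));
	return 0;
}
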
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index a5e93b1855158..c79b8dff25d7d 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -4224,11 +4224,19 @@ bad:
+ 
+ /*
+  * Delayed work handler to process end of delayed cap release LRU list.
++ *
++ * If new caps are added to the list while processing it, these won't get
++ * processed in this run.  In this case, the ci->i_hold_caps_max will be
++ * returned so that the work can be scheduled accordingly.
+  */
+-void ceph_check_delayed_caps(struct ceph_mds_client *mdsc)
++unsigned long ceph_check_delayed_caps(struct ceph_mds_client *mdsc)
+ {
+ 	struct inode *inode;
+ 	struct ceph_inode_info *ci;
++	struct ceph_mount_options *opt = mdsc->fsc->mount_options;
++	unsigned long delay_max = opt->caps_wanted_delay_max * HZ;
++	unsigned long loop_start = jiffies;
++	unsigned long delay = 0;
+ 
+ 	dout("check_delayed_caps\n");
+ 	spin_lock(&mdsc->cap_delay_lock);
+@@ -4236,6 +4244,11 @@ void ceph_check_delayed_caps(struct ceph_mds_client *mdsc)
+ 		ci = list_first_entry(&mdsc->cap_delay_list,
+ 				      struct ceph_inode_info,
+ 				      i_cap_delay_list);
++		if (time_before(loop_start, ci->i_hold_caps_max - delay_max)) {
++			dout("%s caps added recently.  Exiting loop", __func__);
++			delay = ci->i_hold_caps_max;
++			break;
++		}
+ 		if ((ci->i_ceph_flags & CEPH_I_FLUSH) == 0 &&
+ 		    time_before(jiffies, ci->i_hold_caps_max))
+ 			break;
+@@ -4252,6 +4265,8 @@ void ceph_check_delayed_caps(struct ceph_mds_client *mdsc)
+ 		}
+ 	}
+ 	spin_unlock(&mdsc->cap_delay_lock);
++
++	return delay;
+ }
+ 
+ /*
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 86f09b1110a2f..2c5701bfe4e8c 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -4502,22 +4502,29 @@ void inc_session_sequence(struct ceph_mds_session *s)
+ }
+ 
+ /*
+- * delayed work -- periodically trim expired leases, renew caps with mds
++ * delayed work -- periodically trim expired leases, renew caps with mds.  If
++ * the @delay parameter is set to 0 or if it's more than 5 secs, the default
++ * workqueue delay value of 5 secs will be used.
+  */
+-static void schedule_delayed(struct ceph_mds_client *mdsc)
++static void schedule_delayed(struct ceph_mds_client *mdsc, unsigned long delay)
+ {
+-	int delay = 5;
+-	unsigned hz = round_jiffies_relative(HZ * delay);
+-	schedule_delayed_work(&mdsc->delayed_work, hz);
++	unsigned long max_delay = HZ * 5;
++
++	/* 5 secs default delay */
++	if (!delay || (delay > max_delay))
++		delay = max_delay;
++	schedule_delayed_work(&mdsc->delayed_work,
++			      round_jiffies_relative(delay));
+ }
+ 
+ static void delayed_work(struct work_struct *work)
+ {
+-	int i;
+ 	struct ceph_mds_client *mdsc =
+ 		container_of(work, struct ceph_mds_client, delayed_work.work);
++	unsigned long delay;
+ 	int renew_interval;
+ 	int renew_caps;
++	int i;
+ 
+ 	dout("mdsc delayed_work\n");
+ 
+@@ -4557,7 +4564,7 @@ static void delayed_work(struct work_struct *work)
+ 	}
+ 	mutex_unlock(&mdsc->mutex);
+ 
+-	ceph_check_delayed_caps(mdsc);
++	delay = ceph_check_delayed_caps(mdsc);
+ 
+ 	ceph_queue_cap_reclaim_work(mdsc);
+ 
+@@ -4565,7 +4572,7 @@ static void delayed_work(struct work_struct *work)
+ 
+ 	maybe_recover_session(mdsc);
+ 
+-	schedule_delayed(mdsc);
++	schedule_delayed(mdsc, delay);
+ }
+ 
+ int ceph_mdsc_init(struct ceph_fs_client *fsc)
+@@ -5042,7 +5049,7 @@ void ceph_mdsc_handle_mdsmap(struct ceph_mds_client *mdsc, struct ceph_msg *msg)
+ 			  mdsc->mdsmap->m_epoch);
+ 
+ 	mutex_unlock(&mdsc->mutex);
+-	schedule_delayed(mdsc);
++	schedule_delayed(mdsc, 0);
+ 	return;
+ 
+ bad_unlock:
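
schedule_delayed() now takes the delay hint that ceph_check_delayed_caps() returns, clamping zero or oversized values to the 5-second default. The clamping logic on its own (HZ value assumed for illustration):

#include <stdio.h>

#define HZ 100
#define MAX_DELAY (5 * HZ)    /* 5 second default, as in schedule_delayed() */

/* A delay of 0, or anything longer than the default period, falls back
 * to the 5 second workqueue delay. */
static unsigned long clamp_delay(unsigned long delay)
{
	if (!delay || delay > MAX_DELAY)
		delay = MAX_DELAY;
	return delay;
}

int main(void)
{
	printf("%lu %lu %lu\n",
	       clamp_delay(0),        /* -> 500: use the default          */
	       clamp_delay(120),      /* -> 120: honour a shorter request */
	       clamp_delay(9000));    /* -> 500: clamp an oversized one   */
	return 0;
}
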
+diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
+index 4ce18055d9316..ae20504326b88 100644
+--- a/fs/ceph/snap.c
++++ b/fs/ceph/snap.c
+@@ -60,24 +60,26 @@
+ /*
+  * increase ref count for the realm
+  *
+- * caller must hold snap_rwsem for write.
++ * caller must hold snap_rwsem.
+  */
+ void ceph_get_snap_realm(struct ceph_mds_client *mdsc,
+ 			 struct ceph_snap_realm *realm)
+ {
+-	dout("get_realm %p %d -> %d\n", realm,
+-	     atomic_read(&realm->nref), atomic_read(&realm->nref)+1);
++	lockdep_assert_held(&mdsc->snap_rwsem);
++
+ 	/*
+-	 * since we _only_ increment realm refs or empty the empty
+-	 * list with snap_rwsem held, adjusting the empty list here is
+-	 * safe.  we do need to protect against concurrent empty list
+-	 * additions, however.
++	 * The 0->1 and 1->0 transitions must take the snap_empty_lock
++	 * atomically with the refcount change. Go ahead and bump the
++	 * nref here, unless it's 0, in which case we take the spinlock
++	 * and then do the increment and remove it from the list.
+ 	 */
+-	if (atomic_inc_return(&realm->nref) == 1) {
+-		spin_lock(&mdsc->snap_empty_lock);
++	if (atomic_inc_not_zero(&realm->nref))
++		return;
++
++	spin_lock(&mdsc->snap_empty_lock);
++	if (atomic_inc_return(&realm->nref) == 1)
+ 		list_del_init(&realm->empty_item);
+-		spin_unlock(&mdsc->snap_empty_lock);
+-	}
++	spin_unlock(&mdsc->snap_empty_lock);
+ }
+ 
+ static void __insert_snap_realm(struct rb_root *root,
+@@ -113,6 +115,8 @@ static struct ceph_snap_realm *ceph_create_snap_realm(
+ {
+ 	struct ceph_snap_realm *realm;
+ 
++	lockdep_assert_held_write(&mdsc->snap_rwsem);
++
+ 	realm = kzalloc(sizeof(*realm), GFP_NOFS);
+ 	if (!realm)
+ 		return ERR_PTR(-ENOMEM);
+@@ -135,7 +139,7 @@ static struct ceph_snap_realm *ceph_create_snap_realm(
+ /*
+  * lookup the realm rooted at @ino.
+  *
+- * caller must hold snap_rwsem for write.
++ * caller must hold snap_rwsem.
+  */
+ static struct ceph_snap_realm *__lookup_snap_realm(struct ceph_mds_client *mdsc,
+ 						   u64 ino)
+@@ -143,6 +147,8 @@ static struct ceph_snap_realm *__lookup_snap_realm(struct ceph_mds_client *mdsc,
+ 	struct rb_node *n = mdsc->snap_realms.rb_node;
+ 	struct ceph_snap_realm *r;
+ 
++	lockdep_assert_held(&mdsc->snap_rwsem);
++
+ 	while (n) {
+ 		r = rb_entry(n, struct ceph_snap_realm, node);
+ 		if (ino < r->ino)
+@@ -176,6 +182,8 @@ static void __put_snap_realm(struct ceph_mds_client *mdsc,
+ static void __destroy_snap_realm(struct ceph_mds_client *mdsc,
+ 				 struct ceph_snap_realm *realm)
+ {
++	lockdep_assert_held_write(&mdsc->snap_rwsem);
++
+ 	dout("__destroy_snap_realm %p %llx\n", realm, realm->ino);
+ 
+ 	rb_erase(&realm->node, &mdsc->snap_realms);
+@@ -198,28 +206,30 @@ static void __destroy_snap_realm(struct ceph_mds_client *mdsc,
+ static void __put_snap_realm(struct ceph_mds_client *mdsc,
+ 			     struct ceph_snap_realm *realm)
+ {
+-	dout("__put_snap_realm %llx %p %d -> %d\n", realm->ino, realm,
+-	     atomic_read(&realm->nref), atomic_read(&realm->nref)-1);
++	lockdep_assert_held_write(&mdsc->snap_rwsem);
++
++	/*
++	 * We do not require the snap_empty_lock here, as any caller that
++	 * increments the value must hold the snap_rwsem.
++	 */
+ 	if (atomic_dec_and_test(&realm->nref))
+ 		__destroy_snap_realm(mdsc, realm);
+ }
+ 
+ /*
+- * caller needn't hold any locks
++ * See comments in ceph_get_snap_realm. Caller needn't hold any locks.
+  */
+ void ceph_put_snap_realm(struct ceph_mds_client *mdsc,
+ 			 struct ceph_snap_realm *realm)
+ {
+-	dout("put_snap_realm %llx %p %d -> %d\n", realm->ino, realm,
+-	     atomic_read(&realm->nref), atomic_read(&realm->nref)-1);
+-	if (!atomic_dec_and_test(&realm->nref))
++	if (!atomic_dec_and_lock(&realm->nref, &mdsc->snap_empty_lock))
+ 		return;
+ 
+ 	if (down_write_trylock(&mdsc->snap_rwsem)) {
++		spin_unlock(&mdsc->snap_empty_lock);
+ 		__destroy_snap_realm(mdsc, realm);
+ 		up_write(&mdsc->snap_rwsem);
+ 	} else {
+-		spin_lock(&mdsc->snap_empty_lock);
+ 		list_add(&realm->empty_item, &mdsc->snap_empty);
+ 		spin_unlock(&mdsc->snap_empty_lock);
+ 	}
+@@ -236,6 +246,8 @@ static void __cleanup_empty_realms(struct ceph_mds_client *mdsc)
+ {
+ 	struct ceph_snap_realm *realm;
+ 
++	lockdep_assert_held_write(&mdsc->snap_rwsem);
++
+ 	spin_lock(&mdsc->snap_empty_lock);
+ 	while (!list_empty(&mdsc->snap_empty)) {
+ 		realm = list_first_entry(&mdsc->snap_empty,
+@@ -269,6 +281,8 @@ static int adjust_snap_realm_parent(struct ceph_mds_client *mdsc,
+ {
+ 	struct ceph_snap_realm *parent;
+ 
++	lockdep_assert_held_write(&mdsc->snap_rwsem);
++
+ 	if (realm->parent_ino == parentino)
+ 		return 0;
+ 
+@@ -696,6 +710,8 @@ int ceph_update_snap_trace(struct ceph_mds_client *mdsc,
+ 	int err = -ENOMEM;
+ 	LIST_HEAD(dirty_realms);
+ 
++	lockdep_assert_held_write(&mdsc->snap_rwsem);
++
+ 	dout("update_snap_trace deletion=%d\n", deletion);
+ more:
+ 	ceph_decode_need(&p, e, sizeof(*ri), bad);
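
The reworked ceph_get_snap_realm() takes snap_empty_lock only for the 0 -> 1 refcount transition, when the realm must also leave the empty list; every other increment stays lock-free via atomic_inc_not_zero(). A user-space analogue with C11 atomics and a pthread mutex (illustrative only):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int nref = 1;
static pthread_mutex_t empty_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fast path: CAS-increment while the count is non-zero.  Slow path:
 * take the lock so the list removal and the 0 -> 1 bump are atomic
 * with respect to each other. */
static void get_ref(void)
{
	int old = atomic_load(&nref);

	while (old) {
		if (atomic_compare_exchange_weak(&nref, &old, old + 1))
			return;          /* atomic_inc_not_zero() succeeded */
	}

	pthread_mutex_lock(&empty_lock);
	if (atomic_fetch_add(&nref, 1) == 0)
		puts("removed from empty list");   /* list_del_init() */
	pthread_mutex_unlock(&empty_lock);
}

int main(void)
{
	get_ref();
	printf("nref = %d\n", atomic_load(&nref));
	return 0;
}
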
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index 839e6b0239eeb..3b5207c827673 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -1170,7 +1170,7 @@ extern void ceph_flush_snaps(struct ceph_inode_info *ci,
+ extern bool __ceph_should_report_size(struct ceph_inode_info *ci);
+ extern void ceph_check_caps(struct ceph_inode_info *ci, int flags,
+ 			    struct ceph_mds_session *session);
+-extern void ceph_check_delayed_caps(struct ceph_mds_client *mdsc);
++extern unsigned long ceph_check_delayed_caps(struct ceph_mds_client *mdsc);
+ extern void ceph_flush_dirty_caps(struct ceph_mds_client *mdsc);
+ extern int  ceph_drop_caps_for_unlink(struct inode *inode);
+ extern int ceph_encode_inode_release(void **p, struct inode *inode,
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 016082024d7d1..0283cb75b355d 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1615,6 +1615,11 @@ struct dfs_info3_param {
+ 	int ttl;
+ };
+ 
++struct file_list {
++	struct list_head list;
++	struct cifsFileInfo *cfile;
++};
++
+ /*
+  * common struct for holding inode info when searching for or updating an
+  * inode with new info
+diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c
+index 7c641f9a3dac2..bbfc540b48582 100644
+--- a/fs/cifs/dir.c
++++ b/fs/cifs/dir.c
+@@ -112,7 +112,7 @@ build_path_from_dentry_optional_prefix(struct dentry *direntry, void *page,
+ 	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_USE_PREFIX_PATH)
+ 		pplen = cifs_sb->prepath ? strlen(cifs_sb->prepath) + 1 : 0;
+ 
+-	s = dentry_path_raw(direntry, page, PAGE_SIZE);
++	s = dentry_path_raw(direntry, page, PATH_MAX);
+ 	if (IS_ERR(s))
+ 		return s;
+ 	if (!s[1])	// for root we want "", not "/"
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index ae4ce762f4fb4..fe097c15e5aa5 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -4859,17 +4859,6 @@ void cifs_oplock_break(struct work_struct *work)
+ 		cifs_dbg(VFS, "Push locks rc = %d\n", rc);
+ 
+ oplock_break_ack:
+-	/*
+-	 * releasing stale oplock after recent reconnect of smb session using
+-	 * a now incorrect file handle is not a data integrity issue but do
+-	 * not bother sending an oplock release if session to server still is
+-	 * disconnected since oplock already released by the server
+-	 */
+-	if (!cfile->oplock_break_cancelled) {
+-		rc = tcon->ses->server->ops->oplock_response(tcon, &cfile->fid,
+-							     cinode);
+-		cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
+-	}
+ 	/*
+ 	 * When oplock break is received and there are no active
+ 	 * file handles but cached, then schedule deferred close immediately.
+@@ -4877,17 +4866,27 @@ oplock_break_ack:
+ 	 */
+ 	spin_lock(&CIFS_I(inode)->deferred_lock);
+ 	is_deferred = cifs_is_deferred_close(cfile, &dclose);
++	spin_unlock(&CIFS_I(inode)->deferred_lock);
+ 	if (is_deferred &&
+ 	    cfile->deferred_close_scheduled &&
+ 	    delayed_work_pending(&cfile->deferred)) {
+-		/*
+-		 * If there is no pending work, mod_delayed_work queues new work.
+-		 * So, Increase the ref count to avoid use-after-free.
+-		 */
+-		if (!mod_delayed_work(deferredclose_wq, &cfile->deferred, 0))
+-			cifsFileInfo_get(cfile);
++		if (cancel_delayed_work(&cfile->deferred)) {
++			_cifsFileInfo_put(cfile, false, false);
++			goto oplock_break_done;
++		}
+ 	}
+-	spin_unlock(&CIFS_I(inode)->deferred_lock);
++	/*
++	 * releasing stale oplock after recent reconnect of smb session using
++	 * a now incorrect file handle is not a data integrity issue but do
++	 * not bother sending an oplock release if session to server still is
++	 * disconnected since oplock already released by the server
++	 */
++	if (!cfile->oplock_break_cancelled) {
++		rc = tcon->ses->server->ops->oplock_response(tcon, &cfile->fid,
++							     cinode);
++		cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
++	}
++oplock_break_done:
+ 	_cifsFileInfo_put(cfile, false /* do not wait for ourself */, false);
+ 	cifs_done_oplock_break(cinode);
+ }
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index f60f068d33e86..5b8789b245310 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -1637,7 +1637,7 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
+ 		goto unlink_out;
+ 	}
+ 
+-	cifs_close_all_deferred_files(tcon);
++	cifs_close_deferred_file(CIFS_I(inode));
+ 	if (cap_unix(tcon->ses) && (CIFS_UNIX_POSIX_PATH_OPS_CAP &
+ 				le64_to_cpu(tcon->fsUnixInfo.Capability))) {
+ 		rc = CIFSPOSIXDelFile(xid, tcon, full_path,
+@@ -2096,6 +2096,7 @@ cifs_rename2(struct user_namespace *mnt_userns, struct inode *source_dir,
+ 	FILE_UNIX_BASIC_INFO *info_buf_target;
+ 	unsigned int xid;
+ 	int rc, tmprc;
++	int retry_count = 0;
+ 
+ 	if (flags & ~RENAME_NOREPLACE)
+ 		return -EINVAL;
+@@ -2125,10 +2126,24 @@ cifs_rename2(struct user_namespace *mnt_userns, struct inode *source_dir,
+ 		goto cifs_rename_exit;
+ 	}
+ 
+-	cifs_close_all_deferred_files(tcon);
++	cifs_close_deferred_file(CIFS_I(d_inode(source_dentry)));
++	if (d_inode(target_dentry) != NULL)
++		cifs_close_deferred_file(CIFS_I(d_inode(target_dentry)));
++
+ 	rc = cifs_do_rename(xid, source_dentry, from_name, target_dentry,
+ 			    to_name);
+ 
++	if (rc == -EACCES) {
++		while (retry_count < 3) {
++			cifs_close_all_deferred_files(tcon);
++			rc = cifs_do_rename(xid, source_dentry, from_name, target_dentry,
++					    to_name);
++			if (rc != -EACCES)
++				break;
++			retry_count++;
++		}
++	}
++
+ 	/*
+ 	 * No-replace is the natural behavior for CIFS, so skip unlink hacks.
+ 	 */
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index cccaccfb02d04..8a2cabed4137a 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -735,13 +735,31 @@ void
+ cifs_close_deferred_file(struct cifsInodeInfo *cifs_inode)
+ {
+ 	struct cifsFileInfo *cfile = NULL;
+-	struct cifs_deferred_close *dclose;
++	struct file_list *tmp_list, *tmp_next_list;
++	struct list_head file_head;
++
++	if (cifs_inode == NULL)
++		return;
+ 
++	INIT_LIST_HEAD(&file_head);
++	spin_lock(&cifs_inode->open_file_lock);
+ 	list_for_each_entry(cfile, &cifs_inode->openFileList, flist) {
+-		spin_lock(&cifs_inode->deferred_lock);
+-		if (cifs_is_deferred_close(cfile, &dclose))
+-			mod_delayed_work(deferredclose_wq, &cfile->deferred, 0);
+-		spin_unlock(&cifs_inode->deferred_lock);
++		if (delayed_work_pending(&cfile->deferred)) {
++			if (cancel_delayed_work(&cfile->deferred)) {
++				tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
++				if (tmp_list == NULL)
++					continue;
++				tmp_list->cfile = cfile;
++				list_add_tail(&tmp_list->list, &file_head);
++			}
++		}
++	}
++	spin_unlock(&cifs_inode->open_file_lock);
++
++	list_for_each_entry_safe(tmp_list, tmp_next_list, &file_head, list) {
++		_cifsFileInfo_put(tmp_list->cfile, true, false);
++		list_del(&tmp_list->list);
++		kfree(tmp_list);
+ 	}
+ }
+ 
+@@ -750,20 +768,30 @@ cifs_close_all_deferred_files(struct cifs_tcon *tcon)
+ {
+ 	struct cifsFileInfo *cfile;
+ 	struct list_head *tmp;
++	struct file_list *tmp_list, *tmp_next_list;
++	struct list_head file_head;
+ 
++	INIT_LIST_HEAD(&file_head);
+ 	spin_lock(&tcon->open_file_lock);
+ 	list_for_each(tmp, &tcon->openFileList) {
+ 		cfile = list_entry(tmp, struct cifsFileInfo, tlist);
+ 		if (delayed_work_pending(&cfile->deferred)) {
+-			/*
+-			 * If there is no pending work, mod_delayed_work queues new work.
+-			 * So, Increase the ref count to avoid use-after-free.
+-			 */
+-			if (!mod_delayed_work(deferredclose_wq, &cfile->deferred, 0))
+-				cifsFileInfo_get(cfile);
++			if (cancel_delayed_work(&cfile->deferred)) {
++				tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
++				if (tmp_list == NULL)
++					continue;
++				tmp_list->cfile = cfile;
++				list_add_tail(&tmp_list->list, &file_head);
++			}
+ 		}
+ 	}
+ 	spin_unlock(&tcon->open_file_lock);
++
++	list_for_each_entry_safe(tmp_list, tmp_next_list, &file_head, list) {
++		_cifsFileInfo_put(tmp_list->cfile, true, false);
++		list_del(&tmp_list->list);
++		kfree(tmp_list);
++	}
+ }
+ 
+ /* parses DFS referral V3 structure
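
Both cifs helpers above switch from mod_delayed_work() under the spinlock to a two-phase scheme: collect the files whose delayed work was successfully cancelled onto a private list while the lock is held, then drop the lock and do the potentially blocking reference puts. A reduced user-space sketch of that pattern (names are stand-ins for the cifs structures):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct file_list {
	struct file_list *next;
	int cfile;                     /* stand-in for struct cifsFileInfo * */
};

static pthread_mutex_t open_file_lock = PTHREAD_MUTEX_INITIALIZER;

static void close_deferred(const int *files, int n)
{
	struct file_list *head = NULL, *item, *next;

	pthread_mutex_lock(&open_file_lock);
	for (int i = 0; i < n; i++) {
		item = malloc(sizeof(*item));   /* GFP_ATOMIC in the kernel */
		if (!item)
			continue;
		item->cfile = files[i];
		item->next = head;
		head = item;
	}
	pthread_mutex_unlock(&open_file_lock);

	/* Lock dropped: now safe to do the potentially-blocking puts. */
	for (item = head; item; item = next) {
		next = item->next;
		printf("putting file %d\n", item->cfile);
		free(item);
	}
}

int main(void)
{
	int files[] = { 1, 2, 3 };
	close_deferred(files, 3);
	return 0;
}
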
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index c205f93e0a10f..134ed3f836c6e 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2375,7 +2375,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
+ 	memcpy(aclptr, &acl, sizeof(struct cifs_acl));
+ 
+ 	buf->ccontext.DataLength = cpu_to_le32(ptr - (__u8 *)&buf->sd);
+-	*len = ptr - (__u8 *)buf;
++	*len = roundup(ptr - (__u8 *)buf, 8);
+ 
+ 	return buf;
+ }
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index 77026d42cb799..91b0d1fb90eb3 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -130,7 +130,7 @@ struct io_cb_cancel_data {
+ 	bool cancel_all;
+ };
+ 
+-static void create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index);
++static void create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index, bool first);
+ static void io_wqe_dec_running(struct io_worker *worker);
+ 
+ static bool io_worker_get(struct io_worker *worker)
+@@ -249,18 +249,20 @@ static void io_wqe_wake_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
+ 	rcu_read_unlock();
+ 
+ 	if (!ret) {
+-		bool do_create = false;
++		bool do_create = false, first = false;
+ 
+ 		raw_spin_lock_irq(&wqe->lock);
+ 		if (acct->nr_workers < acct->max_workers) {
+ 			atomic_inc(&acct->nr_running);
+ 			atomic_inc(&wqe->wq->worker_refs);
++			if (!acct->nr_workers)
++				first = true;
+ 			acct->nr_workers++;
+ 			do_create = true;
+ 		}
+ 		raw_spin_unlock_irq(&wqe->lock);
+ 		if (do_create)
+-			create_io_worker(wqe->wq, wqe, acct->index);
++			create_io_worker(wqe->wq, wqe, acct->index, first);
+ 	}
+ }
+ 
+@@ -283,16 +285,26 @@ static void create_worker_cb(struct callback_head *cb)
+ 	struct io_wq *wq;
+ 	struct io_wqe *wqe;
+ 	struct io_wqe_acct *acct;
++	bool do_create = false, first = false;
+ 
+ 	cwd = container_of(cb, struct create_worker_data, work);
+ 	wqe = cwd->wqe;
+ 	wq = wqe->wq;
+ 	acct = &wqe->acct[cwd->index];
+ 	raw_spin_lock_irq(&wqe->lock);
+-	if (acct->nr_workers < acct->max_workers)
++	if (acct->nr_workers < acct->max_workers) {
++		if (!acct->nr_workers)
++			first = true;
+ 		acct->nr_workers++;
++		do_create = true;
++	}
+ 	raw_spin_unlock_irq(&wqe->lock);
+-	create_io_worker(wq, cwd->wqe, cwd->index);
++	if (do_create) {
++		create_io_worker(wq, wqe, cwd->index, first);
++	} else {
++		atomic_dec(&acct->nr_running);
++		io_worker_ref_put(wq);
++	}
+ 	kfree(cwd);
+ }
+ 
+@@ -634,7 +646,7 @@ void io_wq_worker_sleeping(struct task_struct *tsk)
+ 	raw_spin_unlock_irq(&worker->wqe->lock);
+ }
+ 
+-static void create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
++static void create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index, bool first)
+ {
+ 	struct io_wqe_acct *acct = &wqe->acct[index];
+ 	struct io_worker *worker;
+@@ -675,7 +687,7 @@ fail:
+ 	worker->flags |= IO_WORKER_F_FREE;
+ 	if (index == IO_WQ_ACCT_BOUND)
+ 		worker->flags |= IO_WORKER_F_BOUND;
+-	if ((acct->nr_workers == 1) && (worker->flags & IO_WORKER_F_BOUND))
++	if (first && (worker->flags & IO_WORKER_F_BOUND))
+ 		worker->flags |= IO_WORKER_F_FIXED;
+ 	raw_spin_unlock_irq(&wqe->lock);
+ 	wake_up_new_task(tsk);
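
The io-wq fix records whether this is the first worker at the moment nr_workers is incremented under the lock, and carries that decision to create_io_worker(), instead of re-deriving it later from a counter another CPU may have changed. The pattern in miniature (illustrative names):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t wqe_lock = PTHREAD_MUTEX_INITIALIZER;
static int nr_workers;

/* Decide "first" under the lock and carry it along; checking
 * nr_workers == 1 later would race with concurrent creators. */
static void create_worker(void)
{
	bool first = false;

	pthread_mutex_lock(&wqe_lock);
	if (nr_workers == 0)
		first = true;
	nr_workers++;
	pthread_mutex_unlock(&wqe_lock);

	if (first)
		puts("marking worker as FIXED");   /* IO_WORKER_F_FIXED */
}

int main(void)
{
	create_worker();
	create_worker();
	printf("workers: %d\n", nr_workers);
	return 0;
}
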
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 32f3df13a812d..f23ff39f7697e 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -78,6 +78,7 @@
+ #include <linux/task_work.h>
+ #include <linux/pagemap.h>
+ #include <linux/io_uring.h>
++#include <linux/tracehook.h>
+ 
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/io_uring.h>
+@@ -2250,9 +2251,9 @@ static inline unsigned int io_put_rw_kbuf(struct io_kiocb *req)
+ 
+ static inline bool io_run_task_work(void)
+ {
+-	if (current->task_works) {
++	if (test_thread_flag(TIF_NOTIFY_SIGNAL) || current->task_works) {
+ 		__set_current_state(TASK_RUNNING);
+-		task_work_run();
++		tracehook_notify_signal();
+ 		return true;
+ 	}
+ 
+@@ -7166,17 +7167,19 @@ static int io_rsrc_ref_quiesce(struct io_rsrc_data *data, struct io_ring_ctx *ct
+ 		/* kill initial ref, already quiesced if zero */
+ 		if (atomic_dec_and_test(&data->refs))
+ 			break;
++		mutex_unlock(&ctx->uring_lock);
+ 		flush_delayed_work(&ctx->rsrc_put_work);
+ 		ret = wait_for_completion_interruptible(&data->done);
+-		if (!ret)
++		if (!ret) {
++			mutex_lock(&ctx->uring_lock);
+ 			break;
++		}
+ 
+ 		atomic_inc(&data->refs);
+ 		/* wait for all works potentially completing data->done */
+ 		flush_delayed_work(&ctx->rsrc_put_work);
+ 		reinit_completion(&data->done);
+ 
+-		mutex_unlock(&ctx->uring_lock);
+ 		ret = io_run_task_work_sig();
+ 		mutex_lock(&ctx->uring_lock);
+ 	} while (ret >= 0);
+@@ -8612,13 +8615,10 @@ static void io_req_caches_free(struct io_ring_ctx *ctx)
+ 	mutex_unlock(&ctx->uring_lock);
+ }
+ 
+-static bool io_wait_rsrc_data(struct io_rsrc_data *data)
++static void io_wait_rsrc_data(struct io_rsrc_data *data)
+ {
+-	if (!data)
+-		return false;
+-	if (!atomic_dec_and_test(&data->refs))
++	if (data && !atomic_dec_and_test(&data->refs))
+ 		wait_for_completion(&data->done);
+-	return true;
+ }
+ 
+ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+@@ -8630,10 +8630,14 @@ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
+ 		ctx->mm_account = NULL;
+ 	}
+ 
++	/* __io_rsrc_put_work() may need uring_lock to progress, wait w/o it */
++	io_wait_rsrc_data(ctx->buf_data);
++	io_wait_rsrc_data(ctx->file_data);
++
+ 	mutex_lock(&ctx->uring_lock);
+-	if (io_wait_rsrc_data(ctx->buf_data))
++	if (ctx->buf_data)
+ 		__io_sqe_buffers_unregister(ctx);
+-	if (io_wait_rsrc_data(ctx->file_data))
++	if (ctx->file_data)
+ 		__io_sqe_files_unregister(ctx);
+ 	if (ctx->rings)
+ 		__io_cqring_overflow_flush(ctx, true);
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index 4d53d3b7e5fe1..d081faa55e830 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -392,6 +392,51 @@ out_unlock:
+ 	return ret;
+ }
+ 
++/*
++ * Calling iter_file_splice_write() directly from overlay's f_op may deadlock
++ * due to lock order inversion between pipe->mutex in iter_file_splice_write()
++ * and file_start_write(real.file) in ovl_write_iter().
++ *
++ * So do everything ovl_write_iter() does and call iter_file_splice_write() on
++ * the real file.
++ */
++static ssize_t ovl_splice_write(struct pipe_inode_info *pipe, struct file *out,
++				loff_t *ppos, size_t len, unsigned int flags)
++{
++	struct fd real;
++	const struct cred *old_cred;
++	struct inode *inode = file_inode(out);
++	struct inode *realinode = ovl_inode_real(inode);
++	ssize_t ret;
++
++	inode_lock(inode);
++	/* Update mode */
++	ovl_copyattr(realinode, inode);
++	ret = file_remove_privs(out);
++	if (ret)
++		goto out_unlock;
++
++	ret = ovl_real_fdget(out, &real);
++	if (ret)
++		goto out_unlock;
++
++	old_cred = ovl_override_creds(inode->i_sb);
++	file_start_write(real.file);
++
++	ret = iter_file_splice_write(pipe, real.file, ppos, len, flags);
++
++	file_end_write(real.file);
++	/* Update size */
++	ovl_copyattr(realinode, inode);
++	revert_creds(old_cred);
++	fdput(real);
++
++out_unlock:
++	inode_unlock(inode);
++
++	return ret;
++}
++
+ static int ovl_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ {
+ 	struct fd real;
+@@ -603,7 +648,7 @@ const struct file_operations ovl_file_operations = {
+ 	.fadvise	= ovl_fadvise,
+ 	.flush		= ovl_flush,
+ 	.splice_read    = generic_file_splice_read,
+-	.splice_write   = iter_file_splice_write,
++	.splice_write   = ovl_splice_write,
+ 
+ 	.copy_file_range	= ovl_copy_file_range,
+ 	.remap_file_range	= ovl_remap_file_range,
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 17325416e2dee..62669b36a772e 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -586,6 +586,7 @@
+ 		NOINSTR_TEXT						\
+ 		*(.text..refcount)					\
+ 		*(.ref.text)						\
++		*(.text.asan.* .text.tsan.*)				\
+ 		TEXT_CFI_JT						\
+ 	MEM_KEEP(init.text*)						\
+ 	MEM_KEEP(exit.text*)						\
+diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
+index 8b77d08d4b47f..6c9b10d82c809 100644
+--- a/include/linux/bpf-cgroup.h
++++ b/include/linux/bpf-cgroup.h
+@@ -201,8 +201,8 @@ static inline void bpf_cgroup_storage_unset(void)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < BPF_CGROUP_STORAGE_NEST_MAX; i++) {
+-		if (unlikely(this_cpu_read(bpf_cgroup_storage_info[i].task) != current))
++	for (i = BPF_CGROUP_STORAGE_NEST_MAX - 1; i >= 0; i--) {
++		if (likely(this_cpu_read(bpf_cgroup_storage_info[i].task) != current))
+ 			continue;
+ 
+ 		this_cpu_write(bpf_cgroup_storage_info[i].task, NULL);
+diff --git a/include/linux/device.h b/include/linux/device.h
+index f1a00040fa534..cb3824b6ddedd 100644
+--- a/include/linux/device.h
++++ b/include/linux/device.h
+@@ -496,6 +496,7 @@ struct device {
+ 	struct dev_pin_info	*pins;
+ #endif
+ #ifdef CONFIG_GENERIC_MSI_IRQ
++	raw_spinlock_t		msi_lock;
+ 	struct list_head	msi_list;
+ #endif
+ #ifdef CONFIG_DMA_OPS
+diff --git a/include/linux/inetdevice.h b/include/linux/inetdevice.h
+index 53aa0343bf694..aaf4f1b4c277c 100644
+--- a/include/linux/inetdevice.h
++++ b/include/linux/inetdevice.h
+@@ -41,7 +41,7 @@ struct in_device {
+ 	unsigned long		mr_qri;		/* Query Response Interval */
+ 	unsigned char		mr_qrv;		/* Query Robustness Variable */
+ 	unsigned char		mr_gq_running;
+-	unsigned char		mr_ifc_count;
++	u32			mr_ifc_count;
+ 	struct timer_list	mr_gq_timer;	/* general query timer */
+ 	struct timer_list	mr_ifc_timer;	/* interface change timer */
+ 
+diff --git a/include/linux/irq.h b/include/linux/irq.h
+index 31b347c9f8dd0..df0998316f89c 100644
+--- a/include/linux/irq.h
++++ b/include/linux/irq.h
+@@ -567,6 +567,7 @@ struct irq_chip {
+  * IRQCHIP_SUPPORTS_NMI:              Chip can deliver NMIs, only for root irqchips
+  * IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND:  Invokes __enable_irq()/__disable_irq() for wake irqs
+  *                                    in the suspend path if they are in disabled state
++ * IRQCHIP_AFFINITY_PRE_STARTUP:      Default affinity update before startup
+  */
+ enum {
+ 	IRQCHIP_SET_TYPE_MASKED			= (1 <<  0),
+@@ -579,6 +580,7 @@ enum {
+ 	IRQCHIP_SUPPORTS_LEVEL_MSI		= (1 <<  7),
+ 	IRQCHIP_SUPPORTS_NMI			= (1 <<  8),
+ 	IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND	= (1 <<  9),
++	IRQCHIP_AFFINITY_PRE_STARTUP		= (1 << 10),
+ };
+ 
+ #include <linux/irqdesc.h>
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index f8902bcd91e26..58236808fdf4e 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -1042,8 +1042,7 @@ void mlx5_unregister_debugfs(void);
+ void mlx5_fill_page_array(struct mlx5_frag_buf *buf, __be64 *pas);
+ void mlx5_fill_page_frag_array_perm(struct mlx5_frag_buf *buf, __be64 *pas, u8 perm);
+ void mlx5_fill_page_frag_array(struct mlx5_frag_buf *frag_buf, __be64 *pas);
+-int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn,
+-		    unsigned int *irqn);
++int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn);
+ int mlx5_core_attach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn);
+ int mlx5_core_detach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn);
+ 
+diff --git a/include/linux/msi.h b/include/linux/msi.h
+index 6aff469e511d1..e8bdcb83172b0 100644
+--- a/include/linux/msi.h
++++ b/include/linux/msi.h
+@@ -233,7 +233,7 @@ void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
+ void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
+ 
+ u32 __pci_msix_desc_mask_irq(struct msi_desc *desc, u32 flag);
+-u32 __pci_msi_desc_mask_irq(struct msi_desc *desc, u32 mask, u32 flag);
++void __pci_msi_desc_mask_irq(struct msi_desc *desc, u32 mask, u32 flag);
+ void pci_msi_mask_irq(struct irq_data *data);
+ void pci_msi_unmask_irq(struct irq_data *data);
+ 
+diff --git a/include/net/psample.h b/include/net/psample.h
+index e328c51277571..0509d2d6be676 100644
+--- a/include/net/psample.h
++++ b/include/net/psample.h
+@@ -31,6 +31,8 @@ struct psample_group *psample_group_get(struct net *net, u32 group_num);
+ void psample_group_take(struct psample_group *group);
+ void psample_group_put(struct psample_group *group);
+ 
++struct sk_buff;
++
+ #if IS_ENABLED(CONFIG_PSAMPLE)
+ 
+ void psample_sample_packet(struct psample_group *group, struct sk_buff *skb,
+diff --git a/include/uapi/linux/neighbour.h b/include/uapi/linux/neighbour.h
+index dc8b72201f6c5..00a60695fa538 100644
+--- a/include/uapi/linux/neighbour.h
++++ b/include/uapi/linux/neighbour.h
+@@ -66,8 +66,11 @@ enum {
+ #define NUD_NONE	0x00
+ 
+ /* NUD_NOARP & NUD_PERMANENT are pseudostates, they never change
+-   and make no address resolution or NUD.
+-   NUD_PERMANENT also cannot be deleted by garbage collectors.
++ * and make no address resolution or NUD.
++ * NUD_PERMANENT also cannot be deleted by garbage collectors.
++ * When NTF_EXT_LEARNED is set for a bridge fdb entry the different cache entry
++ * states don't make sense and thus are ignored. Such entries don't age and
++ * can roam.
+  */
+ 
+ struct nda_cacheinfo {
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index d7ebb12ffffca..49857e8cd6ce6 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -1464,8 +1464,8 @@ alloc:
+ 	/* We cannot do copy_from_user or copy_to_user inside
+ 	 * the rcu_read_lock. Allocate enough space here.
+ 	 */
+-	keys = kvmalloc(key_size * bucket_size, GFP_USER | __GFP_NOWARN);
+-	values = kvmalloc(value_size * bucket_size, GFP_USER | __GFP_NOWARN);
++	keys = kvmalloc_array(key_size, bucket_size, GFP_USER | __GFP_NOWARN);
++	values = kvmalloc_array(value_size, bucket_size, GFP_USER | __GFP_NOWARN);
+ 	if (!keys || !values) {
+ 		ret = -ENOMEM;
+ 		goto after_loop;
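
Switching kvmalloc() to kvmalloc_array() makes the key_size * bucket_size multiplication overflow-checked instead of silently wrapping to a small allocation. A user-space equivalent of the overflow guard (a sketch, not the kernel helper):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Refuse allocations whose element-count multiplication would wrap. */
static void *alloc_array(size_t n, size_t size)
{
	if (size && n > SIZE_MAX / size)
		return NULL;               /* would overflow: refuse */
	return malloc(n * size);
}

int main(void)
{
	void *ok  = alloc_array(16, 32);
	void *bad = alloc_array(SIZE_MAX / 2, 4);

	printf("ok=%p bad=%p\n", ok, bad); /* bad is NULL, not a tiny buffer */
	free(ok);
	return 0;
}
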
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index a2f1f15ce4321..728f1a0fb4423 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -397,8 +397,8 @@ BPF_CALL_2(bpf_get_local_storage, struct bpf_map *, map, u64, flags)
+ 	void *ptr;
+ 	int i;
+ 
+-	for (i = 0; i < BPF_CGROUP_STORAGE_NEST_MAX; i++) {
+-		if (unlikely(this_cpu_read(bpf_cgroup_storage_info[i].task) != current))
++	for (i = BPF_CGROUP_STORAGE_NEST_MAX - 1; i >= 0; i--) {
++		if (likely(this_cpu_read(bpf_cgroup_storage_info[i].task) != current))
+ 			continue;
+ 
+ 		storage = this_cpu_read(bpf_cgroup_storage_info[i].storage[stype]);
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index cee265cb535cc..03527f518dfb7 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -347,19 +347,20 @@ static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu)
+ }
+ 
+ static struct cgroup_rstat_cpu *
+-cgroup_base_stat_cputime_account_begin(struct cgroup *cgrp)
++cgroup_base_stat_cputime_account_begin(struct cgroup *cgrp, unsigned long *flags)
+ {
+ 	struct cgroup_rstat_cpu *rstatc;
+ 
+ 	rstatc = get_cpu_ptr(cgrp->rstat_cpu);
+-	u64_stats_update_begin(&rstatc->bsync);
++	*flags = u64_stats_update_begin_irqsave(&rstatc->bsync);
+ 	return rstatc;
+ }
+ 
+ static void cgroup_base_stat_cputime_account_end(struct cgroup *cgrp,
+-						 struct cgroup_rstat_cpu *rstatc)
++						 struct cgroup_rstat_cpu *rstatc,
++						 unsigned long flags)
+ {
+-	u64_stats_update_end(&rstatc->bsync);
++	u64_stats_update_end_irqrestore(&rstatc->bsync, flags);
+ 	cgroup_rstat_updated(cgrp, smp_processor_id());
+ 	put_cpu_ptr(rstatc);
+ }
+@@ -367,18 +368,20 @@ static void cgroup_base_stat_cputime_account_end(struct cgroup *cgrp,
+ void __cgroup_account_cputime(struct cgroup *cgrp, u64 delta_exec)
+ {
+ 	struct cgroup_rstat_cpu *rstatc;
++	unsigned long flags;
+ 
+-	rstatc = cgroup_base_stat_cputime_account_begin(cgrp);
++	rstatc = cgroup_base_stat_cputime_account_begin(cgrp, &flags);
+ 	rstatc->bstat.cputime.sum_exec_runtime += delta_exec;
+-	cgroup_base_stat_cputime_account_end(cgrp, rstatc);
++	cgroup_base_stat_cputime_account_end(cgrp, rstatc, flags);
+ }
+ 
+ void __cgroup_account_cputime_field(struct cgroup *cgrp,
+ 				    enum cpu_usage_stat index, u64 delta_exec)
+ {
+ 	struct cgroup_rstat_cpu *rstatc;
++	unsigned long flags;
+ 
+-	rstatc = cgroup_base_stat_cputime_account_begin(cgrp);
++	rstatc = cgroup_base_stat_cputime_account_begin(cgrp, &flags);
+ 
+ 	switch (index) {
+ 	case CPUTIME_USER:
+@@ -394,7 +397,7 @@ void __cgroup_account_cputime_field(struct cgroup *cgrp,
+ 		break;
+ 	}
+ 
+-	cgroup_base_stat_cputime_account_end(cgrp, rstatc);
++	cgroup_base_stat_cputime_account_end(cgrp, rstatc, flags);
+ }
+ 
+ /*
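
The rstat hunk threads a saved-flags value through the begin/end pair so the u64_stats write section runs with interrupts disabled, making it safe against an IRQ-context updater on the same CPU. The shape of the API change as a trivial C sketch (the interrupt save/restore is only simulated here):

#include <stdio.h>

static unsigned seq;   /* stand-in for the u64_stats seqcount */

/* Begin hands the saved interrupt state back to the caller, as
 * u64_stats_update_begin_irqsave() does; end takes it back. */
static unsigned long stat_update_begin_irqsave(void)
{
	unsigned long flags = 1;   /* pretend local_irq_save(flags) */
	seq++;                     /* odd: writer inside the section */
	return flags;
}

static void stat_update_end_irqrestore(unsigned long flags)
{
	seq++;                     /* even again: section closed */
	(void)flags;               /* pretend local_irq_restore(flags) */
}

int main(void)
{
	unsigned long flags = stat_update_begin_irqsave();
	/* ... account cputime into the per-cpu stats ... */
	stat_update_end_irqrestore(flags);
	printf("seq = %u\n", seq);   /* 2: balanced begin/end */
	return 0;
}
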
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index 8cc8e57132870..d223e07e94a38 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -265,8 +265,11 @@ int irq_startup(struct irq_desc *desc, bool resend, bool force)
+ 	} else {
+ 		switch (__irq_startup_managed(desc, aff, force)) {
+ 		case IRQ_STARTUP_NORMAL:
++			if (d->chip->flags & IRQCHIP_AFFINITY_PRE_STARTUP)
++				irq_setup_affinity(desc);
+ 			ret = __irq_startup(desc);
+-			irq_setup_affinity(desc);
++			if (!(d->chip->flags & IRQCHIP_AFFINITY_PRE_STARTUP))
++				irq_setup_affinity(desc);
+ 			break;
+ 		case IRQ_STARTUP_MANAGED:
+ 			irq_do_set_affinity(d, aff, false);
+diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
+index c41965e348b5b..85df3ca03efe8 100644
+--- a/kernel/irq/msi.c
++++ b/kernel/irq/msi.c
+@@ -476,11 +476,6 @@ skip_activate:
+ 	return 0;
+ 
+ cleanup:
+-	for_each_msi_vector(desc, i, dev) {
+-		irq_data = irq_domain_get_irq_data(domain, i);
+-		if (irqd_is_activated(irq_data))
+-			irq_domain_deactivate_irq(irq_data);
+-	}
+ 	msi_domain_free_irqs(domain, dev);
+ 	return ret;
+ }
+@@ -505,7 +500,15 @@ int msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
+ 
+ void __msi_domain_free_irqs(struct irq_domain *domain, struct device *dev)
+ {
++	struct irq_data *irq_data;
+ 	struct msi_desc *desc;
++	int i;
++
++	for_each_msi_vector(desc, i, dev) {
++		irq_data = irq_domain_get_irq_data(domain, i);
++		if (irqd_is_activated(irq_data))
++			irq_domain_deactivate_irq(irq_data);
++	}
+ 
+ 	for_each_msi_entry(desc, dev) {
+ 		/*
+diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
+index d309d6fbf5bdd..4d2a702d7aa95 100644
+--- a/kernel/irq/timings.c
++++ b/kernel/irq/timings.c
+@@ -453,6 +453,11 @@ static __always_inline void __irq_timings_store(int irq, struct irqt_stat *irqs,
+ 	 */
+ 	index = irq_timings_interval_index(interval);
+ 
++	if (index > PREDICTION_BUFFER_SIZE - 1) {
++		irqs->count = 0;
++		return;
++	}
++
+ 	/*
+ 	 * Store the index as an element of the pattern in another
+ 	 * circular array.
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 406818196a9f7..3c20afbc19e13 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -343,7 +343,7 @@ static __always_inline bool
+ rt_mutex_cond_detect_deadlock(struct rt_mutex_waiter *waiter,
+ 			      enum rtmutex_chainwalk chwalk)
+ {
+-	if (IS_ENABLED(CONFIG_DEBUG_RT_MUTEX))
++	if (IS_ENABLED(CONFIG_DEBUG_RT_MUTEXES))
+ 		return waiter != NULL;
+ 	return chwalk == RT_MUTEX_FULL_CHAINWALK;
+ }
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index 057e17f3215d5..6469eca8078ca 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -602,7 +602,7 @@ static inline void seccomp_sync_threads(unsigned long flags)
+ 		smp_store_release(&thread->seccomp.filter,
+ 				  caller->seccomp.filter);
+ 		atomic_set(&thread->seccomp.filter_count,
+-			   atomic_read(&thread->seccomp.filter_count));
++			   atomic_read(&caller->seccomp.filter_count));
+ 
+ 		/*
+ 		 * Don't let an unprivileged task work around
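
The seccomp one-liner fixes a self-assignment: the target thread's filter_count was being re-read into itself instead of taking the caller's value. Illustrated with C11 atomics (hypothetical struct):

#include <stdatomic.h>
#include <stdio.h>

struct task { atomic_int filter_count; };

/* Copy the *caller's* count to the target thread; reading the target's
 * own counter back into itself, as before the fix, is a no-op. */
static void sync_threads(struct task *caller, struct task *thread)
{
	atomic_store(&thread->filter_count,
		     atomic_load(&caller->filter_count));
}

int main(void)
{
	struct task caller = { 3 }, thread = { 1 };

	sync_threads(&caller, &thread);
	printf("thread filter_count = %d\n",
	       atomic_load(&thread.filter_count));   /* prints 3 */
	return 0;
}
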
+diff --git a/lib/devmem_is_allowed.c b/lib/devmem_is_allowed.c
+index c0d67c541849a..60be9e24bd576 100644
+--- a/lib/devmem_is_allowed.c
++++ b/lib/devmem_is_allowed.c
+@@ -19,7 +19,7 @@
+  */
+ int devmem_is_allowed(unsigned long pfn)
+ {
+-	if (iomem_is_exclusive(pfn << PAGE_SHIFT))
++	if (iomem_is_exclusive(PFN_PHYS(pfn)))
+ 		return 0;
+ 	if (!page_is_ram(pfn))
+ 		return 1;
+diff --git a/mm/slub.c b/mm/slub.c
+index 61bd40e3eb9a4..e32ded30506e8 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -551,8 +551,8 @@ static void print_section(char *level, char *text, u8 *addr,
+ 			  unsigned int length)
+ {
+ 	metadata_access_enable();
+-	print_hex_dump(level, kasan_reset_tag(text), DUMP_PREFIX_ADDRESS,
+-			16, 1, addr, length, 1);
++	print_hex_dump(level, text, DUMP_PREFIX_ADDRESS,
++			16, 1, kasan_reset_tag((void *)addr), length, 1);
+ 	metadata_access_disable();
+ }
+ 
+diff --git a/net/bridge/br.c b/net/bridge/br.c
+index bbab9984f24e5..ef743f94254d7 100644
+--- a/net/bridge/br.c
++++ b/net/bridge/br.c
+@@ -166,8 +166,7 @@ static int br_switchdev_event(struct notifier_block *unused,
+ 	case SWITCHDEV_FDB_ADD_TO_BRIDGE:
+ 		fdb_info = ptr;
+ 		err = br_fdb_external_learn_add(br, p, fdb_info->addr,
+-						fdb_info->vid,
+-						fdb_info->is_local, false);
++						fdb_info->vid, false);
+ 		if (err) {
+ 			err = notifier_from_errno(err);
+ 			break;
+diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
+index 87ce52bba6498..3451c888ff793 100644
+--- a/net/bridge/br_fdb.c
++++ b/net/bridge/br_fdb.c
+@@ -1026,10 +1026,7 @@ static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br,
+ 					   "FDB entry towards bridge must be permanent");
+ 			return -EINVAL;
+ 		}
+-
+-		err = br_fdb_external_learn_add(br, p, addr, vid,
+-						ndm->ndm_state & NUD_PERMANENT,
+-						true);
++		err = br_fdb_external_learn_add(br, p, addr, vid, true);
+ 	} else {
+ 		spin_lock_bh(&br->hash_lock);
+ 		err = fdb_add_entry(br, p, addr, ndm, nlh_flags, vid, nfea_tb);
+@@ -1257,7 +1254,7 @@ void br_fdb_unsync_static(struct net_bridge *br, struct net_bridge_port *p)
+ }
+ 
+ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+-			      const unsigned char *addr, u16 vid, bool is_local,
++			      const unsigned char *addr, u16 vid,
+ 			      bool swdev_notify)
+ {
+ 	struct net_bridge_fdb_entry *fdb;
+@@ -1275,7 +1272,7 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+ 		if (swdev_notify)
+ 			flags |= BIT(BR_FDB_ADDED_BY_USER);
+ 
+-		if (is_local)
++		if (!p)
+ 			flags |= BIT(BR_FDB_LOCAL);
+ 
+ 		fdb = fdb_create(br, p, addr, vid, flags);
+@@ -1304,7 +1301,7 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+ 		if (swdev_notify)
+ 			set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags);
+ 
+-		if (is_local)
++		if (!p)
+ 			set_bit(BR_FDB_LOCAL, &fdb->flags);
+ 
+ 		if (modified)
+diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
+index 6e4a32354a138..14cd6ef961117 100644
+--- a/net/bridge/br_if.c
++++ b/net/bridge/br_if.c
+@@ -616,6 +616,7 @@ int br_add_if(struct net_bridge *br, struct net_device *dev,
+ 
+ 	err = dev_set_allmulti(dev, 1);
+ 	if (err) {
++		br_multicast_del_port(p);
+ 		kfree(p);	/* kobject not yet init'd, manually free */
+ 		goto err1;
+ 	}
+@@ -729,6 +730,7 @@ err4:
+ err3:
+ 	sysfs_remove_link(br->ifobj, p->dev->name);
+ err2:
++	br_multicast_del_port(p);
+ 	kobject_put(&p->kobj);
+ 	dev_set_allmulti(dev, -1);
+ err1:
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index 4e3d26e0a2d11..e013d33f1c7ca 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -707,7 +707,7 @@ int br_fdb_get(struct sk_buff *skb, struct nlattr *tb[], struct net_device *dev,
+ int br_fdb_sync_static(struct net_bridge *br, struct net_bridge_port *p);
+ void br_fdb_unsync_static(struct net_bridge *br, struct net_bridge_port *p);
+ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+-			      const unsigned char *addr, u16 vid, bool is_local,
++			      const unsigned char *addr, u16 vid,
+ 			      bool swdev_notify);
+ int br_fdb_external_learn_del(struct net_bridge *br, struct net_bridge_port *p,
+ 			      const unsigned char *addr, u16 vid,
+diff --git a/net/bridge/netfilter/nf_conntrack_bridge.c b/net/bridge/netfilter/nf_conntrack_bridge.c
+index 8d033a75a766e..fdbed31585553 100644
+--- a/net/bridge/netfilter/nf_conntrack_bridge.c
++++ b/net/bridge/netfilter/nf_conntrack_bridge.c
+@@ -88,6 +88,12 @@ static int nf_br_ip_fragment(struct net *net, struct sock *sk,
+ 
+ 			skb = ip_fraglist_next(&iter);
+ 		}
++
++		if (!err)
++			return 0;
++
++		kfree_skb_list(iter.frag);
++
+ 		return err;
+ 	}
+ slow_path:
+diff --git a/net/core/link_watch.c b/net/core/link_watch.c
+index 75431ca9300fb..1a455847da54f 100644
+--- a/net/core/link_watch.c
++++ b/net/core/link_watch.c
+@@ -158,7 +158,7 @@ static void linkwatch_do_dev(struct net_device *dev)
+ 	clear_bit(__LINK_STATE_LINKWATCH_PENDING, &dev->state);
+ 
+ 	rfc2863_policy(dev);
+-	if (dev->flags & IFF_UP && netif_device_present(dev)) {
++	if (dev->flags & IFF_UP) {
+ 		if (netif_carrier_ok(dev))
+ 			dev_activate(dev);
+ 		else
+@@ -204,7 +204,8 @@ static void __linkwatch_run_queue(int urgent_only)
+ 		dev = list_first_entry(&wrk, struct net_device, link_watch_list);
+ 		list_del_init(&dev->link_watch_list);
+ 
+-		if (urgent_only && !linkwatch_urgent_event(dev)) {
++		if (!netif_device_present(dev) ||
++		    (urgent_only && !linkwatch_urgent_event(dev))) {
+ 			list_add_tail(&dev->link_watch_list, &lweventlist);
+ 			continue;
+ 		}
+diff --git a/net/ieee802154/socket.c b/net/ieee802154/socket.c
+index a45a0401adc50..c25f7617770c8 100644
+--- a/net/ieee802154/socket.c
++++ b/net/ieee802154/socket.c
+@@ -984,6 +984,11 @@ static const struct proto_ops ieee802154_dgram_ops = {
+ 	.sendpage	   = sock_no_sendpage,
+ };
+ 
++static void ieee802154_sock_destruct(struct sock *sk)
++{
++	skb_queue_purge(&sk->sk_receive_queue);
++}
++
+ /* Create a socket. Initialise the socket, blank the addresses
+  * set the state.
+  */
+@@ -1024,7 +1029,7 @@ static int ieee802154_create(struct net *net, struct socket *sock,
+ 	sock->ops = ops;
+ 
+ 	sock_init_data(sock, sk);
+-	/* FIXME: sk->sk_destruct */
++	sk->sk_destruct = ieee802154_sock_destruct;
+ 	sk->sk_family = PF_IEEE802154;
+ 
+ 	/* Checksums on by default */
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 6b3c558a4f232..00576bae183d3 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -803,10 +803,17 @@ static void igmp_gq_timer_expire(struct timer_list *t)
+ static void igmp_ifc_timer_expire(struct timer_list *t)
+ {
+ 	struct in_device *in_dev = from_timer(in_dev, t, mr_ifc_timer);
++	u32 mr_ifc_count;
+ 
+ 	igmpv3_send_cr(in_dev);
+-	if (in_dev->mr_ifc_count) {
+-		in_dev->mr_ifc_count--;
++restart:
++	mr_ifc_count = READ_ONCE(in_dev->mr_ifc_count);
++
++	if (mr_ifc_count) {
++		if (cmpxchg(&in_dev->mr_ifc_count,
++			    mr_ifc_count,
++			    mr_ifc_count - 1) != mr_ifc_count)
++			goto restart;
+ 		igmp_ifc_start_timer(in_dev,
+ 				     unsolicited_report_interval(in_dev));
+ 	}
+@@ -818,7 +825,7 @@ static void igmp_ifc_event(struct in_device *in_dev)
+ 	struct net *net = dev_net(in_dev->dev);
+ 	if (IGMP_V1_SEEN(in_dev) || IGMP_V2_SEEN(in_dev))
+ 		return;
+-	in_dev->mr_ifc_count = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
++	WRITE_ONCE(in_dev->mr_ifc_count, in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv);
+ 	igmp_ifc_start_timer(in_dev, 1);
+ }
+ 
+@@ -957,7 +964,7 @@ static bool igmp_heard_query(struct in_device *in_dev, struct sk_buff *skb,
+ 				in_dev->mr_qri;
+ 		}
+ 		/* cancel the interface change timer */
+-		in_dev->mr_ifc_count = 0;
++		WRITE_ONCE(in_dev->mr_ifc_count, 0);
+ 		if (del_timer(&in_dev->mr_ifc_timer))
+ 			__in_dev_put(in_dev);
+ 		/* clear deleted report items */
+@@ -1724,7 +1731,7 @@ void ip_mc_down(struct in_device *in_dev)
+ 		igmp_group_dropped(pmc);
+ 
+ #ifdef CONFIG_IP_MULTICAST
+-	in_dev->mr_ifc_count = 0;
++	WRITE_ONCE(in_dev->mr_ifc_count, 0);
+ 	if (del_timer(&in_dev->mr_ifc_timer))
+ 		__in_dev_put(in_dev);
+ 	in_dev->mr_gq_running = 0;
+@@ -1941,7 +1948,7 @@ static int ip_mc_del_src(struct in_device *in_dev, __be32 *pmca, int sfmode,
+ 		pmc->sfmode = MCAST_INCLUDE;
+ #ifdef CONFIG_IP_MULTICAST
+ 		pmc->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
+-		in_dev->mr_ifc_count = pmc->crcount;
++		WRITE_ONCE(in_dev->mr_ifc_count, pmc->crcount);
+ 		for (psf = pmc->sources; psf; psf = psf->sf_next)
+ 			psf->sf_crcount = 0;
+ 		igmp_ifc_event(pmc->interface);
+@@ -2120,7 +2127,7 @@ static int ip_mc_add_src(struct in_device *in_dev, __be32 *pmca, int sfmode,
+ 		/* else no filters; keep old mode for reports */
+ 
+ 		pmc->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
+-		in_dev->mr_ifc_count = pmc->crcount;
++		WRITE_ONCE(in_dev->mr_ifc_count, pmc->crcount);
+ 		for (psf = pmc->sources; psf; psf = psf->sf_next)
+ 			psf->sf_crcount = 0;
+ 		igmp_ifc_event(in_dev);
+diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
+index 6ea3dc2e42194..6274462b86b4b 100644
+--- a/net/ipv4/tcp_bbr.c
++++ b/net/ipv4/tcp_bbr.c
+@@ -1041,7 +1041,7 @@ static void bbr_init(struct sock *sk)
+ 	bbr->prior_cwnd = 0;
+ 	tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
+ 	bbr->rtt_cnt = 0;
+-	bbr->next_rtt_delivered = 0;
++	bbr->next_rtt_delivered = tp->delivered;
+ 	bbr->prev_ca_state = TCP_CA_Open;
+ 	bbr->packet_conservation = 0;
+ 
+diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
+index 7153c67f641e1..2ef4cd2c848b2 100644
+--- a/net/sched/act_mirred.c
++++ b/net/sched/act_mirred.c
+@@ -273,6 +273,9 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
+ 			goto out;
+ 	}
+ 
++	/* All mirred/redirected skbs should clear previous ct info */
++	nf_reset_ct(skb2);
++
+ 	want_ingress = tcf_mirred_act_wants_ingress(m_eaction);
+ 
+ 	expects_nh = want_ingress || !m_mac_header_xmit;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 5eff7cccceffc..66fbdc63f965b 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -757,7 +757,7 @@ static int smc_connect_rdma(struct smc_sock *smc,
+ 			reason_code = SMC_CLC_DECL_NOSRVLINK;
+ 			goto connect_abort;
+ 		}
+-		smc->conn.lnk = link;
++		smc_switch_link_and_count(&smc->conn, link);
+ 	}
+ 
+ 	/* create send buffer and rmb */
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index 0df85a12651e9..39b24f98eac5e 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -916,8 +916,8 @@ static int smc_switch_cursor(struct smc_sock *smc, struct smc_cdc_tx_pend *pend,
+ 	return rc;
+ }
+ 
+-static void smc_switch_link_and_count(struct smc_connection *conn,
+-				      struct smc_link *to_lnk)
++void smc_switch_link_and_count(struct smc_connection *conn,
++			       struct smc_link *to_lnk)
+ {
+ 	atomic_dec(&conn->lnk->conn_cnt);
+ 	conn->lnk = to_lnk;
+diff --git a/net/smc/smc_core.h b/net/smc/smc_core.h
+index 6d6fd1397c87d..c043ecdca5c44 100644
+--- a/net/smc/smc_core.h
++++ b/net/smc/smc_core.h
+@@ -97,6 +97,7 @@ struct smc_link {
+ 	unsigned long		*wr_tx_mask;	/* bit mask of used indexes */
+ 	u32			wr_tx_cnt;	/* number of WR send buffers */
+ 	wait_queue_head_t	wr_tx_wait;	/* wait for free WR send buf */
++	atomic_t		wr_tx_refcnt;	/* tx refs to link */
+ 
+ 	struct smc_wr_buf	*wr_rx_bufs;	/* WR recv payload buffers */
+ 	struct ib_recv_wr	*wr_rx_ibs;	/* WR recv meta data */
+@@ -109,6 +110,7 @@ struct smc_link {
+ 
+ 	struct ib_reg_wr	wr_reg;		/* WR register memory region */
+ 	wait_queue_head_t	wr_reg_wait;	/* wait for wr_reg result */
++	atomic_t		wr_reg_refcnt;	/* reg refs to link */
+ 	enum smc_wr_reg_state	wr_reg_state;	/* state of wr_reg request */
+ 
+ 	u8			gid[SMC_GID_SIZE];/* gid matching used vlan id*/
+@@ -444,6 +446,8 @@ void smc_core_exit(void);
+ int smcr_link_init(struct smc_link_group *lgr, struct smc_link *lnk,
+ 		   u8 link_idx, struct smc_init_info *ini);
+ void smcr_link_clear(struct smc_link *lnk, bool log);
++void smc_switch_link_and_count(struct smc_connection *conn,
++			       struct smc_link *to_lnk);
+ int smcr_buf_map_lgr(struct smc_link *lnk);
+ int smcr_buf_reg_lgr(struct smc_link *lnk);
+ void smcr_lgr_set_type(struct smc_link_group *lgr, enum smc_lgr_type new_type);
+diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
+index 273eaf1bfe49a..2e7560eba9812 100644
+--- a/net/smc/smc_llc.c
++++ b/net/smc/smc_llc.c
+@@ -888,6 +888,7 @@ int smc_llc_cli_add_link(struct smc_link *link, struct smc_llc_qentry *qentry)
+ 	if (!rc)
+ 		goto out;
+ out_clear_lnk:
++	lnk_new->state = SMC_LNK_INACTIVE;
+ 	smcr_link_clear(lnk_new, false);
+ out_reject:
+ 	smc_llc_cli_add_link_reject(qentry);
+@@ -1184,6 +1185,7 @@ int smc_llc_srv_add_link(struct smc_link *link)
+ 		goto out_err;
+ 	return 0;
+ out_err:
++	link_new->state = SMC_LNK_INACTIVE;
+ 	smcr_link_clear(link_new, false);
+ 	return rc;
+ }
+@@ -1286,10 +1288,8 @@ static void smc_llc_process_cli_delete_link(struct smc_link_group *lgr)
+ 	del_llc->reason = 0;
+ 	smc_llc_send_message(lnk, &qentry->msg); /* response */
+ 
+-	if (smc_link_downing(&lnk_del->state)) {
+-		if (smc_switch_conns(lgr, lnk_del, false))
+-			smc_wr_tx_wait_no_pending_sends(lnk_del);
+-	}
++	if (smc_link_downing(&lnk_del->state))
++		smc_switch_conns(lgr, lnk_del, false);
+ 	smcr_link_clear(lnk_del, true);
+ 
+ 	active_links = smc_llc_active_link_count(lgr);
+@@ -1805,8 +1805,6 @@ void smc_llc_link_clear(struct smc_link *link, bool log)
+ 				    link->smcibdev->ibdev->name, link->ibport);
+ 	complete(&link->llc_testlink_resp);
+ 	cancel_delayed_work_sync(&link->llc_testlink_wrk);
+-	smc_wr_wakeup_reg_wait(link);
+-	smc_wr_wakeup_tx_wait(link);
+ }
+ 
+ /* register a new rtoken at the remote peer (for all links) */
+diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
+index 4532c16bf85ec..ff02952b3d03e 100644
+--- a/net/smc/smc_tx.c
++++ b/net/smc/smc_tx.c
+@@ -479,7 +479,7 @@ static int smc_tx_rdma_writes(struct smc_connection *conn,
+ /* Wakeup sndbuf consumers from any context (IRQ or process)
+  * since there is more data to transmit; usable snd_wnd as max transmit
+  */
+-static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
++static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
+ {
+ 	struct smc_cdc_producer_flags *pflags = &conn->local_tx_ctrl.prod_flags;
+ 	struct smc_link *link = conn->lnk;
+@@ -533,6 +533,22 @@ out_unlock:
+ 	return rc;
+ }
+ 
++static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
++{
++	struct smc_link *link = conn->lnk;
++	int rc = -ENOLINK;
++
++	if (!link)
++		return rc;
++
++	atomic_inc(&link->wr_tx_refcnt);
++	if (smc_link_usable(link))
++		rc = _smcr_tx_sndbuf_nonempty(conn);
++	if (atomic_dec_and_test(&link->wr_tx_refcnt))
++		wake_up_all(&link->wr_tx_wait);
++	return rc;
++}
++
+ static int smcd_tx_sndbuf_nonempty(struct smc_connection *conn)
+ {
+ 	struct smc_cdc_producer_flags *pflags = &conn->local_tx_ctrl.prod_flags;
+diff --git a/net/smc/smc_wr.c b/net/smc/smc_wr.c
+index cbc73a7e4d590..a419e9af36b98 100644
+--- a/net/smc/smc_wr.c
++++ b/net/smc/smc_wr.c
+@@ -322,9 +322,12 @@ int smc_wr_reg_send(struct smc_link *link, struct ib_mr *mr)
+ 	if (rc)
+ 		return rc;
+ 
++	atomic_inc(&link->wr_reg_refcnt);
+ 	rc = wait_event_interruptible_timeout(link->wr_reg_wait,
+ 					      (link->wr_reg_state != POSTED),
+ 					      SMC_WR_REG_MR_WAIT_TIME);
++	if (atomic_dec_and_test(&link->wr_reg_refcnt))
++		wake_up_all(&link->wr_reg_wait);
+ 	if (!rc) {
+ 		/* timeout - terminate link */
+ 		smcr_link_down_cond_sched(link);
+@@ -566,10 +569,15 @@ void smc_wr_free_link(struct smc_link *lnk)
+ 		return;
+ 	ibdev = lnk->smcibdev->ibdev;
+ 
++	smc_wr_wakeup_reg_wait(lnk);
++	smc_wr_wakeup_tx_wait(lnk);
++
+ 	if (smc_wr_tx_wait_no_pending_sends(lnk))
+ 		memset(lnk->wr_tx_mask, 0,
+ 		       BITS_TO_LONGS(SMC_WR_BUF_CNT) *
+ 						sizeof(*lnk->wr_tx_mask));
++	wait_event(lnk->wr_reg_wait, (!atomic_read(&lnk->wr_reg_refcnt)));
++	wait_event(lnk->wr_tx_wait, (!atomic_read(&lnk->wr_tx_refcnt)));
+ 
+ 	if (lnk->wr_rx_dma_addr) {
+ 		ib_dma_unmap_single(ibdev, lnk->wr_rx_dma_addr,
+@@ -728,7 +736,9 @@ int smc_wr_create_link(struct smc_link *lnk)
+ 	memset(lnk->wr_tx_mask, 0,
+ 	       BITS_TO_LONGS(SMC_WR_BUF_CNT) * sizeof(*lnk->wr_tx_mask));
+ 	init_waitqueue_head(&lnk->wr_tx_wait);
++	atomic_set(&lnk->wr_tx_refcnt, 0);
+ 	init_waitqueue_head(&lnk->wr_reg_wait);
++	atomic_set(&lnk->wr_reg_refcnt, 0);
+ 	return rc;
+ 
+ dma_unmap:
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index 2700a63ab095e..3a056f8affd1d 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -356,11 +356,14 @@ static void virtio_vsock_event_fill(struct virtio_vsock *vsock)
+ 
+ static void virtio_vsock_reset_sock(struct sock *sk)
+ {
+-	lock_sock(sk);
++	/* vmci_transport.c doesn't take sk_lock here either.  At least we're
++	 * under vsock_table_lock so the sock cannot disappear while we're
++	 * executing.
++	 */
++
+ 	sk->sk_state = TCP_CLOSE;
+ 	sk->sk_err = ECONNRESET;
+ 	sk->sk_error_report(sk);
+-	release_sock(sk);
+ }
+ 
+ static void virtio_vsock_update_guest_cid(struct virtio_vsock *vsock)
+diff --git a/sound/soc/amd/acp-pcm-dma.c b/sound/soc/amd/acp-pcm-dma.c
+index 143155a840aca..cc1ce6f22caad 100644
+--- a/sound/soc/amd/acp-pcm-dma.c
++++ b/sound/soc/amd/acp-pcm-dma.c
+@@ -969,7 +969,7 @@ static int acp_dma_hw_params(struct snd_soc_component *component,
+ 
+ 	acp_set_sram_bank_state(rtd->acp_mmio, 0, true);
+ 	/* Save for runtime private data */
+-	rtd->dma_addr = substream->dma_buffer.addr;
++	rtd->dma_addr = runtime->dma_addr;
+ 	rtd->order = get_order(size);
+ 
+ 	/* Fill the page table entries in ACP SRAM */
+diff --git a/sound/soc/amd/raven/acp3x-pcm-dma.c b/sound/soc/amd/raven/acp3x-pcm-dma.c
+index 8148b0d22e880..597d7c4b2a6b0 100644
+--- a/sound/soc/amd/raven/acp3x-pcm-dma.c
++++ b/sound/soc/amd/raven/acp3x-pcm-dma.c
+@@ -286,7 +286,7 @@ static int acp3x_dma_hw_params(struct snd_soc_component *component,
+ 		pr_err("pinfo failed\n");
+ 	}
+ 	size = params_buffer_bytes(params);
+-	rtd->dma_addr = substream->dma_buffer.addr;
++	rtd->dma_addr = substream->runtime->dma_addr;
+ 	rtd->num_pages = (PAGE_ALIGN(size) >> PAGE_SHIFT);
+ 	config_acp3x_dma(rtd, substream->stream);
+ 	return 0;
+diff --git a/sound/soc/amd/renoir/acp3x-pdm-dma.c b/sound/soc/amd/renoir/acp3x-pdm-dma.c
+index 4c2810e58dcef..f35794d7c91a1 100644
+--- a/sound/soc/amd/renoir/acp3x-pdm-dma.c
++++ b/sound/soc/amd/renoir/acp3x-pdm-dma.c
+@@ -246,7 +246,7 @@ static int acp_pdm_dma_hw_params(struct snd_soc_component *component,
+ 		return -EINVAL;
+ 	size = params_buffer_bytes(params);
+ 	period_bytes = params_period_bytes(params);
+-	rtd->dma_addr = substream->dma_buffer.addr;
++	rtd->dma_addr = substream->runtime->dma_addr;
+ 	rtd->num_pages = (PAGE_ALIGN(size) >> PAGE_SHIFT);
+ 	config_acp_dma(rtd, substream->stream);
+ 	init_pdm_ring_buffer(MEM_WINDOW_START, size, period_bytes,
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index 8434c48354f16..e0a524f8e16c4 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -404,7 +404,7 @@ static const struct regmap_config cs42l42_regmap = {
+ 	.use_single_write = true,
+ };
+ 
+-static DECLARE_TLV_DB_SCALE(adc_tlv, -9600, 100, false);
++static DECLARE_TLV_DB_SCALE(adc_tlv, -9700, 100, true);
+ static DECLARE_TLV_DB_SCALE(mixer_tlv, -6300, 100, true);
+ 
+ static const char * const cs42l42_hpf_freq_text[] = {
+@@ -424,34 +424,23 @@ static SOC_ENUM_SINGLE_DECL(cs42l42_wnf3_freq_enum, CS42L42_ADC_WNF_HPF_CTL,
+ 			    CS42L42_ADC_WNF_CF_SHIFT,
+ 			    cs42l42_wnf3_freq_text);
+ 
+-static const char * const cs42l42_wnf05_freq_text[] = {
+-	"280Hz", "315Hz", "350Hz", "385Hz",
+-	"420Hz", "455Hz", "490Hz", "525Hz"
+-};
+-
+-static SOC_ENUM_SINGLE_DECL(cs42l42_wnf05_freq_enum, CS42L42_ADC_WNF_HPF_CTL,
+-			    CS42L42_ADC_WNF_CF_SHIFT,
+-			    cs42l42_wnf05_freq_text);
+-
+ static const struct snd_kcontrol_new cs42l42_snd_controls[] = {
+ 	/* ADC Volume and Filter Controls */
+ 	SOC_SINGLE("ADC Notch Switch", CS42L42_ADC_CTL,
+-				CS42L42_ADC_NOTCH_DIS_SHIFT, true, false),
++				CS42L42_ADC_NOTCH_DIS_SHIFT, true, true),
+ 	SOC_SINGLE("ADC Weak Force Switch", CS42L42_ADC_CTL,
+ 				CS42L42_ADC_FORCE_WEAK_VCM_SHIFT, true, false),
+ 	SOC_SINGLE("ADC Invert Switch", CS42L42_ADC_CTL,
+ 				CS42L42_ADC_INV_SHIFT, true, false),
+ 	SOC_SINGLE("ADC Boost Switch", CS42L42_ADC_CTL,
+ 				CS42L42_ADC_DIG_BOOST_SHIFT, true, false),
+-	SOC_SINGLE_SX_TLV("ADC Volume", CS42L42_ADC_VOLUME,
+-				CS42L42_ADC_VOL_SHIFT, 0xA0, 0x6C, adc_tlv),
++	SOC_SINGLE_S8_TLV("ADC Volume", CS42L42_ADC_VOLUME, -97, 12, adc_tlv),
+ 	SOC_SINGLE("ADC WNF Switch", CS42L42_ADC_WNF_HPF_CTL,
+ 				CS42L42_ADC_WNF_EN_SHIFT, true, false),
+ 	SOC_SINGLE("ADC HPF Switch", CS42L42_ADC_WNF_HPF_CTL,
+ 				CS42L42_ADC_HPF_EN_SHIFT, true, false),
+ 	SOC_ENUM("HPF Corner Freq", cs42l42_hpf_freq_enum),
+ 	SOC_ENUM("WNF 3dB Freq", cs42l42_wnf3_freq_enum),
+-	SOC_ENUM("WNF 05dB Freq", cs42l42_wnf05_freq_enum),
+ 
+ 	/* DAC Volume and Filter Controls */
+ 	SOC_SINGLE("DACA Invert Switch", CS42L42_DAC_CTL1,
+@@ -470,8 +459,8 @@ static const struct snd_soc_dapm_widget cs42l42_dapm_widgets[] = {
+ 	SND_SOC_DAPM_OUTPUT("HP"),
+ 	SND_SOC_DAPM_DAC("DAC", NULL, CS42L42_PWR_CTL1, CS42L42_HP_PDN_SHIFT, 1),
+ 	SND_SOC_DAPM_MIXER("MIXER", CS42L42_PWR_CTL1, CS42L42_MIXER_PDN_SHIFT, 1, NULL, 0),
+-	SND_SOC_DAPM_AIF_IN("SDIN1", NULL, 0, CS42L42_ASP_RX_DAI0_EN, CS42L42_ASP_RX0_CH1_SHIFT, 0),
+-	SND_SOC_DAPM_AIF_IN("SDIN2", NULL, 1, CS42L42_ASP_RX_DAI0_EN, CS42L42_ASP_RX0_CH2_SHIFT, 0),
++	SND_SOC_DAPM_AIF_IN("SDIN1", NULL, 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_IN("SDIN2", NULL, 1, SND_SOC_NOPM, 0, 0),
+ 
+ 	/* Playback Requirements */
+ 	SND_SOC_DAPM_SUPPLY("ASP DAI0", CS42L42_PWR_CTL1, CS42L42_ASP_DAI_PDN_SHIFT, 1, NULL, 0),
+@@ -620,6 +609,8 @@ static int cs42l42_pll_config(struct snd_soc_component *component)
+ 
+ 	for (i = 0; i < ARRAY_SIZE(pll_ratio_table); i++) {
+ 		if (pll_ratio_table[i].sclk == clk) {
++			cs42l42->pll_config = i;
++
+ 			/* Configure the internal sample rate */
+ 			snd_soc_component_update_bits(component, CS42L42_MCLK_CTL,
+ 					CS42L42_INTERNAL_FS_MASK,
+@@ -628,14 +619,9 @@ static int cs42l42_pll_config(struct snd_soc_component *component)
+ 					(pll_ratio_table[i].mclk_int !=
+ 					24000000)) <<
+ 					CS42L42_INTERNAL_FS_SHIFT);
+-			/* Set the MCLK src (PLL or SCLK) and the divide
+-			 * ratio
+-			 */
++
+ 			snd_soc_component_update_bits(component, CS42L42_MCLK_SRC_SEL,
+-					CS42L42_MCLK_SRC_SEL_MASK |
+ 					CS42L42_MCLKDIV_MASK,
+-					(pll_ratio_table[i].mclk_src_sel
+-					<< CS42L42_MCLK_SRC_SEL_SHIFT) |
+ 					(pll_ratio_table[i].mclk_div <<
+ 					CS42L42_MCLKDIV_SHIFT));
+ 			/* Set up the LRCLK */
+@@ -671,15 +657,6 @@ static int cs42l42_pll_config(struct snd_soc_component *component)
+ 					CS42L42_FSYNC_PULSE_WIDTH_MASK,
+ 					CS42L42_FRAC1_VAL(fsync - 1) <<
+ 					CS42L42_FSYNC_PULSE_WIDTH_SHIFT);
+-			snd_soc_component_update_bits(component,
+-					CS42L42_ASP_FRM_CFG,
+-					CS42L42_ASP_5050_MASK,
+-					CS42L42_ASP_5050_MASK);
+-			/* Set the frame delay to 1.0 SCLK clocks */
+-			snd_soc_component_update_bits(component, CS42L42_ASP_FRM_CFG,
+-					CS42L42_ASP_FSD_MASK,
+-					CS42L42_ASP_FSD_1_0 <<
+-					CS42L42_ASP_FSD_SHIFT);
+ 			/* Set the sample rates (96k or lower) */
+ 			snd_soc_component_update_bits(component, CS42L42_FS_RATE_EN,
+ 					CS42L42_FS_EN_MASK,
+@@ -779,7 +756,18 @@ static int cs42l42_set_dai_fmt(struct snd_soc_dai *codec_dai, unsigned int fmt)
+ 	/* interface format */
+ 	switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ 	case SND_SOC_DAIFMT_I2S:
+-	case SND_SOC_DAIFMT_LEFT_J:
++		/*
++		 * 5050 mode, frame starts on falling edge of LRCLK,
++		 * frame delayed by 1.0 SCLKs
++		 */
++		snd_soc_component_update_bits(component,
++					      CS42L42_ASP_FRM_CFG,
++					      CS42L42_ASP_STP_MASK |
++					      CS42L42_ASP_5050_MASK |
++					      CS42L42_ASP_FSD_MASK,
++					      CS42L42_ASP_5050_MASK |
++					      (CS42L42_ASP_FSD_1_0 <<
++						CS42L42_ASP_FSD_SHIFT));
+ 		break;
+ 	default:
+ 		return -EINVAL;
+@@ -822,6 +810,10 @@ static int cs42l42_pcm_hw_params(struct snd_pcm_substream *substream,
+ 	cs42l42->srate = params_rate(params);
+ 	cs42l42->bclk = snd_soc_params_to_bclk(params);
+ 
++	/* I2S frame always has 2 channels even for mono audio */
++	if (channels == 1)
++		cs42l42->bclk *= 2;
++
+ 	switch(substream->stream) {
+ 	case SNDRV_PCM_STREAM_CAPTURE:
+ 		if (channels == 2) {
+@@ -845,6 +837,17 @@ static int cs42l42_pcm_hw_params(struct snd_pcm_substream *substream,
+ 		snd_soc_component_update_bits(component, CS42L42_ASP_RX_DAI0_CH2_AP_RES,
+ 							 CS42L42_ASP_RX_CH_AP_MASK |
+ 							 CS42L42_ASP_RX_CH_RES_MASK, val);
++
++		/* Channel B comes from the last active channel */
++		snd_soc_component_update_bits(component, CS42L42_SP_RX_CH_SEL,
++					      CS42L42_SP_RX_CHB_SEL_MASK,
++					      (channels - 1) << CS42L42_SP_RX_CHB_SEL_SHIFT);
++
++		/* Both LRCLK slots must be enabled */
++		snd_soc_component_update_bits(component, CS42L42_ASP_RX_DAI0_EN,
++					      CS42L42_ASP_RX0_CH_EN_MASK,
++					      BIT(CS42L42_ASP_RX0_CH1_SHIFT) |
++					      BIT(CS42L42_ASP_RX0_CH2_SHIFT));
+ 		break;
+ 	default:
+ 		break;
+@@ -890,13 +893,21 @@ static int cs42l42_mute_stream(struct snd_soc_dai *dai, int mute, int stream)
+ 			 */
+ 			regmap_multi_reg_write(cs42l42->regmap, cs42l42_to_osc_seq,
+ 					       ARRAY_SIZE(cs42l42_to_osc_seq));
++
++			/* Must disconnect PLL before stopping it */
++			snd_soc_component_update_bits(component,
++						      CS42L42_MCLK_SRC_SEL,
++						      CS42L42_MCLK_SRC_SEL_MASK,
++						      0);
++			usleep_range(100, 200);
++
+ 			snd_soc_component_update_bits(component, CS42L42_PLL_CTL1,
+ 						      CS42L42_PLL_START_MASK, 0);
+ 		}
+ 	} else {
+ 		if (!cs42l42->stream_use) {
+ 			/* SCLK must be running before codec unmute */
+-			if ((cs42l42->bclk < 11289600) && (cs42l42->sclk < 11289600)) {
++			if (pll_ratio_table[cs42l42->pll_config].mclk_src_sel) {
+ 				snd_soc_component_update_bits(component, CS42L42_PLL_CTL1,
+ 							      CS42L42_PLL_START_MASK, 1);
+ 
+@@ -917,6 +928,12 @@ static int cs42l42_mute_stream(struct snd_soc_dai *dai, int mute, int stream)
+ 							       CS42L42_PLL_LOCK_TIMEOUT_US);
+ 				if (ret < 0)
+ 					dev_warn(component->dev, "PLL failed to lock: %d\n", ret);
++
++				/* PLL must be running to drive glitchless switch logic */
++				snd_soc_component_update_bits(component,
++							      CS42L42_MCLK_SRC_SEL,
++							      CS42L42_MCLK_SRC_SEL_MASK,
++							      CS42L42_MCLK_SRC_SEL_MASK);
+ 			}
+ 
+ 			/* Mark SCLK as present, turn off internal oscillator */
+diff --git a/sound/soc/codecs/cs42l42.h b/sound/soc/codecs/cs42l42.h
+index 5384105afe503..10cf2e4c8eadb 100644
+--- a/sound/soc/codecs/cs42l42.h
++++ b/sound/soc/codecs/cs42l42.h
+@@ -653,6 +653,8 @@
+ 
+ /* Page 0x25 Audio Port Registers */
+ #define CS42L42_SP_RX_CH_SEL		(CS42L42_PAGE_25 + 0x01)
++#define CS42L42_SP_RX_CHB_SEL_SHIFT	2
++#define CS42L42_SP_RX_CHB_SEL_MASK	(3 << CS42L42_SP_RX_CHB_SEL_SHIFT)
+ 
+ #define CS42L42_SP_RX_ISOC_CTL		(CS42L42_PAGE_25 + 0x02)
+ #define CS42L42_SP_RX_RSYNC_SHIFT	6
+@@ -775,6 +777,7 @@ struct  cs42l42_private {
+ 	struct gpio_desc *reset_gpio;
+ 	struct completion pdn_done;
+ 	struct snd_soc_jack jack;
++	int pll_config;
+ 	int bclk;
+ 	u32 sclk;
+ 	u32 srate;
+diff --git a/sound/soc/codecs/tlv320aic31xx.c b/sound/soc/codecs/tlv320aic31xx.c
+index 51870d50f4195..b522efe9f53d1 100644
+--- a/sound/soc/codecs/tlv320aic31xx.c
++++ b/sound/soc/codecs/tlv320aic31xx.c
+@@ -35,6 +35,9 @@
+ 
+ #include "tlv320aic31xx.h"
+ 
++static int aic31xx_set_jack(struct snd_soc_component *component,
++                            struct snd_soc_jack *jack, void *data);
++
+ static const struct reg_default aic31xx_reg_defaults[] = {
+ 	{ AIC31XX_CLKMUX, 0x00 },
+ 	{ AIC31XX_PLLPR, 0x11 },
+@@ -1256,6 +1259,13 @@ static int aic31xx_power_on(struct snd_soc_component *component)
+ 		return ret;
+ 	}
+ 
++	/*
++	 * The jack detection configuration is in the same register
++	 * that is used to report jack detect status so is volatile
++	 * and not covered by the cache sync, restore it separately.
++	 */
++	aic31xx_set_jack(component, aic31xx->jack, NULL);
++
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+index 4124aa2fc2479..5db2f4865bbba 100644
+--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
++++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+@@ -127,7 +127,7 @@ static void sst_fill_alloc_params(struct snd_pcm_substream *substream,
+ 	snd_pcm_uframes_t period_size;
+ 	ssize_t periodbytes;
+ 	ssize_t buffer_bytes = snd_pcm_lib_buffer_bytes(substream);
+-	u32 buffer_addr = virt_to_phys(substream->dma_buffer.area);
++	u32 buffer_addr = substream->runtime->dma_addr;
+ 
+ 	channels = substream->runtime->channels;
+ 	period_size = substream->runtime->period_size;
+@@ -233,7 +233,6 @@ static int sst_platform_alloc_stream(struct snd_pcm_substream *substream,
+ 	/* set codec params and inform SST driver the same */
+ 	sst_fill_pcm_params(substream, &param);
+ 	sst_fill_alloc_params(substream, &alloc_params);
+-	substream->runtime->dma_area = substream->dma_buffer.area;
+ 	str_params.sparams = param;
+ 	str_params.aparams = alloc_params;
+ 	str_params.codec = SST_CODEC_TYPE_PCM;
+diff --git a/sound/soc/kirkwood/kirkwood-dma.c b/sound/soc/kirkwood/kirkwood-dma.c
+index c2a5933bfcfc1..700a18561a940 100644
+--- a/sound/soc/kirkwood/kirkwood-dma.c
++++ b/sound/soc/kirkwood/kirkwood-dma.c
+@@ -104,8 +104,6 @@ static int kirkwood_dma_open(struct snd_soc_component *component,
+ 	int err;
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+ 	struct kirkwood_dma_data *priv = kirkwood_priv(substream);
+-	const struct mbus_dram_target_info *dram;
+-	unsigned long addr;
+ 
+ 	snd_soc_set_runtime_hwparams(substream, &kirkwood_dma_snd_hw);
+ 
+@@ -142,20 +140,14 @@ static int kirkwood_dma_open(struct snd_soc_component *component,
+ 		writel((unsigned int)-1, priv->io + KIRKWOOD_ERR_MASK);
+ 	}
+ 
+-	dram = mv_mbus_dram_info();
+-	addr = substream->dma_buffer.addr;
+ 	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ 		if (priv->substream_play)
+ 			return -EBUSY;
+ 		priv->substream_play = substream;
+-		kirkwood_dma_conf_mbus_windows(priv->io,
+-			KIRKWOOD_PLAYBACK_WIN, addr, dram);
+ 	} else {
+ 		if (priv->substream_rec)
+ 			return -EBUSY;
+ 		priv->substream_rec = substream;
+-		kirkwood_dma_conf_mbus_windows(priv->io,
+-			KIRKWOOD_RECORD_WIN, addr, dram);
+ 	}
+ 
+ 	return 0;
+@@ -182,6 +174,23 @@ static int kirkwood_dma_close(struct snd_soc_component *component,
+ 	return 0;
+ }
+ 
++static int kirkwood_dma_hw_params(struct snd_soc_component *component,
++				  struct snd_pcm_substream *substream,
++				  struct snd_pcm_hw_params *params)
++{
++	struct kirkwood_dma_data *priv = kirkwood_priv(substream);
++	const struct mbus_dram_target_info *dram = mv_mbus_dram_info();
++	unsigned long addr = substream->runtime->dma_addr;
++
++	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
++		kirkwood_dma_conf_mbus_windows(priv->io,
++			KIRKWOOD_PLAYBACK_WIN, addr, dram);
++	else
++		kirkwood_dma_conf_mbus_windows(priv->io,
++			KIRKWOOD_RECORD_WIN, addr, dram);
++	return 0;
++}
++
+ static int kirkwood_dma_prepare(struct snd_soc_component *component,
+ 				struct snd_pcm_substream *substream)
+ {
+@@ -246,6 +255,7 @@ const struct snd_soc_component_driver kirkwood_soc_component = {
+ 	.name		= DRV_NAME,
+ 	.open		= kirkwood_dma_open,
+ 	.close		= kirkwood_dma_close,
++	.hw_params	= kirkwood_dma_hw_params,
+ 	.prepare	= kirkwood_dma_prepare,
+ 	.pointer	= kirkwood_dma_pointer,
+ 	.pcm_construct	= kirkwood_dma_new,
+diff --git a/sound/soc/sof/intel/Kconfig b/sound/soc/sof/intel/Kconfig
+index 4bce89b5ea407..4447f515e8b19 100644
+--- a/sound/soc/sof/intel/Kconfig
++++ b/sound/soc/sof/intel/Kconfig
+@@ -278,6 +278,8 @@ config SND_SOC_SOF_HDA
+ 
+ config SND_SOC_SOF_INTEL_SOUNDWIRE_LINK_BASELINE
+ 	tristate
++	select SOUNDWIRE_INTEL if SND_SOC_SOF_INTEL_SOUNDWIRE
++	select SND_INTEL_SOUNDWIRE_ACPI if SND_SOC_SOF_INTEL_SOUNDWIRE
+ 
+ config SND_SOC_SOF_INTEL_SOUNDWIRE
+ 	tristate "SOF support for SoundWire"
+@@ -285,8 +287,6 @@ config SND_SOC_SOF_INTEL_SOUNDWIRE
+ 	depends on SND_SOC_SOF_INTEL_SOUNDWIRE_LINK_BASELINE
+ 	depends on ACPI && SOUNDWIRE
+ 	depends on !(SOUNDWIRE=m && SND_SOC_SOF_INTEL_SOUNDWIRE_LINK_BASELINE=y)
+-	select SOUNDWIRE_INTEL
+-	select SND_INTEL_SOUNDWIRE_ACPI
+ 	help
+ 	  This adds support for SoundWire with Sound Open Firmware
+ 	  for Intel(R) platforms.
+diff --git a/sound/soc/sof/intel/hda-ipc.c b/sound/soc/sof/intel/hda-ipc.c
+index c91aa951df226..acfeca42604cd 100644
+--- a/sound/soc/sof/intel/hda-ipc.c
++++ b/sound/soc/sof/intel/hda-ipc.c
+@@ -107,8 +107,8 @@ void hda_dsp_ipc_get_reply(struct snd_sof_dev *sdev)
+ 	} else {
+ 		/* reply correct size ? */
+ 		if (reply.hdr.size != msg->reply_size &&
+-			/* getter payload is never known upfront */
+-			!(reply.hdr.cmd & SOF_IPC_GLB_PROBE)) {
++		    /* getter payload is never known upfront */
++		    ((reply.hdr.cmd & SOF_GLB_TYPE_MASK) != SOF_IPC_GLB_PROBE)) {
+ 			dev_err(sdev->dev, "error: reply expected %zu got %u bytes\n",
+ 				msg->reply_size, reply.hdr.size);
+ 			ret = -EINVAL;
+diff --git a/sound/soc/uniphier/aio-dma.c b/sound/soc/uniphier/aio-dma.c
+index 3c1628a3a1acd..3d9736e7381f8 100644
+--- a/sound/soc/uniphier/aio-dma.c
++++ b/sound/soc/uniphier/aio-dma.c
+@@ -198,7 +198,7 @@ static int uniphier_aiodma_mmap(struct snd_soc_component *component,
+ 	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+ 
+ 	return remap_pfn_range(vma, vma->vm_start,
+-			       substream->dma_buffer.addr >> PAGE_SHIFT,
++			       substream->runtime->dma_addr >> PAGE_SHIFT,
+ 			       vma->vm_end - vma->vm_start, vma->vm_page_prot);
+ }
+ 
+diff --git a/sound/soc/xilinx/xlnx_formatter_pcm.c b/sound/soc/xilinx/xlnx_formatter_pcm.c
+index 1d59fb668c77a..91afea9d5de67 100644
+--- a/sound/soc/xilinx/xlnx_formatter_pcm.c
++++ b/sound/soc/xilinx/xlnx_formatter_pcm.c
+@@ -452,8 +452,8 @@ static int xlnx_formatter_pcm_hw_params(struct snd_soc_component *component,
+ 
+ 	stream_data->buffer_size = size;
+ 
+-	low = lower_32_bits(substream->dma_buffer.addr);
+-	high = upper_32_bits(substream->dma_buffer.addr);
++	low = lower_32_bits(runtime->dma_addr);
++	high = upper_32_bits(runtime->dma_addr);
+ 	writel(low, stream_data->mmio + XLNX_AUD_BUFF_ADDR_LSB);
+ 	writel(high, stream_data->mmio + XLNX_AUD_BUFF_ADDR_MSB);
+ 
+diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
+index d57e13a13798e..1d9e5b35524cf 100644
+--- a/tools/lib/bpf/btf.c
++++ b/tools/lib/bpf/btf.c
+@@ -805,6 +805,7 @@ static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf)
+ 	btf->nr_types = 0;
+ 	btf->start_id = 1;
+ 	btf->start_str_off = 0;
++	btf->fd = -1;
+ 
+ 	if (base_btf) {
+ 		btf->base_btf = base_btf;
+@@ -833,8 +834,6 @@ static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf)
+ 	if (err)
+ 		goto done;
+ 
+-	btf->fd = -1;
+-
+ done:
+ 	if (err) {
+ 		btf__free(btf);
+diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
+index ecaae2927ab81..cd8c703dde718 100644
+--- a/tools/lib/bpf/libbpf_probes.c
++++ b/tools/lib/bpf/libbpf_probes.c
+@@ -75,6 +75,9 @@ probe_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns,
+ 	case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
+ 		xattr.expected_attach_type = BPF_CGROUP_INET4_CONNECT;
+ 		break;
++	case BPF_PROG_TYPE_CGROUP_SOCKOPT:
++		xattr.expected_attach_type = BPF_CGROUP_GETSOCKOPT;
++		break;
+ 	case BPF_PROG_TYPE_SK_LOOKUP:
+ 		xattr.expected_attach_type = BPF_SK_LOOKUP;
+ 		break;
+@@ -104,7 +107,6 @@ probe_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns,
+ 	case BPF_PROG_TYPE_SK_REUSEPORT:
+ 	case BPF_PROG_TYPE_FLOW_DISSECTOR:
+ 	case BPF_PROG_TYPE_CGROUP_SYSCTL:
+-	case BPF_PROG_TYPE_CGROUP_SOCKOPT:
+ 	case BPF_PROG_TYPE_TRACING:
+ 	case BPF_PROG_TYPE_STRUCT_OPS:
+ 	case BPF_PROG_TYPE_EXT:
+diff --git a/tools/testing/selftests/sgx/sigstruct.c b/tools/testing/selftests/sgx/sigstruct.c
+index dee7a3d6c5a5f..92bbc5a15c39f 100644
+--- a/tools/testing/selftests/sgx/sigstruct.c
++++ b/tools/testing/selftests/sgx/sigstruct.c
+@@ -55,10 +55,27 @@ static bool alloc_q1q2_ctx(const uint8_t *s, const uint8_t *m,
+ 	return true;
+ }
+ 
++static void reverse_bytes(void *data, int length)
++{
++	int i = 0;
++	int j = length - 1;
++	uint8_t temp;
++	uint8_t *ptr = data;
++
++	while (i < j) {
++		temp = ptr[i];
++		ptr[i] = ptr[j];
++		ptr[j] = temp;
++		i++;
++		j--;
++	}
++}
++
+ static bool calc_q1q2(const uint8_t *s, const uint8_t *m, uint8_t *q1,
+ 		      uint8_t *q2)
+ {
+ 	struct q1q2_ctx ctx;
++	int len;
+ 
+ 	if (!alloc_q1q2_ctx(s, m, &ctx)) {
+ 		fprintf(stderr, "Not enough memory for Q1Q2 calculation\n");
+@@ -89,8 +106,10 @@ static bool calc_q1q2(const uint8_t *s, const uint8_t *m, uint8_t *q1,
+ 		goto out;
+ 	}
+ 
+-	BN_bn2bin(ctx.q1, q1);
+-	BN_bn2bin(ctx.q2, q2);
++	len = BN_bn2bin(ctx.q1, q1);
++	reverse_bytes(q1, len);
++	len = BN_bn2bin(ctx.q2, q2);
++	reverse_bytes(q2, len);
+ 
+ 	free_q1q2_ctx(&ctx);
+ 	return true;
+@@ -152,22 +171,6 @@ static RSA *gen_sign_key(void)
+ 	return key;
+ }
+ 
+-static void reverse_bytes(void *data, int length)
+-{
+-	int i = 0;
+-	int j = length - 1;
+-	uint8_t temp;
+-	uint8_t *ptr = data;
+-
+-	while (i < j) {
+-		temp = ptr[i];
+-		ptr[i] = ptr[j];
+-		ptr[j] = temp;
+-		i++;
+-		j--;
+-	}
+-}
+-
+ enum mrtags {
+ 	MRECREATE = 0x0045544145524345,
+ 	MREADD = 0x0000000044444145,
+@@ -367,8 +370,6 @@ bool encl_measure(struct encl *encl)
+ 	/* BE -> LE */
+ 	reverse_bytes(sigstruct->signature, SGX_MODULUS_SIZE);
+ 	reverse_bytes(sigstruct->modulus, SGX_MODULUS_SIZE);
+-	reverse_bytes(sigstruct->q1, SGX_MODULUS_SIZE);
+-	reverse_bytes(sigstruct->q2, SGX_MODULUS_SIZE);
+ 
+ 	EVP_MD_CTX_destroy(ctx);
+ 	RSA_free(key);
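
A side note on one of the hunks above: the igmp_ifc_timer_expire() change replaces a plain decrement of in_dev->mr_ifc_count with a cmpxchg() retry loop (plus READ_ONCE()/WRITE_ONCE() at the other call sites), so the timer can race with igmp_ifc_event() and ip_mc_down() without taking a lock. A minimal userspace C11 model of the same retry pattern, with toy names rather than the kernel's types:

#include <stdatomic.h>
#include <stdio.h>

/* Decrement a counter that another context may concurrently reset to 0,
 * without locking: retry the compare-and-swap until it wins or the
 * counter is observed as 0 (mirrors the restart label in the patch).
 */
static _Atomic unsigned int mr_ifc_count = 3;

static int try_decrement(void)
{
	unsigned int cur = atomic_load(&mr_ifc_count);

	while (cur != 0) {
		/* on failure, cur is reloaded with the current value */
		if (atomic_compare_exchange_weak(&mr_ifc_count, &cur, cur - 1))
			return 1;	/* consumed one pending report */
	}
	return 0;	/* another context zeroed it first */
}

int main(void)
{
	while (try_decrement())
		printf("reports left: %u\n", atomic_load(&mr_ifc_count));
	return 0;
}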



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-18 22:42 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-18 22:42 UTC (permalink / raw
  To: gentoo-commits

commit:     e52994525e2c9706fc9e4376f2d4bf375e844b70
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 18 22:41:54 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 18 22:41:54 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e5299452

Bluetooth USB Alt 3 Fix

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                            |  4 ++
 2700_Bluetooth-usb-alt-3-for-WBS.patch | 84 ++++++++++++++++++++++++++++++++++
 2 files changed, 88 insertions(+)

diff --git a/0000_README b/0000_README
index 2eae665..eec200c 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2700_Bluetooth-usb-alt-3-for-WBS.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git/commit/?id=55981d3541812234e687062926ff199c83f79a39
+Desc:   Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
+
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requies REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2700_Bluetooth-usb-alt-3-for-WBS.patch b/2700_Bluetooth-usb-alt-3-for-WBS.patch
new file mode 100644
index 0000000..e0a67ea
--- /dev/null
+++ b/2700_Bluetooth-usb-alt-3-for-WBS.patch
@@ -0,0 +1,84 @@
+From 55981d3541812234e687062926ff199c83f79a39 Mon Sep 17 00:00:00 2001
+From: Pauli Virtanen <pav@iki.fi>
+Date: Mon, 26 Jul 2021 21:02:06 +0300
+Subject: Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Some USB BT adapters don't satisfy the MTU requirement mentioned in
+commit e848dbd364ac ("Bluetooth: btusb: Add support USB ALT 3 for WBS")
+and have ALT 3 setting that produces no/garbled audio. Some adapters
+with larger MTU were also reported to have problems with ALT 3.
+
+Add a flag and check it and MTU before selecting ALT 3, falling back to
+ALT 1. Enable the flag for Realtek, restoring the previous behavior for
+non-Realtek devices.
+
+Tested with USB adapters (mtu<72, no/garbled sound with ALT3, ALT1
+works) BCM20702A1 0b05:17cb, CSR8510A10 0a12:0001, and (mtu>=72, ALT3
+works) RTL8761BU 0bda:8771, Intel AX200 8087:0029 (after disabling
+ALT6). Also got reports for (mtu>=72, ALT 3 reported to produce bad
+audio) Intel 8087:0a2b.
+
+Signed-off-by: Pauli Virtanen <pav@iki.fi>
+Fixes: e848dbd364ac ("Bluetooth: btusb: Add support USB ALT 3 for WBS")
+Tested-by: Michał Kępień <kernel@kempniu.pl>
+Tested-by: Jonathan Lampérth <jon@h4n.dev>
+Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
+---
+ drivers/bluetooth/btusb.c | 22 ++++++++++++++--------
+ 1 file changed, 14 insertions(+), 8 deletions(-)
+
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 488f110e17e27..2336f731dbc7e 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -528,6 +528,7 @@ static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
+ #define BTUSB_HW_RESET_ACTIVE	12
+ #define BTUSB_TX_WAIT_VND_EVT	13
+ #define BTUSB_WAKEUP_DISABLE	14
++#define BTUSB_USE_ALT3_FOR_WBS	15
+ 
+ struct btusb_data {
+ 	struct hci_dev       *hdev;
+@@ -1761,16 +1762,20 @@ static void btusb_work(struct work_struct *work)
+ 			/* Bluetooth USB spec recommends alt 6 (63 bytes), but
+ 			 * many adapters do not support it.  Alt 1 appears to
+ 			 * work for all adapters that do not have alt 6, and
+-			 * which work with WBS at all.
++			 * which work with WBS at all.  Some devices prefer
++			 * alt 3 (HCI payload >= 60 Bytes let air packet
++			 * data satisfy 60 bytes), requiring
++			 * MTU >= 3 (packets) * 25 (size) - 3 (headers) = 72
++			 * see also Core spec 5, vol 4, B 2.1.1 & Table 2.1.
+ 			 */
+-			new_alts = btusb_find_altsetting(data, 6) ? 6 : 1;
+-			/* Because mSBC frames do not need to be aligned to the
+-			 * SCO packet boundary. If support the Alt 3, use the
+-			 * Alt 3 for HCI payload >= 60 Bytes let air packet
+-			 * data satisfy 60 bytes.
+-			 */
+-			if (new_alts == 1 && btusb_find_altsetting(data, 3))
++			if (btusb_find_altsetting(data, 6))
++				new_alts = 6;
++			else if (btusb_find_altsetting(data, 3) &&
++				 hdev->sco_mtu >= 72 &&
++				 test_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags))
+ 				new_alts = 3;
++			else
++				new_alts = 1;
+ 		}
+ 
+ 		if (btusb_switch_alt_setting(hdev, new_alts) < 0)
+@@ -3882,6 +3887,7 @@ static int btusb_probe(struct usb_interface *intf,
+ 		 * (DEVICE_REMOTE_WAKEUP)
+ 		 */
+ 		set_bit(BTUSB_WAKEUP_DISABLE, &data->flags);
++		set_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags);
+ 	}
+ 
+ 	if (!reset)
+-- 
+cgit 1.2.3-1.el7
+
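
For readers skimming the quoted patch: the new selection logic is a three-way fallback, and the 72-byte bound is just the arithmetic from the in-patch comment (3 packets * 25 bytes - 3 bytes of headers). A condensed standalone C sketch of that decision, with invented helper names rather than the driver's API:

#include <stdbool.h>

/* Toy restatement of the alt-setting choice made in btusb_work():
 * prefer alt 6, accept alt 3 only for flagged adapters with a large
 * enough SCO MTU, otherwise fall back to alt 1.
 */
int pick_wbs_altsetting(bool has_alt6, bool has_alt3, int sco_mtu,
			bool use_alt3_quirk)
{
	if (has_alt6)
		return 6;	/* spec-recommended 63-byte alternate */
	if (has_alt3 && sco_mtu >= 3 * 25 - 3 && use_alt3_quirk)
		return 3;	/* BTUSB_USE_ALT3_FOR_WBS adapters only */
	return 1;	/* safe default for everything else */
}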



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-21 14:27 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-21 14:27 UTC (permalink / raw
  To: gentoo-commits

commit:     e571947b96baf9928b009d774632f2f17dafe943
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 21 14:27:10 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Aug 21 14:27:10 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e571947b

Update to CPU Optimization patch (USE=experimental)

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5010_enable-cpu-optimizations-universal.patch | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index c45d13b..e37528f 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -219,7 +219,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MZEN3
 +	bool "AMD Zen 3"
-+	depends on ( CC_IS_GCC && GCC_VERSION >= 100300 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION >= 100300) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	help
 +	  Select this for AMD Family 19h Zen 3 processors.
 +
@@ -378,7 +378,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MCOOPERLAKE
 +	bool "Intel Cooper Lake"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 100100 ) || ( CC_IS_CLANG && CLANG_VERSION >= 100000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
 +	select X86_P6_NOP
 +	help
 +
@@ -388,7 +388,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MTIGERLAKE
 +	bool "Intel Tiger Lake"
-+	depends on  ( CC_IS_GCC && GCC_VERSION > 100100 ) || ( CC_IS_CLANG && CLANG_VERSION >= 100000 )
++	depends on  (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
 +	select X86_P6_NOP
 +	help
 +
@@ -398,7 +398,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MSAPPHIRERAPIDS
 +	bool "Intel Sapphire Rapids"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	select X86_P6_NOP
 +	help
 +
@@ -408,7 +408,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MROCKETLAKE
 +	bool "Intel Rocket Lake"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	select X86_P6_NOP
 +	help
 +
@@ -418,7 +418,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config MALDERLAKE
 +	bool "Intel Alder Lake"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && CLANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	select X86_P6_NOP
 +	help
 +
@@ -435,7 +435,7 @@ index 814fe0d349b0..8acf6519d279 100644
  
 +config GENERIC_CPU2
 +	bool "Generic-x86-64-v2"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && LANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	depends on X86_64
 +	help
 +	  Generic x86-64 CPU.
@@ -443,7 +443,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config GENERIC_CPU3
 +	bool "Generic-x86-64-v3"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && LANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	depends on X86_64
 +	help
 +	  Generic x86-64-v3 CPU with v3 instructions.
@@ -451,7 +451,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +
 +config GENERIC_CPU4
 +	bool "Generic-x86-64-v4"
-+	depends on ( CC_IS_GCC && GCC_VERSION > 110000 ) || ( CC_IS_CLANG && LANG_VERSION >= 120000 )
++	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
 +	depends on X86_64
 +	help
 +	  Generic x86-64 CPU with v4 instructions.



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-24 19:56 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-24 19:56 UTC (permalink / raw
  To: gentoo-commits

commit:     e0e46bc488f191689b31fbb89901c740a31de9ec
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug 24 19:53:28 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug 24 19:56:22 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e0e46bc4

Add CONFIG option to print firmware info

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 864f86a..fd8f955 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2021-08-09 07:18:54.945580285 -0400
-+++ b/distro/Kconfig	2021-08-09 19:15:34.418191114 -0400
-@@ -0,0 +1,267 @@
+--- /dev/null	2021-08-24 15:34:24.700702871 -0400
++++ b/distro/Kconfig	2021-08-24 15:49:16.965525424 -0400
+@@ -0,0 +1,281 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -275,6 +275,20 @@
 +	select CPU_SW_DOMAIN_PAN
 +
 +endif
++
++config GENTOO_PRINT_FIRMWARE_INFO
++	bool "Print firmware information that the kernel attempts to load"
++
++	depends on GENTOO_LINUX
++	default n 
++
++	help
++		Enable this option to print information about firmware that the kernel
++		is attempting to load.  This information can be accessible via the
++		dmesg command-line utility
++
++		See the settings that become available for more details and fine-tuning.
++
 +endmenu
 diff --git a/security/Kconfig b/security/Kconfig
 index 7561f6f99..01f0bf73f 100644



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-24 20:00 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-24 20:00 UTC (permalink / raw
  To: gentoo-commits

commit:     a6d2f668a35484d8afbf3bd7123b9ef349ae50dc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Aug 24 19:58:46 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Aug 24 19:58:46 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a6d2f668

Add support for printing firmware info on load attempt

This requires the kernel config parameter:

CONFIG_GENTOO_PRINT_FIRMWARE_INFO

located under the GENTOO_LINUX distro config options

Thanks to Georgy Yakovlev

Bug: https://bugs.gentoo.org/732852

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                               |  4 ++++
 3000_Support-printing-firmware-info.patch | 14 ++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/0000_README b/0000_README
index eec200c..559bb12 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch:  2920_sign-file-patch-for-libressl.patch
 From:   https://bugs.gentoo.org/717166
 Desc:   sign-file: full functionality with modern LibreSSL
 
+Patch:  3000_Support-printing-firmware-info.patch
+From:   https://bugs.gentoo.org/732852
+Desc:   Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.

diff --git a/3000_Support-printing-firmware-info.patch b/3000_Support-printing-firmware-info.patch
new file mode 100644
index 0000000..a630cfb
--- /dev/null
+++ b/3000_Support-printing-firmware-info.patch
@@ -0,0 +1,14 @@
+--- a/drivers/base/firmware_loader/main.c	2021-08-24 15:42:07.025482085 -0400
++++ b/drivers/base/firmware_loader/main.c	2021-08-24 15:44:40.782975313 -0400
+@@ -809,6 +809,11 @@ _request_firmware(const struct firmware
+ 
+ 	ret = _request_firmware_prepare(&fw, name, device, buf, size,
+ 					offset, opt_flags);
++
++#ifdef CONFIG_GENTOO_PRINT_FIRMWARE_INFO
++        printk(KERN_NOTICE "Loading firmware: %s\n", name);
++#endif
++
+ 	if (ret <= 0) /* error or already assigned */
+ 		goto out;
+ 
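
The hunk above compiles a single notice-level print into the common _request_firmware() path, so every lookup is reported via the kernel log once the option is set. A userspace analogue of the guarded-print pattern (the macro below stands in for the Kconfig symbol, and the firmware name is hypothetical):

#include <stdio.h>

/* Print one line per firmware request when the build-time switch is on,
 * mirroring the #ifdef CONFIG_GENTOO_PRINT_FIRMWARE_INFO block above.
 */
#ifndef GENTOO_PRINT_FIRMWARE_INFO
#define GENTOO_PRINT_FIRMWARE_INFO 1
#endif

static void report_fw_request(const char *name)
{
	if (GENTOO_PRINT_FIRMWARE_INFO)
		printf("Loading firmware: %s\n", name);
}

int main(void)
{
	report_fw_request("example/wifi-fw.bin");	/* hypothetical name */
	return 0;
}

With the real option enabled, each request shows up in dmesg as "Loading firmware: <name>".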



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-25 16:23 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-25 16:23 UTC (permalink / raw
  To: gentoo-commits

commit:     a5c75d3a7939e9c83280bb8b91591be8631926b9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 25 16:20:53 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 25 16:22:59 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a5c75d3a

Change CONFIG_GENTOO_PRINT_FIRMWARE_INFO to y

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index fd8f955..d2175f0 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -280,7 +280,7 @@
 +	bool "Print firmware information that the kernel attempts to load"
 +
 +	depends on GENTOO_LINUX
-+	default n 
++	default y
 +
 +	help
 +		Enable this option to print information about firmware that the kernel



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-26 14:33 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-26 14:33 UTC (permalink / raw
  To: gentoo-commits

commit:     152bb80b16b9ddcfcb71e91845e3fc4cfd7d8380
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 26 14:33:11 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 26 14:33:11 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=152bb80b

Linux patch 5.13.13

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1012_linux-5.13.13.patch | 4898 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4902 insertions(+)

diff --git a/0000_README b/0000_README
index 559bb12..72c942f 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1011_linux-5.13.12.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.12
 
+Patch:  1012_linux-5.13.13.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.13
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1012_linux-5.13.13.patch b/1012_linux-5.13.13.patch
new file mode 100644
index 0000000..b7d926a
--- /dev/null
+++ b/1012_linux-5.13.13.patch
@@ -0,0 +1,4898 @@
+diff --git a/Makefile b/Makefile
+index 2458a4abcbccd..ed4031db38947 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arm/boot/dts/am43x-epos-evm.dts b/arch/arm/boot/dts/am43x-epos-evm.dts
+index 8b696107eef8c..d2aebdbc7e0f9 100644
+--- a/arch/arm/boot/dts/am43x-epos-evm.dts
++++ b/arch/arm/boot/dts/am43x-epos-evm.dts
+@@ -582,7 +582,7 @@
+ 	status = "okay";
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&i2c0_pins>;
+-	clock-frequency = <400000>;
++	clock-frequency = <100000>;
+ 
+ 	tps65218: tps65218@24 {
+ 		reg = <0x24>;
+diff --git a/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi b/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
+index c9b9064323415..1815361fe73ce 100644
+--- a/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
++++ b/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
+@@ -755,14 +755,14 @@
+ 			status = "disabled";
+ 		};
+ 
+-		vica: intc@10140000 {
++		vica: interrupt-controller@10140000 {
+ 			compatible = "arm,versatile-vic";
+ 			interrupt-controller;
+ 			#interrupt-cells = <1>;
+ 			reg = <0x10140000 0x20>;
+ 		};
+ 
+-		vicb: intc@10140020 {
++		vicb: interrupt-controller@10140020 {
+ 			compatible = "arm,versatile-vic";
+ 			interrupt-controller;
+ 			#interrupt-cells = <1>;
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index b52481f0605d8..02997b253dee8 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -181,6 +181,8 @@ archprepare:
+ # We use MRPROPER_FILES and CLEAN_FILES now
+ archclean:
+ 	$(Q)$(MAKE) $(clean)=$(boot)
++	$(Q)$(MAKE) $(clean)=arch/arm64/kernel/vdso
++	$(Q)$(MAKE) $(clean)=arch/arm64/kernel/vdso32
+ 
+ ifeq ($(KBUILD_EXTMOD),)
+ # We need to generate vdso-offsets.h before compiling certain files in kernel/.
+diff --git a/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts b/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts
+index 23cdcc9f7c725..1ccca83292ac9 100644
+--- a/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts
++++ b/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /* Copyright (c) 2015, LGE Inc. All rights reserved.
+  * Copyright (c) 2016, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2021, Petr Vorel <petr.vorel@gmail.com>
+  */
+ 
+ /dts-v1/;
+@@ -9,6 +10,9 @@
+ #include "pm8994.dtsi"
+ #include "pmi8994.dtsi"
+ 
++/* cont_splash_mem has different memory mapping */
++/delete-node/ &cont_splash_mem;
++
+ / {
+ 	model = "LG Nexus 5X";
+ 	compatible = "lg,bullhead", "qcom,msm8992";
+@@ -17,6 +21,9 @@
+ 	qcom,board-id = <0xb64 0>;
+ 	qcom,pmic-id = <0x10009 0x1000A 0x0 0x0>;
+ 
++	/* Bullhead firmware doesn't support PSCI */
++	/delete-node/ psci;
++
+ 	aliases {
+ 		serial0 = &blsp1_uart2;
+ 	};
+@@ -38,6 +45,11 @@
+ 			ftrace-size = <0x10000>;
+ 			pmsg-size = <0x20000>;
+ 		};
++
++		cont_splash_mem: memory@3400000 {
++			reg = <0 0x03400000 0 0x1200000>;
++			no-map;
++		};
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8994-angler-rev-101.dts b/arch/arm64/boot/dts/qcom/msm8994-angler-rev-101.dts
+index baa55643b40ff..801995af3dfcd 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994-angler-rev-101.dts
++++ b/arch/arm64/boot/dts/qcom/msm8994-angler-rev-101.dts
+@@ -1,12 +1,16 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /* Copyright (c) 2015, Huawei Inc. All rights reserved.
+  * Copyright (c) 2016, The Linux Foundation. All rights reserved.
++ * Copyright (c) 2021, Petr Vorel <petr.vorel@gmail.com>
+  */
+ 
+ /dts-v1/;
+ 
+ #include "msm8994.dtsi"
+ 
++/* Angler's firmware does not report where the memory is allocated */
++/delete-node/ &cont_splash_mem;
++
+ / {
+ 	model = "Huawei Nexus 6P";
+ 	compatible = "huawei,angler", "qcom,msm8994";
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi b/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi
+index f712771df0c77..846eebebd8317 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi
+@@ -69,7 +69,7 @@
+ 		};
+ 		rmtfs_upper_guard: memory@f5d01000 {
+ 			no-map;
+-			reg = <0 0xf5d01000 0 0x2000>;
++			reg = <0 0xf5d01000 0 0x1000>;
+ 		};
+ 
+ 		/*
+@@ -78,7 +78,7 @@
+ 		 */
+ 		removed_region: memory@88f00000 {
+ 			no-map;
+-			reg = <0 0x88f00000 0 0x200000>;
++			reg = <0 0x88f00000 0 0x1c00000>;
+ 		};
+ 
+ 		ramoops: ramoops@ac300000 {
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index c2a709a384e9e..d7591a4621a2f 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -700,7 +700,7 @@
+ 		left_spkr: wsa8810-left{
+ 			compatible = "sdw10217211000";
+ 			reg = <0 3>;
+-			powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_HIGH>;
++			powerdown-gpios = <&wcdgpio 1 GPIO_ACTIVE_HIGH>;
+ 			#thermal-sensor-cells = <0>;
+ 			sound-name-prefix = "SpkrLeft";
+ 			#sound-dai-cells = <0>;
+@@ -708,7 +708,7 @@
+ 
+ 		right_spkr: wsa8810-right{
+ 			compatible = "sdw10217211000";
+-			powerdown-gpios = <&wcdgpio 3 GPIO_ACTIVE_HIGH>;
++			powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_HIGH>;
+ 			reg = <0 4>;
+ 			#thermal-sensor-cells = <0>;
+ 			sound-name-prefix = "SpkrRight";
+diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
+index 1670dfe9d4f10..c6cfca9d2bd77 100644
+--- a/arch/powerpc/include/asm/book3s/32/kup.h
++++ b/arch/powerpc/include/asm/book3s/32/kup.h
+@@ -4,9 +4,50 @@
+ 
+ #include <asm/bug.h>
+ #include <asm/book3s/32/mmu-hash.h>
++#include <asm/mmu.h>
++#include <asm/synch.h>
+ 
+ #ifndef __ASSEMBLY__
+ 
++static __always_inline bool kuep_is_disabled(void)
++{
++	return !IS_ENABLED(CONFIG_PPC_KUEP);
++}
++
++static inline void kuep_lock(void)
++{
++	if (kuep_is_disabled())
++		return;
++
++	update_user_segments(mfsr(0) | SR_NX);
++	/*
++	 * This isync() shouldn't be necessary as the kernel is not excepted to
++	 * run any instruction in userspace soon after the update of segments,
++	 * but hash based cores (at least G3) seem to exhibit a random
++	 * behaviour when the 'isync' is not there. 603 cores don't have this
++	 * behaviour so don't do the 'isync' as it saves several CPU cycles.
++	 */
++	if (mmu_has_feature(MMU_FTR_HPTE_TABLE))
++		isync();	/* Context sync required after mtsr() */
++}
++
++static inline void kuep_unlock(void)
++{
++	if (kuep_is_disabled())
++		return;
++
++	update_user_segments(mfsr(0) & ~SR_NX);
++	/*
++	 * This isync() shouldn't be necessary as a 'rfi' will soon be executed
++	 * to return to userspace, but hash based cores (at least G3) seem to
++	 * exhibit a random behaviour when the 'isync' is not there. 603 cores
++	 * don't have this behaviour so don't do the 'isync' as it saves several
++	 * CPU cycles.
++	 */
++	if (mmu_has_feature(MMU_FTR_HPTE_TABLE))
++		isync();	/* Context sync required after mtsr() */
++}
++
+ #ifdef CONFIG_PPC_KUAP
+ 
+ #include <linux/sched.h>
+diff --git a/arch/powerpc/include/asm/book3s/32/mmu-hash.h b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
+index b85f8e114a9c0..cc0284bbac86f 100644
+--- a/arch/powerpc/include/asm/book3s/32/mmu-hash.h
++++ b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
+@@ -102,6 +102,33 @@ extern s32 patch__hash_page_B, patch__hash_page_C;
+ extern s32 patch__flush_hash_A0, patch__flush_hash_A1, patch__flush_hash_A2;
+ extern s32 patch__flush_hash_B;
+ 
++#include <asm/reg.h>
++#include <asm/task_size_32.h>
++
++#define UPDATE_TWO_USER_SEGMENTS(n) do {		\
++	if (TASK_SIZE > ((n) << 28))			\
++		mtsr(val1, (n) << 28);			\
++	if (TASK_SIZE > (((n) + 1) << 28))		\
++		mtsr(val2, ((n) + 1) << 28);		\
++	val1 = (val1 + 0x222) & 0xf0ffffff;		\
++	val2 = (val2 + 0x222) & 0xf0ffffff;		\
++} while (0)
++
++static __always_inline void update_user_segments(u32 val)
++{
++	int val1 = val;
++	int val2 = (val + 0x111) & 0xf0ffffff;
++
++	UPDATE_TWO_USER_SEGMENTS(0);
++	UPDATE_TWO_USER_SEGMENTS(2);
++	UPDATE_TWO_USER_SEGMENTS(4);
++	UPDATE_TWO_USER_SEGMENTS(6);
++	UPDATE_TWO_USER_SEGMENTS(8);
++	UPDATE_TWO_USER_SEGMENTS(10);
++	UPDATE_TWO_USER_SEGMENTS(12);
++	UPDATE_TWO_USER_SEGMENTS(14);
++}
++
+ #endif /* !__ASSEMBLY__ */
+ 
+ /* We happily ignore the smaller BATs on 601, we don't actually use
+diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
+index ec96232529ac2..4b94d42937774 100644
+--- a/arch/powerpc/include/asm/kup.h
++++ b/arch/powerpc/include/asm/kup.h
+@@ -46,10 +46,7 @@ void setup_kuep(bool disabled);
+ static inline void setup_kuep(bool disabled) { }
+ #endif /* CONFIG_PPC_KUEP */
+ 
+-#if defined(CONFIG_PPC_KUEP) && defined(CONFIG_PPC_BOOK3S_32)
+-void kuep_lock(void);
+-void kuep_unlock(void);
+-#else
++#ifndef CONFIG_PPC_BOOK3S_32
+ static inline void kuep_lock(void) { }
+ static inline void kuep_unlock(void) { }
+ #endif
+diff --git a/arch/powerpc/mm/book3s32/Makefile b/arch/powerpc/mm/book3s32/Makefile
+index 7f0c8a78ba0c0..15f4773643d21 100644
+--- a/arch/powerpc/mm/book3s32/Makefile
++++ b/arch/powerpc/mm/book3s32/Makefile
+@@ -10,3 +10,4 @@ obj-y += mmu.o mmu_context.o
+ obj-$(CONFIG_PPC_BOOK3S_603) += nohash_low.o
+ obj-$(CONFIG_PPC_BOOK3S_604) += hash_low.o tlb.o
+ obj-$(CONFIG_PPC_KUEP) += kuep.o
++obj-$(CONFIG_PPC_KUAP) += kuap.o
+diff --git a/arch/powerpc/mm/book3s32/kuap.c b/arch/powerpc/mm/book3s32/kuap.c
+new file mode 100644
+index 0000000000000..1df55392878e0
+--- /dev/null
++++ b/arch/powerpc/mm/book3s32/kuap.c
+@@ -0,0 +1,11 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++
++#include <asm/kup.h>
++
++void __init setup_kuap(bool disabled)
++{
++	pr_info("Activating Kernel Userspace Access Protection\n");
++
++	if (disabled)
++		pr_warn("KUAP cannot be disabled yet on 6xx when compiled in\n");
++}
+diff --git a/arch/powerpc/mm/book3s32/kuep.c b/arch/powerpc/mm/book3s32/kuep.c
+index 8ed1b86348397..919595f47e25f 100644
+--- a/arch/powerpc/mm/book3s32/kuep.c
++++ b/arch/powerpc/mm/book3s32/kuep.c
+@@ -1,40 +1,11 @@
+ // SPDX-License-Identifier: GPL-2.0-or-later
+ 
+ #include <asm/kup.h>
+-#include <asm/reg.h>
+-#include <asm/task_size_32.h>
+-#include <asm/mmu.h>
+ 
+-#define KUEP_UPDATE_TWO_USER_SEGMENTS(n) do {		\
+-	if (TASK_SIZE > ((n) << 28))			\
+-		mtsr(val1, (n) << 28);			\
+-	if (TASK_SIZE > (((n) + 1) << 28))		\
+-		mtsr(val2, ((n) + 1) << 28);		\
+-	val1 = (val1 + 0x222) & 0xf0ffffff;		\
+-	val2 = (val2 + 0x222) & 0xf0ffffff;		\
+-} while (0)
+-
+-static __always_inline void kuep_update(u32 val)
++void __init setup_kuep(bool disabled)
+ {
+-	int val1 = val;
+-	int val2 = (val + 0x111) & 0xf0ffffff;
+-
+-	KUEP_UPDATE_TWO_USER_SEGMENTS(0);
+-	KUEP_UPDATE_TWO_USER_SEGMENTS(2);
+-	KUEP_UPDATE_TWO_USER_SEGMENTS(4);
+-	KUEP_UPDATE_TWO_USER_SEGMENTS(6);
+-	KUEP_UPDATE_TWO_USER_SEGMENTS(8);
+-	KUEP_UPDATE_TWO_USER_SEGMENTS(10);
+-	KUEP_UPDATE_TWO_USER_SEGMENTS(12);
+-	KUEP_UPDATE_TWO_USER_SEGMENTS(14);
+-}
++	pr_info("Activating Kernel Userspace Execution Prevention\n");
+ 
+-void kuep_lock(void)
+-{
+-	kuep_update(mfsr(0) | SR_NX);
+-}
+-
+-void kuep_unlock(void)
+-{
+-	kuep_update(mfsr(0) & ~SR_NX);
++	if (disabled)
++		pr_warn("KUEP cannot be disabled yet on 6xx when compiled in\n");
+ }
+diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
+index 159930351d9f9..27061583a0107 100644
+--- a/arch/powerpc/mm/book3s32/mmu.c
++++ b/arch/powerpc/mm/book3s32/mmu.c
+@@ -445,26 +445,6 @@ void __init print_system_hash_info(void)
+ 		pr_info("Hash_mask         = 0x%lx\n", Hash_mask);
+ }
+ 
+-#ifdef CONFIG_PPC_KUEP
+-void __init setup_kuep(bool disabled)
+-{
+-	pr_info("Activating Kernel Userspace Execution Prevention\n");
+-
+-	if (disabled)
+-		pr_warn("KUEP cannot be disabled yet on 6xx when compiled in\n");
+-}
+-#endif
+-
+-#ifdef CONFIG_PPC_KUAP
+-void __init setup_kuap(bool disabled)
+-{
+-	pr_info("Activating Kernel Userspace Access Protection\n");
+-
+-	if (disabled)
+-		pr_warn("KUAP cannot be disabled yet on 6xx when compiled in\n");
+-}
+-#endif
+-
+ void __init early_init_mmu(void)
+ {
+ }
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index 9a1b7a0603b28..f2a9cd4284b03 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -230,8 +230,8 @@ static void __init init_resources(void)
+ 	}
+ 
+ 	/* Clean-up any unused pre-allocated resources */
+-	mem_res_sz = (num_resources - res_idx + 1) * sizeof(*mem_res);
+-	memblock_free(__pa(mem_res), mem_res_sz);
++	if (res_idx >= 0)
++		memblock_free(__pa(mem_res), (res_idx + 1) * sizeof(*mem_res));
+ 	return;
+ 
+  error:
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index b0993e05affe6..8fcb7ecb7225a 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -560,9 +560,12 @@ static void zpci_cleanup_bus_resources(struct zpci_dev *zdev)
+ 
+ int pcibios_add_device(struct pci_dev *pdev)
+ {
++	struct zpci_dev *zdev = to_zpci(pdev);
+ 	struct resource *res;
+ 	int i;
+ 
++	/* The pdev has a reference to the zdev via its bus */
++	zpci_zdev_get(zdev);
+ 	if (pdev->is_physfn)
+ 		pdev->no_vf_scan = 1;
+ 
+@@ -582,7 +585,10 @@ int pcibios_add_device(struct pci_dev *pdev)
+ 
+ void pcibios_release_device(struct pci_dev *pdev)
+ {
++	struct zpci_dev *zdev = to_zpci(pdev);
++
+ 	zpci_unmap_resources(pdev);
++	zpci_zdev_put(zdev);
+ }
+ 
+ int pcibios_enable_device(struct pci_dev *pdev, int mask)
+diff --git a/arch/s390/pci/pci_bus.h b/arch/s390/pci/pci_bus.h
+index b877a97e6745b..e359d2686178b 100644
+--- a/arch/s390/pci/pci_bus.h
++++ b/arch/s390/pci/pci_bus.h
+@@ -22,6 +22,11 @@ static inline void zpci_zdev_put(struct zpci_dev *zdev)
+ 	kref_put(&zdev->kref, zpci_release_device);
+ }
+ 
++static inline void zpci_zdev_get(struct zpci_dev *zdev)
++{
++	kref_get(&zdev->kref);
++}
++
+ int zpci_alloc_domain(int domain);
+ void zpci_free_domain(int domain);
+ int zpci_setup_bus_resources(struct zpci_dev *zdev,
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 1eb45139fcc6e..3092fbf9dbe4c 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -2489,13 +2489,15 @@ void perf_clear_dirty_counters(void)
+ 		return;
+ 
+ 	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
+-		/* Metrics and fake events don't have corresponding HW counters. */
+-		if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
+-			continue;
+-		else if (i >= INTEL_PMC_IDX_FIXED)
++		if (i >= INTEL_PMC_IDX_FIXED) {
++			/* Metrics and fake events don't have corresponding HW counters. */
++			if ((i - INTEL_PMC_IDX_FIXED) >= hybrid(cpuc->pmu, num_counters_fixed))
++				continue;
++
+ 			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
+-		else
++		} else {
+ 			wrmsrl(x86_pmu_event_addr(i), 0);
++		}
+ 	}
+ 
+ 	bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
+diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
+index 81e3279ecd574..15a8be57203d6 100644
+--- a/block/kyber-iosched.c
++++ b/block/kyber-iosched.c
+@@ -596,13 +596,13 @@ static void kyber_insert_requests(struct blk_mq_hw_ctx *hctx,
+ 		struct list_head *head = &kcq->rq_list[sched_domain];
+ 
+ 		spin_lock(&kcq->lock);
++		trace_block_rq_insert(rq);
+ 		if (at_head)
+ 			list_move(&rq->queuelist, head);
+ 		else
+ 			list_move_tail(&rq->queuelist, head);
+ 		sbitmap_set_bit(&khd->kcq_map[sched_domain],
+ 				rq->mq_ctx->index_hw[hctx->type]);
+-		trace_block_rq_insert(rq);
+ 		spin_unlock(&kcq->lock);
+ 	}
+ }
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 0ef98e3ba3410..148a4dd8cb9ac 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -3097,8 +3097,10 @@ static int sysc_probe(struct platform_device *pdev)
+ 		return error;
+ 
+ 	error = sysc_check_active_timer(ddata);
+-	if (error == -EBUSY)
++	if (error == -ENXIO)
+ 		ddata->reserved = true;
++	else if (error)
++		return error;
+ 
+ 	error = sysc_get_clocks(ddata);
+ 	if (error)
+diff --git a/drivers/clk/imx/clk-imx6q.c b/drivers/clk/imx/clk-imx6q.c
+index 496900de0b0bb..de36f58d551c0 100644
+--- a/drivers/clk/imx/clk-imx6q.c
++++ b/drivers/clk/imx/clk-imx6q.c
+@@ -974,6 +974,6 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
+ 			       hws[IMX6QDL_CLK_PLL3_USB_OTG]->clk);
+ 	}
+ 
+-	imx_register_uart_clocks(1);
++	imx_register_uart_clocks(2);
+ }
+ CLK_OF_DECLARE(imx6q, "fsl,imx6q-ccm", imx6q_clocks_init);
+diff --git a/drivers/clk/qcom/gdsc.c b/drivers/clk/qcom/gdsc.c
+index 51ed640e527b4..4ece326ea233e 100644
+--- a/drivers/clk/qcom/gdsc.c
++++ b/drivers/clk/qcom/gdsc.c
+@@ -357,27 +357,43 @@ static int gdsc_init(struct gdsc *sc)
+ 	if (on < 0)
+ 		return on;
+ 
+-	/*
+-	 * Votable GDSCs can be ON due to Vote from other masters.
+-	 * If a Votable GDSC is ON, make sure we have a Vote.
+-	 */
+-	if ((sc->flags & VOTABLE) && on)
+-		gdsc_enable(&sc->pd);
++	if (on) {
++		/* The regulator must be on, sync the kernel state */
++		if (sc->rsupply) {
++			ret = regulator_enable(sc->rsupply);
++			if (ret < 0)
++				return ret;
++		}
+ 
+-	/*
+-	 * Make sure the retain bit is set if the GDSC is already on, otherwise
+-	 * we end up turning off the GDSC and destroying all the register
+-	 * contents that we thought we were saving.
+-	 */
+-	if ((sc->flags & RETAIN_FF_ENABLE) && on)
+-		gdsc_retain_ff_on(sc);
++		/*
++		 * Votable GDSCs can be ON due to Vote from other masters.
++		 * If a Votable GDSC is ON, make sure we have a Vote.
++		 */
++		if (sc->flags & VOTABLE) {
++			ret = regmap_update_bits(sc->regmap, sc->gdscr,
++						 SW_COLLAPSE_MASK, val);
++			if (ret)
++				return ret;
++		}
++
++		/* Turn on HW trigger mode if supported */
++		if (sc->flags & HW_CTRL) {
++			ret = gdsc_hwctrl(sc, true);
++			if (ret < 0)
++				return ret;
++		}
+ 
+-	/* If ALWAYS_ON GDSCs are not ON, turn them ON */
+-	if (sc->flags & ALWAYS_ON) {
+-		if (!on)
+-			gdsc_enable(&sc->pd);
++		/*
++		 * Make sure the retain bit is set if the GDSC is already on,
++		 * otherwise we end up turning off the GDSC and destroying all
++		 * the register contents that we thought we were saving.
++		 */
++		if (sc->flags & RETAIN_FF_ENABLE)
++			gdsc_retain_ff_on(sc);
++	} else if (sc->flags & ALWAYS_ON) {
++		/* If ALWAYS_ON GDSCs are not ON, turn them ON */
++		gdsc_enable(&sc->pd);
+ 		on = true;
+-		sc->pd.flags |= GENPD_FLAG_ALWAYS_ON;
+ 	}
+ 
+ 	if (on || (sc->pwrsts & PWRSTS_RET))
+@@ -385,6 +401,8 @@ static int gdsc_init(struct gdsc *sc)
+ 	else
+ 		gdsc_clear_mem_on(sc);
+ 
++	if (sc->flags & ALWAYS_ON)
++		sc->pd.flags |= GENPD_FLAG_ALWAYS_ON;
+ 	if (!sc->pd.power_off)
+ 		sc->pd.power_off = gdsc_disable;
+ 	if (!sc->pd.power_on)
+diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
+index 3fc98a3ffd91e..c10fc33b29b18 100644
+--- a/drivers/cpufreq/armada-37xx-cpufreq.c
++++ b/drivers/cpufreq/armada-37xx-cpufreq.c
+@@ -104,7 +104,11 @@ struct armada_37xx_dvfs {
+ };
+ 
+ static struct armada_37xx_dvfs armada_37xx_dvfs[] = {
+-	{.cpu_freq_max = 1200*1000*1000, .divider = {1, 2, 4, 6} },
++	/*
++	 * The cpufreq scaling for 1.2 GHz variant of the SOC is currently
++	 * unstable because we do not know how to configure it properly.
++	 */
++	/* {.cpu_freq_max = 1200*1000*1000, .divider = {1, 2, 4, 6} }, */
+ 	{.cpu_freq_max = 1000*1000*1000, .divider = {1, 2, 4, 5} },
+ 	{.cpu_freq_max = 800*1000*1000,  .divider = {1, 2, 3, 4} },
+ 	{.cpu_freq_max = 600*1000*1000,  .divider = {2, 4, 5, 6} },
+diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
+index ec9a87ca2dbb8..75f818d04b481 100644
+--- a/drivers/cpufreq/scmi-cpufreq.c
++++ b/drivers/cpufreq/scmi-cpufreq.c
+@@ -134,7 +134,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
+ 	}
+ 
+ 	if (!zalloc_cpumask_var(&opp_shared_cpus, GFP_KERNEL))
+-		ret = -ENOMEM;
++		return -ENOMEM;
+ 
+ 	/* Obtain CPUs that share SCMI performance controls */
+ 	ret = scmi_get_sharing_cpus(cpu_dev, policy->cpus);
+diff --git a/drivers/dma/of-dma.c b/drivers/dma/of-dma.c
+index ec00b20ae8e4c..ac61ecda29261 100644
+--- a/drivers/dma/of-dma.c
++++ b/drivers/dma/of-dma.c
+@@ -67,8 +67,12 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
+ 		return NULL;
+ 
+ 	ofdma_target = of_dma_find_controller(&dma_spec_target);
+-	if (!ofdma_target)
+-		return NULL;
++	if (!ofdma_target) {
++		ofdma->dma_router->route_free(ofdma->dma_router->dev,
++					      route_data);
++		chan = ERR_PTR(-EPROBE_DEFER);
++		goto err;
++	}
+ 
+ 	chan = ofdma_target->of_dma_xlate(&dma_spec_target, ofdma_target);
+ 	if (IS_ERR_OR_NULL(chan)) {
+@@ -89,6 +93,7 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
+ 		}
+ 	}
+ 
++err:
+ 	/*
+ 	 * Need to put the node back since the ofdma->of_dma_route_allocate
+ 	 * has taken it for generating the new, translated dma_spec
+diff --git a/drivers/dma/sh/usb-dmac.c b/drivers/dma/sh/usb-dmac.c
+index 8f7ceb698226c..1cc06900153e4 100644
+--- a/drivers/dma/sh/usb-dmac.c
++++ b/drivers/dma/sh/usb-dmac.c
+@@ -855,8 +855,8 @@ static int usb_dmac_probe(struct platform_device *pdev)
+ 
+ error:
+ 	of_dma_controller_free(pdev->dev.of_node);
+-	pm_runtime_put(&pdev->dev);
+ error_pm:
++	pm_runtime_put(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	return ret;
+ }
+diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
+index 75c0b8e904e51..4b9530a7bf652 100644
+--- a/drivers/dma/xilinx/xilinx_dma.c
++++ b/drivers/dma/xilinx/xilinx_dma.c
+@@ -394,6 +394,7 @@ struct xilinx_dma_tx_descriptor {
+  * @genlock: Support genlock mode
+  * @err: Channel has errors
+  * @idle: Check for channel idle
++ * @terminating: Check for channel being synchronized by user
+  * @tasklet: Cleanup work after irq
+  * @config: Device configuration info
+  * @flush_on_fsync: Flush on Frame sync
+@@ -431,6 +432,7 @@ struct xilinx_dma_chan {
+ 	bool genlock;
+ 	bool err;
+ 	bool idle;
++	bool terminating;
+ 	struct tasklet_struct tasklet;
+ 	struct xilinx_vdma_config config;
+ 	bool flush_on_fsync;
+@@ -1049,6 +1051,13 @@ static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
+ 		/* Run any dependencies, then free the descriptor */
+ 		dma_run_dependencies(&desc->async_tx);
+ 		xilinx_dma_free_tx_descriptor(chan, desc);
++
++		/*
++		 * While we ran a callback the user called a terminate function,
++		 * which takes care of cleaning up any remaining descriptors
++		 */
++		if (chan->terminating)
++			break;
+ 	}
+ 
+ 	spin_unlock_irqrestore(&chan->lock, flags);
+@@ -1965,6 +1974,8 @@ static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+ 	if (desc->cyclic)
+ 		chan->cyclic = true;
+ 
++	chan->terminating = false;
++
+ 	spin_unlock_irqrestore(&chan->lock, flags);
+ 
+ 	return cookie;
+@@ -2436,6 +2447,7 @@ static int xilinx_dma_terminate_all(struct dma_chan *dchan)
+ 
+ 	xilinx_dma_chan_reset(chan);
+ 	/* Remove and free all of the descriptors in the lists */
++	chan->terminating = true;
+ 	xilinx_dma_free_descriptors(chan);
+ 	chan->idle = true;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 516467e962b72..3a476b86485e7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1293,6 +1293,16 @@ static bool is_raven_kicker(struct amdgpu_device *adev)
+ 		return false;
+ }
+ 
++static bool check_if_enlarge_doorbell_range(struct amdgpu_device *adev)
++{
++	if ((adev->asic_type == CHIP_RENOIR) &&
++	    (adev->gfx.me_fw_version >= 0x000000a5) &&
++	    (adev->gfx.me_feature_version >= 52))
++		return true;
++	else
++		return false;
++}
++
+ static void gfx_v9_0_check_if_need_gfxoff(struct amdgpu_device *adev)
+ {
+ 	if (gfx_v9_0_should_disable_gfxoff(adev->pdev))
+@@ -3673,7 +3683,16 @@ static int gfx_v9_0_kiq_init_register(struct amdgpu_ring *ring)
+ 	if (ring->use_doorbell) {
+ 		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER,
+ 					(adev->doorbell_index.kiq * 2) << 2);
+-		WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
++		/* If GC has entered CGPG, ringing doorbell > first page
++		 * doesn't wake up GC. Enlarge CP_MEC_DOORBELL_RANGE_UPPER to
++		 * work around this issue. And this change has to align with firmware
++		 * update.
++		 */
++		if (check_if_enlarge_doorbell_range(adev))
++			WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
++					(adev->doorbell.size - 4));
++		else
++			WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
+ 					(adev->doorbell_index.userqueue_end * 2) << 2);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+index 75ba86f951f84..7bbedb6b4a9ea 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+@@ -66,9 +66,11 @@ int rn_get_active_display_cnt_wa(
+ 	for (i = 0; i < context->stream_count; i++) {
+ 		const struct dc_stream_state *stream = context->streams[i];
+ 
++		/* Extend the WA to DP for Linux */
+ 		if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A ||
+ 				stream->signal == SIGNAL_TYPE_DVI_SINGLE_LINK ||
+-				stream->signal == SIGNAL_TYPE_DVI_DUAL_LINK)
++				stream->signal == SIGNAL_TYPE_DVI_DUAL_LINK ||
++				stream->signal == SIGNAL_TYPE_DISPLAY_PORT)
+ 			tmds_present = true;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c
+index 3139d90017eed..23f830986f78d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c
+@@ -464,7 +464,7 @@ void optc2_lock_doublebuffer_enable(struct timing_generator *optc)
+ 
+ 	REG_UPDATE_2(OTG_GLOBAL_CONTROL1,
+ 			MASTER_UPDATE_LOCK_DB_X,
+-			h_blank_start - 200 - 1,
++			(h_blank_start - 200 - 1) / optc1->opp_count,
+ 			MASTER_UPDATE_LOCK_DB_Y,
+ 			v_blank_start - 1);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+index 472696f949ac3..63b09c1124c47 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+@@ -1622,106 +1622,12 @@ static void dcn301_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *b
+ 	dml_init_instance(&dc->dml, &dcn3_01_soc, &dcn3_01_ip, DML_PROJECT_DCN30);
+ }
+ 
+-static void calculate_wm_set_for_vlevel(
+-		int vlevel,
+-		struct wm_range_table_entry *table_entry,
+-		struct dcn_watermarks *wm_set,
+-		struct display_mode_lib *dml,
+-		display_e2e_pipe_params_st *pipes,
+-		int pipe_cnt)
+-{
+-	double dram_clock_change_latency_cached = dml->soc.dram_clock_change_latency_us;
+-
+-	ASSERT(vlevel < dml->soc.num_states);
+-	/* only pipe 0 is read for voltage and dcf/soc clocks */
+-	pipes[0].clks_cfg.voltage = vlevel;
+-	pipes[0].clks_cfg.dcfclk_mhz = dml->soc.clock_limits[vlevel].dcfclk_mhz;
+-	pipes[0].clks_cfg.socclk_mhz = dml->soc.clock_limits[vlevel].socclk_mhz;
+-
+-	dml->soc.dram_clock_change_latency_us = table_entry->pstate_latency_us;
+-	dml->soc.sr_exit_time_us = table_entry->sr_exit_time_us;
+-	dml->soc.sr_enter_plus_exit_time_us = table_entry->sr_enter_plus_exit_time_us;
+-
+-	wm_set->urgent_ns = get_wm_urgent(dml, pipes, pipe_cnt) * 1000;
+-	wm_set->cstate_pstate.cstate_enter_plus_exit_ns = get_wm_stutter_enter_exit(dml, pipes, pipe_cnt) * 1000;
+-	wm_set->cstate_pstate.cstate_exit_ns = get_wm_stutter_exit(dml, pipes, pipe_cnt) * 1000;
+-	wm_set->cstate_pstate.pstate_change_ns = get_wm_dram_clock_change(dml, pipes, pipe_cnt) * 1000;
+-	wm_set->pte_meta_urgent_ns = get_wm_memory_trip(dml, pipes, pipe_cnt) * 1000;
+-	wm_set->frac_urg_bw_nom = get_fraction_of_urgent_bandwidth(dml, pipes, pipe_cnt) * 1000;
+-	wm_set->frac_urg_bw_flip = get_fraction_of_urgent_bandwidth_imm_flip(dml, pipes, pipe_cnt) * 1000;
+-	wm_set->urgent_latency_ns = get_urgent_latency(dml, pipes, pipe_cnt) * 1000;
+-	dml->soc.dram_clock_change_latency_us = dram_clock_change_latency_cached;
+-
+-}
+-
+-static void dcn301_calculate_wm_and_dlg(
+-		struct dc *dc, struct dc_state *context,
+-		display_e2e_pipe_params_st *pipes,
+-		int pipe_cnt,
+-		int vlevel_req)
+-{
+-	int i, pipe_idx;
+-	int vlevel, vlevel_max;
+-	struct wm_range_table_entry *table_entry;
+-	struct clk_bw_params *bw_params = dc->clk_mgr->bw_params;
+-
+-	ASSERT(bw_params);
+-
+-	vlevel_max = bw_params->clk_table.num_entries - 1;
+-
+-	/* WM Set D */
+-	table_entry = &bw_params->wm_table.entries[WM_D];
+-	if (table_entry->wm_type == WM_TYPE_RETRAINING)
+-		vlevel = 0;
+-	else
+-		vlevel = vlevel_max;
+-	calculate_wm_set_for_vlevel(vlevel, table_entry, &context->bw_ctx.bw.dcn.watermarks.d,
+-						&context->bw_ctx.dml, pipes, pipe_cnt);
+-	/* WM Set C */
+-	table_entry = &bw_params->wm_table.entries[WM_C];
+-	vlevel = min(max(vlevel_req, 2), vlevel_max);
+-	calculate_wm_set_for_vlevel(vlevel, table_entry, &context->bw_ctx.bw.dcn.watermarks.c,
+-						&context->bw_ctx.dml, pipes, pipe_cnt);
+-	/* WM Set B */
+-	table_entry = &bw_params->wm_table.entries[WM_B];
+-	vlevel = min(max(vlevel_req, 1), vlevel_max);
+-	calculate_wm_set_for_vlevel(vlevel, table_entry, &context->bw_ctx.bw.dcn.watermarks.b,
+-						&context->bw_ctx.dml, pipes, pipe_cnt);
+-
+-	/* WM Set A */
+-	table_entry = &bw_params->wm_table.entries[WM_A];
+-	vlevel = min(vlevel_req, vlevel_max);
+-	calculate_wm_set_for_vlevel(vlevel, table_entry, &context->bw_ctx.bw.dcn.watermarks.a,
+-						&context->bw_ctx.dml, pipes, pipe_cnt);
+-
+-	for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
+-		if (!context->res_ctx.pipe_ctx[i].stream)
+-			continue;
+-
+-		pipes[pipe_idx].clks_cfg.dispclk_mhz = get_dispclk_calculated(&context->bw_ctx.dml, pipes, pipe_cnt);
+-		pipes[pipe_idx].clks_cfg.dppclk_mhz = get_dppclk_calculated(&context->bw_ctx.dml, pipes, pipe_cnt, pipe_idx);
+-
+-		if (dc->config.forced_clocks) {
+-			pipes[pipe_idx].clks_cfg.dispclk_mhz = context->bw_ctx.dml.soc.clock_limits[0].dispclk_mhz;
+-			pipes[pipe_idx].clks_cfg.dppclk_mhz = context->bw_ctx.dml.soc.clock_limits[0].dppclk_mhz;
+-		}
+-		if (dc->debug.min_disp_clk_khz > pipes[pipe_idx].clks_cfg.dispclk_mhz * 1000)
+-			pipes[pipe_idx].clks_cfg.dispclk_mhz = dc->debug.min_disp_clk_khz / 1000.0;
+-		if (dc->debug.min_dpp_clk_khz > pipes[pipe_idx].clks_cfg.dppclk_mhz * 1000)
+-			pipes[pipe_idx].clks_cfg.dppclk_mhz = dc->debug.min_dpp_clk_khz / 1000.0;
+-
+-		pipe_idx++;
+-	}
+-
+-	dcn20_calculate_dlg_params(dc, context, pipes, pipe_cnt, vlevel);
+-}
+-
+ static struct resource_funcs dcn301_res_pool_funcs = {
+ 	.destroy = dcn301_destroy_resource_pool,
+ 	.link_enc_create = dcn301_link_encoder_create,
+ 	.panel_cntl_create = dcn301_panel_cntl_create,
+ 	.validate_bandwidth = dcn30_validate_bandwidth,
+-	.calculate_wm_and_dlg = dcn301_calculate_wm_and_dlg,
++	.calculate_wm_and_dlg = dcn30_calculate_wm_and_dlg,
+ 	.update_soc_for_wm_a = dcn30_update_soc_for_wm_a,
+ 	.populate_dml_pipes = dcn30_populate_dml_pipes_from_context,
+ 	.acquire_idle_pipe_for_layer = dcn20_acquire_idle_pipe_for_layer,
+diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
+index 99126caf57479..a8597444d515c 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_power.c
++++ b/drivers/gpu/drm/i915/display/intel_display_power.c
+@@ -5910,13 +5910,13 @@ void intel_display_power_suspend_late(struct drm_i915_private *i915)
+ {
+ 	if (DISPLAY_VER(i915) >= 11 || IS_GEN9_LP(i915)) {
+ 		bxt_enable_dc9(i915);
+-		/* Tweaked Wa_14010685332:icp,jsp,mcc */
+-		if (INTEL_PCH_TYPE(i915) >= PCH_ICP && INTEL_PCH_TYPE(i915) <= PCH_MCC)
+-			intel_de_rmw(i915, SOUTH_CHICKEN1,
+-				     SBCLK_RUN_REFCLK_DIS, SBCLK_RUN_REFCLK_DIS);
+ 	} else if (IS_HASWELL(i915) || IS_BROADWELL(i915)) {
+ 		hsw_enable_pc8(i915);
+ 	}
++
++	/* Tweaked Wa_14010685332:cnp,icp,jsp,mcc,tgp,adp */
++	if (INTEL_PCH_TYPE(i915) >= PCH_CNP && INTEL_PCH_TYPE(i915) < PCH_DG1)
++		intel_de_rmw(i915, SOUTH_CHICKEN1, SBCLK_RUN_REFCLK_DIS, SBCLK_RUN_REFCLK_DIS);
+ }
+ 
+ void intel_display_power_resume_early(struct drm_i915_private *i915)
+@@ -5924,13 +5924,13 @@ void intel_display_power_resume_early(struct drm_i915_private *i915)
+ 	if (DISPLAY_VER(i915) >= 11 || IS_GEN9_LP(i915)) {
+ 		gen9_sanitize_dc_state(i915);
+ 		bxt_disable_dc9(i915);
+-		/* Tweaked Wa_14010685332:icp,jsp,mcc */
+-		if (INTEL_PCH_TYPE(i915) >= PCH_ICP && INTEL_PCH_TYPE(i915) <= PCH_MCC)
+-			intel_de_rmw(i915, SOUTH_CHICKEN1, SBCLK_RUN_REFCLK_DIS, 0);
+-
+ 	} else if (IS_HASWELL(i915) || IS_BROADWELL(i915)) {
+ 		hsw_disable_pc8(i915);
+ 	}
++
++	/* Tweaked Wa_14010685332:cnp,icp,jsp,mcc,tgp,adp */
++	if (INTEL_PCH_TYPE(i915) >= PCH_CNP && INTEL_PCH_TYPE(i915) < PCH_DG1)
++		intel_de_rmw(i915, SOUTH_CHICKEN1, SBCLK_RUN_REFCLK_DIS, 0);
+ }
+ 
+ void intel_display_power_suspend(struct drm_i915_private *i915)
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index 7eefbdec25a23..783f25920d00c 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -2421,6 +2421,8 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl)
+ 	u32 iir;
+ 	enum pipe pipe;
+ 
++	drm_WARN_ON_ONCE(&dev_priv->drm, !HAS_DISPLAY(dev_priv));
++
+ 	if (master_ctl & GEN8_DE_MISC_IRQ) {
+ 		iir = intel_uncore_read(&dev_priv->uncore, GEN8_DE_MISC_IIR);
+ 		if (iir) {
+@@ -3040,32 +3042,13 @@ static void valleyview_irq_reset(struct drm_i915_private *dev_priv)
+ 	spin_unlock_irq(&dev_priv->irq_lock);
+ }
+ 
+-static void cnp_display_clock_wa(struct drm_i915_private *dev_priv)
+-{
+-	struct intel_uncore *uncore = &dev_priv->uncore;
+-
+-	/*
+-	 * Wa_14010685332:cnp/cmp,tgp,adp
+-	 * TODO: Clarify which platforms this applies to
+-	 * TODO: Figure out if this workaround can be applied in the s0ix suspend/resume handlers as
+-	 * on earlier platforms and whether the workaround is also needed for runtime suspend/resume
+-	 */
+-	if (INTEL_PCH_TYPE(dev_priv) == PCH_CNP ||
+-	    (INTEL_PCH_TYPE(dev_priv) >= PCH_TGP && INTEL_PCH_TYPE(dev_priv) < PCH_DG1)) {
+-		intel_uncore_rmw(uncore, SOUTH_CHICKEN1, SBCLK_RUN_REFCLK_DIS,
+-				 SBCLK_RUN_REFCLK_DIS);
+-		intel_uncore_rmw(uncore, SOUTH_CHICKEN1, SBCLK_RUN_REFCLK_DIS, 0);
+-	}
+-}
+-
+-static void gen8_irq_reset(struct drm_i915_private *dev_priv)
++static void gen8_display_irq_reset(struct drm_i915_private *dev_priv)
+ {
+ 	struct intel_uncore *uncore = &dev_priv->uncore;
+ 	enum pipe pipe;
+ 
+-	gen8_master_intr_disable(dev_priv->uncore.regs);
+-
+-	gen8_gt_irq_reset(&dev_priv->gt);
++	if (!HAS_DISPLAY(dev_priv))
++		return;
+ 
+ 	intel_uncore_write(uncore, EDP_PSR_IMR, 0xffffffff);
+ 	intel_uncore_write(uncore, EDP_PSR_IIR, 0xffffffff);
+@@ -3077,12 +3060,21 @@ static void gen8_irq_reset(struct drm_i915_private *dev_priv)
+ 
+ 	GEN3_IRQ_RESET(uncore, GEN8_DE_PORT_);
+ 	GEN3_IRQ_RESET(uncore, GEN8_DE_MISC_);
++}
++
++static void gen8_irq_reset(struct drm_i915_private *dev_priv)
++{
++	struct intel_uncore *uncore = &dev_priv->uncore;
++
++	gen8_master_intr_disable(dev_priv->uncore.regs);
++
++	gen8_gt_irq_reset(&dev_priv->gt);
++	gen8_display_irq_reset(dev_priv);
+ 	GEN3_IRQ_RESET(uncore, GEN8_PCU_);
+ 
+ 	if (HAS_PCH_SPLIT(dev_priv))
+ 		ibx_irq_reset(dev_priv);
+ 
+-	cnp_display_clock_wa(dev_priv);
+ }
+ 
+ static void gen11_display_irq_reset(struct drm_i915_private *dev_priv)
+@@ -3092,6 +3084,9 @@ static void gen11_display_irq_reset(struct drm_i915_private *dev_priv)
+ 	u32 trans_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
+ 		BIT(TRANSCODER_C) | BIT(TRANSCODER_D);
+ 
++	if (!HAS_DISPLAY(dev_priv))
++		return;
++
+ 	intel_uncore_write(uncore, GEN11_DISPLAY_INT_CTL, 0);
+ 
+ 	if (DISPLAY_VER(dev_priv) >= 12) {
+@@ -3123,8 +3118,6 @@ static void gen11_display_irq_reset(struct drm_i915_private *dev_priv)
+ 
+ 	if (INTEL_PCH_TYPE(dev_priv) >= PCH_ICP)
+ 		GEN3_IRQ_RESET(uncore, SDE);
+-
+-	cnp_display_clock_wa(dev_priv);
+ }
+ 
+ static void gen11_irq_reset(struct drm_i915_private *dev_priv)
+@@ -3714,6 +3707,9 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
+ 		BIT(TRANSCODER_C) | BIT(TRANSCODER_D);
+ 	enum pipe pipe;
+ 
++	if (!HAS_DISPLAY(dev_priv))
++		return;
++
+ 	if (DISPLAY_VER(dev_priv) <= 10)
+ 		de_misc_masked |= GEN8_DE_MISC_GSE;
+ 
+@@ -3797,6 +3793,16 @@ static void gen8_irq_postinstall(struct drm_i915_private *dev_priv)
+ 	gen8_master_intr_enable(dev_priv->uncore.regs);
+ }
+ 
++static void gen11_de_irq_postinstall(struct drm_i915_private *dev_priv)
++{
++	if (!HAS_DISPLAY(dev_priv))
++		return;
++
++	gen8_de_irq_postinstall(dev_priv);
++
++	intel_uncore_write(&dev_priv->uncore, GEN11_DISPLAY_INT_CTL,
++			   GEN11_DISPLAY_IRQ_ENABLE);
++}
+ 
+ static void gen11_irq_postinstall(struct drm_i915_private *dev_priv)
+ {
+@@ -3807,12 +3813,10 @@ static void gen11_irq_postinstall(struct drm_i915_private *dev_priv)
+ 		icp_irq_postinstall(dev_priv);
+ 
+ 	gen11_gt_irq_postinstall(&dev_priv->gt);
+-	gen8_de_irq_postinstall(dev_priv);
++	gen11_de_irq_postinstall(dev_priv);
+ 
+ 	GEN3_IRQ_INIT(uncore, GEN11_GU_MISC_, ~gu_misc_masked, gu_misc_masked);
+ 
+-	intel_uncore_write(&dev_priv->uncore, GEN11_DISPLAY_INT_CTL, GEN11_DISPLAY_IRQ_ENABLE);
+-
+ 	if (HAS_MASTER_UNIT_IRQ(dev_priv)) {
+ 		dg1_master_intr_enable(uncore->regs);
+ 		intel_uncore_posting_read(&dev_priv->uncore, DG1_MSTR_UNIT_INTR);
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_color.c b/drivers/gpu/drm/mediatek/mtk_disp_color.c
+index 63f411ab393b7..bcb470caf0097 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_color.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_color.c
+@@ -134,6 +134,8 @@ static int mtk_disp_color_probe(struct platform_device *pdev)
+ 
+ static int mtk_disp_color_remove(struct platform_device *pdev)
+ {
++	component_del(&pdev->dev, &mtk_disp_color_component_ops);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+index 961f87f8d4d15..32a2922bbe5fb 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+@@ -424,6 +424,8 @@ static int mtk_disp_ovl_probe(struct platform_device *pdev)
+ 
+ static int mtk_disp_ovl_remove(struct platform_device *pdev)
+ {
++	component_del(&pdev->dev, &mtk_disp_ovl_component_ops);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+index 75bc00e17fc49..50d20562e612d 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+@@ -34,6 +34,7 @@
+ 
+ #define DISP_AAL_EN				0x0000
+ #define DISP_AAL_SIZE				0x0030
++#define DISP_AAL_OUTPUT_SIZE			0x04d8
+ 
+ #define DISP_DITHER_EN				0x0000
+ #define DITHER_EN				BIT(0)
+@@ -197,6 +198,7 @@ static void mtk_aal_config(struct device *dev, unsigned int w,
+ 	struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev);
+ 
+ 	mtk_ddp_write(cmdq_pkt, w << 16 | h, &priv->cmdq_reg, priv->regs, DISP_AAL_SIZE);
++	mtk_ddp_write(cmdq_pkt, w << 16 | h, &priv->cmdq_reg, priv->regs, DISP_AAL_OUTPUT_SIZE);
+ }
+ 
+ static void mtk_aal_gamma_set(struct device *dev, struct drm_crtc_state *state)
+diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
+index 5d96fcc45feca..6987076993277 100644
+--- a/drivers/iommu/dma-iommu.c
++++ b/drivers/iommu/dma-iommu.c
+@@ -768,6 +768,7 @@ static void iommu_dma_free_noncontiguous(struct device *dev, size_t size,
+ 	__iommu_dma_unmap(dev, sgt->sgl->dma_address, size);
+ 	__iommu_dma_free_pages(sh->pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
+ 	sg_free_table(&sh->sgt);
++	kfree(sh);
+ }
+ #endif /* CONFIG_DMA_REMAP */
+ 
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index 72dc84821dad2..581c694b7cf41 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -511,7 +511,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
+ 				 u32 pasid, bool fault_ignore)
+ {
+ 	struct pasid_entry *pte;
+-	u16 did;
++	u16 did, pgtt;
+ 
+ 	pte = intel_pasid_get_entry(dev, pasid);
+ 	if (WARN_ON(!pte))
+@@ -521,13 +521,19 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
+ 		return;
+ 
+ 	did = pasid_get_domain_id(pte);
++	pgtt = pasid_pte_get_pgtt(pte);
++
+ 	intel_pasid_clear_entry(dev, pasid, fault_ignore);
+ 
+ 	if (!ecap_coherent(iommu->ecap))
+ 		clflush_cache_range(pte, sizeof(*pte));
+ 
+ 	pasid_cache_invalidation_with_pasid(iommu, did, pasid);
+-	qi_flush_piotlb(iommu, did, pasid, 0, -1, 0);
++
++	if (pgtt == PASID_ENTRY_PGTT_PT || pgtt == PASID_ENTRY_PGTT_FL_ONLY)
++		qi_flush_piotlb(iommu, did, pasid, 0, -1, 0);
++	else
++		iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
+ 
+ 	/* Device IOTLB doesn't need to be flushed in caching mode. */
+ 	if (!cap_caching_mode(iommu->cap))
+diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
+index 5ff61c3d401f9..c11bc8b833b8e 100644
+--- a/drivers/iommu/intel/pasid.h
++++ b/drivers/iommu/intel/pasid.h
+@@ -99,6 +99,12 @@ static inline bool pasid_pte_is_present(struct pasid_entry *pte)
+ 	return READ_ONCE(pte->val[0]) & PASID_PTE_PRESENT;
+ }
+ 
++/* Get PGTT field of a PASID table entry */
++static inline u16 pasid_pte_get_pgtt(struct pasid_entry *pte)
++{
++	return (u16)((READ_ONCE(pte->val[0]) >> 6) & 0x7);
++}
++
+ extern unsigned int intel_pasid_max_id;
+ int intel_pasid_alloc_table(struct device *dev);
+ void intel_pasid_free_table(struct device *dev);
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 808ab70d5df50..db966a7841fe4 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -924,6 +924,9 @@ void iommu_group_remove_device(struct device *dev)
+ 	struct iommu_group *group = dev->iommu_group;
+ 	struct group_device *tmp_device, *device = NULL;
+ 
++	if (!group)
++		return;
++
+ 	dev_info(dev, "Removing from iommu group %d\n", group->id);
+ 
+ 	/* Pre-notify listeners that a device is being removed. */
+diff --git a/drivers/ipack/carriers/tpci200.c b/drivers/ipack/carriers/tpci200.c
+index e1822e87ec3d2..c1098f40e03f8 100644
+--- a/drivers/ipack/carriers/tpci200.c
++++ b/drivers/ipack/carriers/tpci200.c
+@@ -91,16 +91,13 @@ static void tpci200_unregister(struct tpci200_board *tpci200)
+ 	free_irq(tpci200->info->pdev->irq, (void *) tpci200);
+ 
+ 	pci_iounmap(tpci200->info->pdev, tpci200->info->interface_regs);
+-	pci_iounmap(tpci200->info->pdev, tpci200->info->cfg_regs);
+ 
+ 	pci_release_region(tpci200->info->pdev, TPCI200_IP_INTERFACE_BAR);
+ 	pci_release_region(tpci200->info->pdev, TPCI200_IO_ID_INT_SPACES_BAR);
+ 	pci_release_region(tpci200->info->pdev, TPCI200_MEM16_SPACE_BAR);
+ 	pci_release_region(tpci200->info->pdev, TPCI200_MEM8_SPACE_BAR);
+-	pci_release_region(tpci200->info->pdev, TPCI200_CFG_MEM_BAR);
+ 
+ 	pci_disable_device(tpci200->info->pdev);
+-	pci_dev_put(tpci200->info->pdev);
+ }
+ 
+ static void tpci200_enable_irq(struct tpci200_board *tpci200,
+@@ -259,7 +256,7 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			"(bn 0x%X, sn 0x%X) failed to allocate PCI resource for BAR 2 !",
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
+-		goto out_disable_pci;
++		goto err_disable_device;
+ 	}
+ 
+ 	/* Request IO ID INT space (Bar 3) */
+@@ -271,7 +268,7 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			"(bn 0x%X, sn 0x%X) failed to allocate PCI resource for BAR 3 !",
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
+-		goto out_release_ip_space;
++		goto err_ip_interface_bar;
+ 	}
+ 
+ 	/* Request MEM8 space (Bar 5) */
+@@ -282,7 +279,7 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			"(bn 0x%X, sn 0x%X) failed to allocate PCI resource for BAR 5!",
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
+-		goto out_release_ioid_int_space;
++		goto err_io_id_int_spaces_bar;
+ 	}
+ 
+ 	/* Request MEM16 space (Bar 4) */
+@@ -293,7 +290,7 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			"(bn 0x%X, sn 0x%X) failed to allocate PCI resource for BAR 4!",
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
+-		goto out_release_mem8_space;
++		goto err_mem8_space_bar;
+ 	}
+ 
+ 	/* Map internal tpci200 driver user space */
+@@ -307,7 +304,7 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
+ 		res = -ENOMEM;
+-		goto out_release_mem8_space;
++		goto err_mem16_space_bar;
+ 	}
+ 
+ 	/* Initialize lock that protects interface_regs */
+@@ -346,18 +343,22 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			"(bn 0x%X, sn 0x%X) unable to register IRQ !",
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
+-		goto out_release_ioid_int_space;
++		goto err_interface_regs;
+ 	}
+ 
+ 	return 0;
+ 
+-out_release_mem8_space:
++err_interface_regs:
++	pci_iounmap(tpci200->info->pdev, tpci200->info->interface_regs);
++err_mem16_space_bar:
++	pci_release_region(tpci200->info->pdev, TPCI200_MEM16_SPACE_BAR);
++err_mem8_space_bar:
+ 	pci_release_region(tpci200->info->pdev, TPCI200_MEM8_SPACE_BAR);
+-out_release_ioid_int_space:
++err_io_id_int_spaces_bar:
+ 	pci_release_region(tpci200->info->pdev, TPCI200_IO_ID_INT_SPACES_BAR);
+-out_release_ip_space:
++err_ip_interface_bar:
+ 	pci_release_region(tpci200->info->pdev, TPCI200_IP_INTERFACE_BAR);
+-out_disable_pci:
++err_disable_device:
+ 	pci_disable_device(tpci200->info->pdev);
+ 	return res;
+ }
+@@ -529,7 +530,7 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 	tpci200->info = kzalloc(sizeof(struct tpci200_infos), GFP_KERNEL);
+ 	if (!tpci200->info) {
+ 		ret = -ENOMEM;
+-		goto out_err_info;
++		goto err_tpci200;
+ 	}
+ 
+ 	pci_dev_get(pdev);
+@@ -540,7 +541,7 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to allocate PCI Configuration Memory");
+ 		ret = -EBUSY;
+-		goto out_err_pci_request;
++		goto err_tpci200_info;
+ 	}
+ 	tpci200->info->cfg_regs = ioremap(
+ 			pci_resource_start(pdev, TPCI200_CFG_MEM_BAR),
+@@ -548,7 +549,7 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 	if (!tpci200->info->cfg_regs) {
+ 		dev_err(&pdev->dev, "Failed to map PCI Configuration Memory");
+ 		ret = -EFAULT;
+-		goto out_err_ioremap;
++		goto err_request_region;
+ 	}
+ 
+ 	/* Disable byte swapping for 16 bit IP module access. This will ensure
+@@ -571,7 +572,7 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "error during tpci200 install\n");
+ 		ret = -ENODEV;
+-		goto out_err_install;
++		goto err_cfg_regs;
+ 	}
+ 
+ 	/* Register the carrier in the industry pack bus driver */
+@@ -583,7 +584,7 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 		dev_err(&pdev->dev,
+ 			"error registering the carrier on ipack driver\n");
+ 		ret = -EFAULT;
+-		goto out_err_bus_register;
++		goto err_tpci200_install;
+ 	}
+ 
+ 	/* save the bus number given by ipack to logging purpose */
+@@ -594,19 +595,16 @@ static int tpci200_pci_probe(struct pci_dev *pdev,
+ 		tpci200_create_device(tpci200, i);
+ 	return 0;
+ 
+-out_err_bus_register:
++err_tpci200_install:
+ 	tpci200_uninstall(tpci200);
+-	/* tpci200->info->cfg_regs is unmapped in tpci200_uninstall */
+-	tpci200->info->cfg_regs = NULL;
+-out_err_install:
+-	if (tpci200->info->cfg_regs)
+-		iounmap(tpci200->info->cfg_regs);
+-out_err_ioremap:
++err_cfg_regs:
++	pci_iounmap(tpci200->info->pdev, tpci200->info->cfg_regs);
++err_request_region:
+ 	pci_release_region(pdev, TPCI200_CFG_MEM_BAR);
+-out_err_pci_request:
+-	pci_dev_put(pdev);
++err_tpci200_info:
+ 	kfree(tpci200->info);
+-out_err_info:
++	pci_dev_put(pdev);
++err_tpci200:
+ 	kfree(tpci200);
+ 	return ret;
+ }
+@@ -616,6 +614,12 @@ static void __tpci200_pci_remove(struct tpci200_board *tpci200)
+ 	ipack_bus_unregister(tpci200->info->ipack_bus);
+ 	tpci200_uninstall(tpci200);
+ 
++	pci_iounmap(tpci200->info->pdev, tpci200->info->cfg_regs);
++
++	pci_release_region(tpci200->info->pdev, TPCI200_CFG_MEM_BAR);
++
++	pci_dev_put(tpci200->info->pdev);
++
+ 	kfree(tpci200->info);
+ 	kfree(tpci200);
+ }
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index d333130d15315..c3229d8c7041c 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -2018,8 +2018,8 @@ static void dw_mci_tasklet_func(struct tasklet_struct *t)
+ 					continue;
+ 				}
+ 
+-				dw_mci_stop_dma(host);
+ 				send_stop_abort(host, data);
++				dw_mci_stop_dma(host);
+ 				state = STATE_SENDING_STOP;
+ 				break;
+ 			}
+@@ -2043,10 +2043,10 @@ static void dw_mci_tasklet_func(struct tasklet_struct *t)
+ 			 */
+ 			if (test_and_clear_bit(EVENT_DATA_ERROR,
+ 					       &host->pending_events)) {
+-				dw_mci_stop_dma(host);
+ 				if (!(host->data_status & (SDMMC_INT_DRTO |
+ 							   SDMMC_INT_EBE)))
+ 					send_stop_abort(host, data);
++				dw_mci_stop_dma(host);
+ 				state = STATE_DATA_ERROR;
+ 				break;
+ 			}
+@@ -2079,10 +2079,10 @@ static void dw_mci_tasklet_func(struct tasklet_struct *t)
+ 			 */
+ 			if (test_and_clear_bit(EVENT_DATA_ERROR,
+ 					       &host->pending_events)) {
+-				dw_mci_stop_dma(host);
+ 				if (!(host->data_status & (SDMMC_INT_DRTO |
+ 							   SDMMC_INT_EBE)))
+ 					send_stop_abort(host, data);
++				dw_mci_stop_dma(host);
+ 				state = STATE_DATA_ERROR;
+ 				break;
+ 			}
+diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
+index 51db30acf4dca..fdaa11f92fe6f 100644
+--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
++++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
+@@ -479,8 +479,9 @@ static int sdmmc_post_sig_volt_switch(struct mmci_host *host,
+ 	u32 status;
+ 	int ret = 0;
+ 
+-	if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_180) {
+-		spin_lock_irqsave(&host->lock, flags);
++	spin_lock_irqsave(&host->lock, flags);
++	if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_180 &&
++	    host->pwr_reg & MCI_STM32_VSWITCHEN) {
+ 		mmci_write_pwrreg(host, host->pwr_reg | MCI_STM32_VSWITCH);
+ 		spin_unlock_irqrestore(&host->lock, flags);
+ 
+@@ -492,9 +493,11 @@ static int sdmmc_post_sig_volt_switch(struct mmci_host *host,
+ 
+ 		writel_relaxed(MCI_STM32_VSWENDC | MCI_STM32_CKSTOPC,
+ 			       host->base + MMCICLEAR);
++		spin_lock_irqsave(&host->lock, flags);
+ 		mmci_write_pwrreg(host, host->pwr_reg &
+ 				  ~(MCI_STM32_VSWITCHEN | MCI_STM32_VSWITCH));
+ 	}
++	spin_unlock_irqrestore(&host->lock, flags);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/mmc/host/sdhci-iproc.c b/drivers/mmc/host/sdhci-iproc.c
+index ddeaf8e1f72f9..9f0eef97ebddd 100644
+--- a/drivers/mmc/host/sdhci-iproc.c
++++ b/drivers/mmc/host/sdhci-iproc.c
+@@ -173,6 +173,23 @@ static unsigned int sdhci_iproc_get_max_clock(struct sdhci_host *host)
+ 		return pltfm_host->clock;
+ }
+ 
++/*
++ * There is a known bug on BCM2711's SDHCI core integration where the
++ * controller will hang when the difference between the core clock and the bus
++ * clock is too great. Specifically this can be reproduced under the following
++ * conditions:
++ *
++ *  - No SD card plugged in, polling thread is running, probing cards at
++ *    100 kHz.
++ *  - BCM2711's core clock configured at 500MHz or more
++ *
++ * So we set 200kHz as the minimum clock frequency available for that SoC.
++ */
++static unsigned int sdhci_iproc_bcm2711_get_min_clock(struct sdhci_host *host)
++{
++	return 200000;
++}
++
+ static const struct sdhci_ops sdhci_iproc_ops = {
+ 	.set_clock = sdhci_set_clock,
+ 	.get_max_clock = sdhci_iproc_get_max_clock,
+@@ -271,13 +288,15 @@ static const struct sdhci_ops sdhci_iproc_bcm2711_ops = {
+ 	.set_clock = sdhci_set_clock,
+ 	.set_power = sdhci_set_power_and_bus_voltage,
+ 	.get_max_clock = sdhci_iproc_get_max_clock,
++	.get_min_clock = sdhci_iproc_bcm2711_get_min_clock,
+ 	.set_bus_width = sdhci_set_bus_width,
+ 	.reset = sdhci_reset,
+ 	.set_uhs_signaling = sdhci_set_uhs_signaling,
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_bcm2711_pltfm_data = {
+-	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
++	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12 |
++		  SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
+ 	.ops = &sdhci_iproc_bcm2711_ops,
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index e44b7a66b73c5..290a14cdc1cf6 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -2089,6 +2089,23 @@ static void sdhci_msm_cqe_disable(struct mmc_host *mmc, bool recovery)
+ 	sdhci_cqe_disable(mmc, recovery);
+ }
+ 
++static void sdhci_msm_set_timeout(struct sdhci_host *host, struct mmc_command *cmd)
++{
++	u32 count, start = 15;
++
++	__sdhci_set_timeout(host, cmd);
++	count = sdhci_readb(host, SDHCI_TIMEOUT_CONTROL);
++	/*
++	 * Update software timeout value if its value is less than hardware data
++	 * timeout value. Qcom SoC hardware data timeout value was calculated
++	 * using 4 * MCLK * 2^(count + 13), where MCLK = 1 / host->clock.
++	 */
++	if (cmd && cmd->data && host->clock > 400000 &&
++	    host->clock <= 50000000 &&
++	    ((1 << (count + start)) > (10 * host->clock)))
++		host->data_timeout = 22LL * NSEC_PER_SEC;
++}
++
+ static const struct cqhci_host_ops sdhci_msm_cqhci_ops = {
+ 	.enable		= sdhci_msm_cqe_enable,
+ 	.disable	= sdhci_msm_cqe_disable,
+@@ -2438,6 +2455,7 @@ static const struct sdhci_ops sdhci_msm_ops = {
+ 	.irq	= sdhci_msm_cqe_irq,
+ 	.dump_vendor_regs = sdhci_msm_dump_vendor_regs,
+ 	.set_power = sdhci_set_power_noreg,
++	.set_timeout = sdhci_msm_set_timeout,
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_msm_pdata = {
+diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c
+index 3097e93787f72..a761134fd3bea 100644
+--- a/drivers/mtd/chips/cfi_cmdset_0002.c
++++ b/drivers/mtd/chips/cfi_cmdset_0002.c
+@@ -119,7 +119,7 @@ static int cfi_use_status_reg(struct cfi_private *cfi)
+ 	struct cfi_pri_amdstd *extp = cfi->cmdset_priv;
+ 	u8 poll_mask = CFI_POLL_STATUS_REG | CFI_POLL_DQ;
+ 
+-	return extp->MinorVersion >= '5' &&
++	return extp && extp->MinorVersion >= '5' &&
+ 		(extp->SoftwareFeatures & poll_mask) == CFI_POLL_STATUS_REG;
+ }
+ 
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index fb072c4444950..4412fdc240a25 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -5056,12 +5056,18 @@ static bool of_get_nand_on_flash_bbt(struct device_node *np)
+ static int of_get_nand_secure_regions(struct nand_chip *chip)
+ {
+ 	struct device_node *dn = nand_get_flash_node(chip);
++	struct property *prop;
+ 	int nr_elem, i, j;
+ 
+-	nr_elem = of_property_count_elems_of_size(dn, "secure-regions", sizeof(u64));
+-	if (!nr_elem)
++	/* Only proceed if the "secure-regions" property is present in DT */
++	prop = of_find_property(dn, "secure-regions", NULL);
++	if (!prop)
+ 		return 0;
+ 
++	nr_elem = of_property_count_elems_of_size(dn, "secure-regions", sizeof(u64));
++	if (nr_elem <= 0)
++		return nr_elem;
++
+ 	chip->nr_secure_regions = nr_elem / 2;
+ 	chip->secure_regions = kcalloc(chip->nr_secure_regions, sizeof(*chip->secure_regions),
+ 				       GFP_KERNEL);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 3c3aa94673103..b365768a2bda6 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -69,7 +69,8 @@
+ #include "bnxt_debugfs.h"
+ 
+ #define BNXT_TX_TIMEOUT		(5 * HZ)
+-#define BNXT_DEF_MSG_ENABLE	(NETIF_MSG_DRV | NETIF_MSG_HW)
++#define BNXT_DEF_MSG_ENABLE	(NETIF_MSG_DRV | NETIF_MSG_HW | \
++				 NETIF_MSG_TX_ERR)
+ 
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Broadcom BCM573xx network driver");
+@@ -362,6 +363,33 @@ static u16 bnxt_xmit_get_cfa_action(struct sk_buff *skb)
+ 	return md_dst->u.port_info.port_id;
+ }
+ 
++static void bnxt_txr_db_kick(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
++			     u16 prod)
++{
++	bnxt_db_write(bp, &txr->tx_db, prod);
++	txr->kick_pending = 0;
++}
++
++static bool bnxt_txr_netif_try_stop_queue(struct bnxt *bp,
++					  struct bnxt_tx_ring_info *txr,
++					  struct netdev_queue *txq)
++{
++	netif_tx_stop_queue(txq);
++
++	/* netif_tx_stop_queue() must be done before checking
++	 * tx index in bnxt_tx_avail() below, because in
++	 * bnxt_tx_int(), we update tx index before checking for
++	 * netif_tx_queue_stopped().
++	 */
++	smp_mb();
++	if (bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh) {
++		netif_tx_wake_queue(txq);
++		return false;
++	}
++
++	return true;
++}
++
+ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct bnxt *bp = netdev_priv(dev);
+@@ -381,6 +409,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	i = skb_get_queue_mapping(skb);
+ 	if (unlikely(i >= bp->tx_nr_rings)) {
+ 		dev_kfree_skb_any(skb);
++		atomic_long_inc(&dev->tx_dropped);
+ 		return NETDEV_TX_OK;
+ 	}
+ 
+@@ -390,8 +419,12 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	free_size = bnxt_tx_avail(bp, txr);
+ 	if (unlikely(free_size < skb_shinfo(skb)->nr_frags + 2)) {
+-		netif_tx_stop_queue(txq);
+-		return NETDEV_TX_BUSY;
++		/* We must have raced with NAPI cleanup */
++		if (net_ratelimit() && txr->kick_pending)
++			netif_warn(bp, tx_err, dev,
++				   "bnxt: ring busy w/ flush pending!\n");
++		if (bnxt_txr_netif_try_stop_queue(bp, txr, txq))
++			return NETDEV_TX_BUSY;
+ 	}
+ 
+ 	length = skb->len;
+@@ -498,21 +531,16 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ normal_tx:
+ 	if (length < BNXT_MIN_PKT_SIZE) {
+ 		pad = BNXT_MIN_PKT_SIZE - length;
+-		if (skb_pad(skb, pad)) {
++		if (skb_pad(skb, pad))
+ 			/* SKB already freed. */
+-			tx_buf->skb = NULL;
+-			return NETDEV_TX_OK;
+-		}
++			goto tx_kick_pending;
+ 		length = BNXT_MIN_PKT_SIZE;
+ 	}
+ 
+ 	mapping = dma_map_single(&pdev->dev, skb->data, len, DMA_TO_DEVICE);
+ 
+-	if (unlikely(dma_mapping_error(&pdev->dev, mapping))) {
+-		dev_kfree_skb_any(skb);
+-		tx_buf->skb = NULL;
+-		return NETDEV_TX_OK;
+-	}
++	if (unlikely(dma_mapping_error(&pdev->dev, mapping)))
++		goto tx_free;
+ 
+ 	dma_unmap_addr_set(tx_buf, mapping, mapping);
+ 	flags = (len << TX_BD_LEN_SHIFT) | TX_BD_TYPE_LONG_TX_BD |
+@@ -597,24 +625,17 @@ normal_tx:
+ 	txr->tx_prod = prod;
+ 
+ 	if (!netdev_xmit_more() || netif_xmit_stopped(txq))
+-		bnxt_db_write(bp, &txr->tx_db, prod);
++		bnxt_txr_db_kick(bp, txr, prod);
++	else
++		txr->kick_pending = 1;
+ 
+ tx_done:
+ 
+ 	if (unlikely(bnxt_tx_avail(bp, txr) <= MAX_SKB_FRAGS + 1)) {
+ 		if (netdev_xmit_more() && !tx_buf->is_push)
+-			bnxt_db_write(bp, &txr->tx_db, prod);
+-
+-		netif_tx_stop_queue(txq);
++			bnxt_txr_db_kick(bp, txr, prod);
+ 
+-		/* netif_tx_stop_queue() must be done before checking
+-		 * tx index in bnxt_tx_avail() below, because in
+-		 * bnxt_tx_int(), we update tx index before checking for
+-		 * netif_tx_queue_stopped().
+-		 */
+-		smp_mb();
+-		if (bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh)
+-			netif_tx_wake_queue(txq);
++		bnxt_txr_netif_try_stop_queue(bp, txr, txq);
+ 	}
+ 	return NETDEV_TX_OK;
+ 
+@@ -624,7 +645,6 @@ tx_dma_error:
+ 	/* start back at beginning and unmap skb */
+ 	prod = txr->tx_prod;
+ 	tx_buf = &txr->tx_buf_ring[prod];
+-	tx_buf->skb = NULL;
+ 	dma_unmap_single(&pdev->dev, dma_unmap_addr(tx_buf, mapping),
+ 			 skb_headlen(skb), PCI_DMA_TODEVICE);
+ 	prod = NEXT_TX(prod);
+@@ -638,7 +658,13 @@ tx_dma_error:
+ 			       PCI_DMA_TODEVICE);
+ 	}
+ 
++tx_free:
+ 	dev_kfree_skb_any(skb);
++tx_kick_pending:
++	if (txr->kick_pending)
++		bnxt_txr_db_kick(bp, txr, txr->tx_prod);
++	txr->tx_buf_ring[txr->tx_prod].skb = NULL;
++	atomic_long_inc(&dev->tx_dropped);
+ 	return NETDEV_TX_OK;
+ }
+ 
+@@ -698,14 +724,9 @@ next_tx_int:
+ 	smp_mb();
+ 
+ 	if (unlikely(netif_tx_queue_stopped(txq)) &&
+-	    (bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh)) {
+-		__netif_tx_lock(txq, smp_processor_id());
+-		if (netif_tx_queue_stopped(txq) &&
+-		    bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh &&
+-		    txr->dev_state != BNXT_DEV_STATE_CLOSING)
+-			netif_tx_wake_queue(txq);
+-		__netif_tx_unlock(txq);
+-	}
++	    bnxt_tx_avail(bp, txr) > bp->tx_wake_thresh &&
++	    READ_ONCE(txr->dev_state) != BNXT_DEV_STATE_CLOSING)
++		netif_tx_wake_queue(txq);
+ }
+ 
+ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
+@@ -1733,6 +1754,10 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 	if (!RX_CMP_VALID(rxcmp1, tmp_raw_cons))
+ 		return -EBUSY;
+ 
++	/* The valid test of the entry must be done first before
++	 * reading any further.
++	 */
++	dma_rmb();
+ 	prod = rxr->rx_prod;
+ 
+ 	if (cmp_type == CMP_TYPE_RX_L2_TPA_START_CMP) {
+@@ -1936,6 +1961,10 @@ static int bnxt_force_rx_discard(struct bnxt *bp,
+ 	if (!RX_CMP_VALID(rxcmp1, tmp_raw_cons))
+ 		return -EBUSY;
+ 
++	/* The valid test of the entry must be done first before
++	 * reading any further.
++	 */
++	dma_rmb();
+ 	cmp_type = RX_CMP_TYPE(rxcmp);
+ 	if (cmp_type == CMP_TYPE_RX_L2_CMP) {
+ 		rxcmp1->rx_cmp_cfa_code_errors_v2 |=
+@@ -2400,6 +2429,10 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget)
+ 		if (!TX_CMP_VALID(txcmp, raw_cons))
+ 			break;
+ 
++		/* The valid test of the entry must be done first before
++		 * reading any further.
++		 */
++		dma_rmb();
+ 		if ((TX_CMP_TYPE(txcmp) & 0x30) == 0x10) {
+ 			tmp_raw_cons = NEXT_RAW_CMP(raw_cons);
+ 			cp_cons = RING_CMP(tmp_raw_cons);
+@@ -9018,10 +9051,9 @@ static void bnxt_disable_napi(struct bnxt *bp)
+ 	for (i = 0; i < bp->cp_nr_rings; i++) {
+ 		struct bnxt_cp_ring_info *cpr = &bp->bnapi[i]->cp_ring;
+ 
++		napi_disable(&bp->bnapi[i]->napi);
+ 		if (bp->bnapi[i]->rx_ring)
+ 			cancel_work_sync(&cpr->dim.work);
+-
+-		napi_disable(&bp->bnapi[i]->napi);
+ 	}
+ }
+ 
+@@ -9055,9 +9087,11 @@ void bnxt_tx_disable(struct bnxt *bp)
+ 	if (bp->tx_ring) {
+ 		for (i = 0; i < bp->tx_nr_rings; i++) {
+ 			txr = &bp->tx_ring[i];
+-			txr->dev_state = BNXT_DEV_STATE_CLOSING;
++			WRITE_ONCE(txr->dev_state, BNXT_DEV_STATE_CLOSING);
+ 		}
+ 	}
++	/* Make sure napi polls see @dev_state change */
++	synchronize_net();
+ 	/* Drop carrier first to prevent TX timeout */
+ 	netif_carrier_off(bp->dev);
+ 	/* Stop all TX queues */
+@@ -9071,8 +9105,10 @@ void bnxt_tx_enable(struct bnxt *bp)
+ 
+ 	for (i = 0; i < bp->tx_nr_rings; i++) {
+ 		txr = &bp->tx_ring[i];
+-		txr->dev_state = 0;
++		WRITE_ONCE(txr->dev_state, 0);
+ 	}
++	/* Make sure napi polls see @dev_state change */
++	synchronize_net();
+ 	netif_tx_wake_all_queues(bp->dev);
+ 	if (bp->link_info.link_up)
+ 		netif_carrier_on(bp->dev);
+@@ -10644,6 +10680,9 @@ static bool bnxt_rfs_supported(struct bnxt *bp)
+ 			return true;
+ 		return false;
+ 	}
++	/* 212 firmware is broken for aRFS */
++	if (BNXT_FW_MAJ(bp) == 212)
++		return false;
+ 	if (BNXT_PF(bp) && !BNXT_CHIP_TYPE_NITRO_A0(bp))
+ 		return true;
+ 	if (bp->flags & BNXT_FLAG_NEW_RSS_CAP)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 30e47ea343f91..e2f38aaa474b0 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -783,6 +783,7 @@ struct bnxt_tx_ring_info {
+ 	u16			tx_prod;
+ 	u16			tx_cons;
+ 	u16			txq_index;
++	u8			kick_pending;
+ 	struct bnxt_db_info	tx_db;
+ 
+ 	struct tx_bd		*tx_desc_ring[MAX_TX_PAGES];
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+index 87321b7239cf4..58964d22cb17d 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+@@ -3038,26 +3038,30 @@ static int dpaa2_switch_port_init(struct ethsw_port_priv *port_priv, u16 port)
+ 	return err;
+ }
+ 
+-static void dpaa2_switch_takedown(struct fsl_mc_device *sw_dev)
++static void dpaa2_switch_ctrl_if_teardown(struct ethsw_core *ethsw)
++{
++	dpsw_ctrl_if_disable(ethsw->mc_io, 0, ethsw->dpsw_handle);
++	dpaa2_switch_free_dpio(ethsw);
++	dpaa2_switch_destroy_rings(ethsw);
++	dpaa2_switch_drain_bp(ethsw);
++	dpaa2_switch_free_dpbp(ethsw);
++}
++
++static void dpaa2_switch_teardown(struct fsl_mc_device *sw_dev)
+ {
+ 	struct device *dev = &sw_dev->dev;
+ 	struct ethsw_core *ethsw = dev_get_drvdata(dev);
+ 	int err;
+ 
++	dpaa2_switch_ctrl_if_teardown(ethsw);
++
++	destroy_workqueue(ethsw->workqueue);
++
+ 	err = dpsw_close(ethsw->mc_io, 0, ethsw->dpsw_handle);
+ 	if (err)
+ 		dev_warn(dev, "dpsw_close err %d\n", err);
+ }
+ 
+-static void dpaa2_switch_ctrl_if_teardown(struct ethsw_core *ethsw)
+-{
+-	dpsw_ctrl_if_disable(ethsw->mc_io, 0, ethsw->dpsw_handle);
+-	dpaa2_switch_free_dpio(ethsw);
+-	dpaa2_switch_destroy_rings(ethsw);
+-	dpaa2_switch_drain_bp(ethsw);
+-	dpaa2_switch_free_dpbp(ethsw);
+-}
+-
+ static int dpaa2_switch_remove(struct fsl_mc_device *sw_dev)
+ {
+ 	struct ethsw_port_priv *port_priv;
+@@ -3068,8 +3072,6 @@ static int dpaa2_switch_remove(struct fsl_mc_device *sw_dev)
+ 	dev = &sw_dev->dev;
+ 	ethsw = dev_get_drvdata(dev);
+ 
+-	dpaa2_switch_ctrl_if_teardown(ethsw);
+-
+ 	dpaa2_switch_teardown_irqs(sw_dev);
+ 
+ 	dpsw_disable(ethsw->mc_io, 0, ethsw->dpsw_handle);
+@@ -3084,9 +3086,7 @@ static int dpaa2_switch_remove(struct fsl_mc_device *sw_dev)
+ 	kfree(ethsw->acls);
+ 	kfree(ethsw->ports);
+ 
+-	dpaa2_switch_takedown(sw_dev);
+-
+-	destroy_workqueue(ethsw->workqueue);
++	dpaa2_switch_teardown(sw_dev);
+ 
+ 	fsl_mc_portal_free(ethsw->mc_io);
+ 
+@@ -3199,7 +3199,7 @@ static int dpaa2_switch_probe(struct fsl_mc_device *sw_dev)
+ 			       GFP_KERNEL);
+ 	if (!(ethsw->ports)) {
+ 		err = -ENOMEM;
+-		goto err_takedown;
++		goto err_teardown;
+ 	}
+ 
+ 	ethsw->fdbs = kcalloc(ethsw->sw_attr.num_ifs, sizeof(*ethsw->fdbs),
+@@ -3270,8 +3270,8 @@ err_free_fdbs:
+ err_free_ports:
+ 	kfree(ethsw->ports);
+ 
+-err_takedown:
+-	dpaa2_switch_takedown(sw_dev);
++err_teardown:
++	dpaa2_switch_teardown(sw_dev);
+ 
+ err_free_cmdport:
+ 	fsl_mc_portal_free(ethsw->mc_io);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 107fb472319ee..b18ff0ed8527b 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -3665,8 +3665,7 @@ u16 i40e_lan_select_queue(struct net_device *netdev,
+ 
+ 	/* is DCB enabled at all? */
+ 	if (vsi->tc_config.numtc == 1)
+-		return i40e_swdcb_skb_tx_hash(netdev, skb,
+-					      netdev->real_num_tx_queues);
++		return netdev_pick_tx(netdev, skb, sb_dev);
+ 
+ 	prio = skb->priority;
+ 	hw = &vsi->back->hw;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index e8bd04100ecd0..90793b36126e6 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -136,6 +136,7 @@ struct iavf_q_vector {
+ struct iavf_mac_filter {
+ 	struct list_head list;
+ 	u8 macaddr[ETH_ALEN];
++	bool is_new_mac;	/* filter is new, wait for PF decision */
+ 	bool remove;		/* filter needs to be removed */
+ 	bool add;		/* filter needs to be added */
+ };
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 244ec74ceca76..606a01ce40739 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -751,6 +751,7 @@ struct iavf_mac_filter *iavf_add_filter(struct iavf_adapter *adapter,
+ 
+ 		list_add_tail(&f->list, &adapter->mac_filter_list);
+ 		f->add = true;
++		f->is_new_mac = true;
+ 		adapter->aq_required |= IAVF_FLAG_AQ_ADD_MAC_FILTER;
+ 	} else {
+ 		f->remove = false;
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index 0eab3c43bdc59..3c735968e1b85 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -540,6 +540,47 @@ void iavf_del_ether_addrs(struct iavf_adapter *adapter)
+ 	kfree(veal);
+ }
+ 
++/**
++ * iavf_mac_add_ok
++ * @adapter: adapter structure
++ *
++ * Submit list of filters based on PF response.
++ **/
++static void iavf_mac_add_ok(struct iavf_adapter *adapter)
++{
++	struct iavf_mac_filter *f, *ftmp;
++
++	spin_lock_bh(&adapter->mac_vlan_list_lock);
++	list_for_each_entry_safe(f, ftmp, &adapter->mac_filter_list, list) {
++		f->is_new_mac = false;
++	}
++	spin_unlock_bh(&adapter->mac_vlan_list_lock);
++}
++
++/**
++ * iavf_mac_add_reject
++ * @adapter: adapter structure
++ *
++ * Remove filters from list based on PF response.
++ **/
++static void iavf_mac_add_reject(struct iavf_adapter *adapter)
++{
++	struct net_device *netdev = adapter->netdev;
++	struct iavf_mac_filter *f, *ftmp;
++
++	spin_lock_bh(&adapter->mac_vlan_list_lock);
++	list_for_each_entry_safe(f, ftmp, &adapter->mac_filter_list, list) {
++		if (f->remove && ether_addr_equal(f->macaddr, netdev->dev_addr))
++			f->remove = false;
++
++		if (f->is_new_mac) {
++			list_del(&f->list);
++			kfree(f);
++		}
++	}
++	spin_unlock_bh(&adapter->mac_vlan_list_lock);
++}
++
+ /**
+  * iavf_add_vlans
+  * @adapter: adapter structure
+@@ -1492,6 +1533,7 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 		case VIRTCHNL_OP_ADD_ETH_ADDR:
+ 			dev_err(&adapter->pdev->dev, "Failed to add MAC filter, error %s\n",
+ 				iavf_stat_str(&adapter->hw, v_retval));
++			iavf_mac_add_reject(adapter);
+ 			/* restore administratively set MAC address */
+ 			ether_addr_copy(adapter->hw.mac.addr, netdev->dev_addr);
+ 			break;
+@@ -1639,10 +1681,11 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
+ 		}
+ 	}
+ 	switch (v_opcode) {
+-	case VIRTCHNL_OP_ADD_ETH_ADDR: {
++	case VIRTCHNL_OP_ADD_ETH_ADDR:
++		if (!v_retval)
++			iavf_mac_add_ok(adapter);
+ 		if (!ether_addr_equal(netdev->dev_addr, adapter->hw.mac.addr))
+ 			ether_addr_copy(netdev->dev_addr, adapter->hw.mac.addr);
+-		}
+ 		break;
+ 	case VIRTCHNL_OP_GET_STATS: {
+ 		struct iavf_eth_stats *stats =
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+index f72d2978263b9..d60da7a89092e 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+@@ -52,8 +52,11 @@ static int ixgbe_xsk_pool_enable(struct ixgbe_adapter *adapter,
+ 
+ 		/* Kick start the NAPI context so that receiving will start */
+ 		err = ixgbe_xsk_wakeup(adapter->netdev, qid, XDP_WAKEUP_RX);
+-		if (err)
++		if (err) {
++			clear_bit(qid, adapter->af_xdp_zc_qps);
++			xsk_pool_dma_unmap(pool, IXGBE_RX_DMA_ATTR);
+ 			return err;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index adfb9781799ee..2948d731a1c1c 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1334,6 +1334,7 @@ void ocelot_apply_bridge_fwd_mask(struct ocelot *ocelot)
+ 			struct net_device *bond = ocelot_port->bond;
+ 
+ 			mask = ocelot_get_bridge_fwd_mask(ocelot, bridge);
++			mask |= cpu_fwd_mask;
+ 			mask &= ~BIT(port);
+ 			if (bond) {
+ 				mask &= ~ocelot_get_bond_mask(ocelot, bond,
+diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h
+index 2e62a2c4eb637..5630008f38b75 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede.h
++++ b/drivers/net/ethernet/qlogic/qede/qede.h
+@@ -501,6 +501,7 @@ struct qede_fastpath {
+ #define QEDE_SP_HW_ERR                  4
+ #define QEDE_SP_ARFS_CONFIG             5
+ #define QEDE_SP_AER			7
++#define QEDE_SP_DISABLE			8
+ 
+ #ifdef CONFIG_RFS_ACCEL
+ int qede_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index 01ac1e93d27a6..7c6064baeba28 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -1009,6 +1009,13 @@ static void qede_sp_task(struct work_struct *work)
+ 	struct qede_dev *edev = container_of(work, struct qede_dev,
+ 					     sp_task.work);
+ 
++	/* Disable execution of this deferred work once
++	 * qede removal is in progress; this stops any future
++	 * scheduling of sp_task.
++	 */
++	if (test_bit(QEDE_SP_DISABLE, &edev->sp_flags))
++		return;
++
+ 	/* The locking scheme depends on the specific flag:
+ 	 * In case of QEDE_SP_RECOVERY, acquiring the RTNL lock is required to
+ 	 * ensure that ongoing flows are ended and new ones are not started.
+@@ -1300,6 +1307,7 @@ static void __qede_remove(struct pci_dev *pdev, enum qede_remove_mode mode)
+ 	qede_rdma_dev_remove(edev, (mode == QEDE_REMOVE_RECOVERY));
+ 
+ 	if (mode != QEDE_REMOVE_RECOVERY) {
++		set_bit(QEDE_SP_DISABLE, &edev->sp_flags);
+ 		unregister_netdev(ndev);
+ 
+ 		cancel_delayed_work_sync(&edev->sp_task);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+index d8882d0b6b498..d51bac7ba5afa 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+@@ -3156,8 +3156,10 @@ int qlcnic_83xx_flash_read32(struct qlcnic_adapter *adapter, u32 flash_addr,
+ 
+ 		indirect_addr = QLC_83XX_FLASH_DIRECT_DATA(addr);
+ 		ret = QLCRD32(adapter, indirect_addr, &err);
+-		if (err == -EIO)
++		if (err == -EIO) {
++			qlcnic_83xx_unlock_flash(adapter);
+ 			return err;
++		}
+ 
+ 		word = ret;
+ 		*(u32 *)p_data  = word;
+diff --git a/drivers/net/hamradio/6pack.c b/drivers/net/hamradio/6pack.c
+index 80f41945709f9..da6a2a4b6cc75 100644
+--- a/drivers/net/hamradio/6pack.c
++++ b/drivers/net/hamradio/6pack.c
+@@ -833,6 +833,12 @@ static void decode_data(struct sixpack *sp, unsigned char inbyte)
+ 		return;
+ 	}
+ 
++	if (sp->rx_count_cooked + 2 >= sizeof(sp->cooked_buf)) {
++		pr_err("6pack: cooked buffer overrun, data loss\n");
++		sp->rx_count = 0;
++		return;
++	}
++
+ 	buf = sp->raw_buf;
+ 	sp->cooked_buf[sp->rx_count_cooked++] =
+ 		buf[0] | ((buf[1] << 2) & 0xc0);
+diff --git a/drivers/net/mdio/mdio-mux.c b/drivers/net/mdio/mdio-mux.c
+index 110e4ee85785c..3dde0c2b3e097 100644
+--- a/drivers/net/mdio/mdio-mux.c
++++ b/drivers/net/mdio/mdio-mux.c
+@@ -82,6 +82,17 @@ out:
+ 
+ static int parent_count;
+ 
++static void mdio_mux_uninit_children(struct mdio_mux_parent_bus *pb)
++{
++	struct mdio_mux_child_bus *cb = pb->children;
++
++	while (cb) {
++		mdiobus_unregister(cb->mii_bus);
++		mdiobus_free(cb->mii_bus);
++		cb = cb->next;
++	}
++}
++
+ int mdio_mux_init(struct device *dev,
+ 		  struct device_node *mux_node,
+ 		  int (*switch_fn)(int cur, int desired, void *data),
+@@ -144,7 +155,7 @@ int mdio_mux_init(struct device *dev,
+ 		cb = devm_kzalloc(dev, sizeof(*cb), GFP_KERNEL);
+ 		if (!cb) {
+ 			ret_val = -ENOMEM;
+-			continue;
++			goto err_loop;
+ 		}
+ 		cb->bus_number = v;
+ 		cb->parent = pb;
+@@ -152,8 +163,7 @@ int mdio_mux_init(struct device *dev,
+ 		cb->mii_bus = mdiobus_alloc();
+ 		if (!cb->mii_bus) {
+ 			ret_val = -ENOMEM;
+-			devm_kfree(dev, cb);
+-			continue;
++			goto err_loop;
+ 		}
+ 		cb->mii_bus->priv = cb;
+ 
+@@ -165,11 +175,15 @@ int mdio_mux_init(struct device *dev,
+ 		cb->mii_bus->write = mdio_mux_write;
+ 		r = of_mdiobus_register(cb->mii_bus, child_bus_node);
+ 		if (r) {
++			mdiobus_free(cb->mii_bus);
++			if (r == -EPROBE_DEFER) {
++				ret_val = r;
++				goto err_loop;
++			}
++			devm_kfree(dev, cb);
+ 			dev_err(dev,
+ 				"Error: Failed to register MDIO bus for child %pOF\n",
+ 				child_bus_node);
+-			mdiobus_free(cb->mii_bus);
+-			devm_kfree(dev, cb);
+ 		} else {
+ 			cb->next = pb->children;
+ 			pb->children = cb;
+@@ -182,6 +196,10 @@ int mdio_mux_init(struct device *dev,
+ 
+ 	dev_err(dev, "Error: No acceptable child buses found\n");
+ 	devm_kfree(dev, pb);
++
++err_loop:
++	mdio_mux_uninit_children(pb);
++	of_node_put(child_bus_node);
+ err_pb_kz:
+ 	put_device(&parent_bus->dev);
+ err_parent_bus:
+@@ -193,14 +211,8 @@ EXPORT_SYMBOL_GPL(mdio_mux_init);
+ void mdio_mux_uninit(void *mux_handle)
+ {
+ 	struct mdio_mux_parent_bus *pb = mux_handle;
+-	struct mdio_mux_child_bus *cb = pb->children;
+-
+-	while (cb) {
+-		mdiobus_unregister(cb->mii_bus);
+-		mdiobus_free(cb->mii_bus);
+-		cb = cb->next;
+-	}
+ 
++	mdio_mux_uninit_children(pb);
+ 	put_device(&pb->mii_bus->dev);
+ }
+ EXPORT_SYMBOL_GPL(mdio_mux_uninit);
+diff --git a/drivers/net/usb/asix.h b/drivers/net/usb/asix.h
+index 3b53685301dec..edb94efd265e1 100644
+--- a/drivers/net/usb/asix.h
++++ b/drivers/net/usb/asix.h
+@@ -205,8 +205,7 @@ struct sk_buff *asix_tx_fixup(struct usbnet *dev, struct sk_buff *skb,
+ int asix_set_sw_mii(struct usbnet *dev, int in_pm);
+ int asix_set_hw_mii(struct usbnet *dev, int in_pm);
+ 
+-int asix_read_phy_addr(struct usbnet *dev, int internal);
+-int asix_get_phy_addr(struct usbnet *dev);
++int asix_read_phy_addr(struct usbnet *dev, bool internal);
+ 
+ int asix_sw_reset(struct usbnet *dev, u8 flags, int in_pm);
+ 
+diff --git a/drivers/net/usb/asix_common.c b/drivers/net/usb/asix_common.c
+index 7bc6e8f856fe0..e1109f1a8dd5f 100644
+--- a/drivers/net/usb/asix_common.c
++++ b/drivers/net/usb/asix_common.c
+@@ -288,32 +288,33 @@ int asix_set_hw_mii(struct usbnet *dev, int in_pm)
+ 	return ret;
+ }
+ 
+-int asix_read_phy_addr(struct usbnet *dev, int internal)
++int asix_read_phy_addr(struct usbnet *dev, bool internal)
+ {
+-	int offset = (internal ? 1 : 0);
++	int ret, offset;
+ 	u8 buf[2];
+-	int ret = asix_read_cmd(dev, AX_CMD_READ_PHY_ID, 0, 0, 2, buf, 0);
+ 
+-	netdev_dbg(dev->net, "asix_get_phy_addr()\n");
++	ret = asix_read_cmd(dev, AX_CMD_READ_PHY_ID, 0, 0, 2, buf, 0);
++	if (ret < 0)
++		goto error;
+ 
+ 	if (ret < 2) {
+-		netdev_err(dev->net, "Error reading PHYID register: %02x\n", ret);
+-		goto out;
++		ret = -EIO;
++		goto error;
+ 	}
+-	netdev_dbg(dev->net, "asix_get_phy_addr() returning 0x%04x\n",
+-		   *((__le16 *)buf));
++
++	offset = (internal ? 1 : 0);
+ 	ret = buf[offset];
+ 
+-out:
++	netdev_dbg(dev->net, "%s PHY address 0x%x\n",
++		   internal ? "internal" : "external", ret);
++
+ 	return ret;
+-}
+ 
+-int asix_get_phy_addr(struct usbnet *dev)
+-{
+-	/* return the address of the internal phy */
+-	return asix_read_phy_addr(dev, 1);
+-}
++error:
++	netdev_err(dev->net, "Error reading PHY_ID register: %02x\n", ret);
+ 
++	return ret;
++}
+ 
+ int asix_sw_reset(struct usbnet *dev, u8 flags, int in_pm)
+ {
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index 19a8fafb8f04b..fb523734bf31f 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -262,7 +262,10 @@ static int ax88172_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	dev->mii.mdio_write = asix_mdio_write;
+ 	dev->mii.phy_id_mask = 0x3f;
+ 	dev->mii.reg_num_mask = 0x1f;
+-	dev->mii.phy_id = asix_get_phy_addr(dev);
++
++	dev->mii.phy_id = asix_read_phy_addr(dev, true);
++	if (dev->mii.phy_id < 0)
++		return dev->mii.phy_id;
+ 
+ 	dev->net->netdev_ops = &ax88172_netdev_ops;
+ 	dev->net->ethtool_ops = &ax88172_ethtool_ops;
+@@ -717,7 +720,10 @@ static int ax88772_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	dev->mii.mdio_write = asix_mdio_write;
+ 	dev->mii.phy_id_mask = 0x1f;
+ 	dev->mii.reg_num_mask = 0x1f;
+-	dev->mii.phy_id = asix_get_phy_addr(dev);
++
++	dev->mii.phy_id = asix_read_phy_addr(dev, true);
++	if (dev->mii.phy_id < 0)
++		return dev->mii.phy_id;
+ 
+ 	dev->net->netdev_ops = &ax88772_netdev_ops;
+ 	dev->net->ethtool_ops = &ax88772_ethtool_ops;
+@@ -1081,7 +1087,10 @@ static int ax88178_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	dev->mii.phy_id_mask = 0x1f;
+ 	dev->mii.reg_num_mask = 0xff;
+ 	dev->mii.supports_gmii = 1;
+-	dev->mii.phy_id = asix_get_phy_addr(dev);
++
++	dev->mii.phy_id = asix_read_phy_addr(dev, true);
++	if (dev->mii.phy_id < 0)
++		return dev->mii.phy_id;
+ 
+ 	dev->net->netdev_ops = &ax88178_netdev_ops;
+ 	dev->net->ethtool_ops = &ax88178_ethtool_ops;
+diff --git a/drivers/net/usb/ax88172a.c b/drivers/net/usb/ax88172a.c
+index b404c9462dcec..c8ca5187eece5 100644
+--- a/drivers/net/usb/ax88172a.c
++++ b/drivers/net/usb/ax88172a.c
+@@ -220,6 +220,11 @@ static int ax88172a_bind(struct usbnet *dev, struct usb_interface *intf)
+ 	}
+ 
+ 	priv->phy_addr = asix_read_phy_addr(dev, priv->use_embdphy);
++	if (priv->phy_addr < 0) {
++		ret = priv->phy_addr;
++		goto free;
++	}
++
+ 	ax88172a_reset_phy(dev, priv->use_embdphy);
+ 
+ 	/* Asix framing packs multiple eth frames into a 2K usb bulk transfer */
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 02bce40a67e5b..c46a66ea32ebd 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1154,7 +1154,7 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ {
+ 	struct phy_device *phydev = dev->net->phydev;
+ 	struct ethtool_link_ksettings ecmd;
+-	int ladv, radv, ret;
++	int ladv, radv, ret, link;
+ 	u32 buf;
+ 
+ 	/* clear LAN78xx interrupt status */
+@@ -1162,9 +1162,12 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ 	if (unlikely(ret < 0))
+ 		return -EIO;
+ 
++	mutex_lock(&phydev->lock);
+ 	phy_read_status(phydev);
++	link = phydev->link;
++	mutex_unlock(&phydev->lock);
+ 
+-	if (!phydev->link && dev->link_on) {
++	if (!link && dev->link_on) {
+ 		dev->link_on = false;
+ 
+ 		/* reset MAC */
+@@ -1177,7 +1180,7 @@ static int lan78xx_link_reset(struct lan78xx_net *dev)
+ 			return -EIO;
+ 
+ 		del_timer(&dev->stat_monitor);
+-	} else if (phydev->link && !dev->link_on) {
++	} else if (link && !dev->link_on) {
+ 		dev->link_on = true;
+ 
+ 		phy_ethtool_ksettings_get(phydev, &ecmd);
+@@ -1466,9 +1469,14 @@ static int lan78xx_set_eee(struct net_device *net, struct ethtool_eee *edata)
+ 
+ static u32 lan78xx_get_link(struct net_device *net)
+ {
++	u32 link;
++
++	mutex_lock(&net->phydev->lock);
+ 	phy_read_status(net->phydev);
++	link = net->phydev->link;
++	mutex_unlock(&net->phydev->lock);
+ 
+-	return net->phydev->link;
++	return link;
+ }
+ 
+ static void lan78xx_get_drvinfo(struct net_device *net,
+diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c
+index bc2dbf86496b5..a08a46fef0d22 100644
+--- a/drivers/net/usb/pegasus.c
++++ b/drivers/net/usb/pegasus.c
+@@ -132,9 +132,15 @@ static int get_registers(pegasus_t *pegasus, __u16 indx, __u16 size, void *data)
+ static int set_registers(pegasus_t *pegasus, __u16 indx, __u16 size,
+ 			 const void *data)
+ {
+-	return usb_control_msg_send(pegasus->usb, 0, PEGASUS_REQ_SET_REGS,
++	int ret;
++
++	ret = usb_control_msg_send(pegasus->usb, 0, PEGASUS_REQ_SET_REGS,
+ 				    PEGASUS_REQT_WRITE, 0, indx, data, size,
+ 				    1000, GFP_NOIO);
++	if (ret < 0)
++		netif_dbg(pegasus, drv, pegasus->net, "%s failed with %d\n", __func__, ret);
++
++	return ret;
+ }
+ 
+ /*
+@@ -145,10 +151,15 @@ static int set_registers(pegasus_t *pegasus, __u16 indx, __u16 size,
+ static int set_register(pegasus_t *pegasus, __u16 indx, __u8 data)
+ {
+ 	void *buf = &data;
++	int ret;
+ 
+-	return usb_control_msg_send(pegasus->usb, 0, PEGASUS_REQ_SET_REG,
++	ret = usb_control_msg_send(pegasus->usb, 0, PEGASUS_REQ_SET_REG,
+ 				    PEGASUS_REQT_WRITE, data, indx, buf, 1,
+ 				    1000, GFP_NOIO);
++	if (ret < 0)
++		netif_dbg(pegasus, drv, pegasus->net, "%s failed with %d\n", __func__, ret);
++
++	return ret;
+ }
+ 
+ static int update_eth_regs_async(pegasus_t *pegasus)
+@@ -188,10 +199,9 @@ static int update_eth_regs_async(pegasus_t *pegasus)
+ 
+ static int __mii_op(pegasus_t *p, __u8 phy, __u8 indx, __u16 *regd, __u8 cmd)
+ {
+-	int i;
+-	__u8 data[4] = { phy, 0, 0, indx };
++	int i, ret;
+ 	__le16 regdi;
+-	int ret = -ETIMEDOUT;
++	__u8 data[4] = { phy, 0, 0, indx };
+ 
+ 	if (cmd & PHY_WRITE) {
+ 		__le16 *t = (__le16 *) & data[1];
+@@ -207,12 +217,15 @@ static int __mii_op(pegasus_t *p, __u8 phy, __u8 indx, __u16 *regd, __u8 cmd)
+ 		if (data[0] & PHY_DONE)
+ 			break;
+ 	}
+-	if (i >= REG_TIMEOUT)
++	if (i >= REG_TIMEOUT) {
++		ret = -ETIMEDOUT;
+ 		goto fail;
++	}
+ 	if (cmd & PHY_READ) {
+ 		ret = get_registers(p, PhyData, 2, &regdi);
++		if (ret < 0)
++			goto fail;
+ 		*regd = le16_to_cpu(regdi);
+-		return ret;
+ 	}
+ 	return 0;
+ fail:
+@@ -235,9 +248,13 @@ static int write_mii_word(pegasus_t *pegasus, __u8 phy, __u8 indx, __u16 *regd)
+ static int mdio_read(struct net_device *dev, int phy_id, int loc)
+ {
+ 	pegasus_t *pegasus = netdev_priv(dev);
++	int ret;
+ 	u16 res;
+ 
+-	read_mii_word(pegasus, phy_id, loc, &res);
++	ret = read_mii_word(pegasus, phy_id, loc, &res);
++	if (ret < 0)
++		return ret;
++
+ 	return (int)res;
+ }
+ 
+@@ -251,10 +268,9 @@ static void mdio_write(struct net_device *dev, int phy_id, int loc, int val)
+ 
+ static int read_eprom_word(pegasus_t *pegasus, __u8 index, __u16 *retdata)
+ {
+-	int i;
+-	__u8 tmp = 0;
++	int ret, i;
+ 	__le16 retdatai;
+-	int ret;
++	__u8 tmp = 0;
+ 
+ 	set_register(pegasus, EpromCtrl, 0);
+ 	set_register(pegasus, EpromOffset, index);
+@@ -262,21 +278,25 @@ static int read_eprom_word(pegasus_t *pegasus, __u8 index, __u16 *retdata)
+ 
+ 	for (i = 0; i < REG_TIMEOUT; i++) {
+ 		ret = get_registers(pegasus, EpromCtrl, 1, &tmp);
++		if (ret < 0)
++			goto fail;
+ 		if (tmp & EPROM_DONE)
+ 			break;
+-		if (ret == -ESHUTDOWN)
+-			goto fail;
+ 	}
+-	if (i >= REG_TIMEOUT)
++	if (i >= REG_TIMEOUT) {
++		ret = -ETIMEDOUT;
+ 		goto fail;
++	}
+ 
+ 	ret = get_registers(pegasus, EpromData, 2, &retdatai);
++	if (ret < 0)
++		goto fail;
+ 	*retdata = le16_to_cpu(retdatai);
+ 	return ret;
+ 
+ fail:
+-	netif_warn(pegasus, drv, pegasus->net, "%s failed\n", __func__);
+-	return -ETIMEDOUT;
++	netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
++	return ret;
+ }
+ 
+ #ifdef	PEGASUS_WRITE_EEPROM
+@@ -324,10 +344,10 @@ static int write_eprom_word(pegasus_t *pegasus, __u8 index, __u16 data)
+ 	return ret;
+ 
+ fail:
+-	netif_warn(pegasus, drv, pegasus->net, "%s failed\n", __func__);
++	netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
+ 	return -ETIMEDOUT;
+ }
+-#endif				/* PEGASUS_WRITE_EEPROM */
++#endif	/* PEGASUS_WRITE_EEPROM */
+ 
+ static inline int get_node_id(pegasus_t *pegasus, u8 *id)
+ {
+@@ -367,19 +387,21 @@ static void set_ethernet_addr(pegasus_t *pegasus)
+ 	return;
+ err:
+ 	eth_hw_addr_random(pegasus->net);
+-	dev_info(&pegasus->intf->dev, "software assigned MAC address.\n");
++	netif_dbg(pegasus, drv, pegasus->net, "software assigned MAC address.\n");
+ 
+ 	return;
+ }
+ 
+ static inline int reset_mac(pegasus_t *pegasus)
+ {
++	int ret, i;
+ 	__u8 data = 0x8;
+-	int i;
+ 
+ 	set_register(pegasus, EthCtrl1, data);
+ 	for (i = 0; i < REG_TIMEOUT; i++) {
+-		get_registers(pegasus, EthCtrl1, 1, &data);
++		ret = get_registers(pegasus, EthCtrl1, 1, &data);
++		if (ret < 0)
++			goto fail;
+ 		if (~data & 0x08) {
+ 			if (loopback)
+ 				break;
+@@ -402,22 +424,29 @@ static inline int reset_mac(pegasus_t *pegasus)
+ 	}
+ 	if (usb_dev_id[pegasus->dev_index].vendor == VENDOR_ELCON) {
+ 		__u16 auxmode;
+-		read_mii_word(pegasus, 3, 0x1b, &auxmode);
++		ret = read_mii_word(pegasus, 3, 0x1b, &auxmode);
++		if (ret < 0)
++			goto fail;
+ 		auxmode |= 4;
+ 		write_mii_word(pegasus, 3, 0x1b, &auxmode);
+ 	}
+ 
+ 	return 0;
++fail:
++	netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
++	return ret;
+ }
+ 
+ static int enable_net_traffic(struct net_device *dev, struct usb_device *usb)
+ {
+-	__u16 linkpart;
+-	__u8 data[4];
+ 	pegasus_t *pegasus = netdev_priv(dev);
+ 	int ret;
++	__u16 linkpart;
++	__u8 data[4];
+ 
+-	read_mii_word(pegasus, pegasus->phy, MII_LPA, &linkpart);
++	ret = read_mii_word(pegasus, pegasus->phy, MII_LPA, &linkpart);
++	if (ret < 0)
++		goto fail;
+ 	data[0] = 0xc8; /* TX & RX enable, append status, no CRC */
+ 	data[1] = 0;
+ 	if (linkpart & (ADVERTISE_100FULL | ADVERTISE_10FULL))
+@@ -435,11 +464,16 @@ static int enable_net_traffic(struct net_device *dev, struct usb_device *usb)
+ 	    usb_dev_id[pegasus->dev_index].vendor == VENDOR_LINKSYS2 ||
+ 	    usb_dev_id[pegasus->dev_index].vendor == VENDOR_DLINK) {
+ 		u16 auxmode;
+-		read_mii_word(pegasus, 0, 0x1b, &auxmode);
++		ret = read_mii_word(pegasus, 0, 0x1b, &auxmode);
++		if (ret < 0)
++			goto fail;
+ 		auxmode |= 4;
+ 		write_mii_word(pegasus, 0, 0x1b, &auxmode);
+ 	}
+ 
++	return 0;
++fail:
++	netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
+ 	return ret;
+ }
+ 
+@@ -447,9 +481,9 @@ static void read_bulk_callback(struct urb *urb)
+ {
+ 	pegasus_t *pegasus = urb->context;
+ 	struct net_device *net;
++	u8 *buf = urb->transfer_buffer;
+ 	int rx_status, count = urb->actual_length;
+ 	int status = urb->status;
+-	u8 *buf = urb->transfer_buffer;
+ 	__u16 pkt_len;
+ 
+ 	if (!pegasus)
+@@ -1004,8 +1038,7 @@ static int pegasus_ioctl(struct net_device *net, struct ifreq *rq, int cmd)
+ 		data[0] = pegasus->phy;
+ 		fallthrough;
+ 	case SIOCDEVPRIVATE + 1:
+-		read_mii_word(pegasus, data[0], data[1] & 0x1f, &data[3]);
+-		res = 0;
++		res = read_mii_word(pegasus, data[0], data[1] & 0x1f, &data[3]);
+ 		break;
+ 	case SIOCDEVPRIVATE + 2:
+ 		if (!capable(CAP_NET_ADMIN))
+@@ -1039,22 +1072,25 @@ static void pegasus_set_multicast(struct net_device *net)
+ 
+ static __u8 mii_phy_probe(pegasus_t *pegasus)
+ {
+-	int i;
++	int i, ret;
+ 	__u16 tmp;
+ 
+ 	for (i = 0; i < 32; i++) {
+-		read_mii_word(pegasus, i, MII_BMSR, &tmp);
++		ret = read_mii_word(pegasus, i, MII_BMSR, &tmp);
++		if (ret < 0)
++			goto fail;
+ 		if (tmp == 0 || tmp == 0xffff || (tmp & BMSR_MEDIA) == 0)
+ 			continue;
+ 		else
+ 			return i;
+ 	}
+-
++fail:
+ 	return 0xff;
+ }
+ 
+ static inline void setup_pegasus_II(pegasus_t *pegasus)
+ {
++	int ret;
+ 	__u8 data = 0xa5;
+ 
+ 	set_register(pegasus, Reg1d, 0);
+@@ -1066,7 +1102,9 @@ static inline void setup_pegasus_II(pegasus_t *pegasus)
+ 		set_register(pegasus, Reg7b, 2);
+ 
+ 	set_register(pegasus, 0x83, data);
+-	get_registers(pegasus, 0x83, 1, &data);
++	ret = get_registers(pegasus, 0x83, 1, &data);
++	if (ret < 0)
++		goto fail;
+ 
+ 	if (data == 0xa5)
+ 		pegasus->chip = 0x8513;
+@@ -1081,6 +1119,10 @@ static inline void setup_pegasus_II(pegasus_t *pegasus)
+ 		set_register(pegasus, Reg81, 6);
+ 	else
+ 		set_register(pegasus, Reg81, 2);
++
++	return;
++fail:
++	netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
+ }
+ 
+ static void check_carrier(struct work_struct *work)
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 2cf763b4ea84f..a044f37e87aed 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -3953,17 +3953,28 @@ static void rtl_clear_bp(struct r8152 *tp, u16 type)
+ 	case RTL_VER_06:
+ 		ocp_write_byte(tp, type, PLA_BP_EN, 0);
+ 		break;
++	case RTL_VER_14:
++		ocp_write_word(tp, type, USB_BP2_EN, 0);
++
++		ocp_write_word(tp, type, USB_BP_8, 0);
++		ocp_write_word(tp, type, USB_BP_9, 0);
++		ocp_write_word(tp, type, USB_BP_10, 0);
++		ocp_write_word(tp, type, USB_BP_11, 0);
++		ocp_write_word(tp, type, USB_BP_12, 0);
++		ocp_write_word(tp, type, USB_BP_13, 0);
++		ocp_write_word(tp, type, USB_BP_14, 0);
++		ocp_write_word(tp, type, USB_BP_15, 0);
++		break;
+ 	case RTL_VER_08:
+ 	case RTL_VER_09:
+ 	case RTL_VER_10:
+ 	case RTL_VER_11:
+ 	case RTL_VER_12:
+ 	case RTL_VER_13:
+-	case RTL_VER_14:
+ 	case RTL_VER_15:
+ 	default:
+ 		if (type == MCU_TYPE_USB) {
+-			ocp_write_byte(tp, MCU_TYPE_USB, USB_BP2_EN, 0);
++			ocp_write_word(tp, MCU_TYPE_USB, USB_BP2_EN, 0);
+ 
+ 			ocp_write_word(tp, MCU_TYPE_USB, USB_BP_8, 0);
+ 			ocp_write_word(tp, MCU_TYPE_USB, USB_BP_9, 0);
+@@ -4329,7 +4340,6 @@ static bool rtl8152_is_fw_mac_ok(struct r8152 *tp, struct fw_mac *mac)
+ 		case RTL_VER_11:
+ 		case RTL_VER_12:
+ 		case RTL_VER_13:
+-		case RTL_VER_14:
+ 		case RTL_VER_15:
+ 			fw_reg = 0xf800;
+ 			bp_ba_addr = PLA_BP_BA;
+@@ -4337,6 +4347,13 @@ static bool rtl8152_is_fw_mac_ok(struct r8152 *tp, struct fw_mac *mac)
+ 			bp_start = PLA_BP_0;
+ 			max_bp = 8;
+ 			break;
++		case RTL_VER_14:
++			fw_reg = 0xf800;
++			bp_ba_addr = PLA_BP_BA;
++			bp_en_addr = USB_BP2_EN;
++			bp_start = PLA_BP_0;
++			max_bp = 16;
++			break;
+ 		default:
+ 			goto out;
+ 		}
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 6af2279644137..d397dc6b0ebf2 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -63,7 +63,7 @@ static const unsigned long guest_offloads[] = {
+ 	VIRTIO_NET_F_GUEST_CSUM
+ };
+ 
+-#define GUEST_OFFLOAD_LRO_MASK ((1ULL << VIRTIO_NET_F_GUEST_TSO4) | \
++#define GUEST_OFFLOAD_GRO_HW_MASK ((1ULL << VIRTIO_NET_F_GUEST_TSO4) | \
+ 				(1ULL << VIRTIO_NET_F_GUEST_TSO6) | \
+ 				(1ULL << VIRTIO_NET_F_GUEST_ECN)  | \
+ 				(1ULL << VIRTIO_NET_F_GUEST_UFO))
+@@ -2490,7 +2490,7 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ 	        virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_ECN) ||
+ 		virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_UFO) ||
+ 		virtio_has_feature(vi->vdev, VIRTIO_NET_F_GUEST_CSUM))) {
+-		NL_SET_ERR_MSG_MOD(extack, "Can't set XDP while host is implementing LRO/CSUM, disable LRO/CSUM first");
++		NL_SET_ERR_MSG_MOD(extack, "Can't set XDP while host is implementing GRO_HW/CSUM, disable GRO_HW/CSUM first");
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+@@ -2621,15 +2621,15 @@ static int virtnet_set_features(struct net_device *dev,
+ 	u64 offloads;
+ 	int err;
+ 
+-	if ((dev->features ^ features) & NETIF_F_LRO) {
++	if ((dev->features ^ features) & NETIF_F_GRO_HW) {
+ 		if (vi->xdp_enabled)
+ 			return -EBUSY;
+ 
+-		if (features & NETIF_F_LRO)
++		if (features & NETIF_F_GRO_HW)
+ 			offloads = vi->guest_offloads_capable;
+ 		else
+ 			offloads = vi->guest_offloads_capable &
+-				   ~GUEST_OFFLOAD_LRO_MASK;
++				   ~GUEST_OFFLOAD_GRO_HW_MASK;
+ 
+ 		err = virtnet_set_guest_offloads(vi, offloads);
+ 		if (err)
+@@ -3109,9 +3109,9 @@ static int virtnet_probe(struct virtio_device *vdev)
+ 		dev->features |= NETIF_F_RXCSUM;
+ 	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) ||
+ 	    virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6))
+-		dev->features |= NETIF_F_LRO;
++		dev->features |= NETIF_F_GRO_HW;
+ 	if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS))
+-		dev->hw_features |= NETIF_F_LRO;
++		dev->hw_features |= NETIF_F_GRO_HW;
+ 
+ 	dev->vlan_features = dev->features;
+ 
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 414afcb0a23f8..b1c451d10a5d2 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1367,6 +1367,8 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ 	bool need_strict = rt6_need_strict(&ipv6_hdr(skb)->daddr);
+ 	bool is_ndisc = ipv6_ndisc_frame(skb);
+ 
++	nf_reset_ct(skb);
++
+ 	/* loopback, multicast & non-ND link-local traffic; do not push through
+ 	 * packet taps again. Reset pkt_type for upper layers to process skb.
+ 	 * For strict packets with a source LLA, determine the dst using the
+@@ -1429,6 +1431,8 @@ static struct sk_buff *vrf_ip_rcv(struct net_device *vrf_dev,
+ 	skb->skb_iif = vrf_dev->ifindex;
+ 	IPCB(skb)->flags |= IPSKB_L3SLAVE;
+ 
++	nf_reset_ct(skb);
++
+ 	if (ipv4_is_multicast(ip_hdr(skb)->daddr))
+ 		goto out;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 607980321d275..106177072d18e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -111,7 +111,7 @@ mt7915_mcu_get_cipher(int cipher)
+ 	case WLAN_CIPHER_SUITE_SMS4:
+ 		return MCU_CIPHER_WAPI;
+ 	default:
+-		return MT_CIPHER_NONE;
++		return MCU_CIPHER_NONE;
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
+index 517621044d9e9..c0255c3ac7d01 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
+@@ -1035,7 +1035,8 @@ enum {
+ };
+ 
+ enum mcu_cipher_type {
+-	MCU_CIPHER_WEP40 = 1,
++	MCU_CIPHER_NONE = 0,
++	MCU_CIPHER_WEP40,
+ 	MCU_CIPHER_WEP104,
+ 	MCU_CIPHER_WEP128,
+ 	MCU_CIPHER_TKIP,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index 47843b055959b..fc0d7dc3a5f34 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -111,7 +111,7 @@ mt7921_mcu_get_cipher(int cipher)
+ 	case WLAN_CIPHER_SUITE_SMS4:
+ 		return MCU_CIPHER_WAPI;
+ 	default:
+-		return MT_CIPHER_NONE;
++		return MCU_CIPHER_NONE;
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
+index 07abe86f07a94..adad20819341f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
+@@ -198,7 +198,8 @@ struct sta_rec_sec {
+ } __packed;
+ 
+ enum mcu_cipher_type {
+-	MCU_CIPHER_WEP40 = 1,
++	MCU_CIPHER_NONE = 0,
++	MCU_CIPHER_WEP40,
+ 	MCU_CIPHER_WEP104,
+ 	MCU_CIPHER_WEP128,
+ 	MCU_CIPHER_TKIP,
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index e366218d67367..4c23b5736c773 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -1846,9 +1846,6 @@ void dev_pm_opp_put_supported_hw(struct opp_table *opp_table)
+ 	if (unlikely(!opp_table))
+ 		return;
+ 
+-	/* Make sure there are no concurrent readers while updating opp_table */
+-	WARN_ON(!list_empty(&opp_table->opp_list));
+-
+ 	kfree(opp_table->supported_hw);
+ 	opp_table->supported_hw = NULL;
+ 	opp_table->supported_hw_count = 0;
+@@ -1934,9 +1931,6 @@ void dev_pm_opp_put_prop_name(struct opp_table *opp_table)
+ 	if (unlikely(!opp_table))
+ 		return;
+ 
+-	/* Make sure there are no concurrent readers while updating opp_table */
+-	WARN_ON(!list_empty(&opp_table->opp_list));
+-
+ 	kfree(opp_table->prop_name);
+ 	opp_table->prop_name = NULL;
+ 
+@@ -2046,9 +2040,6 @@ void dev_pm_opp_put_regulators(struct opp_table *opp_table)
+ 	if (!opp_table->regulators)
+ 		goto put_opp_table;
+ 
+-	/* Make sure there are no concurrent readers while updating opp_table */
+-	WARN_ON(!list_empty(&opp_table->opp_list));
+-
+ 	if (opp_table->enabled) {
+ 		for (i = opp_table->regulator_count - 1; i >= 0; i--)
+ 			regulator_disable(opp_table->regulators[i]);
+@@ -2168,9 +2159,6 @@ void dev_pm_opp_put_clkname(struct opp_table *opp_table)
+ 	if (unlikely(!opp_table))
+ 		return;
+ 
+-	/* Make sure there are no concurrent readers while updating opp_table */
+-	WARN_ON(!list_empty(&opp_table->opp_list));
+-
+ 	clk_put(opp_table->clk);
+ 	opp_table->clk = ERR_PTR(-EINVAL);
+ 
+@@ -2269,9 +2257,6 @@ void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table)
+ 	if (unlikely(!opp_table))
+ 		return;
+ 
+-	/* Make sure there are no concurrent readers while updating opp_table */
+-	WARN_ON(!list_empty(&opp_table->opp_list));
+-
+ 	opp_table->set_opp = NULL;
+ 
+ 	mutex_lock(&opp_table->lock);
+diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
+index beb8d1f4fafee..fb667d78e7b34 100644
+--- a/drivers/pci/pci-sysfs.c
++++ b/drivers/pci/pci-sysfs.c
+@@ -978,7 +978,7 @@ void pci_create_legacy_files(struct pci_bus *b)
+ 	b->legacy_mem->size = 1024*1024;
+ 	b->legacy_mem->attr.mode = 0600;
+ 	b->legacy_mem->mmap = pci_mmap_legacy_mem;
+-	b->legacy_io->mapping = iomem_get_mapping();
++	b->legacy_mem->mapping = iomem_get_mapping();
+ 	pci_adjust_legacy_attr(b, pci_mmap_mem);
+ 	error = device_create_bin_file(&b->dev, b->legacy_mem);
+ 	if (error)
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 6d74386eadc2c..ab3de1551b503 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -1900,6 +1900,7 @@ static void quirk_ryzen_xhci_d3hot(struct pci_dev *dev)
+ }
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x15e0, quirk_ryzen_xhci_d3hot);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x15e1, quirk_ryzen_xhci_d3hot);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x1639, quirk_ryzen_xhci_d3hot);
+ 
+ #ifdef CONFIG_X86_IO_APIC
+ static int dmi_disable_ioapicreroute(const struct dmi_system_id *d)
+diff --git a/drivers/ptp/Kconfig b/drivers/ptp/Kconfig
+index 8c20e524e9ad4..e085c255da0c1 100644
+--- a/drivers/ptp/Kconfig
++++ b/drivers/ptp/Kconfig
+@@ -90,7 +90,8 @@ config PTP_1588_CLOCK_INES
+ config PTP_1588_CLOCK_PCH
+ 	tristate "Intel PCH EG20T as PTP clock"
+ 	depends on X86_32 || COMPILE_TEST
+-	depends on HAS_IOMEM && NET
++	depends on HAS_IOMEM && PCI
++	depends on NET
+ 	imply PTP_1588_CLOCK
+ 	help
+ 	  This driver adds support for using the PCH EG20T as a PTP
+diff --git a/drivers/scsi/device_handler/scsi_dh_rdac.c b/drivers/scsi/device_handler/scsi_dh_rdac.c
+index 25f6e1ac9e7bb..66652ab409cc9 100644
+--- a/drivers/scsi/device_handler/scsi_dh_rdac.c
++++ b/drivers/scsi/device_handler/scsi_dh_rdac.c
+@@ -453,8 +453,8 @@ static int initialize_controller(struct scsi_device *sdev,
+ 		if (!h->ctlr)
+ 			err = SCSI_DH_RES_TEMP_UNAVAIL;
+ 		else {
+-			list_add_rcu(&h->node, &h->ctlr->dh_list);
+ 			h->sdev = sdev;
++			list_add_rcu(&h->node, &h->ctlr->dh_list);
+ 		}
+ 		spin_unlock(&list_lock);
+ 		err = SCSI_DH_OK;
+@@ -778,11 +778,11 @@ static void rdac_bus_detach( struct scsi_device *sdev )
+ 	spin_lock(&list_lock);
+ 	if (h->ctlr) {
+ 		list_del_rcu(&h->node);
+-		h->sdev = NULL;
+ 		kref_put(&h->ctlr->kref, release_controller);
+ 	}
+ 	spin_unlock(&list_lock);
+ 	sdev->handler_data = NULL;
++	synchronize_rcu();
+ 	kfree(h);
+ }
+ 
+diff --git a/drivers/scsi/megaraid/megaraid_mm.c b/drivers/scsi/megaraid/megaraid_mm.c
+index abf7b401f5b90..c509440bd1610 100644
+--- a/drivers/scsi/megaraid/megaraid_mm.c
++++ b/drivers/scsi/megaraid/megaraid_mm.c
+@@ -238,7 +238,7 @@ mraid_mm_get_adapter(mimd_t __user *umimd, int *rval)
+ 	mimd_t		mimd;
+ 	uint32_t	adapno;
+ 	int		iterator;
+-
++	bool		is_found;
+ 
+ 	if (copy_from_user(&mimd, umimd, sizeof(mimd_t))) {
+ 		*rval = -EFAULT;
+@@ -254,12 +254,16 @@ mraid_mm_get_adapter(mimd_t __user *umimd, int *rval)
+ 
+ 	adapter = NULL;
+ 	iterator = 0;
++	is_found = false;
+ 
+ 	list_for_each_entry(adapter, &adapters_list_g, list) {
+-		if (iterator++ == adapno) break;
++		if (iterator++ == adapno) {
++			is_found = true;
++			break;
++		}
+ 	}
+ 
+-	if (!adapter) {
++	if (!is_found) {
+ 		*rval = -ENODEV;
+ 		return NULL;
+ 	}
+@@ -725,6 +729,7 @@ ioctl_done(uioc_t *kioc)
+ 	uint32_t	adapno;
+ 	int		iterator;
+ 	mraid_mmadp_t*	adapter;
++	bool		is_found;
+ 
+ 	/*
+ 	 * When the kioc returns from driver, make sure it still doesn't
+@@ -747,19 +752,23 @@ ioctl_done(uioc_t *kioc)
+ 		iterator	= 0;
+ 		adapter		= NULL;
+ 		adapno		= kioc->adapno;
++		is_found	= false;
+ 
+ 		con_log(CL_ANN, ( KERN_WARNING "megaraid cmm: completed "
+ 					"ioctl that was timedout before\n"));
+ 
+ 		list_for_each_entry(adapter, &adapters_list_g, list) {
+-			if (iterator++ == adapno) break;
++			if (iterator++ == adapno) {
++				is_found = true;
++				break;
++			}
+ 		}
+ 
+ 		kioc->timedout = 0;
+ 
+-		if (adapter) {
++		if (is_found)
+ 			mraid_mm_dealloc_kioc( adapter, kioc );
+-		}
++
+ 	}
+ 	else {
+ 		wake_up(&wait_q);
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index 335cf37e6cb94..2e429e31f1f0f 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -684,8 +684,7 @@ int pm8001_dev_found(struct domain_device *dev)
+ 
+ void pm8001_task_done(struct sas_task *task)
+ {
+-	if (!del_timer(&task->slow_task->timer))
+-		return;
++	del_timer(&task->slow_task->timer);
+ 	complete(&task->slow_task->completion);
+ }
+ 
+@@ -693,9 +692,14 @@ static void pm8001_tmf_timedout(struct timer_list *t)
+ {
+ 	struct sas_task_slow *slow = from_timer(slow, t, timer);
+ 	struct sas_task *task = slow->task;
++	unsigned long flags;
+ 
+-	task->task_state_flags |= SAS_TASK_STATE_ABORTED;
+-	complete(&task->slow_task->completion);
++	spin_lock_irqsave(&task->task_state_lock, flags);
++	if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) {
++		task->task_state_flags |= SAS_TASK_STATE_ABORTED;
++		complete(&task->slow_task->completion);
++	}
++	spin_unlock_irqrestore(&task->task_state_lock, flags);
+ }
+ 
+ #define PM8001_TASK_TIMEOUT 20
+@@ -748,13 +752,10 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
+ 		}
+ 		res = -TMF_RESP_FUNC_FAILED;
+ 		/* Even TMF timed out, return direct. */
+-		if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) {
+-			if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) {
+-				pm8001_dbg(pm8001_ha, FAIL,
+-					   "TMF task[%x]timeout.\n",
+-					   tmf->tmf);
+-				goto ex_err;
+-			}
++		if (task->task_state_flags & SAS_TASK_STATE_ABORTED) {
++			pm8001_dbg(pm8001_ha, FAIL, "TMF task[%x]timeout.\n",
++				   tmf->tmf);
++			goto ex_err;
+ 		}
+ 
+ 		if (task->task_status.resp == SAS_TASK_COMPLETE &&
+@@ -834,12 +835,9 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
+ 		wait_for_completion(&task->slow_task->completion);
+ 		res = TMF_RESP_FUNC_FAILED;
+ 		/* Even TMF timed out, return direct. */
+-		if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) {
+-			if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) {
+-				pm8001_dbg(pm8001_ha, FAIL,
+-					   "TMF task timeout.\n");
+-				goto ex_err;
+-			}
++		if (task->task_state_flags & SAS_TASK_STATE_ABORTED) {
++			pm8001_dbg(pm8001_ha, FAIL, "TMF task timeout.\n");
++			goto ex_err;
+ 		}
+ 
+ 		if (task->task_status.resp == SAS_TASK_COMPLETE &&
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 12f54571b83e4..f0367115632b4 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -471,7 +471,8 @@ static struct scsi_target *scsi_alloc_target(struct device *parent,
+ 		error = shost->hostt->target_alloc(starget);
+ 
+ 		if(error) {
+-			dev_printk(KERN_ERR, dev, "target allocation failed, error %d\n", error);
++			if (error != -ENXIO)
++				dev_err(dev, "target allocation failed, error %d\n", error);
+ 			/* don't want scsi_target_reap to do the final
+ 			 * put because it will be under the host lock */
+ 			scsi_target_destroy(starget);
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 32489d25158f8..ae9bfc658203e 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -807,11 +807,14 @@ store_state_field(struct device *dev, struct device_attribute *attr,
+ 	mutex_lock(&sdev->state_mutex);
+ 	ret = scsi_device_set_state(sdev, state);
+ 	/*
+-	 * If the device state changes to SDEV_RUNNING, we need to run
+-	 * the queue to avoid I/O hang.
++	 * If the device state changes to SDEV_RUNNING, we need to
++	 * rescan the device to revalidate it, and run the queue to
++	 * avoid I/O hang.
+ 	 */
+-	if (ret == 0 && state == SDEV_RUNNING)
++	if (ret == 0 && state == SDEV_RUNNING) {
++		scsi_rescan_device(dev);
+ 		blk_mq_run_hw_queues(sdev->request_queue, true);
++	}
+ 	mutex_unlock(&sdev->state_mutex);
+ 
+ 	return ret == 0 ? count : -EINVAL;
+diff --git a/drivers/slimbus/messaging.c b/drivers/slimbus/messaging.c
+index f2b5d347d227b..e5ae26227bdbf 100644
+--- a/drivers/slimbus/messaging.c
++++ b/drivers/slimbus/messaging.c
+@@ -66,7 +66,7 @@ int slim_alloc_txn_tid(struct slim_controller *ctrl, struct slim_msg_txn *txn)
+ 	int ret = 0;
+ 
+ 	spin_lock_irqsave(&ctrl->txn_lock, flags);
+-	ret = idr_alloc_cyclic(&ctrl->tid_idr, txn, 0,
++	ret = idr_alloc_cyclic(&ctrl->tid_idr, txn, 1,
+ 				SLIM_MAX_TIDS, GFP_ATOMIC);
+ 	if (ret < 0) {
+ 		spin_unlock_irqrestore(&ctrl->txn_lock, flags);
+@@ -131,7 +131,8 @@ int slim_do_transfer(struct slim_controller *ctrl, struct slim_msg_txn *txn)
+ 			goto slim_xfer_err;
+ 		}
+ 	}
+-
++	/* Initialize tid to invalid value */
++	txn->tid = 0;
+ 	need_tid = slim_tid_txn(txn->mt, txn->mc);
+ 
+ 	if (need_tid) {
+@@ -163,7 +164,7 @@ int slim_do_transfer(struct slim_controller *ctrl, struct slim_msg_txn *txn)
+ 			txn->mt, txn->mc, txn->la, ret);
+ 
+ slim_xfer_err:
+-	if (!clk_pause_msg && (!need_tid  || ret == -ETIMEDOUT)) {
++	if (!clk_pause_msg && (txn->tid == 0  || ret == -ETIMEDOUT)) {
+ 		/*
+ 		 * remove runtime-pm vote if this was TX only, or
+ 		 * if there was error during this transaction
+diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c
+index c054e83ab6361..7040293c2ee8f 100644
+--- a/drivers/slimbus/qcom-ngd-ctrl.c
++++ b/drivers/slimbus/qcom-ngd-ctrl.c
+@@ -618,7 +618,7 @@ static void qcom_slim_ngd_rx(struct qcom_slim_ngd_ctrl *ctrl, u8 *buf)
+ 		(mc == SLIM_USR_MC_GENERIC_ACK &&
+ 		 mt == SLIM_MSG_MT_SRC_REFERRED_USER)) {
+ 		slim_msg_response(&ctrl->ctrl, &buf[4], buf[3], len - 4);
+-		pm_runtime_mark_last_busy(ctrl->dev);
++		pm_runtime_mark_last_busy(ctrl->ctrl.dev);
+ 	}
+ }
+ 
+@@ -1080,7 +1080,8 @@ static void qcom_slim_ngd_setup(struct qcom_slim_ngd_ctrl *ctrl)
+ {
+ 	u32 cfg = readl_relaxed(ctrl->ngd->base);
+ 
+-	if (ctrl->state == QCOM_SLIM_NGD_CTRL_DOWN)
++	if (ctrl->state == QCOM_SLIM_NGD_CTRL_DOWN ||
++		ctrl->state == QCOM_SLIM_NGD_CTRL_ASLEEP)
+ 		qcom_slim_ngd_init_dma(ctrl);
+ 
+ 	/* By default enable message queues */
+@@ -1131,6 +1132,7 @@ static int qcom_slim_ngd_power_up(struct qcom_slim_ngd_ctrl *ctrl)
+ 			dev_info(ctrl->dev, "Subsys restart: ADSP active framer\n");
+ 			return 0;
+ 		}
++		qcom_slim_ngd_setup(ctrl);
+ 		return 0;
+ 	}
+ 
+@@ -1257,13 +1259,14 @@ static int qcom_slim_ngd_enable(struct qcom_slim_ngd_ctrl *ctrl, bool enable)
+ 		}
+ 		/* controller state should be in sync with framework state */
+ 		complete(&ctrl->qmi.qmi_comp);
+-		if (!pm_runtime_enabled(ctrl->dev) ||
+-				!pm_runtime_suspended(ctrl->dev))
+-			qcom_slim_ngd_runtime_resume(ctrl->dev);
++		if (!pm_runtime_enabled(ctrl->ctrl.dev) ||
++			 !pm_runtime_suspended(ctrl->ctrl.dev))
++			qcom_slim_ngd_runtime_resume(ctrl->ctrl.dev);
+ 		else
+-			pm_runtime_resume(ctrl->dev);
+-		pm_runtime_mark_last_busy(ctrl->dev);
+-		pm_runtime_put(ctrl->dev);
++			pm_runtime_resume(ctrl->ctrl.dev);
++
++		pm_runtime_mark_last_busy(ctrl->ctrl.dev);
++		pm_runtime_put(ctrl->ctrl.dev);
+ 
+ 		ret = slim_register_controller(&ctrl->ctrl);
+ 		if (ret) {
+@@ -1389,7 +1392,7 @@ static int qcom_slim_ngd_ssr_pdr_notify(struct qcom_slim_ngd_ctrl *ctrl,
+ 		/* Make sure the last dma xfer is finished */
+ 		mutex_lock(&ctrl->tx_lock);
+ 		if (ctrl->state != QCOM_SLIM_NGD_CTRL_DOWN) {
+-			pm_runtime_get_noresume(ctrl->dev);
++			pm_runtime_get_noresume(ctrl->ctrl.dev);
+ 			ctrl->state = QCOM_SLIM_NGD_CTRL_DOWN;
+ 			qcom_slim_ngd_down(ctrl);
+ 			qcom_slim_ngd_exit_dma(ctrl);
+@@ -1617,6 +1620,7 @@ static int __maybe_unused qcom_slim_ngd_runtime_suspend(struct device *dev)
+ 	struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(dev);
+ 	int ret = 0;
+ 
++	qcom_slim_ngd_exit_dma(ctrl);
+ 	if (!ctrl->qmi.handle)
+ 		return 0;
+ 
+diff --git a/drivers/soc/fsl/qe/qe_ic.c b/drivers/soc/fsl/qe/qe_ic.c
+index 3f711c1a0996a..bbae3d39c7bed 100644
+--- a/drivers/soc/fsl/qe/qe_ic.c
++++ b/drivers/soc/fsl/qe/qe_ic.c
+@@ -23,6 +23,7 @@
+ #include <linux/signal.h>
+ #include <linux/device.h>
+ #include <linux/spinlock.h>
++#include <linux/platform_device.h>
+ #include <asm/irq.h>
+ #include <asm/io.h>
+ #include <soc/fsl/qe/qe.h>
+@@ -53,8 +54,8 @@ struct qe_ic {
+ 	struct irq_chip hc_irq;
+ 
+ 	/* VIRQ numbers of QE high/low irqs */
+-	unsigned int virq_high;
+-	unsigned int virq_low;
++	int virq_high;
++	int virq_low;
+ };
+ 
+ /*
+@@ -404,42 +405,40 @@ static void qe_ic_cascade_muxed_mpic(struct irq_desc *desc)
+ 	chip->irq_eoi(&desc->irq_data);
+ }
+ 
+-static void __init qe_ic_init(struct device_node *node)
++static int qe_ic_init(struct platform_device *pdev)
+ {
++	struct device *dev = &pdev->dev;
+ 	void (*low_handler)(struct irq_desc *desc);
+ 	void (*high_handler)(struct irq_desc *desc);
+ 	struct qe_ic *qe_ic;
+-	struct resource res;
+-	u32 ret;
++	struct resource *res;
++	struct device_node *node = pdev->dev.of_node;
+ 
+-	ret = of_address_to_resource(node, 0, &res);
+-	if (ret)
+-		return;
++	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (res == NULL) {
++		dev_err(dev, "no memory resource defined\n");
++		return -ENODEV;
++	}
+ 
+-	qe_ic = kzalloc(sizeof(*qe_ic), GFP_KERNEL);
++	qe_ic = devm_kzalloc(dev, sizeof(*qe_ic), GFP_KERNEL);
+ 	if (qe_ic == NULL)
+-		return;
++		return -ENOMEM;
+ 
+-	qe_ic->irqhost = irq_domain_add_linear(node, NR_QE_IC_INTS,
+-					       &qe_ic_host_ops, qe_ic);
+-	if (qe_ic->irqhost == NULL) {
+-		kfree(qe_ic);
+-		return;
++	qe_ic->regs = devm_ioremap(dev, res->start, resource_size(res));
++	if (qe_ic->regs == NULL) {
++		dev_err(dev, "failed to ioremap() registers\n");
++		return -ENODEV;
+ 	}
+ 
+-	qe_ic->regs = ioremap(res.start, resource_size(&res));
+-
+ 	qe_ic->hc_irq = qe_ic_irq_chip;
+ 
+-	qe_ic->virq_high = irq_of_parse_and_map(node, 0);
+-	qe_ic->virq_low = irq_of_parse_and_map(node, 1);
++	qe_ic->virq_high = platform_get_irq(pdev, 0);
++	qe_ic->virq_low = platform_get_irq(pdev, 1);
+ 
+-	if (!qe_ic->virq_low) {
+-		printk(KERN_ERR "Failed to map QE_IC low IRQ\n");
+-		kfree(qe_ic);
+-		return;
+-	}
+-	if (qe_ic->virq_high != qe_ic->virq_low) {
++	if (qe_ic->virq_low <= 0)
++		return -ENODEV;
++
++	if (qe_ic->virq_high > 0 && qe_ic->virq_high != qe_ic->virq_low) {
+ 		low_handler = qe_ic_cascade_low;
+ 		high_handler = qe_ic_cascade_high;
+ 	} else {
+@@ -447,29 +446,42 @@ static void __init qe_ic_init(struct device_node *node)
+ 		high_handler = NULL;
+ 	}
+ 
++	qe_ic->irqhost = irq_domain_add_linear(node, NR_QE_IC_INTS,
++					       &qe_ic_host_ops, qe_ic);
++	if (qe_ic->irqhost == NULL) {
++		dev_err(dev, "failed to add irq domain\n");
++		return -ENODEV;
++	}
++
+ 	qe_ic_write(qe_ic->regs, QEIC_CICR, 0);
+ 
+ 	irq_set_handler_data(qe_ic->virq_low, qe_ic);
+ 	irq_set_chained_handler(qe_ic->virq_low, low_handler);
+ 
+-	if (qe_ic->virq_high && qe_ic->virq_high != qe_ic->virq_low) {
++	if (high_handler) {
+ 		irq_set_handler_data(qe_ic->virq_high, qe_ic);
+ 		irq_set_chained_handler(qe_ic->virq_high, high_handler);
+ 	}
++	return 0;
+ }
++static const struct of_device_id qe_ic_ids[] = {
++	{ .compatible = "fsl,qe-ic"},
++	{ .type = "qeic"},
++	{},
++};
+ 
+-static int __init qe_ic_of_init(void)
++static struct platform_driver qe_ic_driver =
+ {
+-	struct device_node *np;
++	.driver	= {
++		.name		= "qe-ic",
++		.of_match_table	= qe_ic_ids,
++	},
++	.probe	= qe_ic_init,
++};
+ 
+-	np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic");
+-	if (!np) {
+-		np = of_find_node_by_type(NULL, "qeic");
+-		if (!np)
+-			return -ENODEV;
+-	}
+-	qe_ic_init(np);
+-	of_node_put(np);
++static int __init qe_ic_of_init(void)
++{
++	platform_driver_register(&qe_ic_driver);
+ 	return 0;
+ }
+ subsys_initcall(qe_ic_of_init);
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index d62d69dd72b9d..73d4f0a1558d7 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -325,7 +325,15 @@ static int cqspi_set_protocol(struct cqspi_flash_pdata *f_pdata,
+ 	f_pdata->inst_width = CQSPI_INST_TYPE_SINGLE;
+ 	f_pdata->addr_width = CQSPI_INST_TYPE_SINGLE;
+ 	f_pdata->data_width = CQSPI_INST_TYPE_SINGLE;
+-	f_pdata->dtr = op->data.dtr && op->cmd.dtr && op->addr.dtr;
++
++	/*
++	 * For an op to be DTR, cmd phase along with every other non-empty
++	 * phase should have dtr field set to 1. If an op phase has zero
++	 * nbytes, ignore its dtr field; otherwise, check its dtr field.
++	 */
++	f_pdata->dtr = op->cmd.dtr &&
++		       (!op->addr.nbytes || op->addr.dtr) &&
++		       (!op->data.nbytes || op->data.dtr);
+ 
+ 	switch (op->data.buswidth) {
+ 	case 0:
+@@ -1227,8 +1235,15 @@ static bool cqspi_supports_mem_op(struct spi_mem *mem,
+ {
+ 	bool all_true, all_false;
+ 
+-	all_true = op->cmd.dtr && op->addr.dtr && op->dummy.dtr &&
+-		   op->data.dtr;
++	/*
++	 * op->dummy.dtr is required for converting nbytes into ncycles.
++	 * Also, don't check the dtr field of an op phase that has zero nbytes.
++	 */
++	all_true = op->cmd.dtr &&
++		   (!op->addr.nbytes || op->addr.dtr) &&
++		   (!op->dummy.nbytes || op->dummy.dtr) &&
++		   (!op->data.nbytes || op->data.dtr);
++
+ 	all_false = !op->cmd.dtr && !op->addr.dtr && !op->dummy.dtr &&
+ 		    !op->data.dtr;
+ 
+diff --git a/drivers/spi/spi-mux.c b/drivers/spi/spi-mux.c
+index 37dfc6e828042..9708b7827ff70 100644
+--- a/drivers/spi/spi-mux.c
++++ b/drivers/spi/spi-mux.c
+@@ -167,10 +167,17 @@ err_put_ctlr:
+ 	return ret;
+ }
+ 
++static const struct spi_device_id spi_mux_id[] = {
++	{ "spi-mux" },
++	{ }
++};
++MODULE_DEVICE_TABLE(spi, spi_mux_id);
++
+ static const struct of_device_id spi_mux_of_match[] = {
+ 	{ .compatible = "spi-mux" },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, spi_mux_of_match);
+ 
+ static struct spi_driver spi_mux_driver = {
+ 	.probe  = spi_mux_probe,
+@@ -178,6 +185,7 @@ static struct spi_driver spi_mux_driver = {
+ 		.name   = "spi-mux",
+ 		.of_match_table = spi_mux_of_match,
+ 	},
++	.id_table = spi_mux_id,
+ };
+ 
+ module_spi_driver(spi_mux_driver);
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index 2218941d35a3f..73b60f013b205 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -1133,7 +1133,7 @@ static int do_proc_control(struct usb_dev_state *ps,
+ 		"wIndex=%04x wLength=%04x\n",
+ 		ctrl->bRequestType, ctrl->bRequest, ctrl->wValue,
+ 		ctrl->wIndex, ctrl->wLength);
+-	if (ctrl->bRequestType & 0x80) {
++	if ((ctrl->bRequestType & USB_DIR_IN) && ctrl->wLength) {
+ 		pipe = usb_rcvctrlpipe(dev, 0);
+ 		snoop_urb(dev, NULL, pipe, ctrl->wLength, tmo, SUBMIT, NULL, 0);
+ 
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index 30e9e680c74cc..4d59d927ae3e3 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -783,6 +783,9 @@ int usb_get_descriptor(struct usb_device *dev, unsigned char type,
+ 	int i;
+ 	int result;
+ 
++	if (size <= 0)		/* No point in asking for no data */
++		return -EINVAL;
++
+ 	memset(buf, 0, size);	/* Make sure we parse really received data */
+ 
+ 	for (i = 0; i < 3; ++i) {
+@@ -832,6 +835,9 @@ static int usb_get_string(struct usb_device *dev, unsigned short langid,
+ 	int i;
+ 	int result;
+ 
++	if (size <= 0)		/* No point in asking for no data */
++		return -EINVAL;
++
+ 	for (i = 0; i < 3; ++i) {
+ 		/* retry on length 0 or stall; some devices are flakey */
+ 		result = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 426e37a1e78c5..1b886d80ba1cc 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -1709,6 +1709,10 @@ static int tcpm_pd_svdm(struct tcpm_port *port, struct typec_altmode *adev,
+ 	return rlen;
+ }
+ 
++static void tcpm_pd_handle_msg(struct tcpm_port *port,
++			       enum pd_msg_request message,
++			       enum tcpm_ams ams);
++
+ static void tcpm_handle_vdm_request(struct tcpm_port *port,
+ 				    const __le32 *payload, int cnt)
+ {
+@@ -1736,11 +1740,11 @@ static void tcpm_handle_vdm_request(struct tcpm_port *port,
+ 		port->vdm_state = VDM_STATE_DONE;
+ 	}
+ 
+-	if (PD_VDO_SVDM(p[0])) {
++	if (PD_VDO_SVDM(p[0]) && (adev || tcpm_vdm_ams(port) || port->nr_snk_vdo)) {
+ 		rlen = tcpm_pd_svdm(port, adev, p, cnt, response, &adev_action);
+ 	} else {
+ 		if (port->negotiated_rev >= PD_REV30)
+-			tcpm_queue_message(port, PD_MSG_CTRL_NOT_SUPP);
++			tcpm_pd_handle_msg(port, PD_MSG_CTRL_NOT_SUPP, NONE_AMS);
+ 	}
+ 
+ 	/*
+@@ -2443,10 +2447,7 @@ static void tcpm_pd_data_request(struct tcpm_port *port,
+ 					   NONE_AMS);
+ 		break;
+ 	case PD_DATA_VENDOR_DEF:
+-		if (tcpm_vdm_ams(port) || port->nr_snk_vdo)
+-			tcpm_handle_vdm_request(port, msg->payload, cnt);
+-		else if (port->negotiated_rev > PD_REV20)
+-			tcpm_pd_handle_msg(port, PD_MSG_CTRL_NOT_SUPP, NONE_AMS);
++		tcpm_handle_vdm_request(port, msg->payload, cnt);
+ 		break;
+ 	case PD_DATA_BIST:
+ 		port->bist_request = le32_to_cpu(msg->payload[0]);
+diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
+index ab0ab5cf0f6e1..1c6cd5276a50e 100644
+--- a/drivers/vdpa/ifcvf/ifcvf_main.c
++++ b/drivers/vdpa/ifcvf/ifcvf_main.c
+@@ -477,9 +477,9 @@ static int ifcvf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	adapter = vdpa_alloc_device(struct ifcvf_adapter, vdpa,
+ 				    dev, &ifc_vdpa_ops, NULL);
+-	if (adapter == NULL) {
++	if (IS_ERR(adapter)) {
+ 		IFCVF_ERR(pdev, "Failed to allocate vDPA structure");
+-		return -ENOMEM;
++		return PTR_ERR(adapter);
+ 	}
+ 
+ 	pci_set_master(pdev);
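
This hunk, like the vdpa_sim and vp_vdpa ones further down, adapts a caller to an allocator that now returns an ERR_PTR-encoded errno instead of NULL on failure. A userspace sketch of that kernel convention, with simplified re-implementations of the err.h helpers (the allocator here is a made-up stand-in for vdpa_alloc_device()):

    #include <assert.h>
    #include <errno.h>

    #define MAX_ERRNO 4095

    /* Simplified versions of the kernel's err.h helpers: an errno is
     * stored in the last page's worth of pointer values. */
    static inline void *ERR_PTR(long error) { return (void *)error; }
    static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
    static inline int IS_ERR(const void *ptr)
    {
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
    }

    /* Hypothetical allocator standing in for vdpa_alloc_device(). */
    static void *alloc_device(int fail)
    {
        static int dummy;

        return fail ? ERR_PTR(-ENOMEM) : (void *)&dummy;
    }

    int main(void)
    {
        void *dev = alloc_device(1);

        /* A NULL check would miss this failure entirely. */
        assert(dev != NULL && IS_ERR(dev) && PTR_ERR(dev) == -ENOMEM);
        assert(!IS_ERR(alloc_device(0)));
        return 0;
    }
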
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index cfa56a58b271b..0ca39ef20c6d6 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -454,11 +454,6 @@ out:
+ 	mutex_unlock(&mr->mkey_mtx);
+ }
+ 
+-static bool map_empty(struct vhost_iotlb *iotlb)
+-{
+-	return !vhost_iotlb_itree_first(iotlb, 0, U64_MAX);
+-}
+-
+ int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
+ 			     bool *change_map)
+ {
+@@ -466,10 +461,6 @@ int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *io
+ 	int err = 0;
+ 
+ 	*change_map = false;
+-	if (map_empty(iotlb)) {
+-		mlx5_vdpa_destroy_mr(mvdev);
+-		return 0;
+-	}
+ 	mutex_lock(&mr->mkey_mtx);
+ 	if (mr->initialized) {
+ 		mlx5_vdpa_info(mvdev, "memory map update\n");
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index f3495386698a2..103d8f70df8e1 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -752,12 +752,12 @@ static int get_queue_type(struct mlx5_vdpa_net *ndev)
+ 	type_mask = MLX5_CAP_DEV_VDPA_EMULATION(ndev->mvdev.mdev, virtio_queue_type);
+ 
+ 	/* prefer split queue */
+-	if (type_mask & MLX5_VIRTIO_EMULATION_CAP_VIRTIO_QUEUE_TYPE_PACKED)
+-		return MLX5_VIRTIO_EMULATION_VIRTIO_QUEUE_TYPE_PACKED;
++	if (type_mask & MLX5_VIRTIO_EMULATION_CAP_VIRTIO_QUEUE_TYPE_SPLIT)
++		return MLX5_VIRTIO_EMULATION_VIRTIO_QUEUE_TYPE_SPLIT;
+ 
+-	WARN_ON(!(type_mask & MLX5_VIRTIO_EMULATION_CAP_VIRTIO_QUEUE_TYPE_SPLIT));
++	WARN_ON(!(type_mask & MLX5_VIRTIO_EMULATION_CAP_VIRTIO_QUEUE_TYPE_PACKED));
+ 
+-	return MLX5_VIRTIO_EMULATION_VIRTIO_QUEUE_TYPE_SPLIT;
++	return MLX5_VIRTIO_EMULATION_VIRTIO_QUEUE_TYPE_PACKED;
+ }
+ 
+ static bool vq_is_tx(u16 idx)
+@@ -2010,6 +2010,12 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name)
+ 		return -ENOSPC;
+ 
+ 	mdev = mgtdev->madev->mdev;
++	if (!(MLX5_CAP_DEV_VDPA_EMULATION(mdev, virtio_queue_type) &
++	    MLX5_VIRTIO_EMULATION_CAP_VIRTIO_QUEUE_TYPE_SPLIT)) {
++		dev_warn(mdev->device, "missing support for split virtqueues\n");
++		return -EOPNOTSUPP;
++	}
++
+ 	/* we save one virtqueue for control virtqueue should we require it */
+ 	max_vqs = MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues);
+ 	max_vqs = min_t(u32, max_vqs, MLX5_MAX_SUPPORTED_VQS);
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index 98f793bc93765..26337ba424458 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -251,8 +251,10 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
+ 
+ 	vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops,
+ 				    dev_attr->name);
+-	if (!vdpasim)
++	if (IS_ERR(vdpasim)) {
++		ret = PTR_ERR(vdpasim);
+ 		goto err_alloc;
++	}
+ 
+ 	vdpasim->dev_attr = *dev_attr;
+ 	INIT_WORK(&vdpasim->work, dev_attr->work_fn);
+diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
+index 9145e06245658..54b313e4e63f5 100644
+--- a/drivers/vdpa/virtio_pci/vp_vdpa.c
++++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
+@@ -400,9 +400,9 @@ static int vp_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
+ 				    dev, &vp_vdpa_ops, NULL);
+-	if (vp_vdpa == NULL) {
++	if (IS_ERR(vp_vdpa)) {
+ 		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
+-		return -ENOMEM;
++		return PTR_ERR(vp_vdpa);
+ 	}
+ 
+ 	mdev = &vp_vdpa->mdev;
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index fb41db3da611b..b5201bedf93fa 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -614,7 +614,8 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
+ 	long pinned;
+ 	int ret = 0;
+ 
+-	if (msg->iova < v->range.first ||
++	if (msg->iova < v->range.first || !msg->size ||
++	    msg->iova > U64_MAX - msg->size + 1 ||
+ 	    msg->iova + msg->size - 1 > v->range.last)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 5ccb0705beae1..f41463ab4031d 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -735,10 +735,16 @@ static bool log_access_ok(void __user *log_base, u64 addr, unsigned long sz)
+ 			 (sz + VHOST_PAGE_SIZE * 8 - 1) / VHOST_PAGE_SIZE / 8);
+ }
+ 
++/* Make sure 64 bit math will not overflow. */
+ static bool vhost_overflow(u64 uaddr, u64 size)
+ {
+-	/* Make sure 64 bit math will not overflow. */
+-	return uaddr > ULONG_MAX || size > ULONG_MAX || uaddr > ULONG_MAX - size;
++	if (uaddr > ULONG_MAX || size > ULONG_MAX)
++		return true;
++
++	if (!size)
++		return false;
++
++	return uaddr > ULONG_MAX - size + 1;
+ }
+ 
+ /* Caller should have vq mutex and device mutex. */
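
The rewritten vhost_overflow() above fixes an off-by-one: the valid range ends at uaddr + size - 1, and that expression would wrap for size == 0, hence the new early return. A standalone sketch of the same arithmetic:

    #include <assert.h>
    #include <limits.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* The corrected check: [uaddr, uaddr + size - 1] must fit in an
     * unsigned long. With size == 0 the "- 1" would wrap, so empty
     * ranges are answered first and can never overflow. */
    static bool range_overflows(uint64_t uaddr, uint64_t size)
    {
        if (uaddr > ULONG_MAX || size > ULONG_MAX)
            return true;
        if (!size)
            return false;
        return uaddr > ULONG_MAX - size + 1;
    }

    int main(void)
    {
        assert(!range_overflows(ULONG_MAX, 0)); /* empty range */
        assert(!range_overflows(1, ULONG_MAX)); /* ends exactly at ULONG_MAX */
        assert(range_overflows(2, ULONG_MAX));  /* last byte would wrap */
        return 0;
    }
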
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index 4b15c00c0a0af..49984d2cba246 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -355,6 +355,7 @@ int register_virtio_device(struct virtio_device *dev)
+ 	virtio_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE);
+ 
+ 	INIT_LIST_HEAD(&dev->vqs);
++	spin_lock_init(&dev->vqs_list_lock);
+ 
+ 	/*
+ 	 * device_add() causes the bus infrastructure to look for a matching
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 71e16b53e9c18..6b7aa26c53844 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1668,7 +1668,9 @@ static struct virtqueue *vring_create_virtqueue_packed(
+ 			cpu_to_le16(vq->packed.event_flags_shadow);
+ 	}
+ 
++	spin_lock(&vdev->vqs_list_lock);
+ 	list_add_tail(&vq->vq.list, &vdev->vqs);
++	spin_unlock(&vdev->vqs_list_lock);
+ 	return &vq->vq;
+ 
+ err_desc_extra:
+@@ -2126,7 +2128,9 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
+ 	memset(vq->split.desc_state, 0, vring.num *
+ 			sizeof(struct vring_desc_state_split));
+ 
++	spin_lock(&vdev->vqs_list_lock);
+ 	list_add_tail(&vq->vq.list, &vdev->vqs);
++	spin_unlock(&vdev->vqs_list_lock);
+ 	return &vq->vq;
+ }
+ EXPORT_SYMBOL_GPL(__vring_new_virtqueue);
+@@ -2210,7 +2214,9 @@ void vring_del_virtqueue(struct virtqueue *_vq)
+ 	}
+ 	if (!vq->packed_ring)
+ 		kfree(vq->split.desc_state);
++	spin_lock(&vq->vq.vdev->vqs_list_lock);
+ 	list_del(&_vq->list);
++	spin_unlock(&vq->vq.vdev->vqs_list_lock);
+ 	kfree(vq);
+ }
+ EXPORT_SYMBOL_GPL(vring_del_virtqueue);
+@@ -2274,10 +2280,12 @@ void virtio_break_device(struct virtio_device *dev)
+ {
+ 	struct virtqueue *_vq;
+ 
++	spin_lock(&dev->vqs_list_lock);
+ 	list_for_each_entry(_vq, &dev->vqs, list) {
+ 		struct vring_virtqueue *vq = to_vvq(_vq);
+ 		vq->broken = true;
+ 	}
++	spin_unlock(&dev->vqs_list_lock);
+ }
+ EXPORT_SYMBOL_GPL(virtio_break_device);
+ 
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 272eff4441bcf..5d188d5af6fa7 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -9191,8 +9191,14 @@ static int btrfs_rename_exchange(struct inode *old_dir,
+ 	bool dest_log_pinned = false;
+ 	bool need_abort = false;
+ 
+-	/* we only allow rename subvolume link between subvolumes */
+-	if (old_ino != BTRFS_FIRST_FREE_OBJECTID && root != dest)
++	/*
++	 * For non-subvolumes allow exchange only within one subvolume, in the
++	 * same inode namespace. Two subvolumes (represented as directory) can
++	 * be exchanged as they're a logical link and have a fixed inode number.
++	 */
++	if (root != dest &&
++	    (old_ino != BTRFS_FIRST_FREE_OBJECTID ||
++	     new_ino != BTRFS_FIRST_FREE_OBJECTID))
+ 		return -EXDEV;
+ 
+ 	/* close the race window with snapshot create/destroy ioctl */
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index f23ff39f7697e..9df82eee440a3 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -165,7 +165,7 @@ struct io_rings {
+ 	 * Written by the application, shouldn't be modified by the
+ 	 * kernel.
+ 	 */
+-	u32                     cq_flags;
++	u32			cq_flags;
+ 	/*
+ 	 * Number of completion events lost because the queue was full;
+ 	 * this should be avoided by the application by making sure
+@@ -850,7 +850,7 @@ struct io_kiocb {
+ 	struct hlist_node		hash_node;
+ 	struct async_poll		*apoll;
+ 	struct io_wq_work		work;
+-	const struct cred 		*creds;
++	const struct cred		*creds;
+ 
+ 	/* store used ubuf, so we can prevent reloading */
+ 	struct io_mapped_ubuf		*imu;
+@@ -1482,7 +1482,8 @@ static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
+ 	if (all_flushed) {
+ 		clear_bit(0, &ctx->sq_check_overflow);
+ 		clear_bit(0, &ctx->cq_check_overflow);
+-		ctx->rings->sq_flags &= ~IORING_SQ_CQ_OVERFLOW;
++		WRITE_ONCE(ctx->rings->sq_flags,
++			   ctx->rings->sq_flags & ~IORING_SQ_CQ_OVERFLOW);
+ 	}
+ 
+ 	if (posted)
+@@ -1562,7 +1563,9 @@ static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
+ 	if (list_empty(&ctx->cq_overflow_list)) {
+ 		set_bit(0, &ctx->sq_check_overflow);
+ 		set_bit(0, &ctx->cq_check_overflow);
+-		ctx->rings->sq_flags |= IORING_SQ_CQ_OVERFLOW;
++		WRITE_ONCE(ctx->rings->sq_flags,
++			   ctx->rings->sq_flags | IORING_SQ_CQ_OVERFLOW);
++
+ 	}
+ 	ocqe->cqe.user_data = user_data;
+ 	ocqe->cqe.res = res;
+@@ -1718,7 +1721,7 @@ static struct io_kiocb *io_alloc_req(struct io_ring_ctx *ctx)
+ {
+ 	struct io_submit_state *state = &ctx->submit_state;
+ 
+-	BUILD_BUG_ON(IO_REQ_ALLOC_BATCH > ARRAY_SIZE(state->reqs));
++	BUILD_BUG_ON(ARRAY_SIZE(state->reqs) < IO_REQ_ALLOC_BATCH);
+ 
+ 	if (!state->free_reqs) {
+ 		gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
+@@ -2775,7 +2778,7 @@ static void kiocb_done(struct kiocb *kiocb, ssize_t ret,
+ 	else
+ 		io_rw_done(kiocb, ret);
+ 
+-	if (check_reissue && req->flags & REQ_F_REISSUE) {
++	if (check_reissue && (req->flags & REQ_F_REISSUE)) {
+ 		req->flags &= ~REQ_F_REISSUE;
+ 		if (io_resubmit_prep(req)) {
+ 			req_ref_get(req);
+@@ -3605,7 +3608,7 @@ static int io_shutdown(struct io_kiocb *req, unsigned int issue_flags)
+ static int __io_splice_prep(struct io_kiocb *req,
+ 			    const struct io_uring_sqe *sqe)
+ {
+-	struct io_splice* sp = &req->splice;
++	struct io_splice *sp = &req->splice;
+ 	unsigned int valid_flags = SPLICE_F_FD_IN_FIXED | SPLICE_F_ALL;
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+@@ -3659,7 +3662,7 @@ static int io_tee(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ static int io_splice_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+-	struct io_splice* sp = &req->splice;
++	struct io_splice *sp = &req->splice;
+ 
+ 	sp->off_in = READ_ONCE(sqe->splice_off_in);
+ 	sp->off_out = READ_ONCE(sqe->off);
+@@ -6790,14 +6793,16 @@ static inline void io_ring_set_wakeup_flag(struct io_ring_ctx *ctx)
+ {
+ 	/* Tell userspace we may need a wakeup call */
+ 	spin_lock_irq(&ctx->completion_lock);
+-	ctx->rings->sq_flags |= IORING_SQ_NEED_WAKEUP;
++	WRITE_ONCE(ctx->rings->sq_flags,
++		   ctx->rings->sq_flags | IORING_SQ_NEED_WAKEUP);
+ 	spin_unlock_irq(&ctx->completion_lock);
+ }
+ 
+ static inline void io_ring_clear_wakeup_flag(struct io_ring_ctx *ctx)
+ {
+ 	spin_lock_irq(&ctx->completion_lock);
+-	ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
++	WRITE_ONCE(ctx->rings->sq_flags,
++		   ctx->rings->sq_flags & ~IORING_SQ_NEED_WAKEUP);
+ 	spin_unlock_irq(&ctx->completion_lock);
+ }
+ 
+@@ -8558,6 +8563,7 @@ static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg)
+ 	ctx->cq_ev_fd = eventfd_ctx_fdget(fd);
+ 	if (IS_ERR(ctx->cq_ev_fd)) {
+ 		int ret = PTR_ERR(ctx->cq_ev_fd);
++
+ 		ctx->cq_ev_fd = NULL;
+ 		return ret;
+ 	}
+@@ -9356,8 +9362,8 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
+ 	if (ctx->flags & IORING_SETUP_SQPOLL) {
+ 		io_cqring_overflow_flush(ctx, false);
+ 
+-		ret = -EOWNERDEAD;
+ 		if (unlikely(ctx->sq_data->thread == NULL)) {
++			ret = -EOWNERDEAD;
+ 			goto out;
+ 		}
+ 		if (flags & IORING_ENTER_SQ_WAKEUP)
+@@ -9829,10 +9835,11 @@ static int io_register_personality(struct io_ring_ctx *ctx)
+ 
+ 	ret = xa_alloc_cyclic(&ctx->personalities, &id, (void *)creds,
+ 			XA_LIMIT(0, USHRT_MAX), &ctx->pers_next, GFP_KERNEL);
+-	if (!ret)
+-		return id;
+-	put_cred(creds);
+-	return ret;
++	if (ret < 0) {
++		put_cred(creds);
++		return ret;
++	}
++	return id;
+ }
+ 
+ static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
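
Several of the io_uring hunks above wrap updates of rings->sq_flags in WRITE_ONCE(). That field lives in memory shared with userspace, so the store must be a single untorn write; the read-modify-write itself can stay plain because, as the surrounding hunks show, writers are serialized by completion_lock. A rough C11 analog in which a relaxed atomic store plays the role of WRITE_ONCE():

    #include <assert.h>
    #include <stdatomic.h>
    #include <stdint.h>

    #define SQ_NEED_WAKEUP (1u << 0)

    /* Read concurrently by another thread; updated by one locked writer. */
    static _Atomic uint32_t sq_flags;

    static void set_wakeup_flag(void)
    {
        /* A plain read is fine under the writer-side lock; the store
         * must be untorn, which the relaxed atomic store guarantees. */
        uint32_t old = atomic_load_explicit(&sq_flags, memory_order_relaxed);

        atomic_store_explicit(&sq_flags, old | SQ_NEED_WAKEUP,
                              memory_order_relaxed);
    }

    int main(void)
    {
        set_wakeup_flag();
        assert(atomic_load_explicit(&sq_flags, memory_order_relaxed) &
               SQ_NEED_WAKEUP);
        return 0;
    }
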
+diff --git a/fs/namespace.c b/fs/namespace.c
+index caad091fb204d..03770bae9dd50 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1716,8 +1716,12 @@ static inline bool may_mount(void)
+ }
+ 
+ #ifdef	CONFIG_MANDATORY_FILE_LOCKING
+-static inline bool may_mandlock(void)
++static bool may_mandlock(void)
+ {
++	pr_warn_once("======================================================\n"
++		     "WARNING: the mand mount option is being deprecated and\n"
++		     "         will be removed in v5.15!\n"
++		     "======================================================\n");
+ 	return capable(CAP_SYS_ADMIN);
+ }
+ #else
+diff --git a/include/linux/kfence.h b/include/linux/kfence.h
+index a70d1ea035325..3fe6dd8a18c19 100644
+--- a/include/linux/kfence.h
++++ b/include/linux/kfence.h
+@@ -51,10 +51,11 @@ extern atomic_t kfence_allocation_gate;
+ static __always_inline bool is_kfence_address(const void *addr)
+ {
+ 	/*
+-	 * The non-NULL check is required in case the __kfence_pool pointer was
+-	 * never initialized; keep it in the slow-path after the range-check.
++	 * The __kfence_pool != NULL check is required to deal with the case
++	 * where __kfence_pool == NULL && addr < KFENCE_POOL_SIZE. Keep it in
++	 * the slow-path after the range-check!
+ 	 */
+-	return unlikely((unsigned long)((char *)addr - __kfence_pool) < KFENCE_POOL_SIZE && addr);
++	return unlikely((unsigned long)((char *)addr - __kfence_pool) < KFENCE_POOL_SIZE && __kfence_pool);
+ }
+ 
+ /**
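
The kfence fix above replaces the "&& addr" non-NULL test with "&& __kfence_pool": with an uninitialized (NULL) pool, any pointer numerically below KFENCE_POOL_SIZE would pass the single-comparison range check. A small sketch of the idiom, using integer arithmetic to stay within standard C and a made-up pool size:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define POOL_SIZE 4096 /* hypothetical pool size */

    static char *pool; /* NULL until the pool is set up */

    /* One unsigned compare covers both bounds; the trailing "&& pool"
     * is the fixed guard: without it, a NULL pool makes every small
     * address (addr - 0 < POOL_SIZE) look like a pool address. */
    static bool in_pool(const void *addr)
    {
        return ((uintptr_t)addr - (uintptr_t)pool) < POOL_SIZE && pool;
    }

    int main(void)
    {
        static char backing[POOL_SIZE];

        assert(!in_pool((void *)64)); /* pool still NULL */
        pool = backing;
        assert(in_pool(backing + 8));
        assert(!in_pool(backing + POOL_SIZE)); /* one past the end */
        return 0;
    }
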
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index c193be7607090..63f751faa5c16 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -613,12 +613,15 @@ static inline bool mem_cgroup_disabled(void)
+ 	return !cgroup_subsys_enabled(memory_cgrp_subsys);
+ }
+ 
+-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *root,
+-						  struct mem_cgroup *memcg,
+-						  bool in_low_reclaim)
++static inline void mem_cgroup_protection(struct mem_cgroup *root,
++					 struct mem_cgroup *memcg,
++					 unsigned long *min,
++					 unsigned long *low)
+ {
++	*min = *low = 0;
++
+ 	if (mem_cgroup_disabled())
+-		return 0;
++		return;
+ 
+ 	/*
+ 	 * There is no reclaim protection applied to a targeted reclaim.
+@@ -654,13 +657,10 @@ static inline unsigned long mem_cgroup_protection(struct mem_cgroup *root,
+ 	 *
+ 	 */
+ 	if (root == memcg)
+-		return 0;
+-
+-	if (in_low_reclaim)
+-		return READ_ONCE(memcg->memory.emin);
++		return;
+ 
+-	return max(READ_ONCE(memcg->memory.emin),
+-		   READ_ONCE(memcg->memory.elow));
++	*min = READ_ONCE(memcg->memory.emin);
++	*low = READ_ONCE(memcg->memory.elow);
+ }
+ 
+ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
+@@ -1165,11 +1165,12 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
+ {
+ }
+ 
+-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *root,
+-						  struct mem_cgroup *memcg,
+-						  bool in_low_reclaim)
++static inline void mem_cgroup_protection(struct mem_cgroup *root,
++					 struct mem_cgroup *memcg,
++					 unsigned long *min,
++					 unsigned long *low)
+ {
+-	return 0;
++	*min = *low = 0;
+ }
+ 
+ static inline void mem_cgroup_calculate_protection(struct mem_cgroup *root,
+diff --git a/include/linux/mlx5/mlx5_ifc_vdpa.h b/include/linux/mlx5/mlx5_ifc_vdpa.h
+index 98b56b75c625b..1a9c9d94cb59f 100644
+--- a/include/linux/mlx5/mlx5_ifc_vdpa.h
++++ b/include/linux/mlx5/mlx5_ifc_vdpa.h
+@@ -11,13 +11,15 @@ enum {
+ };
+ 
+ enum {
+-	MLX5_VIRTIO_EMULATION_CAP_VIRTIO_QUEUE_TYPE_SPLIT   = 0x1, // do I check this caps?
+-	MLX5_VIRTIO_EMULATION_CAP_VIRTIO_QUEUE_TYPE_PACKED  = 0x2,
++	MLX5_VIRTIO_EMULATION_VIRTIO_QUEUE_TYPE_SPLIT   = 0,
++	MLX5_VIRTIO_EMULATION_VIRTIO_QUEUE_TYPE_PACKED  = 1,
+ };
+ 
+ enum {
+-	MLX5_VIRTIO_EMULATION_VIRTIO_QUEUE_TYPE_SPLIT   = 0,
+-	MLX5_VIRTIO_EMULATION_VIRTIO_QUEUE_TYPE_PACKED  = 1,
++	MLX5_VIRTIO_EMULATION_CAP_VIRTIO_QUEUE_TYPE_SPLIT =
++		BIT(MLX5_VIRTIO_EMULATION_VIRTIO_QUEUE_TYPE_SPLIT),
++	MLX5_VIRTIO_EMULATION_CAP_VIRTIO_QUEUE_TYPE_PACKED =
++		BIT(MLX5_VIRTIO_EMULATION_VIRTIO_QUEUE_TYPE_PACKED),
+ };
+ 
+ struct mlx5_ifc_virtio_q_bits {
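
The reshuffled mlx5 enums above derive each capability bit from the corresponding queue-type value via BIT(), so the firmware-reported type_mask and the value used to program a queue can no longer drift apart. The idea in isolation, with shortened names:

    #include <assert.h>

    #define BIT(n) (1u << (n))

    enum { QUEUE_TYPE_SPLIT = 0, QUEUE_TYPE_PACKED = 1 };

    /* Capability bits defined in terms of the values they advertise. */
    enum {
        CAP_QUEUE_TYPE_SPLIT  = BIT(QUEUE_TYPE_SPLIT),
        CAP_QUEUE_TYPE_PACKED = BIT(QUEUE_TYPE_PACKED),
    };

    int main(void)
    {
        unsigned int type_mask = CAP_QUEUE_TYPE_SPLIT; /* split-only HW */

        assert(type_mask & BIT(QUEUE_TYPE_SPLIT));
        assert(!(type_mask & BIT(QUEUE_TYPE_PACKED)));
        return 0;
    }
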
+diff --git a/include/linux/virtio.h b/include/linux/virtio.h
+index b1894e0323fae..41edbc01ffa40 100644
+--- a/include/linux/virtio.h
++++ b/include/linux/virtio.h
+@@ -110,6 +110,7 @@ struct virtio_device {
+ 	bool config_enabled;
+ 	bool config_change_pending;
+ 	spinlock_t config_lock;
++	spinlock_t vqs_list_lock; /* Protects VQs list access */
+ 	struct device dev;
+ 	struct virtio_device_id id;
+ 	const struct virtio_config_ops *config;
+diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
+index 69c9eabf83252..dc5c1e69cd9f2 100644
+--- a/include/net/flow_offload.h
++++ b/include/net/flow_offload.h
+@@ -319,14 +319,12 @@ flow_action_mixed_hw_stats_check(const struct flow_action *action,
+ 	if (flow_offload_has_one_action(action))
+ 		return true;
+ 
+-	if (action) {
+-		flow_action_for_each(i, action_entry, action) {
+-			if (i && action_entry->hw_stats != last_hw_stats) {
+-				NL_SET_ERR_MSG_MOD(extack, "Mixing HW stats types for actions is not supported");
+-				return false;
+-			}
+-			last_hw_stats = action_entry->hw_stats;
++	flow_action_for_each(i, action_entry, action) {
++		if (i && action_entry->hw_stats != last_hw_stats) {
++			NL_SET_ERR_MSG_MOD(extack, "Mixing HW stats types for actions is not supported");
++			return false;
+ 		}
++		last_hw_stats = action_entry->hw_stats;
+ 	}
+ 	return true;
+ }
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index eab48745231fb..0fbe7ef6b1553 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -11632,6 +11632,7 @@ static void sanitize_dead_code(struct bpf_verifier_env *env)
+ 		if (aux_data[i].seen)
+ 			continue;
+ 		memcpy(insn + i, &trap, sizeof(trap));
++		aux_data[i].zext_dst = false;
+ 	}
+ }
+ 
+diff --git a/kernel/cfi.c b/kernel/cfi.c
+index e17a56639766b..9594cfd1cf2cf 100644
+--- a/kernel/cfi.c
++++ b/kernel/cfi.c
+@@ -248,9 +248,9 @@ static inline cfi_check_fn find_shadow_check_fn(unsigned long ptr)
+ {
+ 	cfi_check_fn fn;
+ 
+-	rcu_read_lock_sched();
++	rcu_read_lock_sched_notrace();
+ 	fn = ptr_to_check_fn(rcu_dereference_sched(cfi_shadow), ptr);
+-	rcu_read_unlock_sched();
++	rcu_read_unlock_sched_notrace();
+ 
+ 	return fn;
+ }
+@@ -269,11 +269,11 @@ static inline cfi_check_fn find_module_check_fn(unsigned long ptr)
+ 	cfi_check_fn fn = NULL;
+ 	struct module *mod;
+ 
+-	rcu_read_lock_sched();
++	rcu_read_lock_sched_notrace();
+ 	mod = __module_address(ptr);
+ 	if (mod)
+ 		fn = mod->cfi_check;
+-	rcu_read_unlock_sched();
++	rcu_read_unlock_sched_notrace();
+ 
+ 	return fn;
+ }
+diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
+index 7fa82778c3e64..682334e018dd3 100644
+--- a/kernel/trace/Kconfig
++++ b/kernel/trace/Kconfig
+@@ -219,6 +219,11 @@ config DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ 	depends on DYNAMIC_FTRACE_WITH_REGS
+ 	depends on HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ 
++config DYNAMIC_FTRACE_WITH_ARGS
++	def_bool y
++	depends on DYNAMIC_FTRACE
++	depends on HAVE_DYNAMIC_FTRACE_WITH_ARGS
++
+ config FUNCTION_PROFILER
+ 	bool "Kernel function profiler"
+ 	depends on FUNCTION_TRACER
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 018067e379f2b..fa617a0a9eed0 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2853,14 +2853,26 @@ int tracepoint_printk_sysctl(struct ctl_table *table, int write,
+ 
+ void trace_event_buffer_commit(struct trace_event_buffer *fbuffer)
+ {
++	enum event_trigger_type tt = ETT_NONE;
++	struct trace_event_file *file = fbuffer->trace_file;
++
++	if (__event_trigger_test_discard(file, fbuffer->buffer, fbuffer->event,
++			fbuffer->entry, &tt))
++		goto discard;
++
+ 	if (static_key_false(&tracepoint_printk_key.key))
+ 		output_printk(fbuffer);
+ 
+ 	if (static_branch_unlikely(&trace_event_exports_enabled))
+ 		ftrace_exports(fbuffer->event, TRACE_EXPORT_EVENT);
+-	event_trigger_unlock_commit_regs(fbuffer->trace_file, fbuffer->buffer,
+-				    fbuffer->event, fbuffer->entry,
+-				    fbuffer->trace_ctx, fbuffer->regs);
++
++	trace_buffer_unlock_commit_regs(file->tr, fbuffer->buffer,
++			fbuffer->event, fbuffer->trace_ctx, fbuffer->regs);
++
++discard:
++	if (tt)
++		event_triggers_post_call(file, tt);
++
+ }
+ EXPORT_SYMBOL_GPL(trace_event_buffer_commit);
+ 
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index cd80d046c7a5d..1b60ecf853915 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -1391,38 +1391,6 @@ event_trigger_unlock_commit(struct trace_event_file *file,
+ 		event_triggers_post_call(file, tt);
+ }
+ 
+-/**
+- * event_trigger_unlock_commit_regs - handle triggers and finish event commit
+- * @file: The file pointer associated with the event
+- * @buffer: The ring buffer that the event is being written to
+- * @event: The event meta data in the ring buffer
+- * @entry: The event itself
+- * @trace_ctx: The tracing context flags.
+- *
+- * This is a helper function to handle triggers that require data
+- * from the event itself. It also tests the event against filters and
+- * if the event is soft disabled and should be discarded.
+- *
+- * Same as event_trigger_unlock_commit() but calls
+- * trace_buffer_unlock_commit_regs() instead of trace_buffer_unlock_commit().
+- */
+-static inline void
+-event_trigger_unlock_commit_regs(struct trace_event_file *file,
+-				 struct trace_buffer *buffer,
+-				 struct ring_buffer_event *event,
+-				 void *entry, unsigned int trace_ctx,
+-				 struct pt_regs *regs)
+-{
+-	enum event_trigger_type tt = ETT_NONE;
+-
+-	if (!__event_trigger_test_discard(file, buffer, event, entry, &tt))
+-		trace_buffer_unlock_commit_regs(file->tr, buffer, event,
+-						trace_ctx, regs);
+-
+-	if (tt)
+-		event_triggers_post_call(file, tt);
+-}
+-
+ #define FILTER_PRED_INVALID	((unsigned short)-1)
+ #define FILTER_PRED_IS_RIGHT	(1 << 15)
+ #define FILTER_PRED_FOLD	(1 << 15)
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index c59793ffd59ce..4a2e1d360437f 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -3430,6 +3430,8 @@ trace_action_create_field_var(struct hist_trigger_data *hist_data,
+ 			event = data->match_data.event;
+ 		}
+ 
++		if (!event)
++			goto free;
+ 		/*
+ 		 * At this point, we're looking at a field on another
+ 		 * event.  Because we can't modify a hist trigger on
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 8363f737d5adb..6ad419e7e0a4c 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -2286,7 +2286,7 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
+ 		if (!rc) {
+ 			/*
+ 			 * This indicates there is an entry in the reserve map
+-			 * added by alloc_huge_page.  We know it was added
++			 * not added by alloc_huge_page.  We know it was added
+ 			 * before the alloc_huge_page call, otherwise
+ 			 * HPageRestoreReserve would be set on the page.
+ 			 * Remove the entry so that a subsequent allocation
+@@ -4465,7 +4465,9 @@ retry_avoidcopy:
+ 	spin_unlock(ptl);
+ 	mmu_notifier_invalidate_range_end(&range);
+ out_release_all:
+-	restore_reserve_on_error(h, vma, haddr, new_page);
++	/* No restore in case of successful pagetable update (Break COW) */
++	if (new_page != old_page)
++		restore_reserve_on_error(h, vma, haddr, new_page);
+ 	put_page(new_page);
+ out_release_old:
+ 	put_page(old_page);
+@@ -4581,7 +4583,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
+ 	pte_t new_pte;
+ 	spinlock_t *ptl;
+ 	unsigned long haddr = address & huge_page_mask(h);
+-	bool new_page = false;
++	bool new_page, new_pagecache_page = false;
+ 
+ 	/*
+ 	 * Currently, we are forced to kill the process in the event the
+@@ -4604,6 +4606,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
+ 		goto out;
+ 
+ retry:
++	new_page = false;
+ 	page = find_lock_page(mapping, idx);
+ 	if (!page) {
+ 		/* Check for page in userfault range */
+@@ -4647,6 +4650,7 @@ retry:
+ 					goto retry;
+ 				goto out;
+ 			}
++			new_pagecache_page = true;
+ 		} else {
+ 			lock_page(page);
+ 			if (unlikely(anon_vma_prepare(vma))) {
+@@ -4731,7 +4735,9 @@ backout:
+ 	spin_unlock(ptl);
+ backout_unlocked:
+ 	unlock_page(page);
+-	restore_reserve_on_error(h, vma, haddr, page);
++	/* restore reserve for newly allocated pages not in page cache */
++	if (new_page && !new_pagecache_page)
++		restore_reserve_on_error(h, vma, haddr, page);
+ 	put_page(page);
+ 	goto out;
+ }
+@@ -4940,6 +4946,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
+ 	int ret;
+ 	struct page *page;
+ 	int writable;
++	bool new_pagecache_page = false;
+ 
+ 	mapping = dst_vma->vm_file->f_mapping;
+ 	idx = vma_hugecache_offset(h, dst_vma, dst_addr);
+@@ -5004,6 +5011,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
+ 		ret = huge_add_to_page_cache(page, mapping, idx);
+ 		if (ret)
+ 			goto out_release_nounlock;
++		new_pagecache_page = true;
+ 	}
+ 
+ 	ptl = huge_pte_lockptr(h, dst_mm, dst_pte);
+@@ -5067,7 +5075,8 @@ out_release_unlock:
+ 	if (vm_shared || is_continue)
+ 		unlock_page(page);
+ out_release_nounlock:
+-	restore_reserve_on_error(h, dst_vma, dst_addr, page);
++	if (!new_pagecache_page)
++		restore_reserve_on_error(h, dst_vma, dst_addr, page);
+ 	put_page(page);
+ 	goto out;
+ }
+@@ -5930,6 +5939,8 @@ int get_hwpoison_huge_page(struct page *page, bool *hugetlb)
+ 		*hugetlb = true;
+ 		if (HPageFreed(page) || HPageMigratable(page))
+ 			ret = get_page_unless_zero(page);
++		else
++			ret = -EBUSY;
+ 	}
+ 	spin_unlock_irq(&hugetlb_lock);
+ 	return ret;
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 6f5f78885ab42..624763fdecc58 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -974,13 +974,6 @@ static inline bool HWPoisonHandlable(struct page *page)
+ 	return PageLRU(page) || __PageMovable(page);
+ }
+ 
+-/**
+- * __get_hwpoison_page() - Get refcount for memory error handling:
+- * @page:	raw error page (hit by memory error)
+- *
+- * Return: return 0 if failed to grab the refcount, otherwise true (some
+- * non-zero value.)
+- */
+ static int __get_hwpoison_page(struct page *page)
+ {
+ 	struct page *head = compound_head(page);
+@@ -997,7 +990,7 @@ static int __get_hwpoison_page(struct page *page)
+ 	 * unexpected races caused by taking a page refcount.
+ 	 */
+ 	if (!HWPoisonHandlable(head))
+-		return 0;
++		return -EBUSY;
+ 
+ 	if (PageTransHuge(head)) {
+ 		/*
+@@ -1025,15 +1018,6 @@ static int __get_hwpoison_page(struct page *page)
+ 	return 0;
+ }
+ 
+-/*
+- * Safely get reference count of an arbitrary page.
+- *
+- * Returns 0 for a free page, 1 for an in-use page,
+- * -EIO for a page-type we cannot handle and -EBUSY if we raced with an
+- * allocation.
+- * We only incremented refcount in case the page was already in-use and it
+- * is a known type we can handle.
+- */
+ static int get_any_page(struct page *p, unsigned long flags)
+ {
+ 	int ret = 0, pass = 0;
+@@ -1043,50 +1027,83 @@ static int get_any_page(struct page *p, unsigned long flags)
+ 		count_increased = true;
+ 
+ try_again:
+-	if (!count_increased && !__get_hwpoison_page(p)) {
+-		if (page_count(p)) {
+-			/* We raced with an allocation, retry. */
+-			if (pass++ < 3)
+-				goto try_again;
+-			ret = -EBUSY;
+-		} else if (!PageHuge(p) && !is_free_buddy_page(p)) {
+-			/* We raced with put_page, retry. */
+-			if (pass++ < 3)
+-				goto try_again;
+-			ret = -EIO;
+-		}
+-	} else {
+-		if (PageHuge(p) || HWPoisonHandlable(p)) {
+-			ret = 1;
+-		} else {
++	if (!count_increased) {
++		ret = __get_hwpoison_page(p);
++		if (!ret) {
++			if (page_count(p)) {
++				/* We raced with an allocation, retry. */
++				if (pass++ < 3)
++					goto try_again;
++				ret = -EBUSY;
++			} else if (!PageHuge(p) && !is_free_buddy_page(p)) {
++				/* We raced with put_page, retry. */
++				if (pass++ < 3)
++					goto try_again;
++				ret = -EIO;
++			}
++			goto out;
++		} else if (ret == -EBUSY) {
+ 			/*
+-			 * A page we cannot handle. Check whether we can turn
+-			 * it into something we can handle.
++			 * We raced with (possibly temporary) unhandlable
++			 * page, retry.
+ 			 */
+ 			if (pass++ < 3) {
+-				put_page(p);
+ 				shake_page(p, 1);
+-				count_increased = false;
+ 				goto try_again;
+ 			}
+-			put_page(p);
+ 			ret = -EIO;
++			goto out;
+ 		}
+ 	}
+ 
++	if (PageHuge(p) || HWPoisonHandlable(p)) {
++		ret = 1;
++	} else {
++		/*
++		 * A page we cannot handle. Check whether we can turn
++		 * it into something we can handle.
++		 */
++		if (pass++ < 3) {
++			put_page(p);
++			shake_page(p, 1);
++			count_increased = false;
++			goto try_again;
++		}
++		put_page(p);
++		ret = -EIO;
++	}
++out:
+ 	return ret;
+ }
+ 
+-static int get_hwpoison_page(struct page *p, unsigned long flags,
+-			     enum mf_flags ctxt)
++/**
++ * get_hwpoison_page() - Get refcount for memory error handling
++ * @p:		Raw error page (hit by memory error)
++ * @flags:	Flags controlling behavior of error handling
++ *
++ * get_hwpoison_page() takes a page refcount of an error page to handle memory
++ * error on it, after checking that the error page is in a well-defined state
++ * (defined as a page-type we can successfully handle the memory error on it,
++ * such as LRU page and hugetlb page).
++ *
++ * Memory error handling could be triggered at any time on any type of page,
++ * so it's prone to race with typical memory management lifecycle (like
++ * allocation and free).  So to avoid such races, get_hwpoison_page() takes
++ * extra care for the error page's state (as done in __get_hwpoison_page()),
++ * and has some retry logic in get_any_page().
++ *
++ * Return: 0 on failure,
++ *         1 on success for in-use pages in a well-defined state,
++ *         -EIO for pages on which we can not handle memory errors,
++ *         -EBUSY when get_hwpoison_page() has raced with page lifecycle
++ *         operations like allocation and free.
++ */
++static int get_hwpoison_page(struct page *p, unsigned long flags)
+ {
+ 	int ret;
+ 
+ 	zone_pcp_disable(page_zone(p));
+-	if (ctxt == MF_SOFT_OFFLINE)
+-		ret = get_any_page(p, flags);
+-	else
+-		ret = __get_hwpoison_page(p);
++	ret = get_any_page(p, flags);
+ 	zone_pcp_enable(page_zone(p));
+ 
+ 	return ret;
+@@ -1272,27 +1289,33 @@ static int memory_failure_hugetlb(unsigned long pfn, int flags)
+ 
+ 	num_poisoned_pages_inc();
+ 
+-	if (!(flags & MF_COUNT_INCREASED) && !get_hwpoison_page(p, flags, 0)) {
+-		/*
+-		 * Check "filter hit" and "race with other subpage."
+-		 */
+-		lock_page(head);
+-		if (PageHWPoison(head)) {
+-			if ((hwpoison_filter(p) && TestClearPageHWPoison(p))
+-			    || (p != head && TestSetPageHWPoison(head))) {
+-				num_poisoned_pages_dec();
+-				unlock_page(head);
+-				return 0;
++	if (!(flags & MF_COUNT_INCREASED)) {
++		res = get_hwpoison_page(p, flags);
++		if (!res) {
++			/*
++			 * Check "filter hit" and "race with other subpage."
++			 */
++			lock_page(head);
++			if (PageHWPoison(head)) {
++				if ((hwpoison_filter(p) && TestClearPageHWPoison(p))
++				    || (p != head && TestSetPageHWPoison(head))) {
++					num_poisoned_pages_dec();
++					unlock_page(head);
++					return 0;
++				}
+ 			}
++			unlock_page(head);
++			res = MF_FAILED;
++			if (!dissolve_free_huge_page(p) && take_page_off_buddy(p)) {
++				page_ref_inc(p);
++				res = MF_RECOVERED;
++			}
++			action_result(pfn, MF_MSG_FREE_HUGE, res);
++			return res == MF_RECOVERED ? 0 : -EBUSY;
++		} else if (res < 0) {
++			action_result(pfn, MF_MSG_UNKNOWN, MF_IGNORED);
++			return -EBUSY;
+ 		}
+-		unlock_page(head);
+-		res = MF_FAILED;
+-		if (!dissolve_free_huge_page(p) && take_page_off_buddy(p)) {
+-			page_ref_inc(p);
+-			res = MF_RECOVERED;
+-		}
+-		action_result(pfn, MF_MSG_FREE_HUGE, res);
+-		return res == MF_RECOVERED ? 0 : -EBUSY;
+ 	}
+ 
+ 	lock_page(head);
+@@ -1493,28 +1516,35 @@ try_again:
+ 	 * In fact it's dangerous to directly bump up page count from 0,
+ 	 * that may make page_ref_freeze()/page_ref_unfreeze() mismatch.
+ 	 */
+-	if (!(flags & MF_COUNT_INCREASED) && !get_hwpoison_page(p, flags, 0)) {
+-		if (is_free_buddy_page(p)) {
+-			if (take_page_off_buddy(p)) {
+-				page_ref_inc(p);
+-				res = MF_RECOVERED;
+-			} else {
+-				/* We lost the race, try again */
+-				if (retry) {
+-					ClearPageHWPoison(p);
+-					num_poisoned_pages_dec();
+-					retry = false;
+-					goto try_again;
++	if (!(flags & MF_COUNT_INCREASED)) {
++		res = get_hwpoison_page(p, flags);
++		if (!res) {
++			if (is_free_buddy_page(p)) {
++				if (take_page_off_buddy(p)) {
++					page_ref_inc(p);
++					res = MF_RECOVERED;
++				} else {
++					/* We lost the race, try again */
++					if (retry) {
++						ClearPageHWPoison(p);
++						num_poisoned_pages_dec();
++						retry = false;
++						goto try_again;
++					}
++					res = MF_FAILED;
+ 				}
+-				res = MF_FAILED;
++				action_result(pfn, MF_MSG_BUDDY, res);
++				res = res == MF_RECOVERED ? 0 : -EBUSY;
++			} else {
++				action_result(pfn, MF_MSG_KERNEL_HIGH_ORDER, MF_IGNORED);
++				res = -EBUSY;
+ 			}
+-			action_result(pfn, MF_MSG_BUDDY, res);
+-			res = res == MF_RECOVERED ? 0 : -EBUSY;
+-		} else {
+-			action_result(pfn, MF_MSG_KERNEL_HIGH_ORDER, MF_IGNORED);
++			goto unlock_mutex;
++		} else if (res < 0) {
++			action_result(pfn, MF_MSG_UNKNOWN, MF_IGNORED);
+ 			res = -EBUSY;
++			goto unlock_mutex;
+ 		}
+-		goto unlock_mutex;
+ 	}
+ 
+ 	if (PageTransHuge(hpage)) {
+@@ -1792,7 +1822,7 @@ int unpoison_memory(unsigned long pfn)
+ 		return 0;
+ 	}
+ 
+-	if (!get_hwpoison_page(p, flags, 0)) {
++	if (!get_hwpoison_page(p, flags)) {
+ 		if (TestClearPageHWPoison(p))
+ 			num_poisoned_pages_dec();
+ 		unpoison_pr_info("Unpoison: Software-unpoisoned free page %#lx\n",
+@@ -2008,7 +2038,7 @@ int soft_offline_page(unsigned long pfn, int flags)
+ 
+ retry:
+ 	get_online_mems();
+-	ret = get_hwpoison_page(page, flags, MF_SOFT_OFFLINE);
++	ret = get_hwpoison_page(page, flags);
+ 	put_online_mems();
+ 
+ 	if (ret > 0) {
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 5199b9696babf..f62d81f61b56b 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -100,9 +100,12 @@ struct scan_control {
+ 	unsigned int may_swap:1;
+ 
+ 	/*
+-	 * Cgroups are not reclaimed below their configured memory.low,
+-	 * unless we threaten to OOM. If any cgroups are skipped due to
+-	 * memory.low and nothing was reclaimed, go back for memory.low.
++	 * Cgroup memory below memory.low is protected as long as we
++	 * don't threaten to OOM. If any cgroup is reclaimed at
++	 * reduced force or passed over entirely due to its memory.low
++	 * setting (memcg_low_skipped), and nothing is reclaimed as a
++	 * result, then go back for one more cycle that reclaims the protected
++	 * memory (memcg_low_reclaim) to avert OOM.
+ 	 */
+ 	unsigned int memcg_low_reclaim:1;
+ 	unsigned int memcg_low_skipped:1;
+@@ -2521,15 +2524,14 @@ out:
+ 	for_each_evictable_lru(lru) {
+ 		int file = is_file_lru(lru);
+ 		unsigned long lruvec_size;
++		unsigned long low, min;
+ 		unsigned long scan;
+-		unsigned long protection;
+ 
+ 		lruvec_size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
+-		protection = mem_cgroup_protection(sc->target_mem_cgroup,
+-						   memcg,
+-						   sc->memcg_low_reclaim);
++		mem_cgroup_protection(sc->target_mem_cgroup, memcg,
++				      &min, &low);
+ 
+-		if (protection) {
++		if (min || low) {
+ 			/*
+ 			 * Scale a cgroup's reclaim pressure by proportioning
+ 			 * its current usage to its memory.low or memory.min
+@@ -2560,6 +2562,15 @@ out:
+ 			 * hard protection.
+ 			 */
+ 			unsigned long cgroup_size = mem_cgroup_size(memcg);
++			unsigned long protection;
++
++			/* memory.low scaling, make sure we retry before OOM */
++			if (!sc->memcg_low_reclaim && low > min) {
++				protection = low;
++				sc->memcg_low_skipped = 1;
++			} else {
++				protection = min;
++			}
+ 
+ 			/* Avoid TOCTOU with earlier protection check */
+ 			cgroup_size = max(cgroup_size, protection);
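
Together, the memcontrol.h and vmscan.c hunks move the min/low decision from callee to caller: mem_cgroup_protection() now reports both limits, and the reclaim loop itself picks low on the first pass (setting memcg_low_skipped so a later pass can retry harder before OOM) or falls back to min. A sketch of that caller-side policy with made-up values and a stubbed-out API:

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for the new out-parameter API. */
    static void get_protection(unsigned long *min, unsigned long *low)
    {
        *min = 100; /* made-up memory.min */
        *low = 400; /* made-up memory.low */
    }

    int main(void)
    {
        unsigned long min, low, protection;
        bool low_reclaim = false; /* first pass: honor memory.low */
        bool low_skipped = false;

        get_protection(&min, &low);
        if (!low_reclaim && low > min) {
            protection = low;
            low_skipped = true; /* remember: retry before declaring OOM */
        } else {
            protection = min;
        }
        printf("protection=%lu skipped=%d\n", protection, low_skipped);
        return 0;
    }
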
+diff --git a/net/dccp/dccp.h b/net/dccp/dccp.h
+index 9cc9d1ee6cdb9..c5c1d2b8045e8 100644
+--- a/net/dccp/dccp.h
++++ b/net/dccp/dccp.h
+@@ -41,9 +41,9 @@ extern bool dccp_debug;
+ #define dccp_pr_debug_cat(format, a...)   DCCP_PRINTK(dccp_debug, format, ##a)
+ #define dccp_debug(fmt, a...)		  dccp_pr_debug_cat(KERN_DEBUG fmt, ##a)
+ #else
+-#define dccp_pr_debug(format, a...)
+-#define dccp_pr_debug_cat(format, a...)
+-#define dccp_debug(format, a...)
++#define dccp_pr_debug(format, a...)	  do {} while (0)
++#define dccp_pr_debug_cat(format, a...)	  do {} while (0)
++#define dccp_debug(format, a...)	  do {} while (0)
+ #endif
+ 
+ extern struct inet_hashinfo dccp_hashinfo;
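
The dccp change above gives the disabled debug macros a do {} while (0) body. An empty expansion leaves only the caller's stray ';', which trips -Wempty-body and means enabled and disabled builds parse the call site differently; the do/while form is always exactly one statement. For instance:

    #include <stdio.h>

    #define dbg_empty(fmt, ...)                /* expands to nothing */
    #define dbg_safe(fmt, ...) do {} while (0) /* one full statement */

    int main(void)
    {
        int cond = 1;

        if (cond)
            dbg_safe("debug %d", cond); /* consumes the ';' cleanly */
        else
            printf("never\n");

        /* The same construct with dbg_empty() compiles down to
         * "if (cond) ; else ...": a bare null statement that
         * -Wempty-body flags and that readers easily misparse. */
        if (cond)
            dbg_empty("debug %d", cond);
        else
            printf("never\n");

        return 0;
    }
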
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 2481bfdfafd09..efe5c3295455f 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -260,6 +260,8 @@ static void ieee80211_restart_work(struct work_struct *work)
+ 	flush_work(&local->radar_detected_work);
+ 
+ 	rtnl_lock();
++	/* we might do interface manipulations, so need both */
++	wiphy_lock(local->hw.wiphy);
+ 
+ 	WARN(test_bit(SCAN_HW_SCANNING, &local->scanning),
+ 	     "%s called with hardware scan in progress\n", __func__);
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 4f08e04e1ab71..f3ec857797332 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -843,20 +843,16 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
+ 		return subflow->mp_capable;
+ 	}
+ 
+-	if (mp_opt->dss && mp_opt->use_ack) {
++	if ((mp_opt->dss && mp_opt->use_ack) ||
++	    (mp_opt->add_addr && !mp_opt->echo)) {
+ 		/* subflows are fully established as soon as we get any
+-		 * additional ack.
++		 * additional ack, including ADD_ADDR.
+ 		 */
+ 		subflow->fully_established = 1;
+ 		WRITE_ONCE(msk->fully_established, true);
+ 		goto fully_established;
+ 	}
+ 
+-	if (mp_opt->add_addr) {
+-		WRITE_ONCE(msk->fully_established, true);
+-		return true;
+-	}
+-
+ 	/* If the first established packet does not contain MP_CAPABLE + data
+ 	 * then fallback to TCP. Fallback scenarios requires a reset for
+ 	 * MP_JOIN subflows.
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index fce1d057d19eb..45b414efc0012 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -1135,36 +1135,12 @@ next:
+ 	return 0;
+ }
+ 
+-struct addr_entry_release_work {
+-	struct rcu_work	rwork;
+-	struct mptcp_pm_addr_entry *entry;
+-};
+-
+-static void mptcp_pm_release_addr_entry(struct work_struct *work)
++/* caller must ensure the RCU grace period is already elapsed */
++static void __mptcp_pm_release_addr_entry(struct mptcp_pm_addr_entry *entry)
+ {
+-	struct addr_entry_release_work *w;
+-	struct mptcp_pm_addr_entry *entry;
+-
+-	w = container_of(to_rcu_work(work), struct addr_entry_release_work, rwork);
+-	entry = w->entry;
+-	if (entry) {
+-		if (entry->lsk)
+-			sock_release(entry->lsk);
+-		kfree(entry);
+-	}
+-	kfree(w);
+-}
+-
+-static void mptcp_pm_free_addr_entry(struct mptcp_pm_addr_entry *entry)
+-{
+-	struct addr_entry_release_work *w;
+-
+-	w = kmalloc(sizeof(*w), GFP_ATOMIC);
+-	if (w) {
+-		INIT_RCU_WORK(&w->rwork, mptcp_pm_release_addr_entry);
+-		w->entry = entry;
+-		queue_rcu_work(system_wq, &w->rwork);
+-	}
++	if (entry->lsk)
++		sock_release(entry->lsk);
++	kfree(entry);
+ }
+ 
+ static int mptcp_nl_remove_id_zero_address(struct net *net,
+@@ -1244,7 +1220,8 @@ static int mptcp_nl_cmd_del_addr(struct sk_buff *skb, struct genl_info *info)
+ 	spin_unlock_bh(&pernet->lock);
+ 
+ 	mptcp_nl_remove_subflow_and_signal_addr(sock_net(skb->sk), &entry->addr);
+-	mptcp_pm_free_addr_entry(entry);
++	synchronize_rcu();
++	__mptcp_pm_release_addr_entry(entry);
+ 
+ 	return ret;
+ }
+@@ -1297,6 +1274,7 @@ static void mptcp_nl_remove_addrs_list(struct net *net,
+ 	}
+ }
+ 
++/* caller must ensure the RCU grace period is already elapsed */
+ static void __flush_addrs(struct list_head *list)
+ {
+ 	while (!list_empty(list)) {
+@@ -1305,7 +1283,7 @@ static void __flush_addrs(struct list_head *list)
+ 		cur = list_entry(list->next,
+ 				 struct mptcp_pm_addr_entry, list);
+ 		list_del_rcu(&cur->list);
+-		mptcp_pm_free_addr_entry(cur);
++		__mptcp_pm_release_addr_entry(cur);
+ 	}
+ }
+ 
+@@ -1329,6 +1307,7 @@ static int mptcp_nl_cmd_flush_addrs(struct sk_buff *skb, struct genl_info *info)
+ 	bitmap_zero(pernet->id_bitmap, MAX_ADDR_ID + 1);
+ 	spin_unlock_bh(&pernet->lock);
+ 	mptcp_nl_remove_addrs_list(sock_net(skb->sk), &free_list);
++	synchronize_rcu();
+ 	__flush_addrs(&free_list);
+ 	return 0;
+ }
+@@ -1936,7 +1915,8 @@ static void __net_exit pm_nl_exit_net(struct list_head *net_list)
+ 		struct pm_nl_pernet *pernet = net_generic(net, pm_nl_pernet_id);
+ 
+ 		/* net is removed from namespace list, can't race with
+-		 * other modifiers
++		 * other modifiers, also netns core already waited for a
++		 * RCU grace period.
+ 		 */
+ 		__flush_addrs(&pernet->local_addr_list);
+ 	}
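
The pm_netlink.c rework above drops the rcu_work indirection: the caller now unlinks the entry, waits out the grace period with synchronize_rcu(), and only then releases the socket and frees the memory, all in process context where sock_release() may block. The required ordering, with stubbed-out primitives (the stubs only illustrate sequencing, not the real mechanics):

    #include <stdio.h>
    #include <stdlib.h>

    struct addr_entry { int id; };

    static void list_del_rcu_stub(struct addr_entry *e)
    {
        printf("unlinked %d; new readers cannot find it\n", e->id);
    }

    static void synchronize_rcu_stub(void)
    {
        printf("waited: all pre-existing readers have finished\n");
    }

    static void release_entry(struct addr_entry *e)
    {
        list_del_rcu_stub(e);   /* 1: make it unreachable */
        synchronize_rcu_stub(); /* 2: wait out the grace period */
        free(e);                /* 3: nobody can still hold a pointer */
    }

    int main(void)
    {
        struct addr_entry *e = malloc(sizeof(*e));

        e->id = 1;
        release_entry(e);
        return 0;
    }
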
+diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c
+index 88deb5b41429f..cf2ce58124896 100644
+--- a/net/openvswitch/vport.c
++++ b/net/openvswitch/vport.c
+@@ -507,6 +507,7 @@ void ovs_vport_send(struct vport *vport, struct sk_buff *skb, u8 mac_proto)
+ 	}
+ 
+ 	skb->dev = vport->dev;
++	skb->tstamp = 0;
+ 	vport->ops->send(skb);
+ 	return;
+ 
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index 951542843cab2..28af8b1e1bb1f 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -720,7 +720,7 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ skip_hash:
+ 	if (flow_override)
+ 		flow_hash = flow_override - 1;
+-	else if (use_skbhash)
++	else if (use_skbhash && (flow_mode & CAKE_FLOW_FLOWS))
+ 		flow_hash = skb->hash;
+ 	if (host_override) {
+ 		dsthost_hash = host_override - 1;
+diff --git a/net/xfrm/xfrm_ipcomp.c b/net/xfrm/xfrm_ipcomp.c
+index 2e8afe078d612..cb40ff0ff28da 100644
+--- a/net/xfrm/xfrm_ipcomp.c
++++ b/net/xfrm/xfrm_ipcomp.c
+@@ -241,7 +241,7 @@ static void ipcomp_free_tfms(struct crypto_comp * __percpu *tfms)
+ 			break;
+ 	}
+ 
+-	WARN_ON(!pos);
++	WARN_ON(list_entry_is_head(pos, &ipcomp_tfms_list, list));
+ 
+ 	if (--pos->users)
+ 		return;
+diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
+index 1f8018f9ce57a..534c0df75172f 100644
+--- a/sound/pci/hda/hda_generic.c
++++ b/sound/pci/hda/hda_generic.c
+@@ -3460,7 +3460,7 @@ static int cap_put_caller(struct snd_kcontrol *kcontrol,
+ 	struct hda_gen_spec *spec = codec->spec;
+ 	const struct hda_input_mux *imux;
+ 	struct nid_path *path;
+-	int i, adc_idx, err = 0;
++	int i, adc_idx, ret, err = 0;
+ 
+ 	imux = &spec->input_mux;
+ 	adc_idx = kcontrol->id.index;
+@@ -3470,9 +3470,13 @@ static int cap_put_caller(struct snd_kcontrol *kcontrol,
+ 		if (!path || !path->ctls[type])
+ 			continue;
+ 		kcontrol->private_value = path->ctls[type];
+-		err = func(kcontrol, ucontrol);
+-		if (err < 0)
++		ret = func(kcontrol, ucontrol);
++		if (ret < 0) {
++			err = ret;
+ 			break;
++		}
++		if (ret > 0)
++			err = 1;
+ 	}
+ 	mutex_unlock(&codec->control_mutex);
+ 	if (err >= 0 && spec->cap_sync_hook)
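
The cap_put_caller() fix above separates the per-call result (ret) from the aggregate (err): a later call returning 0 no longer erases an earlier "value changed" result, while the first negative error still aborts the loop. The aggregation pattern on its own:

    #include <assert.h>

    /* Aggregate several callee results: the first error wins;
     * otherwise report 1 if any call changed something, else 0. */
    static int apply_all(const int *results, int n)
    {
        int err = 0;

        for (int i = 0; i < n; i++) {
            int ret = results[i];

            if (ret < 0) {
                err = ret;
                break;
            }
            if (ret > 0)
                err = 1;
        }
        return err;
    }

    int main(void)
    {
        int changed_then_flat[] = { 1, 0, 0 };
        int change_then_error[] = { 1, -5, 1 };

        assert(apply_all(changed_then_flat, 3) == 1); /* change preserved */
        assert(apply_all(change_then_error, 3) == -5); /* error wins */
        return 0;
    }
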
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 6d8c4dedfe0fe..0c6be85098558 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6598,6 +6598,7 @@ enum {
+ 	ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP,
+ 	ALC623_FIXUP_LENOVO_THINKSTATION_P340,
+ 	ALC255_FIXUP_ACER_HEADPHONE_AND_MIC,
++	ALC236_FIXUP_HP_LIMIT_INT_MIC_BOOST,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8182,6 +8183,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC255_FIXUP_XIAOMI_HEADSET_MIC
+ 	},
++	[ALC236_FIXUP_HP_LIMIT_INT_MIC_BOOST] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269_fixup_limit_int_mic_boost,
++		.chained = true,
++		.chain_id = ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8272,6 +8279,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0a2e, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0a30, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0a58, "Dell", ALC255_FIXUP_DELL_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1028, 0x0a61, "Dell XPS 15 9510", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -8377,8 +8385,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x884c, "HP EliteBook 840 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+-	SND_PCI_QUIRK(0x103c, 0x8862, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+-	SND_PCI_QUIRK(0x103c, 0x8863, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++	SND_PCI_QUIRK(0x103c, 0x8862, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_LIMIT_INT_MIC_BOOST),
++	SND_PCI_QUIRK(0x103c, 0x8863, "HP ProBook 445 G8 Notebook PC", ALC236_FIXUP_HP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ 	SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
+index a5c1a2c4eae4e..773a136161f11 100644
+--- a/sound/pci/hda/patch_via.c
++++ b/sound/pci/hda/patch_via.c
+@@ -1041,6 +1041,7 @@ static const struct hda_fixup via_fixups[] = {
+ };
+ 
+ static const struct snd_pci_quirk vt2002p_fixups[] = {
++	SND_PCI_QUIRK(0x1043, 0x13f7, "Asus B23E", VIA_FIXUP_POWER_SAVE),
+ 	SND_PCI_QUIRK(0x1043, 0x1487, "Asus G75", VIA_FIXUP_ASUS_G75),
+ 	SND_PCI_QUIRK(0x1043, 0x8532, "Asus X202E", VIA_FIXUP_INTMIC_BOOST),
+ 	SND_PCI_QUIRK_VENDOR(0x1558, "Clevo", VIA_FIXUP_POWER_SAVE),
+diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+index 5db2f4865bbba..905c7965f6539 100644
+--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
++++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+@@ -127,7 +127,7 @@ static void sst_fill_alloc_params(struct snd_pcm_substream *substream,
+ 	snd_pcm_uframes_t period_size;
+ 	ssize_t periodbytes;
+ 	ssize_t buffer_bytes = snd_pcm_lib_buffer_bytes(substream);
+-	u32 buffer_addr = substream->runtime->dma_addr;
++	u32 buffer_addr = virt_to_phys(substream->runtime->dma_area);
+ 
+ 	channels = substream->runtime->channels;
+ 	period_size = substream->runtime->period_size;



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-08-29 14:48 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-08-29 14:48 UTC (permalink / raw
  To: gentoo-commits

commit:     5b2bc668fd015cecc3f1dbe007087a1705a0672e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 29 14:42:38 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Aug 29 14:42:38 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5b2bc668

Bump BMQ Patch to v5.13-r3

Thanks to Ulenrich for reporting

Bug: https://bugs.gentoo.org/810925

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                                |  2 +-
 ...2.patch => 5020_BMQ-and-PDS-io-scheduler-v5.13-r3.patch | 14 ++++----------
 2 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/0000_README b/0000_README
index 72c942f..05f6b94 100644
--- a/0000_README
+++ b/0000_README
@@ -131,7 +131,7 @@ Patch:  5010_enable-cpu-optimizations-universal.patch
 From:   https://github.com/graysky2/kernel_compiler_patch
 Desc:   Kernel >= 5.8 patch enables gcc = v9+ optimizations for additional CPUs.
 
-Patch:  5020_BMQ-and-PDS-io-scheduler-v5.13-r2.patch
+Patch:  5020_BMQ-and-PDS-io-scheduler-v5.13-r3.patch
 From:   https://gitlab.com/alfredchen/linux-prjc
 Desc:   BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
 

diff --git a/5020_BMQ-and-PDS-io-scheduler-v5.13-r2.patch b/5020_BMQ-and-PDS-io-scheduler-v5.13-r3.patch
similarity index 99%
rename from 5020_BMQ-and-PDS-io-scheduler-v5.13-r2.patch
rename to 5020_BMQ-and-PDS-io-scheduler-v5.13-r3.patch
index 72533b6..3d7e610 100644
--- a/5020_BMQ-and-PDS-io-scheduler-v5.13-r2.patch
+++ b/5020_BMQ-and-PDS-io-scheduler-v5.13-r3.patch
@@ -647,10 +647,10 @@ index 5fc9c9b70862..06b60d612535 100644
  obj-$(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) += cpufreq_schedutil.o
 diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
 new file mode 100644
-index 000000000000..e296d56e85f0
+index 000000000000..b10012b67435
 --- /dev/null
 +++ b/kernel/sched/alt_core.c
-@@ -0,0 +1,7227 @@
+@@ -0,0 +1,7221 @@
 +/*
 + *  kernel/sched/alt_core.c
 + *
@@ -720,7 +720,7 @@ index 000000000000..e296d56e85f0
 +#define sched_feat(x)	(0)
 +#endif /* CONFIG_SCHED_DEBUG */
 +
-+#define ALT_SCHED_VERSION "v5.13-r2"
++#define ALT_SCHED_VERSION "v5.13-r3"
 +
 +/* rt_prio(prio) defined in include/linux/sched/rt.h */
 +#define rt_task(p)		rt_prio((p)->prio)
@@ -3974,15 +3974,9 @@ index 000000000000..e296d56e85f0
 +	struct task_struct *p = current;
 +	unsigned long flags;
 +	int dest_cpu;
-+	struct rq *rq;
 +
 +	raw_spin_lock_irqsave(&p->pi_lock, flags);
-+	rq = this_rq();
-+
-+	if (rq != task_rq(p) || rq->nr_running < 2)
-+		goto unlock;
-+
-+	dest_cpu = select_task_rq(p);
++	dest_cpu = cpumask_any(p->cpus_ptr);
 +	if (dest_cpu == smp_processor_id())
 +		goto unlock;
 +



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-09-03 11:19 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-09-03 11:19 UTC (permalink / raw
  To: gentoo-commits

commit:     41df3682652562cf0830bf376b4688ef4b7d1779
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep  3 11:18:27 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep  3 11:18:27 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=41df3682

Linux patch 5.13.14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1013_linux-5.13.14.patch | 4579 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4583 insertions(+)

diff --git a/0000_README b/0000_README
index 05f6b94..4d2f6e6 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  1012_linux-5.13.13.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.13
 
+Patch:  1013_linux-5.13.14.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.14
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1013_linux-5.13.14.patch b/1013_linux-5.13.14.patch
new file mode 100644
index 0000000..f1ed4f1
--- /dev/null
+++ b/1013_linux-5.13.14.patch
@@ -0,0 +1,4579 @@
+diff --git a/Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml b/Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml
+index 23b2276143667..c92f951b85386 100644
+--- a/Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml
++++ b/Documentation/devicetree/bindings/riscv/sifive-l2-cache.yaml
+@@ -24,10 +24,10 @@ allOf:
+ select:
+   properties:
+     compatible:
+-      items:
+-        - enum:
+-            - sifive,fu540-c000-ccache
+-            - sifive,fu740-c000-ccache
++      contains:
++        enum:
++          - sifive,fu540-c000-ccache
++          - sifive,fu740-c000-ccache
+ 
+   required:
+     - compatible
+diff --git a/Makefile b/Makefile
+index ed4031db38947..de1f9c79e27ab 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arc/kernel/vmlinux.lds.S b/arch/arc/kernel/vmlinux.lds.S
+index e2146a8da1953..529ae50f9fe23 100644
+--- a/arch/arc/kernel/vmlinux.lds.S
++++ b/arch/arc/kernel/vmlinux.lds.S
+@@ -88,6 +88,8 @@ SECTIONS
+ 		CPUIDLE_TEXT
+ 		LOCK_TEXT
+ 		KPROBES_TEXT
++		IRQENTRY_TEXT
++		SOFTIRQENTRY_TEXT
+ 		*(.fixup)
+ 		*(.gnu.warning)
+ 	}
+diff --git a/arch/arm64/boot/dts/qcom/msm8994-angler-rev-101.dts b/arch/arm64/boot/dts/qcom/msm8994-angler-rev-101.dts
+index 801995af3dfcd..c096b7758aa0e 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994-angler-rev-101.dts
++++ b/arch/arm64/boot/dts/qcom/msm8994-angler-rev-101.dts
+@@ -36,3 +36,7 @@
+ 		};
+ 	};
+ };
++
++&tlmm {
++	gpio-reserved-ranges = <85 4>;
++};
+diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
+index 21fa330f498dd..b83fb24954b77 100644
+--- a/arch/arm64/include/asm/el2_setup.h
++++ b/arch/arm64/include/asm/el2_setup.h
+@@ -33,8 +33,7 @@
+  * EL2.
+  */
+ .macro __init_el2_timers
+-	mrs	x0, cnthctl_el2
+-	orr	x0, x0, #3			// Enable EL1 physical timers
++	mov	x0, #3				// Enable EL1 physical timers
+ 	msr	cnthctl_el2, x0
+ 	msr	cntvoff_el2, xzr		// Clear virtual offset
+ .endm
+diff --git a/arch/parisc/include/asm/string.h b/arch/parisc/include/asm/string.h
+index 4a0c9dbd62fd0..f6e1132f4e352 100644
+--- a/arch/parisc/include/asm/string.h
++++ b/arch/parisc/include/asm/string.h
+@@ -8,19 +8,4 @@ extern void * memset(void *, int, size_t);
+ #define __HAVE_ARCH_MEMCPY
+ void * memcpy(void * dest,const void *src,size_t count);
+ 
+-#define __HAVE_ARCH_STRLEN
+-extern size_t strlen(const char *s);
+-
+-#define __HAVE_ARCH_STRCPY
+-extern char *strcpy(char *dest, const char *src);
+-
+-#define __HAVE_ARCH_STRNCPY
+-extern char *strncpy(char *dest, const char *src, size_t count);
+-
+-#define __HAVE_ARCH_STRCAT
+-extern char *strcat(char *dest, const char *src);
+-
+-#define __HAVE_ARCH_MEMSET
+-extern void *memset(void *, int, size_t);
+-
+ #endif
+diff --git a/arch/parisc/kernel/parisc_ksyms.c b/arch/parisc/kernel/parisc_ksyms.c
+index 8ed409ecec933..e8a6a751dfd8e 100644
+--- a/arch/parisc/kernel/parisc_ksyms.c
++++ b/arch/parisc/kernel/parisc_ksyms.c
+@@ -17,10 +17,6 @@
+ 
+ #include <linux/string.h>
+ EXPORT_SYMBOL(memset);
+-EXPORT_SYMBOL(strlen);
+-EXPORT_SYMBOL(strcpy);
+-EXPORT_SYMBOL(strncpy);
+-EXPORT_SYMBOL(strcat);
+ 
+ #include <linux/atomic.h>
+ EXPORT_SYMBOL(__xchg8);
+diff --git a/arch/parisc/lib/Makefile b/arch/parisc/lib/Makefile
+index 2d7a9974dbaef..7b197667faf6c 100644
+--- a/arch/parisc/lib/Makefile
++++ b/arch/parisc/lib/Makefile
+@@ -3,7 +3,7 @@
+ # Makefile for parisc-specific library files
+ #
+ 
+-lib-y	:= lusercopy.o bitops.o checksum.o io.o memcpy.o \
+-	   ucmpdi2.o delay.o string.o
++lib-y	:= lusercopy.o bitops.o checksum.o io.o memset.o memcpy.o \
++	   ucmpdi2.o delay.o
+ 
+ obj-y	:= iomap.o
+diff --git a/arch/parisc/lib/memset.c b/arch/parisc/lib/memset.c
+new file mode 100644
+index 0000000000000..133e4809859a3
+--- /dev/null
++++ b/arch/parisc/lib/memset.c
+@@ -0,0 +1,72 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++#include <linux/types.h>
++#include <asm/string.h>
++
++#define OPSIZ (BITS_PER_LONG/8)
++typedef unsigned long op_t;
++
++void *
++memset (void *dstpp, int sc, size_t len)
++{
++  unsigned int c = sc;
++  long int dstp = (long int) dstpp;
++
++  if (len >= 8)
++    {
++      size_t xlen;
++      op_t cccc;
++
++      cccc = (unsigned char) c;
++      cccc |= cccc << 8;
++      cccc |= cccc << 16;
++      if (OPSIZ > 4)
++	/* Do the shift in two steps to avoid warning if long has 32 bits.  */
++	cccc |= (cccc << 16) << 16;
++
++      /* There are at least some bytes to set.
++	 No need to test for LEN == 0 in this alignment loop.  */
++      while (dstp % OPSIZ != 0)
++	{
++	  ((unsigned char *) dstp)[0] = c;
++	  dstp += 1;
++	  len -= 1;
++	}
++
++      /* Write 8 `op_t' per iteration until less than 8 `op_t' remain.  */
++      xlen = len / (OPSIZ * 8);
++      while (xlen > 0)
++	{
++	  ((op_t *) dstp)[0] = cccc;
++	  ((op_t *) dstp)[1] = cccc;
++	  ((op_t *) dstp)[2] = cccc;
++	  ((op_t *) dstp)[3] = cccc;
++	  ((op_t *) dstp)[4] = cccc;
++	  ((op_t *) dstp)[5] = cccc;
++	  ((op_t *) dstp)[6] = cccc;
++	  ((op_t *) dstp)[7] = cccc;
++	  dstp += 8 * OPSIZ;
++	  xlen -= 1;
++	}
++      len %= OPSIZ * 8;
++
++      /* Write 1 `op_t' per iteration until less than OPSIZ bytes remain.  */
++      xlen = len / OPSIZ;
++      while (xlen > 0)
++	{
++	  ((op_t *) dstp)[0] = cccc;
++	  dstp += OPSIZ;
++	  xlen -= 1;
++	}
++      len %= OPSIZ;
++    }
++
++  /* Write the last few bytes.  */
++  while (len > 0)
++    {
++      ((unsigned char *) dstp)[0] = c;
++      dstp += 1;
++      len -= 1;
++    }
++
++  return dstpp;
++}
+diff --git a/arch/parisc/lib/string.S b/arch/parisc/lib/string.S
+deleted file mode 100644
+index 4a64264427a63..0000000000000
+--- a/arch/parisc/lib/string.S
++++ /dev/null
+@@ -1,136 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- *    PA-RISC assembly string functions
+- *
+- *    Copyright (C) 2019 Helge Deller <deller@gmx.de>
+- */
+-
+-#include <asm/assembly.h>
+-#include <linux/linkage.h>
+-
+-	.section .text.hot
+-	.level PA_ASM_LEVEL
+-
+-	t0 = r20
+-	t1 = r21
+-	t2 = r22
+-
+-ENTRY_CFI(strlen, frame=0,no_calls)
+-	or,COND(<>) arg0,r0,ret0
+-	b,l,n	.Lstrlen_null_ptr,r0
+-	depwi	0,31,2,ret0
+-	cmpb,COND(<>) arg0,ret0,.Lstrlen_not_aligned
+-	ldw,ma	4(ret0),t0
+-	cmpib,tr 0,r0,.Lstrlen_loop
+-	uxor,nbz r0,t0,r0
+-.Lstrlen_not_aligned:
+-	uaddcm	arg0,ret0,t1
+-	shladd	t1,3,r0,t1
+-	mtsar	t1
+-	depwi	-1,%sar,32,t0
+-	uxor,nbz r0,t0,r0
+-.Lstrlen_loop:
+-	b,l,n	.Lstrlen_end_loop,r0
+-	ldw,ma	4(ret0),t0
+-	cmpib,tr 0,r0,.Lstrlen_loop
+-	uxor,nbz r0,t0,r0
+-.Lstrlen_end_loop:
+-	extrw,u,<> t0,7,8,r0
+-	addib,tr,n -3,ret0,.Lstrlen_out
+-	extrw,u,<> t0,15,8,r0
+-	addib,tr,n -2,ret0,.Lstrlen_out
+-	extrw,u,<> t0,23,8,r0
+-	addi	-1,ret0,ret0
+-.Lstrlen_out:
+-	bv r0(rp)
+-	uaddcm ret0,arg0,ret0
+-.Lstrlen_null_ptr:
+-	bv,n r0(rp)
+-ENDPROC_CFI(strlen)
+-
+-
+-ENTRY_CFI(strcpy, frame=0,no_calls)
+-	ldb	0(arg1),t0
+-	stb	t0,0(arg0)
+-	ldo	0(arg0),ret0
+-	ldo	1(arg1),t1
+-	cmpb,=	r0,t0,2f
+-	ldo	1(arg0),t2
+-1:	ldb	0(t1),arg1
+-	stb	arg1,0(t2)
+-	ldo	1(t1),t1
+-	cmpb,<> r0,arg1,1b
+-	ldo	1(t2),t2
+-2:	bv,n	r0(rp)
+-ENDPROC_CFI(strcpy)
+-
+-
+-ENTRY_CFI(strncpy, frame=0,no_calls)
+-	ldb	0(arg1),t0
+-	stb	t0,0(arg0)
+-	ldo	1(arg1),t1
+-	ldo	0(arg0),ret0
+-	cmpb,=	r0,t0,2f
+-	ldo	1(arg0),arg1
+-1:	ldo	-1(arg2),arg2
+-	cmpb,COND(=),n r0,arg2,2f
+-	ldb	0(t1),arg0
+-	stb	arg0,0(arg1)
+-	ldo	1(t1),t1
+-	cmpb,<> r0,arg0,1b
+-	ldo	1(arg1),arg1
+-2:	bv,n	r0(rp)
+-ENDPROC_CFI(strncpy)
+-
+-
+-ENTRY_CFI(strcat, frame=0,no_calls)
+-	ldb	0(arg0),t0
+-	cmpb,=	t0,r0,2f
+-	ldo	0(arg0),ret0
+-	ldo	1(arg0),arg0
+-1:	ldb	0(arg0),t1
+-	cmpb,<>,n r0,t1,1b
+-	ldo	1(arg0),arg0
+-2:	ldb	0(arg1),t2
+-	stb	t2,0(arg0)
+-	ldo	1(arg0),arg0
+-	ldb	0(arg1),t0
+-	cmpb,<>	r0,t0,2b
+-	ldo	1(arg1),arg1
+-	bv,n	r0(rp)
+-ENDPROC_CFI(strcat)
+-
+-
+-ENTRY_CFI(memset, frame=0,no_calls)
+-	copy	arg0,ret0
+-	cmpb,COND(=) r0,arg0,4f
+-	copy	arg0,t2
+-	cmpb,COND(=) r0,arg2,4f
+-	ldo	-1(arg2),arg3
+-	subi	-1,arg3,t0
+-	subi	0,t0,t1
+-	cmpiclr,COND(>=) 0,t1,arg2
+-	ldo	-1(t1),arg2
+-	extru arg2,31,2,arg0
+-2:	stb	arg1,0(t2)
+-	ldo	1(t2),t2
+-	addib,>= -1,arg0,2b
+-	ldo	-1(arg3),arg3
+-	cmpiclr,COND(<=) 4,arg2,r0
+-	b,l,n	4f,r0
+-#ifdef CONFIG_64BIT
+-	depd,*	r0,63,2,arg2
+-#else
+-	depw	r0,31,2,arg2
+-#endif
+-	ldo	1(t2),t2
+-3:	stb	arg1,-1(t2)
+-	stb	arg1,0(t2)
+-	stb	arg1,1(t2)
+-	stb	arg1,2(t2)
+-	addib,COND(>) -4,arg2,3b
+-	ldo	4(t2),t2
+-4:	bv,n	r0(rp)
+-ENDPROC_CFI(memset)
+-
+-	.end
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index f998e655b5706..7450651a75361 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -97,7 +97,7 @@ config PPC_BOOK3S_64
+ 	select PPC_HAVE_PMU_SUPPORT
+ 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
+ 	select ARCH_ENABLE_HUGEPAGE_MIGRATION if HUGETLB_PAGE && MIGRATION
+-	select ARCH_ENABLE_PMD_SPLIT_PTLOCK
++	select ARCH_ENABLE_SPLIT_PMD_PTLOCK
+ 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
+ 	select ARCH_SUPPORTS_HUGETLBFS
+ 	select ARCH_SUPPORTS_NUMA_BALANCING
+diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
+index 1a85305720e84..9c0511119bad9 100644
+--- a/arch/riscv/kernel/ptrace.c
++++ b/arch/riscv/kernel/ptrace.c
+@@ -10,6 +10,7 @@
+ #include <asm/ptrace.h>
+ #include <asm/syscall.h>
+ #include <asm/thread_info.h>
++#include <asm/switch_to.h>
+ #include <linux/audit.h>
+ #include <linux/ptrace.h>
+ #include <linux/elf.h>
+@@ -56,6 +57,9 @@ static int riscv_fpr_get(struct task_struct *target,
+ {
+ 	struct __riscv_d_ext_state *fstate = &target->thread.fstate;
+ 
++	if (target == current)
++		fstate_save(current, task_pt_regs(current));
++
+ 	membuf_write(&to, fstate, offsetof(struct __riscv_d_ext_state, fcsr));
+ 	membuf_store(&to, fstate->fcsr);
+ 	return membuf_zero(&to, 4);	// explicitly pad
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 1f7bb4898a9d2..a8f02c889ae87 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -4701,7 +4701,7 @@ static void __snr_uncore_mmio_init_box(struct intel_uncore_box *box,
+ 		return;
+ 
+ 	pci_read_config_dword(pdev, SNR_IMC_MMIO_BASE_OFFSET, &pci_dword);
+-	addr = (pci_dword & SNR_IMC_MMIO_BASE_MASK) << 23;
++	addr = ((resource_size_t)pci_dword & SNR_IMC_MMIO_BASE_MASK) << 23;
+ 
+ 	pci_read_config_dword(pdev, mem_offset, &pci_dword);
+ 	addr |= (pci_dword & SNR_IMC_MMIO_MEM0_MASK) << 12;
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 5fac3757e6e05..0e56557cacf26 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -3061,19 +3061,19 @@ static ssize_t ioc_weight_write(struct kernfs_open_file *of, char *buf,
+ 		if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
+ 			return -EINVAL;
+ 
+-		spin_lock(&blkcg->lock);
++		spin_lock_irq(&blkcg->lock);
+ 		iocc->dfl_weight = v * WEIGHT_ONE;
+ 		hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
+ 			struct ioc_gq *iocg = blkg_to_iocg(blkg);
+ 
+ 			if (iocg) {
+-				spin_lock_irq(&iocg->ioc->lock);
++				spin_lock(&iocg->ioc->lock);
+ 				ioc_now(iocg->ioc, &now);
+ 				weight_updated(iocg, &now);
+-				spin_unlock_irq(&iocg->ioc->lock);
++				spin_unlock(&iocg->ioc->lock);
+ 			}
+ 		}
+-		spin_unlock(&blkcg->lock);
++		spin_unlock_irq(&blkcg->lock);
+ 
+ 		return nbytes;
+ 	}
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index c732aa581124f..6dfa572ac1fc1 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -923,34 +923,14 @@ static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
+ 	unsigned long *next = priv;
+ 
+ 	/*
+-	 * Just do a quick check if it is expired before locking the request in
+-	 * so we're not unnecessarily synchronizing across CPUs.
+-	 */
+-	if (!blk_mq_req_expired(rq, next))
+-		return true;
+-
+-	/*
+-	 * We have reason to believe the request may be expired. Take a
+-	 * reference on the request to lock this request lifetime into its
+-	 * currently allocated context to prevent it from being reallocated in
+-	 * the event the completion by-passes this timeout handler.
+-	 *
+-	 * If the reference was already released, then the driver beat the
+-	 * timeout handler to posting a natural completion.
+-	 */
+-	if (!refcount_inc_not_zero(&rq->ref))
+-		return true;
+-
+-	/*
+-	 * The request is now locked and cannot be reallocated underneath the
+-	 * timeout handler's processing. Re-verify this exact request is truly
+-	 * expired; if it is not expired, then the request was completed and
+-	 * reallocated as a new request.
++	 * blk_mq_queue_tag_busy_iter() has locked the request, so it cannot
++	 * be reallocated underneath the timeout handler's processing, so
++	 * the expire check is reliable. If the request is not expired, then
++	 * it was completed and reallocated as a new request after returning
++	 * from blk_mq_check_expired().
+ 	 */
+ 	if (blk_mq_req_expired(rq, next))
+ 		blk_mq_rq_timed_out(rq, reserved);
+-
+-	blk_mq_put_rq_ref(rq);
+ 	return true;
+ }
+ 
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 8a9d22207c59c..86f264fd58a5b 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -4029,23 +4029,23 @@ static int floppy_open(struct block_device *bdev, fmode_t mode)
+ 	if (fdc_state[FDC(drive)].rawcmd == 1)
+ 		fdc_state[FDC(drive)].rawcmd = 2;
+ 
+-	if (mode & (FMODE_READ|FMODE_WRITE)) {
+-		drive_state[drive].last_checked = 0;
+-		clear_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags);
+-		if (bdev_check_media_change(bdev))
+-			floppy_revalidate(bdev->bd_disk);
+-		if (test_bit(FD_DISK_CHANGED_BIT, &drive_state[drive].flags))
+-			goto out;
+-		if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags))
++	if (!(mode & FMODE_NDELAY)) {
++		if (mode & (FMODE_READ|FMODE_WRITE)) {
++			drive_state[drive].last_checked = 0;
++			clear_bit(FD_OPEN_SHOULD_FAIL_BIT,
++				  &drive_state[drive].flags);
++			if (bdev_check_media_change(bdev))
++				floppy_revalidate(bdev->bd_disk);
++			if (test_bit(FD_DISK_CHANGED_BIT, &drive_state[drive].flags))
++				goto out;
++			if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags))
++				goto out;
++		}
++		res = -EROFS;
++		if ((mode & FMODE_WRITE) &&
++		    !test_bit(FD_DISK_WRITABLE_BIT, &drive_state[drive].flags))
+ 			goto out;
+ 	}
+-
+-	res = -EROFS;
+-
+-	if ((mode & FMODE_WRITE) &&
+-			!test_bit(FD_DISK_WRITABLE_BIT, &drive_state[drive].flags))
+-		goto out;
+-
+ 	mutex_unlock(&open_lock);
+ 	mutex_unlock(&floppy_mutex);
+ 	return 0;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 6d23308119d16..c6b48160343ab 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -516,6 +516,7 @@ static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
+ #define BTUSB_HW_RESET_ACTIVE	12
+ #define BTUSB_TX_WAIT_VND_EVT	13
+ #define BTUSB_WAKEUP_DISABLE	14
++#define BTUSB_USE_ALT3_FOR_WBS	15
+ 
+ struct btusb_data {
+ 	struct hci_dev       *hdev;
+@@ -1748,16 +1749,20 @@ static void btusb_work(struct work_struct *work)
+ 			/* Bluetooth USB spec recommends alt 6 (63 bytes), but
+ 			 * many adapters do not support it.  Alt 1 appears to
+ 			 * work for all adapters that do not have alt 6, and
+-			 * which work with WBS at all.
++			 * which work with WBS at all.  Some devices prefer
++			 * alt 3 (HCI payloads >= 60 bytes let the air
++			 * packets carry a full 60 bytes of data), requiring
++			 * MTU >= 3 (packets) * 25 (size) - 3 (headers) = 72;
++			 * see also Core spec 5, vol 4, B 2.1.1 & Table 2.1.
+ 			 */
+-			new_alts = btusb_find_altsetting(data, 6) ? 6 : 1;
+-			/* Because mSBC frames do not need to be aligned to the
+-			 * SCO packet boundary. If support the Alt 3, use the
+-			 * Alt 3 for HCI payload >= 60 Bytes let air packet
+-			 * data satisfy 60 bytes.
+-			 */
+-			if (new_alts == 1 && btusb_find_altsetting(data, 3))
++			if (btusb_find_altsetting(data, 6))
++				new_alts = 6;
++			else if (btusb_find_altsetting(data, 3) &&
++				 hdev->sco_mtu >= 72 &&
++				 test_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags))
+ 				new_alts = 3;
++			else
++				new_alts = 1;
+ 		}
+ 
+ 		if (btusb_switch_alt_setting(hdev, new_alts) < 0)
+@@ -4733,6 +4738,7 @@ static int btusb_probe(struct usb_interface *intf,
+ 		 * (DEVICE_REMOTE_WAKEUP)
+ 		 */
+ 		set_bit(BTUSB_WAKEUP_DISABLE, &data->flags);
++		set_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags);
+ 	}
+ 
+ 	if (!reset)
+diff --git a/drivers/clk/renesas/rcar-usb2-clock-sel.c b/drivers/clk/renesas/rcar-usb2-clock-sel.c
+index 9fb79bd794350..684d8937965e0 100644
+--- a/drivers/clk/renesas/rcar-usb2-clock-sel.c
++++ b/drivers/clk/renesas/rcar-usb2-clock-sel.c
+@@ -187,7 +187,7 @@ static int rcar_usb2_clock_sel_probe(struct platform_device *pdev)
+ 	init.ops = &usb2_clock_sel_clock_ops;
+ 	priv->hw.init = &init;
+ 
+-	ret = devm_clk_hw_register(NULL, &priv->hw);
++	ret = devm_clk_hw_register(dev, &priv->hw);
+ 	if (ret)
+ 		goto pm_put;
+ 
+diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
+index 5e07065ec22f7..1f8dc1164ba2a 100644
+--- a/drivers/cpufreq/cpufreq-dt-platdev.c
++++ b/drivers/cpufreq/cpufreq-dt-platdev.c
+@@ -138,6 +138,7 @@ static const struct of_device_id blacklist[] __initconst = {
+ 	{ .compatible = "qcom,qcs404", },
+ 	{ .compatible = "qcom,sc7180", },
+ 	{ .compatible = "qcom,sdm845", },
++	{ .compatible = "qcom,sm8150", },
+ 
+ 	{ .compatible = "st,stih407", },
+ 	{ .compatible = "st,stih410", },
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index b53eab384adb7..38d0811e3c75c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -904,7 +904,7 @@ void amdgpu_acpi_fini(struct amdgpu_device *adev)
+  */
+ bool amdgpu_acpi_is_s0ix_supported(struct amdgpu_device *adev)
+ {
+-#if IS_ENABLED(CONFIG_AMD_PMC) && IS_ENABLED(CONFIG_PM_SLEEP)
++#if IS_ENABLED(CONFIG_AMD_PMC) && IS_ENABLED(CONFIG_SUSPEND)
+ 	if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) {
+ 		if (adev->flags & AMD_IS_APU)
+ 			return pm_suspend_target_state == PM_SUSPEND_TO_IDLE;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index cb3ad1395e13c..8939e98bf0038 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2690,12 +2690,11 @@ static void amdgpu_device_delay_enable_gfx_off(struct work_struct *work)
+ 	struct amdgpu_device *adev =
+ 		container_of(work, struct amdgpu_device, gfx.gfx_off_delay_work.work);
+ 
+-	mutex_lock(&adev->gfx.gfx_off_mutex);
+-	if (!adev->gfx.gfx_off_state && !adev->gfx.gfx_off_req_count) {
+-		if (!amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_GFX, true))
+-			adev->gfx.gfx_off_state = true;
+-	}
+-	mutex_unlock(&adev->gfx.gfx_off_mutex);
++	WARN_ON_ONCE(adev->gfx.gfx_off_state);
++	WARN_ON_ONCE(adev->gfx.gfx_off_req_count);
++
++	if (!amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_GFX, true))
++		adev->gfx.gfx_off_state = true;
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 95d4f43a03df4..571d77ccc9dd0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -563,24 +563,38 @@ void amdgpu_gfx_off_ctrl(struct amdgpu_device *adev, bool enable)
+ 
+ 	mutex_lock(&adev->gfx.gfx_off_mutex);
+ 
+-	if (!enable)
+-		adev->gfx.gfx_off_req_count++;
+-	else if (adev->gfx.gfx_off_req_count > 0)
++	if (enable) {
++		/* If the count is already 0, it means there's an imbalance bug somewhere.
++		 * Note that the bug may be in a different caller than the one which triggers the
++		 * WARN_ON_ONCE.
++		 */
++		if (WARN_ON_ONCE(adev->gfx.gfx_off_req_count == 0))
++			goto unlock;
++
+ 		adev->gfx.gfx_off_req_count--;
+ 
+-	if (enable && !adev->gfx.gfx_off_state && !adev->gfx.gfx_off_req_count) {
+-		schedule_delayed_work(&adev->gfx.gfx_off_delay_work, GFX_OFF_DELAY_ENABLE);
+-	} else if (!enable && adev->gfx.gfx_off_state) {
+-		if (!amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_GFX, false)) {
+-			adev->gfx.gfx_off_state = false;
++		if (adev->gfx.gfx_off_req_count == 0 && !adev->gfx.gfx_off_state)
++			schedule_delayed_work(&adev->gfx.gfx_off_delay_work, GFX_OFF_DELAY_ENABLE);
++	} else {
++		if (adev->gfx.gfx_off_req_count == 0) {
++			cancel_delayed_work_sync(&adev->gfx.gfx_off_delay_work);
++
++			if (adev->gfx.gfx_off_state &&
++			    !amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_GFX, false)) {
++				adev->gfx.gfx_off_state = false;
+ 
+-			if (adev->gfx.funcs->init_spm_golden) {
+-				dev_dbg(adev->dev, "GFXOFF is disabled, re-init SPM golden settings\n");
+-				amdgpu_gfx_init_spm_golden(adev);
++				if (adev->gfx.funcs->init_spm_golden) {
++					dev_dbg(adev->dev,
++						"GFXOFF is disabled, re-init SPM golden settings\n");
++					amdgpu_gfx_init_spm_golden(adev);
++				}
+ 			}
+ 		}
++
++		adev->gfx.gfx_off_req_count++;
+ 	}
+ 
++unlock:
+ 	mutex_unlock(&adev->gfx.gfx_off_mutex);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index db00de33caa32..3933a42f8d811 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -937,11 +937,6 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
+ 			return -EINVAL;
+ 	}
+ 
+-	/* This assumes only APU display buffers are pinned with (VRAM|GTT).
+-	 * See function amdgpu_display_supported_domains()
+-	 */
+-	domain = amdgpu_bo_get_preferred_pin_domain(adev, domain);
+-
+ 	if (bo->tbo.pin_count) {
+ 		uint32_t mem_type = bo->tbo.mem.mem_type;
+ 		uint32_t mem_flags = bo->tbo.mem.placement;
+@@ -966,6 +961,11 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
+ 		return 0;
+ 	}
+ 
++	/* This assumes only APU display buffers are pinned with (VRAM|GTT).
++	 * See function amdgpu_display_supported_domains()
++	 */
++	domain = amdgpu_bo_get_preferred_pin_domain(adev, domain);
++
+ 	if (bo->tbo.base.import_attach)
+ 		dma_buf_pin(bo->tbo.base.import_attach);
+ 
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+index 31c61ac3bd5e1..cc6f19a48dea6 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+@@ -5123,6 +5123,13 @@ static int vega10_get_power_profile_mode(struct pp_hwmgr *hwmgr, char *buf)
+ 	return size;
+ }
+ 
++static bool vega10_get_power_profile_mode_quirks(struct pp_hwmgr *hwmgr)
++{
++	struct amdgpu_device *adev = hwmgr->adev;
++
++	return (adev->pdev->device == 0x6860);
++}
++
+ static int vega10_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, uint32_t size)
+ {
+ 	struct vega10_hwmgr *data = hwmgr->backend;
+@@ -5159,9 +5166,15 @@ static int vega10_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, ui
+ 	}
+ 
+ out:
+-	smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_SetWorkloadMask,
++	if (vega10_get_power_profile_mode_quirks(hwmgr))
++		smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_SetWorkloadMask,
++						1 << power_profile_mode,
++						NULL);
++	else
++		smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_SetWorkloadMask,
+ 						(!power_profile_mode) ? 0 : 1 << (power_profile_mode - 1),
+ 						NULL);
++
+ 	hwmgr->power_profile_mode = power_profile_mode;
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/drm_ioc32.c b/drivers/gpu/drm/drm_ioc32.c
+index 33390f02f5eb1..e41d3a69a02a4 100644
+--- a/drivers/gpu/drm/drm_ioc32.c
++++ b/drivers/gpu/drm/drm_ioc32.c
+@@ -856,8 +856,6 @@ static int compat_drm_wait_vblank(struct file *file, unsigned int cmd,
+ 	req.request.sequence = req32.request.sequence;
+ 	req.request.signal = req32.request.signal;
+ 	err = drm_ioctl_kernel(file, drm_wait_vblank_ioctl, &req, DRM_UNLOCKED);
+-	if (err)
+-		return err;
+ 
+ 	req32.reply.type = req.reply.type;
+ 	req32.reply.sequence = req.reply.sequence;
+@@ -866,7 +864,7 @@ static int compat_drm_wait_vblank(struct file *file, unsigned int cmd,
+ 	if (copy_to_user(argp, &req32, sizeof(req32)))
+ 		return -EFAULT;
+ 
+-	return 0;
++	return err;
+ }
+ 
+ #if defined(CONFIG_X86)
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 005510f309cf7..ddee6d4f07cf1 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -3833,23 +3833,18 @@ static void intel_dp_check_device_service_irq(struct intel_dp *intel_dp)
+ 
+ static void intel_dp_check_link_service_irq(struct intel_dp *intel_dp)
+ {
+-	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+ 	u8 val;
+ 
+ 	if (intel_dp->dpcd[DP_DPCD_REV] < 0x11)
+ 		return;
+ 
+ 	if (drm_dp_dpcd_readb(&intel_dp->aux,
+-			      DP_LINK_SERVICE_IRQ_VECTOR_ESI0, &val) != 1 || !val) {
+-		drm_dbg_kms(&i915->drm, "Error in reading link service irq vector\n");
++			      DP_LINK_SERVICE_IRQ_VECTOR_ESI0, &val) != 1 || !val)
+ 		return;
+-	}
+ 
+ 	if (drm_dp_dpcd_writeb(&intel_dp->aux,
+-			       DP_LINK_SERVICE_IRQ_VECTOR_ESI0, val) != 1) {
+-		drm_dbg_kms(&i915->drm, "Error in writing link service irq vector\n");
++			       DP_LINK_SERVICE_IRQ_VECTOR_ESI0, val) != 1)
+ 		return;
+-	}
+ 
+ 	if (val & HDMI_LINK_STATUS_CHANGED)
+ 		intel_dp_handle_hdmi_link_status_change(intel_dp);
+diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
+index f19cf6d2fa85c..feb1a76c639fb 100644
+--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
++++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
+@@ -127,6 +127,15 @@ static void intel_timeline_fini(struct rcu_head *rcu)
+ 
+ 	i915_vma_put(timeline->hwsp_ggtt);
+ 	i915_active_fini(&timeline->active);
++
++	/*
++	 * A small race exists between intel_gt_retire_requests_timeout and
++	 * intel_timeline_exit which could result in the syncmap not getting
++	 * freed. Rather than work too hard to seal this race, simply clean up
++	 * the syncmap on fini.
++	 */
++	i915_syncmap_free(&timeline->sync);
++
+ 	kfree(timeline);
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 1c9c0cdf85dbc..578aaac2e277d 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -2235,6 +2235,33 @@ nv50_disp_atomic_commit_tail(struct drm_atomic_state *state)
+ 		interlock[NV50_DISP_INTERLOCK_CORE] = 0;
+ 	}
+ 
++	/* Finish updating head(s)...
++	 *
++	 * NVD is rather picky about both where window assignments can change,
++	 * *and* about certain core and window channel states matching.
++	 *
++	 * The EFI GOP driver on newer GPUs configures window channels with a
++	 * different output format to what we do, and the core channel update
++	 * in the assign_windows case above would result in a state mismatch.
++	 *
++	 * Delay some of the head update until after that point to work around
++	 * the issue.  This only affects the initial modeset.
++	 *
++	 * TODO: handle this better when adding flexible window mapping
++	 */
++	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
++		struct nv50_head_atom *asyh = nv50_head_atom(new_crtc_state);
++		struct nv50_head *head = nv50_head(crtc);
++
++		NV_ATOMIC(drm, "%s: set %04x (clr %04x)\n", crtc->name,
++			  asyh->set.mask, asyh->clr.mask);
++
++		if (asyh->set.mask) {
++			nv50_head_flush_set_wndw(head, asyh);
++			interlock[NV50_DISP_INTERLOCK_CORE] = 1;
++		}
++	}
++
+ 	/* Update plane(s). */
+ 	for_each_new_plane_in_state(state, plane, new_plane_state, i) {
+ 		struct nv50_wndw_atom *asyw = nv50_wndw_atom(new_plane_state);
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/head.c b/drivers/gpu/drm/nouveau/dispnv50/head.c
+index ec361d17e900b..d66f97280282a 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/head.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/head.c
+@@ -50,11 +50,8 @@ nv50_head_flush_clr(struct nv50_head *head,
+ }
+ 
+ void
+-nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh)
++nv50_head_flush_set_wndw(struct nv50_head *head, struct nv50_head_atom *asyh)
+ {
+-	if (asyh->set.view   ) head->func->view    (head, asyh);
+-	if (asyh->set.mode   ) head->func->mode    (head, asyh);
+-	if (asyh->set.core   ) head->func->core_set(head, asyh);
+ 	if (asyh->set.olut   ) {
+ 		asyh->olut.offset = nv50_lut_load(&head->olut,
+ 						  asyh->olut.buffer,
+@@ -62,6 +59,14 @@ nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh)
+ 						  asyh->olut.load);
+ 		head->func->olut_set(head, asyh);
+ 	}
++}
++
++void
++nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh)
++{
++	if (asyh->set.view   ) head->func->view    (head, asyh);
++	if (asyh->set.mode   ) head->func->mode    (head, asyh);
++	if (asyh->set.core   ) head->func->core_set(head, asyh);
+ 	if (asyh->set.curs   ) head->func->curs_set(head, asyh);
+ 	if (asyh->set.base   ) head->func->base    (head, asyh);
+ 	if (asyh->set.ovly   ) head->func->ovly    (head, asyh);
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/head.h b/drivers/gpu/drm/nouveau/dispnv50/head.h
+index dae841dc05fdf..0bac6be9ba34d 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/head.h
++++ b/drivers/gpu/drm/nouveau/dispnv50/head.h
+@@ -21,6 +21,7 @@ struct nv50_head {
+ 
+ struct nv50_head *nv50_head_create(struct drm_device *, int index);
+ void nv50_head_flush_set(struct nv50_head *head, struct nv50_head_atom *asyh);
++void nv50_head_flush_set_wndw(struct nv50_head *head, struct nv50_head_atom *asyh);
+ void nv50_head_flush_clr(struct nv50_head *head,
+ 			 struct nv50_head_atom *asyh, bool flush);
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+index b930f539feec7..93ddf63d11140 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+@@ -2624,6 +2624,26 @@ nv174_chipset = {
+ 	.dma      = { 0x00000001, gv100_dma_new },
+ };
+ 
++static const struct nvkm_device_chip
++nv177_chipset = {
++	.name = "GA107",
++	.bar      = { 0x00000001, tu102_bar_new },
++	.bios     = { 0x00000001, nvkm_bios_new },
++	.devinit  = { 0x00000001, ga100_devinit_new },
++	.fb       = { 0x00000001, ga102_fb_new },
++	.gpio     = { 0x00000001, ga102_gpio_new },
++	.i2c      = { 0x00000001, gm200_i2c_new },
++	.imem     = { 0x00000001, nv50_instmem_new },
++	.mc       = { 0x00000001, ga100_mc_new },
++	.mmu      = { 0x00000001, tu102_mmu_new },
++	.pci      = { 0x00000001, gp100_pci_new },
++	.privring = { 0x00000001, gm200_privring_new },
++	.timer    = { 0x00000001, gk20a_timer_new },
++	.top      = { 0x00000001, ga100_top_new },
++	.disp     = { 0x00000001, ga102_disp_new },
++	.dma      = { 0x00000001, gv100_dma_new },
++};
++
+ static int
+ nvkm_device_event_ctor(struct nvkm_object *object, void *data, u32 size,
+ 		       struct nvkm_notify *notify)
+@@ -3049,6 +3069,7 @@ nvkm_device_ctor(const struct nvkm_device_func *func,
+ 		case 0x168: device->chip = &nv168_chipset; break;
+ 		case 0x172: device->chip = &nv172_chipset; break;
+ 		case 0x174: device->chip = &nv174_chipset; break;
++		case 0x177: device->chip = &nv177_chipset; break;
+ 		default:
+ 			if (nvkm_boolopt(device->cfgopt, "NvEnableUnsupportedChipsets", false)) {
+ 				switch (device->chipset) {
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
+index 55fbfe28c6dc1..9669472a2749d 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
+@@ -440,7 +440,7 @@ nvkm_dp_train(struct nvkm_dp *dp, u32 dataKBps)
+ 	return ret;
+ }
+ 
+-static void
++void
+ nvkm_dp_disable(struct nvkm_outp *outp, struct nvkm_ior *ior)
+ {
+ 	struct nvkm_dp *dp = nvkm_dp(outp);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.h
+index 428b3f488f033..e484d0c3b0d42 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.h
+@@ -32,6 +32,7 @@ struct nvkm_dp {
+ 
+ int nvkm_dp_new(struct nvkm_disp *, int index, struct dcb_output *,
+ 		struct nvkm_outp **);
++void nvkm_dp_disable(struct nvkm_outp *, struct nvkm_ior *);
+ 
+ /* DPCD Receiver Capabilities */
+ #define DPCD_RC00_DPCD_REV                                              0x00000
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
+index dffcac249211c..129982fef7ef6 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
+@@ -22,6 +22,7 @@
+  * Authors: Ben Skeggs
+  */
+ #include "outp.h"
++#include "dp.h"
+ #include "ior.h"
+ 
+ #include <subdev/bios.h>
+@@ -257,6 +258,14 @@ nvkm_outp_init_route(struct nvkm_outp *outp)
+ 	if (!ior->arm.head || ior->arm.proto != proto) {
+ 		OUTP_DBG(outp, "no heads (%x %d %d)", ior->arm.head,
+ 			 ior->arm.proto, proto);
++
++		/* The EFI GOP driver on Ampere can leave unused DP links routed,
++		 * which we don't expect.  The DisableLT IED script *should* get
++		 * us back to where we need to be.
++		 */
++		if (ior->func->route.get && !ior->arm.head && outp->info.type == DCB_OUTPUT_DP)
++			nvkm_dp_disable(outp, ior);
++
+ 		return;
+ 	}
+ 
+diff --git a/drivers/infiniband/core/uverbs_std_types_mr.c b/drivers/infiniband/core/uverbs_std_types_mr.c
+index f782d5e1aa255..03e1db5d1e8c3 100644
+--- a/drivers/infiniband/core/uverbs_std_types_mr.c
++++ b/drivers/infiniband/core/uverbs_std_types_mr.c
+@@ -249,6 +249,9 @@ static int UVERBS_HANDLER(UVERBS_METHOD_REG_DMABUF_MR)(
+ 	mr->uobject = uobj;
+ 	atomic_inc(&pd->usecnt);
+ 
++	rdma_restrack_new(&mr->res, RDMA_RESTRACK_MR);
++	rdma_restrack_set_name(&mr->res, NULL);
++	rdma_restrack_add(&mr->res);
+ 	uobj->object = mr;
+ 
+ 	uverbs_finalize_uobj_create(attrs, UVERBS_ATTR_REG_DMABUF_MR_HANDLE);
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 2efaa80bfbd2d..9d79005befd63 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -1691,6 +1691,7 @@ int bnxt_re_create_srq(struct ib_srq *ib_srq,
+ 	if (nq)
+ 		nq->budget++;
+ 	atomic_inc(&rdev->srq_count);
++	spin_lock_init(&srq->lock);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 25550d982238c..8097a8d8a49f4 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -1406,7 +1406,6 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 wqe_mode)
+ 	memset(&rattr, 0, sizeof(rattr));
+ 	rc = bnxt_re_register_netdev(rdev);
+ 	if (rc) {
+-		rtnl_unlock();
+ 		ibdev_err(&rdev->ibdev,
+ 			  "Failed to register with netdev: %#x\n", rc);
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/hw/efa/efa_main.c b/drivers/infiniband/hw/efa/efa_main.c
+index 816cfd65b7ac8..0b61ef0d59833 100644
+--- a/drivers/infiniband/hw/efa/efa_main.c
++++ b/drivers/infiniband/hw/efa/efa_main.c
+@@ -356,6 +356,7 @@ static int efa_enable_msix(struct efa_dev *dev)
+ 	}
+ 
+ 	if (irq_num != msix_vecs) {
++		efa_disable_msix(dev);
+ 		dev_err(&dev->pdev->dev,
+ 			"Allocated %d MSI-X (out of %d requested)\n",
+ 			irq_num, msix_vecs);
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index 1fcc6e9666e05..8e902b83ce26f 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -3055,6 +3055,7 @@ static void __sdma_process_event(struct sdma_engine *sde,
+ static int _extend_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ {
+ 	int i;
++	struct sdma_desc *descp;
+ 
+ 	/* Handle last descriptor */
+ 	if (unlikely((tx->num_desc == (MAX_DESC - 1)))) {
+@@ -3075,12 +3076,10 @@ static int _extend_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ 	if (unlikely(tx->num_desc == MAX_DESC))
+ 		goto enomem;
+ 
+-	tx->descp = kmalloc_array(
+-			MAX_DESC,
+-			sizeof(struct sdma_desc),
+-			GFP_ATOMIC);
+-	if (!tx->descp)
++	descp = kmalloc_array(MAX_DESC, sizeof(struct sdma_desc), GFP_ATOMIC);
++	if (!descp)
+ 		goto enomem;
++	tx->descp = descp;
+ 
+ 	/* reserve last descriptor for coalescing */
+ 	tx->desc_limit = MAX_DESC - 1;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index cca7296b12d01..9bb562c7cd241 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -4444,7 +4444,8 @@ static void mlx5r_mp_remove(struct auxiliary_device *adev)
+ 	mutex_lock(&mlx5_ib_multiport_mutex);
+ 	if (mpi->ibdev)
+ 		mlx5_ib_unbind_slave_port(mpi->ibdev, mpi);
+-	list_del(&mpi->list);
++	else
++		list_del(&mpi->list);
+ 	mutex_unlock(&mlx5_ib_multiport_mutex);
+ 	kfree(mpi);
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
+index 0ea9a5aa4ec0d..1c1d1b53312dc 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
++++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
+@@ -85,7 +85,7 @@ int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 		goto out;
+ 	}
+ 
+-	elem = rxe_alloc(&rxe->mc_elem_pool);
++	elem = rxe_alloc_locked(&rxe->mc_elem_pool);
+ 	if (!elem) {
+ 		err = -ENOMEM;
+ 		goto out;
+diff --git a/drivers/media/pci/intel/ipu3/cio2-bridge.c b/drivers/media/pci/intel/ipu3/cio2-bridge.c
+index 59a36f9226755..30d29b96a3396 100644
+--- a/drivers/media/pci/intel/ipu3/cio2-bridge.c
++++ b/drivers/media/pci/intel/ipu3/cio2-bridge.c
+@@ -226,7 +226,7 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
+ err_free_swnodes:
+ 	software_node_unregister_nodes(sensor->swnodes);
+ err_put_adev:
+-	acpi_dev_put(sensor->adev);
++	acpi_dev_put(adev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/mmc/host/sdhci-iproc.c b/drivers/mmc/host/sdhci-iproc.c
+index 9f0eef97ebddd..b9eb2ec61a83a 100644
+--- a/drivers/mmc/host/sdhci-iproc.c
++++ b/drivers/mmc/host/sdhci-iproc.c
+@@ -295,8 +295,7 @@ static const struct sdhci_ops sdhci_iproc_bcm2711_ops = {
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_bcm2711_pltfm_data = {
+-	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12 |
+-		  SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
++	.quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
+ 	.ops = &sdhci_iproc_bcm2711_ops,
+ };
+ 
+diff --git a/drivers/net/can/usb/esd_usb2.c b/drivers/net/can/usb/esd_usb2.c
+index 66fa8b07c2e6f..95ae740fc3110 100644
+--- a/drivers/net/can/usb/esd_usb2.c
++++ b/drivers/net/can/usb/esd_usb2.c
+@@ -224,8 +224,8 @@ static void esd_usb2_rx_event(struct esd_usb2_net_priv *priv,
+ 	if (id == ESD_EV_CAN_ERROR_EXT) {
+ 		u8 state = msg->msg.rx.data[0];
+ 		u8 ecc = msg->msg.rx.data[1];
+-		u8 txerr = msg->msg.rx.data[2];
+-		u8 rxerr = msg->msg.rx.data[3];
++		u8 rxerr = msg->msg.rx.data[2];
++		u8 txerr = msg->msg.rx.data[3];
+ 
+ 		skb = alloc_can_err_skb(priv->netdev, &cf);
+ 		if (skb == NULL) {
+diff --git a/drivers/net/dsa/hirschmann/hellcreek.c b/drivers/net/dsa/hirschmann/hellcreek.c
+index 50109218baadd..512b1810a8bdc 100644
+--- a/drivers/net/dsa/hirschmann/hellcreek.c
++++ b/drivers/net/dsa/hirschmann/hellcreek.c
+@@ -1473,9 +1473,6 @@ static void hellcreek_setup_gcl(struct hellcreek *hellcreek, int port,
+ 		u16 data;
+ 		u8 gates;
+ 
+-		cur++;
+-		next++;
+-
+ 		if (i == schedule->num_entries)
+ 			gates = initial->gate_mask ^
+ 				cur->gate_mask;
+@@ -1504,6 +1501,9 @@ static void hellcreek_setup_gcl(struct hellcreek *hellcreek, int port,
+ 			(initial->gate_mask <<
+ 			 TR_GCLCMD_INIT_GATE_STATES_SHIFT);
+ 		hellcreek_write(hellcreek, data, TR_GCLCMD);
++
++		cur++;
++		next++;
+ 	}
+ }
+ 
+@@ -1551,7 +1551,7 @@ static bool hellcreek_schedule_startable(struct hellcreek *hellcreek, int port)
+ 	/* Calculate difference to admin base time */
+ 	base_time_ns = ktime_to_ns(hellcreek_port->current_schedule->base_time);
+ 
+-	return base_time_ns - current_ns < (s64)8 * NSEC_PER_SEC;
++	return base_time_ns - current_ns < (s64)4 * NSEC_PER_SEC;
+ }
+ 
+ static void hellcreek_start_schedule(struct hellcreek *hellcreek, int port)
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 167c599a81a55..2b01efad1a51c 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1295,11 +1295,8 @@ mt7530_port_bridge_leave(struct dsa_switch *ds, int port,
+ 		/* Remove this port from the port matrix of the other ports
+ 		 * in the same bridge. If the port is disabled, port matrix
+ 		 * is kept and not being setup until the port becomes enabled.
+-		 * And the other port's port matrix cannot be broken when the
+-		 * other port is still a VLAN-aware port.
+ 		 */
+-		if (dsa_is_user_port(ds, i) && i != port &&
+-		   !dsa_port_is_vlan_filtering(dsa_to_port(ds, i))) {
++		if (dsa_is_user_port(ds, i) && i != port) {
+ 			if (dsa_to_port(ds, i)->bridge_dev != bridge)
+ 				continue;
+ 			if (priv->ports[i].enable)
+diff --git a/drivers/net/ethernet/apm/xgene-v2/main.c b/drivers/net/ethernet/apm/xgene-v2/main.c
+index 860c18fb7aae9..80399c8980bd3 100644
+--- a/drivers/net/ethernet/apm/xgene-v2/main.c
++++ b/drivers/net/ethernet/apm/xgene-v2/main.c
+@@ -677,11 +677,13 @@ static int xge_probe(struct platform_device *pdev)
+ 	ret = register_netdev(ndev);
+ 	if (ret) {
+ 		netdev_err(ndev, "Failed to register netdev\n");
+-		goto err;
++		goto err_mdio_remove;
+ 	}
+ 
+ 	return 0;
+ 
++err_mdio_remove:
++	xge_mdio_remove(ndev);
+ err:
+ 	free_netdev(ndev);
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 9f62ffe647819..9200467176382 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -5068,6 +5068,7 @@ static int adap_init0(struct adapter *adap, int vpd_skip)
+ 		ret = -ENOMEM;
+ 		goto bye;
+ 	}
++	bitmap_zero(adap->sge.blocked_fl, adap->sge.egr_sz);
+ #endif
+ 
+ 	params[0] = FW_PARAM_PFVF(CLIP_START);
+@@ -6788,13 +6789,11 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	setup_memwin(adapter);
+ 	err = adap_init0(adapter, 0);
+-#ifdef CONFIG_DEBUG_FS
+-	bitmap_zero(adapter->sge.blocked_fl, adapter->sge.egr_sz);
+-#endif
+-	setup_memwin_rdma(adapter);
+ 	if (err)
+ 		goto out_unmap_bar;
+ 
++	setup_memwin_rdma(adapter);
++
+ 	/* configure SGE_STAT_CFG_A to read WC stats */
+ 	if (!is_t4(adapter->params.chip))
+ 		t4_write_reg(adapter, SGE_STAT_CFG_A, STATSOURCE_T5_V(7) |
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+index 76a482456f1f9..91445521dde10 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+@@ -564,9 +564,13 @@ static void hclge_cmd_uninit_regs(struct hclge_hw *hw)
+ 
+ void hclge_cmd_uninit(struct hclge_dev *hdev)
+ {
++	set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state);
++	/* Wait to ensure that the firmware completes any leftover
++	 * commands.
++	 */
++	msleep(HCLGE_CMDQ_CLEAR_WAIT_TIME);
+ 	spin_lock_bh(&hdev->hw.cmq.csq.lock);
+ 	spin_lock(&hdev->hw.cmq.crq.lock);
+-	set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state);
+ 	hclge_cmd_uninit_regs(&hdev->hw);
+ 	spin_unlock(&hdev->hw.cmq.crq.lock);
+ 	spin_unlock_bh(&hdev->hw.cmq.csq.lock);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+index c6fc22e295814..a836bdba5a4d5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+@@ -9,6 +9,7 @@
+ #include "hnae3.h"
+ 
+ #define HCLGE_CMDQ_TX_TIMEOUT		30000
++#define HCLGE_CMDQ_CLEAR_WAIT_TIME	200
+ #define HCLGE_DESC_DATA_LEN		6
+ 
+ struct hclge_dev;
+@@ -264,6 +265,9 @@ enum hclge_opcode_type {
+ 	/* Led command */
+ 	HCLGE_OPC_LED_STATUS_CFG	= 0xB000,
+ 
++	/* clear hardware resource command */
++	HCLGE_OPC_CLEAR_HW_RESOURCE	= 0x700B,
++
+ 	/* NCL config command */
+ 	HCLGE_OPC_QUERY_NCL_CONFIG	= 0x7011,
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index 5bf5db91d16cc..39f56f245d843 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -255,21 +255,12 @@ static int hclge_ieee_getpfc(struct hnae3_handle *h, struct ieee_pfc *pfc)
+ 	u64 requests[HNAE3_MAX_TC], indications[HNAE3_MAX_TC];
+ 	struct hclge_vport *vport = hclge_get_vport(h);
+ 	struct hclge_dev *hdev = vport->back;
+-	u8 i, j, pfc_map, *prio_tc;
+ 	int ret;
++	u8 i;
+ 
+ 	memset(pfc, 0, sizeof(*pfc));
+ 	pfc->pfc_cap = hdev->pfc_max;
+-	prio_tc = hdev->tm_info.prio_tc;
+-	pfc_map = hdev->tm_info.hw_pfc_map;
+-
+-	/* Pfc setting is based on TC */
+-	for (i = 0; i < hdev->tm_info.num_tc; i++) {
+-		for (j = 0; j < HNAE3_MAX_USER_PRIO; j++) {
+-			if ((prio_tc[j] == i) && (pfc_map & BIT(i)))
+-				pfc->pfc_en |= BIT(j);
+-		}
+-	}
++	pfc->pfc_en = hdev->tm_info.pfc_en;
+ 
+ 	ret = hclge_pfc_tx_stats_get(hdev, requests);
+ 	if (ret)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 6304aed49f224..f105ff9e3f4cd 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -2924,12 +2924,12 @@ static void hclge_update_link_status(struct hclge_dev *hdev)
+ 	}
+ 
+ 	if (state != hdev->hw.mac.link) {
++		hdev->hw.mac.link = state;
+ 		client->ops->link_status_change(handle, state);
+ 		hclge_config_mac_tnl_int(hdev, state);
+ 		if (rclient && rclient->ops->link_status_change)
+ 			rclient->ops->link_status_change(rhandle, state);
+ 
+-		hdev->hw.mac.link = state;
+ 		hclge_push_link_status(hdev);
+ 	}
+ 
+@@ -9869,7 +9869,11 @@ static int hclge_init_vlan_config(struct hclge_dev *hdev)
+ static void hclge_add_vport_vlan_table(struct hclge_vport *vport, u16 vlan_id,
+ 				       bool writen_to_tbl)
+ {
+-	struct hclge_vport_vlan_cfg *vlan;
++	struct hclge_vport_vlan_cfg *vlan, *tmp;
++
++	list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node)
++		if (vlan->vlan_id == vlan_id)
++			return;
+ 
+ 	vlan = kzalloc(sizeof(*vlan), GFP_KERNEL);
+ 	if (!vlan)
+@@ -11167,6 +11171,28 @@ static void hclge_clear_resetting_state(struct hclge_dev *hdev)
+ 	}
+ }
+ 
++static int hclge_clear_hw_resource(struct hclge_dev *hdev)
++{
++	struct hclge_desc desc;
++	int ret;
++
++	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_CLEAR_HW_RESOURCE, false);
++
++	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
++	/* This new command is only supported by new firmware; it will
++	 * fail with older firmware. Since only older firmware can return
++	 * -EOPNOTSUPP for this command, we override that value and
++	 * return success to keep the code backward compatible.
++	 */
++	if (ret && ret != -EOPNOTSUPP) {
++		dev_err(&hdev->pdev->dev,
++			"failed to clear hw resource, ret = %d\n", ret);
++		return ret;
++	}
++	return 0;
++}
++
+ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
+ {
+ 	struct pci_dev *pdev = ae_dev->pdev;
+@@ -11204,6 +11230,10 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
+ 	if (ret)
+ 		goto err_cmd_uninit;
+ 
++	ret  = hclge_clear_hw_resource(hdev);
++	if (ret)
++		goto err_cmd_uninit;
++
+ 	ret = hclge_get_cap(hdev);
+ 	if (ret)
+ 		goto err_cmd_uninit;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
+index d8c5c5810b99d..2267832037d8a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
+@@ -505,12 +505,17 @@ static void hclgevf_cmd_uninit_regs(struct hclgevf_hw *hw)
+ 
+ void hclgevf_cmd_uninit(struct hclgevf_dev *hdev)
+ {
++	set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state);
++	/* Wait to ensure that the firmware completes any leftover
++	 * commands.
++	 */
++	msleep(HCLGEVF_CMDQ_CLEAR_WAIT_TIME);
+ 	spin_lock_bh(&hdev->hw.cmq.csq.lock);
+ 	spin_lock(&hdev->hw.cmq.crq.lock);
+-	set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state);
+ 	hclgevf_cmd_uninit_regs(&hdev->hw);
+ 	spin_unlock(&hdev->hw.cmq.crq.lock);
+ 	spin_unlock_bh(&hdev->hw.cmq.csq.lock);
++
+ 	hclgevf_free_cmd_desc(&hdev->hw.cmq.csq);
+ 	hclgevf_free_cmd_desc(&hdev->hw.cmq.crq);
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
+index c6dc11b32aa7a..59f4c19bd846b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.h
+@@ -8,6 +8,7 @@
+ #include "hnae3.h"
+ 
+ #define HCLGEVF_CMDQ_TX_TIMEOUT		30000
++#define HCLGEVF_CMDQ_CLEAR_WAIT_TIME	200
+ #define HCLGEVF_CMDQ_RX_INVLD_B		0
+ #define HCLGEVF_CMDQ_RX_OUTVLD_B	1
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index fe03c84198906..d3f36e2b2b908 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -498,10 +498,10 @@ void hclgevf_update_link_status(struct hclgevf_dev *hdev, int link_state)
+ 	link_state =
+ 		test_bit(HCLGEVF_STATE_DOWN, &hdev->state) ? 0 : link_state;
+ 	if (link_state != hdev->hw.mac.link) {
++		hdev->hw.mac.link = link_state;
+ 		client->ops->link_status_change(handle, !!link_state);
+ 		if (rclient && rclient->ops->link_status_change)
+ 			rclient->ops->link_status_change(rhandle, !!link_state);
+-		hdev->hw.mac.link = link_state;
+ 	}
+ 
+ 	clear_bit(HCLGEVF_STATE_LINK_UPDATING, &hdev->state);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+index 9b17735b9f4ce..987b88d7e0c7d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+@@ -304,8 +304,8 @@ void hclgevf_mbx_async_handler(struct hclgevf_dev *hdev)
+ 			flag = (u8)msg_q[5];
+ 
+ 			/* update upper layer with new link status */
+-			hclgevf_update_link_status(hdev, link_status);
+ 			hclgevf_update_speed_duplex(hdev, speed, duplex);
++			hclgevf_update_link_status(hdev, link_status);
+ 
+ 			if (flag & HCLGE_MBX_PUSH_LINK_STATUS_EN)
+ 				set_bit(HCLGEVF_STATE_PF_PUSH_LINK_STATUS,
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 590ad110d383b..c6ec669aa7bdd 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -1006,6 +1006,8 @@ static s32 e1000_platform_pm_pch_lpt(struct e1000_hw *hw, bool link)
+ {
+ 	u32 reg = link << (E1000_LTRV_REQ_SHIFT + E1000_LTRV_NOSNOOP_SHIFT) |
+ 	    link << E1000_LTRV_REQ_SHIFT | E1000_LTRV_SEND;
++	u16 max_ltr_enc_d = 0;	/* maximum LTR decoded by platform */
++	u16 lat_enc_d = 0;	/* latency decoded */
+ 	u16 lat_enc = 0;	/* latency encoded */
+ 
+ 	if (link) {
+@@ -1059,7 +1061,17 @@ static s32 e1000_platform_pm_pch_lpt(struct e1000_hw *hw, bool link)
+ 				     E1000_PCI_LTR_CAP_LPT + 2, &max_nosnoop);
+ 		max_ltr_enc = max_t(u16, max_snoop, max_nosnoop);
+ 
+-		if (lat_enc > max_ltr_enc)
++		lat_enc_d = (lat_enc & E1000_LTRV_VALUE_MASK) *
++			     (1U << (E1000_LTRV_SCALE_FACTOR *
++			     ((lat_enc & E1000_LTRV_SCALE_MASK)
++			     >> E1000_LTRV_SCALE_SHIFT)));
++
++		max_ltr_enc_d = (max_ltr_enc & E1000_LTRV_VALUE_MASK) *
++				 (1U << (E1000_LTRV_SCALE_FACTOR *
++				 ((max_ltr_enc & E1000_LTRV_SCALE_MASK)
++				 >> E1000_LTRV_SCALE_SHIFT)));
++
++		if (lat_enc_d > max_ltr_enc_d)
+ 			lat_enc = max_ltr_enc;
+ 	}
+ 
+@@ -4115,13 +4127,17 @@ static s32 e1000_validate_nvm_checksum_ich8lan(struct e1000_hw *hw)
+ 		return ret_val;
+ 
+ 	if (!(data & valid_csum_mask)) {
+-		data |= valid_csum_mask;
+-		ret_val = e1000_write_nvm(hw, word, 1, &data);
+-		if (ret_val)
+-			return ret_val;
+-		ret_val = e1000e_update_nvm_checksum(hw);
+-		if (ret_val)
+-			return ret_val;
++		e_dbg("NVM Checksum Invalid\n");
++
++		if (hw->mac.type < e1000_pch_cnp) {
++			data |= valid_csum_mask;
++			ret_val = e1000_write_nvm(hw, word, 1, &data);
++			if (ret_val)
++				return ret_val;
++			ret_val = e1000e_update_nvm_checksum(hw);
++			if (ret_val)
++				return ret_val;
++		}
+ 	}
+ 
+ 	return e1000e_validate_nvm_checksum_generic(hw);
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.h b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+index 1502895eb45dd..e757896287eba 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.h
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+@@ -274,8 +274,11 @@
+ 
+ /* Latency Tolerance Reporting */
+ #define E1000_LTRV			0x000F8
++#define E1000_LTRV_VALUE_MASK		0x000003FF
+ #define E1000_LTRV_SCALE_MAX		5
+ #define E1000_LTRV_SCALE_FACTOR		5
++#define E1000_LTRV_SCALE_SHIFT		10
++#define E1000_LTRV_SCALE_MASK		0x00001C00
+ #define E1000_LTRV_REQ_SHIFT		15
+ #define E1000_LTRV_NOSNOOP_SHIFT	16
+ #define E1000_LTRV_SEND			(1 << 30)
+diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c
+index cf685eeea198e..e256f70cf59d0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_devlink.c
++++ b/drivers/net/ethernet/intel/ice/ice_devlink.c
+@@ -42,7 +42,9 @@ static int ice_info_pba(struct ice_pf *pf, struct ice_info_ctx *ctx)
+ 
+ 	status = ice_read_pba_string(hw, (u8 *)ctx->buf, sizeof(ctx->buf));
+ 	if (status)
+-		return -EIO;
++		/* We failed to locate the PBA, so just skip this entry */
++		dev_dbg(ice_pf_to_dev(pf), "Failed to read Product Board Assembly string, status %s\n",
++			ice_stat_str(status));
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index a8d5f196fdbd6..9b85fdf012977 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -146,6 +146,9 @@ static void igc_release_hw_control(struct igc_adapter *adapter)
+ 	struct igc_hw *hw = &adapter->hw;
+ 	u32 ctrl_ext;
+ 
++	if (!pci_device_is_present(adapter->pdev))
++		return;
++
+ 	/* Let firmware take over control of h/w */
+ 	ctrl_ext = rd32(IGC_CTRL_EXT);
+ 	wr32(IGC_CTRL_EXT,
+@@ -4037,26 +4040,29 @@ void igc_down(struct igc_adapter *adapter)
+ 
+ 	igc_ptp_suspend(adapter);
+ 
+-	/* disable receives in the hardware */
+-	rctl = rd32(IGC_RCTL);
+-	wr32(IGC_RCTL, rctl & ~IGC_RCTL_EN);
+-	/* flush and sleep below */
+-
++	if (pci_device_is_present(adapter->pdev)) {
++		/* disable receives in the hardware */
++		rctl = rd32(IGC_RCTL);
++		wr32(IGC_RCTL, rctl & ~IGC_RCTL_EN);
++		/* flush and sleep below */
++	}
+ 	/* set trans_start so we don't get spurious watchdogs during reset */
+ 	netif_trans_update(netdev);
+ 
+ 	netif_carrier_off(netdev);
+ 	netif_tx_stop_all_queues(netdev);
+ 
+-	/* disable transmits in the hardware */
+-	tctl = rd32(IGC_TCTL);
+-	tctl &= ~IGC_TCTL_EN;
+-	wr32(IGC_TCTL, tctl);
+-	/* flush both disables and wait for them to finish */
+-	wrfl();
+-	usleep_range(10000, 20000);
++	if (pci_device_is_present(adapter->pdev)) {
++		/* disable transmits in the hardware */
++		tctl = rd32(IGC_TCTL);
++		tctl &= ~IGC_TCTL_EN;
++		wr32(IGC_TCTL, tctl);
++		/* flush both disables and wait for them to finish */
++		wrfl();
++		usleep_range(10000, 20000);
+ 
+-	igc_irq_disable(adapter);
++		igc_irq_disable(adapter);
++	}
+ 
+ 	adapter->flags &= ~IGC_FLAG_NEED_LINK_UPDATE;
+ 
+@@ -5074,7 +5080,7 @@ static bool validate_schedule(struct igc_adapter *adapter,
+ 		if (e->command != TC_TAPRIO_CMD_SET_GATES)
+ 			return false;
+ 
+-		for (i = 0; i < IGC_MAX_TX_QUEUES; i++) {
++		for (i = 0; i < adapter->num_tx_queues; i++) {
+ 			if (e->gate_mask & BIT(i))
+ 				queue_uses[i]++;
+ 
+@@ -5131,7 +5137,7 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
+ 
+ 		end_time += e->interval;
+ 
+-		for (i = 0; i < IGC_MAX_TX_QUEUES; i++) {
++		for (i = 0; i < adapter->num_tx_queues; i++) {
+ 			struct igc_ring *ring = adapter->tx_ring[i];
+ 
+ 			if (!(e->gate_mask & BIT(i)))
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index 69617d2c1be23..4ae19c6a32477 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -849,7 +849,8 @@ void igc_ptp_suspend(struct igc_adapter *adapter)
+ 	adapter->ptp_tx_skb = NULL;
+ 	clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
+ 
+-	igc_ptp_time_save(adapter);
++	if (pci_device_is_present(adapter->pdev))
++		igc_ptp_time_save(adapter);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 6186230141800..76c6805157643 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -105,7 +105,7 @@
+ #define	MVNETA_VLAN_PRIO_TO_RXQ			 0x2440
+ #define      MVNETA_VLAN_PRIO_RXQ_MAP(prio, rxq) ((rxq) << ((prio) * 3))
+ #define MVNETA_PORT_STATUS                       0x2444
+-#define      MVNETA_TX_IN_PRGRS                  BIT(1)
++#define      MVNETA_TX_IN_PRGRS                  BIT(0)
+ #define      MVNETA_TX_FIFO_EMPTY                BIT(8)
+ #define MVNETA_RX_MIN_FRAME_SIZE                 0x247c
+ /* Only exists on Armada XP and Armada 370 */
+diff --git a/drivers/net/ethernet/mscc/ocelot_io.c b/drivers/net/ethernet/mscc/ocelot_io.c
+index ea4e83410fe4d..7390fa3980ec5 100644
+--- a/drivers/net/ethernet/mscc/ocelot_io.c
++++ b/drivers/net/ethernet/mscc/ocelot_io.c
+@@ -21,7 +21,7 @@ u32 __ocelot_read_ix(struct ocelot *ocelot, u32 reg, u32 offset)
+ 		    ocelot->map[target][reg & REG_MASK] + offset, &val);
+ 	return val;
+ }
+-EXPORT_SYMBOL(__ocelot_read_ix);
++EXPORT_SYMBOL_GPL(__ocelot_read_ix);
+ 
+ void __ocelot_write_ix(struct ocelot *ocelot, u32 val, u32 reg, u32 offset)
+ {
+@@ -32,7 +32,7 @@ void __ocelot_write_ix(struct ocelot *ocelot, u32 val, u32 reg, u32 offset)
+ 	regmap_write(ocelot->targets[target],
+ 		     ocelot->map[target][reg & REG_MASK] + offset, val);
+ }
+-EXPORT_SYMBOL(__ocelot_write_ix);
++EXPORT_SYMBOL_GPL(__ocelot_write_ix);
+ 
+ void __ocelot_rmw_ix(struct ocelot *ocelot, u32 val, u32 mask, u32 reg,
+ 		     u32 offset)
+@@ -45,7 +45,7 @@ void __ocelot_rmw_ix(struct ocelot *ocelot, u32 val, u32 mask, u32 reg,
+ 			   ocelot->map[target][reg & REG_MASK] + offset,
+ 			   mask, val);
+ }
+-EXPORT_SYMBOL(__ocelot_rmw_ix);
++EXPORT_SYMBOL_GPL(__ocelot_rmw_ix);
+ 
+ u32 ocelot_port_readl(struct ocelot_port *port, u32 reg)
+ {
+@@ -58,7 +58,7 @@ u32 ocelot_port_readl(struct ocelot_port *port, u32 reg)
+ 	regmap_read(port->target, ocelot->map[target][reg & REG_MASK], &val);
+ 	return val;
+ }
+-EXPORT_SYMBOL(ocelot_port_readl);
++EXPORT_SYMBOL_GPL(ocelot_port_readl);
+ 
+ void ocelot_port_writel(struct ocelot_port *port, u32 val, u32 reg)
+ {
+@@ -69,7 +69,7 @@ void ocelot_port_writel(struct ocelot_port *port, u32 val, u32 reg)
+ 
+ 	regmap_write(port->target, ocelot->map[target][reg & REG_MASK], val);
+ }
+-EXPORT_SYMBOL(ocelot_port_writel);
++EXPORT_SYMBOL_GPL(ocelot_port_writel);
+ 
+ void ocelot_port_rmwl(struct ocelot_port *port, u32 val, u32 mask, u32 reg)
+ {
+@@ -77,7 +77,7 @@ void ocelot_port_rmwl(struct ocelot_port *port, u32 val, u32 mask, u32 reg)
+ 
+ 	ocelot_port_writel(port, (cur & (~mask)) | val, reg);
+ }
+-EXPORT_SYMBOL(ocelot_port_rmwl);
++EXPORT_SYMBOL_GPL(ocelot_port_rmwl);
+ 
+ u32 __ocelot_target_read_ix(struct ocelot *ocelot, enum ocelot_target target,
+ 			    u32 reg, u32 offset)
+@@ -128,7 +128,7 @@ int ocelot_regfields_init(struct ocelot *ocelot,
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL(ocelot_regfields_init);
++EXPORT_SYMBOL_GPL(ocelot_regfields_init);
+ 
+ static struct regmap_config ocelot_regmap_config = {
+ 	.reg_bits	= 32,
+@@ -148,4 +148,4 @@ struct regmap *ocelot_regmap_init(struct ocelot *ocelot, struct resource *res)
+ 
+ 	return devm_regmap_init_mmio(ocelot->dev, regs, &ocelot_regmap_config);
+ }
+-EXPORT_SYMBOL(ocelot_regmap_init);
++EXPORT_SYMBOL_GPL(ocelot_regmap_init);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_ll2.c b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+index 49783f365079e..f2c8273dce67d 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_ll2.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_ll2.c
+@@ -327,6 +327,9 @@ static int qed_ll2_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
+ 	unsigned long flags;
+ 	int rc = -EINVAL;
+ 
++	if (!p_ll2_conn)
++		return rc;
++
+ 	spin_lock_irqsave(&p_tx->lock, flags);
+ 	if (p_tx->b_completing_packet) {
+ 		rc = -EBUSY;
+@@ -500,7 +503,16 @@ static int qed_ll2_rxq_completion(struct qed_hwfn *p_hwfn, void *cookie)
+ 	unsigned long flags = 0;
+ 	int rc = 0;
+ 
++	if (!p_ll2_conn)
++		return rc;
++
+ 	spin_lock_irqsave(&p_rx->lock, flags);
++
++	if (!QED_LL2_RX_REGISTERED(p_ll2_conn)) {
++		spin_unlock_irqrestore(&p_rx->lock, flags);
++		return 0;
++	}
++
+ 	cq_new_idx = le16_to_cpu(*p_rx->p_fw_cons);
+ 	cq_old_idx = qed_chain_get_cons_idx(&p_rx->rcq_chain);
+ 
+@@ -821,6 +833,9 @@ static int qed_ll2_lb_rxq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
+ 	struct qed_ll2_info *p_ll2_conn = (struct qed_ll2_info *)p_cookie;
+ 	int rc;
+ 
++	if (!p_ll2_conn)
++		return 0;
++
+ 	if (!QED_LL2_RX_REGISTERED(p_ll2_conn))
+ 		return 0;
+ 
+@@ -844,6 +859,9 @@ static int qed_ll2_lb_txq_completion(struct qed_hwfn *p_hwfn, void *p_cookie)
+ 	u16 new_idx = 0, num_bds = 0;
+ 	int rc;
+ 
++	if (!p_ll2_conn)
++		return 0;
++
+ 	if (!QED_LL2_TX_REGISTERED(p_ll2_conn))
+ 		return 0;
+ 
+@@ -1725,6 +1743,8 @@ int qed_ll2_post_rx_buffer(void *cxt,
+ 	if (!p_ll2_conn)
+ 		return -EINVAL;
+ 	p_rx = &p_ll2_conn->rx_queue;
++	if (!p_rx->set_prod_addr)
++		return -EIO;
+ 
+ 	spin_lock_irqsave(&p_rx->lock, flags);
+ 	if (!list_empty(&p_rx->free_descq))
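The qed_ll2 hunks above all add the same defensive check: each completion handler receives an opaque cookie that can be NULL while a connection is being torn down, so it returns before touching queue state or taking locks. A minimal sketch of the pattern (all names hypothetical):

    /* Illustrative guard: validate the opaque cookie before using it,
     * since teardown can race the completion path.
     */
    struct conn;                            /* opaque connection type */

    static int completion_cb(void *cookie)
    {
            struct conn *c = cookie;

            if (!c)                         /* already torn down */
                    return 0;

            /* ... safe to lock and drain the queue for c here ... */
            return 0;
    }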
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_rdma.c b/drivers/net/ethernet/qlogic/qed/qed_rdma.c
+index da864d12916b7..4f4b79250a2b2 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_rdma.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_rdma.c
+@@ -1285,8 +1285,7 @@ qed_rdma_create_qp(void *rdma_cxt,
+ 
+ 	if (!rdma_cxt || !in_params || !out_params ||
+ 	    !p_hwfn->p_rdma_info->active) {
+-		DP_ERR(p_hwfn->cdev,
+-		       "qed roce create qp failed due to NULL entry (rdma_cxt=%p, in=%p, out=%p, roce_info=?\n",
++		pr_err("qed roce create qp failed due to NULL entry (rdma_cxt=%p, in=%p, out=%p, roce_info=?\n",
+ 		       rdma_cxt, in_params, out_params);
+ 		return NULL;
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 980a60477b022..a5285a8a9eaeb 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -4925,6 +4925,10 @@ read_again:
+ 
+ 		prefetch(np);
+ 
++		/* Ensure a valid XSK buffer before proceeding */
++		if (!buf->xdp)
++			break;
++
+ 		if (priv->extend_desc)
+ 			stmmac_rx_extended_status(priv, &priv->dev->stats,
+ 						  &priv->xstats,
+@@ -4945,10 +4949,6 @@ read_again:
+ 			continue;
+ 		}
+ 
+-		/* Ensure a valid XSK buffer before proceed */
+-		if (!buf->xdp)
+-			break;
+-
+ 		/* XSK pool expects RX frame 1:1 mapped to XSK buffer */
+ 		if (likely(status & rx_not_ls)) {
+ 			xsk_buff_free(buf->xdp);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+index 4e70efc45458e..c7554db2962f1 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+@@ -775,14 +775,18 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ 					 GFP_KERNEL);
+ 		if (!plat->est)
+ 			return -ENOMEM;
++
++		mutex_init(&priv->plat->est->lock);
+ 	} else {
+ 		memset(plat->est, 0, sizeof(*plat->est));
+ 	}
+ 
+ 	size = qopt->num_entries;
+ 
++	mutex_lock(&priv->plat->est->lock);
+ 	priv->plat->est->gcl_size = size;
+ 	priv->plat->est->enable = qopt->enable;
++	mutex_unlock(&priv->plat->est->lock);
+ 
+ 	for (i = 0; i < size; i++) {
+ 		s64 delta_ns = qopt->entries[i].interval;
+@@ -813,6 +817,7 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ 		priv->plat->est->gcl[i] = delta_ns | (gates << wid);
+ 	}
+ 
++	mutex_lock(&priv->plat->est->lock);
+ 	/* Adjust for real system time */
+ 	priv->ptp_clock_ops.gettime64(&priv->ptp_clock_ops, &current_time);
+ 	current_time_ns = timespec64_to_ktime(current_time);
+@@ -837,8 +842,10 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ 	priv->plat->est->ctr[0] = do_div(ctr, NSEC_PER_SEC);
+ 	priv->plat->est->ctr[1] = (u32)ctr;
+ 
+-	if (fpe && !priv->dma_cap.fpesel)
++	if (fpe && !priv->dma_cap.fpesel) {
++		mutex_unlock(&priv->plat->est->lock);
+ 		return -EOPNOTSUPP;
++	}
+ 
+ 	/* Actual FPE register configuration will be done after FPE handshake
+ 	 * is success.
+@@ -847,6 +854,7 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ 
+ 	ret = stmmac_est_configure(priv, priv->ioaddr, priv->plat->est,
+ 				   priv->plat->clk_ptp_rate);
++	mutex_unlock(&priv->plat->est->lock);
+ 	if (ret) {
+ 		netdev_err(priv->dev, "failed to configure EST\n");
+ 		goto disable;
+@@ -862,9 +870,13 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
+ 	return 0;
+ 
+ disable:
+-	priv->plat->est->enable = false;
+-	stmmac_est_configure(priv, priv->ioaddr, priv->plat->est,
+-			     priv->plat->clk_ptp_rate);
++	if (priv->plat->est) {
++		mutex_lock(&priv->plat->est->lock);
++		priv->plat->est->enable = false;
++		stmmac_est_configure(priv, priv->ioaddr, priv->plat->est,
++				     priv->plat->clk_ptp_rate);
++		mutex_unlock(&priv->plat->est->lock);
++	}
+ 
+ 	priv->plat->fpe_cfg->enable = false;
+ 	stmmac_fpe_configure(priv, priv->ioaddr,
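The taprio hunks above put every access to the shared EST table behind a new est->lock, including the early -EOPNOTSUPP return and the error/disable path, so a configuration in progress cannot race teardown. A minimal sketch of the locking discipline being introduced (names hypothetical):

    #include <linux/errno.h>
    #include <linux/mutex.h>
    #include <linux/types.h>

    struct est_state {
            struct mutex lock;              /* protects fields below */
            bool enable;
            u32 gcl_size;
    };

    /* Every writer takes the lock, and every early-return path drops
     * it before returning; that is the bug class the hunk above closes.
     */
    static int est_update(struct est_state *est, u32 size, bool ok)
    {
            mutex_lock(&est->lock);
            if (!ok) {
                    mutex_unlock(&est->lock);   /* unlock on error too */
                    return -EOPNOTSUPP;
            }
            est->gcl_size = size;
            est->enable = true;
            mutex_unlock(&est->lock);
            return 0;
    }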
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c
+index 105821b53020b..2a616c6f7cd0e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c
+@@ -34,18 +34,18 @@ static int stmmac_xdp_enable_pool(struct stmmac_priv *priv,
+ 	need_update = netif_running(priv->dev) && stmmac_xdp_is_enabled(priv);
+ 
+ 	if (need_update) {
+-		stmmac_disable_rx_queue(priv, queue);
+-		stmmac_disable_tx_queue(priv, queue);
+ 		napi_disable(&ch->rx_napi);
+ 		napi_disable(&ch->tx_napi);
++		stmmac_disable_rx_queue(priv, queue);
++		stmmac_disable_tx_queue(priv, queue);
+ 	}
+ 
+ 	set_bit(queue, priv->af_xdp_zc_qps);
+ 
+ 	if (need_update) {
+-		napi_enable(&ch->rxtx_napi);
+ 		stmmac_enable_rx_queue(priv, queue);
+ 		stmmac_enable_tx_queue(priv, queue);
++		napi_enable(&ch->rxtx_napi);
+ 
+ 		err = stmmac_xsk_wakeup(priv->dev, queue, XDP_WAKEUP_RX);
+ 		if (err)
+@@ -72,10 +72,10 @@ static int stmmac_xdp_disable_pool(struct stmmac_priv *priv, u16 queue)
+ 	need_update = netif_running(priv->dev) && stmmac_xdp_is_enabled(priv);
+ 
+ 	if (need_update) {
++		napi_disable(&ch->rxtx_napi);
+ 		stmmac_disable_rx_queue(priv, queue);
+ 		stmmac_disable_tx_queue(priv, queue);
+ 		synchronize_rcu();
+-		napi_disable(&ch->rxtx_napi);
+ 	}
+ 
+ 	xsk_pool_dma_unmap(pool, STMMAC_RX_DMA_ATTR);
+@@ -83,10 +83,10 @@ static int stmmac_xdp_disable_pool(struct stmmac_priv *priv, u16 queue)
+ 	clear_bit(queue, priv->af_xdp_zc_qps);
+ 
+ 	if (need_update) {
+-		napi_enable(&ch->rx_napi);
+-		napi_enable(&ch->tx_napi);
+ 		stmmac_enable_rx_queue(priv, queue);
+ 		stmmac_enable_tx_queue(priv, queue);
++		napi_enable(&ch->rx_napi);
++		napi_enable(&ch->tx_napi);
+ 	}
+ 
+ 	return 0;
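The stmmac_xdp hunks above are purely about ordering: on teardown the NAPI pollers stop before the queues are disabled, and on bring-up the queues come back before polling restarts, so a poller never runs against a dead queue. The shape of the fix, reduced to hypothetical stubs:

    /* Stand-ins for the napi_{disable,enable} and queue helpers. */
    static void poller_stop(void)  { }
    static void poller_start(void) { }
    static void queue_off(void)    { }
    static void queue_on(void)     { }

    static void teardown(void)
    {
            poller_stop();          /* first: nothing polls anymore */
            queue_off();            /* then: safe to kill the queue */
    }

    static void bringup(void)
    {
            queue_on();             /* first: queue can take traffic */
            poller_start();         /* then: start polling it */
    }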
+diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c
+index a08a46fef0d22..847372c6d9e8a 100644
+--- a/drivers/net/usb/pegasus.c
++++ b/drivers/net/usb/pegasus.c
+@@ -471,7 +471,7 @@ static int enable_net_traffic(struct net_device *dev, struct usb_device *usb)
+ 		write_mii_word(pegasus, 0, 0x1b, &auxmode);
+ 	}
+ 
+-	return 0;
++	return ret;
+ fail:
+ 	netif_dbg(pegasus, drv, pegasus->net, "%s failed\n", __func__);
+ 	return ret;
+@@ -860,7 +860,7 @@ static int pegasus_open(struct net_device *net)
+ 	if (!pegasus->rx_skb)
+ 		goto exit;
+ 
+-	res = set_registers(pegasus, EthID, 6, net->dev_addr);
++	set_registers(pegasus, EthID, 6, net->dev_addr);
+ 
+ 	usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb,
+ 			  usb_rcvbulkpipe(pegasus->usb, 1),
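Both pegasus hunks above tighten error propagation: enable_net_traffic() now returns the status it actually computed instead of an unconditional 0, and pegasus_open() stops storing a result it never checks. The general rule, as a standalone sketch (names hypothetical):

    /* Propagate the status you computed; a hardcoded 0 masks failures. */
    static int do_io(void)
    {
            return 0;                       /* stub I/O helper */
    }

    static int configure(void)
    {
            int ret = do_io();

            /* ... further setup that must not clobber ret ... */
            return ret;                     /* was: return 0; */
    }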
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+index 40f2109a097f3..1a63cae6567e2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
+@@ -37,6 +37,7 @@ static int iwl_pnvm_handle_section(struct iwl_trans *trans, const u8 *data,
+ 	u32 sha1 = 0;
+ 	u16 mac_type = 0, rf_id = 0;
+ 	u8 *pnvm_data = NULL, *tmp;
++	bool hw_match = false;
+ 	u32 size = 0;
+ 	int ret;
+ 
+@@ -83,6 +84,9 @@ static int iwl_pnvm_handle_section(struct iwl_trans *trans, const u8 *data,
+ 				break;
+ 			}
+ 
++			if (hw_match)
++				break;
++
+ 			mac_type = le16_to_cpup((__le16 *)data);
+ 			rf_id = le16_to_cpup((__le16 *)(data + sizeof(__le16)));
+ 
+@@ -90,15 +94,9 @@ static int iwl_pnvm_handle_section(struct iwl_trans *trans, const u8 *data,
+ 				     "Got IWL_UCODE_TLV_HW_TYPE mac_type 0x%0x rf_id 0x%0x\n",
+ 				     mac_type, rf_id);
+ 
+-			if (mac_type != CSR_HW_REV_TYPE(trans->hw_rev) ||
+-			    rf_id != CSR_HW_RFID_TYPE(trans->hw_rf_id)) {
+-				IWL_DEBUG_FW(trans,
+-					     "HW mismatch, skipping PNVM section, mac_type 0x%0x, rf_id 0x%0x.\n",
+-					     CSR_HW_REV_TYPE(trans->hw_rev), trans->hw_rf_id);
+-				ret = -ENOENT;
+-				goto out;
+-			}
+-
++			if (mac_type == CSR_HW_REV_TYPE(trans->hw_rev) &&
++			    rf_id == CSR_HW_RFID_TYPE(trans->hw_rf_id))
++				hw_match = true;
+ 			break;
+ 		case IWL_UCODE_TLV_SEC_RT: {
+ 			struct iwl_pnvm_section *section = (void *)data;
+@@ -149,6 +147,15 @@ static int iwl_pnvm_handle_section(struct iwl_trans *trans, const u8 *data,
+ 	}
+ 
+ done:
++	if (!hw_match) {
++		IWL_DEBUG_FW(trans,
++			     "HW mismatch, skipping PNVM section (need mac_type 0x%x rf_id 0x%x)\n",
++			     CSR_HW_REV_TYPE(trans->hw_rev),
++			     CSR_HW_RFID_TYPE(trans->hw_rf_id));
++		ret = -ENOENT;
++		goto out;
++	}
++
+ 	if (!size) {
+ 		IWL_DEBUG_FW(trans, "Empty PNVM, skipping.\n");
+ 		ret = -ENOENT;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index d94bd8d732e96..9f11a1d5d0346 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -1103,12 +1103,80 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 		      IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+ 		      iwl_cfg_bz_a0_mr_a0, iwl_ax211_name),
+ 
++/* SoF with JF2 */
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF,
++		      IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
++		      iwlax210_2ax_cfg_so_jf_b0, iwl9560_160_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF,
++		      IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
++		      iwlax210_2ax_cfg_so_jf_b0, iwl9560_name),
++
++/* SoF with JF */
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1,
++		      IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
++		      iwlax210_2ax_cfg_so_jf_b0, iwl9461_160_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV,
++		      IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
++		      iwlax210_2ax_cfg_so_jf_b0, iwl9462_160_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1,
++		      IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
++		      iwlax210_2ax_cfg_so_jf_b0, iwl9461_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV,
++		      IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
++		      iwlax210_2ax_cfg_so_jf_b0, iwl9462_name),
++
+ /* So with GF */
+ 	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ 		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
+ 		      IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY,
+ 		      IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
+-		      iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_name)
++		      iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_name),
++
++/* So with JF2 */
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF,
++		      IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
++		      iwlax210_2ax_cfg_so_jf_b0, iwl9560_160_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_JF2, IWL_CFG_RF_ID_JF,
++		      IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
++		      iwlax210_2ax_cfg_so_jf_b0, iwl9560_name),
++
++/* So with JF */
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1,
++		      IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
++		      iwlax210_2ax_cfg_so_jf_b0, iwl9461_160_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV,
++		      IWL_CFG_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
++		      iwlax210_2ax_cfg_so_jf_b0, iwl9462_160_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1,
++		      IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
++		      iwlax210_2ax_cfg_so_jf_b0, iwl9461_name),
++	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++		      IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
++		      IWL_CFG_RF_TYPE_JF1, IWL_CFG_RF_ID_JF1_DIV,
++		      IWL_CFG_NO_160, IWL_CFG_CORES_BT, IWL_CFG_NO_CDB,
++		      iwlax210_2ax_cfg_so_jf_b0, iwl9462_name)
+ 
+ #endif /* CONFIG_IWLMVM */
+ };
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index c582a9ca397bb..01feeba78426c 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -985,8 +985,9 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
+ 		}
+ 	}
+ 
+-	/* There should be one of more OPP defined */
+-	if (WARN_ON(!count)) {
++	/* There should be one or more OPPs defined */
++	if (!count) {
++		dev_err(dev, "%s: no supported OPPs", __func__);
+ 		ret = -ENOENT;
+ 		goto remove_static_opp;
+ 	}
+diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig
+index 60592fb88e7a0..e878ac1927582 100644
+--- a/drivers/platform/x86/Kconfig
++++ b/drivers/platform/x86/Kconfig
+@@ -507,6 +507,7 @@ config THINKPAD_ACPI
+ 	depends on RFKILL || RFKILL = n
+ 	depends on ACPI_VIDEO || ACPI_VIDEO = n
+ 	depends on BACKLIGHT_CLASS_DEVICE
++	depends on I2C
+ 	select ACPI_PLATFORM_PROFILE
+ 	select HWMON
+ 	select NVRAM
+@@ -701,6 +702,7 @@ config INTEL_HID_EVENT
+ 	tristate "INTEL HID Event"
+ 	depends on ACPI
+ 	depends on INPUT
++	depends on I2C
+ 	select INPUT_SPARSEKMAP
+ 	help
+ 	  This driver provides support for the Intel HID Event hotkey interface.
+@@ -752,6 +754,7 @@ config INTEL_VBTN
+ 	tristate "INTEL VIRTUAL BUTTON"
+ 	depends on ACPI
+ 	depends on INPUT
++	depends on I2C
+ 	select INPUT_SPARSEKMAP
+ 	help
+ 	  This driver provides support for the Intel Virtual Button interface.
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index 0cb927f0f301a..a81dc4b191b77 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -41,6 +41,10 @@ static int wapf = -1;
+ module_param(wapf, uint, 0444);
+ MODULE_PARM_DESC(wapf, "WAPF value");
+ 
++static int tablet_mode_sw = -1;
++module_param(tablet_mode_sw, uint, 0444);
++MODULE_PARM_DESC(tablet_mode_sw, "Tablet mode detect: -1:auto 0:disable 1:kbd-dock 2:lid-flip");
++
+ static struct quirk_entry *quirks;
+ 
+ static bool asus_q500a_i8042_filter(unsigned char data, unsigned char str,
+@@ -458,6 +462,15 @@ static const struct dmi_system_id asus_quirks[] = {
+ 		},
+ 		.driver_data = &quirk_asus_use_lid_flip_devid,
+ 	},
++	{
++		.callback = dmi_matched,
++		.ident = "ASUS TP200s / E205SA",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "E205SA"),
++		},
++		.driver_data = &quirk_asus_use_lid_flip_devid,
++	},
+ 	{},
+ };
+ 
+@@ -477,6 +490,21 @@ static void asus_nb_wmi_quirks(struct asus_wmi_driver *driver)
+ 	else
+ 		wapf = quirks->wapf;
+ 
++	switch (tablet_mode_sw) {
++	case 0:
++		quirks->use_kbd_dock_devid = false;
++		quirks->use_lid_flip_devid = false;
++		break;
++	case 1:
++		quirks->use_kbd_dock_devid = true;
++		quirks->use_lid_flip_devid = false;
++		break;
++	case 2:
++		quirks->use_kbd_dock_devid = false;
++		quirks->use_lid_flip_devid = true;
++		break;
++	}
++
+ 	if (quirks->i8042_filter) {
+ 		ret = i8042_install_filter(quirks->i8042_filter);
+ 		if (ret) {
+diff --git a/drivers/platform/x86/dual_accel_detect.h b/drivers/platform/x86/dual_accel_detect.h
+new file mode 100644
+index 0000000000000..a9eae17cc43dd
+--- /dev/null
++++ b/drivers/platform/x86/dual_accel_detect.h
+@@ -0,0 +1,76 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Helper code to detect 360 degree hinges (yoga) style 2-in-1 devices using 2 accelerometers
++ * to allow the OS to determine the angle between the display and the base of the device.
++ *
++ * On Windows these are read by a special HingeAngleService process which calls undocumented
++ * ACPI methods, to let the firmware know if the 2-in-1 is in tablet- or laptop-mode.
++ * The firmware may use this to disable the kbd and touchpad to avoid spurious input in
++ * tablet-mode as well as to report SW_TABLET_MODE info to the OS.
++ *
++ * Since Linux does not call these undocumented methods, the SW_TABLET_MODE info reported
++ * by various drivers/platform/x86 drivers is incorrect. These drivers use the detection
++ * code in this file to disable SW_TABLET_MODE reporting to avoid reporting broken info
++ * (instead userspace can derive the status itself by directly reading the 2 accels).
++ */
++
++#include <linux/acpi.h>
++#include <linux/i2c.h>
++
++static int dual_accel_i2c_resource_count(struct acpi_resource *ares, void *data)
++{
++	struct acpi_resource_i2c_serialbus *sb;
++	int *count = data;
++
++	if (i2c_acpi_get_i2c_resource(ares, &sb))
++		*count = *count + 1;
++
++	return 1;
++}
++
++static int dual_accel_i2c_client_count(struct acpi_device *adev)
++{
++	int ret, count = 0;
++	LIST_HEAD(r);
++
++	ret = acpi_dev_get_resources(adev, &r, dual_accel_i2c_resource_count, &count);
++	if (ret < 0)
++		return ret;
++
++	acpi_dev_free_resource_list(&r);
++	return count;
++}
++
++static bool dual_accel_detect_bosc0200(void)
++{
++	struct acpi_device *adev;
++	int count;
++
++	adev = acpi_dev_get_first_match_dev("BOSC0200", NULL, -1);
++	if (!adev)
++		return false;
++
++	count = dual_accel_i2c_client_count(adev);
++
++	acpi_dev_put(adev);
++
++	return count == 2;
++}
++
++static bool dual_accel_detect(void)
++{
++	/* Systems which use a pair of accels with KIOX010A / KIOX020A ACPI ids */
++	if (acpi_dev_present("KIOX010A", NULL, -1) &&
++	    acpi_dev_present("KIOX020A", NULL, -1))
++		return true;
++
++	/* Systems which use a single DUAL250E ACPI device to model 2 accels */
++	if (acpi_dev_present("DUAL250E", NULL, -1))
++		return true;
++
++	/* Systems which use a single BOSC0200 ACPI device to model 2 accels */
++	if (dual_accel_detect_bosc0200())
++		return true;
++
++	return false;
++}
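The new dual_accel_detect.h above is self-contained: it returns true when the platform models a 360-degree hinge with two accelerometers through any of the three known ACPI patterns (KIOX010A plus KIOX020A, a single DUAL250E, or a BOSC0200 with two I2C resources). The intel-hid, intel-vbtn and thinkpad_acpi hunks that follow consume it the same way at probe time, roughly as in this sketch (the priv structure is hypothetical):

    #include <linux/types.h>
    #include "dual_accel_detect.h"

    struct example_priv {
            bool dual_accel;                /* cached probe-time answer */
    };

    static int example_probe(struct example_priv *priv)
    {
            priv->dual_accel = dual_accel_detect();
            if (priv->dual_accel)
                    return 0;   /* userspace reads the accels directly */
            /* ... register the SW_TABLET_MODE input device here ... */
            return 0;
    }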
+diff --git a/drivers/platform/x86/gigabyte-wmi.c b/drivers/platform/x86/gigabyte-wmi.c
+index fbb224a82e34c..7f3a03f937f66 100644
+--- a/drivers/platform/x86/gigabyte-wmi.c
++++ b/drivers/platform/x86/gigabyte-wmi.c
+@@ -140,6 +140,7 @@ static u8 gigabyte_wmi_detect_sensor_usability(struct wmi_device *wdev)
+ 	}}
+ 
+ static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = {
++	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M S2H V2"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE V2"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 GAMING X V2"),
+@@ -147,6 +148,7 @@ static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = {
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M DS3H"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("Z390 I AORUS PRO WIFI-CF"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 AORUS ELITE"),
++	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 GAMING X"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 I AORUS PRO WIFI"),
+ 	DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 UD"),
+ 	{ }
+diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
+index 078648a9201b3..7135d720af603 100644
+--- a/drivers/platform/x86/intel-hid.c
++++ b/drivers/platform/x86/intel-hid.c
+@@ -14,6 +14,7 @@
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
+ #include <linux/suspend.h>
++#include "dual_accel_detect.h"
+ 
+ /* When NOT in tablet mode, VGBS returns with the flag 0x40 */
+ #define TABLET_MODE_FLAG BIT(6)
+@@ -121,6 +122,7 @@ struct intel_hid_priv {
+ 	struct input_dev *array;
+ 	struct input_dev *switches;
+ 	bool wakeup_mode;
++	bool dual_accel;
+ };
+ 
+ #define HID_EVENT_FILTER_UUID	"eeec56b3-4442-408f-a792-4edd4d758054"
+@@ -450,22 +452,9 @@ static void notify_handler(acpi_handle handle, u32 event, void *context)
+ 	 * SW_TABLET_MODE report, in these cases we enable support when receiving
+ 	 * the first event instead of during driver setup.
+ 	 *
+-	 * Some 360 degree hinges (yoga) style 2-in-1 devices use 2 accelerometers
+-	 * to allow the OS to determine the angle between the display and the base
+-	 * of the device. On Windows these are read by a special HingeAngleService
+-	 * process which calls an ACPI DSM (Device Specific Method) on the
+-	 * ACPI KIOX010A device node for the sensor in the display, to let the
+-	 * firmware know if the 2-in-1 is in tablet- or laptop-mode so that it can
+-	 * disable the kbd and touchpad to avoid spurious input in tablet-mode.
+-	 *
+-	 * The linux kxcjk1013 driver calls the DSM for this once at probe time
+-	 * to ensure that the builtin kbd and touchpad work. On some devices this
+-	 * causes a "spurious" 0xcd event on the intel-hid ACPI dev. In this case
+-	 * there is not a functional tablet-mode switch, so we should not register
+-	 * the tablet-mode switch device.
++	 * See dual_accel_detect.h for more info on the dual_accel check.
+ 	 */
+-	if (!priv->switches && (event == 0xcc || event == 0xcd) &&
+-	    !acpi_dev_present("KIOX010A", NULL, -1)) {
++	if (!priv->switches && !priv->dual_accel && (event == 0xcc || event == 0xcd)) {
+ 		dev_info(&device->dev, "switch event received, enable switches supports\n");
+ 		err = intel_hid_switches_setup(device);
+ 		if (err)
+@@ -606,6 +595,8 @@ static int intel_hid_probe(struct platform_device *device)
+ 		return -ENOMEM;
+ 	dev_set_drvdata(&device->dev, priv);
+ 
++	priv->dual_accel = dual_accel_detect();
++
+ 	err = intel_hid_input_setup(device);
+ 	if (err) {
+ 		pr_err("Failed to setup Intel HID hotkeys\n");
+diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c
+index 888a764efad1a..3091664310633 100644
+--- a/drivers/platform/x86/intel-vbtn.c
++++ b/drivers/platform/x86/intel-vbtn.c
+@@ -14,6 +14,7 @@
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
+ #include <linux/suspend.h>
++#include "dual_accel_detect.h"
+ 
+ /* Returned when NOT in tablet mode on some HP Stream x360 11 models */
+ #define VGBS_TABLET_MODE_FLAG_ALT	0x10
+@@ -66,6 +67,7 @@ static const struct key_entry intel_vbtn_switchmap[] = {
+ struct intel_vbtn_priv {
+ 	struct input_dev *buttons_dev;
+ 	struct input_dev *switches_dev;
++	bool dual_accel;
+ 	bool has_buttons;
+ 	bool has_switches;
+ 	bool wakeup_mode;
+@@ -160,6 +162,10 @@ static void notify_handler(acpi_handle handle, u32 event, void *context)
+ 		input_dev = priv->buttons_dev;
+ 	} else if ((ke = sparse_keymap_entry_from_scancode(priv->switches_dev, event))) {
+ 		if (!priv->has_switches) {
++			/* See dual_accel_detect.h for more info */
++			if (priv->dual_accel)
++				return;
++
+ 			dev_info(&device->dev, "Registering Intel Virtual Switches input-dev after receiving a switch event\n");
+ 			ret = input_register_device(priv->switches_dev);
+ 			if (ret)
+@@ -248,11 +254,15 @@ static const struct dmi_system_id dmi_switches_allow_list[] = {
+ 	{} /* Array terminator */
+ };
+ 
+-static bool intel_vbtn_has_switches(acpi_handle handle)
++static bool intel_vbtn_has_switches(acpi_handle handle, bool dual_accel)
+ {
+ 	unsigned long long vgbs;
+ 	acpi_status status;
+ 
++	/* See dual_accel_detect.h for more info */
++	if (dual_accel)
++		return false;
++
+ 	if (!dmi_check_system(dmi_switches_allow_list))
+ 		return false;
+ 
+@@ -263,13 +273,14 @@ static bool intel_vbtn_has_switches(acpi_handle handle)
+ static int intel_vbtn_probe(struct platform_device *device)
+ {
+ 	acpi_handle handle = ACPI_HANDLE(&device->dev);
+-	bool has_buttons, has_switches;
++	bool dual_accel, has_buttons, has_switches;
+ 	struct intel_vbtn_priv *priv;
+ 	acpi_status status;
+ 	int err;
+ 
++	dual_accel = dual_accel_detect();
+ 	has_buttons = acpi_has_method(handle, "VBDL");
+-	has_switches = intel_vbtn_has_switches(handle);
++	has_switches = intel_vbtn_has_switches(handle, dual_accel);
+ 
+ 	if (!has_buttons && !has_switches) {
+ 		dev_warn(&device->dev, "failed to read Intel Virtual Button driver\n");
+@@ -281,6 +292,7 @@ static int intel_vbtn_probe(struct platform_device *device)
+ 		return -ENOMEM;
+ 	dev_set_drvdata(&device->dev, priv);
+ 
++	priv->dual_accel = dual_accel;
+ 	priv->has_buttons = has_buttons;
+ 	priv->has_switches = has_switches;
+ 
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index edd71e744d275..f8cfb529d1a2d 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -73,6 +73,7 @@
+ #include <linux/uaccess.h>
+ #include <acpi/battery.h>
+ #include <acpi/video.h>
++#include "dual_accel_detect.h"
+ 
+ /* ThinkPad CMOS commands */
+ #define TP_CMOS_VOLUME_DOWN	0
+@@ -3232,7 +3233,7 @@ static int hotkey_init_tablet_mode(void)
+ 		 * the laptop/tent/tablet mode to the EC. The bmc150 iio driver
+ 		 * does not support this, so skip the hotkey on these models.
+ 		 */
+-		if (has_tablet_mode && !acpi_dev_present("BOSC0200", "1", -1))
++		if (has_tablet_mode && !dual_accel_detect())
+ 			tp_features.hotkey_tablet = TP_HOTKEY_TABLET_USES_GMMS;
+ 		type = "GMMS";
+ 	} else if (acpi_evalf(hkey_handle, &res, "MHKG", "qd")) {
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index ae9bfc658203e..c0d31119d6d7b 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -808,12 +808,15 @@ store_state_field(struct device *dev, struct device_attribute *attr,
+ 	ret = scsi_device_set_state(sdev, state);
+ 	/*
+ 	 * If the device state changes to SDEV_RUNNING, we need to
+-	 * rescan the device to revalidate it, and run the queue to
+-	 * avoid I/O hang.
++	 * run the queue to avoid I/O hang, and rescan the device
++	 * to revalidate it. Running the queue first is necessary
++	 * because another thread may be waiting inside
++	 * blk_mq_freeze_queue_wait() and because that call may be
++	 * waiting for pending I/O to finish.
+ 	 */
+ 	if (ret == 0 && state == SDEV_RUNNING) {
+-		scsi_rescan_device(dev);
+ 		blk_mq_run_hw_queues(sdev->request_queue, true);
++		scsi_rescan_device(dev);
+ 	}
+ 	mutex_unlock(&sdev->state_mutex);
+ 
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index 0e0cd9e9e589e..3639bb6dc372e 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -246,6 +246,8 @@ int vt_waitactive(int n)
+  *
+  * XXX It should at least call into the driver, fbdev's definitely need to
+  * restore their engine state. --BenH
++ *
++ * Called with the console lock held.
+  */
+ static int vt_kdsetmode(struct vc_data *vc, unsigned long mode)
+ {
+@@ -262,7 +264,6 @@ static int vt_kdsetmode(struct vc_data *vc, unsigned long mode)
+ 		return -EINVAL;
+ 	}
+ 
+-	/* FIXME: this needs the console lock extending */
+ 	if (vc->vc_mode == mode)
+ 		return 0;
+ 
+@@ -271,12 +272,10 @@ static int vt_kdsetmode(struct vc_data *vc, unsigned long mode)
+ 		return 0;
+ 
+ 	/* explicitly blank/unblank the screen if switching modes */
+-	console_lock();
+ 	if (mode == KD_TEXT)
+ 		do_unblank_screen(1);
+ 	else
+ 		do_blank_screen(1);
+-	console_unlock();
+ 
+ 	return 0;
+ }
+@@ -378,7 +377,10 @@ static int vt_k_ioctl(struct tty_struct *tty, unsigned int cmd,
+ 		if (!perm)
+ 			return -EPERM;
+ 
+-		return vt_kdsetmode(vc, arg);
++		console_lock();
++		ret = vt_kdsetmode(vc, arg);
++		console_unlock();
++		return ret;
+ 
+ 	case KDGETMODE:
+ 		return put_user(vc->vc_mode, (int __user *)arg);
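The vt_ioctl change above widens the console lock from an inner blank/unblank window to the whole vt_kdsetmode() call by moving the locking into the single ioctl caller and documenting the rule in the function comment. The refactor reduced to a sketch (names hypothetical):

    #include <linux/console.h>
    #include <linux/errno.h>

    /* Caller must hold the console lock for the entire call. */
    static int set_mode(unsigned long mode)
    {
            if (mode > 3)
                    return -EINVAL;
            /* ... blank/unblank and state transition, lock held ... */
            return 0;
    }

    static long ioctl_handler(unsigned long mode)
    {
            long ret;

            console_lock();
            ret = set_mode(mode);
            console_unlock();
            return ret;
    }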
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 8e3e0c30f420e..3c4d5d29149c1 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -940,19 +940,19 @@ static struct dwc3_trb *dwc3_ep_prev_trb(struct dwc3_ep *dep, u8 index)
+ 
+ static u32 dwc3_calc_trbs_left(struct dwc3_ep *dep)
+ {
+-	struct dwc3_trb		*tmp;
+ 	u8			trbs_left;
+ 
+ 	/*
+-	 * If enqueue & dequeue are equal than it is either full or empty.
+-	 *
+-	 * One way to know for sure is if the TRB right before us has HWO bit
+-	 * set or not. If it has, then we're definitely full and can't fit any
+-	 * more transfers in our ring.
++	 * If the enqueue & dequeue are equal then the TRB ring is either full
++	 * or empty. It's considered full when DWC3_TRB_NUM-1 TRBs are pending
++	 * to be processed by the driver.
+ 	 */
+ 	if (dep->trb_enqueue == dep->trb_dequeue) {
+-		tmp = dwc3_ep_prev_trb(dep, dep->trb_enqueue);
+-		if (tmp->ctrl & DWC3_TRB_CTRL_HWO)
++		/*
++		 * If any request remains in the started_list at this
++		 * point, there is no TRB available.
++		 */
++		if (!list_empty(&dep->started_list))
+ 			return 0;
+ 
+ 		return DWC3_TRB_NUM - 1;
+@@ -2243,10 +2243,8 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 
+ 		ret = wait_for_completion_timeout(&dwc->ep0_in_setup,
+ 				msecs_to_jiffies(DWC3_PULL_UP_TIMEOUT));
+-		if (ret == 0) {
+-			dev_err(dwc->dev, "timed out waiting for SETUP phase\n");
+-			return -ETIMEDOUT;
+-		}
++		if (ret == 0)
++			dev_warn(dwc->dev, "timed out waiting for SETUP phase\n");
+ 	}
+ 
+ 	/*
+@@ -2458,6 +2456,7 @@ static int __dwc3_gadget_start(struct dwc3 *dwc)
+ 	/* begin to receive SETUP packets */
+ 	dwc->ep0state = EP0_SETUP_PHASE;
+ 	dwc->link_state = DWC3_LINK_STATE_SS_DIS;
++	dwc->delayed_status = false;
+ 	dwc3_ep0_out_start(dwc);
+ 
+ 	dwc3_gadget_enable_irq(dwc);
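The dwc3 hunk above replaces the old HWO-bit probe with a simpler invariant: when enqueue equals dequeue the ring is ambiguous (completely full or completely empty), and a non-empty started_list is what distinguishes the two. A standalone sketch of that disambiguation for a power-of-two ring (all names illustrative):

    #include <stdbool.h>

    struct ring {
            unsigned int head;      /* producer (enqueue) index */
            unsigned int tail;      /* consumer (dequeue) index */
            bool outstanding;       /* stands in for started_list */
    };

    /* Free slots in a ring of n entries, n a power of two; one slot
     * stays reserved so full and empty remain distinguishable when
     * the indexes differ.
     */
    static unsigned int slots_left(const struct ring *r, unsigned int n)
    {
            if (r->head == r->tail)             /* ambiguous case */
                    return r->outstanding ? 0 : n - 1;
            return (r->tail - r->head - 1) & (n - 1);
    }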
+diff --git a/drivers/usb/gadget/function/u_audio.c b/drivers/usb/gadget/function/u_audio.c
+index 5fbceee897a3f..3b613ca06ae55 100644
+--- a/drivers/usb/gadget/function/u_audio.c
++++ b/drivers/usb/gadget/function/u_audio.c
+@@ -312,8 +312,6 @@ static inline void free_ep(struct uac_rtd_params *prm, struct usb_ep *ep)
+ 	if (!prm->ep_enabled)
+ 		return;
+ 
+-	prm->ep_enabled = false;
+-
+ 	audio_dev = uac->audio_dev;
+ 	params = &audio_dev->params;
+ 
+@@ -331,11 +329,12 @@ static inline void free_ep(struct uac_rtd_params *prm, struct usb_ep *ep)
+ 		}
+ 	}
+ 
++	prm->ep_enabled = false;
++
+ 	if (usb_ep_disable(ep))
+ 		dev_err(uac->card->dev, "%s:%d Error!\n", __func__, __LINE__);
+ }
+ 
+-
+ int u_audio_start_capture(struct g_audio *audio_dev)
+ {
+ 	struct snd_uac_chip *uac = audio_dev->uac;
+diff --git a/drivers/usb/host/xhci-pci-renesas.c b/drivers/usb/host/xhci-pci-renesas.c
+index f97ac9f52bf4d..96692dbbd4dad 100644
+--- a/drivers/usb/host/xhci-pci-renesas.c
++++ b/drivers/usb/host/xhci-pci-renesas.c
+@@ -207,7 +207,8 @@ static int renesas_check_rom_state(struct pci_dev *pdev)
+ 			return 0;
+ 
+ 		case RENESAS_ROM_STATUS_NO_RESULT: /* No result yet */
+-			return 0;
++			dev_dbg(&pdev->dev, "Unknown ROM status ...\n");
++			return -ENOENT;
+ 
+ 		case RENESAS_ROM_STATUS_ERROR: /* Error State */
+ 		default: /* All other states are marked as "Reserved states" */
+@@ -224,14 +225,6 @@ static int renesas_fw_check_running(struct pci_dev *pdev)
+ 	u8 fw_state;
+ 	int err;
+ 
+-	/* Check if device has ROM and loaded, if so skip everything */
+-	err = renesas_check_rom(pdev);
+-	if (err) { /* we have rom */
+-		err = renesas_check_rom_state(pdev);
+-		if (!err)
+-			return err;
+-	}
+-
+ 	/*
+ 	 * Test if the device is actually needing the firmware. As most
+ 	 * BIOSes will initialize the device for us. If the device is
+@@ -591,21 +584,39 @@ int renesas_xhci_check_request_fw(struct pci_dev *pdev,
+ 			(struct xhci_driver_data *)id->driver_data;
+ 	const char *fw_name = driver_data->firmware;
+ 	const struct firmware *fw;
++	bool has_rom;
+ 	int err;
+ 
++	/* Check if device has ROM and loaded, if so skip everything */
++	has_rom = renesas_check_rom(pdev);
++	if (has_rom) {
++		err = renesas_check_rom_state(pdev);
++		if (!err)
++			return 0;
++		else if (err != -ENOENT)
++			has_rom = false;
++	}
++
+ 	err = renesas_fw_check_running(pdev);
+ 	/* Continue ahead, if the firmware is already running. */
+ 	if (err == 0)
+ 		return 0;
+ 
++	/* no firmware interface available */
+ 	if (err != 1)
+-		return err;
++		return has_rom ? 0 : err;
+ 
+ 	pci_dev_get(pdev);
+-	err = request_firmware(&fw, fw_name, &pdev->dev);
++	err = firmware_request_nowarn(&fw, fw_name, &pdev->dev);
+ 	pci_dev_put(pdev);
+ 	if (err) {
+-		dev_err(&pdev->dev, "request_firmware failed: %d\n", err);
++		if (has_rom) {
++			dev_info(&pdev->dev, "failed to load firmware %s, fallback to ROM\n",
++				 fw_name);
++			return 0;
++		}
++		dev_err(&pdev->dev, "failed to load firmware %s: %d\n",
++			fw_name, err);
+ 		return err;
+ 	}
+ 
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index 8a521b5ea769e..2db917eab7995 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -851,7 +851,6 @@ static struct usb_serial_driver ch341_device = {
+ 		.owner	= THIS_MODULE,
+ 		.name	= "ch341-uart",
+ 	},
+-	.bulk_in_size      = 512,
+ 	.id_table          = id_table,
+ 	.num_ports         = 1,
+ 	.open              = ch341_open,
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 039450069ca45..29c765cc84957 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -2074,6 +2074,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(4) | RSVD(5) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0105, 0xff),			/* Fibocom NL678 series */
+ 	  .driver_info = RSVD(6) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) },	/* Fibocom FG150 Diag */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) },		/* Fibocom FG150 AT */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) },			/* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) },			/* LongSung M5710 */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) },			/* GosunCn GM500 RNDIS */
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 1b886d80ba1cc..b3853b09d484a 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -341,6 +341,7 @@ struct tcpm_port {
+ 	bool vbus_source;
+ 	bool vbus_charge;
+ 
++	/* Set to true when Discover_Identity Command is expected to be sent in Ready states. */
+ 	bool send_discover;
+ 	bool op_vsafe5v;
+ 
+@@ -370,6 +371,7 @@ struct tcpm_port {
+ 	struct hrtimer send_discover_timer;
+ 	struct kthread_work send_discover_work;
+ 	bool state_machine_running;
++	/* Set to true when the VDM state machine has follow-up actions pending. */
+ 	bool vdm_sm_running;
+ 
+ 	struct completion tx_complete;
+@@ -1403,6 +1405,7 @@ static void tcpm_queue_vdm(struct tcpm_port *port, const u32 header,
+ 	/* Set ready, vdm state machine will actually send */
+ 	port->vdm_retries = 0;
+ 	port->vdm_state = VDM_STATE_READY;
++	port->vdm_sm_running = true;
+ 
+ 	mod_vdm_delayed_work(port, 0);
+ }
+@@ -1645,7 +1648,6 @@ static int tcpm_pd_svdm(struct tcpm_port *port, struct typec_altmode *adev,
+ 				rlen = 1;
+ 			} else {
+ 				tcpm_register_partner_altmodes(port);
+-				port->vdm_sm_running = false;
+ 			}
+ 			break;
+ 		case CMD_ENTER_MODE:
+@@ -1693,14 +1695,12 @@ static int tcpm_pd_svdm(struct tcpm_port *port, struct typec_altmode *adev,
+ 				      (VDO_SVDM_VERS(svdm_version));
+ 			break;
+ 		}
+-		port->vdm_sm_running = false;
+ 		break;
+ 	default:
+ 		response[0] = p[0] | VDO_CMDT(CMDT_RSP_NAK);
+ 		rlen = 1;
+ 		response[0] = (response[0] & ~VDO_SVDM_VERS_MASK) |
+ 			      (VDO_SVDM_VERS(svdm_version));
+-		port->vdm_sm_running = false;
+ 		break;
+ 	}
+ 
+@@ -1741,6 +1741,20 @@ static void tcpm_handle_vdm_request(struct tcpm_port *port,
+ 	}
+ 
+ 	if (PD_VDO_SVDM(p[0]) && (adev || tcpm_vdm_ams(port) || port->nr_snk_vdo)) {
++		/*
++		 * Here a SVDM is received (INIT or RSP or unknown). Set the vdm_sm_running in
++		 * advance because we are dropping the lock but may send VDMs soon.
++		 * For the cases of INIT received:
++		 *  - If no response to send, it will be cleared later in this function.
++		 *  - If there are responses to send, it will be cleared in the state machine.
++		 * For the cases of RSP received:
++		 *  - If no further INIT to send, it will be cleared later in this function.
++		 *  - Otherwise, it will be cleared in the state machine if timeout or it will go
++		 *    back here until no further INIT to send.
++		 * For the cases of unknown type received:
++		 *  - We will send NAK and the flag will be cleared in the state machine.
++		 */
++		port->vdm_sm_running = true;
+ 		rlen = tcpm_pd_svdm(port, adev, p, cnt, response, &adev_action);
+ 	} else {
+ 		if (port->negotiated_rev >= PD_REV30)
+@@ -1809,6 +1823,8 @@ static void tcpm_handle_vdm_request(struct tcpm_port *port,
+ 
+ 	if (rlen > 0)
+ 		tcpm_queue_vdm(port, response[0], &response[1], rlen - 1);
++	else
++		port->vdm_sm_running = false;
+ }
+ 
+ static void tcpm_send_vdm(struct tcpm_port *port, u32 vid, int cmd,
+@@ -1874,8 +1890,10 @@ static void vdm_run_state_machine(struct tcpm_port *port)
+ 		 * if there's traffic or we're not in PDO ready state don't send
+ 		 * a VDM.
+ 		 */
+-		if (port->state != SRC_READY && port->state != SNK_READY)
++		if (port->state != SRC_READY && port->state != SNK_READY) {
++			port->vdm_sm_running = false;
+ 			break;
++		}
+ 
+ 		/* TODO: AMS operation for Unstructured VDM */
+ 		if (PD_VDO_SVDM(vdo_hdr) && PD_VDO_CMDT(vdo_hdr) == CMDT_INIT) {
+@@ -2528,10 +2546,6 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 								       TYPEC_PWR_MODE_PD,
+ 								       port->pps_data.active,
+ 								       port->supply_voltage);
+-				/* Set VDM running flag ASAP */
+-				if (port->data_role == TYPEC_HOST &&
+-				    port->send_discover)
+-					port->vdm_sm_running = true;
+ 				tcpm_set_state(port, SNK_READY, 0);
+ 			} else {
+ 				/*
+@@ -2569,14 +2583,10 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 		switch (port->state) {
+ 		case SNK_NEGOTIATE_CAPABILITIES:
+ 			/* USB PD specification, Figure 8-43 */
+-			if (port->explicit_contract) {
++			if (port->explicit_contract)
+ 				next_state = SNK_READY;
+-				if (port->data_role == TYPEC_HOST &&
+-				    port->send_discover)
+-					port->vdm_sm_running = true;
+-			} else {
++			else
+ 				next_state = SNK_WAIT_CAPABILITIES;
+-			}
+ 
+ 			/* Threshold was relaxed before sending Request. Restore it back. */
+ 			tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
+@@ -2591,10 +2601,6 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 			port->pps_status = (type == PD_CTRL_WAIT ?
+ 					    -EAGAIN : -EOPNOTSUPP);
+ 
+-			if (port->data_role == TYPEC_HOST &&
+-			    port->send_discover)
+-				port->vdm_sm_running = true;
+-
+ 			/* Threshold was relaxed before sending Request. Restore it back. */
+ 			tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_PD,
+ 							       port->pps_data.active,
+@@ -2670,10 +2676,6 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 			}
+ 			break;
+ 		case DR_SWAP_SEND:
+-			if (port->data_role == TYPEC_DEVICE &&
+-			    port->send_discover)
+-				port->vdm_sm_running = true;
+-
+ 			tcpm_set_state(port, DR_SWAP_CHANGE_DR, 0);
+ 			break;
+ 		case PR_SWAP_SEND:
+@@ -2711,7 +2713,7 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 					   PD_MSG_CTRL_NOT_SUPP,
+ 					   NONE_AMS);
+ 		} else {
+-			if (port->vdm_sm_running) {
++			if (port->send_discover) {
+ 				tcpm_queue_message(port, PD_MSG_CTRL_WAIT);
+ 				break;
+ 			}
+@@ -2727,7 +2729,7 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 					   PD_MSG_CTRL_NOT_SUPP,
+ 					   NONE_AMS);
+ 		} else {
+-			if (port->vdm_sm_running) {
++			if (port->send_discover) {
+ 				tcpm_queue_message(port, PD_MSG_CTRL_WAIT);
+ 				break;
+ 			}
+@@ -2736,7 +2738,7 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 		}
+ 		break;
+ 	case PD_CTRL_VCONN_SWAP:
+-		if (port->vdm_sm_running) {
++		if (port->send_discover) {
+ 			tcpm_queue_message(port, PD_MSG_CTRL_WAIT);
+ 			break;
+ 		}
+@@ -4470,18 +4472,20 @@ static void run_state_machine(struct tcpm_port *port)
+ 	/* DR_Swap states */
+ 	case DR_SWAP_SEND:
+ 		tcpm_pd_send_control(port, PD_CTRL_DR_SWAP);
++		if (port->data_role == TYPEC_DEVICE || port->negotiated_rev > PD_REV20)
++			port->send_discover = true;
+ 		tcpm_set_state_cond(port, DR_SWAP_SEND_TIMEOUT,
+ 				    PD_T_SENDER_RESPONSE);
+ 		break;
+ 	case DR_SWAP_ACCEPT:
+ 		tcpm_pd_send_control(port, PD_CTRL_ACCEPT);
+-		/* Set VDM state machine running flag ASAP */
+-		if (port->data_role == TYPEC_DEVICE && port->send_discover)
+-			port->vdm_sm_running = true;
++		if (port->data_role == TYPEC_DEVICE || port->negotiated_rev > PD_REV20)
++			port->send_discover = true;
+ 		tcpm_set_state_cond(port, DR_SWAP_CHANGE_DR, 0);
+ 		break;
+ 	case DR_SWAP_SEND_TIMEOUT:
+ 		tcpm_swap_complete(port, -ETIMEDOUT);
++		port->send_discover = false;
+ 		tcpm_ams_finish(port);
+ 		tcpm_set_state(port, ready_state(port), 0);
+ 		break;
+@@ -4493,7 +4497,6 @@ static void run_state_machine(struct tcpm_port *port)
+ 		} else {
+ 			tcpm_set_roles(port, true, port->pwr_role,
+ 				       TYPEC_HOST);
+-			port->send_discover = true;
+ 		}
+ 		tcpm_ams_finish(port);
+ 		tcpm_set_state(port, ready_state(port), 0);
+@@ -4633,8 +4636,6 @@ static void run_state_machine(struct tcpm_port *port)
+ 		break;
+ 	case VCONN_SWAP_SEND_TIMEOUT:
+ 		tcpm_swap_complete(port, -ETIMEDOUT);
+-		if (port->data_role == TYPEC_HOST && port->send_discover)
+-			port->vdm_sm_running = true;
+ 		tcpm_set_state(port, ready_state(port), 0);
+ 		break;
+ 	case VCONN_SWAP_START:
+@@ -4650,14 +4651,10 @@ static void run_state_machine(struct tcpm_port *port)
+ 	case VCONN_SWAP_TURN_ON_VCONN:
+ 		tcpm_set_vconn(port, true);
+ 		tcpm_pd_send_control(port, PD_CTRL_PS_RDY);
+-		if (port->data_role == TYPEC_HOST && port->send_discover)
+-			port->vdm_sm_running = true;
+ 		tcpm_set_state(port, ready_state(port), 0);
+ 		break;
+ 	case VCONN_SWAP_TURN_OFF_VCONN:
+ 		tcpm_set_vconn(port, false);
+-		if (port->data_role == TYPEC_HOST && port->send_discover)
+-			port->vdm_sm_running = true;
+ 		tcpm_set_state(port, ready_state(port), 0);
+ 		break;
+ 
+@@ -4665,8 +4662,6 @@ static void run_state_machine(struct tcpm_port *port)
+ 	case PR_SWAP_CANCEL:
+ 	case VCONN_SWAP_CANCEL:
+ 		tcpm_swap_complete(port, port->swap_status);
+-		if (port->data_role == TYPEC_HOST && port->send_discover)
+-			port->vdm_sm_running = true;
+ 		if (port->pwr_role == TYPEC_SOURCE)
+ 			tcpm_set_state(port, SRC_READY, 0);
+ 		else
+@@ -5016,9 +5011,6 @@ static void _tcpm_pd_vbus_on(struct tcpm_port *port)
+ 	switch (port->state) {
+ 	case SNK_TRANSITION_SINK_VBUS:
+ 		port->explicit_contract = true;
+-		/* Set the VDM flag ASAP */
+-		if (port->data_role == TYPEC_HOST && port->send_discover)
+-			port->vdm_sm_running = true;
+ 		tcpm_set_state(port, SNK_READY, 0);
+ 		break;
+ 	case SNK_DISCOVERY:
+@@ -5412,15 +5404,18 @@ static void tcpm_send_discover_work(struct kthread_work *work)
+ 	if (!port->send_discover)
+ 		goto unlock;
+ 
++	if (port->data_role == TYPEC_DEVICE && port->negotiated_rev < PD_REV30) {
++		port->send_discover = false;
++		goto unlock;
++	}
++
+ 	/* Retry if the port is not idle */
+ 	if ((port->state != SRC_READY && port->state != SNK_READY) || port->vdm_sm_running) {
+ 		mod_send_discover_delayed_work(port, SEND_DISCOVER_RETRY_MS);
+ 		goto unlock;
+ 	}
+ 
+-	/* Only send the Message if the port is host for PD rev2.0 */
+-	if (port->data_role == TYPEC_HOST || port->negotiated_rev > PD_REV20)
+-		tcpm_send_vdm(port, USB_SID_PD, CMD_DISCOVER_IDENT, NULL, 0);
++	tcpm_send_vdm(port, USB_SID_PD, CMD_DISCOVER_IDENT, NULL, 0);
+ 
+ unlock:
+ 	mutex_unlock(&port->lock);
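The tcpm rework above changes who owns the two flags: send_discover now means "a Discover Identity still needs to be sent" and is set at the swap sites, while vdm_sm_running is set by whoever queues a VDM, before the port lock is dropped, and cleared only by the state machine once it goes idle. A reduced sketch of that hand-off (names hypothetical):

    #include <linux/types.h>

    struct example_port {
            bool sm_running;        /* stands in for vdm_sm_running */
            bool need_discover;     /* stands in for send_discover */
    };

    /* The queuer marks the machine busy before releasing the lock, so
     * a concurrent "busy?" check cannot see a queued-but-unmarked
     * message; only the machine itself clears the flag.
     */
    static void queue_msg(struct example_port *p)
    {
            p->sm_running = true;   /* set while still holding the lock */
            /* ... schedule the state machine, then drop the lock ... */
    }

    static void machine_idle(struct example_port *p)
    {
            p->sm_running = false;  /* only the machine clears it */
    }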
+diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
+index 4af8fa259d65f..14e2043d76852 100644
+--- a/drivers/vhost/vringh.c
++++ b/drivers/vhost/vringh.c
+@@ -359,7 +359,7 @@ __vringh_iov(struct vringh *vrh, u16 i,
+ 			iov = wiov;
+ 		else {
+ 			iov = riov;
+-			if (unlikely(wiov && wiov->i)) {
++			if (unlikely(wiov && wiov->used)) {
+ 				vringh_bad("Readable desc %p after writable",
+ 					   &descs[i]);
+ 				err = -EINVAL;
+diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
+index 222d630c41fc9..b35bb2d57f62c 100644
+--- a/drivers/virtio/virtio_pci_common.c
++++ b/drivers/virtio/virtio_pci_common.c
+@@ -576,6 +576,13 @@ static void virtio_pci_remove(struct pci_dev *pci_dev)
+ 	struct virtio_pci_device *vp_dev = pci_get_drvdata(pci_dev);
+ 	struct device *dev = get_device(&vp_dev->vdev.dev);
+ 
++	/*
++	 * Device is marked broken on surprise removal so that virtio upper
++	 * layers can abort any ongoing operation.
++	 */
++	if (!pci_device_is_present(pci_dev))
++		virtio_break_device(&vp_dev->vdev);
++
+ 	pci_disable_sriov(pci_dev);
+ 
+ 	unregister_virtio_device(&vp_dev->vdev);
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 6b7aa26c53844..6c730d6d50f71 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -2268,7 +2268,7 @@ bool virtqueue_is_broken(struct virtqueue *_vq)
+ {
+ 	struct vring_virtqueue *vq = to_vvq(_vq);
+ 
+-	return vq->broken;
++	return READ_ONCE(vq->broken);
+ }
+ EXPORT_SYMBOL_GPL(virtqueue_is_broken);
+ 
+@@ -2283,7 +2283,9 @@ void virtio_break_device(struct virtio_device *dev)
+ 	spin_lock(&dev->vqs_list_lock);
+ 	list_for_each_entry(_vq, &dev->vqs, list) {
+ 		struct vring_virtqueue *vq = to_vvq(_vq);
+-		vq->broken = true;
++
++		/* Pairs with READ_ONCE() in virtqueue_is_broken(). */
++		WRITE_ONCE(vq->broken, true);
+ 	}
+ 	spin_unlock(&dev->vqs_list_lock);
+ }
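The virtio_ring hunks above pair the lockless reader with its writer through marked accesses: virtqueue_is_broken() runs without vqs_list_lock, so both sides use READ_ONCE()/WRITE_ONCE() to rule out compiler tearing and caching. The pattern in isolation (the flag is hypothetical):

    #include <linux/compiler.h>
    #include <linux/types.h>

    static bool broken;     /* written and read from different contexts */

    static void mark_broken(void)
    {
            WRITE_ONCE(broken, true);       /* pairs with READ_ONCE() */
    }

    static bool is_broken(void)
    {
            return READ_ONCE(broken);       /* safe without the writer's lock */
    }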
+diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
+index e28acf482e0ce..e9b9dd03f44a7 100644
+--- a/drivers/virtio/virtio_vdpa.c
++++ b/drivers/virtio/virtio_vdpa.c
+@@ -149,6 +149,9 @@ virtio_vdpa_setup_vq(struct virtio_device *vdev, unsigned int index,
+ 	if (!name)
+ 		return NULL;
+ 
++	if (index >= vdpa->nvqs)
++		return ERR_PTR(-ENOENT);
++
+ 	/* Queue shouldn't already be set up. */
+ 	if (ops->get_vq_ready(vdpa, index))
+ 		return ERR_PTR(-ENOENT);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 5d188d5af6fa7..6328deb27d126 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -603,7 +603,7 @@ again:
+ 	 * inode has not been flagged as nocompress.  This flag can
+ 	 * change at any time if we discover bad compression ratios.
+ 	 */
+-	if (nr_pages > 1 && inode_need_compress(BTRFS_I(inode), start, end)) {
++	if (inode_need_compress(BTRFS_I(inode), start, end)) {
+ 		WARN_ON(pages);
+ 		pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
+ 		if (!pages) {
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 9f723b744863f..7d5875824be7a 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -2137,7 +2137,7 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 
+ 	if (IS_ERR(device)) {
+ 		if (PTR_ERR(device) == -ENOENT &&
+-		    strcmp(device_path, "missing") == 0)
++		    device_path && strcmp(device_path, "missing") == 0)
+ 			ret = BTRFS_ERROR_DEV_MISSING_NOT_FOUND;
+ 		else
+ 			ret = PTR_ERR(device);
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index c79b8dff25d7d..805c656a2e72a 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1753,7 +1753,11 @@ int __ceph_mark_dirty_caps(struct ceph_inode_info *ci, int mask,
+ 
+ struct ceph_cap_flush *ceph_alloc_cap_flush(void)
+ {
+-	return kmem_cache_alloc(ceph_cap_flush_cachep, GFP_KERNEL);
++	struct ceph_cap_flush *cf;
++
++	cf = kmem_cache_alloc(ceph_cap_flush_cachep, GFP_KERNEL);
++	cf->is_capsnap = false;
++	return cf;
+ }
+ 
+ void ceph_free_cap_flush(struct ceph_cap_flush *cf)
+@@ -1788,7 +1792,7 @@ static bool __detach_cap_flush_from_mdsc(struct ceph_mds_client *mdsc,
+ 		prev->wake = true;
+ 		wake = false;
+ 	}
+-	list_del(&cf->g_list);
++	list_del_init(&cf->g_list);
+ 	return wake;
+ }
+ 
+@@ -1803,7 +1807,7 @@ static bool __detach_cap_flush_from_ci(struct ceph_inode_info *ci,
+ 		prev->wake = true;
+ 		wake = false;
+ 	}
+-	list_del(&cf->i_list);
++	list_del_init(&cf->i_list);
+ 	return wake;
+ }
+ 
+@@ -2423,7 +2427,7 @@ static void __kick_flushing_caps(struct ceph_mds_client *mdsc,
+ 	ci->i_ceph_flags &= ~CEPH_I_KICK_FLUSH;
+ 
+ 	list_for_each_entry_reverse(cf, &ci->i_cap_flush_list, i_list) {
+-		if (!cf->caps) {
++		if (cf->is_capsnap) {
+ 			last_snap_flush = cf->tid;
+ 			break;
+ 		}
+@@ -2442,7 +2446,7 @@ static void __kick_flushing_caps(struct ceph_mds_client *mdsc,
+ 
+ 		first_tid = cf->tid + 1;
+ 
+-		if (cf->caps) {
++		if (!cf->is_capsnap) {
+ 			struct cap_msg_args arg;
+ 
+ 			dout("kick_flushing_caps %p cap %p tid %llu %s\n",
+@@ -3589,7 +3593,7 @@ static void handle_cap_flush_ack(struct inode *inode, u64 flush_tid,
+ 			cleaned = cf->caps;
+ 
+ 		/* Is this a capsnap? */
+-		if (cf->caps == 0)
++		if (cf->is_capsnap)
+ 			continue;
+ 
+ 		if (cf->tid <= flush_tid) {
+@@ -3662,8 +3666,9 @@ out:
+ 	while (!list_empty(&to_remove)) {
+ 		cf = list_first_entry(&to_remove,
+ 				      struct ceph_cap_flush, i_list);
+-		list_del(&cf->i_list);
+-		ceph_free_cap_flush(cf);
++		list_del_init(&cf->i_list);
++		if (!cf->is_capsnap)
++			ceph_free_cap_flush(cf);
+ 	}
+ 
+ 	if (wake_ci)
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 2c5701bfe4e8c..9e24fcea152b3 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -1621,7 +1621,7 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 		spin_lock(&mdsc->cap_dirty_lock);
+ 
+ 		list_for_each_entry(cf, &to_remove, i_list)
+-			list_del(&cf->g_list);
++			list_del_init(&cf->g_list);
+ 
+ 		if (!list_empty(&ci->i_dirty_item)) {
+ 			pr_warn_ratelimited(
+@@ -1673,8 +1673,9 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 		struct ceph_cap_flush *cf;
+ 		cf = list_first_entry(&to_remove,
+ 				      struct ceph_cap_flush, i_list);
+-		list_del(&cf->i_list);
+-		ceph_free_cap_flush(cf);
++		list_del_init(&cf->i_list);
++		if (!cf->is_capsnap)
++			ceph_free_cap_flush(cf);
+ 	}
+ 
+ 	wake_up_all(&ci->i_cap_wq);
+diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
+index ae20504326b88..1ecd6a5391b46 100644
+--- a/fs/ceph/snap.c
++++ b/fs/ceph/snap.c
+@@ -487,6 +487,9 @@ void ceph_queue_cap_snap(struct ceph_inode_info *ci)
+ 		pr_err("ENOMEM allocating ceph_cap_snap on %p\n", inode);
+ 		return;
+ 	}
++	capsnap->cap_flush.is_capsnap = true;
++	INIT_LIST_HEAD(&capsnap->cap_flush.i_list);
++	INIT_LIST_HEAD(&capsnap->cap_flush.g_list);
+ 
+ 	spin_lock(&ci->i_ceph_lock);
+ 	used = __ceph_caps_used(ci);
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index 3b5207c827673..3b6a61116036a 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -182,8 +182,9 @@ struct ceph_cap {
+ 
+ struct ceph_cap_flush {
+ 	u64 tid;
+-	int caps; /* 0 means capsnap */
++	int caps;
+ 	bool wake; /* wake up flush waiters when finish ? */
++	bool is_capsnap; /* true means capsnap */
+ 	struct list_head g_list; // global
+ 	struct list_head i_list; // per inode
+ };
+diff --git a/fs/crypto/hooks.c b/fs/crypto/hooks.c
+index a73b0376e6f37..af74599ae1cf0 100644
+--- a/fs/crypto/hooks.c
++++ b/fs/crypto/hooks.c
+@@ -384,3 +384,47 @@ err_kfree:
+ 	return ERR_PTR(err);
+ }
+ EXPORT_SYMBOL_GPL(fscrypt_get_symlink);
++
++/**
++ * fscrypt_symlink_getattr() - set the correct st_size for encrypted symlinks
++ * @path: the path for the encrypted symlink being queried
++ * @stat: the struct being filled with the symlink's attributes
++ *
++ * Override st_size of encrypted symlinks to be the length of the decrypted
++ * symlink target (or the no-key encoded symlink target, if the key is
++ * unavailable) rather than the length of the encrypted symlink target.  This is
++ * necessary for st_size to match the symlink target that userspace actually
++ * sees.  POSIX requires this, and some userspace programs depend on it.
++ *
++ * This requires reading the symlink target from disk if needed, setting up the
++ * inode's encryption key if possible, and then decrypting or encoding the
++ * symlink target.  This makes lstat() more heavyweight than is normally the
++ * case.  However, decrypted symlink targets will be cached in ->i_link, so
++ * usually the symlink won't have to be read and decrypted again later if/when
++ * it is actually followed, readlink() is called, or lstat() is called again.
++ *
++ * Return: 0 on success, -errno on failure
++ */
++int fscrypt_symlink_getattr(const struct path *path, struct kstat *stat)
++{
++	struct dentry *dentry = path->dentry;
++	struct inode *inode = d_inode(dentry);
++	const char *link;
++	DEFINE_DELAYED_CALL(done);
++
++	/*
++	 * To get the symlink target that userspace will see (whether it's the
++	 * decrypted target or the no-key encoded target), we can just get it in
++	 * the same way the VFS does during path resolution and readlink().
++	 */
++	link = READ_ONCE(inode->i_link);
++	if (!link) {
++		link = inode->i_op->get_link(dentry, inode, &done);
++		if (IS_ERR(link))
++			return PTR_ERR(link);
++	}
++	stat->size = strlen(link);
++	do_delayed_call(&done);
++	return 0;
++}
++EXPORT_SYMBOL_GPL(fscrypt_symlink_getattr);
+diff --git a/fs/ext4/symlink.c b/fs/ext4/symlink.c
+index dd05af983092d..69109746e6e21 100644
+--- a/fs/ext4/symlink.c
++++ b/fs/ext4/symlink.c
+@@ -52,10 +52,20 @@ static const char *ext4_encrypted_get_link(struct dentry *dentry,
+ 	return paddr;
+ }
+ 
++static int ext4_encrypted_symlink_getattr(struct user_namespace *mnt_userns,
++					  const struct path *path,
++					  struct kstat *stat, u32 request_mask,
++					  unsigned int query_flags)
++{
++	ext4_getattr(mnt_userns, path, stat, request_mask, query_flags);
++
++	return fscrypt_symlink_getattr(path, stat);
++}
++
+ const struct inode_operations ext4_encrypted_symlink_inode_operations = {
+ 	.get_link	= ext4_encrypted_get_link,
+ 	.setattr	= ext4_setattr,
+-	.getattr	= ext4_getattr,
++	.getattr	= ext4_encrypted_symlink_getattr,
+ 	.listxattr	= ext4_listxattr,
+ };
+ 
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index d4139e166b958..ddaa11c762d17 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -1313,9 +1313,19 @@ static const char *f2fs_encrypted_get_link(struct dentry *dentry,
+ 	return target;
+ }
+ 
++static int f2fs_encrypted_symlink_getattr(struct user_namespace *mnt_userns,
++					  const struct path *path,
++					  struct kstat *stat, u32 request_mask,
++					  unsigned int query_flags)
++{
++	f2fs_getattr(mnt_userns, path, stat, request_mask, query_flags);
++
++	return fscrypt_symlink_getattr(path, stat);
++}
++
+ const struct inode_operations f2fs_encrypted_symlink_inode_operations = {
+ 	.get_link	= f2fs_encrypted_get_link,
+-	.getattr	= f2fs_getattr,
++	.getattr	= f2fs_encrypted_symlink_getattr,
+ 	.setattr	= f2fs_setattr,
+ 	.listxattr	= f2fs_listxattr,
+ };
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 9df82eee440a3..f6ddc7182943d 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -7105,16 +7105,6 @@ static void io_free_file_tables(struct io_file_table *table, unsigned nr_files)
+ 	table->files = NULL;
+ }
+ 
+-static inline void io_rsrc_ref_lock(struct io_ring_ctx *ctx)
+-{
+-	spin_lock_bh(&ctx->rsrc_ref_lock);
+-}
+-
+-static inline void io_rsrc_ref_unlock(struct io_ring_ctx *ctx)
+-{
+-	spin_unlock_bh(&ctx->rsrc_ref_lock);
+-}
+-
+ static void io_rsrc_node_destroy(struct io_rsrc_node *ref_node)
+ {
+ 	percpu_ref_exit(&ref_node->refs);
+@@ -7131,9 +7121,9 @@ static void io_rsrc_node_switch(struct io_ring_ctx *ctx,
+ 		struct io_rsrc_node *rsrc_node = ctx->rsrc_node;
+ 
+ 		rsrc_node->rsrc_data = data_to_kill;
+-		io_rsrc_ref_lock(ctx);
++		spin_lock_irq(&ctx->rsrc_ref_lock);
+ 		list_add_tail(&rsrc_node->node, &ctx->rsrc_ref_list);
+-		io_rsrc_ref_unlock(ctx);
++		spin_unlock_irq(&ctx->rsrc_ref_lock);
+ 
+ 		atomic_inc(&data_to_kill->refs);
+ 		percpu_ref_kill(&rsrc_node->refs);
+@@ -7625,9 +7615,10 @@ static void io_rsrc_node_ref_zero(struct percpu_ref *ref)
+ {
+ 	struct io_rsrc_node *node = container_of(ref, struct io_rsrc_node, refs);
+ 	struct io_ring_ctx *ctx = node->rsrc_data->ctx;
++	unsigned long flags;
+ 	bool first_add = false;
+ 
+-	io_rsrc_ref_lock(ctx);
++	spin_lock_irqsave(&ctx->rsrc_ref_lock, flags);
+ 	node->done = true;
+ 
+ 	while (!list_empty(&ctx->rsrc_ref_list)) {
+@@ -7639,7 +7630,7 @@ static void io_rsrc_node_ref_zero(struct percpu_ref *ref)
+ 		list_del(&node->node);
+ 		first_add |= llist_add(&node->llist, &ctx->rsrc_put_llist);
+ 	}
+-	io_rsrc_ref_unlock(ctx);
++	spin_unlock_irqrestore(&ctx->rsrc_ref_lock, flags);
+ 
+ 	if (first_add)
+ 		mod_delayed_work(system_wq, &ctx->rsrc_put_work, HZ);
+diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
+index 41ebf52f1bbce..ebde05c9cf62e 100644
+--- a/fs/overlayfs/export.c
++++ b/fs/overlayfs/export.c
+@@ -392,6 +392,7 @@ static struct dentry *ovl_lookup_real_one(struct dentry *connected,
+ 	 */
+ 	take_dentry_name_snapshot(&name, real);
+ 	this = lookup_one_len(name.name.name, connected, name.name.len);
++	release_dentry_name_snapshot(&name);
+ 	err = PTR_ERR(this);
+ 	if (IS_ERR(this)) {
+ 		goto fail;
+@@ -406,7 +407,6 @@ static struct dentry *ovl_lookup_real_one(struct dentry *connected,
+ 	}
+ 
+ out:
+-	release_dentry_name_snapshot(&name);
+ 	dput(parent);
+ 	inode_unlock(dir);
+ 	return this;
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 8e6ef62aeb1c6..6d4342bad9f15 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -363,10 +363,9 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
+ 		 * _very_ unlikely case that the pipe was full, but we got
+ 		 * no data.
+ 		 */
+-		if (unlikely(was_full)) {
++		if (unlikely(was_full))
+ 			wake_up_interruptible_sync_poll(&pipe->wr_wait, EPOLLOUT | EPOLLWRNORM);
+-			kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
+-		}
++		kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
+ 
+ 		/*
+ 		 * But because we didn't read anything, at this point we can
+@@ -385,12 +384,11 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
+ 		wake_next_reader = false;
+ 	__pipe_unlock(pipe);
+ 
+-	if (was_full) {
++	if (was_full)
+ 		wake_up_interruptible_sync_poll(&pipe->wr_wait, EPOLLOUT | EPOLLWRNORM);
+-		kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
+-	}
+ 	if (wake_next_reader)
+ 		wake_up_interruptible_sync_poll(&pipe->rd_wait, EPOLLIN | EPOLLRDNORM);
++	kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
+ 	if (ret > 0)
+ 		file_accessed(filp);
+ 	return ret;
+@@ -444,9 +442,6 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
+ #endif
+ 
+ 	/*
+-	 * Epoll nonsensically wants a wakeup whether the pipe
+-	 * was already empty or not.
+-	 *
+ 	 * If it wasn't empty we try to merge new data into
+ 	 * the last buffer.
+ 	 *
+@@ -455,9 +450,9 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
+ 	 * spanning multiple pages.
+ 	 */
+ 	head = pipe->head;
+-	was_empty = true;
++	was_empty = pipe_empty(head, pipe->tail);
+ 	chars = total_len & (PAGE_SIZE-1);
+-	if (chars && !pipe_empty(head, pipe->tail)) {
++	if (chars && !was_empty) {
+ 		unsigned int mask = pipe->ring_size - 1;
+ 		struct pipe_buffer *buf = &pipe->bufs[(head - 1) & mask];
+ 		int offset = buf->offset + buf->len;
+@@ -568,10 +563,9 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
+ 		 * become empty while we dropped the lock.
+ 		 */
+ 		__pipe_unlock(pipe);
+-		if (was_empty) {
++		if (was_empty)
+ 			wake_up_interruptible_sync_poll(&pipe->rd_wait, EPOLLIN | EPOLLRDNORM);
+-			kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
+-		}
++		kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
+ 		wait_event_interruptible_exclusive(pipe->wr_wait, pipe_writable(pipe));
+ 		__pipe_lock(pipe);
+ 		was_empty = pipe_empty(pipe->head, pipe->tail);
+@@ -590,11 +584,13 @@ out:
+ 	 * This is particularly important for small writes, because of
+ 	 * how (for example) the GNU make jobserver uses small writes to
+ 	 * wake up pending jobs
++	 *
++	 * Epoll nonsensically wants a wakeup whether the pipe
++	 * was already empty or not.
+ 	 */
+-	if (was_empty) {
++	if (was_empty || pipe->poll_usage)
+ 		wake_up_interruptible_sync_poll(&pipe->rd_wait, EPOLLIN | EPOLLRDNORM);
+-		kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
+-	}
++	kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
+ 	if (wake_next_writer)
+ 		wake_up_interruptible_sync_poll(&pipe->wr_wait, EPOLLOUT | EPOLLWRNORM);
+ 	if (ret > 0 && sb_start_write_trylock(file_inode(filp)->i_sb)) {
+@@ -654,6 +650,9 @@ pipe_poll(struct file *filp, poll_table *wait)
+ 	struct pipe_inode_info *pipe = filp->private_data;
+ 	unsigned int head, tail;
+ 
++	/* Epoll has some historical nasty semantics, this enables them */
++	pipe->poll_usage = 1;
++
+ 	/*
+ 	 * Reading pipe state only -- no need for acquiring the semaphore.
+ 	 *
+diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
+index 2e4e1d1599693..5cfa28cd00cdc 100644
+--- a/fs/ubifs/file.c
++++ b/fs/ubifs/file.c
+@@ -1630,6 +1630,17 @@ static const char *ubifs_get_link(struct dentry *dentry,
+ 	return fscrypt_get_symlink(inode, ui->data, ui->data_len, done);
+ }
+ 
++static int ubifs_symlink_getattr(struct user_namespace *mnt_userns,
++				 const struct path *path, struct kstat *stat,
++				 u32 request_mask, unsigned int query_flags)
++{
++	ubifs_getattr(mnt_userns, path, stat, request_mask, query_flags);
++
++	if (IS_ENCRYPTED(d_inode(path->dentry)))
++		return fscrypt_symlink_getattr(path, stat);
++	return 0;
++}
++
+ const struct address_space_operations ubifs_file_address_operations = {
+ 	.readpage       = ubifs_readpage,
+ 	.writepage      = ubifs_writepage,
+@@ -1655,7 +1666,7 @@ const struct inode_operations ubifs_file_inode_operations = {
+ const struct inode_operations ubifs_symlink_inode_operations = {
+ 	.get_link    = ubifs_get_link,
+ 	.setattr     = ubifs_setattr,
+-	.getattr     = ubifs_getattr,
++	.getattr     = ubifs_symlink_getattr,
+ 	.listxattr   = ubifs_listxattr,
+ 	.update_time = ubifs_update_time,
+ };
+diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
+index 2ea1387bb497b..b7bfd0cd4f3ef 100644
+--- a/include/linux/fscrypt.h
++++ b/include/linux/fscrypt.h
+@@ -253,6 +253,7 @@ int __fscrypt_encrypt_symlink(struct inode *inode, const char *target,
+ const char *fscrypt_get_symlink(struct inode *inode, const void *caddr,
+ 				unsigned int max_size,
+ 				struct delayed_call *done);
++int fscrypt_symlink_getattr(const struct path *path, struct kstat *stat);
+ static inline void fscrypt_set_ops(struct super_block *sb,
+ 				   const struct fscrypt_operations *s_cop)
+ {
+@@ -583,6 +584,12 @@ static inline const char *fscrypt_get_symlink(struct inode *inode,
+ 	return ERR_PTR(-EOPNOTSUPP);
+ }
+ 
++static inline int fscrypt_symlink_getattr(const struct path *path,
++					  struct kstat *stat)
++{
++	return -EOPNOTSUPP;
++}
++
+ static inline void fscrypt_set_ops(struct super_block *sb,
+ 				   const struct fscrypt_operations *s_cop)
+ {
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 5cbc950b34dfd..043ef83c84206 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -4012,6 +4012,10 @@ int netdev_rx_handler_register(struct net_device *dev,
+ void netdev_rx_handler_unregister(struct net_device *dev);
+ 
+ bool dev_valid_name(const char *name);
++static inline bool is_socket_ioctl_cmd(unsigned int cmd)
++{
++	return _IOC_TYPE(cmd) == SOCK_IOC_TYPE;
++}
+ int dev_ioctl(struct net *net, unsigned int cmd, struct ifreq *ifr,
+ 		bool *need_copyout);
+ int dev_ifconf(struct net *net, struct ifconf *, int);
+diff --git a/include/linux/netfilter/ipset/ip_set.h b/include/linux/netfilter/ipset/ip_set.h
+index 10279c4830ac3..ada1296c87d50 100644
+--- a/include/linux/netfilter/ipset/ip_set.h
++++ b/include/linux/netfilter/ipset/ip_set.h
+@@ -196,6 +196,9 @@ struct ip_set_region {
+ 	u32 elements;		/* Number of elements vs timeout */
+ };
+ 
++/* Max range where every element is added/deleted in one step */
++#define IPSET_MAX_RANGE		(1<<20)
++
+ /* The max revision number supported by any set type + 1 */
+ #define IPSET_REVISION_MAX	9
+ 
+diff --git a/include/linux/once.h b/include/linux/once.h
+index 9225ee6d96c75..ae6f4eb41cbe7 100644
+--- a/include/linux/once.h
++++ b/include/linux/once.h
+@@ -7,7 +7,7 @@
+ 
+ bool __do_once_start(bool *done, unsigned long *flags);
+ void __do_once_done(bool *done, struct static_key_true *once_key,
+-		    unsigned long *flags);
++		    unsigned long *flags, struct module *mod);
+ 
+ /* Call a function exactly once. The idea of DO_ONCE() is to perform
+  * a function call such as initialization of random seeds, etc, only
+@@ -46,7 +46,7 @@ void __do_once_done(bool *done, struct static_key_true *once_key,
+ 			if (unlikely(___ret)) {				     \
+ 				func(__VA_ARGS__);			     \
+ 				__do_once_done(&___done, &___once_key,	     \
+-					       &___flags);		     \
++					       &___flags, THIS_MODULE);	     \
+ 			}						     \
+ 		}							     \
+ 		___ret;							     \
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index 5d2705f1d01c3..fc5642431b923 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -48,6 +48,7 @@ struct pipe_buffer {
+  *	@files: number of struct file referring this pipe (protected by ->i_lock)
+  *	@r_counter: reader counter
+  *	@w_counter: writer counter
++ *	@poll_usage: is this pipe used for epoll, which has crazy wakeups?
+  *	@fasync_readers: reader side fasync
+  *	@fasync_writers: writer side fasync
+  *	@bufs: the circular array of pipe buffers
+@@ -70,6 +71,7 @@ struct pipe_inode_info {
+ 	unsigned int files;
+ 	unsigned int r_counter;
+ 	unsigned int w_counter;
++	unsigned int poll_usage;
+ 	struct page *tmp_page;
+ 	struct fasync_struct *fasync_readers;
+ 	struct fasync_struct *fasync_writers;
+diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
+index 0db36360ef216..cb7fbd747ae17 100644
+--- a/include/linux/stmmac.h
++++ b/include/linux/stmmac.h
+@@ -115,6 +115,7 @@ struct stmmac_axi {
+ 
+ #define EST_GCL		1024
+ struct stmmac_est {
++	struct mutex lock;
+ 	int enable;
+ 	u32 btr_offset[2];
+ 	u32 btr[2];
+diff --git a/kernel/audit_tree.c b/kernel/audit_tree.c
+index 6c91902f4f455..39241207ec044 100644
+--- a/kernel/audit_tree.c
++++ b/kernel/audit_tree.c
+@@ -593,7 +593,6 @@ static void prune_tree_chunks(struct audit_tree *victim, bool tagged)
+ 		spin_lock(&hash_lock);
+ 	}
+ 	spin_unlock(&hash_lock);
+-	put_tree(victim);
+ }
+ 
+ /*
+@@ -602,6 +601,7 @@ static void prune_tree_chunks(struct audit_tree *victim, bool tagged)
+ static void prune_one(struct audit_tree *victim)
+ {
+ 	prune_tree_chunks(victim, false);
++	put_tree(victim);
+ }
+ 
+ /* trim the uncommitted chunks from tree */
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 0fbe7ef6b1553..6b58fd978b703 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5148,8 +5148,6 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
+ 	case BPF_MAP_TYPE_RINGBUF:
+ 		if (func_id != BPF_FUNC_ringbuf_output &&
+ 		    func_id != BPF_FUNC_ringbuf_reserve &&
+-		    func_id != BPF_FUNC_ringbuf_submit &&
+-		    func_id != BPF_FUNC_ringbuf_discard &&
+ 		    func_id != BPF_FUNC_ringbuf_query)
+ 			goto error;
+ 		break;
+@@ -5258,6 +5256,12 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
+ 		if (map->map_type != BPF_MAP_TYPE_PERF_EVENT_ARRAY)
+ 			goto error;
+ 		break;
++	case BPF_FUNC_ringbuf_output:
++	case BPF_FUNC_ringbuf_reserve:
++	case BPF_FUNC_ringbuf_query:
++		if (map->map_type != BPF_MAP_TYPE_RINGBUF)
++			goto error;
++		break;
+ 	case BPF_FUNC_get_stackid:
+ 		if (map->map_type != BPF_MAP_TYPE_STACK_TRACE)
+ 			goto error;
+diff --git a/kernel/cred.c b/kernel/cred.c
+index 9c2759166bd82..0f84958d1db91 100644
+--- a/kernel/cred.c
++++ b/kernel/cred.c
+@@ -286,13 +286,13 @@ struct cred *prepare_creds(void)
+ 	new->security = NULL;
+ #endif
+ 
+-	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
+-		goto error;
+-
+ 	new->ucounts = get_ucounts(new->ucounts);
+ 	if (!new->ucounts)
+ 		goto error;
+ 
++	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
++		goto error;
++
+ 	validate_creds(new);
+ 	return new;
+ 
+@@ -753,13 +753,13 @@ struct cred *prepare_kernel_cred(struct task_struct *daemon)
+ #ifdef CONFIG_SECURITY
+ 	new->security = NULL;
+ #endif
+-	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
+-		goto error;
+-
+ 	new->ucounts = get_ucounts(new->ucounts);
+ 	if (!new->ucounts)
+ 		goto error;
+ 
++	if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)
++		goto error;
++
+ 	put_cred(old);
+ 	validate_creds(new);
+ 	return new;
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 35f7efed75c4c..f2bc99ca01e57 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1977,6 +1977,9 @@ static inline struct task_struct *get_push_task(struct rq *rq)
+ 	if (p->nr_cpus_allowed == 1)
+ 		return NULL;
+ 
++	if (p->migration_disabled)
++		return NULL;
++
+ 	rq->push_busy = true;
+ 	return get_task_struct(p);
+ }
+diff --git a/lib/once.c b/lib/once.c
+index 8b7d6235217ee..59149bf3bfb4a 100644
+--- a/lib/once.c
++++ b/lib/once.c
+@@ -3,10 +3,12 @@
+ #include <linux/spinlock.h>
+ #include <linux/once.h>
+ #include <linux/random.h>
++#include <linux/module.h>
+ 
+ struct once_work {
+ 	struct work_struct work;
+ 	struct static_key_true *key;
++	struct module *module;
+ };
+ 
+ static void once_deferred(struct work_struct *w)
+@@ -16,10 +18,11 @@ static void once_deferred(struct work_struct *w)
+ 	work = container_of(w, struct once_work, work);
+ 	BUG_ON(!static_key_enabled(work->key));
+ 	static_branch_disable(work->key);
++	module_put(work->module);
+ 	kfree(work);
+ }
+ 
+-static void once_disable_jump(struct static_key_true *key)
++static void once_disable_jump(struct static_key_true *key, struct module *mod)
+ {
+ 	struct once_work *w;
+ 
+@@ -29,6 +32,8 @@ static void once_disable_jump(struct static_key_true *key)
+ 
+ 	INIT_WORK(&w->work, once_deferred);
+ 	w->key = key;
++	w->module = mod;
++	__module_get(mod);
+ 	schedule_work(&w->work);
+ }
+ 
+@@ -53,11 +58,11 @@ bool __do_once_start(bool *done, unsigned long *flags)
+ EXPORT_SYMBOL(__do_once_start);
+ 
+ void __do_once_done(bool *done, struct static_key_true *once_key,
+-		    unsigned long *flags)
++		    unsigned long *flags, struct module *mod)
+ 	__releases(once_lock)
+ {
+ 	*done = true;
+ 	spin_unlock_irqrestore(&once_lock, *flags);
+-	once_disable_jump(once_key);
++	once_disable_jump(once_key, mod);
+ }
+ EXPORT_SYMBOL(__do_once_done);
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 70620d0dd923a..1bf3f86812509 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1854,6 +1854,7 @@ failed_removal_isolated:
+ 	undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
+ 	memory_notify(MEM_CANCEL_OFFLINE, &arg);
+ failed_removal_pcplists_disabled:
++	lru_cache_enable();
+ 	zone_pcp_enable(zone);
+ failed_removal:
+ 	pr_debug("memory offlining [mem %#010llx-%#010llx] failed due to %s\n",
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index c6e75bd0035dc..89c7369805e99 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2597,6 +2597,7 @@ static int do_setlink(const struct sk_buff *skb,
+ 		return err;
+ 
+ 	if (tb[IFLA_NET_NS_PID] || tb[IFLA_NET_NS_FD] || tb[IFLA_TARGET_NETNSID]) {
++		const char *pat = ifname && ifname[0] ? ifname : NULL;
+ 		struct net *net;
+ 		int new_ifindex;
+ 
+@@ -2612,7 +2613,7 @@ static int do_setlink(const struct sk_buff *skb,
+ 		else
+ 			new_ifindex = 0;
+ 
+-		err = __dev_change_net_namespace(dev, net, ifname, new_ifindex);
++		err = __dev_change_net_namespace(dev, net, pat, new_ifindex);
+ 		put_net(net);
+ 		if (err)
+ 			goto errout;
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index a68bf4c6fe9b7..ff34cde983d4e 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -468,6 +468,8 @@ static void __gre_xmit(struct sk_buff *skb, struct net_device *dev,
+ 
+ static int gre_handle_offloads(struct sk_buff *skb, bool csum)
+ {
++	if (csum && skb_checksum_start(skb) < skb->data)
++		return -EINVAL;
+ 	return iptunnel_handle_offloads(skb, csum ? SKB_GSO_GRE_CSUM : SKB_GSO_GRE);
+ }
+ 
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 78d1e5afc4520..d8811e1fbd6c8 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -600,14 +600,14 @@ static struct fib_nh_exception *fnhe_oldest(struct fnhe_hash_bucket *hash)
+ 	return oldest;
+ }
+ 
+-static inline u32 fnhe_hashfun(__be32 daddr)
++static u32 fnhe_hashfun(__be32 daddr)
+ {
+-	static u32 fnhe_hashrnd __read_mostly;
+-	u32 hval;
++	static siphash_key_t fnhe_hash_key __read_mostly;
++	u64 hval;
+ 
+-	net_get_random_once(&fnhe_hashrnd, sizeof(fnhe_hashrnd));
+-	hval = jhash_1word((__force u32)daddr, fnhe_hashrnd);
+-	return hash_32(hval, FNHE_HASH_SHIFT);
++	net_get_random_once(&fnhe_hash_key, sizeof(fnhe_hash_key));
++	hval = siphash_1u32((__force u32)daddr, &fnhe_hash_key);
++	return hash_64(hval, FNHE_HASH_SHIFT);
+ }
+ 
+ static void fill_route_from_fnhe(struct rtable *rt, struct fib_nh_exception *fnhe)
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 09e84161b7312..67c74469503a5 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -41,6 +41,7 @@
+ #include <linux/nsproxy.h>
+ #include <linux/slab.h>
+ #include <linux/jhash.h>
++#include <linux/siphash.h>
+ #include <net/net_namespace.h>
+ #include <net/snmp.h>
+ #include <net/ipv6.h>
+@@ -1484,17 +1485,24 @@ static void rt6_exception_remove_oldest(struct rt6_exception_bucket *bucket)
+ static u32 rt6_exception_hash(const struct in6_addr *dst,
+ 			      const struct in6_addr *src)
+ {
+-	static u32 seed __read_mostly;
+-	u32 val;
++	static siphash_key_t rt6_exception_key __read_mostly;
++	struct {
++		struct in6_addr dst;
++		struct in6_addr src;
++	} __aligned(SIPHASH_ALIGNMENT) combined = {
++		.dst = *dst,
++	};
++	u64 val;
+ 
+-	net_get_random_once(&seed, sizeof(seed));
+-	val = jhash2((const u32 *)dst, sizeof(*dst)/sizeof(u32), seed);
++	net_get_random_once(&rt6_exception_key, sizeof(rt6_exception_key));
+ 
+ #ifdef CONFIG_IPV6_SUBTREES
+ 	if (src)
+-		val = jhash2((const u32 *)src, sizeof(*src)/sizeof(u32), val);
++		combined.src = *src;
+ #endif
+-	return hash_32(val, FIB6_EXCEPTION_BUCKET_SIZE_SHIFT);
++	val = siphash(&combined, sizeof(combined), &rt6_exception_key);
++
++	return hash_64(val, FIB6_EXCEPTION_BUCKET_SIZE_SHIFT);
+ }
+ 
+ /* Helper function to find the cached rt in the hash table
+diff --git a/net/netfilter/ipset/ip_set_hash_ip.c b/net/netfilter/ipset/ip_set_hash_ip.c
+index d1bef23fd4f58..dd30c03d5a23f 100644
+--- a/net/netfilter/ipset/ip_set_hash_ip.c
++++ b/net/netfilter/ipset/ip_set_hash_ip.c
+@@ -132,8 +132,11 @@ hash_ip4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP_TO], &ip_to);
+ 		if (ret)
+ 			return ret;
+-		if (ip > ip_to)
++		if (ip > ip_to) {
++			if (ip_to == 0)
++				return -IPSET_ERR_HASH_ELEM;
+ 			swap(ip, ip_to);
++		}
+ 	} else if (tb[IPSET_ATTR_CIDR]) {
+ 		u8 cidr = nla_get_u8(tb[IPSET_ATTR_CIDR]);
+ 
+@@ -144,6 +147,10 @@ hash_ip4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 
+ 	hosts = h->netmask == 32 ? 1 : 2 << (32 - h->netmask - 1);
+ 
++	/* 64bit division is not allowed on 32bit */
++	if (((u64)ip_to - ip + 1) >> (32 - h->netmask) > IPSET_MAX_RANGE)
++		return -ERANGE;
++
+ 	if (retried) {
+ 		ip = ntohl(h->next.ip);
+ 		e.ip = htonl(ip);
+diff --git a/net/netfilter/ipset/ip_set_hash_ipmark.c b/net/netfilter/ipset/ip_set_hash_ipmark.c
+index 18346d18aa16c..153de3457423e 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipmark.c
++++ b/net/netfilter/ipset/ip_set_hash_ipmark.c
+@@ -121,6 +121,8 @@ hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 
+ 	e.mark = ntohl(nla_get_be32(tb[IPSET_ATTR_MARK]));
+ 	e.mark &= h->markmask;
++	if (e.mark == 0 && e.ip == 0)
++		return -IPSET_ERR_HASH_ELEM;
+ 
+ 	if (adt == IPSET_TEST ||
+ 	    !(tb[IPSET_ATTR_IP_TO] || tb[IPSET_ATTR_CIDR])) {
+@@ -133,8 +135,11 @@ hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP_TO], &ip_to);
+ 		if (ret)
+ 			return ret;
+-		if (ip > ip_to)
++		if (ip > ip_to) {
++			if (e.mark == 0 && ip_to == 0)
++				return -IPSET_ERR_HASH_ELEM;
+ 			swap(ip, ip_to);
++		}
+ 	} else if (tb[IPSET_ATTR_CIDR]) {
+ 		u8 cidr = nla_get_u8(tb[IPSET_ATTR_CIDR]);
+ 
+@@ -143,6 +148,9 @@ hash_ipmark4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		ip_set_mask_from_to(ip, ip_to, cidr);
+ 	}
+ 
++	if (((u64)ip_to - ip + 1) > IPSET_MAX_RANGE)
++		return -ERANGE;
++
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+ 	for (; ip <= ip_to; ip++) {
+diff --git a/net/netfilter/ipset/ip_set_hash_ipport.c b/net/netfilter/ipset/ip_set_hash_ipport.c
+index e1ca111965158..7303138e46be1 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipport.c
++++ b/net/netfilter/ipset/ip_set_hash_ipport.c
+@@ -173,6 +173,9 @@ hash_ipport4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 			swap(port, port_to);
+ 	}
+ 
++	if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE)
++		return -ERANGE;
++
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+ 	for (; ip <= ip_to; ip++) {
+diff --git a/net/netfilter/ipset/ip_set_hash_ipportip.c b/net/netfilter/ipset/ip_set_hash_ipportip.c
+index ab179e064597c..334fb1ad0e86c 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipportip.c
++++ b/net/netfilter/ipset/ip_set_hash_ipportip.c
+@@ -180,6 +180,9 @@ hash_ipportip4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 			swap(port, port_to);
+ 	}
+ 
++	if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE)
++		return -ERANGE;
++
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+ 	for (; ip <= ip_to; ip++) {
+diff --git a/net/netfilter/ipset/ip_set_hash_ipportnet.c b/net/netfilter/ipset/ip_set_hash_ipportnet.c
+index 8f075b44cf64e..7df94f437f600 100644
+--- a/net/netfilter/ipset/ip_set_hash_ipportnet.c
++++ b/net/netfilter/ipset/ip_set_hash_ipportnet.c
+@@ -253,6 +253,9 @@ hash_ipportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 			swap(port, port_to);
+ 	}
+ 
++	if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE)
++		return -ERANGE;
++
+ 	ip2_to = ip2_from;
+ 	if (tb[IPSET_ATTR_IP2_TO]) {
+ 		ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP2_TO], &ip2_to);
+diff --git a/net/netfilter/ipset/ip_set_hash_net.c b/net/netfilter/ipset/ip_set_hash_net.c
+index c1a11f041ac6b..1422739d9aa25 100644
+--- a/net/netfilter/ipset/ip_set_hash_net.c
++++ b/net/netfilter/ipset/ip_set_hash_net.c
+@@ -140,7 +140,7 @@ hash_net4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_net4_elem e = { .cidr = HOST_MASK };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 ip = 0, ip_to = 0;
++	u32 ip = 0, ip_to = 0, ipn, n = 0;
+ 	int ret;
+ 
+ 	if (tb[IPSET_ATTR_LINENO])
+@@ -188,6 +188,15 @@ hash_net4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 		if (ip + UINT_MAX == ip_to)
+ 			return -IPSET_ERR_HASH_RANGE;
+ 	}
++	ipn = ip;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr);
++		n++;
++	} while (ipn++ < ip_to);
++
++	if (n > IPSET_MAX_RANGE)
++		return -ERANGE;
++
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+ 	do {
+diff --git a/net/netfilter/ipset/ip_set_hash_netiface.c b/net/netfilter/ipset/ip_set_hash_netiface.c
+index ddd51c2e1cb36..9810f5bf63f5e 100644
+--- a/net/netfilter/ipset/ip_set_hash_netiface.c
++++ b/net/netfilter/ipset/ip_set_hash_netiface.c
+@@ -202,7 +202,7 @@ hash_netiface4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_netiface4_elem e = { .cidr = HOST_MASK, .elem = 1 };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 ip = 0, ip_to = 0;
++	u32 ip = 0, ip_to = 0, ipn, n = 0;
+ 	int ret;
+ 
+ 	if (tb[IPSET_ATTR_LINENO])
+@@ -256,6 +256,14 @@ hash_netiface4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	} else {
+ 		ip_set_mask_from_to(ip, ip_to, e.cidr);
+ 	}
++	ipn = ip;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr);
++		n++;
++	} while (ipn++ < ip_to);
++
++	if (n > IPSET_MAX_RANGE)
++		return -ERANGE;
+ 
+ 	if (retried)
+ 		ip = ntohl(h->next.ip);
+diff --git a/net/netfilter/ipset/ip_set_hash_netnet.c b/net/netfilter/ipset/ip_set_hash_netnet.c
+index 6532f0505e66f..3d09eefe998a7 100644
+--- a/net/netfilter/ipset/ip_set_hash_netnet.c
++++ b/net/netfilter/ipset/ip_set_hash_netnet.c
+@@ -168,7 +168,8 @@ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	struct hash_netnet4_elem e = { };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+ 	u32 ip = 0, ip_to = 0;
+-	u32 ip2 = 0, ip2_from = 0, ip2_to = 0;
++	u32 ip2 = 0, ip2_from = 0, ip2_to = 0, ipn;
++	u64 n = 0, m = 0;
+ 	int ret;
+ 
+ 	if (tb[IPSET_ATTR_LINENO])
+@@ -244,6 +245,19 @@ hash_netnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	} else {
+ 		ip_set_mask_from_to(ip2_from, ip2_to, e.cidr[1]);
+ 	}
++	ipn = ip;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr[0]);
++		n++;
++	} while (ipn++ < ip_to);
++	ipn = ip2_from;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip2_to, &e.cidr[1]);
++		m++;
++	} while (ipn++ < ip2_to);
++
++	if (n*m > IPSET_MAX_RANGE)
++		return -ERANGE;
+ 
+ 	if (retried) {
+ 		ip = ntohl(h->next.ip[0]);
+diff --git a/net/netfilter/ipset/ip_set_hash_netport.c b/net/netfilter/ipset/ip_set_hash_netport.c
+index ec1564a1cb5a5..09cf72eb37f8d 100644
+--- a/net/netfilter/ipset/ip_set_hash_netport.c
++++ b/net/netfilter/ipset/ip_set_hash_netport.c
+@@ -158,7 +158,8 @@ hash_netport4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	ipset_adtfn adtfn = set->variant->adt[adt];
+ 	struct hash_netport4_elem e = { .cidr = HOST_MASK - 1 };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+-	u32 port, port_to, p = 0, ip = 0, ip_to = 0;
++	u32 port, port_to, p = 0, ip = 0, ip_to = 0, ipn;
++	u64 n = 0;
+ 	bool with_ports = false;
+ 	u8 cidr;
+ 	int ret;
+@@ -235,6 +236,14 @@ hash_netport4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	} else {
+ 		ip_set_mask_from_to(ip, ip_to, e.cidr + 1);
+ 	}
++	ipn = ip;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip_to, &cidr);
++		n++;
++	} while (ipn++ < ip_to);
++
++	if (n*(port_to - port + 1) > IPSET_MAX_RANGE)
++		return -ERANGE;
+ 
+ 	if (retried) {
+ 		ip = ntohl(h->next.ip);
+diff --git a/net/netfilter/ipset/ip_set_hash_netportnet.c b/net/netfilter/ipset/ip_set_hash_netportnet.c
+index 0e91d1e82f1cf..19bcdb3141f6e 100644
+--- a/net/netfilter/ipset/ip_set_hash_netportnet.c
++++ b/net/netfilter/ipset/ip_set_hash_netportnet.c
+@@ -182,7 +182,8 @@ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	struct hash_netportnet4_elem e = { };
+ 	struct ip_set_ext ext = IP_SET_INIT_UEXT(set);
+ 	u32 ip = 0, ip_to = 0, p = 0, port, port_to;
+-	u32 ip2_from = 0, ip2_to = 0, ip2;
++	u32 ip2_from = 0, ip2_to = 0, ip2, ipn;
++	u64 n = 0, m = 0;
+ 	bool with_ports = false;
+ 	int ret;
+ 
+@@ -284,6 +285,19 @@ hash_netportnet4_uadt(struct ip_set *set, struct nlattr *tb[],
+ 	} else {
+ 		ip_set_mask_from_to(ip2_from, ip2_to, e.cidr[1]);
+ 	}
++	ipn = ip;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr[0]);
++		n++;
++	} while (ipn++ < ip_to);
++	ipn = ip2_from;
++	do {
++		ipn = ip_set_range_to_cidr(ipn, ip2_to, &e.cidr[1]);
++		m++;
++	} while (ipn++ < ip2_to);
++
++	if (n*m*(port_to - port + 1) > IPSET_MAX_RANGE)
++		return -ERANGE;
+ 
+ 	if (retried) {
+ 		ip = ntohl(h->next.ip[0]);
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 69079a382d3a5..6ba9f4b8d1456 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -68,22 +68,17 @@ EXPORT_SYMBOL_GPL(nf_conntrack_hash);
+ 
+ struct conntrack_gc_work {
+ 	struct delayed_work	dwork;
+-	u32			last_bucket;
++	u32			next_bucket;
+ 	bool			exiting;
+ 	bool			early_drop;
+-	long			next_gc_run;
+ };
+ 
+ static __read_mostly struct kmem_cache *nf_conntrack_cachep;
+ static DEFINE_SPINLOCK(nf_conntrack_locks_all_lock);
+ static __read_mostly bool nf_conntrack_locks_all;
+ 
+-/* every gc cycle scans at most 1/GC_MAX_BUCKETS_DIV part of table */
+-#define GC_MAX_BUCKETS_DIV	128u
+-/* upper bound of full table scan */
+-#define GC_MAX_SCAN_JIFFIES	(16u * HZ)
+-/* desired ratio of entries found to be expired */
+-#define GC_EVICT_RATIO	50u
++#define GC_SCAN_INTERVAL	(120u * HZ)
++#define GC_SCAN_MAX_DURATION	msecs_to_jiffies(10)
+ 
+ static struct conntrack_gc_work conntrack_gc_work;
+ 
+@@ -1359,17 +1354,13 @@ static bool gc_worker_can_early_drop(const struct nf_conn *ct)
+ 
+ static void gc_worker(struct work_struct *work)
+ {
+-	unsigned int min_interval = max(HZ / GC_MAX_BUCKETS_DIV, 1u);
+-	unsigned int i, goal, buckets = 0, expired_count = 0;
+-	unsigned int nf_conntrack_max95 = 0;
++	unsigned long end_time = jiffies + GC_SCAN_MAX_DURATION;
++	unsigned int i, hashsz, nf_conntrack_max95 = 0;
++	unsigned long next_run = GC_SCAN_INTERVAL;
+ 	struct conntrack_gc_work *gc_work;
+-	unsigned int ratio, scanned = 0;
+-	unsigned long next_run;
+-
+ 	gc_work = container_of(work, struct conntrack_gc_work, dwork.work);
+ 
+-	goal = nf_conntrack_htable_size / GC_MAX_BUCKETS_DIV;
+-	i = gc_work->last_bucket;
++	i = gc_work->next_bucket;
+ 	if (gc_work->early_drop)
+ 		nf_conntrack_max95 = nf_conntrack_max / 100u * 95u;
+ 
+@@ -1377,15 +1368,15 @@ static void gc_worker(struct work_struct *work)
+ 		struct nf_conntrack_tuple_hash *h;
+ 		struct hlist_nulls_head *ct_hash;
+ 		struct hlist_nulls_node *n;
+-		unsigned int hashsz;
+ 		struct nf_conn *tmp;
+ 
+-		i++;
+ 		rcu_read_lock();
+ 
+ 		nf_conntrack_get_ht(&ct_hash, &hashsz);
+-		if (i >= hashsz)
+-			i = 0;
++		if (i >= hashsz) {
++			rcu_read_unlock();
++			break;
++		}
+ 
+ 		hlist_nulls_for_each_entry_rcu(h, n, &ct_hash[i], hnnode) {
+ 			struct nf_conntrack_net *cnet;
+@@ -1393,7 +1384,6 @@ static void gc_worker(struct work_struct *work)
+ 
+ 			tmp = nf_ct_tuplehash_to_ctrack(h);
+ 
+-			scanned++;
+ 			if (test_bit(IPS_OFFLOAD_BIT, &tmp->status)) {
+ 				nf_ct_offload_timeout(tmp);
+ 				continue;
+@@ -1401,7 +1391,6 @@ static void gc_worker(struct work_struct *work)
+ 
+ 			if (nf_ct_is_expired(tmp)) {
+ 				nf_ct_gc_expired(tmp);
+-				expired_count++;
+ 				continue;
+ 			}
+ 
+@@ -1434,7 +1423,14 @@ static void gc_worker(struct work_struct *work)
+ 		 */
+ 		rcu_read_unlock();
+ 		cond_resched();
+-	} while (++buckets < goal);
++		i++;
++
++		if (time_after(jiffies, end_time) && i < hashsz) {
++			gc_work->next_bucket = i;
++			next_run = 0;
++			break;
++		}
++	} while (i < hashsz);
+ 
+ 	if (gc_work->exiting)
+ 		return;
+@@ -1445,40 +1441,17 @@ static void gc_worker(struct work_struct *work)
+ 	 *
+ 	 * This worker is only here to reap expired entries when system went
+ 	 * idle after a busy period.
+-	 *
+-	 * The heuristics below are supposed to balance conflicting goals:
+-	 *
+-	 * 1. Minimize time until we notice a stale entry
+-	 * 2. Maximize scan intervals to not waste cycles
+-	 *
+-	 * Normally, expire ratio will be close to 0.
+-	 *
+-	 * As soon as a sizeable fraction of the entries have expired
+-	 * increase scan frequency.
+ 	 */
+-	ratio = scanned ? expired_count * 100 / scanned : 0;
+-	if (ratio > GC_EVICT_RATIO) {
+-		gc_work->next_gc_run = min_interval;
+-	} else {
+-		unsigned int max = GC_MAX_SCAN_JIFFIES / GC_MAX_BUCKETS_DIV;
+-
+-		BUILD_BUG_ON((GC_MAX_SCAN_JIFFIES / GC_MAX_BUCKETS_DIV) == 0);
+-
+-		gc_work->next_gc_run += min_interval;
+-		if (gc_work->next_gc_run > max)
+-			gc_work->next_gc_run = max;
++	if (next_run) {
++		gc_work->early_drop = false;
++		gc_work->next_bucket = 0;
+ 	}
+-
+-	next_run = gc_work->next_gc_run;
+-	gc_work->last_bucket = i;
+-	gc_work->early_drop = false;
+ 	queue_delayed_work(system_power_efficient_wq, &gc_work->dwork, next_run);
+ }
+ 
+ static void conntrack_gc_work_init(struct conntrack_gc_work *gc_work)
+ {
+ 	INIT_DEFERRABLE_WORK(&gc_work->dwork, gc_worker);
+-	gc_work->next_gc_run = HZ;
+ 	gc_work->exiting = false;
+ }
+ 
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 67993bcfecdea..52b7f6490d248 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -493,7 +493,7 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ 		goto err;
+ 	}
+ 
+-	if (len != ALIGN(size, 4) + hdrlen)
++	if (!size || len != ALIGN(size, 4) + hdrlen)
+ 		goto err;
+ 
+ 	if (cb->dst_port != QRTR_PORT_CTRL && cb->type != QRTR_TYPE_DATA &&
+diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c
+index 9b6ffff72f2d1..28c1b00221780 100644
+--- a/net/rds/ib_frmr.c
++++ b/net/rds/ib_frmr.c
+@@ -131,9 +131,9 @@ static int rds_ib_post_reg_frmr(struct rds_ib_mr *ibmr)
+ 		cpu_relax();
+ 	}
+ 
+-	ret = ib_map_mr_sg_zbva(frmr->mr, ibmr->sg, ibmr->sg_len,
++	ret = ib_map_mr_sg_zbva(frmr->mr, ibmr->sg, ibmr->sg_dma_len,
+ 				&off, PAGE_SIZE);
+-	if (unlikely(ret != ibmr->sg_len))
++	if (unlikely(ret != ibmr->sg_dma_len))
+ 		return ret < 0 ? ret : -EINVAL;
+ 
+ 	if (cmpxchg(&frmr->fr_state,
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index c1e84d1eeaba8..c76701ac35abf 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -660,6 +660,13 @@ static int ets_qdisc_change(struct Qdisc *sch, struct nlattr *opt,
+ 	sch_tree_lock(sch);
+ 
+ 	q->nbands = nbands;
++	for (i = nstrict; i < q->nstrict; i++) {
++		INIT_LIST_HEAD(&q->classes[i].alist);
++		if (q->classes[i].qdisc->q.qlen) {
++			list_add_tail(&q->classes[i].alist, &q->active);
++			q->classes[i].deficit = quanta[i];
++		}
++	}
+ 	q->nstrict = nstrict;
+ 	memcpy(q->prio2band, priomap, sizeof(priomap));
+ 
+diff --git a/net/socket.c b/net/socket.c
+index 4f2c6d2795d0a..877f1fb61719a 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -1054,7 +1054,7 @@ static long sock_do_ioctl(struct net *net, struct socket *sock,
+ 		rtnl_unlock();
+ 		if (!err && copy_to_user(argp, &ifc, sizeof(struct ifconf)))
+ 			err = -EFAULT;
+-	} else {
++	} else if (is_socket_ioctl_cmd(cmd)) {
+ 		struct ifreq ifr;
+ 		bool need_copyout;
+ 		if (copy_from_user(&ifr, argp, sizeof(struct ifreq)))
+@@ -1063,6 +1063,8 @@ static long sock_do_ioctl(struct net *net, struct socket *sock,
+ 		if (!err && need_copyout)
+ 			if (copy_to_user(argp, &ifr, sizeof(struct ifreq)))
+ 				return -EFAULT;
++	} else {
++		err = -ENOTTY;
+ 	}
+ 	return err;
+ }
+@@ -3251,6 +3253,8 @@ static int compat_ifr_data_ioctl(struct net *net, unsigned int cmd,
+ 	struct ifreq ifreq;
+ 	u32 data32;
+ 
++	if (!is_socket_ioctl_cmd(cmd))
++		return -ENOTTY;
+ 	if (copy_from_user(ifreq.ifr_name, u_ifreq32->ifr_name, IFNAMSIZ))
+ 		return -EFAULT;
+ 	if (get_user(data32, &u_ifreq32->ifr_data))
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index d66a8e44a1aeb..dbb41821b1b85 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -835,7 +835,8 @@ static int svc_handle_xprt(struct svc_rqst *rqstp, struct svc_xprt *xprt)
+ 		rqstp->rq_stime = ktime_get();
+ 		rqstp->rq_reserved = serv->sv_max_mesg;
+ 		atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved);
+-	}
++	} else
++		svc_xprt_received(xprt);
+ out:
+ 	trace_svc_handle_xprt(xprt, len);
+ 	return len;
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 9bdc5147a65a1..a0dce194a404a 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1528,7 +1528,7 @@ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dlen)
+ 
+ 	if (unlikely(syn && !rc)) {
+ 		tipc_set_sk_state(sk, TIPC_CONNECTING);
+-		if (timeout) {
++		if (dlen && timeout) {
+ 			timeout = msecs_to_jiffies(timeout);
+ 			tipc_wait_for_connect(sock, &timeout);
+ 		}
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index abcd6f4837888..51ecaa2abcd14 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -44,6 +44,7 @@ static const struct reg_sequence patch_list[] = {
+ 	{RT5682_I2C_CTRL, 0x000f},
+ 	{RT5682_PLL2_INTERNAL, 0x8266},
+ 	{RT5682_SAR_IL_CMD_3, 0x8365},
++	{RT5682_SAR_IL_CMD_6, 0x0180},
+ };
+ 
+ void rt5682_apply_patch_list(struct rt5682_priv *rt5682, struct device *dev)
+diff --git a/sound/soc/soc-component.c b/sound/soc/soc-component.c
+index 3a5e84e16a87b..c8dfd0de30e4f 100644
+--- a/sound/soc/soc-component.c
++++ b/sound/soc/soc-component.c
+@@ -148,86 +148,75 @@ int snd_soc_component_set_bias_level(struct snd_soc_component *component,
+ 	return soc_component_ret(component, ret);
+ }
+ 
+-static int soc_component_pin(struct snd_soc_component *component,
+-			     const char *pin,
+-			     int (*pin_func)(struct snd_soc_dapm_context *dapm,
+-					     const char *pin))
+-{
+-	struct snd_soc_dapm_context *dapm =
+-		snd_soc_component_get_dapm(component);
+-	char *full_name;
+-	int ret;
+-
+-	if (!component->name_prefix) {
+-		ret = pin_func(dapm, pin);
+-		goto end;
+-	}
+-
+-	full_name = kasprintf(GFP_KERNEL, "%s %s", component->name_prefix, pin);
+-	if (!full_name) {
+-		ret = -ENOMEM;
+-		goto end;
+-	}
+-
+-	ret = pin_func(dapm, full_name);
+-	kfree(full_name);
+-end:
+-	return soc_component_ret(component, ret);
+-}
+-
+ int snd_soc_component_enable_pin(struct snd_soc_component *component,
+ 				 const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_enable_pin);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_enable_pin(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_enable_pin);
+ 
+ int snd_soc_component_enable_pin_unlocked(struct snd_soc_component *component,
+ 					  const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_enable_pin_unlocked);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_enable_pin_unlocked(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_enable_pin_unlocked);
+ 
+ int snd_soc_component_disable_pin(struct snd_soc_component *component,
+ 				  const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_disable_pin);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_disable_pin(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_disable_pin);
+ 
+ int snd_soc_component_disable_pin_unlocked(struct snd_soc_component *component,
+ 					   const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_disable_pin_unlocked);
++	struct snd_soc_dapm_context *dapm = 
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_disable_pin_unlocked(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_disable_pin_unlocked);
+ 
+ int snd_soc_component_nc_pin(struct snd_soc_component *component,
+ 			     const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_nc_pin);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_nc_pin(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_nc_pin);
+ 
+ int snd_soc_component_nc_pin_unlocked(struct snd_soc_component *component,
+ 				      const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_nc_pin_unlocked);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_nc_pin_unlocked(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_nc_pin_unlocked);
+ 
+ int snd_soc_component_get_pin_status(struct snd_soc_component *component,
+ 				     const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_get_pin_status);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_get_pin_status(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_get_pin_status);
+ 
+ int snd_soc_component_force_enable_pin(struct snd_soc_component *component,
+ 				       const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_force_enable_pin);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_force_enable_pin(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_force_enable_pin);
+ 
+@@ -235,7 +224,9 @@ int snd_soc_component_force_enable_pin_unlocked(
+ 	struct snd_soc_component *component,
+ 	const char *pin)
+ {
+-	return soc_component_pin(component, pin, snd_soc_dapm_force_enable_pin_unlocked);
++	struct snd_soc_dapm_context *dapm =
++		snd_soc_component_get_dapm(component);
++	return snd_soc_dapm_force_enable_pin_unlocked(dapm, pin);
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_component_force_enable_pin_unlocked);
+ 
+diff --git a/tools/virtio/Makefile b/tools/virtio/Makefile
+index b587b9a7a124b..0d7bbe49359d8 100644
+--- a/tools/virtio/Makefile
++++ b/tools/virtio/Makefile
+@@ -4,7 +4,8 @@ test: virtio_test vringh_test
+ virtio_test: virtio_ring.o virtio_test.o
+ vringh_test: vringh_test.o vringh.o virtio_ring.o
+ 
+-CFLAGS += -g -O2 -Werror -Wall -I. -I../include/ -I ../../usr/include/ -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -include ../../include/linux/kconfig.h
++CFLAGS += -g -O2 -Werror -Wno-maybe-uninitialized -Wall -I. -I../include/ -I ../../usr/include/ -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -include ../../include/linux/kconfig.h
++LDFLAGS += -lpthread
+ vpath %.c ../../drivers/virtio ../../drivers/vhost
+ mod:
+ 	${MAKE} -C `pwd`/../.. M=`pwd`/vhost_test V=${V}
+diff --git a/tools/virtio/linux/spinlock.h b/tools/virtio/linux/spinlock.h
+new file mode 100644
+index 0000000000000..028e3cdcc5d30
+--- /dev/null
++++ b/tools/virtio/linux/spinlock.h
+@@ -0,0 +1,56 @@
++#ifndef SPINLOCK_H_STUB
++#define SPINLOCK_H_STUB
++
++#include <pthread.h>
++
++typedef pthread_spinlock_t  spinlock_t;
++
++static inline void spin_lock_init(spinlock_t *lock)
++{
++	int r = pthread_spin_init(lock, 0);
++	assert(!r);
++}
++
++static inline void spin_lock(spinlock_t *lock)
++{
++	int ret = pthread_spin_lock(lock);
++	assert(!ret);
++}
++
++static inline void spin_unlock(spinlock_t *lock)
++{
++	int ret = pthread_spin_unlock(lock);
++	assert(!ret);
++}
++
++static inline void spin_lock_bh(spinlock_t *lock)
++{
++	spin_lock(lock);
++}
++
++static inline void spin_unlock_bh(spinlock_t *lock)
++{
++	spin_unlock(lock);
++}
++
++static inline void spin_lock_irq(spinlock_t *lock)
++{
++	spin_lock(lock);
++}
++
++static inline void spin_unlock_irq(spinlock_t *lock)
++{
++	spin_unlock(lock);
++}
++
++static inline void spin_lock_irqsave(spinlock_t *lock, unsigned long f)
++{
++	spin_lock(lock);
++}
++
++static inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long f)
++{
++	spin_unlock(lock);
++}
++
++#endif
+diff --git a/tools/virtio/linux/virtio.h b/tools/virtio/linux/virtio.h
+index 5d90254ddae47..363b982283011 100644
+--- a/tools/virtio/linux/virtio.h
++++ b/tools/virtio/linux/virtio.h
+@@ -3,6 +3,7 @@
+ #define LINUX_VIRTIO_H
+ #include <linux/scatterlist.h>
+ #include <linux/kernel.h>
++#include <linux/spinlock.h>
+ 
+ struct device {
+ 	void *parent;
+@@ -12,6 +13,7 @@ struct virtio_device {
+ 	struct device dev;
+ 	u64 features;
+ 	struct list_head vqs;
++	spinlock_t vqs_list_lock;
+ };
+ 
+ struct virtqueue {
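
One detail worth flagging in the tools/virtio hunks above: the new
linux/spinlock.h is a userspace shim that maps the kernel spinlock API
onto pthread spinlocks, which is what lets the vqs_list_lock field added
to struct virtio_device compile in the standalone virtio tests. A minimal
sketch of a consumer, assuming the usual -I tools/virtio include setup
(the test file, lock usage and build line below are illustrative, not
part of the patch; note the stub calls assert() without pulling in
<assert.h>, so a consumer has to include that first):

/* test.c -- build with something like: gcc -I tools/virtio test.c -lpthread */
#include <assert.h>		/* the stub uses assert() but does not include this */
#include <stdio.h>
#include <linux/spinlock.h>	/* resolves to the tools/virtio stub above */

static spinlock_t vqs_list_lock;	/* mirrors the field added to struct virtio_device */

int main(void)
{
	spin_lock_init(&vqs_list_lock);

	spin_lock_irq(&vqs_list_lock);		/* plain pthread_spin_lock() in userspace */
	puts("inside the critical section");
	spin_unlock_irq(&vqs_list_lock);	/* plain pthread_spin_unlock() */

	return 0;
}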



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-09-03 11:50 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-09-03 11:50 UTC (permalink / raw
  To: gentoo-commits

commit:     0c4e2e80c212207e945a4f8bbdf51a2578116f0c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep  3 11:49:55 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep  3 11:49:55 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0c4e2e80

Remove redundant patch

Removed: 2700_Bluetooth-usb-alt-3-for-WBS.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                            |  4 --
 2700_Bluetooth-usb-alt-3-for-WBS.patch | 84 ----------------------------------
 2 files changed, 88 deletions(-)

diff --git a/0000_README b/0000_README
index 4d2f6e6..2397f15 100644
--- a/0000_README
+++ b/0000_README
@@ -111,10 +111,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2700_Bluetooth-usb-alt-3-for-WBS.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git/commit/?id=55981d3541812234e687062926ff199c83f79a39
-Desc:   Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
-
 Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requires REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino

diff --git a/2700_Bluetooth-usb-alt-3-for-WBS.patch b/2700_Bluetooth-usb-alt-3-for-WBS.patch
deleted file mode 100644
index e0a67ea..0000000
--- a/2700_Bluetooth-usb-alt-3-for-WBS.patch
+++ /dev/null
@@ -1,84 +0,0 @@
-From 55981d3541812234e687062926ff199c83f79a39 Mon Sep 17 00:00:00 2001
-From: Pauli Virtanen <pav@iki.fi>
-Date: Mon, 26 Jul 2021 21:02:06 +0300
-Subject: Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-Some USB BT adapters don't satisfy the MTU requirement mentioned in
-commit e848dbd364ac ("Bluetooth: btusb: Add support USB ALT 3 for WBS")
-and have ALT 3 setting that produces no/garbled audio. Some adapters
-with larger MTU were also reported to have problems with ALT 3.
-
-Add a flag and check it and MTU before selecting ALT 3, falling back to
-ALT 1. Enable the flag for Realtek, restoring the previous behavior for
-non-Realtek devices.
-
-Tested with USB adapters (mtu<72, no/garbled sound with ALT3, ALT1
-works) BCM20702A1 0b05:17cb, CSR8510A10 0a12:0001, and (mtu>=72, ALT3
-works) RTL8761BU 0bda:8771, Intel AX200 8087:0029 (after disabling
-ALT6). Also got reports for (mtu>=72, ALT 3 reported to produce bad
-audio) Intel 8087:0a2b.
-
-Signed-off-by: Pauli Virtanen <pav@iki.fi>
-Fixes: e848dbd364ac ("Bluetooth: btusb: Add support USB ALT 3 for WBS")
-Tested-by: Michał Kępień <kernel@kempniu.pl>
-Tested-by: Jonathan Lampérth <jon@h4n.dev>
-Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
----
- drivers/bluetooth/btusb.c | 22 ++++++++++++++--------
- 1 file changed, 14 insertions(+), 8 deletions(-)
-
-diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
-index 488f110e17e27..2336f731dbc7e 100644
---- a/drivers/bluetooth/btusb.c
-+++ b/drivers/bluetooth/btusb.c
-@@ -528,6 +528,7 @@ static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
- #define BTUSB_HW_RESET_ACTIVE	12
- #define BTUSB_TX_WAIT_VND_EVT	13
- #define BTUSB_WAKEUP_DISABLE	14
-+#define BTUSB_USE_ALT3_FOR_WBS	15
- 
- struct btusb_data {
- 	struct hci_dev       *hdev;
-@@ -1761,16 +1762,20 @@ static void btusb_work(struct work_struct *work)
- 			/* Bluetooth USB spec recommends alt 6 (63 bytes), but
- 			 * many adapters do not support it.  Alt 1 appears to
- 			 * work for all adapters that do not have alt 6, and
--			 * which work with WBS at all.
-+			 * which work with WBS at all.  Some devices prefer
-+			 * alt 3 (HCI payload >= 60 Bytes let air packet
-+			 * data satisfy 60 bytes), requiring
-+			 * MTU >= 3 (packets) * 25 (size) - 3 (headers) = 72
-+			 * see also Core spec 5, vol 4, B 2.1.1 & Table 2.1.
- 			 */
--			new_alts = btusb_find_altsetting(data, 6) ? 6 : 1;
--			/* Because mSBC frames do not need to be aligned to the
--			 * SCO packet boundary. If support the Alt 3, use the
--			 * Alt 3 for HCI payload >= 60 Bytes let air packet
--			 * data satisfy 60 bytes.
--			 */
--			if (new_alts == 1 && btusb_find_altsetting(data, 3))
-+			if (btusb_find_altsetting(data, 6))
-+				new_alts = 6;
-+			else if (btusb_find_altsetting(data, 3) &&
-+				 hdev->sco_mtu >= 72 &&
-+				 test_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags))
- 				new_alts = 3;
-+			else
-+				new_alts = 1;
- 		}
- 
- 		if (btusb_switch_alt_setting(hdev, new_alts) < 0)
-@@ -3882,6 +3887,7 @@ static int btusb_probe(struct usb_interface *intf,
- 		 * (DEVICE_REMOTE_WAKEUP)
- 		 */
- 		set_bit(BTUSB_WAKEUP_DISABLE, &data->flags);
-+		set_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags);
- 	}
- 
- 	if (!reset)
--- 
-cgit 1.2.3-1.el7
-
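
A side note on the threshold in the patch removed above: the
hdev->sco_mtu >= 72 check was not arbitrary. As the removed comment
works it out (3 packets * 25 bytes - 3 bytes of headers = 72), a frame
at USB alt setting 3 carries three 25-byte SCO packets less the HCI
header, so a controller whose SCO MTU cannot cover 72 bytes of payload
gains nothing from alt 3 and is better served by the alt 1 fallback.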



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-09-08 13:55 Alice Ferrazzi
  0 siblings, 0 replies; 42+ messages in thread
From: Alice Ferrazzi @ 2021-09-08 13:55 UTC (permalink / raw
  To: gentoo-commits

commit:     341ed21bc7db1ba6cbb8b33a1fdd07671c378a9c
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Sep  8 12:45:28 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Sep  8 12:45:42 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=341ed21b

Linux patch 5.13.15

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README              |   4 +
 1014_linux-5.13.15.patch | 516 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 520 insertions(+)

diff --git a/0000_README b/0000_README
index 2397f15..19f95ee 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch:  1013_linux-5.13.14.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.14
 
+Patch:  1014_linux-5.13.15.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.15
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1014_linux-5.13.15.patch b/1014_linux-5.13.15.patch
new file mode 100644
index 0000000..6b9ad3c
--- /dev/null
+++ b/1014_linux-5.13.15.patch
@@ -0,0 +1,516 @@
+diff --git a/Makefile b/Makefile
+index de1f9c79e27ab..d0ea05957da61 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/riscv/boot/dts/microchip/microchip-mpfs-icicle-kit.dts b/arch/riscv/boot/dts/microchip/microchip-mpfs-icicle-kit.dts
+index ec79944065c98..baea7d204639a 100644
+--- a/arch/riscv/boot/dts/microchip/microchip-mpfs-icicle-kit.dts
++++ b/arch/riscv/boot/dts/microchip/microchip-mpfs-icicle-kit.dts
+@@ -14,6 +14,10 @@
+ 	model = "Microchip PolarFire-SoC Icicle Kit";
+ 	compatible = "microchip,mpfs-icicle-kit";
+ 
++	aliases {
++		ethernet0 = &emac1;
++	};
++
+ 	chosen {
+ 		stdout-path = &serial0;
+ 	};
+diff --git a/arch/riscv/boot/dts/microchip/microchip-mpfs.dtsi b/arch/riscv/boot/dts/microchip/microchip-mpfs.dtsi
+index b9819570a7d17..9d2fbbc1f7778 100644
+--- a/arch/riscv/boot/dts/microchip/microchip-mpfs.dtsi
++++ b/arch/riscv/boot/dts/microchip/microchip-mpfs.dtsi
+@@ -317,7 +317,7 @@
+ 			reg = <0x0 0x20112000 0x0 0x2000>;
+ 			interrupt-parent = <&plic>;
+ 			interrupts = <70 71 72 73>;
+-			mac-address = [00 00 00 00 00 00];
++			local-mac-address = [00 00 00 00 00 00];
+ 			clocks = <&clkcfg 5>, <&clkcfg 2>;
+ 			status = "disabled";
+ 			clock-names = "pclk", "hclk";
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index 40669eac9d6db..921f47b9bb247 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -90,6 +90,7 @@ struct perf_ibs {
+ 	unsigned long			offset_mask[1];
+ 	int				offset_max;
+ 	unsigned int			fetch_count_reset_broken : 1;
++	unsigned int			fetch_ignore_if_zero_rip : 1;
+ 	struct cpu_perf_ibs __percpu	*pcpu;
+ 
+ 	struct attribute		**format_attrs;
+@@ -672,6 +673,10 @@ fail:
+ 	if (check_rip && (ibs_data.regs[2] & IBS_RIP_INVALID)) {
+ 		regs.flags &= ~PERF_EFLAGS_EXACT;
+ 	} else {
++		/* Workaround for erratum #1197 */
++		if (perf_ibs->fetch_ignore_if_zero_rip && !(ibs_data.regs[1]))
++			goto out;
++
+ 		set_linear_ip(&regs, ibs_data.regs[1]);
+ 		regs.flags |= PERF_EFLAGS_EXACT;
+ 	}
+@@ -769,6 +774,9 @@ static __init void perf_event_ibs_init(void)
+ 	if (boot_cpu_data.x86 >= 0x16 && boot_cpu_data.x86 <= 0x18)
+ 		perf_ibs_fetch.fetch_count_reset_broken = 1;
+ 
++	if (boot_cpu_data.x86 == 0x19 && boot_cpu_data.x86_model < 0x10)
++		perf_ibs_fetch.fetch_ignore_if_zero_rip = 1;
++
+ 	perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
+ 
+ 	if (ibs_caps & IBS_CAPS_OPCNT) {
+diff --git a/arch/x86/events/amd/power.c b/arch/x86/events/amd/power.c
+index 16a2369c586e8..37d5b380516ec 100644
+--- a/arch/x86/events/amd/power.c
++++ b/arch/x86/events/amd/power.c
+@@ -213,6 +213,7 @@ static struct pmu pmu_class = {
+ 	.stop		= pmu_event_stop,
+ 	.read		= pmu_event_read,
+ 	.capabilities	= PERF_PMU_CAP_NO_EXCLUDE,
++	.module		= THIS_MODULE,
+ };
+ 
+ static int power_cpu_exit(unsigned int cpu)
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 915847655c065..b044577785bbb 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -62,7 +62,7 @@ static struct pt_cap_desc {
+ 	PT_CAP(single_range_output,	0, CPUID_ECX, BIT(2)),
+ 	PT_CAP(output_subsys,		0, CPUID_ECX, BIT(3)),
+ 	PT_CAP(payloads_lip,		0, CPUID_ECX, BIT(31)),
+-	PT_CAP(num_address_ranges,	1, CPUID_EAX, 0x3),
++	PT_CAP(num_address_ranges,	1, CPUID_EAX, 0x7),
+ 	PT_CAP(mtc_periods,		1, CPUID_EAX, 0xffff0000),
+ 	PT_CAP(cycle_thresholds,	1, CPUID_EBX, 0xffff),
+ 	PT_CAP(psb_periods,		1, CPUID_EBX, 0xffff0000),
+diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
+index 2332b21569938..1bdb55c2d0c14 100644
+--- a/arch/xtensa/Kconfig
++++ b/arch/xtensa/Kconfig
+@@ -30,7 +30,7 @@ config XTENSA
+ 	select HAVE_DMA_CONTIGUOUS
+ 	select HAVE_EXIT_THREAD
+ 	select HAVE_FUNCTION_TRACER
+-	select HAVE_FUTEX_CMPXCHG if !MMU
++	select HAVE_FUTEX_CMPXCHG if !MMU && FUTEX
+ 	select HAVE_HW_BREAKPOINT if PERF_EVENTS
+ 	select HAVE_IRQ_TIME_ACCOUNTING
+ 	select HAVE_PCI
+diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
+index 63056cfd4b62c..fbb3a558139fc 100644
+--- a/drivers/block/Kconfig
++++ b/drivers/block/Kconfig
+@@ -213,7 +213,7 @@ config BLK_DEV_LOOP_MIN_COUNT
+ 	  dynamically allocated with the /dev/loop-control interface.
+ 
+ config BLK_DEV_CRYPTOLOOP
+-	tristate "Cryptoloop Support"
++	tristate "Cryptoloop Support (DEPRECATED)"
+ 	select CRYPTO
+ 	select CRYPTO_CBC
+ 	depends on BLK_DEV_LOOP
+@@ -225,7 +225,7 @@ config BLK_DEV_CRYPTOLOOP
+ 	  WARNING: This device is not safe for journaled file systems like
+ 	  ext3 or Reiserfs. Please use the Device Mapper crypto module
+ 	  instead, which can be configured to be on-disk compatible with the
+-	  cryptoloop device.
++	  cryptoloop device.  cryptoloop support will be removed in Linux 5.16.
+ 
+ source "drivers/block/drbd/Kconfig"
+ 
+diff --git a/drivers/block/cryptoloop.c b/drivers/block/cryptoloop.c
+index 3cabc335ae744..f0a91faa43a89 100644
+--- a/drivers/block/cryptoloop.c
++++ b/drivers/block/cryptoloop.c
+@@ -189,6 +189,8 @@ init_cryptoloop(void)
+ 
+ 	if (rc)
+ 		printk(KERN_ERR "cryptoloop: loop_register_transfer failed\n");
++	else
++		pr_warn("the cryptoloop driver has been deprecated and will be removed in Linux 5.16\n");
+ 	return rc;
+ }
+ 
+diff --git a/drivers/gpu/ipu-v3/ipu-cpmem.c b/drivers/gpu/ipu-v3/ipu-cpmem.c
+index a1c85d1521f5c..82b244cb313e6 100644
+--- a/drivers/gpu/ipu-v3/ipu-cpmem.c
++++ b/drivers/gpu/ipu-v3/ipu-cpmem.c
+@@ -585,21 +585,21 @@ static const struct ipu_rgb def_bgra_16 = {
+ 	.bits_per_pixel = 16,
+ };
+ 
+-#define Y_OFFSET(pix, x, y)	((x) + pix->width * (y))
+-#define U_OFFSET(pix, x, y)	((pix->width * pix->height) +		\
+-				 (pix->width * ((y) / 2) / 2) + (x) / 2)
+-#define V_OFFSET(pix, x, y)	((pix->width * pix->height) +		\
+-				 (pix->width * pix->height / 4) +	\
+-				 (pix->width * ((y) / 2) / 2) + (x) / 2)
+-#define U2_OFFSET(pix, x, y)	((pix->width * pix->height) +		\
+-				 (pix->width * (y) / 2) + (x) / 2)
+-#define V2_OFFSET(pix, x, y)	((pix->width * pix->height) +		\
+-				 (pix->width * pix->height / 2) +	\
+-				 (pix->width * (y) / 2) + (x) / 2)
+-#define UV_OFFSET(pix, x, y)	((pix->width * pix->height) +	\
+-				 (pix->width * ((y) / 2)) + (x))
+-#define UV2_OFFSET(pix, x, y)	((pix->width * pix->height) +	\
+-				 (pix->width * y) + (x))
++#define Y_OFFSET(pix, x, y)	((x) + pix->bytesperline * (y))
++#define U_OFFSET(pix, x, y)	((pix->bytesperline * pix->height) +	 \
++				 (pix->bytesperline * ((y) / 2) / 2) + (x) / 2)
++#define V_OFFSET(pix, x, y)	((pix->bytesperline * pix->height) +	 \
++				 (pix->bytesperline * pix->height / 4) + \
++				 (pix->bytesperline * ((y) / 2) / 2) + (x) / 2)
++#define U2_OFFSET(pix, x, y)	((pix->bytesperline * pix->height) +	 \
++				 (pix->bytesperline * (y) / 2) + (x) / 2)
++#define V2_OFFSET(pix, x, y)	((pix->bytesperline * pix->height) +	 \
++				 (pix->bytesperline * pix->height / 2) + \
++				 (pix->bytesperline * (y) / 2) + (x) / 2)
++#define UV_OFFSET(pix, x, y)	((pix->bytesperline * pix->height) +	 \
++				 (pix->bytesperline * ((y) / 2)) + (x))
++#define UV2_OFFSET(pix, x, y)	((pix->bytesperline * pix->height) +	 \
++				 (pix->bytesperline * y) + (x))
+ 
+ #define NUM_ALPHA_CHANNELS	7
+ 
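
The hunk above swaps pix->width for pix->bytesperline in the YUV plane-offset
macros. A standalone sketch (plain userspace C; the frame dimensions are
invented for illustration) of why that matters once rows carry padding:

    /* With padded rows, the chroma planes start after
     * bytesperline * height bytes, not width * height. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int width = 1280, height = 720, bytesperline = 1536;

        unsigned int wrong_u_offset = width * height;        /*  921600 */
        unsigned int right_u_offset = bytesperline * height; /* 1105920 */

        printf("U plane starts at %u, not %u\n",
               right_u_offset, wrong_u_offset);
        return 0;
    }
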
+diff --git a/drivers/media/usb/stkwebcam/stk-webcam.c b/drivers/media/usb/stkwebcam/stk-webcam.c
+index a45d464427c4c..0e231e576dc3d 100644
+--- a/drivers/media/usb/stkwebcam/stk-webcam.c
++++ b/drivers/media/usb/stkwebcam/stk-webcam.c
+@@ -1346,7 +1346,7 @@ static int stk_camera_probe(struct usb_interface *interface,
+ 	if (!dev->isoc_ep) {
+ 		pr_err("Could not find isoc-in endpoint\n");
+ 		err = -ENODEV;
+-		goto error;
++		goto error_put;
+ 	}
+ 	dev->vsettings.palette = V4L2_PIX_FMT_RGB565;
+ 	dev->vsettings.mode = MODE_VGA;
+@@ -1359,10 +1359,12 @@ static int stk_camera_probe(struct usb_interface *interface,
+ 
+ 	err = stk_register_video_device(dev);
+ 	if (err)
+-		goto error;
++		goto error_put;
+ 
+ 	return 0;
+ 
++error_put:
++	usb_put_intf(interface);
+ error:
+ 	v4l2_ctrl_handler_free(hdl);
+ 	v4l2_device_unregister(&dev->v4l2_dev);
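
The fix above is the usual layered-unwind pattern: a reference taken early in
probe must be dropped on every later failure path, so a new label is added
above the existing one. A minimal sketch with hypothetical helpers
(get_resource()/put_resource() stand in for usb_get_intf()/usb_put_intf()):

    int probe_sketch(struct device *dev)
    {
        int err;

        err = get_resource(dev);      /* e.g. takes a reference      */
        if (err)
            return err;

        err = register_things(dev);
        if (err)
            goto err_put;             /* later failures must drop it */

        return 0;

    err_put:
        put_resource(dev);            /* pairs with get_resource()   */
        return err;
    }
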
+diff --git a/drivers/net/dsa/mv88e6xxx/serdes.c b/drivers/net/dsa/mv88e6xxx/serdes.c
+index b1d46dd8eaabc..6ea0036787986 100644
+--- a/drivers/net/dsa/mv88e6xxx/serdes.c
++++ b/drivers/net/dsa/mv88e6xxx/serdes.c
+@@ -1277,15 +1277,16 @@ static int mv88e6393x_serdes_port_errata(struct mv88e6xxx_chip *chip, int lane)
+ 	int err;
+ 
+ 	/* mv88e6393x family errata 4.6:
+-	 * Cannot clear PwrDn bit on SERDES on port 0 if device is configured
+-	 * CPU_MGD mode or P0_mode is configured for [x]MII.
+-	 * Workaround: Set Port0 SERDES register 4.F002 bit 5=0 and bit 15=1.
++	 * Cannot clear PwrDn bit on SERDES if device is configured CPU_MGD
++	 * mode or P0_mode is configured for [x]MII.
++	 * Workaround: Set SERDES register 4.F002 bit 5=0 and bit 15=1.
+ 	 *
+ 	 * It seems that after this workaround the SERDES is automatically
+ 	 * powered up (the bit is cleared), so power it down.
+ 	 */
+-	if (lane == MV88E6393X_PORT0_LANE) {
+-		err = mv88e6390_serdes_read(chip, MV88E6393X_PORT0_LANE,
++	if (lane == MV88E6393X_PORT0_LANE || lane == MV88E6393X_PORT9_LANE ||
++	    lane == MV88E6393X_PORT10_LANE) {
++		err = mv88e6390_serdes_read(chip, lane,
+ 					    MDIO_MMD_PHYXS,
+ 					    MV88E6393X_SERDES_POC, &reg);
+ 		if (err)
+diff --git a/drivers/net/ethernet/cadence/macb_ptp.c b/drivers/net/ethernet/cadence/macb_ptp.c
+index 283918aeb741d..09d64a29f56e3 100644
+--- a/drivers/net/ethernet/cadence/macb_ptp.c
++++ b/drivers/net/ethernet/cadence/macb_ptp.c
+@@ -275,6 +275,12 @@ void gem_ptp_rxstamp(struct macb *bp, struct sk_buff *skb,
+ 
+ 	if (GEM_BFEXT(DMA_RXVALID, desc->addr)) {
+ 		desc_ptp = macb_ptp_desc(bp, desc);
++		/* Unlikely but check */
++		if (!desc_ptp) {
++			dev_warn_ratelimited(&bp->pdev->dev,
++					     "Timestamp not supported in BD\n");
++			return;
++		}
+ 		gem_hw_timestamp(bp, desc_ptp->ts_1, desc_ptp->ts_2, &ts);
+ 		memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps));
+ 		shhwtstamps->hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec);
+@@ -307,8 +313,11 @@ int gem_ptp_txstamp(struct macb_queue *queue, struct sk_buff *skb,
+ 	if (CIRC_SPACE(head, tail, PTP_TS_BUFFER_SIZE) == 0)
+ 		return -ENOMEM;
+ 
+-	skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ 	desc_ptp = macb_ptp_desc(queue->bp, desc);
++	/* Unlikely but check */
++	if (!desc_ptp)
++		return -EINVAL;
++	skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ 	tx_timestamp = &queue->tx_timestamps[head];
+ 	tx_timestamp->skb = skb;
+ 	/* ensure ts_1/ts_2 is loaded after ctrl (TX_USED check) */
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
+index 5bd58c65e1631..6bb9ec98a12b5 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
+@@ -616,7 +616,12 @@ static int qed_enable_msix(struct qed_dev *cdev,
+ 			rc = cnt;
+ 	}
+ 
+-	if (rc > 0) {
++	/* For VFs, we should return with an error in case we didn't get the
++	 * exact number of msix vectors as we requested.
++	 * Not doing that will lead to a crash when starting queues for
++	 * this VF.
++	 */
++	if ((IS_PF(cdev) && rc > 0) || (IS_VF(cdev) && rc == cnt)) {
+ 		/* MSI-x configuration was achieved */
+ 		int_params->out.int_mode = QED_INT_MODE_MSIX;
+ 		int_params->out.num_vectors = rc;
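
In other words: pci_enable_msix_range() can legitimately return fewer vectors
than requested, and only the PF can cope with that. A sketch of the resulting
decision (pdev, entries, want, is_pf and the two helpers are hypothetical):

    int got = pci_enable_msix_range(pdev, entries, 1, want);

    if ((is_pf && got > 0) || (!is_pf && got == want))
        use_msix(got);        /* PF: any count; VF: exact count only   */
    else
        fall_back_to_msi();   /* a partial allocation is fatal for VFs */
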
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index 7c6064baeba28..1c7f9ed6f1c19 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -1874,6 +1874,7 @@ static void qede_sync_free_irqs(struct qede_dev *edev)
+ 	}
+ 
+ 	edev->int_info.used_cnt = 0;
++	edev->int_info.msix_cnt = 0;
+ }
+ 
+ static int qede_req_msix_irqs(struct qede_dev *edev)
+@@ -2427,7 +2428,6 @@ static int qede_load(struct qede_dev *edev, enum qede_load_mode mode,
+ 	goto out;
+ err4:
+ 	qede_sync_free_irqs(edev);
+-	memset(&edev->int_info.msix_cnt, 0, sizeof(struct qed_int_info));
+ err3:
+ 	qede_napi_disable_remove(edev);
+ err2:
+diff --git a/drivers/reset/reset-zynqmp.c b/drivers/reset/reset-zynqmp.c
+index ebd433fa09dd7..8c51768e9a720 100644
+--- a/drivers/reset/reset-zynqmp.c
++++ b/drivers/reset/reset-zynqmp.c
+@@ -53,7 +53,8 @@ static int zynqmp_reset_status(struct reset_controller_dev *rcdev,
+ 			       unsigned long id)
+ {
+ 	struct zynqmp_reset_data *priv = to_zynqmp_reset_data(rcdev);
+-	int val, err;
++	int err;
++	u32 val;
+ 
+ 	err = zynqmp_pm_reset_get_status(priv->data->reset_id + id, &val);
+ 	if (err)
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index ea2e2d925a960..737748529482a 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -1110,10 +1110,8 @@ static int cp210x_set_chars(struct usb_serial_port *port,
+ 
+ 	kfree(dmabuf);
+ 
+-	if (result < 0) {
+-		dev_err(&port->dev, "failed to set special chars: %d\n", result);
++	if (result < 0)
+ 		return result;
+-	}
+ 
+ 	return 0;
+ }
+@@ -1138,6 +1136,7 @@ static void cp210x_set_flow_control(struct tty_struct *tty,
+ 	struct cp210x_flow_ctl flow_ctl;
+ 	u32 flow_repl;
+ 	u32 ctl_hs;
++	bool crtscts;
+ 	int ret;
+ 
+ 	/*
+@@ -1165,8 +1164,10 @@ static void cp210x_set_flow_control(struct tty_struct *tty,
+ 		chars.bXoffChar = STOP_CHAR(tty);
+ 
+ 		ret = cp210x_set_chars(port, &chars);
+-		if (ret)
+-			return;
++		if (ret) {
++			dev_err(&port->dev, "failed to set special chars: %d\n",
++					ret);
++		}
+ 	}
+ 
+ 	mutex_lock(&port_priv->mutex);
+@@ -1195,14 +1196,14 @@ static void cp210x_set_flow_control(struct tty_struct *tty,
+ 			flow_repl |= CP210X_SERIAL_RTS_FLOW_CTL;
+ 		else
+ 			flow_repl |= CP210X_SERIAL_RTS_INACTIVE;
+-		port_priv->crtscts = true;
++		crtscts = true;
+ 	} else {
+ 		ctl_hs &= ~CP210X_SERIAL_CTS_HANDSHAKE;
+ 		if (port_priv->rts)
+ 			flow_repl |= CP210X_SERIAL_RTS_ACTIVE;
+ 		else
+ 			flow_repl |= CP210X_SERIAL_RTS_INACTIVE;
+-		port_priv->crtscts = false;
++		crtscts = false;
+ 	}
+ 
+ 	if (I_IXOFF(tty)) {
+@@ -1225,8 +1226,12 @@ static void cp210x_set_flow_control(struct tty_struct *tty,
+ 	flow_ctl.ulControlHandshake = cpu_to_le32(ctl_hs);
+ 	flow_ctl.ulFlowReplace = cpu_to_le32(flow_repl);
+ 
+-	cp210x_write_reg_block(port, CP210X_SET_FLOW, &flow_ctl,
++	ret = cp210x_write_reg_block(port, CP210X_SET_FLOW, &flow_ctl,
+ 			sizeof(flow_ctl));
++	if (ret)
++		goto out_unlock;
++
++	port_priv->crtscts = crtscts;
+ out_unlock:
+ 	mutex_unlock(&port_priv->mutex);
+ }
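
The cp210x change is a stage-then-commit pattern: compute the new crtscts
state in a local variable, attempt the register write, and only update the
cached driver state once the write has succeeded, so the cache can never
disagree with the hardware. Compressed sketch (want_hw_flow() and
write_flow_regs() are hypothetical stand-ins):

    bool crtscts = want_hw_flow(tty);
    int ret;

    ret = write_flow_regs(port, crtscts);   /* may fail               */
    if (ret)
        goto out_unlock;                    /* cached state untouched */

    port_priv->crtscts = crtscts;           /* commit only on success */
    out_unlock:
        mutex_unlock(&port_priv->mutex);
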
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 2ce9cbf49e974..3b579966fe735 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -433,6 +433,7 @@ static int pl2303_detect_type(struct usb_serial *serial)
+ 		switch (bcdDevice) {
+ 		case 0x100:
+ 		case 0x305:
++		case 0x405:
+ 			/*
+ 			 * Assume it's an HXN-type if the device doesn't
+ 			 * support the old read request value.
+diff --git a/fs/ceph/mdsmap.c b/fs/ceph/mdsmap.c
+index abd9af7727ad3..3c444b9cb17b8 100644
+--- a/fs/ceph/mdsmap.c
++++ b/fs/ceph/mdsmap.c
+@@ -394,9 +394,11 @@ void ceph_mdsmap_destroy(struct ceph_mdsmap *m)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < m->possible_max_rank; i++)
+-		kfree(m->m_info[i].export_targets);
+-	kfree(m->m_info);
++	if (m->m_info) {
++		for (i = 0; i < m->possible_max_rank; i++)
++			kfree(m->m_info[i].export_targets);
++		kfree(m->m_info);
++	}
+ 	kfree(m->m_data_pg_pools);
+ 	kfree(m);
+ }
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 3cf01629010d9..0e85447022ae0 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -750,6 +750,12 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ 	ext4_write_lock_xattr(inode, &no_expand);
+ 	BUG_ON(!ext4_has_inline_data(inode));
+ 
++	/*
++	 * ei->i_inline_off may have changed since ext4_write_begin()
++	 * called ext4_try_to_write_inline_data()
++	 */
++	(void) ext4_find_inline_data_nolock(inode);
++
+ 	kaddr = kmap_atomic(page);
+ 	ext4_write_inline_data(inode, &iloc, kaddr, pos, len);
+ 	kunmap_atomic(kaddr);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 6a4e040ea9b3a..079bb0f6b3438 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -5051,6 +5051,14 @@ no_journal:
+ 		err = percpu_counter_init(&sbi->s_freeinodes_counter, freei,
+ 					  GFP_KERNEL);
+ 	}
++	/*
++	 * Update the checksum after updating free space/inode
++	 * counters.  Otherwise the superblock can have an incorrect
++	 * checksum in the buffer cache until it is written out and
++	 * e2fsprogs programs trying to open a file system immediately
++	 * after it is mounted can fail.
++	 */
++	ext4_superblock_csum_set(sb);
+ 	if (!err)
+ 		err = percpu_counter_init(&sbi->s_dirs_counter,
+ 					  ext4_count_dirs(sb), GFP_KERNEL);
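
The rule the comment states generalizes: a structure that embeds its own
checksum must have that checksum recomputed after any field change, before
other readers can observe the buffer (here, e2fsprogs reading the buffer
cache). Schematically, with hypothetical types and field names:

    sb->s_free_blocks = new_free_blocks;   /* mutate fields first... */
    sb->s_free_inodes = new_free_inodes;
    sb->s_checksum = crc32c(seed, sb,      /* ...then re-seal it     */
                            offsetof(struct my_sb, s_checksum));
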
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index b7e3d8f445113..23c58b62a58a5 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1746,7 +1746,7 @@ static int snd_pcm_lib_ioctl_fifo_size(struct snd_pcm_substream *substream,
+ 		channels = params_channels(params);
+ 		frame_size = snd_pcm_format_size(format, channels);
+ 		if (frame_size > 0)
+-			params->fifo_size /= (unsigned)frame_size;
++			params->fifo_size /= frame_size;
+ 	}
+ 	return 0;
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 0c6be85098558..82191d8f3d217 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8378,6 +8378,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x87f2, "HP ProBook 640 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x87f6, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP),
+ 	SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+@@ -9456,6 +9457,16 @@ static int patch_alc269(struct hda_codec *codec)
+ 
+ 	snd_hda_pick_fixup(codec, alc269_fixup_models,
+ 		       alc269_fixup_tbl, alc269_fixups);
++	/* FIXME: both TX300 and ROG Strix G17 have the same SSID, and
++	 * the quirk breaks the latter (bko#214101).
++	 * Clear the wrong entry.
++	 */
++	if (codec->fixup_id == ALC282_FIXUP_ASUS_TX300 &&
++	    codec->core.vendor_id == 0x10ec0294) {
++		codec_dbg(codec, "Clear wrong fixup for ASUS ROG Strix G17\n");
++		codec->fixup_id = HDA_FIXUP_ID_NOT_SET;
++	}
++
+ 	snd_hda_pick_pin_fixup(codec, alc269_pin_fixup_tbl, alc269_fixups, true);
+ 	snd_hda_pick_pin_fixup(codec, alc269_fallback_pin_fixup_tbl, alc269_fixups, false);
+ 	snd_hda_pick_fixup(codec, NULL,	alc269_fixup_vendor_tbl,
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index 014c438628262..3cb1a584bf80f 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -1286,6 +1286,11 @@ int snd_usb_endpoint_configure(struct snd_usb_audio *chip,
+ 	 * to be set up before parameter setups
+ 	 */
+ 	iface_first = ep->cur_audiofmt->protocol == UAC_VERSION_1;
++	/* Workaround for Sony WALKMAN NW-A45 DAC;
++	 * it requires the interface setup at first like UAC1
++	 */
++	if (chip->usb_id == USB_ID(0x054c, 0x0b8c))
++		iface_first = true;
+ 	if (iface_first) {
+ 		err = endpoint_set_interface(chip, ep, true);
+ 		if (err < 0)



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-09-12 14:37 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-09-12 14:37 UTC (permalink / raw
  To: gentoo-commits

commit:     e5d55becdd2596999153da64a1d206971545d1f0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Sep 12 14:36:51 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Sep 12 14:36:51 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e5d55bec

Linux patch 5.13.16

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1015_linux-5.13.16.patch | 1015 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1019 insertions(+)

diff --git a/0000_README b/0000_README
index 19f95ee..640800f 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1014_linux-5.13.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.15
 
+Patch:  1015_linux-5.13.16.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-5.13.16.patch b/1015_linux-5.13.16.patch
new file mode 100644
index 0000000..51bd586
--- /dev/null
+++ b/1015_linux-5.13.16.patch
@@ -0,0 +1,1015 @@
+diff --git a/Makefile b/Makefile
+index d0ea05957da61..cbb2f35baedbc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index b29657b76e3fa..798a6f73f8946 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -388,10 +388,11 @@ static const struct dmi_system_id reboot_dmi_table[] __initconst = {
+ 	},
+ 	{	/* Handle problems with rebooting on the OptiPlex 990. */
+ 		.callback = set_pci_reboot,
+-		.ident = "Dell OptiPlex 990",
++		.ident = "Dell OptiPlex 990 BIOS A0x",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 990"),
++			DMI_MATCH(DMI_BIOS_VERSION, "A0"),
+ 		},
+ 	},
+ 	{	/* Handle problems with rebooting on Dell 300's */
+diff --git a/block/blk-core.c b/block/blk-core.c
+index ce0125efbaa76..9dce692e25a22 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -122,7 +122,6 @@ void blk_rq_init(struct request_queue *q, struct request *rq)
+ 	rq->internal_tag = BLK_MQ_NO_TAG;
+ 	rq->start_time_ns = ktime_get_ns();
+ 	rq->part = NULL;
+-	refcount_set(&rq->ref, 1);
+ 	blk_crypto_rq_set_defaults(rq);
+ }
+ EXPORT_SYMBOL(blk_rq_init);
+diff --git a/block/blk-flush.c b/block/blk-flush.c
+index 1002f6c581816..4201728bf3a5a 100644
+--- a/block/blk-flush.c
++++ b/block/blk-flush.c
+@@ -262,6 +262,11 @@ static void flush_end_io(struct request *flush_rq, blk_status_t error)
+ 	spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
+ }
+ 
++bool is_flush_rq(struct request *rq)
++{
++	return rq->end_io == flush_end_io;
++}
++
+ /**
+  * blk_kick_flush - consider issuing flush request
+  * @q: request_queue being kicked
+@@ -329,6 +334,14 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
+ 	flush_rq->rq_flags |= RQF_FLUSH_SEQ;
+ 	flush_rq->rq_disk = first_rq->rq_disk;
+ 	flush_rq->end_io = flush_end_io;
++	/*
++	 * Order WRITE ->end_io and WRITE rq->ref, and its pair is the one
++	 * implied in refcount_inc_not_zero() called from
++	 * blk_mq_find_and_get_req(), which orders WRITE/READ flush_rq->ref
++	 * and READ flush_rq->end_io
++	 */
++	smp_wmb();
++	refcount_set(&flush_rq->ref, 1);
+ 
+ 	blk_flush_queue_rq(flush_rq, false);
+ }
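
The smp_wmb()/refcount comment above describes a publish pattern: fully
initialize the request (including ->end_io), order those writes, then set the
refcount so any reader that wins refcount_inc_not_zero() is guaranteed to see
the initialized fields. A userspace C11 analogue of the same pairing
(illustrative only; the kernel primitives differ in detail):

    #include <stdatomic.h>
    #include <stdbool.h>

    struct obj {
        void (*end_io)(struct obj *);
        atomic_int ref;                      /* 0 = not yet published */
    };

    void publish(struct obj *o, void (*fn)(struct obj *))
    {
        o->end_io = fn;                      /* write the data...     */
        atomic_store_explicit(&o->ref, 1,    /* ...then release it    */
                              memory_order_release);
    }

    bool try_get(struct obj *o)              /* ~refcount_inc_not_zero */
    {
        int r = atomic_load_explicit(&o->ref, memory_order_acquire);

        while (r != 0) {
            if (atomic_compare_exchange_weak_explicit(&o->ref, &r, r + 1,
                    memory_order_acquire, memory_order_acquire))
                return true;                 /* o->end_io now visible */
        }
        return false;
    }
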
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 6dfa572ac1fc1..74fde07721e88 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -911,7 +911,7 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
+ 
+ void blk_mq_put_rq_ref(struct request *rq)
+ {
+-	if (is_flush_rq(rq, rq->mq_hctx))
++	if (is_flush_rq(rq))
+ 		rq->end_io(rq, 0);
+ 	else if (refcount_dec_and_test(&rq->ref))
+ 		__blk_mq_free_request(rq);
+@@ -2620,16 +2620,49 @@ static void blk_mq_remove_cpuhp(struct blk_mq_hw_ctx *hctx)
+ 					    &hctx->cpuhp_dead);
+ }
+ 
++/*
++ * Before freeing the hw queue, clear the flush request reference in
++ * tags->rqs[] to avoid a potential use-after-free.
++ */
++static void blk_mq_clear_flush_rq_mapping(struct blk_mq_tags *tags,
++		unsigned int queue_depth, struct request *flush_rq)
++{
++	int i;
++	unsigned long flags;
++
++	/* The hw queue may not be mapped yet */
++	if (!tags)
++		return;
++
++	WARN_ON_ONCE(refcount_read(&flush_rq->ref) != 0);
++
++	for (i = 0; i < queue_depth; i++)
++		cmpxchg(&tags->rqs[i], flush_rq, NULL);
++
++	/*
++	 * Wait until all pending iteration is done.
++	 *
++	 * Request reference is cleared and it is guaranteed to be observed
++	 * after the ->lock is released.
++	 */
++	spin_lock_irqsave(&tags->lock, flags);
++	spin_unlock_irqrestore(&tags->lock, flags);
++}
++
+ /* hctx->ctxs will be freed in queue's release handler */
+ static void blk_mq_exit_hctx(struct request_queue *q,
+ 		struct blk_mq_tag_set *set,
+ 		struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
+ {
++	struct request *flush_rq = hctx->fq->flush_rq;
++
+ 	if (blk_mq_hw_queue_mapped(hctx))
+ 		blk_mq_tag_idle(hctx);
+ 
++	blk_mq_clear_flush_rq_mapping(set->tags[hctx_idx],
++			set->queue_depth, flush_rq);
+ 	if (set->ops->exit_request)
+-		set->ops->exit_request(set, hctx->fq->flush_rq, hctx_idx);
++		set->ops->exit_request(set, flush_rq, hctx_idx);
+ 
+ 	if (set->ops->exit_hctx)
+ 		set->ops->exit_hctx(hctx, hctx_idx);
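
blk_mq_clear_flush_rq_mapping() combines two idioms: cmpxchg() to clear only
the slots that still point at the flush request, and an empty lock/unlock of
tags->lock as a barrier that waits out any iterator still running under that
lock. Kernel-style pseudocode of the idea (table/obj are hypothetical):

    /* Readers walk table->slots[] under table->lock.  After clearing
     * every pointer to obj, one empty critical section guarantees no
     * reader can still be looking at it, so it is safe to free. */
    for (i = 0; i < n; i++)
        cmpxchg(&table->slots[i], obj, NULL);

    spin_lock(&table->lock);
    spin_unlock(&table->lock);   /* all in-flight iterations drained */
    free_object(obj);
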
+diff --git a/block/blk.h b/block/blk.h
+index 8b3591aee0a5f..54d48987c21b2 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -44,11 +44,7 @@ static inline void __blk_get_queue(struct request_queue *q)
+ 	kobject_get(&q->kobj);
+ }
+ 
+-static inline bool
+-is_flush_rq(struct request *req, struct blk_mq_hw_ctx *hctx)
+-{
+-	return hctx->fq->flush_rq == req;
+-}
++bool is_flush_rq(struct request *req);
+ 
+ struct blk_flush_queue *blk_alloc_flush_queue(int node, int cmd_size,
+ 					      gfp_t flags);
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index c6b48160343ab..9122f9cc97cbe 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -443,6 +443,10 @@ static const struct usb_device_id blacklist_table[] = {
+ 	/* Additional Realtek 8822CE Bluetooth devices */
+ 	{ USB_DEVICE(0x04ca, 0x4005), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
++	/* Bluetooth component of Realtek 8852AE device */
++	{ USB_DEVICE(0x04ca, 0x4006), .driver_info = BTUSB_REALTEK |
++						     BTUSB_WIDEBAND_SPEECH },
++
+ 	{ USB_DEVICE(0x04c5, 0x161f), .driver_info = BTUSB_REALTEK |
+ 						     BTUSB_WIDEBAND_SPEECH },
+ 	{ USB_DEVICE(0x0b05, 0x18ef), .driver_info = BTUSB_REALTEK |
+@@ -1886,7 +1890,7 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ 		is_fake = true;
+ 
+ 	if (is_fake) {
+-		bt_dev_warn(hdev, "CSR: Unbranded CSR clone detected; adding workarounds...");
++		bt_dev_warn(hdev, "CSR: Unbranded CSR clone detected; adding workarounds and force-suspending once...");
+ 
+ 		/* Generally these clones have big discrepancies between
+ 		 * advertised features and what's actually supported.
+@@ -1903,41 +1907,46 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ 		clear_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
+ 
+ 		/*
+-		 * Special workaround for clones with a Barrot 8041a02 chip,
+-		 * these clones are really messed-up:
+-		 * 1. Their bulk rx endpoint will never report any data unless
+-		 * the device was suspended at least once (yes really).
++		 * Special workaround for these BT 4.0 chip clones, and potentially more:
++		 *
++		 * - 0x0134: a Barrot 8041a02                 (HCI rev: 0x1012 sub: 0x0810)
++		 * - 0x7558: IC markings FR3191AHAL 749H15143 (HCI rev/sub-version: 0x0709)
++		 *
++		 * These controllers are really messed-up.
++		 *
++		 * 1. Their bulk RX endpoint will never report any data unless
++		 * the device was suspended at least once (yes, really).
+ 		 * 2. They will not wakeup when autosuspended and receiving data
+-		 * on their bulk rx endpoint from e.g. a keyboard or mouse
++		 * on their bulk RX endpoint from e.g. a keyboard or mouse
+ 		 * (IOW remote-wakeup support is broken for the bulk endpoint).
+ 		 *
+ 		 * To fix 1. enable runtime-suspend, force-suspend the
+-		 * hci and then wake-it up by disabling runtime-suspend.
++		 * HCI and then wake it up by disabling runtime-suspend.
+ 		 *
+-		 * To fix 2. clear the hci's can_wake flag, this way the hci
++		 * To fix 2. clear the HCI's can_wake flag, this way the HCI
+ 		 * will still be autosuspended when it is not open.
++		 *
++		 * --
++		 *
++		 * Because these are widespread problems we prefer generic solutions; so
++		 * apply this initialization quirk to every controller that gets here,
++		 * it should be harmless. The alternative is to not work at all.
+ 		 */
+-		if (bcdDevice == 0x8891 &&
+-		    le16_to_cpu(rp->lmp_subver) == 0x1012 &&
+-		    le16_to_cpu(rp->hci_rev) == 0x0810 &&
+-		    le16_to_cpu(rp->hci_ver) == BLUETOOTH_VER_4_0) {
+-			bt_dev_warn(hdev, "CSR: detected a fake CSR dongle using a Barrot 8041a02 chip, this chip is very buggy and may have issues");
++		pm_runtime_allow(&data->udev->dev);
+ 
+-			pm_runtime_allow(&data->udev->dev);
++		ret = pm_runtime_suspend(&data->udev->dev);
++		if (ret >= 0)
++			msleep(200);
++		else
++			bt_dev_err(hdev, "CSR: Failed to suspend the device for our Barrot 8041a02 receive-issue workaround");
+ 
+-			ret = pm_runtime_suspend(&data->udev->dev);
+-			if (ret >= 0)
+-				msleep(200);
+-			else
+-				bt_dev_err(hdev, "Failed to suspend the device for Barrot 8041a02 receive-issue workaround");
++		pm_runtime_forbid(&data->udev->dev);
+ 
+-			pm_runtime_forbid(&data->udev->dev);
++		device_set_wakeup_capable(&data->udev->dev, false);
+ 
+-			device_set_wakeup_capable(&data->udev->dev, false);
+-			/* Re-enable autosuspend if this was requested */
+-			if (enable_autosuspend)
+-				usb_enable_autosuspend(data->udev);
+-		}
++		/* Re-enable autosuspend if this was requested */
++		if (enable_autosuspend)
++			usb_enable_autosuspend(data->udev);
+ 	}
+ 
+ 	kfree_skb(skb);
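
The workaround boils down to forcing one runtime suspend/resume cycle.
Schematically (same PM calls as in the hunk above; error handling elided):

    pm_runtime_allow(&udev->dev);           /* enable runtime PM      */
    if (pm_runtime_suspend(&udev->dev) >= 0)
        msleep(200);                        /* let the suspend settle */
    pm_runtime_forbid(&udev->dev);          /* forbidding resumes it  */
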
+diff --git a/drivers/firmware/dmi-id.c b/drivers/firmware/dmi-id.c
+index 4d5421d14a410..940ddf916202a 100644
+--- a/drivers/firmware/dmi-id.c
++++ b/drivers/firmware/dmi-id.c
+@@ -73,6 +73,10 @@ static void ascii_filter(char *d, const char *s)
+ 
+ static ssize_t get_modalias(char *buffer, size_t buffer_size)
+ {
++	/*
++	 * Note: new fields need to be added at the end to keep compatibility
++	 * with udev's hwdb, which matches on "`cat dmi/id/modalias`*".
++	 */
+ 	static const struct mafield {
+ 		const char *prefix;
+ 		int field;
+@@ -85,13 +89,13 @@ static ssize_t get_modalias(char *buffer, size_t buffer_size)
+ 		{ "svn", DMI_SYS_VENDOR },
+ 		{ "pn",  DMI_PRODUCT_NAME },
+ 		{ "pvr", DMI_PRODUCT_VERSION },
+-		{ "sku", DMI_PRODUCT_SKU },
+ 		{ "rvn", DMI_BOARD_VENDOR },
+ 		{ "rn",  DMI_BOARD_NAME },
+ 		{ "rvr", DMI_BOARD_VERSION },
+ 		{ "cvn", DMI_CHASSIS_VENDOR },
+ 		{ "ct",  DMI_CHASSIS_TYPE },
+ 		{ "cvr", DMI_CHASSIS_VERSION },
++		{ "sku", DMI_PRODUCT_SKU },
+ 		{ NULL,  DMI_NONE }
+ 	};
+ 
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index b8eb1b2a8de39..ffbae9b64dc5a 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -3510,6 +3510,7 @@ static void rtl_hw_start_8106(struct rtl8169_private *tp)
+ 	rtl_eri_write(tp, 0x1b0, ERIAR_MASK_0011, 0x0000);
+ 
+ 	rtl_pcie_state_l2l3_disable(tp);
++	rtl_hw_aspm_clkreq_enable(tp, true);
+ }
+ 
+ DECLARE_RTL_COND(rtl_mac_ocp_e00e_cond)
+diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
+index 9a13953ea70fa..60a4f79b8fa19 100644
+--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
++++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
+@@ -942,10 +942,8 @@ temac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+ 	wmb();
+ 	lp->dma_out(lp, TX_TAILDESC_PTR, tail_p); /* DMA start */
+ 
+-	if (temac_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1)) {
+-		netdev_info(ndev, "%s -> netif_stop_queue\n", __func__);
++	if (temac_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1))
+ 		netif_stop_queue(ndev);
+-	}
+ 
+ 	return NETDEV_TX_OK;
+ }
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index ab3de1551b503..7b1c81b899cdf 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3235,12 +3235,12 @@ static void fixup_mpss_256(struct pci_dev *dev)
+ {
+ 	dev->pcie_mpss = 1; /* 256 bytes */
+ }
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE,
+-			 PCI_DEVICE_ID_SOLARFLARE_SFC4000A_0, fixup_mpss_256);
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE,
+-			 PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256);
+-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SOLARFLARE,
+-			 PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
++			PCI_DEVICE_ID_SOLARFLARE_SFC4000A_0, fixup_mpss_256);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
++			PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
++			PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256);
+ 
+ /*
+  * Intel 5000 and 5100 Memory controllers have an erratum with read completion
+diff --git a/drivers/usb/cdns3/cdnsp-mem.c b/drivers/usb/cdns3/cdnsp-mem.c
+index 5d4c4bfe15b70..697a4e47c6427 100644
+--- a/drivers/usb/cdns3/cdnsp-mem.c
++++ b/drivers/usb/cdns3/cdnsp-mem.c
+@@ -882,7 +882,7 @@ static u32 cdnsp_get_endpoint_max_burst(struct usb_gadget *g,
+ 	if (g->speed == USB_SPEED_HIGH &&
+ 	    (usb_endpoint_xfer_isoc(pep->endpoint.desc) ||
+ 	     usb_endpoint_xfer_int(pep->endpoint.desc)))
+-		return (usb_endpoint_maxp(pep->endpoint.desc) & 0x1800) >> 11;
++		return usb_endpoint_maxp_mult(pep->endpoint.desc) - 1;
+ 
+ 	return 0;
+ }
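
This hunk, and the tegra-xudc and mtu3 ones below, replace open-coded masking
of wMaxPacketSize with the usb_endpoint_maxp_mult() helper. For high-speed
periodic endpoints the field packs the packet size in bits 10:0 and the extra
transactions per microframe in bits 12:11 (USB 2.0 spec, 9.6.6). A small
standalone check that the two forms agree (the example value is invented):

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned short wMaxPacketSize = 0x1400;  /* 1024 bytes, 2 extra
                                                    transactions/uframe */

        unsigned int maxp_old  = wMaxPacketSize & 0x7ff;
        unsigned int burst_old = (wMaxPacketSize & 0x1800) >> 11;

        /* what usb_endpoint_maxp() / usb_endpoint_maxp_mult() compute */
        unsigned int maxp_new = wMaxPacketSize & 0x7ff;
        unsigned int mult_new = ((wMaxPacketSize & 0x1800) >> 11) + 1;

        assert(maxp_old == maxp_new && burst_old == mult_new - 1);
        printf("maxp=%u extra=%u\n", maxp_new, mult_new - 1);
        return 0;
    }
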
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index f3f112b08c9b1..57ee72fead45a 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -1610,7 +1610,7 @@ static void tegra_xudc_ep_context_setup(struct tegra_xudc_ep *ep)
+ 	u16 maxpacket, maxburst = 0, esit = 0;
+ 	u32 val;
+ 
+-	maxpacket = usb_endpoint_maxp(desc) & 0x7ff;
++	maxpacket = usb_endpoint_maxp(desc);
+ 	if (xudc->gadget.speed == USB_SPEED_SUPER) {
+ 		if (!usb_endpoint_xfer_control(desc))
+ 			maxburst = comp_desc->bMaxBurst;
+@@ -1621,7 +1621,7 @@ static void tegra_xudc_ep_context_setup(struct tegra_xudc_ep *ep)
+ 		   (usb_endpoint_xfer_int(desc) ||
+ 		    usb_endpoint_xfer_isoc(desc))) {
+ 		if (xudc->gadget.speed == USB_SPEED_HIGH) {
+-			maxburst = (usb_endpoint_maxp(desc) >> 11) & 0x3;
++			maxburst = usb_endpoint_maxp_mult(desc) - 1;
+ 			if (maxburst == 0x3) {
+ 				dev_warn(xudc->dev,
+ 					 "invalid endpoint maxburst\n");
+diff --git a/drivers/usb/host/xhci-debugfs.c b/drivers/usb/host/xhci-debugfs.c
+index 2c0fda57869e4..dc832ddf7033f 100644
+--- a/drivers/usb/host/xhci-debugfs.c
++++ b/drivers/usb/host/xhci-debugfs.c
+@@ -198,12 +198,13 @@ static void xhci_ring_dump_segment(struct seq_file *s,
+ 	int			i;
+ 	dma_addr_t		dma;
+ 	union xhci_trb		*trb;
++	char			str[XHCI_MSG_MAX];
+ 
+ 	for (i = 0; i < TRBS_PER_SEGMENT; i++) {
+ 		trb = &seg->trbs[i];
+ 		dma = seg->dma + i * sizeof(*trb);
+ 		seq_printf(s, "%pad: %s\n", &dma,
+-			   xhci_decode_trb(le32_to_cpu(trb->generic.field[0]),
++			   xhci_decode_trb(str, XHCI_MSG_MAX, le32_to_cpu(trb->generic.field[0]),
+ 					   le32_to_cpu(trb->generic.field[1]),
+ 					   le32_to_cpu(trb->generic.field[2]),
+ 					   le32_to_cpu(trb->generic.field[3])));
+@@ -260,11 +261,13 @@ static int xhci_slot_context_show(struct seq_file *s, void *unused)
+ 	struct xhci_slot_ctx	*slot_ctx;
+ 	struct xhci_slot_priv	*priv = s->private;
+ 	struct xhci_virt_device	*dev = priv->dev;
++	char			str[XHCI_MSG_MAX];
+ 
+ 	xhci = hcd_to_xhci(bus_to_hcd(dev->udev->bus));
+ 	slot_ctx = xhci_get_slot_ctx(xhci, dev->out_ctx);
+ 	seq_printf(s, "%pad: %s\n", &dev->out_ctx->dma,
+-		   xhci_decode_slot_context(le32_to_cpu(slot_ctx->dev_info),
++		   xhci_decode_slot_context(str,
++					    le32_to_cpu(slot_ctx->dev_info),
+ 					    le32_to_cpu(slot_ctx->dev_info2),
+ 					    le32_to_cpu(slot_ctx->tt_info),
+ 					    le32_to_cpu(slot_ctx->dev_state)));
+@@ -280,6 +283,7 @@ static int xhci_endpoint_context_show(struct seq_file *s, void *unused)
+ 	struct xhci_ep_ctx	*ep_ctx;
+ 	struct xhci_slot_priv	*priv = s->private;
+ 	struct xhci_virt_device	*dev = priv->dev;
++	char			str[XHCI_MSG_MAX];
+ 
+ 	xhci = hcd_to_xhci(bus_to_hcd(dev->udev->bus));
+ 
+@@ -287,7 +291,8 @@ static int xhci_endpoint_context_show(struct seq_file *s, void *unused)
+ 		ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index);
+ 		dma = dev->out_ctx->dma + (ep_index + 1) * CTX_SIZE(xhci->hcc_params);
+ 		seq_printf(s, "%pad: %s\n", &dma,
+-			   xhci_decode_ep_context(le32_to_cpu(ep_ctx->ep_info),
++			   xhci_decode_ep_context(str,
++						  le32_to_cpu(ep_ctx->ep_info),
+ 						  le32_to_cpu(ep_ctx->ep_info2),
+ 						  le64_to_cpu(ep_ctx->deq),
+ 						  le32_to_cpu(ep_ctx->tx_info)));
+@@ -341,9 +346,10 @@ static int xhci_portsc_show(struct seq_file *s, void *unused)
+ {
+ 	struct xhci_port	*port = s->private;
+ 	u32			portsc;
++	char			str[XHCI_MSG_MAX];
+ 
+ 	portsc = readl(port->addr);
+-	seq_printf(s, "%s\n", xhci_decode_portsc(portsc));
++	seq_printf(s, "%s\n", xhci_decode_portsc(str, portsc));
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
+index 8b90da5a6ed19..5e003baabac91 100644
+--- a/drivers/usb/host/xhci-mtk-sch.c
++++ b/drivers/usb/host/xhci-mtk-sch.c
+@@ -590,10 +590,12 @@ static u32 get_esit_boundary(struct mu3h_sch_ep_info *sch_ep)
+ 	u32 boundary = sch_ep->esit;
+ 
+ 	if (sch_ep->sch_tt) { /* LS/FS with TT */
+-		/* tune for CS */
+-		if (sch_ep->ep_type != ISOC_OUT_EP)
+-			boundary++;
+-		else if (boundary > 1) /* normally esit >= 8 for FS/LS */
++		/*
++		 * tune for CS; normally esit >= 8 for FS/LS. Do not
++		 * add one for other types, to avoid out-of-bounds
++		 * array access.
++		 */
++		if (sch_ep->ep_type == ISOC_OUT_EP && boundary > 1)
+ 			boundary--;
+ 	}
+ 
+diff --git a/drivers/usb/host/xhci-rcar.c b/drivers/usb/host/xhci-rcar.c
+index 1bc4fe7b8c756..9888ba7d85b6a 100644
+--- a/drivers/usb/host/xhci-rcar.c
++++ b/drivers/usb/host/xhci-rcar.c
+@@ -134,6 +134,13 @@ static int xhci_rcar_download_firmware(struct usb_hcd *hcd)
+ 	const struct soc_device_attribute *attr;
+ 	const char *firmware_name;
+ 
++	/*
++	 * According to the datasheet, "Upon the completion of FW Download,
++	 * there is no need to write or reload FW".
++	 */
++	if (readl(regs + RCAR_USB3_DL_CTRL) & RCAR_USB3_DL_CTRL_FW_SUCCESS)
++		return 0;
++
+ 	attr = soc_device_match(rcar_quirks_match);
+ 	if (attr)
+ 		quirks = (uintptr_t)attr->data;
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 6acd2329e08d4..5b54a3622c39c 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -942,17 +942,21 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
+ 					 td->urb->stream_id);
+ 		hw_deq &= ~0xf;
+ 
+-		if (td->cancel_status == TD_HALTED) {
+-			cached_td = td;
+-		} else if (trb_in_td(xhci, td->start_seg, td->first_trb,
+-			      td->last_trb, hw_deq, false)) {
++		if (td->cancel_status == TD_HALTED ||
++		    trb_in_td(xhci, td->start_seg, td->first_trb, td->last_trb, hw_deq, false)) {
+ 			switch (td->cancel_status) {
+ 			case TD_CLEARED: /* TD is already no-op */
+ 			case TD_CLEARING_CACHE: /* set TR deq command already queued */
+ 				break;
+ 			case TD_DIRTY: /* TD is cached, clear it */
+ 			case TD_HALTED:
+-				/* FIXME  stream case, several stopped rings */
++				td->cancel_status = TD_CLEARING_CACHE;
++				if (cached_td)
++					/* FIXME  stream case, several stopped rings */
++					xhci_dbg(xhci,
++						 "Move dq past stream %u URB %p instead of stream %u URB %p\n",
++						 td->urb->stream_id, td->urb,
++						 cached_td->urb->stream_id, cached_td->urb);
+ 				cached_td = td;
+ 				break;
+ 			}
+@@ -961,18 +965,24 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
+ 			td->cancel_status = TD_CLEARED;
+ 		}
+ 	}
+-	if (cached_td) {
+-		cached_td->cancel_status = TD_CLEARING_CACHE;
+ 
+-		err = xhci_move_dequeue_past_td(xhci, slot_id, ep->ep_index,
+-						cached_td->urb->stream_id,
+-						cached_td);
+-		/* Failed to move past cached td, try just setting it noop */
+-		if (err) {
+-			td_to_noop(xhci, ring, cached_td, false);
+-			cached_td->cancel_status = TD_CLEARED;
++	/* If there's no need to move the dequeue pointer then we're done */
++	if (!cached_td)
++		return 0;
++
++	err = xhci_move_dequeue_past_td(xhci, slot_id, ep->ep_index,
++					cached_td->urb->stream_id,
++					cached_td);
++	if (err) {
++		/* Failed to move past cached td, just set cached TDs to no-op */
++		list_for_each_entry_safe(td, tmp_td, &ep->cancelled_td_list, cancelled_td_list) {
++			if (td->cancel_status != TD_CLEARING_CACHE)
++				continue;
++			xhci_dbg(xhci, "Failed to clear cancelled cached URB %p, mark clear anyway\n",
++				 td->urb);
++			td_to_noop(xhci, ring, td, false);
++			td->cancel_status = TD_CLEARED;
+ 		}
+-		cached_td = NULL;
+ 	}
+ 	return 0;
+ }
+@@ -1212,6 +1222,7 @@ void xhci_stop_endpoint_command_watchdog(struct timer_list *t)
+ 	struct xhci_hcd *xhci = ep->xhci;
+ 	unsigned long flags;
+ 	u32 usbsts;
++	char str[XHCI_MSG_MAX];
+ 
+ 	spin_lock_irqsave(&xhci->lock, flags);
+ 
+@@ -1225,7 +1236,7 @@ void xhci_stop_endpoint_command_watchdog(struct timer_list *t)
+ 	usbsts = readl(&xhci->op_regs->status);
+ 
+ 	xhci_warn(xhci, "xHCI host not responding to stop endpoint command.\n");
+-	xhci_warn(xhci, "USBSTS:%s\n", xhci_decode_usbsts(usbsts));
++	xhci_warn(xhci, "USBSTS:%s\n", xhci_decode_usbsts(str, usbsts));
+ 
+ 	ep->ep_state &= ~EP_STOP_CMD_PENDING;
+ 
+diff --git a/drivers/usb/host/xhci-trace.h b/drivers/usb/host/xhci-trace.h
+index 627abd236dbe1..a5da020772977 100644
+--- a/drivers/usb/host/xhci-trace.h
++++ b/drivers/usb/host/xhci-trace.h
+@@ -25,8 +25,6 @@
+ #include "xhci.h"
+ #include "xhci-dbgcap.h"
+ 
+-#define XHCI_MSG_MAX	500
+-
+ DECLARE_EVENT_CLASS(xhci_log_msg,
+ 	TP_PROTO(struct va_format *vaf),
+ 	TP_ARGS(vaf),
+@@ -122,6 +120,7 @@ DECLARE_EVENT_CLASS(xhci_log_trb,
+ 		__field(u32, field1)
+ 		__field(u32, field2)
+ 		__field(u32, field3)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->type = ring->type;
+@@ -131,7 +130,7 @@ DECLARE_EVENT_CLASS(xhci_log_trb,
+ 		__entry->field3 = le32_to_cpu(trb->field[3]);
+ 	),
+ 	TP_printk("%s: %s", xhci_ring_type_string(__entry->type),
+-			xhci_decode_trb(__entry->field0, __entry->field1,
++		  xhci_decode_trb(__get_str(str), XHCI_MSG_MAX, __entry->field0, __entry->field1,
+ 					__entry->field2, __entry->field3)
+ 	)
+ );
+@@ -323,6 +322,7 @@ DECLARE_EVENT_CLASS(xhci_log_ep_ctx,
+ 		__field(u32, info2)
+ 		__field(u64, deq)
+ 		__field(u32, tx_info)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->info = le32_to_cpu(ctx->ep_info);
+@@ -330,8 +330,8 @@ DECLARE_EVENT_CLASS(xhci_log_ep_ctx,
+ 		__entry->deq = le64_to_cpu(ctx->deq);
+ 		__entry->tx_info = le32_to_cpu(ctx->tx_info);
+ 	),
+-	TP_printk("%s", xhci_decode_ep_context(__entry->info,
+-		__entry->info2, __entry->deq, __entry->tx_info)
++	TP_printk("%s", xhci_decode_ep_context(__get_str(str),
++		__entry->info, __entry->info2, __entry->deq, __entry->tx_info)
+ 	)
+ );
+ 
+@@ -368,6 +368,7 @@ DECLARE_EVENT_CLASS(xhci_log_slot_ctx,
+ 		__field(u32, info2)
+ 		__field(u32, tt_info)
+ 		__field(u32, state)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->info = le32_to_cpu(ctx->dev_info);
+@@ -375,9 +376,9 @@ DECLARE_EVENT_CLASS(xhci_log_slot_ctx,
+ 		__entry->tt_info = le64_to_cpu(ctx->tt_info);
+ 		__entry->state = le32_to_cpu(ctx->dev_state);
+ 	),
+-	TP_printk("%s", xhci_decode_slot_context(__entry->info,
+-			__entry->info2, __entry->tt_info,
+-			__entry->state)
++	TP_printk("%s", xhci_decode_slot_context(__get_str(str),
++			__entry->info, __entry->info2,
++			__entry->tt_info, __entry->state)
+ 	)
+ );
+ 
+@@ -432,12 +433,13 @@ DECLARE_EVENT_CLASS(xhci_log_ctrl_ctx,
+ 	TP_STRUCT__entry(
+ 		__field(u32, drop)
+ 		__field(u32, add)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->drop = le32_to_cpu(ctrl_ctx->drop_flags);
+ 		__entry->add = le32_to_cpu(ctrl_ctx->add_flags);
+ 	),
+-	TP_printk("%s", xhci_decode_ctrl_ctx(__entry->drop, __entry->add)
++	TP_printk("%s", xhci_decode_ctrl_ctx(__get_str(str), __entry->drop, __entry->add)
+ 	)
+ );
+ 
+@@ -523,6 +525,7 @@ DECLARE_EVENT_CLASS(xhci_log_portsc,
+ 		    TP_STRUCT__entry(
+ 				     __field(u32, portnum)
+ 				     __field(u32, portsc)
++				     __dynamic_array(char, str, XHCI_MSG_MAX)
+ 				     ),
+ 		    TP_fast_assign(
+ 				   __entry->portnum = portnum;
+@@ -530,7 +533,7 @@ DECLARE_EVENT_CLASS(xhci_log_portsc,
+ 				   ),
+ 		    TP_printk("port-%d: %s",
+ 			      __entry->portnum,
+-			      xhci_decode_portsc(__entry->portsc)
++			      xhci_decode_portsc(__get_str(str), __entry->portsc)
+ 			      )
+ );
+ 
+@@ -555,13 +558,14 @@ DECLARE_EVENT_CLASS(xhci_log_doorbell,
+ 	TP_STRUCT__entry(
+ 		__field(u32, slot)
+ 		__field(u32, doorbell)
++		__dynamic_array(char, str, XHCI_MSG_MAX)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->slot = slot;
+ 		__entry->doorbell = doorbell;
+ 	),
+ 	TP_printk("Ring doorbell for %s",
+-		xhci_decode_doorbell(__entry->slot, __entry->doorbell)
++		  xhci_decode_doorbell(__get_str(str), __entry->slot, __entry->doorbell)
+ 	)
+ );
+ 
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index e417f5ce13d18..528cdac532690 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -22,6 +22,9 @@
+ #include	"xhci-ext-caps.h"
+ #include "pci-quirks.h"
+ 
++/* max buffer size for trace and debug messages */
++#define XHCI_MSG_MAX		500
++
+ /* xHCI PCI Configuration Registers */
+ #define XHCI_SBRN_OFFSET	(0x60)
+ 
+@@ -2232,15 +2235,14 @@ static inline char *xhci_slot_state_string(u32 state)
+ 	}
+ }
+ 
+-static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+-		u32 field3)
++static inline const char *xhci_decode_trb(char *str, size_t size,
++					  u32 field0, u32 field1, u32 field2, u32 field3)
+ {
+-	static char str[256];
+ 	int type = TRB_FIELD_TO_TYPE(field3);
+ 
+ 	switch (type) {
+ 	case TRB_LINK:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"LINK %08x%08x intr %d type '%s' flags %c:%c:%c:%c",
+ 			field1, field0, GET_INTR_TARGET(field2),
+ 			xhci_trb_type_string(type),
+@@ -2257,7 +2259,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 	case TRB_HC_EVENT:
+ 	case TRB_DEV_NOTE:
+ 	case TRB_MFINDEX_WRAP:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"TRB %08x%08x status '%s' len %d slot %d ep %d type '%s' flags %c:%c",
+ 			field1, field0,
+ 			xhci_trb_comp_code_string(GET_COMP_CODE(field2)),
+@@ -2270,7 +2272,8 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 
+ 		break;
+ 	case TRB_SETUP:
+-		sprintf(str, "bRequestType %02x bRequest %02x wValue %02x%02x wIndex %02x%02x wLength %d length %d TD size %d intr %d type '%s' flags %c:%c:%c",
++		snprintf(str, size,
++			"bRequestType %02x bRequest %02x wValue %02x%02x wIndex %02x%02x wLength %d length %d TD size %d intr %d type '%s' flags %c:%c:%c",
+ 				field0 & 0xff,
+ 				(field0 & 0xff00) >> 8,
+ 				(field0 & 0xff000000) >> 24,
+@@ -2287,7 +2290,8 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 				field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_DATA:
+-		sprintf(str, "Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c:%c:%c:%c",
++		snprintf(str, size,
++			 "Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c:%c:%c:%c",
+ 				field1, field0, TRB_LEN(field2), GET_TD_SIZE(field2),
+ 				GET_INTR_TARGET(field2),
+ 				xhci_trb_type_string(type),
+@@ -2300,7 +2304,8 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 				field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_STATUS:
+-		sprintf(str, "Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c",
++		snprintf(str, size,
++			 "Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c",
+ 				field1, field0, TRB_LEN(field2), GET_TD_SIZE(field2),
+ 				GET_INTR_TARGET(field2),
+ 				xhci_trb_type_string(type),
+@@ -2313,7 +2318,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 	case TRB_ISOC:
+ 	case TRB_EVENT_DATA:
+ 	case TRB_TR_NOOP:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"Buffer %08x%08x length %d TD size %d intr %d type '%s' flags %c:%c:%c:%c:%c:%c:%c:%c",
+ 			field1, field0, TRB_LEN(field2), GET_TD_SIZE(field2),
+ 			GET_INTR_TARGET(field2),
+@@ -2330,21 +2335,21 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 
+ 	case TRB_CMD_NOOP:
+ 	case TRB_ENABLE_SLOT:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: flags %c",
+ 			xhci_trb_type_string(type),
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_DISABLE_SLOT:
+ 	case TRB_NEG_BANDWIDTH:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: slot %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			TRB_TO_SLOT_ID(field3),
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_ADDR_DEV:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d flags %c:%c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2353,7 +2358,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_CONFIG_EP:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d flags %c:%c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2362,7 +2367,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_EVAL_CONTEXT:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2370,7 +2375,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_RESET_EP:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d ep %d flags %c:%c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2391,7 +2396,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_SET_DEQ:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: deq %08x%08x stream %d slot %d ep %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2402,14 +2407,14 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_RESET_DEV:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: slot %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			TRB_TO_SLOT_ID(field3),
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_FORCE_EVENT:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: event %08x%08x vf intr %d vf id %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2418,14 +2423,14 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_SET_LT:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: belt %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			TRB_TO_BELT(field3),
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_GET_BW:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: ctx %08x%08x slot %d speed %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field1, field0,
+@@ -2434,7 +2439,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	case TRB_FORCE_HEADER:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"%s: info %08x%08x%08x pkt type %d roothub port %d flags %c",
+ 			xhci_trb_type_string(type),
+ 			field2, field1, field0 & 0xffffffe0,
+@@ -2443,7 +2448,7 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 			field3 & TRB_CYCLE ? 'C' : 'c');
+ 		break;
+ 	default:
+-		sprintf(str,
++		snprintf(str, size,
+ 			"type '%s' -> raw %08x %08x %08x %08x",
+ 			xhci_trb_type_string(type),
+ 			field0, field1, field2, field3);
+@@ -2452,10 +2457,9 @@ static inline const char *xhci_decode_trb(u32 field0, u32 field1, u32 field2,
+ 	return str;
+ }
+ 
+-static inline const char *xhci_decode_ctrl_ctx(unsigned long drop,
+-					       unsigned long add)
++static inline const char *xhci_decode_ctrl_ctx(char *str,
++		unsigned long drop, unsigned long add)
+ {
+-	static char	str[1024];
+ 	unsigned int	bit;
+ 	int		ret = 0;
+ 
+@@ -2481,10 +2485,9 @@ static inline const char *xhci_decode_ctrl_ctx(unsigned long drop,
+ 	return str;
+ }
+ 
+-static inline const char *xhci_decode_slot_context(u32 info, u32 info2,
+-		u32 tt_info, u32 state)
++static inline const char *xhci_decode_slot_context(char *str,
++		u32 info, u32 info2, u32 tt_info, u32 state)
+ {
+-	static char str[1024];
+ 	u32 speed;
+ 	u32 hub;
+ 	u32 mtt;
+@@ -2568,9 +2571,8 @@ static inline const char *xhci_portsc_link_state_string(u32 portsc)
+ 	return "Unknown";
+ }
+ 
+-static inline const char *xhci_decode_portsc(u32 portsc)
++static inline const char *xhci_decode_portsc(char *str, u32 portsc)
+ {
+-	static char str[256];
+ 	int ret;
+ 
+ 	ret = sprintf(str, "%s %s %s Link:%s PortSpeed:%d ",
+@@ -2614,9 +2616,8 @@ static inline const char *xhci_decode_portsc(u32 portsc)
+ 	return str;
+ }
+ 
+-static inline const char *xhci_decode_usbsts(u32 usbsts)
++static inline const char *xhci_decode_usbsts(char *str, u32 usbsts)
+ {
+-	static char str[256];
+ 	int ret = 0;
+ 
+ 	if (usbsts == ~(u32)0)
+@@ -2643,9 +2644,8 @@ static inline const char *xhci_decode_usbsts(u32 usbsts)
+ 	return str;
+ }
+ 
+-static inline const char *xhci_decode_doorbell(u32 slot, u32 doorbell)
++static inline const char *xhci_decode_doorbell(char *str, u32 slot, u32 doorbell)
+ {
+-	static char str[256];
+ 	u8 ep;
+ 	u16 stream;
+ 	int ret;
+@@ -2712,10 +2712,9 @@ static inline const char *xhci_ep_type_string(u8 type)
+ 	}
+ }
+ 
+-static inline const char *xhci_decode_ep_context(u32 info, u32 info2, u64 deq,
+-		u32 tx_info)
++static inline const char *xhci_decode_ep_context(char *str, u32 info,
++		u32 info2, u64 deq, u32 tx_info)
+ {
+-	static char str[1024];
+ 	int ret;
+ 
+ 	u32 esit;
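
The xhci.h rework above fixes a classic reentrancy bug: every decoder returned
a pointer to a function-local static buffer, so concurrent tracing from two
CPUs could interleave or overrun the output. A minimal before/after sketch
(simplified userspace C, not the xhci code itself):

    #include <stdio.h>
    #include <stddef.h>

    /* before: one buffer shared by every caller -- concurrent callers
     * race, and sprintf() has no bound on how much it writes */
    static inline const char *decode_bad(unsigned int v)
    {
        static char str[256];
        sprintf(str, "value %u", v);
        return str;
    }

    /* after: the caller owns the storage and snprintf() is bounded,
     * so the helper is reentrant and overflow-safe */
    static inline const char *decode_good(char *str, size_t size,
                                          unsigned int v)
    {
        snprintf(str, size, "value %u", v);
        return str;
    }
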
+diff --git a/drivers/usb/mtu3/mtu3_core.c b/drivers/usb/mtu3/mtu3_core.c
+index b3b4599375668..3d328dfdbb5ed 100644
+--- a/drivers/usb/mtu3/mtu3_core.c
++++ b/drivers/usb/mtu3/mtu3_core.c
+@@ -227,11 +227,13 @@ void mtu3_set_speed(struct mtu3 *mtu, enum usb_device_speed speed)
+ 		mtu3_setbits(mbase, U3D_POWER_MANAGEMENT, HS_ENABLE);
+ 		break;
+ 	case USB_SPEED_SUPER:
++		mtu3_setbits(mbase, U3D_POWER_MANAGEMENT, HS_ENABLE);
+ 		mtu3_clrbits(mtu->ippc_base, SSUSB_U3_CTRL(0),
+ 			     SSUSB_U3_PORT_SSP_SPEED);
+ 		break;
+ 	case USB_SPEED_SUPER_PLUS:
+-			mtu3_setbits(mtu->ippc_base, SSUSB_U3_CTRL(0),
++		mtu3_setbits(mbase, U3D_POWER_MANAGEMENT, HS_ENABLE);
++		mtu3_setbits(mtu->ippc_base, SSUSB_U3_CTRL(0),
+ 			     SSUSB_U3_PORT_SSP_SPEED);
+ 		break;
+ 	default:
+diff --git a/drivers/usb/mtu3/mtu3_gadget.c b/drivers/usb/mtu3/mtu3_gadget.c
+index 38f17d66d5bc1..0b3aa7c65857a 100644
+--- a/drivers/usb/mtu3/mtu3_gadget.c
++++ b/drivers/usb/mtu3/mtu3_gadget.c
+@@ -64,14 +64,12 @@ static int mtu3_ep_enable(struct mtu3_ep *mep)
+ 	u32 interval = 0;
+ 	u32 mult = 0;
+ 	u32 burst = 0;
+-	int max_packet;
+ 	int ret;
+ 
+ 	desc = mep->desc;
+ 	comp_desc = mep->comp_desc;
+ 	mep->type = usb_endpoint_type(desc);
+-	max_packet = usb_endpoint_maxp(desc);
+-	mep->maxp = max_packet & GENMASK(10, 0);
++	mep->maxp = usb_endpoint_maxp(desc);
+ 
+ 	switch (mtu->g.speed) {
+ 	case USB_SPEED_SUPER:
+@@ -92,7 +90,7 @@ static int mtu3_ep_enable(struct mtu3_ep *mep)
+ 				usb_endpoint_xfer_int(desc)) {
+ 			interval = desc->bInterval;
+ 			interval = clamp_val(interval, 1, 16) - 1;
+-			burst = (max_packet & GENMASK(12, 11)) >> 11;
++			mult = usb_endpoint_maxp_mult(desc) - 1;
+ 		}
+ 		break;
+ 	default:
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 00576bae183d3..0c321996c6eb0 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -2720,6 +2720,7 @@ int ip_check_mc_rcu(struct in_device *in_dev, __be32 mc_addr, __be32 src_addr, u
+ 		rv = 1;
+ 	} else if (im) {
+ 		if (src_addr) {
++			spin_lock_bh(&im->lock);
+ 			for (psf = im->sources; psf; psf = psf->sf_next) {
+ 				if (psf->sf_inaddr == src_addr)
+ 					break;
+@@ -2730,6 +2731,7 @@ int ip_check_mc_rcu(struct in_device *in_dev, __be32 mc_addr, __be32 src_addr, u
+ 					im->sfcount[MCAST_EXCLUDE];
+ 			else
+ 				rv = im->sfcount[MCAST_EXCLUDE] != 0;
++			spin_unlock_bh(&im->lock);
+ 		} else
+ 			rv = 1; /* unspecified source; tentatively allow */
+ 	}
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 326d1b0ea5e69..db65f77eb131f 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1898,6 +1898,7 @@ static const struct registration_quirk registration_quirks[] = {
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ed, 2),	/* Kingston HyperX Cloud Alpha S */
+ 	REG_QUIRK_ENTRY(0x0951, 0x16ea, 2),	/* Kingston HyperX Cloud Flight S */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x1f46, 2),	/* JBL Quantum 600 */
++	REG_QUIRK_ENTRY(0x0ecb, 0x1f47, 2),	/* JBL Quantum 800 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x2039, 2),	/* JBL Quantum 400 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x203c, 2),	/* JBL Quantum 600 */
+ 	REG_QUIRK_ENTRY(0x0ecb, 0x203e, 2),	/* JBL Quantum 800 */



* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-09-15 11:59 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-09-15 11:59 UTC (permalink / raw
  To: gentoo-commits

commit:     28c88a1f45d165a39d82b198849358c4823da767
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 15 11:59:01 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 15 11:59:01 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=28c88a1f

Linux patch 5.13.17

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1016_linux-5.13.17.patch | 11676 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11680 insertions(+)

diff --git a/0000_README b/0000_README
index 640800f..ab01fc2 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  1015_linux-5.13.16.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.16
 
+Patch:  1016_linux-5.13.17.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.17
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-5.13.17.patch b/1016_linux-5.13.17.patch
new file mode 100644
index 0000000..6561764
--- /dev/null
+++ b/1016_linux-5.13.17.patch
@@ -0,0 +1,11676 @@
+diff --git a/Documentation/fault-injection/provoke-crashes.rst b/Documentation/fault-injection/provoke-crashes.rst
+index a20ba5d939320..18de17354206a 100644
+--- a/Documentation/fault-injection/provoke-crashes.rst
++++ b/Documentation/fault-injection/provoke-crashes.rst
+@@ -29,7 +29,7 @@ recur_count
+ cpoint_name
+ 	Where in the kernel to trigger the action. It can be
+ 	one of INT_HARDWARE_ENTRY, INT_HW_IRQ_EN, INT_TASKLET_ENTRY,
+-	FS_DEVRW, MEM_SWAPOUT, TIMERADD, SCSI_DISPATCH_CMD,
++	FS_DEVRW, MEM_SWAPOUT, TIMERADD, SCSI_QUEUE_RQ,
+ 	IDE_CORE_CP, or DIRECT
+ 
+ cpoint_type
+diff --git a/Makefile b/Makefile
+index cbb2f35baedbc..c79a2c70a22ba 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+index 7028e21bdd980..910eacc8ad3bd 100644
+--- a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
++++ b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+@@ -208,12 +208,12 @@
+ 	};
+ 
+ 	pinctrl_hvi3c3_default: hvi3c3_default {
+-		function = "HVI3C3";
++		function = "I3C3";
+ 		groups = "HVI3C3";
+ 	};
+ 
+ 	pinctrl_hvi3c4_default: hvi3c4_default {
+-		function = "HVI3C4";
++		function = "I3C4";
+ 		groups = "HVI3C4";
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/at91-sam9x60ek.dts b/arch/arm/boot/dts/at91-sam9x60ek.dts
+index edca66c232c15..ebbc9b23aef1c 100644
+--- a/arch/arm/boot/dts/at91-sam9x60ek.dts
++++ b/arch/arm/boot/dts/at91-sam9x60ek.dts
+@@ -92,6 +92,8 @@
+ 
+ 	leds {
+ 		compatible = "gpio-leds";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_gpio_leds>;
+ 		status = "okay"; /* Conflict with pwm0. */
+ 
+ 		red {
+@@ -537,6 +539,10 @@
+ 				 AT91_PIOA 19 AT91_PERIPH_A (AT91_PINCTRL_PULL_UP | AT91_PINCTRL_DRIVE_STRENGTH_HI)	/* PA19 DAT2 periph A with pullup */
+ 				 AT91_PIOA 20 AT91_PERIPH_A (AT91_PINCTRL_PULL_UP | AT91_PINCTRL_DRIVE_STRENGTH_HI)>;	/* PA20 DAT3 periph A with pullup */
+ 		};
++		pinctrl_sdmmc0_cd: sdmmc0_cd {
++			atmel,pins =
++				<AT91_PIOA 23 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++		};
+ 	};
+ 
+ 	sdmmc1 {
+@@ -569,6 +575,14 @@
+ 				      AT91_PIOD 16 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+ 		};
+ 	};
++
++	leds {
++		pinctrl_gpio_leds: gpio_leds {
++			atmel,pins = <AT91_PIOB 11 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++				      AT91_PIOB 12 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++				      AT91_PIOB 13 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++		};
++	};
+ }; /* pinctrl */
+ 
+ &pwm0 {
+@@ -580,7 +594,7 @@
+ &sdmmc0 {
+ 	bus-width = <4>;
+ 	pinctrl-names = "default";
+-	pinctrl-0 = <&pinctrl_sdmmc0_default>;
++	pinctrl-0 = <&pinctrl_sdmmc0_default &pinctrl_sdmmc0_cd>;
+ 	status = "okay";
+ 	cd-gpios = <&pioA 23 GPIO_ACTIVE_LOW>;
+ 	disable-wp;
+diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+index 9c55a921263bd..cc55d1684322b 100644
+--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+@@ -57,6 +57,8 @@
+ 			};
+ 
+ 			spi0: spi@f0004000 {
++				pinctrl-names = "default";
++				pinctrl-0 = <&pinctrl_spi0_cs>;
+ 				cs-gpios = <&pioD 13 0>, <0>, <0>, <&pioD 16 0>;
+ 				status = "okay";
+ 			};
+@@ -169,6 +171,8 @@
+ 			};
+ 
+ 			spi1: spi@f8008000 {
++				pinctrl-names = "default";
++				pinctrl-0 = <&pinctrl_spi1_cs>;
+ 				cs-gpios = <&pioC 25 0>;
+ 				status = "okay";
+ 			};
+@@ -248,6 +252,26 @@
+ 							<AT91_PIOE 3 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
+ 							 AT91_PIOE 4 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+ 					};
++
++					pinctrl_gpio_leds: gpio_leds_default {
++						atmel,pins =
++							<AT91_PIOE 23 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++							 AT91_PIOE 24 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++
++					pinctrl_spi0_cs: spi0_cs_default {
++						atmel,pins =
++							<AT91_PIOD 13 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++							 AT91_PIOD 16 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++
++					pinctrl_spi1_cs: spi1_cs_default {
++						atmel,pins = <AT91_PIOC 25 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++
++					pinctrl_vcc_mmc0_reg_gpio: vcc_mmc0_reg_gpio_default {
++						atmel,pins = <AT91_PIOE 2 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -339,6 +363,8 @@
+ 
+ 	vcc_mmc0_reg: fixedregulator_mmc0 {
+ 		compatible = "regulator-fixed";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_vcc_mmc0_reg_gpio>;
+ 		gpio = <&pioE 2 GPIO_ACTIVE_LOW>;
+ 		regulator-name = "mmc0-card-supply";
+ 		regulator-min-microvolt = <3300000>;
+@@ -362,6 +388,9 @@
+ 
+ 	leds {
+ 		compatible = "gpio-leds";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_gpio_leds>;
++		status = "okay";
+ 
+ 		d2 {
+ 			label = "d2";
+diff --git a/arch/arm/boot/dts/at91-sama5d4_xplained.dts b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+index 0b3ad1b580b83..e42dae06b5826 100644
+--- a/arch/arm/boot/dts/at91-sama5d4_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+@@ -90,6 +90,8 @@
+ 			};
+ 
+ 			spi1: spi@fc018000 {
++				pinctrl-names = "default";
++				pinctrl-0 = <&pinctrl_spi0_cs>;
+ 				cs-gpios = <&pioB 21 0>;
+ 				status = "okay";
+ 			};
+@@ -147,6 +149,19 @@
+ 						atmel,pins =
+ 							<AT91_PIOE 1 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_UP_DEGLITCH>;
+ 					};
++					pinctrl_spi0_cs: spi0_cs_default {
++						atmel,pins =
++							<AT91_PIOB 21 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++					pinctrl_gpio_leds: gpio_leds_default {
++						atmel,pins =
++							<AT91_PIOD 30 AT91_PERIPH_GPIO AT91_PINCTRL_NONE
++							 AT91_PIOE 15 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
++					pinctrl_vcc_mmc1_reg: vcc_mmc1_reg {
++						atmel,pins =
++							<AT91_PIOE 4 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
++					};
+ 				};
+ 			};
+ 		};
+@@ -252,6 +267,8 @@
+ 
+ 	leds {
+ 		compatible = "gpio-leds";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_gpio_leds>;
+ 		status = "okay";
+ 
+ 		d8 {
+@@ -278,6 +295,8 @@
+ 
+ 	vcc_mmc1_reg: fixedregulator_mmc1 {
+ 		compatible = "regulator-fixed";
++		pinctrl-names = "default";
++		pinctrl-0 = <&pinctrl_vcc_mmc1_reg>;
+ 		gpio = <&pioE 4 GPIO_ACTIVE_LOW>;
+ 		regulator-name = "VDD MCI1";
+ 		regulator-min-microvolt = <3300000>;
+diff --git a/arch/arm/boot/dts/meson8.dtsi b/arch/arm/boot/dts/meson8.dtsi
+index 157a950a55d38..686c7b7c79d55 100644
+--- a/arch/arm/boot/dts/meson8.dtsi
++++ b/arch/arm/boot/dts/meson8.dtsi
+@@ -304,8 +304,13 @@
+ 					  "pp2", "ppmmu2", "pp4", "ppmmu4",
+ 					  "pp5", "ppmmu5", "pp6", "ppmmu6";
+ 			resets = <&reset RESET_MALI>;
++
+ 			clocks = <&clkc CLKID_CLK81>, <&clkc CLKID_MALI>;
+ 			clock-names = "bus", "core";
++
++			assigned-clocks = <&clkc CLKID_MALI>;
++			assigned-clock-rates = <318750000>;
++
+ 			operating-points-v2 = <&gpu_opp_table>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+diff --git a/arch/arm/boot/dts/meson8b-ec100.dts b/arch/arm/boot/dts/meson8b-ec100.dts
+index 8e48ccc6b634e..7e8ddc6f1252b 100644
+--- a/arch/arm/boot/dts/meson8b-ec100.dts
++++ b/arch/arm/boot/dts/meson8b-ec100.dts
+@@ -148,7 +148,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&vcc_5v>;
++		pwm-supply = <&vcc_5v>;
+ 
+ 		pwms = <&pwm_cd 0 1148 0>;
+ 		pwm-dutycycle-range = <100 0>;
+@@ -232,7 +232,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&vcc_5v>;
++		pwm-supply = <&vcc_5v>;
+ 
+ 		pwms = <&pwm_cd 1 1148 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm/boot/dts/meson8b-mxq.dts b/arch/arm/boot/dts/meson8b-mxq.dts
+index f3937d55472d4..7adedd3258c33 100644
+--- a/arch/arm/boot/dts/meson8b-mxq.dts
++++ b/arch/arm/boot/dts/meson8b-mxq.dts
+@@ -34,6 +34,8 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
++		pwm-supply = <&vcc_5v>;
++
+ 		pwms = <&pwm_cd 0 1148 0>;
+ 		pwm-dutycycle-range = <100 0>;
+ 
+@@ -79,7 +81,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&vcc_5v>;
++		pwm-supply = <&vcc_5v>;
+ 
+ 		pwms = <&pwm_cd 1 1148 0>;
+ 		pwm-dutycycle-range = <100 0>;
+diff --git a/arch/arm/boot/dts/meson8b-odroidc1.dts b/arch/arm/boot/dts/meson8b-odroidc1.dts
+index c440ef94e0820..04356bc639faf 100644
+--- a/arch/arm/boot/dts/meson8b-odroidc1.dts
++++ b/arch/arm/boot/dts/meson8b-odroidc1.dts
+@@ -131,7 +131,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&p5v0>;
++		pwm-supply = <&p5v0>;
+ 
+ 		pwms = <&pwm_cd 0 12218 0>;
+ 		pwm-dutycycle-range = <91 0>;
+@@ -163,7 +163,7 @@
+ 		regulator-min-microvolt = <860000>;
+ 		regulator-max-microvolt = <1140000>;
+ 
+-		vin-supply = <&p5v0>;
++		pwm-supply = <&p5v0>;
+ 
+ 		pwms = <&pwm_cd 1 12218 0>;
+ 		pwm-dutycycle-range = <91 0>;
+diff --git a/arch/arm64/boot/dts/exynos/exynos7.dtsi b/arch/arm64/boot/dts/exynos/exynos7.dtsi
+index 10244e59d56dd..56a0bb7eb0e69 100644
+--- a/arch/arm64/boot/dts/exynos/exynos7.dtsi
++++ b/arch/arm64/boot/dts/exynos/exynos7.dtsi
+@@ -102,7 +102,7 @@
+ 			#address-cells = <0>;
+ 			interrupt-controller;
+ 			reg =	<0x11001000 0x1000>,
+-				<0x11002000 0x1000>,
++				<0x11002000 0x2000>,
+ 				<0x11004000 0x2000>,
+ 				<0x11006000 0x2000>;
+ 		};
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+index a05b1ab2dd12c..04da07ae44208 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -135,6 +135,23 @@
+ 	pinctrl-0 = <&pcie_reset_pins &pcie_clkreq_pins>;
+ 	status = "okay";
+ 	reset-gpios = <&gpiosb 3 GPIO_ACTIVE_LOW>;
++	/*
++	 * The U-Boot port for the Turris Mox has a bug: it always expects the "ranges" DT property
++	 * to contain exactly 2 ranges with 3 (child) address cells, 2 (parent) address cells and
++	 * 2 size cells, and also expects the second range to start at a 16 MB offset. If these
++	 * conditions are not met, U-Boot crashes while loading the kernel DTB file. The PCIe address
++	 * space is 128 MB long, so the best split between MEM and IO is a fixed 16 MB window
++	 * for IO and the remaining 112 MB (64+32+16) for MEM, even though the maximal IO size is just 64 kB.
++	 * This bug is not present in the U-Boot ports for other Armada 3700 devices and is fixed in
++	 * U-Boot version 2021.07. See the relevant U-Boot commits (the last one contains the fix):
++	 * https://source.denx.de/u-boot/u-boot/-/commit/cb2ddb291ee6fcbddd6d8f4ff49089dfe580f5d7
++	 * https://source.denx.de/u-boot/u-boot/-/commit/c64ac3b3185aeb3846297ad7391fc6df8ecd73bf
++	 * https://source.denx.de/u-boot/u-boot/-/commit/4a82fca8e330157081fc132a591ebd99ba02ee33
++	 */
++	#address-cells = <3>;
++	#size-cells = <2>;
++	ranges = <0x81000000 0 0xe8000000   0 0xe8000000   0 0x01000000   /* Port 0 IO */
++		  0x82000000 0 0xe9000000   0 0xe9000000   0 0x07000000>; /* Port 0 MEM */
+ 
+ 	/* enabled by U-Boot if PCIe module is present */
+ 	status = "disabled";
+diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+index 5db81a416cd65..9acc5d2b5a002 100644
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -489,8 +489,15 @@
+ 			#interrupt-cells = <1>;
+ 			msi-parent = <&pcie0>;
+ 			msi-controller;
+-			ranges = <0x82000000 0 0xe8000000   0 0xe8000000 0 0x1000000 /* Port 0 MEM */
+-				  0x81000000 0 0xe9000000   0 0xe9000000 0 0x10000>; /* Port 0 IO*/
++			/*
++			 * The 128 MiB address range [0xe8000000-0xf0000000] is
++			 * dedicated to PCIe and can be assigned to 8 windows,
++			 * each with a power-of-two size. Use one 64 KiB window for
++			 * IO at the end and the remaining seven windows
++			 * (totaling 127 MiB) for MEM.
++			 */
++			ranges = <0x82000000 0 0xe8000000   0 0xe8000000   0 0x07f00000   /* Port 0 MEM */
++				  0x81000000 0 0xefff0000   0 0xefff0000   0 0x00010000>; /* Port 0 IO */
+ 			interrupt-map-mask = <0 0 0 7>;
+ 			interrupt-map = <0 0 0 1 &pcie_intc 0>,
+ 					<0 0 0 2 &pcie_intc 1>,
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
+index 3eb8550da1fc5..3fa1ad1d0f02c 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
+@@ -23,7 +23,7 @@ ap_h1_spi: &spi0 {};
+ 	adau7002: audio-codec-1 {
+ 		compatible = "adi,adau7002";
+ 		IOVDD-supply = <&pp1800_l15a>;
+-		wakeup-delay-ms = <15>;
++		wakeup-delay-ms = <80>;
+ 		#sound-dai-cells = <0>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 09b5523965579..1316bea3eab52 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -2123,7 +2123,7 @@
+ 				 <&gcc GCC_USB3_PHY_SEC_BCR>;
+ 			reset-names = "phy", "common";
+ 
+-			usb_2_ssphy: lane@88eb200 {
++			usb_2_ssphy: lanes@88eb200 {
+ 				reg = <0 0x088eb200 0 0x200>,
+ 				      <0 0x088eb400 0 0x200>,
+ 				      <0 0x088eb800 0 0x800>;
+diff --git a/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi b/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi
+index 202c4fc88bd51..dde3a07bc417c 100644
+--- a/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi
++++ b/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi
+@@ -20,6 +20,7 @@
+ 	pinctrl-names = "default";
+ 	phy-handle = <&phy0>;
+ 	tx-internal-delay-ps = <2000>;
++	rx-internal-delay-ps = <1800>;
+ 	status = "okay";
+ 
+ 	phy0: ethernet-phy@0 {
+diff --git a/arch/arm64/boot/dts/renesas/r8a77995-draak.dts b/arch/arm64/boot/dts/renesas/r8a77995-draak.dts
+index 6783c3ad08567..57784341f39d7 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77995-draak.dts
++++ b/arch/arm64/boot/dts/renesas/r8a77995-draak.dts
+@@ -277,10 +277,6 @@
+ 		interrupt-parent = <&gpio1>;
+ 		interrupts = <28 IRQ_TYPE_LEVEL_LOW>;
+ 
+-		/* Depends on LVDS */
+-		max-clock = <135000000>;
+-		min-vrefresh = <50>;
+-
+ 		adi,input-depth = <8>;
+ 		adi,input-colorspace = "rgb";
+ 		adi,input-clock = "1x";
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index facf4d41d32a2..3ed09e7f81f03 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -15,6 +15,7 @@
+ #include <linux/fs.h>
+ #include <linux/mman.h>
+ #include <linux/sched.h>
++#include <linux/kmemleak.h>
+ #include <linux/kvm.h>
+ #include <linux/kvm_irqfd.h>
+ #include <linux/irqbypass.h>
+@@ -1957,6 +1958,12 @@ static int finalize_hyp_mode(void)
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * Exclude HYP BSS from kmemleak so that it doesn't get peeked
++	 * at, which would end badly once the section is inaccessible.
++	 * None of the other sections should ever be introspected.
++	 */
++	kmemleak_free_part(__hyp_bss_start, __hyp_bss_end - __hyp_bss_start);
+ 	ret = pkvm_mark_hyp_section(__hyp_bss);
+ 	if (ret)
+ 		return ret;
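The hunk above leans on how kmemleak works: it periodically scans tracked memory looking for pointers, so a region that is about to become inaccessible to the host has to be removed from its view first, which is what kmemleak_free_part() does. A hedged kernel-style sketch of the same idiom (hypothetical function, not buildable outside a kernel tree):

#include <linux/kmemleak.h>

/* Hypothetical region about to be handed over and made inaccessible:
 * drop it from kmemleak first so a later scan cannot fault on it. */
static void hand_over_region(char *start, char *end)
{
	kmemleak_free_part(start, end - start);
	/* ... unmap or protect [start, end) from the host here ... */
}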
+diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu
+index f4d23977d2a5a..1127d64b6c1dc 100644
+--- a/arch/m68k/Kconfig.cpu
++++ b/arch/m68k/Kconfig.cpu
+@@ -26,6 +26,7 @@ config COLDFIRE
+ 	bool "Coldfire CPU family support"
+ 	select ARCH_HAVE_CUSTOM_GPIO_H
+ 	select CPU_HAS_NO_BITFIELDS
++	select CPU_HAS_NO_CAS
+ 	select CPU_HAS_NO_MULDIV64
+ 	select GENERIC_CSUM
+ 	select GPIOLIB
+@@ -39,6 +40,7 @@ config M68000
+ 	bool
+ 	depends on !MMU
+ 	select CPU_HAS_NO_BITFIELDS
++	select CPU_HAS_NO_CAS
+ 	select CPU_HAS_NO_MULDIV64
+ 	select CPU_HAS_NO_UNALIGNED
+ 	select GENERIC_CSUM
+@@ -54,6 +56,7 @@ config M68000
+ config MCPU32
+ 	bool
+ 	select CPU_HAS_NO_BITFIELDS
++	select CPU_HAS_NO_CAS
+ 	select CPU_HAS_NO_UNALIGNED
+ 	select CPU_NO_EFFICIENT_FFS
+ 	help
+@@ -383,7 +386,7 @@ config ADVANCED
+ 
+ config RMW_INSNS
+ 	bool "Use read-modify-write instructions"
+-	depends on ADVANCED
++	depends on ADVANCED && !CPU_HAS_NO_CAS
+ 	help
+ 	  This allows to use certain instructions that work with indivisible
+ 	  read-modify-write bus cycles. While this is faster than the
+@@ -459,6 +462,9 @@ config NODES_SHIFT
+ config CPU_HAS_NO_BITFIELDS
+ 	bool
+ 
++config CPU_HAS_NO_CAS
++	bool
++
+ config CPU_HAS_NO_MULDIV64
+ 	bool
+ 
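The new CPU_HAS_NO_CAS symbol exists so RMW_INSNS can refuse to build on ColdFire, 68000 and CPU32 cores: the option's indivisible read-modify-write bus cycles are what a compare-and-swap needs, and those cores lack the 68020+ CAS instruction. As a rough userspace illustration (C11, not kernel code), this is the kind of CAS loop that only maps to a single hardware instruction when the CPU actually has one:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic int counter;

/* Add with an upper bound, retrying the CAS until no other thread
 * raced with us; on m68k this needs the 68020+ CAS instruction. */
static void add_saturating(int delta, int limit)
{
	int old = atomic_load(&counter);
	int new;

	do {
		new = old + delta;
		if (new > limit)
			new = limit;
	} while (!atomic_compare_exchange_weak(&counter, &old, new));
}

int main(void)
{
	add_saturating(5, 3);
	printf("%d\n", atomic_load(&counter));	/* prints 3 */
	return 0;
}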
+diff --git a/arch/m68k/coldfire/clk.c b/arch/m68k/coldfire/clk.c
+index 076a9caa9557b..c895a189c5ae3 100644
+--- a/arch/m68k/coldfire/clk.c
++++ b/arch/m68k/coldfire/clk.c
+@@ -92,7 +92,7 @@ int clk_enable(struct clk *clk)
+ 	unsigned long flags;
+ 
+ 	if (!clk)
+-		return -EINVAL;
++		return 0;
+ 
+ 	spin_lock_irqsave(&clk_lock, flags);
+ 	if ((clk->enabled++ == 0) && clk->clk_ops)
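Returning 0 rather than -EINVAL for a NULL clk aligns ColdFire with the common clk API convention that NULL is a valid "no clock" handle whose enable and disable are no-ops, so drivers with optional clocks need no special casing. A minimal sketch of the convention (plain C, hypothetical struct):

#include <stddef.h>

struct clk { int enabled; };

/* NULL means "no clock here": succeed silently so callers can pass
 * an optional clock straight through without checking for NULL. */
static int clk_enable_sketch(struct clk *clk)
{
	if (!clk)
		return 0;
	clk->enabled++;
	return 0;
}

int main(void)
{
	return clk_enable_sketch(NULL);	/* 0: absent clock is fine */
}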
+diff --git a/arch/m68k/emu/nfeth.c b/arch/m68k/emu/nfeth.c
+index d2875e32abfca..79e55421cfb18 100644
+--- a/arch/m68k/emu/nfeth.c
++++ b/arch/m68k/emu/nfeth.c
+@@ -254,8 +254,8 @@ static void __exit nfeth_cleanup(void)
+ 
+ 	for (i = 0; i < MAX_UNIT; i++) {
+ 		if (nfeth_dev[i]) {
+-			unregister_netdev(nfeth_dev[0]);
+-			free_netdev(nfeth_dev[0]);
++			unregister_netdev(nfeth_dev[i]);
++			free_netdev(nfeth_dev[i]);
+ 		}
+ 	}
+ 	free_irq(nfEtherIRQ, nfeth_interrupt);
+diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
+index 8925f3969478f..5445ae106077f 100644
+--- a/arch/s390/include/asm/kvm_host.h
++++ b/arch/s390/include/asm/kvm_host.h
+@@ -962,6 +962,7 @@ struct kvm_arch{
+ 	atomic64_t cmma_dirty_pages;
+ 	/* subset of available cpu features enabled by user space */
+ 	DECLARE_BITMAP(cpu_feat, KVM_S390_VM_CPU_FEAT_NR_BITS);
++	/* indexed by vcpu_idx */
+ 	DECLARE_BITMAP(idle_mask, KVM_MAX_VCPUS);
+ 	struct kvm_s390_gisa_interrupt gisa_int;
+ 	struct kvm_s390_pv pv;
+diff --git a/arch/s390/kernel/debug.c b/arch/s390/kernel/debug.c
+index bb958d32bd813..a142671c449b1 100644
+--- a/arch/s390/kernel/debug.c
++++ b/arch/s390/kernel/debug.c
+@@ -24,6 +24,7 @@
+ #include <linux/export.h>
+ #include <linux/init.h>
+ #include <linux/fs.h>
++#include <linux/minmax.h>
+ #include <linux/debugfs.h>
+ 
+ #include <asm/debug.h>
+@@ -92,6 +93,8 @@ static int debug_hex_ascii_format_fn(debug_info_t *id, struct debug_view *view,
+ 				     char *out_buf, const char *in_buf);
+ static int debug_sprintf_format_fn(debug_info_t *id, struct debug_view *view,
+ 				   char *out_buf, debug_sprintf_entry_t *curr_event);
++static void debug_areas_swap(debug_info_t *a, debug_info_t *b);
++static void debug_events_append(debug_info_t *dest, debug_info_t *src);
+ 
+ /* globals */
+ 
+@@ -311,24 +314,6 @@ static debug_info_t *debug_info_create(const char *name, int pages_per_area,
+ 		goto out;
+ 
+ 	rc->mode = mode & ~S_IFMT;
+-
+-	/* create root directory */
+-	rc->debugfs_root_entry = debugfs_create_dir(rc->name,
+-						    debug_debugfs_root_entry);
+-
+-	/* append new element to linked list */
+-	if (!debug_area_first) {
+-		/* first element in list */
+-		debug_area_first = rc;
+-		rc->prev = NULL;
+-	} else {
+-		/* append element to end of list */
+-		debug_area_last->next = rc;
+-		rc->prev = debug_area_last;
+-	}
+-	debug_area_last = rc;
+-	rc->next = NULL;
+-
+ 	refcount_set(&rc->ref_count, 1);
+ out:
+ 	return rc;
+@@ -388,27 +373,10 @@ static void debug_info_get(debug_info_t *db_info)
+  */
+ static void debug_info_put(debug_info_t *db_info)
+ {
+-	int i;
+-
+ 	if (!db_info)
+ 		return;
+-	if (refcount_dec_and_test(&db_info->ref_count)) {
+-		for (i = 0; i < DEBUG_MAX_VIEWS; i++) {
+-			if (!db_info->views[i])
+-				continue;
+-			debugfs_remove(db_info->debugfs_entries[i]);
+-		}
+-		debugfs_remove(db_info->debugfs_root_entry);
+-		if (db_info == debug_area_first)
+-			debug_area_first = db_info->next;
+-		if (db_info == debug_area_last)
+-			debug_area_last = db_info->prev;
+-		if (db_info->prev)
+-			db_info->prev->next = db_info->next;
+-		if (db_info->next)
+-			db_info->next->prev = db_info->prev;
++	if (refcount_dec_and_test(&db_info->ref_count))
+ 		debug_info_free(db_info);
+-	}
+ }
+ 
+ /*
+@@ -632,6 +600,31 @@ static int debug_close(struct inode *inode, struct file *file)
+ 	return 0; /* success */
+ }
+ 
++/* Create debugfs entries and add to internal list. */
++static void _debug_register(debug_info_t *id)
++{
++	/* create root directory */
++	id->debugfs_root_entry = debugfs_create_dir(id->name,
++						    debug_debugfs_root_entry);
++
++	/* append new element to linked list */
++	if (!debug_area_first) {
++		/* first element in list */
++		debug_area_first = id;
++		id->prev = NULL;
++	} else {
++		/* append element to end of list */
++		debug_area_last->next = id;
++		id->prev = debug_area_last;
++	}
++	debug_area_last = id;
++	id->next = NULL;
++
++	debug_register_view(id, &debug_level_view);
++	debug_register_view(id, &debug_flush_view);
++	debug_register_view(id, &debug_pages_view);
++}
++
+ /**
+  * debug_register_mode() - creates and initializes debug area.
+  *
+@@ -661,19 +654,16 @@ debug_info_t *debug_register_mode(const char *name, int pages_per_area,
+ 	if ((uid != 0) || (gid != 0))
+ 		pr_warn("Root becomes the owner of all s390dbf files in sysfs\n");
+ 	BUG_ON(!initialized);
+-	mutex_lock(&debug_mutex);
+ 
+ 	/* create new debug_info */
+ 	rc = debug_info_create(name, pages_per_area, nr_areas, buf_size, mode);
+-	if (!rc)
+-		goto out;
+-	debug_register_view(rc, &debug_level_view);
+-	debug_register_view(rc, &debug_flush_view);
+-	debug_register_view(rc, &debug_pages_view);
+-out:
+-	if (!rc)
++	if (rc) {
++		mutex_lock(&debug_mutex);
++		_debug_register(rc);
++		mutex_unlock(&debug_mutex);
++	} else {
+ 		pr_err("Registering debug feature %s failed\n", name);
+-	mutex_unlock(&debug_mutex);
++	}
+ 	return rc;
+ }
+ EXPORT_SYMBOL(debug_register_mode);
+@@ -702,6 +692,27 @@ debug_info_t *debug_register(const char *name, int pages_per_area,
+ }
+ EXPORT_SYMBOL(debug_register);
+ 
++/* Remove debugfs entries and remove from internal list. */
++static void _debug_unregister(debug_info_t *id)
++{
++	int i;
++
++	for (i = 0; i < DEBUG_MAX_VIEWS; i++) {
++		if (!id->views[i])
++			continue;
++		debugfs_remove(id->debugfs_entries[i]);
++	}
++	debugfs_remove(id->debugfs_root_entry);
++	if (id == debug_area_first)
++		debug_area_first = id->next;
++	if (id == debug_area_last)
++		debug_area_last = id->prev;
++	if (id->prev)
++		id->prev->next = id->next;
++	if (id->next)
++		id->next->prev = id->prev;
++}
++
+ /**
+  * debug_unregister() - give back debug area.
+  *
+@@ -715,8 +726,10 @@ void debug_unregister(debug_info_t *id)
+ 	if (!id)
+ 		return;
+ 	mutex_lock(&debug_mutex);
+-	debug_info_put(id);
++	_debug_unregister(id);
+ 	mutex_unlock(&debug_mutex);
++
++	debug_info_put(id);
+ }
+ EXPORT_SYMBOL(debug_unregister);
+ 
+@@ -726,35 +739,28 @@ EXPORT_SYMBOL(debug_unregister);
+  */
+ static int debug_set_size(debug_info_t *id, int nr_areas, int pages_per_area)
+ {
+-	debug_entry_t ***new_areas;
++	debug_info_t *new_id;
+ 	unsigned long flags;
+-	int rc = 0;
+ 
+ 	if (!id || (nr_areas <= 0) || (pages_per_area < 0))
+ 		return -EINVAL;
+-	if (pages_per_area > 0) {
+-		new_areas = debug_areas_alloc(pages_per_area, nr_areas);
+-		if (!new_areas) {
+-			pr_info("Allocating memory for %i pages failed\n",
+-				pages_per_area);
+-			rc = -ENOMEM;
+-			goto out;
+-		}
+-	} else {
+-		new_areas = NULL;
++
++	new_id = debug_info_alloc("", pages_per_area, nr_areas, id->buf_size,
++				  id->level, ALL_AREAS);
++	if (!new_id) {
++		pr_info("Allocating memory for %i pages failed\n",
++			pages_per_area);
++		return -ENOMEM;
+ 	}
++
+ 	spin_lock_irqsave(&id->lock, flags);
+-	debug_areas_free(id);
+-	id->areas = new_areas;
+-	id->nr_areas = nr_areas;
+-	id->pages_per_area = pages_per_area;
+-	id->active_area = 0;
+-	memset(id->active_entries, 0, sizeof(int)*id->nr_areas);
+-	memset(id->active_pages, 0, sizeof(int)*id->nr_areas);
++	debug_events_append(new_id, id);
++	debug_areas_swap(new_id, id);
++	debug_info_free(new_id);
+ 	spin_unlock_irqrestore(&id->lock, flags);
+ 	pr_info("%s: set new size (%i pages)\n", id->name, pages_per_area);
+-out:
+-	return rc;
++
++	return 0;
+ }
+ 
+ /**
+@@ -821,6 +827,42 @@ static inline debug_entry_t *get_active_entry(debug_info_t *id)
+ 				  id->active_entries[id->active_area]);
+ }
+ 
++/* Swap debug areas of a and b. */
++static void debug_areas_swap(debug_info_t *a, debug_info_t *b)
++{
++	swap(a->nr_areas, b->nr_areas);
++	swap(a->pages_per_area, b->pages_per_area);
++	swap(a->areas, b->areas);
++	swap(a->active_area, b->active_area);
++	swap(a->active_pages, b->active_pages);
++	swap(a->active_entries, b->active_entries);
++}
++
++/* Append all debug events in active area from source to destination log. */
++static void debug_events_append(debug_info_t *dest, debug_info_t *src)
++{
++	debug_entry_t *from, *to, *last;
++
++	if (!src->areas || !dest->areas)
++		return;
++
++	/* Loop over all entries in src, starting with oldest. */
++	from = get_active_entry(src);
++	last = from;
++	do {
++		if (from->clock != 0LL) {
++			to = get_active_entry(dest);
++			memset(to, 0, dest->entry_size);
++			memcpy(to, from, min(src->entry_size,
++					     dest->entry_size));
++			proceed_active_entry(dest);
++		}
++
++		proceed_active_entry(src);
++		from = get_active_entry(src);
++	} while (from != last);
++}
++
+ /*
+  * debug_finish_entry:
+  * - set timestamp, caller address, cpu number etc.
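The debug_set_size() rework above replaces piecemeal surgery on a live debug_info_t with allocate-copy-swap: build a complete replacement, replay the old events oldest-first (debug_events_append), swap the area pointers under the lock (debug_areas_swap), and free the leftovers. Readers never observe a half-resized log, and a failed allocation leaves the original untouched. The same pattern in portable C, with the log reduced to a hypothetical ring of ints where 0 marks an empty slot:

#include <stdlib.h>

struct ring_log {
	size_t cap;	/* number of slots, assumed > 0 */
	size_t head;	/* next slot to write, also the oldest entry */
	int *slots;	/* 0 means "empty slot" */
};

/* Resize by building a replacement and swapping it in; on allocation
 * failure the original log is left fully intact. */
static int ring_log_resize(struct ring_log *log, size_t new_cap)
{
	struct ring_log new = { .cap = new_cap, .head = 0 };
	size_t i;

	new.slots = calloc(new_cap, sizeof(*new.slots));
	if (!new.slots)
		return -1;

	/* Replay old entries oldest-first, like debug_events_append(). */
	for (i = 0; i < log->cap; i++) {
		int ev = log->slots[(log->head + i) % log->cap];

		if (ev != 0) {	/* mirrors the from->clock != 0 check */
			new.slots[new.head] = ev;
			new.head = (new.head + 1) % new_cap;
		}
	}

	free(log->slots);	/* the swap: old buffers die here ... */
	*log = new;		/* ... and the replacement goes live */
	return 0;
}

int main(void)
{
	struct ring_log log = { .cap = 4, .head = 2 };

	log.slots = calloc(4, sizeof(*log.slots));
	log.slots[0] = 1;
	log.slots[1] = 2;
	return ring_log_resize(&log, 8);
}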
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index d548d60caed25..16256e17a544a 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -419,13 +419,13 @@ static unsigned long deliverable_irqs(struct kvm_vcpu *vcpu)
+ static void __set_cpu_idle(struct kvm_vcpu *vcpu)
+ {
+ 	kvm_s390_set_cpuflags(vcpu, CPUSTAT_WAIT);
+-	set_bit(vcpu->vcpu_id, vcpu->kvm->arch.idle_mask);
++	set_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+ }
+ 
+ static void __unset_cpu_idle(struct kvm_vcpu *vcpu)
+ {
+ 	kvm_s390_clear_cpuflags(vcpu, CPUSTAT_WAIT);
+-	clear_bit(vcpu->vcpu_id, vcpu->kvm->arch.idle_mask);
++	clear_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+ }
+ 
+ static void __reset_intercept_indicators(struct kvm_vcpu *vcpu)
+@@ -3050,18 +3050,18 @@ int kvm_s390_get_irq_state(struct kvm_vcpu *vcpu, __u8 __user *buf, int len)
+ 
+ static void __airqs_kick_single_vcpu(struct kvm *kvm, u8 deliverable_mask)
+ {
+-	int vcpu_id, online_vcpus = atomic_read(&kvm->online_vcpus);
++	int vcpu_idx, online_vcpus = atomic_read(&kvm->online_vcpus);
+ 	struct kvm_s390_gisa_interrupt *gi = &kvm->arch.gisa_int;
+ 	struct kvm_vcpu *vcpu;
+ 
+-	for_each_set_bit(vcpu_id, kvm->arch.idle_mask, online_vcpus) {
+-		vcpu = kvm_get_vcpu(kvm, vcpu_id);
++	for_each_set_bit(vcpu_idx, kvm->arch.idle_mask, online_vcpus) {
++		vcpu = kvm_get_vcpu(kvm, vcpu_idx);
+ 		if (psw_ioint_disabled(vcpu))
+ 			continue;
+ 		deliverable_mask &= (u8)(vcpu->arch.sie_block->gcr[6] >> 24);
+ 		if (deliverable_mask) {
+ 			/* lately kicked but not yet running */
+-			if (test_and_set_bit(vcpu_id, gi->kicked_mask))
++			if (test_and_set_bit(vcpu_idx, gi->kicked_mask))
+ 				return;
+ 			kvm_s390_vcpu_wakeup(vcpu);
+ 			return;
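The idle_mask and kicked_mask changes are all one fix: those bitmaps are declared with KVM_MAX_VCPUS bits (see the kvm_host.h hunk above, now annotated "indexed by vcpu_idx"), so they must be indexed by the dense vcpu index, never by the user-assigned vcpu id, which may be sparse and larger than the bitmap. A portable illustration of why the distinction matters:

#include <assert.h>
#include <limits.h>

#define MAX_VCPUS 256
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* One bit per dense *index*; user-chosen ids may exceed MAX_VCPUS. */
static unsigned long idle_mask[MAX_VCPUS / BITS_PER_LONG];

struct vcpu { unsigned int id, idx; };

static void set_idle(const struct vcpu *v)
{
	assert(v->idx < MAX_VCPUS);	/* ids carry no such guarantee */
	idle_mask[v->idx / BITS_PER_LONG] |= 1UL << (v->idx % BITS_PER_LONG);
}

int main(void)
{
	struct vcpu v = { .id = 4242, .idx = 7 };

	set_idle(&v);	/* indexes bit 7; bit 4242 would be out of bounds */
	return 0;
}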
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 876fc1f7282a0..32173fffad3f1 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -4020,7 +4020,7 @@ static int vcpu_pre_run(struct kvm_vcpu *vcpu)
+ 		kvm_s390_patch_guest_per_regs(vcpu);
+ 	}
+ 
+-	clear_bit(vcpu->vcpu_id, vcpu->kvm->arch.gisa_int.kicked_mask);
++	clear_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.gisa_int.kicked_mask);
+ 
+ 	vcpu->arch.sie_block->icptcode = 0;
+ 	cpuflags = atomic_read(&vcpu->arch.sie_block->cpuflags);
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index 9fad25109b0dd..ecd741ee3276e 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -79,7 +79,7 @@ static inline int is_vcpu_stopped(struct kvm_vcpu *vcpu)
+ 
+ static inline int is_vcpu_idle(struct kvm_vcpu *vcpu)
+ {
+-	return test_bit(vcpu->vcpu_id, vcpu->kvm->arch.idle_mask);
++	return test_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+ }
+ 
+ static inline int kvm_is_ucontrol(struct kvm *kvm)
+diff --git a/arch/s390/mm/kasan_init.c b/arch/s390/mm/kasan_init.c
+index db4d303aaaa9a..d7fcfe97d168d 100644
+--- a/arch/s390/mm/kasan_init.c
++++ b/arch/s390/mm/kasan_init.c
+@@ -108,6 +108,9 @@ static void __init kasan_early_pgtable_populate(unsigned long address,
+ 		sgt_prot &= ~_SEGMENT_ENTRY_NOEXEC;
+ 	}
+ 
++	/*
++	 * The first 1MB of the 1:1 mapping is mapped with 4KB pages.
++	 */
+ 	while (address < end) {
+ 		pg_dir = pgd_offset_k(address);
+ 		if (pgd_none(*pg_dir)) {
+@@ -158,30 +161,26 @@ static void __init kasan_early_pgtable_populate(unsigned long address,
+ 
+ 		pm_dir = pmd_offset(pu_dir, address);
+ 		if (pmd_none(*pm_dir)) {
+-			if (mode == POPULATE_ZERO_SHADOW &&
+-			    IS_ALIGNED(address, PMD_SIZE) &&
++			if (IS_ALIGNED(address, PMD_SIZE) &&
+ 			    end - address >= PMD_SIZE) {
+-				pmd_populate(&init_mm, pm_dir,
+-						kasan_early_shadow_pte);
+-				address = (address + PMD_SIZE) & PMD_MASK;
+-				continue;
+-			}
+-			/* the first megabyte of 1:1 is mapped with 4k pages */
+-			if (has_edat && address && end - address >= PMD_SIZE &&
+-			    mode != POPULATE_ZERO_SHADOW) {
+-				void *page;
+-
+-				if (mode == POPULATE_ONE2ONE) {
+-					page = (void *)address;
+-				} else {
+-					page = kasan_early_alloc_segment();
+-					memset(page, 0, _SEGMENT_SIZE);
++				if (mode == POPULATE_ZERO_SHADOW) {
++					pmd_populate(&init_mm, pm_dir, kasan_early_shadow_pte);
++					address = (address + PMD_SIZE) & PMD_MASK;
++					continue;
++				} else if (has_edat && address) {
++					void *page;
++
++					if (mode == POPULATE_ONE2ONE) {
++						page = (void *)address;
++					} else {
++						page = kasan_early_alloc_segment();
++						memset(page, 0, _SEGMENT_SIZE);
++					}
++					pmd_val(*pm_dir) = __pa(page) | sgt_prot;
++					address = (address + PMD_SIZE) & PMD_MASK;
++					continue;
+ 				}
+-				pmd_val(*pm_dir) = __pa(page) | sgt_prot;
+-				address = (address + PMD_SIZE) & PMD_MASK;
+-				continue;
+ 			}
+-
+ 			pt_dir = kasan_early_pte_alloc();
+ 			pmd_populate(&init_mm, pm_dir, pt_dir);
+ 		} else if (pmd_large(*pm_dir)) {
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 8fcb7ecb7225a..77cd965cffefa 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -661,9 +661,10 @@ int zpci_enable_device(struct zpci_dev *zdev)
+ {
+ 	int rc;
+ 
+-	rc = clp_enable_fh(zdev, ZPCI_NR_DMA_SPACES);
+-	if (rc)
++	if (clp_enable_fh(zdev, ZPCI_NR_DMA_SPACES)) {
++		rc = -EIO;
+ 		goto out;
++	}
+ 
+ 	rc = zpci_dma_init_device(zdev);
+ 	if (rc)
+@@ -684,7 +685,7 @@ int zpci_disable_device(struct zpci_dev *zdev)
+ 	 * The zPCI function may already be disabled by the platform, this is
+ 	 * detected in clp_disable_fh() which becomes a no-op.
+ 	 */
+-	return clp_disable_fh(zdev);
++	return clp_disable_fh(zdev) ? -EIO : 0;
+ }
+ 
+ /**
+diff --git a/arch/s390/pci/pci_clp.c b/arch/s390/pci/pci_clp.c
+index d3331596ddbe1..0a0e8b8293bef 100644
+--- a/arch/s390/pci/pci_clp.c
++++ b/arch/s390/pci/pci_clp.c
+@@ -213,15 +213,19 @@ out:
+ }
+ 
+ static int clp_refresh_fh(u32 fid);
+-/*
+- * Enable/Disable a given PCI function and update its function handle if
+- * necessary
++/**
++ * clp_set_pci_fn() - Execute a command on a PCI function
++ * @zdev: Function that will be affected
++ * @nr_dma_as: DMA address space number
++ * @command: The command code to execute
++ *
++ * Returns: 0 on success, < 0 for Linux errors (e.g. -ENOMEM), and
++ * > 0 for non-success platform responses
+  */
+ static int clp_set_pci_fn(struct zpci_dev *zdev, u8 nr_dma_as, u8 command)
+ {
+ 	struct clp_req_rsp_set_pci *rrb;
+ 	int rc, retries = 100;
+-	u32 fid = zdev->fid;
+ 
+ 	rrb = clp_alloc_block(GFP_KERNEL);
+ 	if (!rrb)
+@@ -245,17 +249,16 @@ static int clp_set_pci_fn(struct zpci_dev *zdev, u8 nr_dma_as, u8 command)
+ 		}
+ 	} while (rrb->response.hdr.rsp == CLP_RC_SETPCIFN_BUSY);
+ 
+-	if (rc || rrb->response.hdr.rsp != CLP_RC_OK) {
+-		zpci_err("Set PCI FN:\n");
+-		zpci_err_clp(rrb->response.hdr.rsp, rc);
+-	}
+-
+ 	if (!rc && rrb->response.hdr.rsp == CLP_RC_OK) {
+ 		zdev->fh = rrb->response.fh;
+-	} else if (!rc && rrb->response.hdr.rsp == CLP_RC_SETPCIFN_ALRDY &&
+-			rrb->response.fh == 0) {
++	} else if (!rc && rrb->response.hdr.rsp == CLP_RC_SETPCIFN_ALRDY) {
+ 		/* Function is already in desired state - update handle */
+-		rc = clp_refresh_fh(fid);
++		rc = clp_refresh_fh(zdev->fid);
++	} else {
++		zpci_err("Set PCI FN:\n");
++		zpci_err_clp(rrb->response.hdr.rsp, rc);
++		if (!rc)
++			rc = rrb->response.hdr.rsp;
+ 	}
+ 	clp_free_block(rrb);
+ 	return rc;
+@@ -301,17 +304,13 @@ int clp_enable_fh(struct zpci_dev *zdev, u8 nr_dma_as)
+ 
+ 	rc = clp_set_pci_fn(zdev, nr_dma_as, CLP_SET_ENABLE_PCI_FN);
+ 	zpci_dbg(3, "ena fid:%x, fh:%x, rc:%d\n", zdev->fid, zdev->fh, rc);
+-	if (rc)
+-		goto out;
+-
+-	if (zpci_use_mio(zdev)) {
++	if (!rc && zpci_use_mio(zdev)) {
+ 		rc = clp_set_pci_fn(zdev, nr_dma_as, CLP_SET_ENABLE_MIO);
+ 		zpci_dbg(3, "ena mio fid:%x, fh:%x, rc:%d\n",
+ 				zdev->fid, zdev->fh, rc);
+ 		if (rc)
+ 			clp_disable_fh(zdev);
+ 	}
+-out:
+ 	return rc;
+ }
+ 
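The kernel-doc added to clp_set_pci_fn() pins down a three-way contract: 0 on success, negative errno for Linux-side failures, and positive values passing the platform response code through. Callers that must speak plain errno, like zpci_enable_device() above, fold the positive case into -EIO at the boundary. A small self-contained sketch of that mapping (hypothetical names):

#include <errno.h>
#include <stdio.h>

/* Stand-in for the low-level call: <0 is an errno, 0 is success,
 * >0 passes an opaque platform response code through. */
static int platform_set_fn(int cmd)
{
	return cmd == 1 ? 0 : 42;	/* pretend 42 is a platform code */
}

/* Boundary that may only return errno: collapse platform codes. */
static int enable_device(int cmd)
{
	int rc = platform_set_fn(cmd);

	if (rc > 0)
		return -EIO;	/* opaque platform failure */
	return rc;		/* 0 or a real Linux error */
}

int main(void)
{
	printf("%d %d\n", enable_device(1), enable_device(2)); /* 0 -5 */
	return 0;
}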
+diff --git a/arch/x86/boot/compressed/efi_thunk_64.S b/arch/x86/boot/compressed/efi_thunk_64.S
+index 95a223b3e56a2..8bb92e9f4e973 100644
+--- a/arch/x86/boot/compressed/efi_thunk_64.S
++++ b/arch/x86/boot/compressed/efi_thunk_64.S
+@@ -5,9 +5,8 @@
+  * Early support for invoking 32-bit EFI services from a 64-bit kernel.
+  *
+  * Because this thunking occurs before ExitBootServices() we have to
+- * restore the firmware's 32-bit GDT before we make EFI service calls,
+- * since the firmware's 32-bit IDT is still currently installed and it
+- * needs to be able to service interrupts.
++ * restore the firmware's 32-bit GDT and IDT before we make EFI service
++ * calls.
+  *
+  * On the plus side, we don't have to worry about mangling 64-bit
+  * addresses into 32-bits because we're executing with an identity
+@@ -39,7 +38,7 @@ SYM_FUNC_START(__efi64_thunk)
+ 	/*
+ 	 * Convert x86-64 ABI params to i386 ABI
+ 	 */
+-	subq	$32, %rsp
++	subq	$64, %rsp
+ 	movl	%esi, 0x0(%rsp)
+ 	movl	%edx, 0x4(%rsp)
+ 	movl	%ecx, 0x8(%rsp)
+@@ -49,14 +48,19 @@ SYM_FUNC_START(__efi64_thunk)
+ 	leaq	0x14(%rsp), %rbx
+ 	sgdt	(%rbx)
+ 
++	addq	$16, %rbx
++	sidt	(%rbx)
++
+ 	/*
+-	 * Switch to gdt with 32-bit segments. This is the firmware GDT
+-	 * that was installed when the kernel started executing. This
+-	 * pointer was saved at the EFI stub entry point in head_64.S.
++	 * Switch to IDT and GDT with 32-bit segments. This is the firmware GDT
++	 * and IDT that was installed when the kernel started executing. The
++	 * pointers were saved at the EFI stub entry point in head_64.S.
+ 	 *
+ 	 * Pass the saved DS selector to the 32-bit code, and use far return to
+ 	 * restore the saved CS selector.
+ 	 */
++	leaq	efi32_boot_idt(%rip), %rax
++	lidt	(%rax)
+ 	leaq	efi32_boot_gdt(%rip), %rax
+ 	lgdt	(%rax)
+ 
+@@ -67,7 +71,7 @@ SYM_FUNC_START(__efi64_thunk)
+ 	pushq	%rax
+ 	lretq
+ 
+-1:	addq	$32, %rsp
++1:	addq	$64, %rsp
+ 	movq	%rdi, %rax
+ 
+ 	pop	%rbx
+@@ -128,10 +132,13 @@ SYM_FUNC_START_LOCAL(efi_enter32)
+ 
+ 	/*
+ 	 * Some firmware will return with interrupts enabled. Be sure to
+-	 * disable them before we switch GDTs.
++	 * disable them before we switch GDTs and IDTs.
+ 	 */
+ 	cli
+ 
++	lidtl	(%ebx)
++	subl	$16, %ebx
++
+ 	lgdtl	(%ebx)
+ 
+ 	movl	%cr4, %eax
+@@ -166,6 +173,11 @@ SYM_DATA_START(efi32_boot_gdt)
+ 	.quad	0
+ SYM_DATA_END(efi32_boot_gdt)
+ 
++SYM_DATA_START(efi32_boot_idt)
++	.word	0
++	.quad	0
++SYM_DATA_END(efi32_boot_idt)
++
+ SYM_DATA_START(efi32_boot_cs)
+ 	.word	0
+ SYM_DATA_END(efi32_boot_cs)
+diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
+index a2347ded77ea2..572c535cf45bc 100644
+--- a/arch/x86/boot/compressed/head_64.S
++++ b/arch/x86/boot/compressed/head_64.S
+@@ -319,6 +319,9 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
+ 	movw	%cs, rva(efi32_boot_cs)(%ebp)
+ 	movw	%ds, rva(efi32_boot_ds)(%ebp)
+ 
++	/* Store firmware IDT descriptor */
++	sidtl	rva(efi32_boot_idt)(%ebp)
++
+ 	/* Disable paging */
+ 	movl	%cr0, %eax
+ 	btrl	$X86_CR0_PG_BIT, %eax
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index 2144e54a6c892..388643ca2177e 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -849,6 +849,8 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
+ 		return -EINVAL;
+ 
+ 	err = skcipher_walk_virt(&walk, req, false);
++	if (err)
++		return err;
+ 
+ 	if (unlikely(tail > 0 && walk.nbytes < walk.total)) {
+ 		int blocks = DIV_ROUND_UP(req->cryptlen, AES_BLOCK_SIZE) - 2;
+@@ -862,7 +864,10 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
+ 		skcipher_request_set_crypt(&subreq, req->src, req->dst,
+ 					   blocks * AES_BLOCK_SIZE, req->iv);
+ 		req = &subreq;
++
+ 		err = skcipher_walk_virt(&walk, req, false);
++		if (err)
++			return err;
+ 	} else {
+ 		tail = 0;
+ 	}
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index 921f47b9bb247..ccc9ee1971e89 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -571,6 +571,7 @@ static struct perf_ibs perf_ibs_op = {
+ 		.start		= perf_ibs_start,
+ 		.stop		= perf_ibs_stop,
+ 		.read		= perf_ibs_read,
++		.capabilities	= PERF_PMU_CAP_NO_EXCLUDE,
+ 	},
+ 	.msr			= MSR_AMD64_IBSOPCTL,
+ 	.config_mask		= IBS_OP_CONFIG_MASK,
+diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
+index ddfb3cad8dff2..2a8319fad0b75 100644
+--- a/arch/x86/include/asm/mce.h
++++ b/arch/x86/include/asm/mce.h
+@@ -265,6 +265,7 @@ enum mcp_flags {
+ 	MCP_TIMESTAMP	= BIT(0),	/* log time stamp */
+ 	MCP_UC		= BIT(1),	/* log uncorrected errors */
+ 	MCP_DONTLOG	= BIT(2),	/* only clear, don't log */
++	MCP_QUEUE_LOG	= BIT(3),	/* only queue to genpool */
+ };
+ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b);
+ 
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index bf7fe87a7e884..01ff4014b7f67 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -817,7 +817,10 @@ log_it:
+ 		if (mca_cfg.dont_log_ce && !mce_usable_address(&m))
+ 			goto clear_it;
+ 
+-		mce_log(&m);
++		if (flags & MCP_QUEUE_LOG)
++			mce_gen_pool_add(&m);
++		else
++			mce_log(&m);
+ 
+ clear_it:
+ 		/*
+@@ -1630,10 +1633,12 @@ static void __mcheck_cpu_init_generic(void)
+ 		m_fl = MCP_DONTLOG;
+ 
+ 	/*
+-	 * Log the machine checks left over from the previous reset.
++	 * Log the machine checks left over from the previous reset. Log them
++	 * only, do not start processing them. That will happen in mcheck_late_init()
++	 * when all consumers have been registered on the notifier chain.
+ 	 */
+ 	bitmap_fill(all_banks, MAX_NR_BANKS);
+-	machine_check_poll(MCP_UC | m_fl, &all_banks);
++	machine_check_poll(MCP_UC | MCP_QUEUE_LOG | m_fl, &all_banks);
+ 
+ 	cr4_set_bits(X86_CR4_MCE);
+ 
+diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
+index 57e4bb695ff96..8caf871b796f2 100644
+--- a/arch/x86/kernel/cpu/resctrl/monitor.c
++++ b/arch/x86/kernel/cpu/resctrl/monitor.c
+@@ -304,6 +304,12 @@ static u64 __mon_event_count(u32 rmid, struct rmid_read *rr)
+ 	case QOS_L3_MBM_LOCAL_EVENT_ID:
+ 		m = &rr->d->mbm_local[rmid];
+ 		break;
++	default:
++		/*
++		 * Code would never reach here because an invalid
++		 * Code should never reach here, because an invalid
++		 * event id would already have failed in __rmid_read().
++		return RMID_VAL_ERROR;
+ 	}
+ 
+ 	if (rr->first) {
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 9d3783800c8ce..9d76c33683649 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -257,12 +257,6 @@ static bool check_mmio_spte(struct kvm_vcpu *vcpu, u64 spte)
+ static gpa_t translate_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
+                                   struct x86_exception *exception)
+ {
+-	/* Check if guest physical address doesn't exceed guest maximum */
+-	if (kvm_vcpu_is_illegal_gpa(vcpu, gpa)) {
+-		exception->error_code |= PFERR_RSVD_MASK;
+-		return UNMAPPED_GVA;
+-	}
+-
+         return gpa;
+ }
+ 
+@@ -2760,6 +2754,7 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
+ 			      kvm_pfn_t pfn, int max_level)
+ {
+ 	struct kvm_lpage_info *linfo;
++	int host_level;
+ 
+ 	max_level = min(max_level, max_huge_page_level);
+ 	for ( ; max_level > PG_LEVEL_4K; max_level--) {
+@@ -2771,7 +2766,8 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
+ 	if (max_level == PG_LEVEL_4K)
+ 		return PG_LEVEL_4K;
+ 
+-	return host_pfn_mapping_level(kvm, gfn, pfn, slot);
++	host_level = host_pfn_mapping_level(kvm, gfn, pfn, slot);
++	return min(host_level, max_level);
+ }
+ 
+ int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
+@@ -2795,17 +2791,12 @@ int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn,
+ 	if (!slot)
+ 		return PG_LEVEL_4K;
+ 
+-	level = kvm_mmu_max_mapping_level(vcpu->kvm, slot, gfn, pfn, max_level);
+-	if (level == PG_LEVEL_4K)
+-		return level;
+-
+-	*req_level = level = min(level, max_level);
+-
+ 	/*
+ 	 * Enforce the iTLB multihit workaround after capturing the requested
+ 	 * level, which will be used to do precise, accurate accounting.
+ 	 */
+-	if (huge_page_disallowed)
++	*req_level = level = kvm_mmu_max_mapping_level(vcpu->kvm, slot, gfn, pfn, max_level);
++	if (level == PG_LEVEL_4K || huge_page_disallowed)
+ 		return PG_LEVEL_4K;
+ 
+ 	/*
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index 41ef3ed5349f1..3c225bc0c0826 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -410,6 +410,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
+ 	bool was_leaf = was_present && is_last_spte(old_spte, level);
+ 	bool is_leaf = is_present && is_last_spte(new_spte, level);
+ 	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
++	bool was_large, is_large;
+ 
+ 	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
+ 	WARN_ON(level < PG_LEVEL_4K);
+@@ -443,13 +444,6 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
+ 
+ 	trace_kvm_tdp_mmu_spte_changed(as_id, gfn, level, old_spte, new_spte);
+ 
+-	if (is_large_pte(old_spte) != is_large_pte(new_spte)) {
+-		if (is_large_pte(old_spte))
+-			atomic64_sub(1, (atomic64_t*)&kvm->stat.lpages);
+-		else
+-			atomic64_add(1, (atomic64_t*)&kvm->stat.lpages);
+-	}
+-
+ 	/*
+ 	 * The only times a SPTE should be changed from a non-present to
+ 	 * non-present state is when an MMIO entry is installed/modified/
+@@ -475,6 +469,18 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
+ 		return;
+ 	}
+ 
++	/*
++	 * Update large page stats if a large page is being zapped, created, or
++	 * is replacing an existing shadow page.
++	 */
++	was_large = was_leaf && is_large_pte(old_spte);
++	is_large = is_leaf && is_large_pte(new_spte);
++	if (was_large != is_large) {
++		if (was_large)
++			atomic64_sub(1, (atomic64_t *)&kvm->stat.lpages);
++		else
++			atomic64_add(1, (atomic64_t *)&kvm->stat.lpages);
++	}
+ 
+ 	if (was_leaf && is_dirty_spte(old_spte) &&
+ 	    (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
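The tdp_mmu stats fix narrows the accounting condition: only *leaf* large mappings count toward kvm->stat.lpages, so the old is_large_pte()-only comparison could skew the counter when a huge non-leaf SPTE value appeared. The delta is computed from (leaf && large) before and after the update, and applied atomically. The same shape in portable C11:

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic long lpages;

/* Touch the large-page counter only when the combined leaf+large
 * property actually changes across an SPTE update. */
static void account_spte(bool was_leaf, bool was_large_pte,
			 bool is_leaf, bool is_large_pte)
{
	bool was = was_leaf && was_large_pte;
	bool is = is_leaf && is_large_pte;

	if (was == is)
		return;
	atomic_fetch_add(&lpages, is ? 1 : -1);
}

int main(void)
{
	account_spte(false, false, true, true);	/* map a large leaf: +1 */
	account_spte(true, true, false, false);	/* zap it again:     -1 */
	return (int)atomic_load(&lpages);	/* back to 0 */
}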
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index df3b7e5644169..1d47f2dbe3e99 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2226,12 +2226,11 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
+ 			 ~PIN_BASED_VMX_PREEMPTION_TIMER);
+ 
+ 	/* Posted interrupts setting is only taken from vmcs12.  */
+-	if (nested_cpu_has_posted_intr(vmcs12)) {
++	vmx->nested.pi_pending = false;
++	if (nested_cpu_has_posted_intr(vmcs12))
+ 		vmx->nested.posted_intr_nv = vmcs12->posted_intr_nv;
+-		vmx->nested.pi_pending = false;
+-	} else {
++	else
+ 		exec_control &= ~PIN_BASED_POSTED_INTR;
+-	}
+ 	pin_controls_set(vmx, exec_control);
+ 
+ 	/*
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index dcd4f43c23de5..6af7d0b0c154b 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6452,6 +6452,9 @@ static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 
++	if (vmx->emulation_required)
++		return;
++
+ 	if (vmx->exit_reason.basic == EXIT_REASON_EXTERNAL_INTERRUPT)
+ 		handle_external_interrupt_irqoff(vcpu);
+ 	else if (vmx->exit_reason.basic == EXIT_REASON_EXCEPTION_NMI)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 1e11198f89934..93e851041c9c0 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3223,6 +3223,10 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 			if (!msr_info->host_initiated) {
+ 				s64 adj = data - vcpu->arch.ia32_tsc_adjust_msr;
+ 				adjust_tsc_offset_guest(vcpu, adj);
++				/* Before returning to the guest, tsc_timestamp must be
++				 * adjusted as well; otherwise the guest's percpu pvclock time could jump.
++				 */
++				kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
+ 			}
+ 			vcpu->arch.ia32_tsc_adjust_msr = data;
+ 		}
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index eccbe2aed7c3f..4df33cc08eee0 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2333,6 +2333,9 @@ static int bfq_request_merge(struct request_queue *q, struct request **req,
+ 	__rq = bfq_find_rq_fmerge(bfqd, bio, q);
+ 	if (__rq && elv_bio_merge_ok(__rq, bio)) {
+ 		*req = __rq;
++
++		if (blk_discard_mergable(__rq))
++			return ELEVATOR_DISCARD_MERGE;
+ 		return ELEVATOR_FRONT_MERGE;
+ 	}
+ 
+diff --git a/block/bio.c b/block/bio.c
+index 1fab762e079be..d95e3456ba0c5 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -979,6 +979,14 @@ static int bio_iov_bvec_set_append(struct bio *bio, struct iov_iter *iter)
+ 	return 0;
+ }
+ 
++static void bio_put_pages(struct page **pages, size_t size, size_t off)
++{
++	size_t i, nr = DIV_ROUND_UP(size + (off & ~PAGE_MASK), PAGE_SIZE);
++
++	for (i = 0; i < nr; i++)
++		put_page(pages[i]);
++}
++
+ #define PAGE_PTRS_PER_BVEC     (sizeof(struct bio_vec) / sizeof(struct page *))
+ 
+ /**
+@@ -1023,8 +1031,10 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
+ 			if (same_page)
+ 				put_page(page);
+ 		} else {
+-			if (WARN_ON_ONCE(bio_full(bio, len)))
+-                                return -EINVAL;
++			if (WARN_ON_ONCE(bio_full(bio, len))) {
++				bio_put_pages(pages + i, left, offset);
++				return -EINVAL;
++			}
+ 			__bio_add_page(bio, page, len, offset);
+ 		}
+ 		offset = 0;
+@@ -1069,6 +1079,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
+ 		len = min_t(size_t, PAGE_SIZE - offset, left);
+ 		if (bio_add_hw_page(q, bio, page, len, offset,
+ 				max_append_sectors, &same_page) != len) {
++			bio_put_pages(pages + i, left, offset);
+ 			ret = -EINVAL;
+ 			break;
+ 		}
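bio_put_pages() has to drop exactly the page references still pinned when a partial add fails: size bytes starting off bytes into the first page span DIV_ROUND_UP(size + (off & ~PAGE_MASK), PAGE_SIZE) pages, where off & ~PAGE_MASK is the in-page offset of the first byte. A standalone check of that arithmetic (userspace, 4 KiB pages assumed):

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Pages covered by [off, off + size): fold the in-page offset of
 * the first byte into the length before rounding up. */
static size_t pages_spanned(size_t size, size_t off)
{
	return DIV_ROUND_UP(size + (off & ~PAGE_MASK), PAGE_SIZE);
}

int main(void)
{
	assert(pages_spanned(1, 0) == 1);
	assert(pages_spanned(PAGE_SIZE, 0) == 1);
	assert(pages_spanned(PAGE_SIZE, 1) == 2);	/* straddles */
	assert(pages_spanned(100, PAGE_SIZE - 1) == 2);	/* ditto */
	puts("ok");
	return 0;
}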
+diff --git a/block/blk-crypto.c b/block/blk-crypto.c
+index c5bdaafffa29f..103c2e2d50d67 100644
+--- a/block/blk-crypto.c
++++ b/block/blk-crypto.c
+@@ -332,7 +332,7 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
+ 	if (mode->keysize == 0)
+ 		return -EINVAL;
+ 
+-	if (dun_bytes == 0 || dun_bytes > BLK_CRYPTO_MAX_IV_SIZE)
++	if (dun_bytes == 0 || dun_bytes > mode->ivsize)
+ 		return -EINVAL;
+ 
+ 	if (!is_power_of_2(data_unit_size))
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index bcdff1879c346..526953525e35e 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -348,6 +348,8 @@ void __blk_queue_split(struct bio **bio, unsigned int *nr_segs)
+ 		trace_block_split(split, (*bio)->bi_iter.bi_sector);
+ 		submit_bio_noacct(*bio);
+ 		*bio = split;
++
++		blk_throtl_charge_bio_split(*bio);
+ 	}
+ }
+ 
+@@ -705,22 +707,6 @@ static void blk_account_io_merge_request(struct request *req)
+ 	}
+ }
+ 
+-/*
+- * Two cases of handling DISCARD merge:
+- * If max_discard_segments > 1, the driver takes every bio
+- * as a range and send them to controller together. The ranges
+- * needn't to be contiguous.
+- * Otherwise, the bios/requests will be handled as same as
+- * others which should be contiguous.
+- */
+-static inline bool blk_discard_mergable(struct request *req)
+-{
+-	if (req_op(req) == REQ_OP_DISCARD &&
+-	    queue_max_discard_segments(req->q) > 1)
+-		return true;
+-	return false;
+-}
+-
+ static enum elv_merge blk_try_req_merge(struct request *req,
+ 					struct request *next)
+ {
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index b1b22d863bdf8..55c49015e5333 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -178,6 +178,9 @@ struct throtl_grp {
+ 	unsigned int bad_bio_cnt; /* bios exceeding latency threshold */
+ 	unsigned long bio_cnt_reset_time;
+ 
++	atomic_t io_split_cnt[2];
++	atomic_t last_io_split_cnt[2];
++
+ 	struct blkg_rwstat stat_bytes;
+ 	struct blkg_rwstat stat_ios;
+ };
+@@ -777,6 +780,8 @@ static inline void throtl_start_new_slice_with_credit(struct throtl_grp *tg,
+ 	tg->bytes_disp[rw] = 0;
+ 	tg->io_disp[rw] = 0;
+ 
++	atomic_set(&tg->io_split_cnt[rw], 0);
++
+ 	/*
+ 	 * Previous slice has expired. We must have trimmed it after last
+ 	 * bio dispatch. That means since start of last slice, we never used
+@@ -799,6 +804,9 @@ static inline void throtl_start_new_slice(struct throtl_grp *tg, bool rw)
+ 	tg->io_disp[rw] = 0;
+ 	tg->slice_start[rw] = jiffies;
+ 	tg->slice_end[rw] = jiffies + tg->td->throtl_slice;
++
++	atomic_set(&tg->io_split_cnt[rw], 0);
++
+ 	throtl_log(&tg->service_queue,
+ 		   "[%c] new slice start=%lu end=%lu jiffies=%lu",
+ 		   rw == READ ? 'R' : 'W', tg->slice_start[rw],
+@@ -1031,6 +1039,9 @@ static bool tg_may_dispatch(struct throtl_grp *tg, struct bio *bio,
+ 				jiffies + tg->td->throtl_slice);
+ 	}
+ 
++	if (iops_limit != UINT_MAX)
++		tg->io_disp[rw] += atomic_xchg(&tg->io_split_cnt[rw], 0);
++
+ 	if (tg_with_in_bps_limit(tg, bio, bps_limit, &bps_wait) &&
+ 	    tg_with_in_iops_limit(tg, bio, iops_limit, &iops_wait)) {
+ 		if (wait)
+@@ -2052,12 +2063,14 @@ static void throtl_downgrade_check(struct throtl_grp *tg)
+ 	}
+ 
+ 	if (tg->iops[READ][LIMIT_LOW]) {
++		tg->last_io_disp[READ] += atomic_xchg(&tg->last_io_split_cnt[READ], 0);
+ 		iops = tg->last_io_disp[READ] * HZ / elapsed_time;
+ 		if (iops >= tg->iops[READ][LIMIT_LOW])
+ 			tg->last_low_overflow_time[READ] = now;
+ 	}
+ 
+ 	if (tg->iops[WRITE][LIMIT_LOW]) {
++		tg->last_io_disp[WRITE] += atomic_xchg(&tg->last_io_split_cnt[WRITE], 0);
+ 		iops = tg->last_io_disp[WRITE] * HZ / elapsed_time;
+ 		if (iops >= tg->iops[WRITE][LIMIT_LOW])
+ 			tg->last_low_overflow_time[WRITE] = now;
+@@ -2176,6 +2189,25 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
+ }
+ #endif
+ 
++void blk_throtl_charge_bio_split(struct bio *bio)
++{
++	struct blkcg_gq *blkg = bio->bi_blkg;
++	struct throtl_grp *parent = blkg_to_tg(blkg);
++	struct throtl_service_queue *parent_sq;
++	bool rw = bio_data_dir(bio);
++
++	do {
++		if (!parent->has_rules[rw])
++			break;
++
++		atomic_inc(&parent->io_split_cnt[rw]);
++		atomic_inc(&parent->last_io_split_cnt[rw]);
++
++		parent_sq = parent->service_queue.parent_sq;
++		parent = sq_to_tg(parent_sq);
++	} while (parent);
++}
++
+ bool blk_throtl_bio(struct bio *bio)
+ {
+ 	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
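Split bios skip the path where IOs are normally counted, so blk_throtl_charge_bio_split() records them in per-group atomic counters, walking every ancestor because each level of the hierarchy throttles on its own. The dispatch and downgrade paths then drain those counters with atomic_xchg(..., 0), so each split is charged exactly once. The charge-then-drain pattern in portable C11, with the hierarchy reduced to a parent chain:

#include <stdatomic.h>
#include <stddef.h>

struct group {
	struct group *parent;
	atomic_long io_split_cnt;
	long io_disp;
};

/* Charge one split to the group and all of its ancestors. */
static void charge_split(struct group *g)
{
	for (; g; g = g->parent)
		atomic_fetch_add(&g->io_split_cnt, 1);
}

/* Fold pending split charges into the dispatch count exactly once:
 * the exchange empties the counter in the same atomic step. */
static void drain_splits(struct group *g)
{
	g->io_disp += atomic_exchange(&g->io_split_cnt, 0);
}

int main(void)
{
	struct group root = { 0 };
	struct group child = { .parent = &root };

	charge_split(&child);
	drain_splits(&child);
	drain_splits(&root);
	return (int)(child.io_disp + root.io_disp) - 2;	/* 0 on success */
}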
+diff --git a/block/blk.h b/block/blk.h
+index 54d48987c21b2..40b00d18bdb25 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -290,11 +290,13 @@ int create_task_io_context(struct task_struct *task, gfp_t gfp_mask, int node);
+ extern int blk_throtl_init(struct request_queue *q);
+ extern void blk_throtl_exit(struct request_queue *q);
+ extern void blk_throtl_register_queue(struct request_queue *q);
++extern void blk_throtl_charge_bio_split(struct bio *bio);
+ bool blk_throtl_bio(struct bio *bio);
+ #else /* CONFIG_BLK_DEV_THROTTLING */
+ static inline int blk_throtl_init(struct request_queue *q) { return 0; }
+ static inline void blk_throtl_exit(struct request_queue *q) { }
+ static inline void blk_throtl_register_queue(struct request_queue *q) { }
++static inline void blk_throtl_charge_bio_split(struct bio *bio) { }
+ static inline bool blk_throtl_bio(struct bio *bio) { return false; }
+ #endif /* CONFIG_BLK_DEV_THROTTLING */
+ #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
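The blk.h hunk follows the header's existing rule: every throttling entry point gets a static inline no-op twin under the #else branch, so callers such as __blk_queue_split() compile unchanged whether CONFIG_BLK_DEV_THROTTLING is set or not. The pattern in miniature (hypothetical feature name):

/* In the header: one #ifdef, zero #ifdefs at the call sites. */
#ifdef CONFIG_FEATURE_X
void feature_x_charge(int n);			/* real one in feature_x.c */
#else
static inline void feature_x_charge(int n) { }	/* optimized away */
#endif

Call sites simply invoke feature_x_charge(); when the feature is off, the empty inline lets the compiler delete the call entirely.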
+diff --git a/block/elevator.c b/block/elevator.c
+index 440699c281193..73e0591acfd4b 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -336,6 +336,9 @@ enum elv_merge elv_merge(struct request_queue *q, struct request **req,
+ 	__rq = elv_rqhash_find(q, bio->bi_iter.bi_sector);
+ 	if (__rq && elv_bio_merge_ok(__rq, bio)) {
+ 		*req = __rq;
++
++		if (blk_discard_mergable(__rq))
++			return ELEVATOR_DISCARD_MERGE;
+ 		return ELEVATOR_BACK_MERGE;
+ 	}
+ 
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 8eea2cbf2bf4a..8dca7255d04cf 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -454,6 +454,8 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
+ 
+ 		if (elv_bio_merge_ok(__rq, bio)) {
+ 			*rq = __rq;
++			if (blk_discard_mergable(__rq))
++				return ELEVATOR_DISCARD_MERGE;
+ 			return ELEVATOR_FRONT_MERGE;
+ 		}
+ 	}
+diff --git a/certs/Makefile b/certs/Makefile
+index 359239a0ee9e3..f9344e52ecdae 100644
+--- a/certs/Makefile
++++ b/certs/Makefile
+@@ -57,11 +57,19 @@ endif
+ redirect_openssl	= 2>&1
+ quiet_redirect_openssl	= 2>&1
+ silent_redirect_openssl = 2>/dev/null
++openssl_available       = $(shell openssl help 2>/dev/null && echo yes)
+ 
+ # We do it this way rather than having a boolean option for enabling an
+ # external private key, because 'make randconfig' might enable such a
+ # boolean option and we unfortunately can't make it depend on !RANDCONFIG.
+ ifeq ($(CONFIG_MODULE_SIG_KEY),"certs/signing_key.pem")
++
++ifeq ($(openssl_available),yes)
++X509TEXT=$(shell openssl x509 -in "certs/signing_key.pem" -text 2>/dev/null)
++
++$(if $(findstring rsaEncryption,$(X509TEXT)),,$(shell rm -f "certs/signing_key.pem"))
++endif
++
+ $(obj)/signing_key.pem: $(obj)/x509.genkey
+ 	@$(kecho) "###"
+ 	@$(kecho) "### Now generating an X.509 key pair to be used for signing modules."
+diff --git a/crypto/Makefile b/crypto/Makefile
+index 10526d4559b80..c633f15a04813 100644
+--- a/crypto/Makefile
++++ b/crypto/Makefile
+@@ -74,7 +74,6 @@ obj-$(CONFIG_CRYPTO_NULL2) += crypto_null.o
+ obj-$(CONFIG_CRYPTO_MD4) += md4.o
+ obj-$(CONFIG_CRYPTO_MD5) += md5.o
+ obj-$(CONFIG_CRYPTO_RMD160) += rmd160.o
+-obj-$(CONFIG_CRYPTO_RMD320) += rmd320.o
+ obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
+ obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
+ obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
+diff --git a/crypto/ecc.h b/crypto/ecc.h
+index a006132646a43..1350e8eb6ac23 100644
+--- a/crypto/ecc.h
++++ b/crypto/ecc.h
+@@ -27,6 +27,7 @@
+ #define _CRYPTO_ECC_H
+ 
+ #include <crypto/ecc_curve.h>
++#include <asm/unaligned.h>
+ 
+ /* One digit is u64 qword. */
+ #define ECC_CURVE_NIST_P192_DIGITS  3
+@@ -46,13 +47,13 @@
+  * @out:      Output array
+  * @ndigits:  Number of digits to copy
+  */
+-static inline void ecc_swap_digits(const u64 *in, u64 *out, unsigned int ndigits)
++static inline void ecc_swap_digits(const void *in, u64 *out, unsigned int ndigits)
+ {
+ 	const __be64 *src = (__force __be64 *)in;
+ 	int i;
+ 
+ 	for (i = 0; i < ndigits; i++)
+-		out[i] = be64_to_cpu(src[ndigits - 1 - i]);
++		out[i] = get_unaligned_be64(&src[ndigits - 1 - i]);
+ }
+ 
+ /**
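Widening ecc_swap_digits() to const void * and get_unaligned_be64() lets it accept byte buffers of any alignment; dereferencing a __be64 pointer that is not 8-byte aligned is undefined behavior and faults on some architectures. A portable equivalent using memcpy, the standard-compliant unaligned load:

#include <stdint.h>
#include <string.h>

/* Load a big-endian u64 from a possibly unaligned address. */
static uint64_t load_be64(const void *p)
{
	uint8_t b[8];
	uint64_t v = 0;
	int i;

	memcpy(b, p, 8);	/* no alignment requirement */
	for (i = 0; i < 8; i++)
		v = (v << 8) | b[i];
	return v;
}

/* Mirror of ecc_swap_digits(): big-endian digits, most significant
 * digit first, into native u64 digits, least significant first. */
static void swap_digits(const void *in, uint64_t *out, unsigned int n)
{
	const uint8_t *src = in;
	unsigned int i;

	for (i = 0; i < n; i++)
		out[i] = load_be64(src + 8 * (n - 1 - i));
}

int main(void)
{
	unsigned char buf[16] = { [7] = 0x2a, [15] = 0x01 };
	uint64_t d[2];

	swap_digits(buf, d, 2);
	return (d[0] == 0x01 && d[1] == 0x2a) ? 0 : 1;
}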
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index 6b7c158dc5087..f9c00875bc0e4 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -290,6 +290,11 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
+ 	}
+ 
+ 	ret = crypto_aead_setauthsize(tfm, authsize);
++	if (ret) {
++		pr_err("alg: aead: Failed to setauthsize for %s: %d\n", algo,
++		       ret);
++		goto out_free_tfm;
++	}
+ 
+ 	for (i = 0; i < num_mb; ++i)
+ 		if (testmgr_alloc_buf(data[i].xbuf)) {
+@@ -315,7 +320,7 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
+ 	for (i = 0; i < num_mb; ++i) {
+ 		data[i].req = aead_request_alloc(tfm, GFP_KERNEL);
+ 		if (!data[i].req) {
+-			pr_err("alg: skcipher: Failed to allocate request for %s\n",
++			pr_err("alg: aead: Failed to allocate request for %s\n",
+ 			       algo);
+ 			while (i--)
+ 				aead_request_free(data[i].req);
+@@ -567,13 +572,19 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
+ 	sgout = &sg[9];
+ 
+ 	tfm = crypto_alloc_aead(algo, 0, 0);
+-
+ 	if (IS_ERR(tfm)) {
+ 		pr_err("alg: aead: Failed to load transform for %s: %ld\n", algo,
+ 		       PTR_ERR(tfm));
+ 		goto out_notfm;
+ 	}
+ 
++	ret = crypto_aead_setauthsize(tfm, authsize);
++	if (ret) {
++		pr_err("alg: aead: Failed to setauthsize for %s: %d\n", algo,
++		       ret);
++		goto out_noreq;
++	}
++
+ 	crypto_init_wait(&wait);
+ 	printk(KERN_INFO "\ntesting speed of %s (%s) %s\n", algo,
+ 			get_driver_name(crypto_aead, tfm), e);
+@@ -611,8 +622,13 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
+ 					break;
+ 				}
+ 			}
++
+ 			ret = crypto_aead_setkey(tfm, key, *keysize);
+-			ret = crypto_aead_setauthsize(tfm, authsize);
++			if (ret) {
++				pr_err("setkey() failed flags=%x: %d\n",
++					crypto_aead_get_flags(tfm), ret);
++				goto out;
++			}
+ 
+ 			iv_len = crypto_aead_ivsize(tfm);
+ 			if (iv_len)
+@@ -622,15 +638,8 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
+ 			printk(KERN_INFO "test %u (%d bit key, %d byte blocks): ",
+ 					i, *keysize * 8, bs);
+ 
+-
+ 			memset(tvmem[0], 0xff, PAGE_SIZE);
+ 
+-			if (ret) {
+-				pr_err("setkey() failed flags=%x\n",
+-						crypto_aead_get_flags(tfm));
+-				goto out;
+-			}
+-
+ 			sg_init_aead(sg, xbuf, bs + (enc ? 0 : authsize),
+ 				     assoc, aad_size);
+ 
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 61c762961ca8e..44f434acfce08 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5573,7 +5573,7 @@ int ata_host_start(struct ata_host *host)
+ 			have_stop = 1;
+ 	}
+ 
+-	if (host->ops->host_stop)
++	if (host->ops && host->ops->host_stop)
+ 		have_stop = 1;
+ 
+ 	if (have_stop) {
+diff --git a/drivers/auxdisplay/hd44780.c b/drivers/auxdisplay/hd44780.c
+index 2e5e7c9939334..8b2a0eb3f32a4 100644
+--- a/drivers/auxdisplay/hd44780.c
++++ b/drivers/auxdisplay/hd44780.c
+@@ -323,8 +323,8 @@ static int hd44780_remove(struct platform_device *pdev)
+ {
+ 	struct charlcd *lcd = platform_get_drvdata(pdev);
+ 
+-	kfree(lcd->drvdata);
+ 	charlcd_unregister(lcd);
++	kfree(lcd->drvdata);
+ 
+ 	kfree(lcd);
+ 	return 0;
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 592b3955abe22..a421da0c9c012 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -560,7 +560,8 @@ re_probe:
+ 			goto probe_failed;
+ 	}
+ 
+-	if (driver_sysfs_add(dev)) {
++	ret = driver_sysfs_add(dev);
++	if (ret) {
+ 		pr_err("%s: driver_sysfs_add(%s) failed\n",
+ 		       __func__, dev_name(dev));
+ 		goto probe_failed;
+@@ -582,15 +583,18 @@ re_probe:
+ 			goto probe_failed;
+ 	}
+ 
+-	if (device_add_groups(dev, drv->dev_groups)) {
++	ret = device_add_groups(dev, drv->dev_groups);
++	if (ret) {
+ 		dev_err(dev, "device_add_groups() failed\n");
+ 		goto dev_groups_failed;
+ 	}
+ 
+-	if (dev_has_sync_state(dev) &&
+-	    device_create_file(dev, &dev_attr_state_synced)) {
+-		dev_err(dev, "state_synced sysfs add failed\n");
+-		goto dev_sysfs_state_synced_failed;
++	if (dev_has_sync_state(dev)) {
++		ret = device_create_file(dev, &dev_attr_state_synced);
++		if (ret) {
++			dev_err(dev, "state_synced sysfs add failed\n");
++			goto dev_sysfs_state_synced_failed;
++		}
+ 	}
+ 
+ 	if (test_remove) {
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 68c549d712304..bdbedc6660a87 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -165,7 +165,7 @@ static inline int fw_state_wait(struct fw_priv *fw_priv)
+ 	return __fw_state_wait_common(fw_priv, MAX_SCHEDULE_TIMEOUT);
+ }
+ 
+-static int fw_cache_piggyback_on_request(const char *name);
++static void fw_cache_piggyback_on_request(struct fw_priv *fw_priv);
+ 
+ static struct fw_priv *__allocate_fw_priv(const char *fw_name,
+ 					  struct firmware_cache *fwc,
+@@ -707,10 +707,8 @@ int assign_fw(struct firmware *fw, struct device *device)
+ 	 * on request firmware.
+ 	 */
+ 	if (!(fw_priv->opt_flags & FW_OPT_NOCACHE) &&
+-	    fw_priv->fwc->state == FW_LOADER_START_CACHE) {
+-		if (fw_cache_piggyback_on_request(fw_priv->fw_name))
+-			kref_get(&fw_priv->ref);
+-	}
++	    fw_priv->fwc->state == FW_LOADER_START_CACHE)
++		fw_cache_piggyback_on_request(fw_priv);
+ 
+ 	/* pass the pages buffer to driver at the last minute */
+ 	fw_set_page_data(fw_priv, fw);
+@@ -1259,11 +1257,11 @@ static int __fw_entry_found(const char *name)
+ 	return 0;
+ }
+ 
+-static int fw_cache_piggyback_on_request(const char *name)
++static void fw_cache_piggyback_on_request(struct fw_priv *fw_priv)
+ {
+-	struct firmware_cache *fwc = &fw_cache;
++	const char *name = fw_priv->fw_name;
++	struct firmware_cache *fwc = fw_priv->fwc;
+ 	struct fw_cache_entry *fce;
+-	int ret = 0;
+ 
+ 	spin_lock(&fwc->name_lock);
+ 	if (__fw_entry_found(name))
+@@ -1271,13 +1269,12 @@ static int fw_cache_piggyback_on_request(const char *name)
+ 
+ 	fce = alloc_fw_cache_entry(name);
+ 	if (fce) {
+-		ret = 1;
+ 		list_add(&fce->list, &fwc->fw_names);
++		kref_get(&fw_priv->ref);
+ 		pr_debug("%s: fw: %s\n", __func__, name);
+ 	}
+ found:
+ 	spin_unlock(&fwc->name_lock);
+-	return ret;
+ }
+ 
+ static void free_fw_cache_entry(struct fw_cache_entry *fce)
+@@ -1508,9 +1505,8 @@ static inline void unregister_fw_pm_ops(void)
+ 	unregister_pm_notifier(&fw_cache.pm_notify);
+ }
+ #else
+-static int fw_cache_piggyback_on_request(const char *name)
++static void fw_cache_piggyback_on_request(struct fw_priv *fw_priv)
+ {
+-	return 0;
+ }
+ static inline int register_fw_pm_ops(void)
+ {
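
A note on the firmware-cache refactor above: moving the kref_get() into
fw_cache_piggyback_on_request() takes the extra reference under the same
fwc->name_lock that publishes the cache entry, instead of leaving it to the
caller after the lock has been dropped (the fwc is also now taken from
fw_priv->fwc rather than the global). A minimal sketch of that pattern,
with illustrative names (my_cache, my_obj, my_cache_contains):

#include <linux/kref.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct my_obj {
	struct kref ref;
	struct list_head node;
};

struct my_cache {
	spinlock_t lock;
	struct list_head entries;
};

static bool my_cache_contains(struct my_cache *c, struct my_obj *obj);

static void my_cache_add(struct my_cache *c, struct my_obj *obj)
{
	spin_lock(&c->lock);
	if (!my_cache_contains(c, obj)) {
		list_add(&obj->node, &c->entries);
		kref_get(&obj->ref);	/* ref taken while the entry is published */
	}
	spin_unlock(&c->lock);
}
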
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 297e95be25b3b..cf1dca0cde2c1 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1652,7 +1652,7 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
+ 			if (ret) {
+ 				dev_err(map->dev,
+ 					"Error in caching of register: %x ret: %d\n",
+-					reg + i, ret);
++					reg + regmap_get_offset(map, i), ret);
+ 				return ret;
+ 			}
+ 		}
+diff --git a/drivers/bcma/main.c b/drivers/bcma/main.c
+index 6535614a7dc13..1df2b5801c3bc 100644
+--- a/drivers/bcma/main.c
++++ b/drivers/bcma/main.c
+@@ -236,6 +236,7 @@ EXPORT_SYMBOL(bcma_core_irq);
+ 
+ void bcma_prepare_core(struct bcma_bus *bus, struct bcma_device *core)
+ {
++	device_initialize(&core->dev);
+ 	core->dev.release = bcma_release_core_dev;
+ 	core->dev.bus = &bcma_bus_type;
+ 	dev_set_name(&core->dev, "bcma%d:%d", bus->num, core->core_index);
+@@ -277,11 +278,10 @@ static void bcma_register_core(struct bcma_bus *bus, struct bcma_device *core)
+ {
+ 	int err;
+ 
+-	err = device_register(&core->dev);
++	err = device_add(&core->dev);
+ 	if (err) {
+ 		bcma_err(bus, "Could not register dev for core 0x%03X\n",
+ 			 core->id.id);
+-		put_device(&core->dev);
+ 		return;
+ 	}
+ 	core->dev_registered = true;
+@@ -372,7 +372,7 @@ void bcma_unregister_cores(struct bcma_bus *bus)
+ 	/* Now noone uses internally-handled cores, we can free them */
+ 	list_for_each_entry_safe(core, tmp, &bus->cores, list) {
+ 		list_del(&core->list);
+-		kfree(core);
++		put_device(&core->dev);
+ 	}
+ }
+ 
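
The bcma change above splits device_register() into device_initialize() at
prepare time and device_add() at register time. Once a device has been
initialized, put_device() is the only correct way to drop it, and the
release callback frees the memory whether or not device_add() succeeded;
the error branch drops its own put_device() since the core stays on the
bus list and is put exactly once in bcma_unregister_cores(). A sketch of
the lifecycle, with illustrative names:

#include <linux/device.h>
#include <linux/slab.h>

struct my_dev {
	struct device dev;
};

static void my_release(struct device *dev)
{
	kfree(container_of(dev, struct my_dev, dev));
}

static struct my_dev *my_dev_create(void)
{
	struct my_dev *md = kzalloc(sizeof(*md), GFP_KERNEL);

	if (!md)
		return NULL;

	device_initialize(&md->dev);	/* refcounting is live from here */
	md->dev.release = my_release;
	dev_set_name(&md->dev, "mydev0");

	if (device_add(&md->dev)) {
		put_device(&md->dev);	/* never kfree() after initialize */
		return NULL;
	}
	return md;
}
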
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 1061894a55df2..4acf5c6cb80d2 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1369,6 +1369,7 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
+ 		       unsigned int cmd, unsigned long arg)
+ {
+ 	struct nbd_config *config = nbd->config;
++	loff_t bytesize;
+ 
+ 	switch (cmd) {
+ 	case NBD_DISCONNECT:
+@@ -1383,8 +1384,9 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
+ 	case NBD_SET_SIZE:
+ 		return nbd_set_size(nbd, arg, config->blksize);
+ 	case NBD_SET_SIZE_BLOCKS:
+-		return nbd_set_size(nbd, arg * config->blksize,
+-				    config->blksize);
++		if (check_mul_overflow((loff_t)arg, config->blksize, &bytesize))
++			return -EINVAL;
++		return nbd_set_size(nbd, bytesize, config->blksize);
+ 	case NBD_SET_TIMEOUT:
+ 		nbd_set_cmd_timeout(nbd, arg);
+ 		return 0;
+@@ -1715,7 +1717,17 @@ static int nbd_dev_add(int index)
+ 	refcount_set(&nbd->refs, 1);
+ 	INIT_LIST_HEAD(&nbd->list);
+ 	disk->major = NBD_MAJOR;
++
++	/* A too-large first_minor can cause duplicate creation of
++	 * sysfs files/links, since first_minor will be truncated to
++	 * a byte in __device_add_disk().
++	 */
+ 	disk->first_minor = index << part_shift;
++	if (disk->first_minor > 0xff) {
++		err = -EINVAL;
++		goto out_free_idr;
++	}
++
+ 	disk->fops = &nbd_fops;
+ 	disk->private_data = nbd;
+ 	sprintf(disk->disk_name, "nbd%d", index);
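
The NBD_SET_SIZE_BLOCKS fix above replaces an unchecked
`arg * config->blksize` with check_mul_overflow() from <linux/overflow.h>,
which returns true when the product would wrap; the result is only valid
when it returns false. A sketch of the idiom (the helper name is
illustrative):

#include <linux/errno.h>
#include <linux/overflow.h>
#include <linux/types.h>

/* Convert a userspace block count to bytes, rejecting products that
 * cannot be represented in loff_t. */
static int blocks_to_bytes(unsigned long arg, loff_t blksize, loff_t *out)
{
	if (check_mul_overflow((loff_t)arg, blksize, out))
		return -EINVAL;
	return 0;
}
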
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 9122f9cc97cbe..ae0cf5e715842 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2912,10 +2912,11 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
+ 	/* Read the Intel supported features and if new exception formats
+ 	 * supported, need to load the additional DDC config to enable.
+ 	 */
+-	btintel_read_debug_features(hdev, &features);
+-
+-	/* Set DDC mask for available debug features */
+-	btintel_set_debug_features(hdev, &features);
++	err = btintel_read_debug_features(hdev, &features);
++	if (!err) {
++		/* Set DDC mask for available debug features */
++		btintel_set_debug_features(hdev, &features);
++	}
+ 
+ 	/* Read the Intel version information after loading the FW  */
+ 	err = btintel_read_version(hdev, &ver);
+@@ -3008,10 +3009,11 @@ static int btusb_setup_intel_newgen(struct hci_dev *hdev)
+ 	/* Read the Intel supported features and if new exception formats
+ 	 * supported, need to load the additional DDC config to enable.
+ 	 */
+-	btintel_read_debug_features(hdev, &features);
+-
+-	/* Set DDC mask for available debug features */
+-	btintel_set_debug_features(hdev, &features);
++	err = btintel_read_debug_features(hdev, &features);
++	if (!err) {
++		/* Set DDC mask for available debug features */
++		btintel_set_debug_features(hdev, &features);
++	}
+ 
+ 	/* Read the Intel version information after loading the FW  */
+ 	err = btintel_read_version_tlv(hdev, &version);
+diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
+index 4308f9ca7a43d..d6ba644f6b00a 100644
+--- a/drivers/char/tpm/Kconfig
++++ b/drivers/char/tpm/Kconfig
+@@ -89,7 +89,6 @@ config TCG_TIS_SYNQUACER
+ config TCG_TIS_I2C_CR50
+ 	tristate "TPM Interface Specification 2.0 Interface (I2C - CR50)"
+ 	depends on I2C
+-	select TCG_CR50
+ 	help
+ 	  This is a driver for the Google cr50 I2C TPM interface which is a
+ 	  custom microcontroller and requires a custom i2c protocol interface
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 903604769de99..3af4c07a9342f 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -106,17 +106,12 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+ 	struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
+ 	u16 len;
+-	int sig;
+ 
+ 	if (!ibmvtpm->rtce_buf) {
+ 		dev_err(ibmvtpm->dev, "ibmvtpm device is not ready\n");
+ 		return 0;
+ 	}
+ 
+-	sig = wait_event_interruptible(ibmvtpm->wq, !ibmvtpm->tpm_processing_cmd);
+-	if (sig)
+-		return -EINTR;
+-
+ 	len = ibmvtpm->res_len;
+ 
+ 	if (count < len) {
+@@ -237,7 +232,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ 	 * set the processing flag before the Hcall, since we may get the
+ 	 * result (interrupt) before even being able to check rc.
+ 	 */
+-	ibmvtpm->tpm_processing_cmd = true;
++	ibmvtpm->tpm_processing_cmd = 1;
+ 
+ again:
+ 	rc = ibmvtpm_send_crq(ibmvtpm->vdev,
+@@ -255,7 +250,7 @@ again:
+ 			goto again;
+ 		}
+ 		dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc);
+-		ibmvtpm->tpm_processing_cmd = false;
++		ibmvtpm->tpm_processing_cmd = 0;
+ 	}
+ 
+ 	spin_unlock(&ibmvtpm->rtce_lock);
+@@ -269,7 +264,9 @@ static void tpm_ibmvtpm_cancel(struct tpm_chip *chip)
+ 
+ static u8 tpm_ibmvtpm_status(struct tpm_chip *chip)
+ {
+-	return 0;
++	struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
++
++	return ibmvtpm->tpm_processing_cmd;
+ }
+ 
+ /**
+@@ -457,7 +454,7 @@ static const struct tpm_class_ops tpm_ibmvtpm = {
+ 	.send = tpm_ibmvtpm_send,
+ 	.cancel = tpm_ibmvtpm_cancel,
+ 	.status = tpm_ibmvtpm_status,
+-	.req_complete_mask = 0,
++	.req_complete_mask = 1,
+ 	.req_complete_val = 0,
+ 	.req_canceled = tpm_ibmvtpm_req_canceled,
+ };
+@@ -550,7 +547,7 @@ static void ibmvtpm_crq_process(struct ibmvtpm_crq *crq,
+ 		case VTPM_TPM_COMMAND_RES:
+ 			/* len of the data in rtce buffer */
+ 			ibmvtpm->res_len = be16_to_cpu(crq->len);
+-			ibmvtpm->tpm_processing_cmd = false;
++			ibmvtpm->tpm_processing_cmd = 0;
+ 			wake_up_interruptible(&ibmvtpm->wq);
+ 			return;
+ 		default:
+@@ -688,8 +685,15 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
+ 		goto init_irq_cleanup;
+ 	}
+ 
+-	if (!strcmp(id->compat, "IBM,vtpm20")) {
++
++	if (!strcmp(id->compat, "IBM,vtpm20"))
+ 		chip->flags |= TPM_CHIP_FLAG_TPM2;
++
++	rc = tpm_get_timeouts(chip);
++	if (rc)
++		goto init_irq_cleanup;
++
++	if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+ 		rc = tpm2_get_cc_attrs_tbl(chip);
+ 		if (rc)
+ 			goto init_irq_cleanup;
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.h b/drivers/char/tpm/tpm_ibmvtpm.h
+index b92aa7d3e93e7..51198b137461e 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.h
++++ b/drivers/char/tpm/tpm_ibmvtpm.h
+@@ -41,7 +41,7 @@ struct ibmvtpm_dev {
+ 	wait_queue_head_t wq;
+ 	u16 res_len;
+ 	u32 vtpm_version;
+-	bool tpm_processing_cmd;
++	u8 tpm_processing_cmd;
+ };
+ 
+ #define CRQ_RES_BUF_SIZE	PAGE_SIZE
+diff --git a/drivers/clk/mvebu/kirkwood.c b/drivers/clk/mvebu/kirkwood.c
+index 47680237d0beb..8bc893df47364 100644
+--- a/drivers/clk/mvebu/kirkwood.c
++++ b/drivers/clk/mvebu/kirkwood.c
+@@ -265,6 +265,7 @@ static const char *powersave_parents[] = {
+ static const struct clk_muxing_soc_desc kirkwood_mux_desc[] __initconst = {
+ 	{ "powersave", powersave_parents, ARRAY_SIZE(powersave_parents),
+ 		11, 1, 0 },
++	{ }
+ };
+ 
+ static struct clk *clk_muxing_get_src(
+diff --git a/drivers/clocksource/sh_cmt.c b/drivers/clocksource/sh_cmt.c
+index d7ed99f0001f8..dd0956ad969c1 100644
+--- a/drivers/clocksource/sh_cmt.c
++++ b/drivers/clocksource/sh_cmt.c
+@@ -579,7 +579,8 @@ static int sh_cmt_start(struct sh_cmt_channel *ch, unsigned long flag)
+ 	ch->flags |= flag;
+ 
+ 	/* setup timeout if no clockevent */
+-	if ((flag == FLAG_CLOCKSOURCE) && (!(ch->flags & FLAG_CLOCKEVENT)))
++	if (ch->cmt->num_channels == 1 &&
++	    flag == FLAG_CLOCKSOURCE && (!(ch->flags & FLAG_CLOCKEVENT)))
+ 		__sh_cmt_set_next(ch, ch->max_match_value);
+  out:
+ 	raw_spin_unlock_irqrestore(&ch->lock, flags);
+@@ -621,20 +622,25 @@ static struct sh_cmt_channel *cs_to_sh_cmt(struct clocksource *cs)
+ static u64 sh_cmt_clocksource_read(struct clocksource *cs)
+ {
+ 	struct sh_cmt_channel *ch = cs_to_sh_cmt(cs);
+-	unsigned long flags;
+ 	u32 has_wrapped;
+-	u64 value;
+-	u32 raw;
+ 
+-	raw_spin_lock_irqsave(&ch->lock, flags);
+-	value = ch->total_cycles;
+-	raw = sh_cmt_get_counter(ch, &has_wrapped);
++	if (ch->cmt->num_channels == 1) {
++		unsigned long flags;
++		u64 value;
++		u32 raw;
+ 
+-	if (unlikely(has_wrapped))
+-		raw += ch->match_value + 1;
+-	raw_spin_unlock_irqrestore(&ch->lock, flags);
++		raw_spin_lock_irqsave(&ch->lock, flags);
++		value = ch->total_cycles;
++		raw = sh_cmt_get_counter(ch, &has_wrapped);
++
++		if (unlikely(has_wrapped))
++			raw += ch->match_value + 1;
++		raw_spin_unlock_irqrestore(&ch->lock, flags);
++
++		return value + raw;
++	}
+ 
+-	return value + raw;
++	return sh_cmt_get_counter(ch, &has_wrapped);
+ }
+ 
+ static int sh_cmt_clocksource_enable(struct clocksource *cs)
+@@ -697,7 +703,7 @@ static int sh_cmt_register_clocksource(struct sh_cmt_channel *ch,
+ 	cs->disable = sh_cmt_clocksource_disable;
+ 	cs->suspend = sh_cmt_clocksource_suspend;
+ 	cs->resume = sh_cmt_clocksource_resume;
+-	cs->mask = CLOCKSOURCE_MASK(sizeof(u64) * 8);
++	cs->mask = CLOCKSOURCE_MASK(ch->cmt->info->width);
+ 	cs->flags = CLOCK_SOURCE_IS_CONTINUOUS;
+ 
+ 	dev_info(&ch->cmt->pdev->dev, "ch%u: used as clock source\n",
+diff --git a/drivers/counter/104-quad-8.c b/drivers/counter/104-quad-8.c
+index 9691f8612be87..8de776f3b142a 100644
+--- a/drivers/counter/104-quad-8.c
++++ b/drivers/counter/104-quad-8.c
+@@ -715,12 +715,13 @@ static ssize_t quad8_count_ceiling_write(struct counter_device *counter,
+ 	case 1:
+ 	case 3:
+ 		quad8_preset_register_set(priv, count->id, ceiling);
+-		break;
++		mutex_unlock(&priv->lock);
++		return len;
+ 	}
+ 
+ 	mutex_unlock(&priv->lock);
+ 
+-	return len;
++	return -EINVAL;
+ }
+ 
+ static ssize_t quad8_count_preset_enable_read(struct counter_device *counter,
+diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h
+index dfdce2f21e658..1aeb53f28eddb 100644
+--- a/drivers/crypto/hisilicon/sec2/sec.h
++++ b/drivers/crypto/hisilicon/sec2/sec.h
+@@ -140,11 +140,6 @@ struct sec_ctx {
+ 	struct device *dev;
+ };
+ 
+-enum sec_endian {
+-	SEC_LE = 0,
+-	SEC_32BE,
+-	SEC_64BE
+-};
+ 
+ enum sec_debug_file_index {
+ 	SEC_CLEAR_ENABLE,
+diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
+index 6f0062d4408c3..0305e656b4778 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_main.c
++++ b/drivers/crypto/hisilicon/sec2/sec_main.c
+@@ -304,31 +304,20 @@ static const struct pci_device_id sec_dev_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(pci, sec_dev_ids);
+ 
+-static u8 sec_get_endian(struct hisi_qm *qm)
++static void sec_set_endian(struct hisi_qm *qm)
+ {
+ 	u32 reg;
+ 
+-	/*
+-	 * As for VF, it is a wrong way to get endian setting by
+-	 * reading a register of the engine
+-	 */
+-	if (qm->pdev->is_virtfn) {
+-		dev_err_ratelimited(&qm->pdev->dev,
+-				    "cannot access a register in VF!\n");
+-		return SEC_LE;
+-	}
+ 	reg = readl_relaxed(qm->io_base + SEC_CONTROL_REG);
+-	/* BD little endian mode */
+-	if (!(reg & BIT(0)))
+-		return SEC_LE;
++	reg &= ~(BIT(1) | BIT(0));
++	if (!IS_ENABLED(CONFIG_64BIT))
++		reg |= BIT(1);
+ 
+-	/* BD 32-bits big endian mode */
+-	else if (!(reg & BIT(1)))
+-		return SEC_32BE;
+ 
+-	/* BD 64-bits big endian mode */
+-	else
+-		return SEC_64BE;
++	if (!IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
++		reg |= BIT(0);
++
++	writel_relaxed(reg, qm->io_base + SEC_CONTROL_REG);
+ }
+ 
+ static int sec_engine_init(struct hisi_qm *qm)
+@@ -382,9 +371,7 @@ static int sec_engine_init(struct hisi_qm *qm)
+ 		       qm->io_base + SEC_BD_ERR_CHK_EN_REG3);
+ 
+ 	/* config endian */
+-	reg = readl_relaxed(qm->io_base + SEC_CONTROL_REG);
+-	reg |= sec_get_endian(qm);
+-	writel_relaxed(reg, qm->io_base + SEC_CONTROL_REG);
++	sec_set_endian(qm);
+ 
+ 	return 0;
+ }
+@@ -921,7 +908,8 @@ static int sec_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	return 0;
+ 
+ err_alg_unregister:
+-	hisi_qm_alg_unregister(qm, &sec_devices);
++	if (qm->qp_num >= ctx_q_num)
++		hisi_qm_alg_unregister(qm, &sec_devices);
+ err_qm_stop:
+ 	sec_debugfs_exit(qm);
+ 	hisi_qm_stop(qm, QM_NORMAL);
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index d6a7784d29888..f397cc5bf1021 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -170,15 +170,19 @@ static struct dcp *global_sdcp;
+ 
+ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx)
+ {
++	int dma_err;
+ 	struct dcp *sdcp = global_sdcp;
+ 	const int chan = actx->chan;
+ 	uint32_t stat;
+ 	unsigned long ret;
+ 	struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];
+-
+ 	dma_addr_t desc_phys = dma_map_single(sdcp->dev, desc, sizeof(*desc),
+ 					      DMA_TO_DEVICE);
+ 
++	dma_err = dma_mapping_error(sdcp->dev, desc_phys);
++	if (dma_err)
++		return dma_err;
++
+ 	reinit_completion(&sdcp->completion[chan]);
+ 
+ 	/* Clear status register. */
+@@ -216,18 +220,29 @@ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx)
+ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ 			   struct skcipher_request *req, int init)
+ {
++	dma_addr_t key_phys, src_phys, dst_phys;
+ 	struct dcp *sdcp = global_sdcp;
+ 	struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];
+ 	struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+ 	int ret;
+ 
+-	dma_addr_t key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key,
+-					     2 * AES_KEYSIZE_128,
+-					     DMA_TO_DEVICE);
+-	dma_addr_t src_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_in_buf,
+-					     DCP_BUF_SZ, DMA_TO_DEVICE);
+-	dma_addr_t dst_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_out_buf,
+-					     DCP_BUF_SZ, DMA_FROM_DEVICE);
++	key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key,
++				  2 * AES_KEYSIZE_128, DMA_TO_DEVICE);
++	ret = dma_mapping_error(sdcp->dev, key_phys);
++	if (ret)
++		return ret;
++
++	src_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_in_buf,
++				  DCP_BUF_SZ, DMA_TO_DEVICE);
++	ret = dma_mapping_error(sdcp->dev, src_phys);
++	if (ret)
++		goto err_src;
++
++	dst_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_out_buf,
++				  DCP_BUF_SZ, DMA_FROM_DEVICE);
++	ret = dma_mapping_error(sdcp->dev, dst_phys);
++	if (ret)
++		goto err_dst;
+ 
+ 	if (actx->fill % AES_BLOCK_SIZE) {
+ 		dev_err(sdcp->dev, "Invalid block size!\n");
+@@ -265,10 +280,12 @@ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ 	ret = mxs_dcp_start_dma(actx);
+ 
+ aes_done_run:
++	dma_unmap_single(sdcp->dev, dst_phys, DCP_BUF_SZ, DMA_FROM_DEVICE);
++err_dst:
++	dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE);
++err_src:
+ 	dma_unmap_single(sdcp->dev, key_phys, 2 * AES_KEYSIZE_128,
+ 			 DMA_TO_DEVICE);
+-	dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE);
+-	dma_unmap_single(sdcp->dev, dst_phys, DCP_BUF_SZ, DMA_FROM_DEVICE);
+ 
+ 	return ret;
+ }
+@@ -557,6 +574,10 @@ static int mxs_dcp_run_sha(struct ahash_request *req)
+ 	dma_addr_t buf_phys = dma_map_single(sdcp->dev, sdcp->coh->sha_in_buf,
+ 					     DCP_BUF_SZ, DMA_TO_DEVICE);
+ 
++	ret = dma_mapping_error(sdcp->dev, buf_phys);
++	if (ret)
++		return ret;
++
+ 	/* Fill in the DMA descriptor. */
+ 	desc->control0 = MXS_DCP_CONTROL0_DECR_SEMAPHORE |
+ 		    MXS_DCP_CONTROL0_INTERRUPT |
+@@ -589,6 +610,10 @@ static int mxs_dcp_run_sha(struct ahash_request *req)
+ 	if (rctx->fini) {
+ 		digest_phys = dma_map_single(sdcp->dev, sdcp->coh->sha_out_buf,
+ 					     DCP_SHA_PAY_SZ, DMA_FROM_DEVICE);
++		ret = dma_mapping_error(sdcp->dev, digest_phys);
++		if (ret)
++			goto done_run;
++
+ 		desc->control0 |= MXS_DCP_CONTROL0_HASH_TERM;
+ 		desc->payload = digest_phys;
+ 	}
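
The mxs-dcp changes above check every dma_map_single() with
dma_mapping_error() and, on failure, unwind only the mappings that already
succeeded, in reverse order. A condensed sketch of the pattern (buffers,
sizes and the helper name are illustrative):

#include <linux/dma-mapping.h>

static int map_two(struct device *dev, void *a, void *b, size_t len,
		   dma_addr_t *pa, dma_addr_t *pb)
{
	int ret;

	*pa = dma_map_single(dev, a, len, DMA_TO_DEVICE);
	ret = dma_mapping_error(dev, *pa);
	if (ret)
		return ret;

	*pb = dma_map_single(dev, b, len, DMA_FROM_DEVICE);
	ret = dma_mapping_error(dev, *pb);
	if (ret)
		goto err_unmap_a;

	return 0;

err_unmap_a:
	dma_unmap_single(dev, *pa, len, DMA_TO_DEVICE);
	return ret;
}
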
+diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
+index 0dd4c6b157de9..9b968ac4ee7b6 100644
+--- a/drivers/crypto/omap-aes.c
++++ b/drivers/crypto/omap-aes.c
+@@ -1175,9 +1175,9 @@ static int omap_aes_probe(struct platform_device *pdev)
+ 	spin_lock_init(&dd->lock);
+ 
+ 	INIT_LIST_HEAD(&dd->list);
+-	spin_lock(&list_lock);
++	spin_lock_bh(&list_lock);
+ 	list_add_tail(&dd->list, &dev_list);
+-	spin_unlock(&list_lock);
++	spin_unlock_bh(&list_lock);
+ 
+ 	/* Initialize crypto engine */
+ 	dd->engine = crypto_engine_alloc_init(dev, 1);
+@@ -1264,9 +1264,9 @@ static int omap_aes_remove(struct platform_device *pdev)
+ 	if (!dd)
+ 		return -ENODEV;
+ 
+-	spin_lock(&list_lock);
++	spin_lock_bh(&list_lock);
+ 	list_del(&dd->list);
+-	spin_unlock(&list_lock);
++	spin_unlock_bh(&list_lock);
+ 
+ 	for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+ 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) {
+diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c
+index c9d38bcfd1c77..7fdf38e07adf8 100644
+--- a/drivers/crypto/omap-des.c
++++ b/drivers/crypto/omap-des.c
+@@ -1035,9 +1035,9 @@ static int omap_des_probe(struct platform_device *pdev)
+ 
+ 
+ 	INIT_LIST_HEAD(&dd->list);
+-	spin_lock(&list_lock);
++	spin_lock_bh(&list_lock);
+ 	list_add_tail(&dd->list, &dev_list);
+-	spin_unlock(&list_lock);
++	spin_unlock_bh(&list_lock);
+ 
+ 	/* Initialize des crypto engine */
+ 	dd->engine = crypto_engine_alloc_init(dev, 1);
+@@ -1096,9 +1096,9 @@ static int omap_des_remove(struct platform_device *pdev)
+ 	if (!dd)
+ 		return -ENODEV;
+ 
+-	spin_lock(&list_lock);
++	spin_lock_bh(&list_lock);
+ 	list_del(&dd->list);
+-	spin_unlock(&list_lock);
++	spin_unlock_bh(&list_lock);
+ 
+ 	for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+ 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--)
+diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
+index dd53ad9987b0d..63beea7cdba5e 100644
+--- a/drivers/crypto/omap-sham.c
++++ b/drivers/crypto/omap-sham.c
+@@ -1736,7 +1736,7 @@ static void omap_sham_done_task(unsigned long data)
+ 		if (test_and_clear_bit(FLAGS_OUTPUT_READY, &dd->flags))
+ 			goto finish;
+ 	} else if (test_bit(FLAGS_DMA_READY, &dd->flags)) {
+-		if (test_and_clear_bit(FLAGS_DMA_ACTIVE, &dd->flags)) {
++		if (test_bit(FLAGS_DMA_ACTIVE, &dd->flags)) {
+ 			omap_sham_update_dma_stop(dd);
+ 			if (dd->err) {
+ 				err = dd->err;
+@@ -2144,9 +2144,9 @@ static int omap_sham_probe(struct platform_device *pdev)
+ 		(rev & dd->pdata->major_mask) >> dd->pdata->major_shift,
+ 		(rev & dd->pdata->minor_mask) >> dd->pdata->minor_shift);
+ 
+-	spin_lock(&sham.lock);
++	spin_lock_bh(&sham.lock);
+ 	list_add_tail(&dd->list, &sham.dev_list);
+-	spin_unlock(&sham.lock);
++	spin_unlock_bh(&sham.lock);
+ 
+ 	dd->engine = crypto_engine_alloc_init(dev, 1);
+ 	if (!dd->engine) {
+@@ -2194,9 +2194,9 @@ err_algs:
+ err_engine_start:
+ 	crypto_engine_exit(dd->engine);
+ err_engine:
+-	spin_lock(&sham.lock);
++	spin_lock_bh(&sham.lock);
+ 	list_del(&dd->list);
+-	spin_unlock(&sham.lock);
++	spin_unlock_bh(&sham.lock);
+ err_pm:
+ 	pm_runtime_disable(dev);
+ 	if (!dd->polling_mode)
+@@ -2215,9 +2215,9 @@ static int omap_sham_remove(struct platform_device *pdev)
+ 	dd = platform_get_drvdata(pdev);
+ 	if (!dd)
+ 		return -ENODEV;
+-	spin_lock(&sham.lock);
++	spin_lock_bh(&sham.lock);
+ 	list_del(&dd->list);
+-	spin_unlock(&sham.lock);
++	spin_unlock_bh(&sham.lock);
+ 	for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
+ 		for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--) {
+ 			crypto_unregister_ahash(
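
The omap-aes/des/sham conversions above switch the driver-list locks from
spin_lock() to spin_lock_bh(), presumably because the lists are also
touched from softirq (tasklet) context — the usual reason for the _bh
variants: a softirq arriving on the same CPU while the plain lock is held
would spin on it forever. A sketch, assuming the same single-list shape:

#include <linux/list.h>
#include <linux/spinlock.h>

static LIST_HEAD(dev_list);
static DEFINE_SPINLOCK(list_lock);

/* Process context: _bh keeps softirqs off this CPU while we hold the
 * lock, since the list may also be read from tasklet context. */
static void my_list_add(struct list_head *entry)
{
	spin_lock_bh(&list_lock);
	list_add_tail(entry, &dev_list);
	spin_unlock_bh(&list_lock);
}
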
+diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
+index 15f6b9bdfb221..ddf42fb326251 100644
+--- a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
++++ b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
+@@ -81,10 +81,10 @@ void adf_init_hw_data_c3xxxiov(struct adf_hw_device_data *hw_data)
+ 	hw_data->enable_error_correction = adf_vf_void_noop;
+ 	hw_data->init_admin_comms = adf_vf_int_noop;
+ 	hw_data->exit_admin_comms = adf_vf_void_noop;
+-	hw_data->send_admin_init = adf_vf2pf_init;
++	hw_data->send_admin_init = adf_vf2pf_notify_init;
+ 	hw_data->init_arb = adf_vf_int_noop;
+ 	hw_data->exit_arb = adf_vf_void_noop;
+-	hw_data->disable_iov = adf_vf2pf_shutdown;
++	hw_data->disable_iov = adf_vf2pf_notify_shutdown;
+ 	hw_data->get_accel_mask = get_accel_mask;
+ 	hw_data->get_ae_mask = get_ae_mask;
+ 	hw_data->get_num_accels = get_num_accels;
+diff --git a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
+index d231583428c91..7e202ef925231 100644
+--- a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
++++ b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
+@@ -81,10 +81,10 @@ void adf_init_hw_data_c62xiov(struct adf_hw_device_data *hw_data)
+ 	hw_data->enable_error_correction = adf_vf_void_noop;
+ 	hw_data->init_admin_comms = adf_vf_int_noop;
+ 	hw_data->exit_admin_comms = adf_vf_void_noop;
+-	hw_data->send_admin_init = adf_vf2pf_init;
++	hw_data->send_admin_init = adf_vf2pf_notify_init;
+ 	hw_data->init_arb = adf_vf_int_noop;
+ 	hw_data->exit_arb = adf_vf_void_noop;
+-	hw_data->disable_iov = adf_vf2pf_shutdown;
++	hw_data->disable_iov = adf_vf2pf_notify_shutdown;
+ 	hw_data->get_accel_mask = get_accel_mask;
+ 	hw_data->get_ae_mask = get_ae_mask;
+ 	hw_data->get_num_accels = get_num_accels;
+diff --git a/drivers/crypto/qat/qat_common/adf_common_drv.h b/drivers/crypto/qat/qat_common/adf_common_drv.h
+index c61476553728d..dd4a811b7e89f 100644
+--- a/drivers/crypto/qat/qat_common/adf_common_drv.h
++++ b/drivers/crypto/qat/qat_common/adf_common_drv.h
+@@ -198,8 +198,8 @@ void adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev,
+ void adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
+ void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
+ 
+-int adf_vf2pf_init(struct adf_accel_dev *accel_dev);
+-void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev);
++int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev);
++void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev);
+ int adf_init_pf_wq(void);
+ void adf_exit_pf_wq(void);
+ int adf_init_vf_wq(void);
+@@ -222,12 +222,12 @@ static inline void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev)
+ {
+ }
+ 
+-static inline int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
++static inline int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev)
+ {
+ 	return 0;
+ }
+ 
+-static inline void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
++static inline void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev)
+ {
+ }
+ 
+diff --git a/drivers/crypto/qat/qat_common/adf_init.c b/drivers/crypto/qat/qat_common/adf_init.c
+index 744c40351428d..02864985dbb04 100644
+--- a/drivers/crypto/qat/qat_common/adf_init.c
++++ b/drivers/crypto/qat/qat_common/adf_init.c
+@@ -61,6 +61,7 @@ int adf_dev_init(struct adf_accel_dev *accel_dev)
+ 	struct service_hndl *service;
+ 	struct list_head *list_itr;
+ 	struct adf_hw_device_data *hw_data = accel_dev->hw_device;
++	int ret;
+ 
+ 	if (!hw_data) {
+ 		dev_err(&GET_DEV(accel_dev),
+@@ -127,9 +128,9 @@ int adf_dev_init(struct adf_accel_dev *accel_dev)
+ 	}
+ 
+ 	hw_data->enable_error_correction(accel_dev);
+-	hw_data->enable_vf2pf_comms(accel_dev);
++	ret = hw_data->enable_vf2pf_comms(accel_dev);
+ 
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(adf_dev_init);
+ 
+diff --git a/drivers/crypto/qat/qat_common/adf_isr.c b/drivers/crypto/qat/qat_common/adf_isr.c
+index e3ad5587be49e..daab02011717d 100644
+--- a/drivers/crypto/qat/qat_common/adf_isr.c
++++ b/drivers/crypto/qat/qat_common/adf_isr.c
+@@ -15,6 +15,8 @@
+ #include "adf_transport_access_macros.h"
+ #include "adf_transport_internal.h"
+ 
++#define ADF_MAX_NUM_VFS	32
++
+ static int adf_enable_msix(struct adf_accel_dev *accel_dev)
+ {
+ 	struct adf_accel_pci *pci_dev_info = &accel_dev->accel_pci_dev;
+@@ -72,7 +74,7 @@ static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr)
+ 		struct adf_bar *pmisc =
+ 			&GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)];
+ 		void __iomem *pmisc_bar_addr = pmisc->virt_addr;
+-		u32 vf_mask;
++		unsigned long vf_mask;
+ 
+ 		/* Get the interrupt sources triggered by VFs */
+ 		vf_mask = ((ADF_CSR_RD(pmisc_bar_addr, ADF_ERRSOU5) &
+@@ -93,8 +95,7 @@ static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr)
+ 			 * unless the VF is malicious and is attempting to
+ 			 * flood the host OS with VF2PF interrupts.
+ 			 */
+-			for_each_set_bit(i, (const unsigned long *)&vf_mask,
+-					 (sizeof(vf_mask) * BITS_PER_BYTE)) {
++			for_each_set_bit(i, &vf_mask, ADF_MAX_NUM_VFS) {
+ 				vf_info = accel_dev->pf.vf_info + i;
+ 
+ 				if (!__ratelimit(&vf_info->vf2pf_ratelimit)) {
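
In the adf_isr change above, vf_mask becomes an unsigned long because
for_each_set_bit() operates on unsigned long bitmaps; casting the address
of a u32 relied on a layout that does not hold on every architecture, and
the walk is now bounded by ADF_MAX_NUM_VFS rather than the full word
width. A sketch (the mask value and handler are hypothetical):

#include <linux/bitops.h>

static void handle_vf_irq(int vf);	/* hypothetical per-VF handler */

static void walk_vf_irqs(void)
{
	unsigned long vf_mask = 0x0a;	/* VFs 1 and 3 pending */
	int i;

	for_each_set_bit(i, &vf_mask, 32)
		handle_vf_irq(i);
}
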
+diff --git a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+index a1b77bd7a8944..efa4bffb4f601 100644
+--- a/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
++++ b/drivers/crypto/qat/qat_common/adf_pf2vf_msg.c
+@@ -186,7 +186,6 @@ int adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr)
+ 
+ 	return ret;
+ }
+-EXPORT_SYMBOL_GPL(adf_iov_putmsg);
+ 
+ void adf_vf2pf_req_hndl(struct adf_accel_vf_info *vf_info)
+ {
+@@ -316,6 +315,8 @@ static int adf_vf2pf_request_version(struct adf_accel_dev *accel_dev)
+ 	msg |= ADF_PFVF_COMPATIBILITY_VERSION << ADF_VF2PF_COMPAT_VER_REQ_SHIFT;
+ 	BUILD_BUG_ON(ADF_PFVF_COMPATIBILITY_VERSION > 255);
+ 
++	reinit_completion(&accel_dev->vf.iov_msg_completion);
++
+ 	/* Send request from VF to PF */
+ 	ret = adf_iov_putmsg(accel_dev, msg, 0);
+ 	if (ret) {
+diff --git a/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c b/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
+index e85bd62d134a4..3e25fac051b25 100644
+--- a/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
++++ b/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
+@@ -5,14 +5,14 @@
+ #include "adf_pf2vf_msg.h"
+ 
+ /**
+- * adf_vf2pf_init() - send init msg to PF
++ * adf_vf2pf_notify_init() - send init msg to PF
+  * @accel_dev:  Pointer to acceleration VF device.
+  *
+  * Function sends an init message from the VF to a PF
+  *
+  * Return: 0 on success, error code otherwise.
+  */
+-int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
++int adf_vf2pf_notify_init(struct adf_accel_dev *accel_dev)
+ {
+ 	u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
+ 		(ADF_VF2PF_MSGTYPE_INIT << ADF_VF2PF_MSGTYPE_SHIFT));
+@@ -25,17 +25,17 @@ int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
+ 	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+ 	return 0;
+ }
+-EXPORT_SYMBOL_GPL(adf_vf2pf_init);
++EXPORT_SYMBOL_GPL(adf_vf2pf_notify_init);
+ 
+ /**
+- * adf_vf2pf_shutdown() - send shutdown msg to PF
++ * adf_vf2pf_notify_shutdown() - send shutdown msg to PF
+  * @accel_dev:  Pointer to acceleration VF device.
+  *
+  * Function sends a shutdown message from the VF to a PF
+  *
+  * Return: void
+  */
+-void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
++void adf_vf2pf_notify_shutdown(struct adf_accel_dev *accel_dev)
+ {
+ 	u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
+ 	    (ADF_VF2PF_MSGTYPE_SHUTDOWN << ADF_VF2PF_MSGTYPE_SHIFT));
+@@ -45,4 +45,4 @@ void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
+ 			dev_err(&GET_DEV(accel_dev),
+ 				"Failed to send Shutdown event to PF\n");
+ }
+-EXPORT_SYMBOL_GPL(adf_vf2pf_shutdown);
++EXPORT_SYMBOL_GPL(adf_vf2pf_notify_shutdown);
+diff --git a/drivers/crypto/qat/qat_common/adf_vf_isr.c b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+index 888388acb6bd3..3e4f64d248f9b 100644
+--- a/drivers/crypto/qat/qat_common/adf_vf_isr.c
++++ b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+@@ -160,6 +160,7 @@ static irqreturn_t adf_isr(int irq, void *privdata)
+ 	struct adf_bar *pmisc =
+ 			&GET_BARS(accel_dev)[hw_data->get_misc_bar_id(hw_data)];
+ 	void __iomem *pmisc_bar_addr = pmisc->virt_addr;
++	bool handled = false;
+ 	u32 v_int;
+ 
+ 	/* Read VF INT source CSR to determine the source of VF interrupt */
+@@ -172,7 +173,7 @@ static irqreturn_t adf_isr(int irq, void *privdata)
+ 
+ 		/* Schedule tasklet to handle interrupt BH */
+ 		tasklet_hi_schedule(&accel_dev->vf.pf2vf_bh_tasklet);
+-		return IRQ_HANDLED;
++		handled = true;
+ 	}
+ 
+ 	/* Check bundle interrupt */
+@@ -184,10 +185,10 @@ static irqreturn_t adf_isr(int irq, void *privdata)
+ 		csr_ops->write_csr_int_flag_and_col(bank->csr_addr,
+ 						    bank->bank_number, 0);
+ 		tasklet_hi_schedule(&bank->resp_handler);
+-		return IRQ_HANDLED;
++		handled = true;
+ 	}
+ 
+-	return IRQ_NONE;
++	return handled ? IRQ_HANDLED : IRQ_NONE;
+ }
+ 
+ static int adf_request_msi_irq(struct adf_accel_dev *accel_dev)
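
The adf_vf_isr rework above stops returning IRQ_HANDLED after the first
source it services: when one interrupt line multiplexes several sources,
each must be checked on every invocation, with the return value reporting
whether any work was done at all. A sketch (the source_*/ack_* helpers
are hypothetical):

#include <linux/interrupt.h>

static bool source_a_pending(void *data);	/* hypothetical */
static bool source_b_pending(void *data);	/* hypothetical */
static void ack_source_a(void *data);
static void ack_source_b(void *data);

static irqreturn_t my_isr(int irq, void *data)
{
	bool handled = false;

	if (source_a_pending(data)) {
		ack_source_a(data);
		handled = true;
	}
	if (source_b_pending(data)) {
		ack_source_b(data);
		handled = true;
	}
	return handled ? IRQ_HANDLED : IRQ_NONE;
}
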
+diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
+index f14fb82ed6dfc..744734caaf7b7 100644
+--- a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
++++ b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
+@@ -81,10 +81,10 @@ void adf_init_hw_data_dh895xcciov(struct adf_hw_device_data *hw_data)
+ 	hw_data->enable_error_correction = adf_vf_void_noop;
+ 	hw_data->init_admin_comms = adf_vf_int_noop;
+ 	hw_data->exit_admin_comms = adf_vf_void_noop;
+-	hw_data->send_admin_init = adf_vf2pf_init;
++	hw_data->send_admin_init = adf_vf2pf_notify_init;
+ 	hw_data->init_arb = adf_vf_int_noop;
+ 	hw_data->exit_arb = adf_vf_void_noop;
+-	hw_data->disable_iov = adf_vf2pf_shutdown;
++	hw_data->disable_iov = adf_vf2pf_notify_shutdown;
+ 	hw_data->get_accel_mask = get_accel_mask;
+ 	hw_data->get_ae_mask = get_ae_mask;
+ 	hw_data->get_num_accels = get_num_accels;
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index 37b4e875420e4..1cea5d8fa4349 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -26,8 +26,8 @@
+ 	pci_read_config_dword((d)->uracu, 0xd8 + (i) * 4, &(reg))
+ #define I10NM_GET_DIMMMTR(m, i, j)	\
+ 	readl((m)->mbase + 0x2080c + (i) * (m)->chan_mmio_sz + (j) * 4)
+-#define I10NM_GET_MCDDRTCFG(m, i, j)	\
+-	readl((m)->mbase + 0x20970 + (i) * (m)->chan_mmio_sz + (j) * 4)
++#define I10NM_GET_MCDDRTCFG(m, i)	\
++	readl((m)->mbase + 0x20970 + (i) * (m)->chan_mmio_sz)
+ #define I10NM_GET_MCMTR(m, i)		\
+ 	readl((m)->mbase + 0x20ef8 + (i) * (m)->chan_mmio_sz)
+ #define I10NM_GET_AMAP(m, i)		\
+@@ -185,10 +185,10 @@ static int i10nm_get_dimm_config(struct mem_ctl_info *mci,
+ 
+ 		ndimms = 0;
+ 		amap = I10NM_GET_AMAP(imc, i);
++		mcddrtcfg = I10NM_GET_MCDDRTCFG(imc, i);
+ 		for (j = 0; j < I10NM_NUM_DIMMS; j++) {
+ 			dimm = edac_get_dimm(mci, i, j, 0);
+ 			mtr = I10NM_GET_DIMMMTR(imc, i, j);
+-			mcddrtcfg = I10NM_GET_MCDDRTCFG(imc, i, j);
+ 			edac_dbg(1, "dimmmtr 0x%x mcddrtcfg 0x%x (mc%d ch%d dimm%d)\n",
+ 				 mtr, mcddrtcfg, imc->mc, i, j);
+ 
+diff --git a/drivers/edac/mce_amd.c b/drivers/edac/mce_amd.c
+index 5dd905a3f30ca..1a1629166aa30 100644
+--- a/drivers/edac/mce_amd.c
++++ b/drivers/edac/mce_amd.c
+@@ -1176,6 +1176,9 @@ static int __init mce_amd_init(void)
+ 	    c->x86_vendor != X86_VENDOR_HYGON)
+ 		return -ENODEV;
+ 
++	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return -ENODEV;
++
+ 	if (boot_cpu_has(X86_FEATURE_SMCA)) {
+ 		xec_mask = 0x3f;
+ 		goto out;
+diff --git a/drivers/firmware/raspberrypi.c b/drivers/firmware/raspberrypi.c
+index 250e016807422..4b8978b254f9a 100644
+--- a/drivers/firmware/raspberrypi.c
++++ b/drivers/firmware/raspberrypi.c
+@@ -329,12 +329,18 @@ struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node)
+ 
+ 	fw = platform_get_drvdata(pdev);
+ 	if (!fw)
+-		return NULL;
++		goto err_put_device;
+ 
+ 	if (!kref_get_unless_zero(&fw->consumers))
+-		return NULL;
++		goto err_put_device;
++
++	put_device(&pdev->dev);
+ 
+ 	return fw;
++
++err_put_device:
++	put_device(&pdev->dev);
++	return NULL;
+ }
+ EXPORT_SYMBOL_GPL(rpi_firmware_get);
+ 
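
The rpi_firmware_get() fix above makes every exit path drop the device
reference taken by the platform-device lookup earlier in the function
(outside this hunk); the firmware object's own lifetime is covered by the
kref_get_unless_zero() on fw->consumers, so the reference on pdev is no
longer needed once fw is in hand. A reduced sketch (struct my_fw is
illustrative, and the consumer refcount is omitted for brevity):

#include <linux/of.h>
#include <linux/of_platform.h>

struct my_fw;	/* illustrative driver-private type */

static struct my_fw *my_fw_get(struct device_node *np)
{
	struct platform_device *pdev = of_find_device_by_node(np);
	struct my_fw *fw;

	if (!pdev)
		return NULL;

	fw = platform_get_drvdata(pdev);
	/* drop the lookup's reference on every path, success included */
	put_device(&pdev->dev);
	return fw;
}
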
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
+index b8655ff73a658..cc9c9f8b23b2c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
+@@ -160,17 +160,28 @@ static int acp_poweron(struct generic_pm_domain *genpd)
+ 	return 0;
+ }
+ 
+-static struct device *get_mfd_cell_dev(const char *device_name, int r)
++static int acp_genpd_add_device(struct device *dev, void *data)
+ {
+-	char auto_dev_name[25];
+-	struct device *dev;
++	struct generic_pm_domain *gpd = data;
++	int ret;
+ 
+-	snprintf(auto_dev_name, sizeof(auto_dev_name),
+-		 "%s.%d.auto", device_name, r);
+-	dev = bus_find_device_by_name(&platform_bus_type, NULL, auto_dev_name);
+-	dev_info(dev, "device %s added to pm domain\n", auto_dev_name);
++	ret = pm_genpd_add_device(gpd, dev);
++	if (ret)
++		dev_err(dev, "Failed to add dev to genpd %d\n", ret);
+ 
+-	return dev;
++	return ret;
++}
++
++static int acp_genpd_remove_device(struct device *dev, void *data)
++{
++	int ret;
++
++	ret = pm_genpd_remove_device(dev);
++	if (ret)
++		dev_err(dev, "Failed to remove dev from genpd %d\n", ret);
++
++	/* Continue to remove */
++	return 0;
+ }
+ 
+ /**
+@@ -181,11 +192,10 @@ static struct device *get_mfd_cell_dev(const char *device_name, int r)
+  */
+ static int acp_hw_init(void *handle)
+ {
+-	int r, i;
++	int r;
+ 	uint64_t acp_base;
+ 	u32 val = 0;
+ 	u32 count = 0;
+-	struct device *dev;
+ 	struct i2s_platform_data *i2s_pdata = NULL;
+ 
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+@@ -341,15 +351,10 @@ static int acp_hw_init(void *handle)
+ 	if (r)
+ 		goto failure;
+ 
+-	for (i = 0; i < ACP_DEVS ; i++) {
+-		dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
+-		r = pm_genpd_add_device(&adev->acp.acp_genpd->gpd, dev);
+-		if (r) {
+-			dev_err(dev, "Failed to add dev to genpd\n");
+-			goto failure;
+-		}
+-	}
+-
++	r = device_for_each_child(adev->acp.parent, &adev->acp.acp_genpd->gpd,
++				  acp_genpd_add_device);
++	if (r)
++		goto failure;
+ 
+ 	/* Assert Soft reset of ACP */
+ 	val = cgs_read_register(adev->acp.cgs_device, mmACP_SOFT_RESET);
+@@ -410,10 +415,8 @@ failure:
+  */
+ static int acp_hw_fini(void *handle)
+ {
+-	int i, ret;
+ 	u32 val = 0;
+ 	u32 count = 0;
+-	struct device *dev;
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ 
+ 	/* return early if no ACP */
+@@ -458,13 +461,8 @@ static int acp_hw_fini(void *handle)
+ 		udelay(100);
+ 	}
+ 
+-	for (i = 0; i < ACP_DEVS ; i++) {
+-		dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
+-		ret = pm_genpd_remove_device(dev);
+-		/* If removal fails, dont giveup and try rest */
+-		if (ret)
+-			dev_err(dev, "remove dev from genpd failed\n");
+-	}
++	device_for_each_child(adev->acp.parent, NULL,
++			      acp_genpd_remove_device);
+ 
+ 	mfd_remove_devices(adev->acp.parent);
+ 	kfree(adev->acp.acp_res);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+index dc7d2e71aa6fd..5d1743f3321ef 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+@@ -104,8 +104,8 @@ int smu_cmn_send_msg_without_waiting(struct smu_context *smu,
+ 
+ 	ret = smu_cmn_wait_for_response(smu);
+ 	if (ret != 0x1) {
+-		dev_err(adev->dev, "Msg issuing pre-check failed and "
+-		       "SMU may be not in the right state!\n");
++		dev_err(adev->dev, "Msg issuing pre-check failed (0x%x) and "
++		       "SMU may not be in the right state!\n", ret);
+ 		if (ret != -ETIME)
+ 			ret = -EIO;
+ 		return ret;
+diff --git a/drivers/gpu/drm/drm_of.c b/drivers/gpu/drm/drm_of.c
+index ca04c34e82518..997b8827fed27 100644
+--- a/drivers/gpu/drm/drm_of.c
++++ b/drivers/gpu/drm/drm_of.c
+@@ -315,7 +315,7 @@ static int drm_of_lvds_get_remote_pixels_type(
+ 
+ 		remote_port = of_graph_get_remote_port(endpoint);
+ 		if (!remote_port) {
+-			of_node_put(remote_port);
++			of_node_put(endpoint);
+ 			return -EPIPE;
+ 		}
+ 
+@@ -331,8 +331,10 @@ static int drm_of_lvds_get_remote_pixels_type(
+ 		 * configurations by passing the endpoints explicitly to
+ 		 * drm_of_lvds_get_dual_link_pixel_order().
+ 		 */
+-		if (!current_pt || pixels_type != current_pt)
++		if (!current_pt || pixels_type != current_pt) {
++			of_node_put(endpoint);
+ 			return -EINVAL;
++		}
+ 	}
+ 
+ 	return pixels_type;
+diff --git a/drivers/gpu/drm/gma500/oaktrail_lvds.c b/drivers/gpu/drm/gma500/oaktrail_lvds.c
+index 432bdcc57ac9e..a1332878857b2 100644
+--- a/drivers/gpu/drm/gma500/oaktrail_lvds.c
++++ b/drivers/gpu/drm/gma500/oaktrail_lvds.c
+@@ -117,7 +117,7 @@ static void oaktrail_lvds_mode_set(struct drm_encoder *encoder,
+ 			continue;
+ 	}
+ 
+-	if (!connector) {
++	if (list_entry_is_head(connector, &mode_config->connector_list, head)) {
+ 		DRM_ERROR("Couldn't find connector when setting mode");
+ 		gma_power_end(dev);
+ 		return;
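
The oaktrail_lvds fix above corrects a classic list-iterator misuse: after
list_for_each_entry() runs to completion without a break, the cursor is
never NULL — it points at the list head re-cast as an entry — and
list_entry_is_head() is the test for that. A sketch (struct item and the
key field are illustrative):

#include <linux/list.h>

struct item {
	struct list_head node;
	int key;
};

static struct item *find_item(struct list_head *head, int key)
{
	struct item *pos;

	list_for_each_entry(pos, head, node)
		if (pos->key == key)
			break;

	/* loop ran off the end: pos is not a real entry */
	if (list_entry_is_head(pos, head, node))
		return NULL;
	return pos;
}
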
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+index 2d4645e01ebf6..e01135b7a404f 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+@@ -345,10 +345,12 @@ static void dpu_hw_ctl_clear_all_blendstages(struct dpu_hw_ctl *ctx)
+ 	int i;
+ 
+ 	for (i = 0; i < ctx->mixer_count; i++) {
+-		DPU_REG_WRITE(c, CTL_LAYER(LM_0 + i), 0);
+-		DPU_REG_WRITE(c, CTL_LAYER_EXT(LM_0 + i), 0);
+-		DPU_REG_WRITE(c, CTL_LAYER_EXT2(LM_0 + i), 0);
+-		DPU_REG_WRITE(c, CTL_LAYER_EXT3(LM_0 + i), 0);
++		enum dpu_lm mixer_id = ctx->mixer_hw_caps[i].id;
++
++		DPU_REG_WRITE(c, CTL_LAYER(mixer_id), 0);
++		DPU_REG_WRITE(c, CTL_LAYER_EXT(mixer_id), 0);
++		DPU_REG_WRITE(c, CTL_LAYER_EXT2(mixer_id), 0);
++		DPU_REG_WRITE(c, CTL_LAYER_EXT3(mixer_id), 0);
+ 	}
+ 
+ 	DPU_REG_WRITE(c, CTL_FETCH_PIPE_ACTIVE, 0);
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+index 4a5b518288b06..0712752742f4f 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+@@ -19,30 +19,12 @@ static int mdp4_hw_init(struct msm_kms *kms)
+ {
+ 	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
+ 	struct drm_device *dev = mdp4_kms->dev;
+-	uint32_t version, major, minor, dmap_cfg, vg_cfg;
++	u32 dmap_cfg, vg_cfg;
+ 	unsigned long clk;
+ 	int ret = 0;
+ 
+ 	pm_runtime_get_sync(dev->dev);
+ 
+-	mdp4_enable(mdp4_kms);
+-	version = mdp4_read(mdp4_kms, REG_MDP4_VERSION);
+-	mdp4_disable(mdp4_kms);
+-
+-	major = FIELD(version, MDP4_VERSION_MAJOR);
+-	minor = FIELD(version, MDP4_VERSION_MINOR);
+-
+-	DBG("found MDP4 version v%d.%d", major, minor);
+-
+-	if (major != 4) {
+-		DRM_DEV_ERROR(dev->dev, "unexpected MDP version: v%d.%d\n",
+-				major, minor);
+-		ret = -ENXIO;
+-		goto out;
+-	}
+-
+-	mdp4_kms->rev = minor;
+-
+ 	if (mdp4_kms->rev > 1) {
+ 		mdp4_write(mdp4_kms, REG_MDP4_CS_CONTROLLER0, 0x0707ffff);
+ 		mdp4_write(mdp4_kms, REG_MDP4_CS_CONTROLLER1, 0x03073f3f);
+@@ -88,7 +70,6 @@ static int mdp4_hw_init(struct msm_kms *kms)
+ 	if (mdp4_kms->rev > 1)
+ 		mdp4_write(mdp4_kms, REG_MDP4_RESET_STATUS, 1);
+ 
+-out:
+ 	pm_runtime_put_sync(dev->dev);
+ 
+ 	return ret;
+@@ -411,6 +392,22 @@ fail:
+ 	return ret;
+ }
+ 
++static void read_mdp_hw_revision(struct mdp4_kms *mdp4_kms,
++				 u32 *major, u32 *minor)
++{
++	struct drm_device *dev = mdp4_kms->dev;
++	u32 version;
++
++	mdp4_enable(mdp4_kms);
++	version = mdp4_read(mdp4_kms, REG_MDP4_VERSION);
++	mdp4_disable(mdp4_kms);
++
++	*major = FIELD(version, MDP4_VERSION_MAJOR);
++	*minor = FIELD(version, MDP4_VERSION_MINOR);
++
++	DRM_DEV_INFO(dev->dev, "MDP4 version v%d.%d", *major, *minor);
++}
++
+ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev->dev);
+@@ -419,6 +416,7 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ 	struct msm_kms *kms = NULL;
+ 	struct msm_gem_address_space *aspace;
+ 	int irq, ret;
++	u32 major, minor;
+ 
+ 	mdp4_kms = kzalloc(sizeof(*mdp4_kms), GFP_KERNEL);
+ 	if (!mdp4_kms) {
+@@ -479,15 +477,6 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ 	if (IS_ERR(mdp4_kms->pclk))
+ 		mdp4_kms->pclk = NULL;
+ 
+-	if (mdp4_kms->rev >= 2) {
+-		mdp4_kms->lut_clk = devm_clk_get(&pdev->dev, "lut_clk");
+-		if (IS_ERR(mdp4_kms->lut_clk)) {
+-			DRM_DEV_ERROR(dev->dev, "failed to get lut_clk\n");
+-			ret = PTR_ERR(mdp4_kms->lut_clk);
+-			goto fail;
+-		}
+-	}
+-
+ 	mdp4_kms->axi_clk = devm_clk_get(&pdev->dev, "bus_clk");
+ 	if (IS_ERR(mdp4_kms->axi_clk)) {
+ 		DRM_DEV_ERROR(dev->dev, "failed to get axi_clk\n");
+@@ -496,8 +485,27 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ 	}
+ 
+ 	clk_set_rate(mdp4_kms->clk, config->max_clk);
+-	if (mdp4_kms->lut_clk)
++
++	read_mdp_hw_revision(mdp4_kms, &major, &minor);
++
++	if (major != 4) {
++		DRM_DEV_ERROR(dev->dev, "unexpected MDP version: v%d.%d\n",
++			      major, minor);
++		ret = -ENXIO;
++		goto fail;
++	}
++
++	mdp4_kms->rev = minor;
++
++	if (mdp4_kms->rev >= 2) {
++		mdp4_kms->lut_clk = devm_clk_get(&pdev->dev, "lut_clk");
++		if (IS_ERR(mdp4_kms->lut_clk)) {
++			DRM_DEV_ERROR(dev->dev, "failed to get lut_clk\n");
++			ret = PTR_ERR(mdp4_kms->lut_clk);
++			goto fail;
++		}
+ 		clk_set_rate(mdp4_kms->lut_clk, config->max_clk);
++	}
+ 
+ 	pm_runtime_enable(dev->dev);
+ 	mdp4_kms->rpm_enabled = true;
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index cdec0a367a2cb..2b1e127390e4e 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -55,7 +55,6 @@ enum {
+ 	EV_HPD_INIT_SETUP,
+ 	EV_HPD_PLUG_INT,
+ 	EV_IRQ_HPD_INT,
+-	EV_HPD_REPLUG_INT,
+ 	EV_HPD_UNPLUG_INT,
+ 	EV_USER_NOTIFICATION,
+ 	EV_CONNECT_PENDING_TIMEOUT,
+@@ -1119,9 +1118,6 @@ static int hpd_event_thread(void *data)
+ 		case EV_IRQ_HPD_INT:
+ 			dp_irq_hpd_handle(dp_priv, todo->data);
+ 			break;
+-		case EV_HPD_REPLUG_INT:
+-			/* do nothing */
+-			break;
+ 		case EV_USER_NOTIFICATION:
+ 			dp_display_send_hpd_notification(dp_priv,
+ 						todo->data);
+@@ -1165,10 +1161,8 @@ static irqreturn_t dp_display_irq_handler(int irq, void *dev_id)
+ 
+ 	if (hpd_isr_status & 0x0F) {
+ 		/* hpd related interrupts */
+-		if (hpd_isr_status & DP_DP_HPD_PLUG_INT_MASK ||
+-			hpd_isr_status & DP_DP_HPD_REPLUG_INT_MASK) {
++		if (hpd_isr_status & DP_DP_HPD_PLUG_INT_MASK)
+ 			dp_add_event(dp, EV_HPD_PLUG_INT, 0, 0);
+-		}
+ 
+ 		if (hpd_isr_status & DP_DP_IRQ_HPD_INT_MASK) {
+ 			/* stop sentinel connect pending checking */
+@@ -1176,8 +1170,10 @@ static irqreturn_t dp_display_irq_handler(int irq, void *dev_id)
+ 			dp_add_event(dp, EV_IRQ_HPD_INT, 0, 0);
+ 		}
+ 
+-		if (hpd_isr_status & DP_DP_HPD_REPLUG_INT_MASK)
+-			dp_add_event(dp, EV_HPD_REPLUG_INT, 0, 0);
++		if (hpd_isr_status & DP_DP_HPD_REPLUG_INT_MASK) {
++			dp_add_event(dp, EV_HPD_UNPLUG_INT, 0, 0);
++			dp_add_event(dp, EV_HPD_PLUG_INT, 0, 3);
++		}
+ 
+ 		if (hpd_isr_status & DP_DP_HPD_UNPLUG_INT_MASK)
+ 			dp_add_event(dp, EV_HPD_UNPLUG_INT, 0, 0);
+@@ -1286,7 +1282,7 @@ static int dp_pm_resume(struct device *dev)
+ 	struct platform_device *pdev = to_platform_device(dev);
+ 	struct msm_dp *dp_display = platform_get_drvdata(pdev);
+ 	struct dp_display_private *dp;
+-	u32 status;
++	int sink_count = 0;
+ 
+ 	dp = container_of(dp_display, struct dp_display_private, dp_display);
+ 
+@@ -1300,14 +1296,25 @@ static int dp_pm_resume(struct device *dev)
+ 
+ 	dp_catalog_ctrl_hpd_config(dp->catalog);
+ 
+-	status = dp_catalog_link_is_connected(dp->catalog);
++	/*
++	 * set sink to normal operation mode -- D0
++	 * before dpcd read
++	 */
++	dp_link_psm_config(dp->link, &dp->panel->link_info, false);
++
++	if (dp_catalog_link_is_connected(dp->catalog)) {
++		sink_count = drm_dp_read_sink_count(dp->aux);
++		if (sink_count < 0)
++			sink_count = 0;
++	}
+ 
++	dp->link->sink_count = sink_count;
+ 	/*
+ 	 * can not declared display is connected unless
+ 	 * HDMI cable is plugged in and sink_count of
+ 	 * dongle become 1
+ 	 */
+-	if (status && dp->link->sink_count)
++	if (dp->link->sink_count)
+ 		dp->dp_display.is_connected = true;
+ 	else
+ 		dp->dp_display.is_connected = false;
+diff --git a/drivers/gpu/drm/msm/dsi/dsi.c b/drivers/gpu/drm/msm/dsi/dsi.c
+index 627048851d99c..7e364b9c9f9e1 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi.c
++++ b/drivers/gpu/drm/msm/dsi/dsi.c
+@@ -26,8 +26,10 @@ static int dsi_get_phy(struct msm_dsi *msm_dsi)
+ 	}
+ 
+ 	phy_pdev = of_find_device_by_node(phy_node);
+-	if (phy_pdev)
++	if (phy_pdev) {
+ 		msm_dsi->phy = platform_get_drvdata(phy_pdev);
++		msm_dsi->phy_dev = &phy_pdev->dev;
++	}
+ 
+ 	of_node_put(phy_node);
+ 
+@@ -36,8 +38,6 @@ static int dsi_get_phy(struct msm_dsi *msm_dsi)
+ 		return -EPROBE_DEFER;
+ 	}
+ 
+-	msm_dsi->phy_dev = get_device(&phy_pdev->dev);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_drv.c b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+index 6da93551e2e5f..c277d3f61a5ef 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_drv.c
++++ b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+@@ -51,6 +51,7 @@ static const struct mxsfb_devdata mxsfb_devdata[] = {
+ 		.hs_wdth_mask	= 0xff,
+ 		.hs_wdth_shift	= 24,
+ 		.has_overlay	= false,
++		.has_ctrl2	= false,
+ 	},
+ 	[MXSFB_V4] = {
+ 		.transfer_count	= LCDC_V4_TRANSFER_COUNT,
+@@ -59,6 +60,7 @@ static const struct mxsfb_devdata mxsfb_devdata[] = {
+ 		.hs_wdth_mask	= 0x3fff,
+ 		.hs_wdth_shift	= 18,
+ 		.has_overlay	= false,
++		.has_ctrl2	= true,
+ 	},
+ 	[MXSFB_V6] = {
+ 		.transfer_count	= LCDC_V4_TRANSFER_COUNT,
+@@ -67,6 +69,7 @@ static const struct mxsfb_devdata mxsfb_devdata[] = {
+ 		.hs_wdth_mask	= 0x3fff,
+ 		.hs_wdth_shift	= 18,
+ 		.has_overlay	= true,
++		.has_ctrl2	= true,
+ 	},
+ };
+ 
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_drv.h b/drivers/gpu/drm/mxsfb/mxsfb_drv.h
+index 399d23e91ed10..7c720e226fdfd 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_drv.h
++++ b/drivers/gpu/drm/mxsfb/mxsfb_drv.h
+@@ -22,6 +22,7 @@ struct mxsfb_devdata {
+ 	unsigned int	hs_wdth_mask;
+ 	unsigned int	hs_wdth_shift;
+ 	bool		has_overlay;
++	bool		has_ctrl2;
+ };
+ 
+ struct mxsfb_drm_private {
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_kms.c b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
+index 300e7bab0f431..54f905ac75c07 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_kms.c
++++ b/drivers/gpu/drm/mxsfb/mxsfb_kms.c
+@@ -107,6 +107,14 @@ static void mxsfb_enable_controller(struct mxsfb_drm_private *mxsfb)
+ 		clk_prepare_enable(mxsfb->clk_disp_axi);
+ 	clk_prepare_enable(mxsfb->clk);
+ 
++	/* Increase number of outstanding requests on all supported IPs */
++	if (mxsfb->devdata->has_ctrl2) {
++		reg = readl(mxsfb->base + LCDC_V4_CTRL2);
++		reg &= ~CTRL2_SET_OUTSTANDING_REQS_MASK;
++		reg |= CTRL2_SET_OUTSTANDING_REQS_16;
++		writel(reg, mxsfb->base + LCDC_V4_CTRL2);
++	}
++
+ 	/* If it was disabled, re-enable the mode again */
+ 	writel(CTRL_DOTCLK_MODE, mxsfb->base + LCDC_CTRL + REG_SET);
+ 
+@@ -115,6 +123,35 @@ static void mxsfb_enable_controller(struct mxsfb_drm_private *mxsfb)
+ 	reg |= VDCTRL4_SYNC_SIGNALS_ON;
+ 	writel(reg, mxsfb->base + LCDC_VDCTRL4);
+ 
++	/*
++	 * Enable recovery on underflow.
++	 *
++	 * There is some sort of corner case behavior of the controller,
++	 * which could rarely be triggered at least on i.MX6SX connected
++	 * to 800x480 DPI panel and i.MX8MM connected to DPI->DSI->LVDS
++	 * bridged 1920x1080 panel (and likely on other setups too), where
++	 * the image on the panel shifts to the right and wraps around.
++	 * This happens either when the controller is enabled on boot or
++	 * even later during run time. The condition does not correct
++	 * itself automatically, i.e. the display image remains shifted.
++	 *
++	 * It seems this problem is known and is due to sporadic underflows
++	 * of the LCDIF FIFO. While the LCDIF IP does have underflow/overflow
++	 * IRQs, neither of the IRQs trigger and neither IRQ status bit is
++	 * asserted when this condition occurs.
++	 *
++	 * All known revisions of the LCDIF IP have CTRL1 RECOVER_ON_UNDERFLOW
++	 * bit, which is described in the reference manual since i.MX23 as
++	 * "
++	 *   Set this bit to enable the LCDIF block to recover in the next
++	 *   field/frame if there was an underflow in the current field/frame.
++	 * "
++	 * Enable this bit to mitigate the sporadic underflows.
++	 */
++	reg = readl(mxsfb->base + LCDC_CTRL1);
++	reg |= CTRL1_RECOVER_ON_UNDERFLOW;
++	writel(reg, mxsfb->base + LCDC_CTRL1);
++
+ 	writel(CTRL_RUN, mxsfb->base + LCDC_CTRL + REG_SET);
+ }
+ 
+@@ -206,6 +243,9 @@ static void mxsfb_crtc_mode_set_nofb(struct mxsfb_drm_private *mxsfb)
+ 
+ 	/* Clear the FIFOs */
+ 	writel(CTRL1_FIFO_CLEAR, mxsfb->base + LCDC_CTRL1 + REG_SET);
++	readl(mxsfb->base + LCDC_CTRL1);
++	writel(CTRL1_FIFO_CLEAR, mxsfb->base + LCDC_CTRL1 + REG_CLR);
++	readl(mxsfb->base + LCDC_CTRL1);
+ 
+ 	if (mxsfb->devdata->has_overlay)
+ 		writel(0, mxsfb->base + LCDC_AS_CTRL);
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_regs.h b/drivers/gpu/drm/mxsfb/mxsfb_regs.h
+index 55d28a27f9124..694fea13e893e 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_regs.h
++++ b/drivers/gpu/drm/mxsfb/mxsfb_regs.h
+@@ -15,6 +15,7 @@
+ #define LCDC_CTRL			0x00
+ #define LCDC_CTRL1			0x10
+ #define LCDC_V3_TRANSFER_COUNT		0x20
++#define LCDC_V4_CTRL2			0x20
+ #define LCDC_V4_TRANSFER_COUNT		0x30
+ #define LCDC_V4_CUR_BUF			0x40
+ #define LCDC_V4_NEXT_BUF		0x50
+@@ -54,12 +55,20 @@
+ #define CTRL_DF24			BIT(1)
+ #define CTRL_RUN			BIT(0)
+ 
++#define CTRL1_RECOVER_ON_UNDERFLOW	BIT(24)
+ #define CTRL1_FIFO_CLEAR		BIT(21)
+ #define CTRL1_SET_BYTE_PACKAGING(x)	(((x) & 0xf) << 16)
+ #define CTRL1_GET_BYTE_PACKAGING(x)	(((x) >> 16) & 0xf)
+ #define CTRL1_CUR_FRAME_DONE_IRQ_EN	BIT(13)
+ #define CTRL1_CUR_FRAME_DONE_IRQ	BIT(9)
+ 
++#define CTRL2_SET_OUTSTANDING_REQS_1	0
++#define CTRL2_SET_OUTSTANDING_REQS_2	(0x1 << 21)
++#define CTRL2_SET_OUTSTANDING_REQS_4	(0x2 << 21)
++#define CTRL2_SET_OUTSTANDING_REQS_8	(0x3 << 21)
++#define CTRL2_SET_OUTSTANDING_REQS_16	(0x4 << 21)
++#define CTRL2_SET_OUTSTANDING_REQS_MASK	(0x7 << 21)
++
+ #define TRANSFER_COUNT_SET_VCOUNT(x)	(((x) & 0xffff) << 16)
+ #define TRANSFER_COUNT_GET_VCOUNT(x)	(((x) >> 16) & 0xffff)
+ #define TRANSFER_COUNT_SET_HCOUNT(x)	((x) & 0xffff)
+diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
+index fbcf5edbe3675..9275cd0b2793e 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_device.c
++++ b/drivers/gpu/drm/panfrost/panfrost_device.c
+@@ -54,7 +54,8 @@ static int panfrost_clk_init(struct panfrost_device *pfdev)
+ 	if (IS_ERR(pfdev->bus_clock)) {
+ 		dev_err(pfdev->dev, "get bus_clock failed %ld\n",
+ 			PTR_ERR(pfdev->bus_clock));
+-		return PTR_ERR(pfdev->bus_clock);
++		err = PTR_ERR(pfdev->bus_clock);
++		goto disable_clock;
+ 	}
+ 
+ 	if (pfdev->bus_clock) {
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.c b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+index bfbff90588cbf..c22551c2facb1 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_drv.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+@@ -556,8 +556,6 @@ static int rcar_du_remove(struct platform_device *pdev)
+ 
+ 	drm_kms_helper_poll_fini(ddev);
+ 
+-	drm_dev_put(ddev);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwmon/Makefile b/drivers/hwmon/Makefile
+index 59e78bc212cf3..ae66c8ce4eef5 100644
+--- a/drivers/hwmon/Makefile
++++ b/drivers/hwmon/Makefile
+@@ -45,7 +45,6 @@ obj-$(CONFIG_SENSORS_ADT7462)	+= adt7462.o
+ obj-$(CONFIG_SENSORS_ADT7470)	+= adt7470.o
+ obj-$(CONFIG_SENSORS_ADT7475)	+= adt7475.o
+ obj-$(CONFIG_SENSORS_AHT10)	+= aht10.o
+-obj-$(CONFIG_SENSORS_AMD_ENERGY) += amd_energy.o
+ obj-$(CONFIG_SENSORS_APPLESMC)	+= applesmc.o
+ obj-$(CONFIG_SENSORS_ARM_SCMI)	+= scmi-hwmon.o
+ obj-$(CONFIG_SENSORS_ARM_SCPI)	+= scpi-hwmon.o
+diff --git a/drivers/hwmon/pmbus/bpa-rs600.c b/drivers/hwmon/pmbus/bpa-rs600.c
+index 2be69fedfa361..be76efe67d83f 100644
+--- a/drivers/hwmon/pmbus/bpa-rs600.c
++++ b/drivers/hwmon/pmbus/bpa-rs600.c
+@@ -12,15 +12,6 @@
+ #include <linux/pmbus.h>
+ #include "pmbus.h"
+ 
+-#define BPARS600_MFR_VIN_MIN	0xa0
+-#define BPARS600_MFR_VIN_MAX	0xa1
+-#define BPARS600_MFR_IIN_MAX	0xa2
+-#define BPARS600_MFR_PIN_MAX	0xa3
+-#define BPARS600_MFR_VOUT_MIN	0xa4
+-#define BPARS600_MFR_VOUT_MAX	0xa5
+-#define BPARS600_MFR_IOUT_MAX	0xa6
+-#define BPARS600_MFR_POUT_MAX	0xa7
+-
+ static int bpa_rs600_read_byte_data(struct i2c_client *client, int page, int reg)
+ {
+ 	int ret;
+@@ -81,29 +72,13 @@ static int bpa_rs600_read_word_data(struct i2c_client *client, int page, int pha
+ 
+ 	switch (reg) {
+ 	case PMBUS_VIN_UV_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_VIN_MIN);
+-		break;
+ 	case PMBUS_VIN_OV_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_VIN_MAX);
+-		break;
+ 	case PMBUS_VOUT_UV_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_VOUT_MIN);
+-		break;
+ 	case PMBUS_VOUT_OV_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_VOUT_MAX);
+-		break;
+ 	case PMBUS_IIN_OC_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_IIN_MAX);
+-		break;
+ 	case PMBUS_IOUT_OC_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_IOUT_MAX);
+-		break;
+ 	case PMBUS_PIN_OP_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_PIN_MAX);
+-		break;
+ 	case PMBUS_POUT_OP_WARN_LIMIT:
+-		ret = pmbus_read_word_data(client, 0, 0xff, BPARS600_MFR_POUT_MAX);
+-		break;
+ 	case PMBUS_VIN_UV_FAULT_LIMIT:
+ 	case PMBUS_VIN_OV_FAULT_LIMIT:
+ 	case PMBUS_VOUT_UV_FAULT_LIMIT:
+diff --git a/drivers/i2c/busses/i2c-highlander.c b/drivers/i2c/busses/i2c-highlander.c
+index 803dad70e2a71..a2add128d0843 100644
+--- a/drivers/i2c/busses/i2c-highlander.c
++++ b/drivers/i2c/busses/i2c-highlander.c
+@@ -379,7 +379,7 @@ static int highlander_i2c_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, dev);
+ 
+ 	dev->irq = platform_get_irq(pdev, 0);
+-	if (iic_force_poll)
++	if (dev->irq < 0 || iic_force_poll)
+ 		dev->irq = 0;
+ 
+ 	if (dev->irq) {
+diff --git a/drivers/i2c/busses/i2c-hix5hd2.c b/drivers/i2c/busses/i2c-hix5hd2.c
+index aa00ba8bcb70f..61ae58f570475 100644
+--- a/drivers/i2c/busses/i2c-hix5hd2.c
++++ b/drivers/i2c/busses/i2c-hix5hd2.c
+@@ -413,7 +413,7 @@ static int hix5hd2_i2c_probe(struct platform_device *pdev)
+ 		return PTR_ERR(priv->regs);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0)
++	if (irq < 0)
+ 		return irq;
+ 
+ 	priv->clk = devm_clk_get(&pdev->dev, NULL);
+diff --git a/drivers/i2c/busses/i2c-iop3xx.c b/drivers/i2c/busses/i2c-iop3xx.c
+index cfecaf18ccbb7..4a6ff54d87fe8 100644
+--- a/drivers/i2c/busses/i2c-iop3xx.c
++++ b/drivers/i2c/busses/i2c-iop3xx.c
+@@ -469,16 +469,14 @@ iop3xx_i2c_probe(struct platform_device *pdev)
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0) {
+-		ret = -ENXIO;
++		ret = irq;
+ 		goto unmap;
+ 	}
+ 	ret = request_irq(irq, iop3xx_i2c_irq_handler, 0,
+ 				pdev->name, adapter_data);
+ 
+-	if (ret) {
+-		ret = -EIO;
++	if (ret)
+ 		goto unmap;
+-	}
+ 
+ 	memcpy(new_adapter->name, pdev->name, strlen(pdev->name));
+ 	new_adapter->owner = THIS_MODULE;
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 4e9fb6b44436a..d90d80d046bd7 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -1211,7 +1211,7 @@ static int mtk_i2c_probe(struct platform_device *pdev)
+ 		return PTR_ERR(i2c->pdmabase);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0)
++	if (irq < 0)
+ 		return irq;
+ 
+ 	init_completion(&i2c->msg_complete);
+diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c
+index 4d82761e1585e..b49a1b170bb2f 100644
+--- a/drivers/i2c/busses/i2c-s3c2410.c
++++ b/drivers/i2c/busses/i2c-s3c2410.c
+@@ -1137,7 +1137,7 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
+ 	 */
+ 	if (!(i2c->quirks & QUIRK_POLL)) {
+ 		i2c->irq = ret = platform_get_irq(pdev, 0);
+-		if (ret <= 0) {
++		if (ret < 0) {
+ 			dev_err(&pdev->dev, "cannot find IRQ\n");
+ 			clk_unprepare(i2c->clk);
+ 			return ret;
+diff --git a/drivers/i2c/busses/i2c-synquacer.c b/drivers/i2c/busses/i2c-synquacer.c
+index 31be1811d5e66..e4026c5416b15 100644
+--- a/drivers/i2c/busses/i2c-synquacer.c
++++ b/drivers/i2c/busses/i2c-synquacer.c
+@@ -578,7 +578,7 @@ static int synquacer_i2c_probe(struct platform_device *pdev)
+ 
+ 	i2c->irq = platform_get_irq(pdev, 0);
+ 	if (i2c->irq < 0)
+-		return -ENODEV;
++		return i2c->irq;
+ 
+ 	ret = devm_request_irq(&pdev->dev, i2c->irq, synquacer_i2c_isr,
+ 			       0, dev_name(&pdev->dev), i2c);
+diff --git a/drivers/i2c/busses/i2c-xlp9xx.c b/drivers/i2c/busses/i2c-xlp9xx.c
+index f2241cedf5d3f..6d24dc3855229 100644
+--- a/drivers/i2c/busses/i2c-xlp9xx.c
++++ b/drivers/i2c/busses/i2c-xlp9xx.c
+@@ -517,7 +517,7 @@ static int xlp9xx_i2c_probe(struct platform_device *pdev)
+ 		return PTR_ERR(priv->base);
+ 
+ 	priv->irq = platform_get_irq(pdev, 0);
+-	if (priv->irq <= 0)
++	if (priv->irq < 0)
+ 		return priv->irq;
+ 	/* SMBAlert irq */
+ 	priv->alert_data.irq = platform_get_irq(pdev, 1);
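The run of i2c hunks above converges on one idiom: platform_get_irq() returns either a usable (non-zero) IRQ number or a negative errno, so callers should test for < 0 rather than <= 0 and propagate that errno unchanged instead of remapping it to -ENXIO, -ENODEV or -EIO, which would among other things swallow -EPROBE_DEFER. A minimal probe fragment showing the shape, with example_isr() as a hypothetical handler:

	#include <linux/interrupt.h>
	#include <linux/platform_device.h>

	static irqreturn_t example_isr(int irq, void *data)
	{
		return IRQ_HANDLED;
	}

	static int example_probe(struct platform_device *pdev)
	{
		int irq, ret;

		irq = platform_get_irq(pdev, 0);
		if (irq < 0)
			return irq;	/* already -EPROBE_DEFER, -ENXIO, ... */

		ret = devm_request_irq(&pdev->dev, irq, example_isr, 0,
				       dev_name(&pdev->dev), NULL);
		if (ret)
			return ret;	/* keep the real errno, don't force -EIO */

		return 0;
	}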
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index fd113ddf6e862..8193fa5c3fedf 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -1022,7 +1022,7 @@ static void *mlx5_ib_alloc_xlt(size_t *nents, size_t ent_size, gfp_t gfp_mask)
+ 
+ 	if (size > MLX5_SPARE_UMR_CHUNK) {
+ 		size = MLX5_SPARE_UMR_CHUNK;
+-		*nents = get_order(size) / ent_size;
++		*nents = size / ent_size;
+ 		res = (void *)__get_free_pages(gfp_mask | __GFP_NOWARN,
+ 					       get_order(size));
+ 		if (res)
+diff --git a/drivers/irqchip/irq-apple-aic.c b/drivers/irqchip/irq-apple-aic.c
+index c179e27062fd5..151aab408fa65 100644
+--- a/drivers/irqchip/irq-apple-aic.c
++++ b/drivers/irqchip/irq-apple-aic.c
+@@ -225,7 +225,7 @@ static void aic_irq_eoi(struct irq_data *d)
+ 	 * Reading the interrupt reason automatically acknowledges and masks
+ 	 * the IRQ, so we just unmask it here if needed.
+ 	 */
+-	if (!irqd_irq_disabled(d) && !irqd_irq_masked(d))
++	if (!irqd_irq_masked(d))
+ 		aic_irq_unmask(d);
+ }
+ 
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 66d623f91678a..20a2d606b4c98 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -100,6 +100,27 @@ EXPORT_SYMBOL(gic_pmr_sync);
+ DEFINE_STATIC_KEY_FALSE(gic_nonsecure_priorities);
+ EXPORT_SYMBOL(gic_nonsecure_priorities);
+ 
++/*
++ * When the Non-secure world has access to group 0 interrupts (as a
++ * consequence of SCR_EL3.FIQ == 0), reading the ICC_RPR_EL1 register will
++ * return the Distributor's view of the interrupt priority.
++ *
++ * When GIC security is enabled (GICD_CTLR.DS == 0), the interrupt priority
++ * written by software is moved to the Non-secure range by the Distributor.
++ *
++ * If both are true (which is when gic_nonsecure_priorities gets enabled),
++ * we need to shift down the priority programmed by software to match it
++ * against the value returned by ICC_RPR_EL1.
++ */
++#define GICD_INT_RPR_PRI(priority)					\
++	({								\
++		u32 __priority = (priority);				\
++		if (static_branch_unlikely(&gic_nonsecure_priorities))	\
++			__priority = 0x80 | (__priority >> 1);		\
++									\
++		__priority;						\
++	})
++
+ /* ppi_nmi_refs[n] == number of cpus having ppi[n + 16] set as NMI */
+ static refcount_t *ppi_nmi_refs;
+ 
+@@ -687,7 +708,7 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
+ 		return;
+ 
+ 	if (gic_supports_nmi() &&
+-	    unlikely(gic_read_rpr() == GICD_INT_NMI_PRI)) {
++	    unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI))) {
+ 		gic_handle_nmi(irqnr, regs);
+ 		return;
+ 	}
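A worked example of the new macro, assuming the kernel's usual GICD_INT_NMI_PRI value of 0x20: with gic_nonsecure_priorities enabled, the Distributor presents the Non-secure view of that priority, so ICC_RPR_EL1 reads back 0x80 | (0x20 >> 1) = 0x90, which is exactly what GICD_INT_RPR_PRI(GICD_INT_NMI_PRI) now evaluates to; with the static branch disabled the macro returns 0x20 unchanged, so the comparison in gic_handle_irq() is correct in both configurations.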
+diff --git a/drivers/irqchip/irq-loongson-pch-pic.c b/drivers/irqchip/irq-loongson-pch-pic.c
+index f790ca6d78aa4..a4eb8a2181c7f 100644
+--- a/drivers/irqchip/irq-loongson-pch-pic.c
++++ b/drivers/irqchip/irq-loongson-pch-pic.c
+@@ -92,18 +92,22 @@ static int pch_pic_set_type(struct irq_data *d, unsigned int type)
+ 	case IRQ_TYPE_EDGE_RISING:
+ 		pch_pic_bitset(priv, PCH_PIC_EDGE, d->hwirq);
+ 		pch_pic_bitclr(priv, PCH_PIC_POL, d->hwirq);
++		irq_set_handler_locked(d, handle_edge_irq);
+ 		break;
+ 	case IRQ_TYPE_EDGE_FALLING:
+ 		pch_pic_bitset(priv, PCH_PIC_EDGE, d->hwirq);
+ 		pch_pic_bitset(priv, PCH_PIC_POL, d->hwirq);
++		irq_set_handler_locked(d, handle_edge_irq);
+ 		break;
+ 	case IRQ_TYPE_LEVEL_HIGH:
+ 		pch_pic_bitclr(priv, PCH_PIC_EDGE, d->hwirq);
+ 		pch_pic_bitclr(priv, PCH_PIC_POL, d->hwirq);
++		irq_set_handler_locked(d, handle_level_irq);
+ 		break;
+ 	case IRQ_TYPE_LEVEL_LOW:
+ 		pch_pic_bitclr(priv, PCH_PIC_EDGE, d->hwirq);
+ 		pch_pic_bitset(priv, PCH_PIC_POL, d->hwirq);
++		irq_set_handler_locked(d, handle_level_irq);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+@@ -113,11 +117,24 @@ static int pch_pic_set_type(struct irq_data *d, unsigned int type)
+ 	return ret;
+ }
+ 
++static void pch_pic_ack_irq(struct irq_data *d)
++{
++	unsigned int reg;
++	struct pch_pic *priv = irq_data_get_irq_chip_data(d);
++
++	reg = readl(priv->base + PCH_PIC_EDGE + PIC_REG_IDX(d->hwirq) * 4);
++	if (reg & BIT(PIC_REG_BIT(d->hwirq))) {
++		writel(BIT(PIC_REG_BIT(d->hwirq)),
++			priv->base + PCH_PIC_CLR + PIC_REG_IDX(d->hwirq) * 4);
++	}
++	irq_chip_ack_parent(d);
++}
++
+ static struct irq_chip pch_pic_irq_chip = {
+ 	.name			= "PCH PIC",
+ 	.irq_mask		= pch_pic_mask_irq,
+ 	.irq_unmask		= pch_pic_unmask_irq,
+-	.irq_ack		= irq_chip_ack_parent,
++	.irq_ack		= pch_pic_ack_irq,
+ 	.irq_set_affinity	= irq_chip_set_affinity_parent,
+ 	.irq_set_type		= pch_pic_set_type,
+ };
+diff --git a/drivers/leds/blink/leds-lgm-sso.c b/drivers/leds/blink/leds-lgm-sso.c
+index 7d5f0bf2817ad..24736f29d3633 100644
+--- a/drivers/leds/blink/leds-lgm-sso.c
++++ b/drivers/leds/blink/leds-lgm-sso.c
+@@ -630,8 +630,10 @@ __sso_led_dt_parse(struct sso_led_priv *priv, struct fwnode_handle *fw_ssoled)
+ 
+ 	fwnode_for_each_child_node(fw_ssoled, fwnode_child) {
+ 		led = devm_kzalloc(dev, sizeof(*led), GFP_KERNEL);
+-		if (!led)
+-			return -ENOMEM;
++		if (!led) {
++			ret = -ENOMEM;
++			goto __dt_err;
++		}
+ 
+ 		INIT_LIST_HEAD(&led->list);
+ 		led->priv = priv;
+@@ -641,7 +643,7 @@ __sso_led_dt_parse(struct sso_led_priv *priv, struct fwnode_handle *fw_ssoled)
+ 							      fwnode_child,
+ 							      GPIOD_ASIS, NULL);
+ 		if (IS_ERR(led->gpiod)) {
+-			dev_err(dev, "led: get gpio fail!\n");
++			ret = dev_err_probe(dev, PTR_ERR(led->gpiod), "led: get gpio fail!\n");
+ 			goto __dt_err;
+ 		}
+ 
+@@ -661,8 +663,11 @@ __sso_led_dt_parse(struct sso_led_priv *priv, struct fwnode_handle *fw_ssoled)
+ 			desc->panic_indicator = 1;
+ 
+ 		ret = fwnode_property_read_u32(fwnode_child, "reg", &prop);
+-		if (ret != 0 || prop >= SSO_LED_MAX_NUM) {
++		if (ret)
++			goto __dt_err;
++		if (prop >= SSO_LED_MAX_NUM) {
+ 			dev_err(dev, "invalid LED pin:%u\n", prop);
++			ret = -EINVAL;
+ 			goto __dt_err;
+ 		}
+ 		desc->pin = prop;
+@@ -698,21 +703,22 @@ __sso_led_dt_parse(struct sso_led_priv *priv, struct fwnode_handle *fw_ssoled)
+ 				desc->brightness = LED_FULL;
+ 		}
+ 
+-		if (sso_create_led(priv, led, fwnode_child))
++		ret = sso_create_led(priv, led, fwnode_child);
++		if (ret)
+ 			goto __dt_err;
+ 	}
+-	fwnode_handle_put(fw_ssoled);
+ 
+ 	return 0;
++
+ __dt_err:
+-	fwnode_handle_put(fw_ssoled);
++	fwnode_handle_put(fwnode_child);
+ 	/* unregister leds */
+ 	list_for_each(p, &priv->led_list) {
+ 		led = list_entry(p, struct sso_led, list);
+ 		sso_led_shutdown(led);
+ 	}
+ 
+-	return -EINVAL;
++	return ret;
+ }
+ 
+ static int sso_led_dt_parse(struct sso_led_priv *priv)
+@@ -730,6 +736,7 @@ static int sso_led_dt_parse(struct sso_led_priv *priv)
+ 	fw_ssoled = fwnode_get_named_child_node(fwnode, "ssoled");
+ 	if (fw_ssoled) {
+ 		ret = __sso_led_dt_parse(priv, fw_ssoled);
++		fwnode_handle_put(fw_ssoled);
+ 		if (ret)
+ 			return ret;
+ 	}
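The reworked error path reflects how fwnode_for_each_child_node() manages references: the loop holds a reference on the current child (dropped automatically as the loop advances or completes), while the parent handle belongs to the caller, which is why sso_led_dt_parse() below now does the fwnode_handle_put(fw_ssoled) itself. On an early exit the child reference must be dropped explicitly. A minimal sketch of the convention, with example_setup_one() as a hypothetical per-node helper:

	static int example_parse(struct device *dev, struct fwnode_handle *parent)
	{
		struct fwnode_handle *child;
		int ret;

		fwnode_for_each_child_node(parent, child) {
			ret = example_setup_one(dev, child);	/* hypothetical */
			if (ret) {
				fwnode_handle_put(child); /* drop the loop's ref */
				return ret;
			}
		}

		return 0;	/* the reference on parent stays with the caller */
	}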
+diff --git a/drivers/leds/flash/leds-rt8515.c b/drivers/leds/flash/leds-rt8515.c
+index 590bfa180d104..44904fdee3cc0 100644
+--- a/drivers/leds/flash/leds-rt8515.c
++++ b/drivers/leds/flash/leds-rt8515.c
+@@ -343,8 +343,9 @@ static int rt8515_probe(struct platform_device *pdev)
+ 
+ 	ret = devm_led_classdev_flash_register_ext(dev, fled, &init_data);
+ 	if (ret) {
+-		dev_err(dev, "can't register LED %s\n", led->name);
++		fwnode_handle_put(child);
+ 		mutex_destroy(&rt->lock);
++		dev_err(dev, "can't register LED %s\n", led->name);
+ 		return ret;
+ 	}
+ 
+@@ -362,6 +363,7 @@ static int rt8515_probe(struct platform_device *pdev)
+ 		 */
+ 	}
+ 
++	fwnode_handle_put(child);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/leds/leds-is31fl32xx.c b/drivers/leds/leds-is31fl32xx.c
+index 2180255ad3393..899ed94b66876 100644
+--- a/drivers/leds/leds-is31fl32xx.c
++++ b/drivers/leds/leds-is31fl32xx.c
+@@ -385,6 +385,7 @@ static int is31fl32xx_parse_dt(struct device *dev,
+ 			dev_err(dev,
+ 				"Node %pOF 'reg' conflicts with another LED\n",
+ 				child);
++			ret = -EINVAL;
+ 			goto err;
+ 		}
+ 
+diff --git a/drivers/leds/leds-lt3593.c b/drivers/leds/leds-lt3593.c
+index 68e06434ac087..7dab08773a347 100644
+--- a/drivers/leds/leds-lt3593.c
++++ b/drivers/leds/leds-lt3593.c
+@@ -99,10 +99,9 @@ static int lt3593_led_probe(struct platform_device *pdev)
+ 	init_data.default_label = ":";
+ 
+ 	ret = devm_led_classdev_register_ext(dev, &led_data->cdev, &init_data);
+-	if (ret < 0) {
+-		fwnode_handle_put(child);
++	fwnode_handle_put(child);
++	if (ret < 0)
+ 		return ret;
+-	}
+ 
+ 	platform_set_drvdata(pdev, led_data);
+ 
+diff --git a/drivers/leds/trigger/ledtrig-audio.c b/drivers/leds/trigger/ledtrig-audio.c
+index f76621e88482d..c6b437e6369b8 100644
+--- a/drivers/leds/trigger/ledtrig-audio.c
++++ b/drivers/leds/trigger/ledtrig-audio.c
+@@ -6,10 +6,33 @@
+ #include <linux/kernel.h>
+ #include <linux/leds.h>
+ #include <linux/module.h>
++#include "../leds.h"
+ 
+-static struct led_trigger *ledtrig_audio[NUM_AUDIO_LEDS];
+ static enum led_brightness audio_state[NUM_AUDIO_LEDS];
+ 
++static int ledtrig_audio_mute_activate(struct led_classdev *led_cdev)
++{
++	led_set_brightness_nosleep(led_cdev, audio_state[LED_AUDIO_MUTE]);
++	return 0;
++}
++
++static int ledtrig_audio_micmute_activate(struct led_classdev *led_cdev)
++{
++	led_set_brightness_nosleep(led_cdev, audio_state[LED_AUDIO_MICMUTE]);
++	return 0;
++}
++
++static struct led_trigger ledtrig_audio[NUM_AUDIO_LEDS] = {
++	[LED_AUDIO_MUTE] = {
++		.name     = "audio-mute",
++		.activate = ledtrig_audio_mute_activate,
++	},
++	[LED_AUDIO_MICMUTE] = {
++		.name     = "audio-micmute",
++		.activate = ledtrig_audio_micmute_activate,
++	},
++};
++
+ enum led_brightness ledtrig_audio_get(enum led_audio type)
+ {
+ 	return audio_state[type];
+@@ -19,24 +42,22 @@ EXPORT_SYMBOL_GPL(ledtrig_audio_get);
+ void ledtrig_audio_set(enum led_audio type, enum led_brightness state)
+ {
+ 	audio_state[type] = state;
+-	led_trigger_event(ledtrig_audio[type], state);
++	led_trigger_event(&ledtrig_audio[type], state);
+ }
+ EXPORT_SYMBOL_GPL(ledtrig_audio_set);
+ 
+ static int __init ledtrig_audio_init(void)
+ {
+-	led_trigger_register_simple("audio-mute",
+-				    &ledtrig_audio[LED_AUDIO_MUTE]);
+-	led_trigger_register_simple("audio-micmute",
+-				    &ledtrig_audio[LED_AUDIO_MICMUTE]);
++	led_trigger_register(&ledtrig_audio[LED_AUDIO_MUTE]);
++	led_trigger_register(&ledtrig_audio[LED_AUDIO_MICMUTE]);
+ 	return 0;
+ }
+ module_init(ledtrig_audio_init);
+ 
+ static void __exit ledtrig_audio_exit(void)
+ {
+-	led_trigger_unregister_simple(ledtrig_audio[LED_AUDIO_MUTE]);
+-	led_trigger_unregister_simple(ledtrig_audio[LED_AUDIO_MICMUTE]);
++	led_trigger_unregister(&ledtrig_audio[LED_AUDIO_MUTE]);
++	led_trigger_unregister(&ledtrig_audio[LED_AUDIO_MICMUTE]);
+ }
+ module_exit(ledtrig_audio_exit);
+ 
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index bea8c4429ae8f..a407e3be0f170 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -935,20 +935,20 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
+ 	n = BITS_TO_LONGS(d->nr_stripes) * sizeof(unsigned long);
+ 	d->full_dirty_stripes = kvzalloc(n, GFP_KERNEL);
+ 	if (!d->full_dirty_stripes)
+-		return -ENOMEM;
++		goto out_free_stripe_sectors_dirty;
+ 
+ 	idx = ida_simple_get(&bcache_device_idx, 0,
+ 				BCACHE_DEVICE_IDX_MAX, GFP_KERNEL);
+ 	if (idx < 0)
+-		return idx;
++		goto out_free_full_dirty_stripes;
+ 
+ 	if (bioset_init(&d->bio_split, 4, offsetof(struct bbio, bio),
+ 			BIOSET_NEED_BVECS|BIOSET_NEED_RESCUER))
+-		goto err;
++		goto out_ida_remove;
+ 
+ 	d->disk = alloc_disk(BCACHE_MINORS);
+ 	if (!d->disk)
+-		goto err;
++		goto out_bioset_exit;
+ 
+ 	set_capacity(d->disk, sectors);
+ 	snprintf(d->disk->disk_name, DISK_NAME_LEN, "bcache%i", idx);
+@@ -994,8 +994,14 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
+ 
+ 	return 0;
+ 
+-err:
++out_bioset_exit:
++	bioset_exit(&d->bio_split);
++out_ida_remove:
+ 	ida_simple_remove(&bcache_device_idx, idx);
++out_free_full_dirty_stripes:
++	kvfree(d->full_dirty_stripes);
++out_free_stripe_sectors_dirty:
++	kvfree(d->stripe_sectors_dirty);
+ 	return -ENOMEM;
+ 
+ }
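The relabelled exit path above is the standard goto-unwind ladder: each acquisition gets its own label, a failure jumps to the label that releases everything obtained so far, and the labels run in reverse order of acquisition so nothing is leaked or freed twice. A generic sketch of the shape (allocators and names are illustrative, not bcache's):

	static int example_init(void)
	{
		if (alloc_a())			/* hypothetical allocators */
			return -ENOMEM;
		if (alloc_b())
			goto out_free_a;
		if (alloc_c())
			goto out_free_b;

		return 0;

	out_free_b:				/* unwind in reverse order */
		free_b();
	out_free_a:
		free_a();
		return -ENOMEM;
	}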
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 753822ca96131..6b8e58ae3f9ee 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1324,6 +1324,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 	struct raid1_plug_cb *plug = NULL;
+ 	int first_clone;
+ 	int max_sectors;
++	bool write_behind = false;
+ 
+ 	if (mddev_is_clustered(mddev) &&
+ 	     md_cluster_ops->area_resyncing(mddev, WRITE,
+@@ -1376,6 +1377,15 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 	max_sectors = r1_bio->sectors;
+ 	for (i = 0;  i < disks; i++) {
+ 		struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
++
++		/*
++		 * The write-behind io is only attempted on drives marked as
++		 * write-mostly, which means we may allocate a write-behind
++		 * bio later.
++		 */
++		if (rdev && test_bit(WriteMostly, &rdev->flags))
++			write_behind = true;
++
+ 		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
+ 			atomic_inc(&rdev->nr_pending);
+ 			blocked_rdev = rdev;
+@@ -1449,6 +1459,15 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 		goto retry_write;
+ 	}
+ 
++	/*
++	 * When using a bitmap, we may call alloc_behind_master_bio below.
++	 * alloc_behind_master_bio allocates a copy of the data payload a page
++	 * at a time and thus needs a new bio that can fit the whole payload
++	 * of this bio in page-sized chunks.
++	 */
++	if (write_behind && bitmap)
++		max_sectors = min_t(int, max_sectors,
++				    BIO_MAX_VECS * (PAGE_SIZE >> 9));
+ 	if (max_sectors < bio_sectors(bio)) {
+ 		struct bio *split = bio_split(bio, max_sectors,
+ 					      GFP_NOIO, &conf->bio_split);
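The new clamp can be made concrete: with 4 KiB pages, PAGE_SIZE >> 9 is 8 sectors per page, and with BIO_MAX_VECS at its usual value of 256 the behind-write copy can hold at most 256 pages, so max_sectors is capped at 256 * 8 = 2048 sectors (1 MiB); anything larger is split here, before alloc_behind_master_bio() tries to build the page-at-a-time copy.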
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 40e845fb97170..92b490aac93ee 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1706,6 +1706,11 @@ retry_discard:
+ 	} else
+ 		r10_bio->master_bio = (struct bio *)first_r10bio;
+ 
++	/*
++	 * First select target devices under the rcu lock and increment
++	 * the refcount on their rdev. Record them by setting
++	 * bios[x] to bio.
++	 */
+ 	rcu_read_lock();
+ 	for (disk = 0; disk < geo->raid_disks; disk++) {
+ 		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
+@@ -1737,9 +1742,6 @@ retry_discard:
+ 	for (disk = 0; disk < geo->raid_disks; disk++) {
+ 		sector_t dev_start, dev_end;
+ 		struct bio *mbio, *rbio = NULL;
+-		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
+-		struct md_rdev *rrdev = rcu_dereference(
+-			conf->mirrors[disk].replacement);
+ 
+ 		/*
+ 		 * Now start to calculate the start and end address for each disk.
+@@ -1769,9 +1771,12 @@ retry_discard:
+ 
+ 		/*
+ 		 * It only handles discard bio which size is >= stripe size, so
+-		 * dev_end > dev_start all the time
++		 * dev_end > dev_start all the time.
++		 * There is no need to take the rcu lock to get rdev here; we
++		 * already incremented rdev->nr_pending in the first loop.
+ 		 */
+ 		if (r10_bio->devs[disk].bio) {
++			struct md_rdev *rdev = conf->mirrors[disk].rdev;
+ 			mbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set);
+ 			mbio->bi_end_io = raid10_end_discard_request;
+ 			mbio->bi_private = r10_bio;
+@@ -1784,6 +1789,7 @@ retry_discard:
+ 			bio_endio(mbio);
+ 		}
+ 		if (r10_bio->devs[disk].repl_bio) {
++			struct md_rdev *rrdev = conf->mirrors[disk].replacement;
+ 			rbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set);
+ 			rbio->bi_end_io = raid10_end_discard_request;
+ 			rbio->bi_private = r10_bio;
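The two-pass structure above is a common RCU pattern: the first loop stays inside rcu_read_lock() only long enough to select devices and pin them via atomic_inc(&rdev->nr_pending); once the reference is held the pointers stay valid without the RCU lock, so the second loop, which may block while cloning and issuing bios, can read conf->mirrors[disk].rdev directly. A minimal sketch of the shape, with issue_io() standing in for the real work:

	/* Pass 1: select and pin devices under the rcu read lock. */
	rcu_read_lock();
	for (i = 0; i < n; i++) {
		struct md_rdev *rdev = rcu_dereference(mirrors[i].rdev);

		if (rdev) {
			atomic_inc(&rdev->nr_pending);	/* pin: stays valid */
			chosen[i] = true;
		}
	}
	rcu_read_unlock();

	/* Pass 2: the pin is held, so plain access is safe and may sleep. */
	for (i = 0; i < n; i++)
		if (chosen[i])
			issue_io(mirrors[i].rdev);	/* hypothetical */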
+diff --git a/drivers/media/i2c/tda1997x.c b/drivers/media/i2c/tda1997x.c
+index 89bb7e6dc7a42..9554c8348c020 100644
+--- a/drivers/media/i2c/tda1997x.c
++++ b/drivers/media/i2c/tda1997x.c
+@@ -2233,6 +2233,7 @@ static int tda1997x_core_init(struct v4l2_subdev *sd)
+ 	/* get initial HDMI status */
+ 	state->hdmi_status = io_read(sd, REG_HDMI_FLAGS);
+ 
++	io_write(sd, REG_EDID_ENABLE, EDID_ENABLE_A_EN | EDID_ENABLE_B_EN);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c
+index 2f42808c43a4b..c484c008ab027 100644
+--- a/drivers/media/platform/coda/coda-bit.c
++++ b/drivers/media/platform/coda/coda-bit.c
+@@ -2053,17 +2053,25 @@ static int __coda_start_decoding(struct coda_ctx *ctx)
+ 	u32 src_fourcc, dst_fourcc;
+ 	int ret;
+ 
++	q_data_src = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
++	q_data_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
++	src_fourcc = q_data_src->fourcc;
++	dst_fourcc = q_data_dst->fourcc;
++
+ 	if (!ctx->initialized) {
+ 		ret = __coda_decoder_seq_init(ctx);
+ 		if (ret < 0)
+ 			return ret;
++	} else {
++		ctx->frame_mem_ctrl &= ~(CODA_FRAME_CHROMA_INTERLEAVE | (0x3 << 9) |
++					 CODA9_FRAME_TILED2LINEAR);
++		if (dst_fourcc == V4L2_PIX_FMT_NV12 || dst_fourcc == V4L2_PIX_FMT_YUYV)
++			ctx->frame_mem_ctrl |= CODA_FRAME_CHROMA_INTERLEAVE;
++		if (ctx->tiled_map_type == GDI_TILED_FRAME_MB_RASTER_MAP)
++			ctx->frame_mem_ctrl |= (0x3 << 9) |
++				((ctx->use_vdoa) ? 0 : CODA9_FRAME_TILED2LINEAR);
+ 	}
+ 
+-	q_data_src = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
+-	q_data_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+-	src_fourcc = q_data_src->fourcc;
+-	dst_fourcc = q_data_dst->fourcc;
+-
+ 	coda_write(dev, ctx->parabuf.paddr, CODA_REG_BIT_PARA_BUF_ADDR);
+ 
+ 	ret = coda_alloc_framebuffers(ctx, q_data_dst, src_fourcc);
+diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
+index 53025c8c75312..20f59c59ff8a2 100644
+--- a/drivers/media/platform/omap3isp/isp.c
++++ b/drivers/media/platform/omap3isp/isp.c
+@@ -2037,8 +2037,10 @@ static int isp_subdev_notifier_complete(struct v4l2_async_notifier *async)
+ 	mutex_lock(&isp->media_dev.graph_mutex);
+ 
+ 	ret = media_entity_enum_init(&isp->crashed, &isp->media_dev);
+-	if (ret)
++	if (ret) {
++		mutex_unlock(&isp->media_dev.graph_mutex);
+ 		return ret;
++	}
+ 
+ 	list_for_each_entry(sd, &v4l2_dev->subdevs, list) {
+ 		if (sd->notifier != &isp->notifier)
+diff --git a/drivers/media/platform/qcom/venus/helpers.c b/drivers/media/platform/qcom/venus/helpers.c
+index b813d6dba4817..3a0871f0bea67 100644
+--- a/drivers/media/platform/qcom/venus/helpers.c
++++ b/drivers/media/platform/qcom/venus/helpers.c
+@@ -1138,6 +1138,9 @@ int venus_helper_set_format_constraints(struct venus_inst *inst)
+ 	if (!IS_V6(inst->core))
+ 		return 0;
+ 
++	if (inst->opb_fmt == HFI_COLOR_FORMAT_NV12_UBWC)
++		return 0;
++
+ 	pconstraint.buffer_type = HFI_BUFFER_OUTPUT2;
+ 	pconstraint.num_planes = 2;
+ 	pconstraint.plane_format[0].stride_multiples = 128;
+diff --git a/drivers/media/platform/qcom/venus/hfi_msgs.c b/drivers/media/platform/qcom/venus/hfi_msgs.c
+index a2d436d407b22..e8776ac45b020 100644
+--- a/drivers/media/platform/qcom/venus/hfi_msgs.c
++++ b/drivers/media/platform/qcom/venus/hfi_msgs.c
+@@ -261,7 +261,7 @@ sys_get_prop_image_version(struct device *dev,
+ 
+ 	smem_tbl_ptr = qcom_smem_get(QCOM_SMEM_HOST_ANY,
+ 		SMEM_IMG_VER_TBL, &smem_blk_sz);
+-	if (smem_tbl_ptr && smem_blk_sz >= SMEM_IMG_OFFSET_VENUS + VER_STR_SZ)
++	if (!IS_ERR(smem_tbl_ptr) && smem_blk_sz >= SMEM_IMG_OFFSET_VENUS + VER_STR_SZ)
+ 		memcpy(smem_tbl_ptr + SMEM_IMG_OFFSET_VENUS,
+ 		       img_ver, VER_STR_SZ);
+ }
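The fix above hinges on qcom_smem_get()'s return convention: it reports failure with an ERR_PTR-encoded errno, never NULL, so the old truthiness test always passed and a failed lookup would have been dereferenced. APIs that return ERR_PTR values must be checked with IS_ERR():

	ptr = qcom_smem_get(QCOM_SMEM_HOST_ANY, item, &size);
	if (IS_ERR(ptr))
		return PTR_ERR(ptr);	/* decode the errno; ptr is not a
					 * valid pointer on this path */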
+diff --git a/drivers/media/platform/qcom/venus/venc.c b/drivers/media/platform/qcom/venus/venc.c
+index 4a7291f934b6b..2c443c1afd3a5 100644
+--- a/drivers/media/platform/qcom/venus/venc.c
++++ b/drivers/media/platform/qcom/venus/venc.c
+@@ -183,6 +183,8 @@ venc_try_fmt_common(struct venus_inst *inst, struct v4l2_format *f)
+ 		else
+ 			return NULL;
+ 		fmt = find_format(inst, pixmp->pixelformat, f->type);
++		if (!fmt)
++			return NULL;
+ 	}
+ 
+ 	pixmp->width = clamp(pixmp->width, frame_width_min(inst),
+diff --git a/drivers/media/platform/rockchip/rga/rga-buf.c b/drivers/media/platform/rockchip/rga/rga-buf.c
+index bf9a75b75083b..81508ed5abf34 100644
+--- a/drivers/media/platform/rockchip/rga/rga-buf.c
++++ b/drivers/media/platform/rockchip/rga/rga-buf.c
+@@ -79,9 +79,8 @@ static int rga_buf_start_streaming(struct vb2_queue *q, unsigned int count)
+ 	struct rockchip_rga *rga = ctx->rga;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(rga->dev);
++	ret = pm_runtime_resume_and_get(rga->dev);
+ 	if (ret < 0) {
+-		pm_runtime_put_noidle(rga->dev);
+ 		rga_buf_return_buffers(q, VB2_BUF_STATE_QUEUED);
+ 		return ret;
+ 	}
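Both rga hunks switch to pm_runtime_resume_and_get(), available since 5.10, which folds in the error handling that pm_runtime_get_sync() callers had to do by hand: on failure it drops the usage count it just took, so the explicit pm_runtime_put_noidle() on the error path goes away. The idiom, sketched:

	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0)
		return ret;	/* usage count already dropped for us */

	/* ... talk to the powered-up device ... */

	pm_runtime_put(dev);	/* balance the successful get */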
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index 9d122429706e9..6759091b15e09 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -863,10 +863,12 @@ static int rga_probe(struct platform_device *pdev)
+ 	if (IS_ERR(rga->m2m_dev)) {
+ 		v4l2_err(&rga->v4l2_dev, "Failed to init mem2mem device\n");
+ 		ret = PTR_ERR(rga->m2m_dev);
+-		goto unreg_video_dev;
++		goto rel_vdev;
+ 	}
+ 
+-	pm_runtime_get_sync(rga->dev);
++	ret = pm_runtime_resume_and_get(rga->dev);
++	if (ret < 0)
++		goto rel_vdev;
+ 
+ 	rga->version.major = (rga_read(rga, RGA_VERSION_INFO) >> 24) & 0xFF;
+ 	rga->version.minor = (rga_read(rga, RGA_VERSION_INFO) >> 20) & 0x0F;
+@@ -880,11 +882,23 @@ static int rga_probe(struct platform_device *pdev)
+ 	rga->cmdbuf_virt = dma_alloc_attrs(rga->dev, RGA_CMDBUF_SIZE,
+ 					   &rga->cmdbuf_phy, GFP_KERNEL,
+ 					   DMA_ATTR_WRITE_COMBINE);
++	if (!rga->cmdbuf_virt) {
++		ret = -ENOMEM;
++		goto rel_vdev;
++	}
+ 
+ 	rga->src_mmu_pages =
+ 		(unsigned int *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 3);
++	if (!rga->src_mmu_pages) {
++		ret = -ENOMEM;
++		goto free_dma;
++	}
+ 	rga->dst_mmu_pages =
+ 		(unsigned int *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 3);
++	if (!rga->dst_mmu_pages) {
++		ret = -ENOMEM;
++		goto free_src_pages;
++	}
+ 
+ 	def_frame.stride = (def_frame.width * def_frame.fmt->depth) >> 3;
+ 	def_frame.size = def_frame.stride * def_frame.height;
+@@ -892,7 +906,7 @@ static int rga_probe(struct platform_device *pdev)
+ 	ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
+ 	if (ret) {
+ 		v4l2_err(&rga->v4l2_dev, "Failed to register video device\n");
+-		goto rel_vdev;
++		goto free_dst_pages;
+ 	}
+ 
+ 	v4l2_info(&rga->v4l2_dev, "Registered %s as /dev/%s\n",
+@@ -900,10 +914,15 @@ static int rga_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++free_dst_pages:
++	free_pages((unsigned long)rga->dst_mmu_pages, 3);
++free_src_pages:
++	free_pages((unsigned long)rga->src_mmu_pages, 3);
++free_dma:
++	dma_free_attrs(rga->dev, RGA_CMDBUF_SIZE, rga->cmdbuf_virt,
++		       rga->cmdbuf_phy, DMA_ATTR_WRITE_COMBINE);
+ rel_vdev:
+ 	video_device_release(vfd);
+-unreg_video_dev:
+-	video_unregister_device(rga->vfd);
+ unreg_v4l2_dev:
+ 	v4l2_device_unregister(&rga->v4l2_dev);
+ err_put_clk:
+diff --git a/drivers/media/spi/cxd2880-spi.c b/drivers/media/spi/cxd2880-spi.c
+index 931ec0727cd38..a280e4bd80c2f 100644
+--- a/drivers/media/spi/cxd2880-spi.c
++++ b/drivers/media/spi/cxd2880-spi.c
+@@ -524,13 +524,13 @@ cxd2880_spi_probe(struct spi_device *spi)
+ 	if (IS_ERR(dvb_spi->vcc_supply)) {
+ 		if (PTR_ERR(dvb_spi->vcc_supply) == -EPROBE_DEFER) {
+ 			ret = -EPROBE_DEFER;
+-			goto fail_adapter;
++			goto fail_regulator;
+ 		}
+ 		dvb_spi->vcc_supply = NULL;
+ 	} else {
+ 		ret = regulator_enable(dvb_spi->vcc_supply);
+ 		if (ret)
+-			goto fail_adapter;
++			goto fail_regulator;
+ 	}
+ 
+ 	dvb_spi->spi = spi;
+@@ -618,6 +618,9 @@ fail_frontend:
+ fail_attach:
+ 	dvb_unregister_adapter(&dvb_spi->adapter);
+ fail_adapter:
++	if (dvb_spi->vcc_supply)
++		regulator_disable(dvb_spi->vcc_supply);
++fail_regulator:
+ 	kfree(dvb_spi);
+ 	return ret;
+ }
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-i2c.c b/drivers/media/usb/dvb-usb/dvb-usb-i2c.c
+index 2e07106f46803..bc4b2abdde1a4 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-i2c.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-i2c.c
+@@ -17,7 +17,8 @@ int dvb_usb_i2c_init(struct dvb_usb_device *d)
+ 
+ 	if (d->props.i2c_algo == NULL) {
+ 		err("no i2c algorithm specified");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err;
+ 	}
+ 
+ 	strscpy(d->i2c_adap.name, d->desc->name, sizeof(d->i2c_adap.name));
+@@ -27,11 +28,15 @@ int dvb_usb_i2c_init(struct dvb_usb_device *d)
+ 
+ 	i2c_set_adapdata(&d->i2c_adap, d);
+ 
+-	if ((ret = i2c_add_adapter(&d->i2c_adap)) < 0)
++	ret = i2c_add_adapter(&d->i2c_adap);
++	if (ret < 0) {
+ 		err("could not add i2c adapter");
++		goto err;
++	}
+ 
+ 	d->state |= DVB_USB_STATE_I2C;
+ 
++err:
+ 	return ret;
+ }
+ 
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+index 28e1fd64dd3c2..61439c8f33cab 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-init.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c
+@@ -194,8 +194,8 @@ static int dvb_usb_init(struct dvb_usb_device *d, short *adapter_nums)
+ 
+ err_adapter_init:
+ 	dvb_usb_adapter_exit(d);
+-err_i2c_init:
+ 	dvb_usb_i2c_exit(d);
++err_i2c_init:
+ 	if (d->priv && d->props.priv_destroy)
+ 		d->props.priv_destroy(d);
+ err_priv_init:
+diff --git a/drivers/media/usb/dvb-usb/nova-t-usb2.c b/drivers/media/usb/dvb-usb/nova-t-usb2.c
+index e7b290552b663..9c0eb0d40822e 100644
+--- a/drivers/media/usb/dvb-usb/nova-t-usb2.c
++++ b/drivers/media/usb/dvb-usb/nova-t-usb2.c
+@@ -130,7 +130,7 @@ ret:
+ 
+ static int nova_t_read_mac_address (struct dvb_usb_device *d, u8 mac[6])
+ {
+-	int i;
++	int i, ret;
+ 	u8 b;
+ 
+ 	mac[0] = 0x00;
+@@ -139,7 +139,9 @@ static int nova_t_read_mac_address (struct dvb_usb_device *d, u8 mac[6])
+ 
+ 	/* this is a complete guess, but works for my box */
+ 	for (i = 136; i < 139; i++) {
+-		dibusb_read_eeprom_byte(d,i, &b);
++		ret = dibusb_read_eeprom_byte(d, i, &b);
++		if (ret)
++			return ret;
+ 
+ 		mac[5 - (i - 136)] = b;
+ 	}
+diff --git a/drivers/media/usb/dvb-usb/vp702x.c b/drivers/media/usb/dvb-usb/vp702x.c
+index bf54747e2e01a..a1d9e4801a2ba 100644
+--- a/drivers/media/usb/dvb-usb/vp702x.c
++++ b/drivers/media/usb/dvb-usb/vp702x.c
+@@ -291,16 +291,22 @@ static int vp702x_rc_query(struct dvb_usb_device *d, u32 *event, int *state)
+ static int vp702x_read_mac_addr(struct dvb_usb_device *d,u8 mac[6])
+ {
+ 	u8 i, *buf;
++	int ret;
+ 	struct vp702x_device_state *st = d->priv;
+ 
+ 	mutex_lock(&st->buf_mutex);
+ 	buf = st->buf;
+-	for (i = 6; i < 12; i++)
+-		vp702x_usb_in_op(d, READ_EEPROM_REQ, i, 1, &buf[i - 6], 1);
++	for (i = 6; i < 12; i++) {
++		ret = vp702x_usb_in_op(d, READ_EEPROM_REQ, i, 1,
++				       &buf[i - 6], 1);
++		if (ret < 0)
++			goto err;
++	}
+ 
+ 	memcpy(mac, buf, 6);
++err:
+ 	mutex_unlock(&st->buf_mutex);
+-	return 0;
++	return ret;
+ }
+ 
+ static int vp702x_frontend_attach(struct dvb_usb_adapter *adap)
+diff --git a/drivers/media/usb/em28xx/em28xx-input.c b/drivers/media/usb/em28xx/em28xx-input.c
+index 59529cbf9cd0b..0b6d77c3bec86 100644
+--- a/drivers/media/usb/em28xx/em28xx-input.c
++++ b/drivers/media/usb/em28xx/em28xx-input.c
+@@ -842,7 +842,6 @@ error:
+ 	kfree(ir);
+ ref_put:
+ 	em28xx_shutdown_buttons(dev);
+-	kref_put(&dev->ref, em28xx_free_device);
+ 	return err;
+ }
+ 
+diff --git a/drivers/media/usb/go7007/go7007-driver.c b/drivers/media/usb/go7007/go7007-driver.c
+index f1767be9d8685..6650eab913d81 100644
+--- a/drivers/media/usb/go7007/go7007-driver.c
++++ b/drivers/media/usb/go7007/go7007-driver.c
+@@ -691,49 +691,23 @@ struct go7007 *go7007_alloc(const struct go7007_board_info *board,
+ 						struct device *dev)
+ {
+ 	struct go7007 *go;
+-	int i;
+ 
+ 	go = kzalloc(sizeof(struct go7007), GFP_KERNEL);
+ 	if (go == NULL)
+ 		return NULL;
+ 	go->dev = dev;
+ 	go->board_info = board;
+-	go->board_id = 0;
+ 	go->tuner_type = -1;
+-	go->channel_number = 0;
+-	go->name[0] = 0;
+ 	mutex_init(&go->hw_lock);
+ 	init_waitqueue_head(&go->frame_waitq);
+ 	spin_lock_init(&go->spinlock);
+ 	go->status = STATUS_INIT;
+-	memset(&go->i2c_adapter, 0, sizeof(go->i2c_adapter));
+-	go->i2c_adapter_online = 0;
+-	go->interrupt_available = 0;
+ 	init_waitqueue_head(&go->interrupt_waitq);
+-	go->input = 0;
+ 	go7007_update_board(go);
+-	go->encoder_h_halve = 0;
+-	go->encoder_v_halve = 0;
+-	go->encoder_subsample = 0;
+ 	go->format = V4L2_PIX_FMT_MJPEG;
+ 	go->bitrate = 1500000;
+ 	go->fps_scale = 1;
+-	go->pali = 0;
+ 	go->aspect_ratio = GO7007_RATIO_1_1;
+-	go->gop_size = 0;
+-	go->ipb = 0;
+-	go->closed_gop = 0;
+-	go->repeat_seqhead = 0;
+-	go->seq_header_enable = 0;
+-	go->gop_header_enable = 0;
+-	go->dvd_mode = 0;
+-	go->interlace_coding = 0;
+-	for (i = 0; i < 4; ++i)
+-		go->modet[i].enable = 0;
+-	for (i = 0; i < 1624; ++i)
+-		go->modet_map[i] = 0;
+-	go->audio_deliver = NULL;
+-	go->audio_enabled = 0;
+ 
+ 	return go;
+ }
+diff --git a/drivers/media/usb/go7007/go7007-usb.c b/drivers/media/usb/go7007/go7007-usb.c
+index dbf0455d5d50d..eeb85981e02b6 100644
+--- a/drivers/media/usb/go7007/go7007-usb.c
++++ b/drivers/media/usb/go7007/go7007-usb.c
+@@ -1134,7 +1134,7 @@ static int go7007_usb_probe(struct usb_interface *intf,
+ 
+ 	ep = usb->usbdev->ep_in[4];
+ 	if (!ep)
+-		return -ENODEV;
++		goto allocfail;
+ 
+ 	/* Allocate the URB and buffer for receiving incoming interrupts */
+ 	usb->intr_urb = usb_alloc_urb(0, GFP_KERNEL);
+diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
+index cd833011f2850..4757e29b42c0b 100644
+--- a/drivers/misc/lkdtm/core.c
++++ b/drivers/misc/lkdtm/core.c
+@@ -81,7 +81,7 @@ static struct crashpoint crashpoints[] = {
+ 	CRASHPOINT("FS_DEVRW",		 "ll_rw_block"),
+ 	CRASHPOINT("MEM_SWAPOUT",	 "shrink_inactive_list"),
+ 	CRASHPOINT("TIMERADD",		 "hrtimer_start"),
+-	CRASHPOINT("SCSI_DISPATCH_CMD",	 "scsi_dispatch_cmd"),
++	CRASHPOINT("SCSI_QUEUE_RQ",	 "scsi_queue_rq"),
+ 	CRASHPOINT("IDE_CORE_CP",	 "generic_ide_ioctl"),
+ #endif
+ };
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index c3229d8c7041c..33cb70aa02aa8 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -782,6 +782,7 @@ static int dw_mci_edmac_start_dma(struct dw_mci *host,
+ 	int ret = 0;
+ 
+ 	/* Set external dma config: burst size, burst width */
++	memset(&cfg, 0, sizeof(cfg));
+ 	cfg.dst_addr = host->phy_regs + fifo_offset;
+ 	cfg.src_addr = cfg.dst_addr;
+ 	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c
+index bde2988875797..6c9d38132f74c 100644
+--- a/drivers/mmc/host/moxart-mmc.c
++++ b/drivers/mmc/host/moxart-mmc.c
+@@ -628,6 +628,7 @@ static int moxart_probe(struct platform_device *pdev)
+ 			 host->dma_chan_tx, host->dma_chan_rx);
+ 		host->have_dma = true;
+ 
++		memset(&cfg, 0, sizeof(cfg));
+ 		cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ 		cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 6b39126fbf06c..a1df6d4e9e86e 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1222,6 +1222,7 @@ static int sdhci_external_dma_setup(struct sdhci_host *host,
+ 	if (!host->mapbase)
+ 		return -EINVAL;
+ 
++	memset(&cfg, 0, sizeof(cfg));
+ 	cfg.src_addr = host->mapbase + SDHCI_BUFFER;
+ 	cfg.dst_addr = host->mapbase + SDHCI_BUFFER;
+ 	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
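The three mmc hunks above share one root cause: struct dma_slave_config lives on the stack and each driver fills in only some members, so any field it skips, including fields added to the struct after the driver was written, carries stack garbage into dmaengine_slave_config(). Zeroing the struct first gives unset members a well-defined value. Sketched, with illustrative values:

	struct dma_slave_config cfg;
	int ret;

	memset(&cfg, 0, sizeof(cfg));	/* unset members read as 0, not junk */
	cfg.direction = DMA_MEM_TO_DEV;
	cfg.dst_addr = fifo_phys_addr;			/* illustrative */
	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
	ret = dmaengine_slave_config(chan, &cfg);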
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 3ca6b394dd5f5..54b273d858612 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1993,15 +1993,6 @@ int b53_br_flags(struct dsa_switch *ds, int port,
+ }
+ EXPORT_SYMBOL(b53_br_flags);
+ 
+-int b53_set_mrouter(struct dsa_switch *ds, int port, bool mrouter,
+-		    struct netlink_ext_ack *extack)
+-{
+-	b53_port_set_mcast_flood(ds->priv, port, mrouter);
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL(b53_set_mrouter);
+-
+ static bool b53_possible_cpu_port(struct dsa_switch *ds, int port)
+ {
+ 	/* Broadcom switches will accept enabling Broadcom tags on the
+@@ -2245,7 +2236,6 @@ static const struct dsa_switch_ops b53_switch_ops = {
+ 	.port_bridge_leave	= b53_br_leave,
+ 	.port_pre_bridge_flags	= b53_br_flags_pre,
+ 	.port_bridge_flags	= b53_br_flags,
+-	.port_set_mrouter	= b53_set_mrouter,
+ 	.port_stp_state_set	= b53_br_set_stp_state,
+ 	.port_fast_age		= b53_br_fast_age,
+ 	.port_vlan_filtering	= b53_vlan_filtering,
+diff --git a/drivers/net/dsa/b53/b53_priv.h b/drivers/net/dsa/b53/b53_priv.h
+index 82700a5714c10..9bf8319342b0b 100644
+--- a/drivers/net/dsa/b53/b53_priv.h
++++ b/drivers/net/dsa/b53/b53_priv.h
+@@ -328,8 +328,6 @@ int b53_br_flags_pre(struct dsa_switch *ds, int port,
+ int b53_br_flags(struct dsa_switch *ds, int port,
+ 		 struct switchdev_brport_flags flags,
+ 		 struct netlink_ext_ack *extack);
+-int b53_set_mrouter(struct dsa_switch *ds, int port, bool mrouter,
+-		    struct netlink_ext_ack *extack);
+ int b53_setup_devlink_resources(struct dsa_switch *ds);
+ void b53_port_event(struct dsa_switch *ds, int port);
+ void b53_phylink_validate(struct dsa_switch *ds, int port,
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 3b018fcf44124..6ce9ec1283e05 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -1199,7 +1199,6 @@ static const struct dsa_switch_ops bcm_sf2_ops = {
+ 	.port_pre_bridge_flags	= b53_br_flags_pre,
+ 	.port_bridge_flags	= b53_br_flags,
+ 	.port_stp_state_set	= b53_br_set_stp_state,
+-	.port_set_mrouter	= b53_set_mrouter,
+ 	.port_fast_age		= b53_br_fast_age,
+ 	.port_vlan_filtering	= b53_vlan_filtering,
+ 	.port_vlan_add		= b53_vlan_add,
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 2b01efad1a51c..647f8e5c16da5 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1172,18 +1172,6 @@ mt7530_port_bridge_flags(struct dsa_switch *ds, int port,
+ 	return 0;
+ }
+ 
+-static int
+-mt7530_port_set_mrouter(struct dsa_switch *ds, int port, bool mrouter,
+-			struct netlink_ext_ack *extack)
+-{
+-	struct mt7530_priv *priv = ds->priv;
+-
+-	mt7530_rmw(priv, MT7530_MFC, UNM_FFP(BIT(port)),
+-		   mrouter ? UNM_FFP(BIT(port)) : 0);
+-
+-	return 0;
+-}
+-
+ static int
+ mt7530_port_bridge_join(struct dsa_switch *ds, int port,
+ 			struct net_device *bridge)
+@@ -2847,7 +2835,6 @@ static const struct dsa_switch_ops mt7530_switch_ops = {
+ 	.port_stp_state_set	= mt7530_stp_state_set,
+ 	.port_pre_bridge_flags	= mt7530_port_pre_bridge_flags,
+ 	.port_bridge_flags	= mt7530_port_bridge_flags,
+-	.port_set_mrouter	= mt7530_port_set_mrouter,
+ 	.port_bridge_join	= mt7530_port_bridge_join,
+ 	.port_bridge_leave	= mt7530_port_bridge_leave,
+ 	.port_fdb_add		= mt7530_port_fdb_add,
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 272b0535d9461..111a6d5985da6 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -5781,23 +5781,6 @@ out:
+ 	return err;
+ }
+ 
+-static int mv88e6xxx_port_set_mrouter(struct dsa_switch *ds, int port,
+-				      bool mrouter,
+-				      struct netlink_ext_ack *extack)
+-{
+-	struct mv88e6xxx_chip *chip = ds->priv;
+-	int err;
+-
+-	if (!chip->info->ops->port_set_mcast_flood)
+-		return -EOPNOTSUPP;
+-
+-	mv88e6xxx_reg_lock(chip);
+-	err = chip->info->ops->port_set_mcast_flood(chip, port, mrouter);
+-	mv88e6xxx_reg_unlock(chip);
+-
+-	return err;
+-}
+-
+ static bool mv88e6xxx_lag_can_offload(struct dsa_switch *ds,
+ 				      struct net_device *lag,
+ 				      struct netdev_lag_upper_info *info)
+@@ -6099,7 +6082,6 @@ static const struct dsa_switch_ops mv88e6xxx_switch_ops = {
+ 	.port_bridge_leave	= mv88e6xxx_port_bridge_leave,
+ 	.port_pre_bridge_flags	= mv88e6xxx_port_pre_bridge_flags,
+ 	.port_bridge_flags	= mv88e6xxx_port_bridge_flags,
+-	.port_set_mrouter	= mv88e6xxx_port_set_mrouter,
+ 	.port_stp_state_set	= mv88e6xxx_port_stp_state_set,
+ 	.port_fast_age		= mv88e6xxx_port_fast_age,
+ 	.port_vlan_filtering	= mv88e6xxx_port_vlan_filtering,
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index 59253846e8858..f26d037356191 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -417,6 +417,9 @@ static int atl_resume_common(struct device *dev, bool deep)
+ 	pci_restore_state(pdev);
+ 
+ 	if (deep) {
++		/* Reinitialize Nic/Vecs objects */
++		aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol);
++
+ 		ret = aq_nic_init(nic);
+ 		if (ret)
+ 			goto err_exit;
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
+index 53864f2005994..b175f2b2f5bcf 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.c
++++ b/drivers/net/ethernet/google/gve/gve_adminq.c
+@@ -233,7 +233,8 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv,
+ 	tail = ioread32be(&priv->reg_bar0->adminq_event_counter);
+ 
+ 	// Check if next command will overflow the buffer.
+-	if (((priv->adminq_prod_cnt + 1) & priv->adminq_mask) == tail) {
++	if (((priv->adminq_prod_cnt + 1) & priv->adminq_mask) ==
++	    (tail & priv->adminq_mask)) {
+ 		int err;
+ 
+ 		// Flush existing commands to make room.
+@@ -243,7 +244,8 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv,
+ 
+ 		// Retry.
+ 		tail = ioread32be(&priv->reg_bar0->adminq_event_counter);
+-		if (((priv->adminq_prod_cnt + 1) & priv->adminq_mask) == tail) {
++		if (((priv->adminq_prod_cnt + 1) & priv->adminq_mask) ==
++		    (tail & priv->adminq_mask)) {
+ 			// This should never happen. We just flushed the
+ 			// command queue so there should be enough space.
+ 			return -ENOMEM;
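A worked example of why both sides of the fullness check must be masked: adminq_prod_cnt and the event counter read into tail are free-running, so once tail exceeds the ring size the old test ((prod + 1) & mask) == tail compares a small masked value against a large raw one and can never fire. Take a 32-entry ring (mask = 31) with prod = 63 and tail = 32: 31 commands are outstanding, which this check treats as full. Masked, ((63 + 1) & 31) == (32 & 31) is 0 == 0 and correctly blocks the producer; unmasked, 0 == 32 is false and the next command would overwrite an unconsumed slot.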
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index eff0a30790dd7..472f56b360b8c 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -1160,12 +1160,12 @@ static int i40e_quiesce_vf_pci(struct i40e_vf *vf)
+ }
+ 
+ /**
+- * i40e_getnum_vf_vsi_vlan_filters
++ * __i40e_getnum_vf_vsi_vlan_filters
+  * @vsi: pointer to the vsi
+  *
+  * called to get the number of VLANs offloaded on this VF
+  **/
+-static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
++static int __i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
+ {
+ 	struct i40e_mac_filter *f;
+ 	u16 num_vlans = 0, bkt;
+@@ -1178,6 +1178,23 @@ static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
+ 	return num_vlans;
+ }
+ 
++/**
++ * i40e_getnum_vf_vsi_vlan_filters
++ * @vsi: pointer to the vsi
++ *
++ * wrapper for __i40e_getnum_vf_vsi_vlan_filters() with spinlock held
++ **/
++static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
++{
++	int num_vlans;
++
++	spin_lock_bh(&vsi->mac_filter_hash_lock);
++	num_vlans = __i40e_getnum_vf_vsi_vlan_filters(vsi);
++	spin_unlock_bh(&vsi->mac_filter_hash_lock);
++
++	return num_vlans;
++}
++
+ /**
+  * i40e_get_vlan_list_sync
+  * @vsi: pointer to the VSI
+@@ -1195,7 +1212,7 @@ static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, u16 *num_vlans,
+ 	int bkt;
+ 
+ 	spin_lock_bh(&vsi->mac_filter_hash_lock);
+-	*num_vlans = i40e_getnum_vf_vsi_vlan_filters(vsi);
++	*num_vlans = __i40e_getnum_vf_vsi_vlan_filters(vsi);
+ 	*vlan_list = kcalloc(*num_vlans, sizeof(**vlan_list), GFP_ATOMIC);
+ 	if (!(*vlan_list))
+ 		goto err;
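The split above follows the kernel's double-underscore locking convention: __foo() assumes the caller already holds the lock, foo() is a thin wrapper that takes it. i40e_get_vlan_list_sync() already holds mac_filter_hash_lock, so it calls the unlocked variant; everyone else goes through the wrapper. The shape, sketched generically:

	/* Caller must hold ctx->lock. */
	static int __example_count(struct example_ctx *ctx)
	{
		int n = 0;

		/* ... walk data structures protected by ctx->lock ... */
		return n;
	}

	/* Locked wrapper for callers that do not hold ctx->lock. */
	static int example_count(struct example_ctx *ctx)
	{
		int n;

		spin_lock_bh(&ctx->lock);
		n = __example_count(ctx);
		spin_unlock_bh(&ctx->lock);

		return n;
	}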
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index a7f2f5c490e30..e16d20b77a3ff 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -4911,6 +4911,7 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 	struct ice_hw *hw = &pf->hw;
+ 	struct sockaddr *addr = pi;
+ 	enum ice_status status;
++	u8 old_mac[ETH_ALEN];
+ 	u8 flags = 0;
+ 	int err = 0;
+ 	u8 *mac;
+@@ -4933,8 +4934,13 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+ 	}
+ 
+ 	netif_addr_lock_bh(netdev);
++	ether_addr_copy(old_mac, netdev->dev_addr);
++	/* change the netdev's MAC address */
++	memcpy(netdev->dev_addr, mac, netdev->addr_len);
++	netif_addr_unlock_bh(netdev);
++
+ 	/* Clean up old MAC filter. Not an error if old filter doesn't exist */
+-	status = ice_fltr_remove_mac(vsi, netdev->dev_addr, ICE_FWD_TO_VSI);
++	status = ice_fltr_remove_mac(vsi, old_mac, ICE_FWD_TO_VSI);
+ 	if (status && status != ICE_ERR_DOES_NOT_EXIST) {
+ 		err = -EADDRNOTAVAIL;
+ 		goto err_update_filters;
+@@ -4957,13 +4963,12 @@ err_update_filters:
+ 	if (err) {
+ 		netdev_err(netdev, "can't set MAC %pM. filter update failed\n",
+ 			   mac);
++		netif_addr_lock_bh(netdev);
++		ether_addr_copy(netdev->dev_addr, old_mac);
+ 		netif_addr_unlock_bh(netdev);
+ 		return err;
+ 	}
+ 
+-	/* change the netdev's MAC address */
+-	memcpy(netdev->dev_addr, mac, netdev->addr_len);
+-	netif_addr_unlock_bh(netdev);
+ 	netdev_dbg(vsi->netdev, "updated MAC address to %pM\n",
+ 		   netdev->dev_addr);
+ 
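The reordering above is a save/apply/rollback pattern: snapshot the old MAC and install the new one first, both under the addr lock so readers never see a half-written address, then swap the hardware filters, and copy the snapshot back only if the filter update fails. Sketched with a hypothetical update_hw_filters():

	u8 old_mac[ETH_ALEN];

	netif_addr_lock_bh(netdev);
	ether_addr_copy(old_mac, netdev->dev_addr);	/* snapshot */
	memcpy(netdev->dev_addr, new_mac, netdev->addr_len);
	netif_addr_unlock_bh(netdev);

	if (update_hw_filters(netdev)) {		/* hypothetical */
		netif_addr_lock_bh(netdev);
		ether_addr_copy(netdev->dev_addr, old_mac); /* roll back */
		netif_addr_unlock_bh(netdev);
		return -EADDRNOTAVAIL;
	}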
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h
+index e66109367487a..c1e11cb68d265 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
+@@ -195,8 +195,6 @@ enum nix_scheduler {
+ #define NIX_CHAN_LBK_CHX(a, b)		(0 + 0x100 * (a) + (b))
+ #define NIX_CHAN_SDP_CH_START		(0x700ull)
+ 
+-#define SDP_CHANNELS			256
+-
+ /* NIX LSO format indices.
+  * As of now TSO is the only one using, so statically assigning indices.
+  */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+index 7d9e71c6965fb..7a2157709dde0 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+@@ -12,9 +12,10 @@
+ 
+ int rvu_set_channels_base(struct rvu *rvu)
+ {
++	u16 nr_lbk_chans, nr_sdp_chans, nr_cgx_chans, nr_cpt_chans;
++	u16 sdp_chan_base, cgx_chan_base, cpt_chan_base;
+ 	struct rvu_hwinfo *hw = rvu->hw;
+-	u16 cpt_chan_base;
+-	u64 nix_const;
++	u64 nix_const, nix_const1;
+ 	int blkaddr;
+ 
+ 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, 0);
+@@ -22,6 +23,7 @@ int rvu_set_channels_base(struct rvu *rvu)
+ 		return blkaddr;
+ 
+ 	nix_const = rvu_read64(rvu, blkaddr, NIX_AF_CONST);
++	nix_const1 = rvu_read64(rvu, blkaddr, NIX_AF_CONST1);
+ 
+ 	hw->cgx = (nix_const >> 12) & 0xFULL;
+ 	hw->lmac_per_cgx = (nix_const >> 8) & 0xFULL;
+@@ -44,14 +46,24 @@ int rvu_set_channels_base(struct rvu *rvu)
+ 	 * channels such that all channel numbers are contiguous
+ 	 * leaving no holes. This way the new CPT channels can be
+ 	 * accommodated. The order of channel numbers assigned is
+-	 * LBK, SDP, CGX and CPT.
++	 * LBK, SDP, CGX and CPT. Also the base channel number
++	 * of a block must be multiple of number of channels
++	 * of the block.
+ 	 */
+-	hw->sdp_chan_base = hw->lbk_chan_base + hw->lbk_links *
+-				((nix_const >> 16) & 0xFFULL);
+-	hw->cgx_chan_base = hw->sdp_chan_base + hw->sdp_links * SDP_CHANNELS;
++	nr_lbk_chans = (nix_const >> 16) & 0xFFULL;
++	nr_sdp_chans = nix_const1 & 0xFFFULL;
++	nr_cgx_chans = nix_const & 0xFFULL;
++	nr_cpt_chans = (nix_const >> 32) & 0xFFFULL;
+ 
+-	cpt_chan_base = hw->cgx_chan_base + hw->cgx_links *
+-				(nix_const & 0xFFULL);
++	sdp_chan_base = hw->lbk_chan_base + hw->lbk_links * nr_lbk_chans;
++	/* Round up base channel to multiple of number of channels */
++	hw->sdp_chan_base = ALIGN(sdp_chan_base, nr_sdp_chans);
++
++	cgx_chan_base = hw->sdp_chan_base + hw->sdp_links * nr_sdp_chans;
++	hw->cgx_chan_base = ALIGN(cgx_chan_base, nr_cgx_chans);
++
++	cpt_chan_base = hw->cgx_chan_base + hw->cgx_links * nr_cgx_chans;
++	hw->cpt_chan_base = ALIGN(cpt_chan_base, nr_cpt_chans);
+ 
+ 	/* Out of 4096 channels start CPT from 2048 so
+ 	 * that MSB for CPT channels is always set
+@@ -155,6 +167,7 @@ err_put:
+ 
+ static void __rvu_nix_set_channels(struct rvu *rvu, int blkaddr)
+ {
++	u64 nix_const1 = rvu_read64(rvu, blkaddr, NIX_AF_CONST1);
+ 	u64 nix_const = rvu_read64(rvu, blkaddr, NIX_AF_CONST);
+ 	u16 cgx_chans, lbk_chans, sdp_chans, cpt_chans;
+ 	struct rvu_hwinfo *hw = rvu->hw;
+@@ -164,7 +177,7 @@ static void __rvu_nix_set_channels(struct rvu *rvu, int blkaddr)
+ 
+ 	cgx_chans = nix_const & 0xFFULL;
+ 	lbk_chans = (nix_const >> 16) & 0xFFULL;
+-	sdp_chans = SDP_CHANNELS;
++	sdp_chans = nix_const1 & 0xFFFULL;
+ 	cpt_chans = (nix_const >> 32) & 0xFFFULL;
+ 
+ 	start = hw->cgx_chan_base;
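A concrete reading of the new base computation, on the assumption (which the kernel's ALIGN() requires) that the per-block channel counts are powers of two: ALIGN(x, a) rounds x up to the next multiple of a via (x + a - 1) & ~(a - 1). So if, say, lbk_chan_base is 0, lbk_links is 2 and nr_lbk_chans is 64, the raw sdp_chan_base comes out as 128; with nr_sdp_chans = 256, ALIGN(128, 256) = 256 becomes the SDP base, satisfying the comment's rule that each block's base channel is a multiple of its channel count.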
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index 0bc4529691ec9..d413078fc043b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -23,7 +23,7 @@
+ #define RSVD_MCAM_ENTRIES_PER_NIXLF	1 /* Ucast for LFs */
+ 
+ #define NPC_PARSE_RESULT_DMAC_OFFSET	8
+-#define NPC_HW_TSTAMP_OFFSET		8
++#define NPC_HW_TSTAMP_OFFSET		8ULL
+ #define NPC_KEX_CHAN_MASK		0xFFFULL
+ #define NPC_KEX_PF_FUNC_MASK		0xFFFFULL
+ 
+@@ -823,7 +823,7 @@ void rvu_npc_enable_bcast_entry(struct rvu *rvu, u16 pcifunc, bool enable)
+ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
+ 				     int blkaddr, u16 pcifunc, u64 rx_action)
+ {
+-	int actindex, index, bank;
++	int actindex, index, bank, entry;
+ 	bool enable;
+ 
+ 	if (!(pcifunc & RVU_PFVF_FUNC_MASK))
+@@ -834,7 +834,7 @@ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
+ 		if (mcam->entry2target_pffunc[index] == pcifunc) {
+ 			bank = npc_get_bank(mcam, index);
+ 			actindex = index;
+-			index &= (mcam->banksize - 1);
++			entry = index & (mcam->banksize - 1);
+ 
+ 			/* read vf flow entry enable status */
+ 			enable = is_mcam_entry_enabled(rvu, mcam, blkaddr,
+@@ -844,7 +844,7 @@ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
+ 					      false);
+ 			/* update 'action' */
+ 			rvu_write64(rvu, blkaddr,
+-				    NPC_AF_MCAMEX_BANKX_ACTION(index, bank),
++				    NPC_AF_MCAMEX_BANKX_ACTION(entry, bank),
+ 				    rx_action);
+ 			if (enable)
+ 				npc_enable_mcam_entry(rvu, mcam, blkaddr,
+@@ -1619,14 +1619,15 @@ int rvu_npc_init(struct rvu *rvu)
+ 
+ 	/* Enable below for Rx pkts.
+ 	 * - Outer IPv4 header checksum validation.
+-	 * - Detect outer L2 broadcast address and set NPC_RESULT_S[L2M].
++	 * - Detect outer L2 broadcast address and set NPC_RESULT_S[L2B].
++	 * - Detect outer L2 multicast address and set NPC_RESULT_S[L2M].
+ 	 * - Inner IPv4 header checksum validation.
+ 	 * - Set non zero checksum error code value
+ 	 */
+ 	rvu_write64(rvu, blkaddr, NPC_AF_PCK_CFG,
+ 		    rvu_read64(rvu, blkaddr, NPC_AF_PCK_CFG) |
+-		    BIT_ULL(32) | BIT_ULL(24) | BIT_ULL(6) |
+-		    BIT_ULL(2) | BIT_ULL(1));
++		    ((u64)NPC_EC_OIP4_CSUM << 32) | (NPC_EC_IIP4_CSUM << 24) |
++		    BIT_ULL(7) | BIT_ULL(6) | BIT_ULL(2) | BIT_ULL(1));
+ 
+ 	rvu_npc_setup_interfaces(rvu, blkaddr);
+ 
+@@ -1751,7 +1752,7 @@ static void npc_unmap_mcam_entry_and_cntr(struct rvu *rvu,
+ 					  int blkaddr, u16 entry, u16 cntr)
+ {
+ 	u16 index = entry & (mcam->banksize - 1);
+-	u16 bank = npc_get_bank(mcam, entry);
++	u32 bank = npc_get_bank(mcam, entry);
+ 
+ 	/* Remove mapping and reduce counter's refcnt */
+ 	mcam->entry2cntr_map[entry] = NPC_MCAM_INVALID_MAP;
+@@ -2365,8 +2366,8 @@ int rvu_mbox_handler_npc_mcam_shift_entry(struct rvu *rvu,
+ 	struct npc_mcam *mcam = &rvu->hw->mcam;
+ 	u16 pcifunc = req->hdr.pcifunc;
+ 	u16 old_entry, new_entry;
++	int blkaddr, rc = 0;
+ 	u16 index, cntr;
+-	int blkaddr, rc;
+ 
+ 	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
+ 	if (blkaddr < 0)
+@@ -2567,10 +2568,11 @@ int rvu_mbox_handler_npc_mcam_unmap_counter(struct rvu *rvu,
+ 		index = find_next_bit(mcam->bmap, mcam->bmap_entries, entry);
+ 		if (index >= mcam->bmap_entries)
+ 			break;
++		entry = index + 1;
++
+ 		if (mcam->entry2cntr_map[index] != req->cntr)
+ 			continue;
+ 
+-		entry = index + 1;
+ 		npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr,
+ 					      index, req->cntr);
+ 	}
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 16ba457197a2b..e0d1af9e7770d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -208,7 +208,8 @@ int otx2_set_mac_address(struct net_device *netdev, void *p)
+ 	if (!otx2_hw_set_mac_addr(pfvf, addr->sa_data)) {
+ 		memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
+ 		/* update dmac field in vlan offload rule */
+-		if (pfvf->flags & OTX2_FLAG_RX_VLAN_SUPPORT)
++		if (netif_running(netdev) &&
++		    pfvf->flags & OTX2_FLAG_RX_VLAN_SUPPORT)
+ 			otx2_install_rxvlan_offload_flow(pfvf);
+ 	} else {
+ 		return -EPERM;
+@@ -265,6 +266,7 @@ unlock:
+ int otx2_set_flowkey_cfg(struct otx2_nic *pfvf)
+ {
+ 	struct otx2_rss_info *rss = &pfvf->hw.rss_info;
++	struct nix_rss_flowkey_cfg_rsp *rsp;
+ 	struct nix_rss_flowkey_cfg *req;
+ 	int err;
+ 
+@@ -279,6 +281,18 @@ int otx2_set_flowkey_cfg(struct otx2_nic *pfvf)
+ 	req->group = DEFAULT_RSS_CONTEXT_GROUP;
+ 
+ 	err = otx2_sync_mbox_msg(&pfvf->mbox);
++	if (err)
++		goto fail;
++
++	rsp = (struct nix_rss_flowkey_cfg_rsp *)
++			otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++	if (IS_ERR(rsp)) {
++		err = PTR_ERR(rsp);
++		goto fail;
++	}
++
++	pfvf->hw.flowkey_alg_idx = rsp->alg_idx;
++fail:
+ 	mutex_unlock(&pfvf->mbox.lock);
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index 45730d0d92f2b..c652c27cd3455 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -195,6 +195,9 @@ struct otx2_hw {
+ 	u8			lso_udpv4_idx;
+ 	u8			lso_udpv6_idx;
+ 
++	/* RSS */
++	u8			flowkey_alg_idx;
++
+ 	/* MSI-X */
+ 	u8			cint_cnt; /* CQ interrupt count */
+ 	u16			npa_msixoff; /* Offset of NPA vectors */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+index 0b4fa92ba8214..81265dbf91e2a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+@@ -682,6 +682,7 @@ static int otx2_add_flow_msg(struct otx2_nic *pfvf, struct otx2_flow *flow)
+ 		if (flow->flow_spec.flow_type & FLOW_RSS) {
+ 			req->op = NIX_RX_ACTIONOP_RSS;
+ 			req->index = flow->rss_ctx_id;
++			req->flow_key_alg = pfvf->hw.flowkey_alg_idx;
+ 		} else {
+ 			req->op = NIX_RX_ACTIONOP_UCAST;
+ 			req->index = ethtool_get_flow_spec_ring(ring_cookie);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+index 51157b283f6f7..463d2368c1180 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+@@ -385,8 +385,8 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic,
+ 				   match.key->vlan_priority << 13;
+ 
+ 			vlan_tci_mask = match.mask->vlan_id |
+-					match.key->vlan_dei << 12 |
+-					match.key->vlan_priority << 13;
++					match.mask->vlan_dei << 12 |
++					match.mask->vlan_priority << 13;
+ 
+ 			flow_spec->vlan_tci = htons(vlan_tci);
+ 			flow_mask->vlan_tci = htons(vlan_tci_mask);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+index def2156e50eeb..20bb372662541 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -397,7 +397,7 @@ int mlx5_register_device(struct mlx5_core_dev *dev)
+ void mlx5_unregister_device(struct mlx5_core_dev *dev)
+ {
+ 	mutex_lock(&mlx5_intf_mutex);
+-	dev->priv.flags |= MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV;
++	dev->priv.flags = MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV;
+ 	mlx5_rescan_drivers_locked(dev);
+ 	mutex_unlock(&mlx5_intf_mutex);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index 44c458443428c..4794173f8fdbf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -664,6 +664,7 @@ params_reg_err:
+ void mlx5_devlink_unregister(struct devlink *devlink)
+ {
+ 	mlx5_devlink_traps_unregister(devlink);
++	devlink_params_unpublish(devlink);
+ 	devlink_params_unregister(devlink, mlx5_devlink_params,
+ 				  ARRAY_SIZE(mlx5_devlink_params));
+ 	devlink_unregister(devlink);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+index 1d5ce07b83f45..43b092f5565af 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+@@ -248,18 +248,12 @@ struct ttc_params {
+ 
+ void mlx5e_set_ttc_basic_params(struct mlx5e_priv *priv, struct ttc_params *ttc_params);
+ void mlx5e_set_ttc_ft_params(struct ttc_params *ttc_params);
+-void mlx5e_set_inner_ttc_ft_params(struct ttc_params *ttc_params);
+ 
+ int mlx5e_create_ttc_table(struct mlx5e_priv *priv, struct ttc_params *params,
+ 			   struct mlx5e_ttc_table *ttc);
+ void mlx5e_destroy_ttc_table(struct mlx5e_priv *priv,
+ 			     struct mlx5e_ttc_table *ttc);
+ 
+-int mlx5e_create_inner_ttc_table(struct mlx5e_priv *priv, struct ttc_params *params,
+-				 struct mlx5e_ttc_table *ttc);
+-void mlx5e_destroy_inner_ttc_table(struct mlx5e_priv *priv,
+-				   struct mlx5e_ttc_table *ttc);
+-
+ void mlx5e_destroy_flow_table(struct mlx5e_flow_table *ft);
+ int mlx5e_ttc_fwd_dest(struct mlx5e_priv *priv, enum mlx5e_traffic_types type,
+ 		       struct mlx5_flow_destination *new_dest);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
+index 5efe3278b0f64..1fd8baf198296 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
+@@ -733,8 +733,8 @@ static void mlx5e_reset_qdisc(struct net_device *dev, u16 qid)
+ 	spin_unlock_bh(qdisc_lock(qdisc));
+ }
+ 
+-int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 classid, u16 *old_qid,
+-		       u16 *new_qid, struct netlink_ext_ack *extack)
++int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 *classid,
++		       struct netlink_ext_ack *extack)
+ {
+ 	struct mlx5e_qos_node *node;
+ 	struct netdev_queue *txq;
+@@ -742,11 +742,9 @@ int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 classid, u16 *old_qid,
+ 	bool opened;
+ 	int err;
+ 
+-	qos_dbg(priv->mdev, "TC_HTB_LEAF_DEL classid %04x\n", classid);
+-
+-	*old_qid = *new_qid = 0;
++	qos_dbg(priv->mdev, "TC_HTB_LEAF_DEL classid %04x\n", *classid);
+ 
+-	node = mlx5e_sw_node_find(priv, classid);
++	node = mlx5e_sw_node_find(priv, *classid);
+ 	if (!node)
+ 		return -ENOENT;
+ 
+@@ -764,7 +762,7 @@ int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 classid, u16 *old_qid,
+ 	err = mlx5_qos_destroy_node(priv->mdev, node->hw_id);
+ 	if (err) /* Not fatal. */
+ 		qos_warn(priv->mdev, "Failed to destroy leaf node %u (class %04x), err = %d\n",
+-			 node->hw_id, classid, err);
++			 node->hw_id, *classid, err);
+ 
+ 	mlx5e_sw_node_delete(priv, node);
+ 
+@@ -826,8 +824,7 @@ int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 classid, u16 *old_qid,
+ 	if (opened)
+ 		mlx5e_reactivate_qos_sq(priv, moved_qid, txq);
+ 
+-	*old_qid = mlx5e_qid_from_qos(&priv->channels, moved_qid);
+-	*new_qid = mlx5e_qid_from_qos(&priv->channels, qid);
++	*classid = node->classid;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.h b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.h
+index 5af7991fcd194..757682b7c0e04 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.h
+@@ -34,8 +34,8 @@ int mlx5e_htb_leaf_alloc_queue(struct mlx5e_priv *priv, u16 classid,
+ 			       struct netlink_ext_ack *extack);
+ int mlx5e_htb_leaf_to_inner(struct mlx5e_priv *priv, u16 classid, u16 child_classid,
+ 			    u64 rate, u64 ceil, struct netlink_ext_ack *extack);
+-int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 classid, u16 *old_qid,
+-		       u16 *new_qid, struct netlink_ext_ack *extack);
++int mlx5e_htb_leaf_del(struct mlx5e_priv *priv, u16 *classid,
++		       struct netlink_ext_ack *extack);
+ int mlx5e_htb_leaf_del_last(struct mlx5e_priv *priv, u16 classid, bool force,
+ 			    struct netlink_ext_ack *extack);
+ int mlx5e_htb_node_modify(struct mlx5e_priv *priv, u16 classid, u64 rate, u64 ceil,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+index 490131e06efb2..aa4dc7d624f8e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+@@ -143,7 +143,7 @@ void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv,
+ 	mlx5e_rep_queue_neigh_stats_work(priv);
+ 
+ 	list_for_each_entry(flow, flow_list, tmp_list) {
+-		if (!mlx5e_is_offloaded_flow(flow))
++		if (!mlx5e_is_offloaded_flow(flow) || !flow_flag_test(flow, SLOW))
+ 			continue;
+ 		attr = flow->attr;
+ 		esw_attr = attr->esw_attr;
+@@ -184,7 +184,7 @@ void mlx5e_tc_encap_flows_del(struct mlx5e_priv *priv,
+ 	int err;
+ 
+ 	list_for_each_entry(flow, flow_list, tmp_list) {
+-		if (!mlx5e_is_offloaded_flow(flow))
++		if (!mlx5e_is_offloaded_flow(flow) || flow_flag_test(flow, SLOW))
+ 			continue;
+ 		attr = flow->attr;
+ 		esw_attr = attr->esw_attr;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+index 0b75fab41ae8f..6464ac3f294e7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+@@ -1324,7 +1324,7 @@ void mlx5e_set_ttc_basic_params(struct mlx5e_priv *priv,
+ 	ttc_params->inner_ttc = &priv->fs.inner_ttc;
+ }
+ 
+-void mlx5e_set_inner_ttc_ft_params(struct ttc_params *ttc_params)
++static void mlx5e_set_inner_ttc_ft_params(struct ttc_params *ttc_params)
+ {
+ 	struct mlx5_flow_table_attr *ft_attr = &ttc_params->ft_attr;
+ 
+@@ -1343,8 +1343,8 @@ void mlx5e_set_ttc_ft_params(struct ttc_params *ttc_params)
+ 	ft_attr->prio = MLX5E_NIC_PRIO;
+ }
+ 
+-int mlx5e_create_inner_ttc_table(struct mlx5e_priv *priv, struct ttc_params *params,
+-				 struct mlx5e_ttc_table *ttc)
++static int mlx5e_create_inner_ttc_table(struct mlx5e_priv *priv, struct ttc_params *params,
++					struct mlx5e_ttc_table *ttc)
+ {
+ 	struct mlx5e_flow_table *ft = &ttc->ft;
+ 	int err;
+@@ -1374,8 +1374,8 @@ err:
+ 	return err;
+ }
+ 
+-void mlx5e_destroy_inner_ttc_table(struct mlx5e_priv *priv,
+-				   struct mlx5e_ttc_table *ttc)
++static void mlx5e_destroy_inner_ttc_table(struct mlx5e_priv *priv,
++					  struct mlx5e_ttc_table *ttc)
+ {
+ 	if (!mlx5e_tunnel_inner_ft_supported(priv->mdev))
+ 		return;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 779a4abead01b..814ff51db1a5b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -2563,6 +2563,14 @@ static int mlx5e_modify_tirs_lro(struct mlx5e_priv *priv)
+ 		err = mlx5_core_modify_tir(mdev, priv->indir_tir[tt].tirn, in);
+ 		if (err)
+ 			goto free_in;
++
++		/* Verify that inner TIR resources have been allocated */
++		if (!priv->inner_indir_tir[0].tirn)
++			continue;
++
++		err = mlx5_core_modify_tir(mdev, priv->inner_indir_tir[tt].tirn, in);
++		if (err)
++			goto free_in;
+ 	}
+ 
+ 	for (ix = 0; ix < priv->max_nch; ix++) {
+@@ -3439,8 +3447,7 @@ static int mlx5e_setup_tc_htb(struct mlx5e_priv *priv, struct tc_htb_qopt_offloa
+ 		return mlx5e_htb_leaf_to_inner(priv, htb->parent_classid, htb->classid,
+ 					       htb->rate, htb->ceil, htb->extack);
+ 	case TC_HTB_LEAF_DEL:
+-		return mlx5e_htb_leaf_del(priv, htb->classid, &htb->moved_qid, &htb->qid,
+-					  htb->extack);
++		return mlx5e_htb_leaf_del(priv, &htb->classid, htb->extack);
+ 	case TC_HTB_LEAF_DEL_LAST:
+ 	case TC_HTB_LEAF_DEL_LAST_FORCE:
+ 		return mlx5e_htb_leaf_del_last(priv, htb->classid,
+@@ -4806,7 +4813,14 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ 	netdev->hw_enc_features  |= NETIF_F_HW_VLAN_CTAG_TX;
+ 	netdev->hw_enc_features  |= NETIF_F_HW_VLAN_CTAG_RX;
+ 
++	/* Tunneled LRO is not supported in the driver, and the same RQs are
++	 * shared between inner and outer TIRs, so the driver can't disable LRO
++	 * for inner TIRs while having it enabled for outer TIRs. Due to this,
++	 * block LRO altogether if the firmware declares tunneled LRO support.
++	 */
+ 	if (!!MLX5_CAP_ETH(mdev, lro_cap) &&
++	    !MLX5_CAP_ETH(mdev, tunnel_lro_vxlan) &&
++	    !MLX5_CAP_ETH(mdev, tunnel_lro_gre) &&
+ 	    mlx5e_check_fragmented_striding_rq_cap(mdev))
+ 		netdev->vlan_features    |= NETIF_F_LRO;
+ 
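
The comment in the hunk above explains why LRO must be blocked outright when the firmware advertises tunneled LRO: the same RQs back the inner and outer TIRs, so LRO cannot be disabled selectively. The gating condition reduces to one required capability and two forbidden ones; a minimal userspace sketch of that predicate (the bit names are invented for the sketch, not the mlx5 capability layout):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CAP_LRO              (1u << 0)
#define CAP_TUNNEL_LRO_VXLAN (1u << 1)
#define CAP_TUNNEL_LRO_GRE   (1u << 2)

static bool lro_allowed(uint32_t caps)
{
	/* one capability required, two must be absent */
	return (caps & CAP_LRO) &&
	       !(caps & CAP_TUNNEL_LRO_VXLAN) &&
	       !(caps & CAP_TUNNEL_LRO_GRE);
}

int main(void)
{
	printf("%d\n", lro_allowed(CAP_LRO));                      /* 1 */
	printf("%d\n", lro_allowed(CAP_LRO | CAP_TUNNEL_LRO_GRE)); /* 0 */
	return 0;
}
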
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 47bd20ad81080..ced6ff0bc9160 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1310,6 +1310,7 @@ bool mlx5e_tc_is_vf_tunnel(struct net_device *out_dev, struct net_device *route_
+ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *route_dev, u16 *vport)
+ {
+ 	struct mlx5e_priv *out_priv, *route_priv;
++	struct mlx5_devcom *devcom = NULL;
+ 	struct mlx5_core_dev *route_mdev;
+ 	struct mlx5_eswitch *esw;
+ 	u16 vhca_id;
+@@ -1321,7 +1322,24 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro
+ 	route_mdev = route_priv->mdev;
+ 
+ 	vhca_id = MLX5_CAP_GEN(route_mdev, vhca_id);
++	if (mlx5_lag_is_active(out_priv->mdev)) {
++		/* In the lag case we may get devices from different eswitch instances.
++		 * If we fail to get the vport num, it most likely means we are on the
++		 * wrong eswitch.
++		 */
++		err = mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport);
++		if (err != -ENOENT)
++			return err;
++
++		devcom = out_priv->mdev->priv.devcom;
++		esw = mlx5_devcom_get_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
++		if (!esw)
++			return -ENODEV;
++	}
++
+ 	err = mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport);
++	if (devcom)
++		mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c
+index 3da7becc1069f..425c91814b34f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/indir_table.c
+@@ -364,6 +364,7 @@ static int mlx5_create_indir_fwd_group(struct mlx5_eswitch *esw,
+ 	dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
+ 	dest.vport.num = e->vport;
+ 	dest.vport.vhca_id = MLX5_CAP_GEN(esw->dev, vhca_id);
++	dest.vport.flags = MLX5_FLOW_DEST_VPORT_VHCA_ID;
+ 	e->fwd_rule = mlx5_add_flow_rules(e->ft, spec, &flow_act, &dest, 1);
+ 	if (IS_ERR(e->fwd_rule)) {
+ 		mlx5_destroy_flow_group(e->fwd_grp);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index d0e4daa55a4a1..c6d3348d759e3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -3074,8 +3074,11 @@ int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode,
+ 
+ 	switch (MLX5_CAP_ETH(dev, wqe_inline_mode)) {
+ 	case MLX5_CAP_INLINE_MODE_NOT_REQUIRED:
+-		if (mode == DEVLINK_ESWITCH_INLINE_MODE_NONE)
++		if (mode == DEVLINK_ESWITCH_INLINE_MODE_NONE) {
++			err = 0;
+ 			goto out;
++		}
++
+ 		fallthrough;
+ 	case MLX5_CAP_INLINE_MODE_L2:
+ 		NL_SET_ERR_MSG_MOD(extack, "Inline mode can't be set");
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+index 7d7ed025db0da..620d638e1e8ff 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+@@ -331,17 +331,6 @@ static int mlx5i_create_flow_steering(struct mlx5e_priv *priv)
+ 	}
+ 
+ 	mlx5e_set_ttc_basic_params(priv, &ttc_params);
+-	mlx5e_set_inner_ttc_ft_params(&ttc_params);
+-	for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++)
+-		ttc_params.indir_tirn[tt] = priv->inner_indir_tir[tt].tirn;
+-
+-	err = mlx5e_create_inner_ttc_table(priv, &ttc_params, &priv->fs.inner_ttc);
+-	if (err) {
+-		netdev_err(priv->netdev, "Failed to create inner ttc table, err=%d\n",
+-			   err);
+-		goto err_destroy_arfs_tables;
+-	}
+-
+ 	mlx5e_set_ttc_ft_params(&ttc_params);
+ 	for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++)
+ 		ttc_params.indir_tirn[tt] = priv->indir_tir[tt].tirn;
+@@ -350,13 +339,11 @@ static int mlx5i_create_flow_steering(struct mlx5e_priv *priv)
+ 	if (err) {
+ 		netdev_err(priv->netdev, "Failed to create ttc table, err=%d\n",
+ 			   err);
+-		goto err_destroy_inner_ttc_table;
++		goto err_destroy_arfs_tables;
+ 	}
+ 
+ 	return 0;
+ 
+-err_destroy_inner_ttc_table:
+-	mlx5e_destroy_inner_ttc_table(priv, &priv->fs.inner_ttc);
+ err_destroy_arfs_tables:
+ 	mlx5e_arfs_destroy_tables(priv);
+ 
+@@ -366,7 +353,6 @@ err_destroy_arfs_tables:
+ static void mlx5i_destroy_flow_steering(struct mlx5e_priv *priv)
+ {
+ 	mlx5e_destroy_ttc_table(priv, &priv->fs.ttc);
+-	mlx5e_destroy_inner_ttc_table(priv, &priv->fs.inner_ttc);
+ 	mlx5e_arfs_destroy_tables(priv);
+ }
+ 
+@@ -392,7 +378,7 @@ static int mlx5i_init_rx(struct mlx5e_priv *priv)
+ 	if (err)
+ 		goto err_destroy_indirect_rqts;
+ 
+-	err = mlx5e_create_indirect_tirs(priv, true);
++	err = mlx5e_create_indirect_tirs(priv, false);
+ 	if (err)
+ 		goto err_destroy_direct_rqts;
+ 
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_devlink.c b/drivers/net/ethernet/pensando/ionic/ionic_devlink.c
+index b41301a5b0df8..cd520e4c5522f 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_devlink.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_devlink.c
+@@ -91,20 +91,20 @@ int ionic_devlink_register(struct ionic *ionic)
+ 	attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
+ 	devlink_port_attrs_set(&ionic->dl_port, &attrs);
+ 	err = devlink_port_register(dl, &ionic->dl_port, 0);
+-	if (err)
++	if (err) {
+ 		dev_err(ionic->dev, "devlink_port_register failed: %d\n", err);
+-	else
+-		devlink_port_type_eth_set(&ionic->dl_port,
+-					  ionic->lif->netdev);
++		devlink_unregister(dl);
++		return err;
++	}
+ 
+-	return err;
++	devlink_port_type_eth_set(&ionic->dl_port, ionic->lif->netdev);
++	return 0;
+ }
+ 
+ void ionic_devlink_unregister(struct ionic *ionic)
+ {
+ 	struct devlink *dl = priv_to_devlink(ionic);
+ 
+-	if (ionic->dl_port.registered)
+-		devlink_port_unregister(&ionic->dl_port);
++	devlink_port_unregister(&ionic->dl_port);
+ 	devlink_unregister(dl);
+ }
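
The ionic change is a common error-path refactor: instead of an if/else that returns err either way, the failure branch logs, rolls back the devlink registration that already succeeded, and returns early, leaving the success-only work unconditional. A generic sketch of the shape, with made-up function names:

#include <stdio.h>

static int register_b(void) { return -1; }        /* simulated failure */
static void unregister_a(void) { puts("rolled back A"); }

static int register_all(void)
{
	int err;

	/* step A is assumed to have succeeded already */

	err = register_b();
	if (err) {
		fprintf(stderr, "register_b failed: %d\n", err);
		unregister_a();           /* undo the earlier step */
		return err;
	}

	/* success-only work lives here, no longer buried in an else */
	return 0;
}

int main(void)
{
	return register_all() ? 1 : 0;
}
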
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index ab9b02574a152..38018f0248239 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -434,7 +434,7 @@ qcaspi_receive(struct qcaspi *qca)
+ 				skb_put(qca->rx_skb, retcode);
+ 				qca->rx_skb->protocol = eth_type_trans(
+ 					qca->rx_skb, qca->rx_skb->dev);
+-				qca->rx_skb->ip_summed = CHECKSUM_UNNECESSARY;
++				skb_checksum_none_assert(qca->rx_skb);
+ 				netif_rx_ni(qca->rx_skb);
+ 				qca->rx_skb = netdev_alloc_skb_ip_align(net_dev,
+ 					net_dev->mtu + VLAN_ETH_HLEN);
+diff --git a/drivers/net/ethernet/qualcomm/qca_uart.c b/drivers/net/ethernet/qualcomm/qca_uart.c
+index bcdeca7b33664..ce3f7ce31adc1 100644
+--- a/drivers/net/ethernet/qualcomm/qca_uart.c
++++ b/drivers/net/ethernet/qualcomm/qca_uart.c
+@@ -107,7 +107,7 @@ qca_tty_receive(struct serdev_device *serdev, const unsigned char *data,
+ 			skb_put(qca->rx_skb, retcode);
+ 			qca->rx_skb->protocol = eth_type_trans(
+ 						qca->rx_skb, qca->rx_skb->dev);
+-			qca->rx_skb->ip_summed = CHECKSUM_UNNECESSARY;
++			skb_checksum_none_assert(qca->rx_skb);
+ 			netif_rx_ni(qca->rx_skb);
+ 			qca->rx_skb = netdev_alloc_skb_ip_align(netdev,
+ 								netdev->mtu +
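
Both QCA hunks stop marking received skbs CHECKSUM_UNNECESSARY, since the QCA7000 performs no receive checksum offload; skb_checksum_none_assert() instead documents (and, with debug checks enabled, verifies) that the skb is still CHECKSUM_NONE so the stack validates checksums itself. A userspace model of that assertion, with stand-in types rather than the kernel API:

#include <assert.h>

enum csum_state { CSUM_NONE, CSUM_UNNECESSARY };

struct fake_skb { enum csum_state ip_summed; };

static void checksum_none_assert(struct fake_skb *skb)
{
	/* mirrors skb_checksum_none_assert(): the field must still be NONE */
	assert(skb->ip_summed == CSUM_NONE);
}

int main(void)
{
	struct fake_skb skb = { .ip_summed = CSUM_NONE };

	/* no RX checksum offload on this hardware, so the driver must
	 * not claim CSUM_UNNECESSARY; it only asserts the default */
	checksum_none_assert(&skb);
	return 0;
}
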
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
+index e632702675787..f83db62938dd1 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
+@@ -172,11 +172,12 @@ int dwmac4_dma_interrupt(void __iomem *ioaddr,
+ 		x->rx_normal_irq_n++;
+ 		ret |= handle_rx;
+ 	}
+-	if (likely(intr_status & (DMA_CHAN_STATUS_TI |
+-		DMA_CHAN_STATUS_TBU))) {
++	if (likely(intr_status & DMA_CHAN_STATUS_TI)) {
+ 		x->tx_normal_irq_n++;
+ 		ret |= handle_tx;
+ 	}
++	if (unlikely(intr_status & DMA_CHAN_STATUS_TBU))
++		ret |= handle_tx;
+ 	if (unlikely(intr_status & DMA_CHAN_STATUS_ERI))
+ 		x->rx_early_irq++;
+ 
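
The dwmac4 hunk splits the old combined TI|TBU test: a Transmit Interrupt still bumps the tx_normal_irq_n counter and requests TX handling, while Transmit Buffer Unavailable now only requests TX handling without inflating the normal-IRQ statistic. A compact model of the accounting (flag values invented for the sketch):

#include <stdint.h>
#include <stdio.h>

#define STATUS_TI  (1u << 0)
#define STATUS_TBU (1u << 1)
#define HANDLE_TX  (1u << 0)

int main(void)
{
	uint32_t intr_status = STATUS_TBU;    /* only TBU raised */
	unsigned long tx_normal_irq_n = 0;
	uint32_t ret = 0;

	if (intr_status & STATUS_TI) {        /* counted and handled */
		tx_normal_irq_n++;
		ret |= HANDLE_TX;
	}
	if (intr_status & STATUS_TBU)         /* handled, not counted */
		ret |= HANDLE_TX;

	printf("handle_tx=%u tx_normal_irq_n=%lu\n",
	       !!(ret & HANDLE_TX), tx_normal_irq_n);
	return 0;
}
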
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 67a08cbba859d..e967cd1ade36b 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -518,6 +518,10 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common,
+ 	}
+ 
+ 	napi_enable(&common->napi_rx);
++	if (common->rx_irq_disabled) {
++		common->rx_irq_disabled = false;
++		enable_irq(common->rx_chns.irq);
++	}
+ 
+ 	dev_dbg(common->dev, "cpsw_nuss started\n");
+ 	return 0;
+@@ -871,8 +875,12 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
+ 
+ 	dev_dbg(common->dev, "%s num_rx:%d %d\n", __func__, num_rx, budget);
+ 
+-	if (num_rx < budget && napi_complete_done(napi_rx, num_rx))
+-		enable_irq(common->rx_chns.irq);
++	if (num_rx < budget && napi_complete_done(napi_rx, num_rx)) {
++		if (common->rx_irq_disabled) {
++			common->rx_irq_disabled = false;
++			enable_irq(common->rx_chns.irq);
++		}
++	}
+ 
+ 	return num_rx;
+ }
+@@ -1090,6 +1098,7 @@ static irqreturn_t am65_cpsw_nuss_rx_irq(int irq, void *dev_id)
+ {
+ 	struct am65_cpsw_common *common = dev_id;
+ 
++	common->rx_irq_disabled = true;
+ 	disable_irq_nosync(irq);
+ 	napi_schedule(&common->napi_rx);
+ 
+@@ -2388,21 +2397,6 @@ static const struct devlink_param am65_cpsw_devlink_params[] = {
+ 			     am65_cpsw_dl_switch_mode_set, NULL),
+ };
+ 
+-static void am65_cpsw_unregister_devlink_ports(struct am65_cpsw_common *common)
+-{
+-	struct devlink_port *dl_port;
+-	struct am65_cpsw_port *port;
+-	int i;
+-
+-	for (i = 1; i <= common->port_num; i++) {
+-		port = am65_common_get_port(common, i);
+-		dl_port = &port->devlink_port;
+-
+-		if (dl_port->registered)
+-			devlink_port_unregister(dl_port);
+-	}
+-}
+-
+ static int am65_cpsw_nuss_register_devlink(struct am65_cpsw_common *common)
+ {
+ 	struct devlink_port_attrs attrs = {};
+@@ -2464,7 +2458,12 @@ static int am65_cpsw_nuss_register_devlink(struct am65_cpsw_common *common)
+ 	return ret;
+ 
+ dl_port_unreg:
+-	am65_cpsw_unregister_devlink_ports(common);
++	for (i = i - 1; i >= 1; i--) {
++		port = am65_common_get_port(common, i);
++		dl_port = &port->devlink_port;
++
++		devlink_port_unregister(dl_port);
++	}
+ dl_unreg:
+ 	devlink_unregister(common->devlink);
+ dl_free:
+@@ -2475,6 +2474,17 @@ dl_free:
+ 
+ static void am65_cpsw_unregister_devlink(struct am65_cpsw_common *common)
+ {
++	struct devlink_port *dl_port;
++	struct am65_cpsw_port *port;
++	int i;
++
++	for (i = 1; i <= common->port_num; i++) {
++		port = am65_common_get_port(common, i);
++		dl_port = &port->devlink_port;
++
++		devlink_port_unregister(dl_port);
++	}
++
+ 	if (!AM65_CPSW_IS_CPSW2G(common) &&
+ 	    IS_ENABLED(CONFIG_TI_K3_AM65_CPSW_SWITCHDEV)) {
+ 		devlink_params_unpublish(common->devlink);
+@@ -2482,7 +2492,6 @@ static void am65_cpsw_unregister_devlink(struct am65_cpsw_common *common)
+ 					  ARRAY_SIZE(am65_cpsw_devlink_params));
+ 	}
+ 
+-	am65_cpsw_unregister_devlink_ports(common);
+ 	devlink_unregister(common->devlink);
+ 	devlink_free(common->devlink);
+ }
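
The am65-cpsw error path above replaces a helper that guarded on dl_port->registered with an explicit backwards walk: if registration fails at port i, exactly ports 1..i-1 were registered, so unwinding from i - 1 down needs no "registered" bookkeeping. A self-contained model of that unwind, with a simulated failure:

#include <stdbool.h>
#include <stdio.h>

#define NPORTS 4

static bool registered[NPORTS + 1];   /* ports are 1-based, as in the driver */

static int register_port(int i)
{
	if (i == 3)                   /* simulate a failure on port 3 */
		return -1;
	registered[i] = true;
	return 0;
}

static void unregister_port(int i)
{
	registered[i] = false;
	printf("unregistered port %d\n", i);
}

int main(void)
{
	int i, ret = 0;

	for (i = 1; i <= NPORTS; i++) {
		ret = register_port(i);
		if (ret)
			goto unwind;
	}
	return 0;

unwind:
	for (i = i - 1; i >= 1; i--)  /* same shape as the patched error path */
		unregister_port(i);
	return ret;
}
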
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+index 5d93e346f05eb..048ed10143c17 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+@@ -126,6 +126,8 @@ struct am65_cpsw_common {
+ 	struct am65_cpsw_rx_chn	rx_chns;
+ 	struct napi_struct	napi_rx;
+ 
++	bool			rx_irq_disabled;
++
+ 	u32			nuss_ver;
+ 	u32			cpsw_ver;
+ 	unsigned long		bus_freq;
+diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c
+index 53a433442803a..f4d758f8a1ee1 100644
+--- a/drivers/net/phy/marvell10g.c
++++ b/drivers/net/phy/marvell10g.c
+@@ -987,11 +987,19 @@ static int mv3310_get_number_of_ports(struct phy_device *phydev)
+ 
+ static int mv3310_match_phy_device(struct phy_device *phydev)
+ {
++	if ((phydev->c45_ids.device_ids[MDIO_MMD_PMAPMD] &
++	     MARVELL_PHY_ID_MASK) != MARVELL_PHY_ID_88X3310)
++		return 0;
++
+ 	return mv3310_get_number_of_ports(phydev) == 1;
+ }
+ 
+ static int mv3340_match_phy_device(struct phy_device *phydev)
+ {
++	if ((phydev->c45_ids.device_ids[MDIO_MMD_PMAPMD] &
++	     MARVELL_PHY_ID_MASK) != MARVELL_PHY_ID_88X3310)
++		return 0;
++
+ 	return mv3310_get_number_of_ports(phydev) == 4;
+ }
+ 
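
Both match functions now reject the device early unless its PMA/PMD ID matches the 88X3310 family under the Marvell mask, and only then use the port count to tell the single-port 3310 from the four-port 3340. A sketch of the masked compare (constants as in marvell_phy.h at the time; treat them as illustrative):

#include <stdint.h>
#include <stdio.h>

#define MARVELL_PHY_ID_MASK    0xfffffff0
#define MARVELL_PHY_ID_88X3310 0x002b09a0

static int id_matches(uint32_t dev_id)
{
	return (dev_id & MARVELL_PHY_ID_MASK) == MARVELL_PHY_ID_88X3310;
}

int main(void)
{
	printf("%d\n", id_matches(0x002b09a5));  /* 1: revision bits masked */
	printf("%d\n", id_matches(0x002b09b0));  /* 0: different device */
	return 0;
}
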
+diff --git a/drivers/net/wireless/ath/ath6kl/wmi.c b/drivers/net/wireless/ath/ath6kl/wmi.c
+index b137e7f343979..bd1ef63349978 100644
+--- a/drivers/net/wireless/ath/ath6kl/wmi.c
++++ b/drivers/net/wireless/ath/ath6kl/wmi.c
+@@ -2504,8 +2504,10 @@ static int ath6kl_wmi_sync_point(struct wmi *wmi, u8 if_idx)
+ 		goto free_data_skb;
+ 
+ 	for (index = 0; index < num_pri_streams; index++) {
+-		if (WARN_ON(!data_sync_bufs[index].skb))
++		if (WARN_ON(!data_sync_bufs[index].skb)) {
++			ret = -ENOMEM;
+ 			goto free_data_skb;
++		}
+ 
+ 		ep_id = ath6kl_ac2_endpoint_id(wmi->parent_dev,
+ 					       data_sync_bufs[index].
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index 143a705b5cb3a..4800e19bdcc39 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -2075,7 +2075,7 @@ cleanup:
+ 
+ 	err = brcmf_pcie_probe(pdev, NULL);
+ 	if (err)
+-		brcmf_err(bus, "probe after resume failed, err=%d\n", err);
++		__brcmf_err(NULL, __func__, "probe after resume failed, err=%d\n", err);
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index e31bba836c6f7..dfa4047f97a03 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -243,7 +243,7 @@ int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt,
+ 		goto out_free;
+ 	}
+ 
+-	enabled = !!wifi_pkg->package.elements[0].integer.value;
++	enabled = !!wifi_pkg->package.elements[1].integer.value;
+ 
+ 	if (!enabled) {
+ 		*block_list_size = -1;
+@@ -252,15 +252,15 @@ int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt,
+ 		goto out_free;
+ 	}
+ 
+-	if (wifi_pkg->package.elements[1].type != ACPI_TYPE_INTEGER ||
+-	    wifi_pkg->package.elements[1].integer.value >
++	if (wifi_pkg->package.elements[2].type != ACPI_TYPE_INTEGER ||
++	    wifi_pkg->package.elements[2].integer.value >
+ 	    APCI_WTAS_BLACK_LIST_MAX) {
+ 		IWL_DEBUG_RADIO(fwrt, "TAS invalid array size %llu\n",
+ 				wifi_pkg->package.elements[1].integer.value);
+ 		ret = -EINVAL;
+ 		goto out_free;
+ 	}
+-	*block_list_size = wifi_pkg->package.elements[1].integer.value;
++	*block_list_size = wifi_pkg->package.elements[2].integer.value;
+ 
+ 	IWL_DEBUG_RADIO(fwrt, "TAS array size %d\n", *block_list_size);
+ 	if (*block_list_size > APCI_WTAS_BLACK_LIST_MAX) {
+@@ -273,15 +273,15 @@ int iwl_acpi_get_tas(struct iwl_fw_runtime *fwrt,
+ 	for (i = 0; i < *block_list_size; i++) {
+ 		u32 country;
+ 
+-		if (wifi_pkg->package.elements[2 + i].type !=
++		if (wifi_pkg->package.elements[3 + i].type !=
+ 		    ACPI_TYPE_INTEGER) {
+ 			IWL_DEBUG_RADIO(fwrt,
+-					"TAS invalid array elem %d\n", 2 + i);
++					"TAS invalid array elem %d\n", 3 + i);
+ 			ret = -EINVAL;
+ 			goto out_free;
+ 		}
+ 
+-		country = wifi_pkg->package.elements[2 + i].integer.value;
++		country = wifi_pkg->package.elements[3 + i].integer.value;
+ 		block_list_array[i] = cpu_to_le32(country);
+ 		IWL_DEBUG_RADIO(fwrt, "TAS block list country %d\n", country);
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 9f11a1d5d0346..81e881da7f15d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -556,6 +556,7 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
+ 	IWL_DEV_INFO(0xA0F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, NULL),
+ 	IWL_DEV_INFO(0xA0F0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0xA0F0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
++	IWL_DEV_INFO(0xA0F0, 0x6074, iwl_ax201_cfg_qu_hr, NULL),
+ 	IWL_DEV_INFO(0x02F0, 0x0070, iwl_ax201_cfg_quz_hr, NULL),
+ 	IWL_DEV_INFO(0x02F0, 0x0074, iwl_ax201_cfg_quz_hr, NULL),
+ 	IWL_DEV_INFO(0x02F0, 0x6074, iwl_ax201_cfg_quz_hr, NULL),
+diff --git a/drivers/net/wireless/rsi/rsi_91x_hal.c b/drivers/net/wireless/rsi/rsi_91x_hal.c
+index 99b21a2c83861..f4a26f16f00f4 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_hal.c
++++ b/drivers/net/wireless/rsi/rsi_91x_hal.c
+@@ -1038,8 +1038,10 @@ static int rsi_load_9116_firmware(struct rsi_hw *adapter)
+ 	}
+ 
+ 	ta_firmware = kmemdup(fw_entry->data, fw_entry->size, GFP_KERNEL);
+-	if (!ta_firmware)
++	if (!ta_firmware) {
++		status = -ENOMEM;
+ 		goto fail_release_fw;
++	}
+ 	fw_p = ta_firmware;
+ 	instructions_sz = fw_entry->size;
+ 	rsi_dbg(INFO_ZONE, "FW Length = %d bytes\n", instructions_sz);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index 3fbe2a3c14550..416976f098882 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -816,6 +816,7 @@ static int rsi_probe(struct usb_interface *pfunction,
+ 	} else {
+ 		rsi_dbg(ERR_ZONE, "%s: Unsupported RSI device id 0x%x\n",
+ 			__func__, id->idProduct);
++		status = -ENODEV;
+ 		goto err1;
+ 	}
+ 
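
The ath6kl and both RSI fixes above are the same bug class: a failure branch jumped to a cleanup label while the status variable still held 0, so the function reported success after a failed allocation or an unsupported device. A sketch of the corrected shape, with hypothetical names standing in for the driver specifics:

#include <errno.h>
#include <stdlib.h>

static int load_firmware(size_t size)
{
	void *copy;
	int status = 0;

	copy = malloc(size);          /* stands in for kmemdup() */
	if (!copy) {
		status = -ENOMEM;     /* set the code *before* the goto */
		goto fail_release;
	}

	free(copy);
	return 0;

fail_release:
	/* release firmware, free buffers, ... */
	return status;
}

int main(void)
{
	return load_firmware(16) ? 1 : 0;
}
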
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 4697a94c09459..f80682f7df54d 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -736,13 +736,13 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
+ 	if (ret)
+ 		return ret;
+ 
+-	ctrl->ctrl.queue_count = nr_io_queues + 1;
+-	if (ctrl->ctrl.queue_count < 2) {
++	if (nr_io_queues == 0) {
+ 		dev_err(ctrl->ctrl.device,
+ 			"unable to set any I/O queues\n");
+ 		return -ENOMEM;
+ 	}
+ 
++	ctrl->ctrl.queue_count = nr_io_queues + 1;
+ 	dev_info(ctrl->ctrl.device,
+ 		"creating %d I/O queues.\n", nr_io_queues);
+ 
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 79a463090dd30..ab1ea5b0888ea 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1755,13 +1755,13 @@ static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
+ 	if (ret)
+ 		return ret;
+ 
+-	ctrl->queue_count = nr_io_queues + 1;
+-	if (ctrl->queue_count < 2) {
++	if (nr_io_queues == 0) {
+ 		dev_err(ctrl->device,
+ 			"unable to set any I/O queues\n");
+ 		return -ENOMEM;
+ 	}
+ 
++	ctrl->queue_count = nr_io_queues + 1;
+ 	dev_info(ctrl->device,
+ 		"creating %d I/O queues.\n", nr_io_queues);
+ 
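
Both NVMe fixes reorder validation ahead of the state update: previously ctrl->queue_count was written first and tested afterwards, leaving a stale count of 1 behind on failure. A minimal model of the validate-then-commit ordering:

#include <errno.h>
#include <stdio.h>

struct ctrl { unsigned int queue_count; };

static int alloc_io_queues(struct ctrl *c, unsigned int nr_io_queues)
{
	if (nr_io_queues == 0)        /* validate the input first */
		return -ENOMEM;

	c->queue_count = nr_io_queues + 1;  /* commit state only on success */
	return 0;
}

int main(void)
{
	struct ctrl c = { 0 };

	if (alloc_io_queues(&c, 0))
		printf("failed, queue_count still %u\n", c.queue_count);
	return 0;
}
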
+diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
+index 7d0f3523fdab2..8ef564c3b32c8 100644
+--- a/drivers/nvme/target/fabrics-cmd.c
++++ b/drivers/nvme/target/fabrics-cmd.c
+@@ -120,6 +120,7 @@ static u16 nvmet_install_queue(struct nvmet_ctrl *ctrl, struct nvmet_req *req)
+ 	if (!sqsize) {
+ 		pr_warn("queue size zero!\n");
+ 		req->error_loc = offsetof(struct nvmf_connect_command, sqsize);
++		req->cqe->result.u32 = IPO_IATTR_CONNECT_SQE(sqsize);
+ 		ret = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
+ 		goto err;
+ 	}
+@@ -260,11 +261,11 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
+ 	}
+ 
+ 	status = nvmet_install_queue(ctrl, req);
+-	if (status) {
+-		/* pass back cntlid that had the issue of installing queue */
+-		req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
++	if (status)
+ 		goto out_ctrl_put;
+-	}
++
++	/* pass back cntlid for successful completion */
++	req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
+ 
+ 	pr_debug("adding queue %d to ctrl %d.\n", qid, ctrl->cntlid);
+ 
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 8d4ebe095d0c8..a9d0530b7846d 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2495,7 +2495,14 @@ static int __pci_enable_wake(struct pci_dev *dev, pci_power_t state, bool enable
+ 	if (enable) {
+ 		int error;
+ 
+-		if (pci_pme_capable(dev, state))
++		/*
++		 * Enable PME signaling if the device can signal PME from
++		 * D3cold regardless of whether or not it can signal PME from
++		 * the current target state, because that will allow it to
++		 * signal PME when the hierarchy above it goes into D3cold and
++		 * the device itself ends up in D3cold as a result of that.
++		 */
++		if (pci_pme_capable(dev, state) || pci_pme_capable(dev, PCI_D3cold))
+ 			pci_pme_active(dev, true);
+ 		else
+ 			ret = 1;
+@@ -2599,16 +2606,20 @@ static pci_power_t pci_target_state(struct pci_dev *dev, bool wakeup)
+ 	if (dev->current_state == PCI_D3cold)
+ 		target_state = PCI_D3cold;
+ 
+-	if (wakeup) {
++	if (wakeup && dev->pme_support) {
++		pci_power_t state = target_state;
++
+ 		/*
+ 		 * Find the deepest state from which the device can generate
+ 		 * PME#.
+ 		 */
+-		if (dev->pme_support) {
+-			while (target_state
+-			      && !(dev->pme_support & (1 << target_state)))
+-				target_state--;
+-		}
++		while (state && !(dev->pme_support & (1 << state)))
++			state--;
++
++		if (state)
++			return state;
++		else if (dev->pme_support & 1)
++			return PCI_D0;
+ 	}
+ 
+ 	return target_state;
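
The reworked pci_target_state() walks down from the target state to the deepest state from which the device can still signal PME#, falling back to D0 when only bit 0 of pme_support is set; if no state qualifies, the original target is kept. A userspace approximation with power states as small integers (D0 = 0 through D3cold = 4) — a sketch, not the kernel's pci_power_t handling:

#include <stdio.h>

static int deepest_pme_state(int target, unsigned int pme_support)
{
	int state = target;

	while (state && !(pme_support & (1u << state)))
		state--;

	if (state)
		return state;
	if (pme_support & 1)          /* device can signal PME from D0 */
		return 0;
	return target;                /* no usable PME state: keep target */
}

int main(void)
{
	/* supports PME from D3hot (bit 3) only: from target D3cold (4),
	 * the search settles on D3hot (3) */
	printf("%d\n", deepest_pme_state(4, 1u << 3));
	return 0;
}
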
+diff --git a/drivers/power/supply/axp288_fuel_gauge.c b/drivers/power/supply/axp288_fuel_gauge.c
+index 37af0e216bc3a..225adaffaa283 100644
+--- a/drivers/power/supply/axp288_fuel_gauge.c
++++ b/drivers/power/supply/axp288_fuel_gauge.c
+@@ -149,7 +149,7 @@ static int fuel_gauge_reg_readb(struct axp288_fg_info *info, int reg)
+ 	}
+ 
+ 	if (ret < 0) {
+-		dev_err(&info->pdev->dev, "axp288 reg read err:%d\n", ret);
++		dev_err(&info->pdev->dev, "Error reading reg 0x%02x err: %d\n", reg, ret);
+ 		return ret;
+ 	}
+ 
+@@ -163,7 +163,7 @@ static int fuel_gauge_reg_writeb(struct axp288_fg_info *info, int reg, u8 val)
+ 	ret = regmap_write(info->regmap, reg, (unsigned int)val);
+ 
+ 	if (ret < 0)
+-		dev_err(&info->pdev->dev, "axp288 reg write err:%d\n", ret);
++		dev_err(&info->pdev->dev, "Error writing reg 0x%02x err: %d\n", reg, ret);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/power/supply/cw2015_battery.c b/drivers/power/supply/cw2015_battery.c
+index d110597746b0a..091868e9e9e82 100644
+--- a/drivers/power/supply/cw2015_battery.c
++++ b/drivers/power/supply/cw2015_battery.c
+@@ -679,7 +679,9 @@ static int cw_bat_probe(struct i2c_client *client)
+ 						    &cw2015_bat_desc,
+ 						    &psy_cfg);
+ 	if (IS_ERR(cw_bat->rk_bat)) {
+-		dev_err(cw_bat->dev, "Failed to register power supply\n");
++		/* may be -EPROBE_DEFER, in which case probe will be retried */
++		dev_err_probe(&client->dev, PTR_ERR(cw_bat->rk_bat),
++			"Failed to register power supply\n");
+ 		return PTR_ERR(cw_bat->rk_bat);
+ 	}
+ 
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index ce2041b30a066..215e77d3b6d93 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -748,7 +748,7 @@ static inline void max17042_override_por_values(struct max17042_chip *chip)
+ 	struct max17042_config_data *config = chip->pdata->config_data;
+ 
+ 	max17042_override_por(map, MAX17042_TGAIN, config->tgain);
+-	max17042_override_por(map, MAx17042_TOFF, config->toff);
++	max17042_override_por(map, MAX17042_TOFF, config->toff);
+ 	max17042_override_por(map, MAX17042_CGAIN, config->cgain);
+ 	max17042_override_por(map, MAX17042_COFF, config->coff);
+ 
+diff --git a/drivers/power/supply/smb347-charger.c b/drivers/power/supply/smb347-charger.c
+index 3376f42d46c3d..25d239f2330e1 100644
+--- a/drivers/power/supply/smb347-charger.c
++++ b/drivers/power/supply/smb347-charger.c
+@@ -56,6 +56,7 @@
+ #define CFG_PIN_EN_CTRL_ACTIVE_LOW		0x60
+ #define CFG_PIN_EN_APSD_IRQ			BIT(1)
+ #define CFG_PIN_EN_CHARGER_ERROR		BIT(2)
++#define CFG_PIN_EN_CTRL				BIT(4)
+ #define CFG_THERM				0x07
+ #define CFG_THERM_SOFT_HOT_COMPENSATION_MASK	0x03
+ #define CFG_THERM_SOFT_HOT_COMPENSATION_SHIFT	0
+@@ -725,6 +726,15 @@ static int smb347_hw_init(struct smb347_charger *smb)
+ 	if (ret < 0)
+ 		goto fail;
+ 
++	/* Activate pin control, making it writable. */
++	switch (smb->enable_control) {
++	case SMB3XX_CHG_ENABLE_PIN_ACTIVE_LOW:
++	case SMB3XX_CHG_ENABLE_PIN_ACTIVE_HIGH:
++		ret = regmap_set_bits(smb->regmap, CFG_PIN, CFG_PIN_EN_CTRL);
++		if (ret < 0)
++			goto fail;
++	}
++
+ 	/*
+ 	 * Make the charging functionality controllable by a write to the
+ 	 * command register unless pin control is specified in the platform
+diff --git a/drivers/regulator/tps65910-regulator.c b/drivers/regulator/tps65910-regulator.c
+index 1d5b0a1b86f78..06cbe60c990f9 100644
+--- a/drivers/regulator/tps65910-regulator.c
++++ b/drivers/regulator/tps65910-regulator.c
+@@ -1211,12 +1211,10 @@ static int tps65910_probe(struct platform_device *pdev)
+ 
+ 		rdev = devm_regulator_register(&pdev->dev, &pmic->desc[i],
+ 					       &config);
+-		if (IS_ERR(rdev)) {
+-			dev_err(tps65910->dev,
+-				"failed to register %s regulator\n",
+-				pdev->name);
+-			return PTR_ERR(rdev);
+-		}
++		if (IS_ERR(rdev))
++			return dev_err_probe(tps65910->dev, PTR_ERR(rdev),
++					     "failed to register %s regulator\n",
++					     pdev->name);
+ 
+ 		/* Save regulator for cleanup */
+ 		pmic->rdev[i] = rdev;
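
dev_err_probe() folds the dev_err()-plus-return-PTR_ERR() pair into one call and additionally stays silent on -EPROBE_DEFER while recording the deferral reason. A rough userspace approximation that keeps only the log-suppression behaviour (the kernel helper also takes a struct device and tracks defer reasons for debugfs):

#include <stdio.h>

#define EPROBE_DEFER 517	/* kernel-internal errno, not in userspace errno.h */

static int my_err_probe(int err, const char *msg)
{
	if (err != -EPROBE_DEFER)	/* deferral is routine, keep the log clean */
		fprintf(stderr, "%s: %d\n", msg, err);
	return err;
}

int main(void)
{
	my_err_probe(-EPROBE_DEFER, "failed to register regulator"); /* silent */
	my_err_probe(-22, "failed to register regulator");           /* logged */
	return 0;
}
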
+diff --git a/drivers/regulator/vctrl-regulator.c b/drivers/regulator/vctrl-regulator.c
+index cbadb1c996790..d2a37978fc3a8 100644
+--- a/drivers/regulator/vctrl-regulator.c
++++ b/drivers/regulator/vctrl-regulator.c
+@@ -37,7 +37,6 @@ struct vctrl_voltage_table {
+ struct vctrl_data {
+ 	struct regulator_dev *rdev;
+ 	struct regulator_desc desc;
+-	struct regulator *ctrl_reg;
+ 	bool enabled;
+ 	unsigned int min_slew_down_rate;
+ 	unsigned int ovp_threshold;
+@@ -82,7 +81,12 @@ static int vctrl_calc_output_voltage(struct vctrl_data *vctrl, int ctrl_uV)
+ static int vctrl_get_voltage(struct regulator_dev *rdev)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	int ctrl_uV = regulator_get_voltage_rdev(vctrl->ctrl_reg->rdev);
++	int ctrl_uV;
++
++	if (!rdev->supply)
++		return -EPROBE_DEFER;
++
++	ctrl_uV = regulator_get_voltage_rdev(rdev->supply->rdev);
+ 
+ 	return vctrl_calc_output_voltage(vctrl, ctrl_uV);
+ }
+@@ -92,14 +96,19 @@ static int vctrl_set_voltage(struct regulator_dev *rdev,
+ 			     unsigned int *selector)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	struct regulator *ctrl_reg = vctrl->ctrl_reg;
+-	int orig_ctrl_uV = regulator_get_voltage_rdev(ctrl_reg->rdev);
+-	int uV = vctrl_calc_output_voltage(vctrl, orig_ctrl_uV);
++	int orig_ctrl_uV;
++	int uV;
+ 	int ret;
+ 
++	if (!rdev->supply)
++		return -EPROBE_DEFER;
++
++	orig_ctrl_uV = regulator_get_voltage_rdev(rdev->supply->rdev);
++	uV = vctrl_calc_output_voltage(vctrl, orig_ctrl_uV);
++
+ 	if (req_min_uV >= uV || !vctrl->ovp_threshold)
+ 		/* voltage rising or no OVP */
+-		return regulator_set_voltage_rdev(ctrl_reg->rdev,
++		return regulator_set_voltage_rdev(rdev->supply->rdev,
+ 			vctrl_calc_ctrl_voltage(vctrl, req_min_uV),
+ 			vctrl_calc_ctrl_voltage(vctrl, req_max_uV),
+ 			PM_SUSPEND_ON);
+@@ -117,7 +126,7 @@ static int vctrl_set_voltage(struct regulator_dev *rdev,
+ 		next_uV = max_t(int, req_min_uV, uV - max_drop_uV);
+ 		next_ctrl_uV = vctrl_calc_ctrl_voltage(vctrl, next_uV);
+ 
+-		ret = regulator_set_voltage_rdev(ctrl_reg->rdev,
++		ret = regulator_set_voltage_rdev(rdev->supply->rdev,
+ 					    next_ctrl_uV,
+ 					    next_ctrl_uV,
+ 					    PM_SUSPEND_ON);
+@@ -134,7 +143,7 @@ static int vctrl_set_voltage(struct regulator_dev *rdev,
+ 
+ err:
+ 	/* Try to go back to original voltage */
+-	regulator_set_voltage_rdev(ctrl_reg->rdev, orig_ctrl_uV, orig_ctrl_uV,
++	regulator_set_voltage_rdev(rdev->supply->rdev, orig_ctrl_uV, orig_ctrl_uV,
+ 				   PM_SUSPEND_ON);
+ 
+ 	return ret;
+@@ -151,16 +160,18 @@ static int vctrl_set_voltage_sel(struct regulator_dev *rdev,
+ 				 unsigned int selector)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	struct regulator *ctrl_reg = vctrl->ctrl_reg;
+ 	unsigned int orig_sel = vctrl->sel;
+ 	int ret;
+ 
++	if (!rdev->supply)
++		return -EPROBE_DEFER;
++
+ 	if (selector >= rdev->desc->n_voltages)
+ 		return -EINVAL;
+ 
+ 	if (selector >= vctrl->sel || !vctrl->ovp_threshold) {
+ 		/* voltage rising or no OVP */
+-		ret = regulator_set_voltage_rdev(ctrl_reg->rdev,
++		ret = regulator_set_voltage_rdev(rdev->supply->rdev,
+ 					    vctrl->vtable[selector].ctrl,
+ 					    vctrl->vtable[selector].ctrl,
+ 					    PM_SUSPEND_ON);
+@@ -179,7 +190,7 @@ static int vctrl_set_voltage_sel(struct regulator_dev *rdev,
+ 		else
+ 			next_sel = vctrl->vtable[vctrl->sel].ovp_min_sel;
+ 
+-		ret = regulator_set_voltage_rdev(ctrl_reg->rdev,
++		ret = regulator_set_voltage_rdev(rdev->supply->rdev,
+ 					    vctrl->vtable[next_sel].ctrl,
+ 					    vctrl->vtable[next_sel].ctrl,
+ 					    PM_SUSPEND_ON);
+@@ -202,7 +213,7 @@ static int vctrl_set_voltage_sel(struct regulator_dev *rdev,
+ err:
+ 	if (vctrl->sel != orig_sel) {
+ 		/* Try to go back to original voltage */
+-		if (!regulator_set_voltage_rdev(ctrl_reg->rdev,
++		if (!regulator_set_voltage_rdev(rdev->supply->rdev,
+ 					   vctrl->vtable[orig_sel].ctrl,
+ 					   vctrl->vtable[orig_sel].ctrl,
+ 					   PM_SUSPEND_ON))
+@@ -234,10 +245,6 @@ static int vctrl_parse_dt(struct platform_device *pdev,
+ 	u32 pval;
+ 	u32 vrange_ctrl[2];
+ 
+-	vctrl->ctrl_reg = devm_regulator_get(&pdev->dev, "ctrl");
+-	if (IS_ERR(vctrl->ctrl_reg))
+-		return PTR_ERR(vctrl->ctrl_reg);
+-
+ 	ret = of_property_read_u32(np, "ovp-threshold-percent", &pval);
+ 	if (!ret) {
+ 		vctrl->ovp_threshold = pval;
+@@ -315,11 +322,11 @@ static int vctrl_cmp_ctrl_uV(const void *a, const void *b)
+ 	return at->ctrl - bt->ctrl;
+ }
+ 
+-static int vctrl_init_vtable(struct platform_device *pdev)
++static int vctrl_init_vtable(struct platform_device *pdev,
++			     struct regulator *ctrl_reg)
+ {
+ 	struct vctrl_data *vctrl = platform_get_drvdata(pdev);
+ 	struct regulator_desc *rdesc = &vctrl->desc;
+-	struct regulator *ctrl_reg = vctrl->ctrl_reg;
+ 	struct vctrl_voltage_range *vrange_ctrl = &vctrl->vrange.ctrl;
+ 	int n_voltages;
+ 	int ctrl_uV;
+@@ -395,23 +402,19 @@ static int vctrl_init_vtable(struct platform_device *pdev)
+ static int vctrl_enable(struct regulator_dev *rdev)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	int ret = regulator_enable(vctrl->ctrl_reg);
+ 
+-	if (!ret)
+-		vctrl->enabled = true;
++	vctrl->enabled = true;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int vctrl_disable(struct regulator_dev *rdev)
+ {
+ 	struct vctrl_data *vctrl = rdev_get_drvdata(rdev);
+-	int ret = regulator_disable(vctrl->ctrl_reg);
+ 
+-	if (!ret)
+-		vctrl->enabled = false;
++	vctrl->enabled = false;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int vctrl_is_enabled(struct regulator_dev *rdev)
+@@ -447,6 +450,7 @@ static int vctrl_probe(struct platform_device *pdev)
+ 	struct regulator_desc *rdesc;
+ 	struct regulator_config cfg = { };
+ 	struct vctrl_voltage_range *vrange_ctrl;
++	struct regulator *ctrl_reg;
+ 	int ctrl_uV;
+ 	int ret;
+ 
+@@ -461,15 +465,20 @@ static int vctrl_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
++	ctrl_reg = devm_regulator_get(&pdev->dev, "ctrl");
++	if (IS_ERR(ctrl_reg))
++		return PTR_ERR(ctrl_reg);
++
+ 	vrange_ctrl = &vctrl->vrange.ctrl;
+ 
+ 	rdesc = &vctrl->desc;
+ 	rdesc->name = "vctrl";
+ 	rdesc->type = REGULATOR_VOLTAGE;
+ 	rdesc->owner = THIS_MODULE;
++	rdesc->supply_name = "ctrl";
+ 
+-	if ((regulator_get_linear_step(vctrl->ctrl_reg) == 1) ||
+-	    (regulator_count_voltages(vctrl->ctrl_reg) == -EINVAL)) {
++	if ((regulator_get_linear_step(ctrl_reg) == 1) ||
++	    (regulator_count_voltages(ctrl_reg) == -EINVAL)) {
+ 		rdesc->continuous_voltage_range = true;
+ 		rdesc->ops = &vctrl_ops_cont;
+ 	} else {
+@@ -486,11 +495,12 @@ static int vctrl_probe(struct platform_device *pdev)
+ 	cfg.init_data = init_data;
+ 
+ 	if (!rdesc->continuous_voltage_range) {
+-		ret = vctrl_init_vtable(pdev);
++		ret = vctrl_init_vtable(pdev, ctrl_reg);
+ 		if (ret)
+ 			return ret;
+ 
+-		ctrl_uV = regulator_get_voltage_rdev(vctrl->ctrl_reg->rdev);
++		/* Not called from within the regulator framework here, so
++		 * use the locked consumer API */
++		ctrl_uV = regulator_get_voltage(ctrl_reg);
+ 		if (ctrl_uV < 0) {
+ 			dev_err(&pdev->dev, "failed to get control voltage\n");
+ 			return ctrl_uV;
+@@ -513,6 +523,9 @@ static int vctrl_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	/* Drop ctrl-supply here in favor of regulator core managed supply */
++	devm_regulator_put(ctrl_reg);
++
+ 	vctrl->rdev = devm_regulator_register(&pdev->dev, rdesc, &cfg);
+ 	if (IS_ERR(vctrl->rdev)) {
+ 		ret = PTR_ERR(vctrl->rdev);
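
The vctrl rework drops the privately held "ctrl" regulator in favour of the core-managed rdev->supply, so every op that can run before the supply is resolved now returns -EPROBE_DEFER instead of dereferencing a NULL pointer. The guard pattern, modeled with stand-in types:

#include <stdio.h>

#define EPROBE_DEFER 517	/* kernel-internal errno value */

struct supply { int microvolts; };
struct rdev_model { struct supply *supply; };

static int get_voltage(struct rdev_model *rdev)
{
	if (!rdev->supply)		/* supply not resolved yet: defer */
		return -EPROBE_DEFER;
	return rdev->supply->microvolts;
}

int main(void)
{
	struct rdev_model rdev = { .supply = NULL };

	printf("%d\n", get_voltage(&rdev));	/* -517 */
	return 0;
}
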
+diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
+index a974943c27dac..9fcdb8d81eee6 100644
+--- a/drivers/s390/cio/css.c
++++ b/drivers/s390/cio/css.c
+@@ -430,9 +430,26 @@ static ssize_t pimpampom_show(struct device *dev,
+ }
+ static DEVICE_ATTR_RO(pimpampom);
+ 
++static ssize_t dev_busid_show(struct device *dev,
++			      struct device_attribute *attr,
++			      char *buf)
++{
++	struct subchannel *sch = to_subchannel(dev);
++	struct pmcw *pmcw = &sch->schib.pmcw;
++
++	if ((pmcw->st == SUBCHANNEL_TYPE_IO ||
++	     pmcw->st == SUBCHANNEL_TYPE_MSG) && pmcw->dnv)
++		return sysfs_emit(buf, "0.%x.%04x\n", sch->schid.ssid,
++				  pmcw->dev);
++	else
++		return sysfs_emit(buf, "none\n");
++}
++static DEVICE_ATTR_RO(dev_busid);
++
+ static struct attribute *io_subchannel_type_attrs[] = {
+ 	&dev_attr_chpids.attr,
+ 	&dev_attr_pimpampom.attr,
++	&dev_attr_dev_busid.attr,
+ 	NULL,
+ };
+ ATTRIBUTE_GROUPS(io_subchannel_type);
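
The new dev_busid attribute is the standard read-only sysfs pattern: a *_show() callback that formats into the page buffer via the bounds-checked sysfs_emit(), wrapped by DEVICE_ATTR_RO() and added to the attribute array. A userspace model of the formatting logic (the enum values are arbitrary, not the s390 subchannel-type encoding):

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096

enum sch_type { ST_IO, ST_CHSC, ST_MSG };

struct pmcw_model { enum sch_type st; bool dnv; unsigned int dev; };

static int dev_busid_show(const struct pmcw_model *p, unsigned int ssid,
			  char *buf)
{
	if ((p->st == ST_IO || p->st == ST_MSG) && p->dnv)
		return snprintf(buf, PAGE_SIZE, "0.%x.%04x\n", ssid, p->dev);
	return snprintf(buf, PAGE_SIZE, "none\n");
}

int main(void)
{
	char buf[PAGE_SIZE];
	struct pmcw_model p = { .st = ST_IO, .dnv = true, .dev = 0x1a2b };

	dev_busid_show(&p, 0, buf);
	fputs(buf, stdout);		/* prints "0.0.1a2b" */
	return 0;
}
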
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index 2758d05a802db..179ceb01e0406 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -121,22 +121,13 @@ static struct bus_type ap_bus_type;
+ /* Adapter interrupt definitions */
+ static void ap_interrupt_handler(struct airq_struct *airq, bool floating);
+ 
+-static int ap_airq_flag;
++static bool ap_irq_flag;
+ 
+ static struct airq_struct ap_airq = {
+ 	.handler = ap_interrupt_handler,
+ 	.isc = AP_ISC,
+ };
+ 
+-/**
+- * ap_using_interrupts() - Returns non-zero if interrupt support is
+- * available.
+- */
+-static inline int ap_using_interrupts(void)
+-{
+-	return ap_airq_flag;
+-}
+-
+ /**
+  * ap_airq_ptr() - Get the address of the adapter interrupt indicator
+  *
+@@ -146,7 +137,7 @@ static inline int ap_using_interrupts(void)
+  */
+ void *ap_airq_ptr(void)
+ {
+-	if (ap_using_interrupts())
++	if (ap_irq_flag)
+ 		return ap_airq.lsi_ptr;
+ 	return NULL;
+ }
+@@ -376,7 +367,7 @@ void ap_wait(enum ap_sm_wait wait)
+ 	switch (wait) {
+ 	case AP_SM_WAIT_AGAIN:
+ 	case AP_SM_WAIT_INTERRUPT:
+-		if (ap_using_interrupts())
++		if (ap_irq_flag)
+ 			break;
+ 		if (ap_poll_kthread) {
+ 			wake_up(&ap_poll_wait);
+@@ -451,7 +442,7 @@ static void ap_tasklet_fn(unsigned long dummy)
+ 	 * be received. Doing it at the beginning of the tasklet is therefore
+ 	 * important so that no requests on any AP get lost.
+ 	 */
+-	if (ap_using_interrupts())
++	if (ap_irq_flag)
+ 		xchg(ap_airq.lsi_ptr, 0);
+ 
+ 	spin_lock_bh(&ap_queues_lock);
+@@ -521,7 +512,7 @@ static int ap_poll_thread_start(void)
+ {
+ 	int rc;
+ 
+-	if (ap_using_interrupts() || ap_poll_kthread)
++	if (ap_irq_flag || ap_poll_kthread)
+ 		return 0;
+ 	mutex_lock(&ap_poll_thread_mutex);
+ 	ap_poll_kthread = kthread_run(ap_poll_thread, NULL, "appoll");
+@@ -1119,7 +1110,7 @@ static BUS_ATTR_RO(ap_adapter_mask);
+ static ssize_t ap_interrupts_show(struct bus_type *bus, char *buf)
+ {
+ 	return scnprintf(buf, PAGE_SIZE, "%d\n",
+-			 ap_using_interrupts() ? 1 : 0);
++			 ap_irq_flag ? 1 : 0);
+ }
+ 
+ static BUS_ATTR_RO(ap_interrupts);
+@@ -1832,7 +1823,7 @@ static int __init ap_module_init(void)
+ 	/* enable interrupts if available */
+ 	if (ap_interrupts_available()) {
+ 		rc = register_adapter_interrupt(&ap_airq);
+-		ap_airq_flag = (rc == 0);
++		ap_irq_flag = (rc == 0);
+ 	}
+ 
+ 	/* Create /sys/bus/ap. */
+@@ -1876,7 +1867,7 @@ out_work:
+ out_bus:
+ 	bus_unregister(&ap_bus_type);
+ out:
+-	if (ap_using_interrupts())
++	if (ap_irq_flag)
+ 		unregister_adapter_interrupt(&ap_airq);
+ 	kfree(ap_qci_info);
+ 	return rc;
+diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
+index 472efd3a755c4..2940bc8aa43d4 100644
+--- a/drivers/s390/crypto/ap_bus.h
++++ b/drivers/s390/crypto/ap_bus.h
+@@ -77,12 +77,6 @@ static inline int ap_test_bit(unsigned int *ptr, unsigned int nr)
+ #define AP_FUNC_EP11  5
+ #define AP_FUNC_APXA  6
+ 
+-/*
+- * AP interrupt states
+- */
+-#define AP_INTR_DISABLED	0	/* AP interrupt disabled */
+-#define AP_INTR_ENABLED		1	/* AP interrupt enabled */
+-
+ /*
+  * AP queue state machine states
+  */
+@@ -109,7 +103,7 @@ enum ap_sm_event {
+  * AP queue state wait behaviour
+  */
+ enum ap_sm_wait {
+-	AP_SM_WAIT_AGAIN,	/* retry immediately */
++	AP_SM_WAIT_AGAIN = 0,	/* retry immediately */
+ 	AP_SM_WAIT_TIMEOUT,	/* wait for timeout */
+ 	AP_SM_WAIT_INTERRUPT,	/* wait for thin interrupt (if available) */
+ 	AP_SM_WAIT_NONE,	/* no wait */
+@@ -182,7 +176,7 @@ struct ap_queue {
+ 	enum ap_dev_state dev_state;	/* queue device state */
+ 	bool config;			/* configured state */
+ 	ap_qid_t qid;			/* AP queue id. */
+-	int interrupt;			/* indicate if interrupts are enabled */
++	bool interrupt;			/* indicate if interrupts are enabled */
+ 	int queue_count;		/* # messages currently on AP queue. */
+ 	int pendingq_count;		/* # requests on pendingq list. */
+ 	int requestq_count;		/* # requests on requestq list. */
+diff --git a/drivers/s390/crypto/ap_queue.c b/drivers/s390/crypto/ap_queue.c
+index 337353c9655ed..639f8d25679c3 100644
+--- a/drivers/s390/crypto/ap_queue.c
++++ b/drivers/s390/crypto/ap_queue.c
+@@ -19,7 +19,7 @@
+ static void __ap_flush_queue(struct ap_queue *aq);
+ 
+ /**
+- * ap_queue_enable_interruption(): Enable interruption on an AP queue.
++ * ap_queue_enable_irq(): Enable interrupt support on this AP queue.
+  * @qid: The AP queue number
+  * @ind: the notification indicator byte
+  *
+@@ -27,7 +27,7 @@ static void __ap_flush_queue(struct ap_queue *aq);
+  * value it waits a while and tests the AP queue if interrupts
+  * have been switched on using ap_test_queue().
+  */
+-static int ap_queue_enable_interruption(struct ap_queue *aq, void *ind)
++static int ap_queue_enable_irq(struct ap_queue *aq, void *ind)
+ {
+ 	struct ap_queue_status status;
+ 	struct ap_qirq_ctrl qirqctrl = { 0 };
+@@ -198,7 +198,8 @@ static enum ap_sm_wait ap_sm_read(struct ap_queue *aq)
+ 		return AP_SM_WAIT_NONE;
+ 	case AP_RESPONSE_NO_PENDING_REPLY:
+ 		if (aq->queue_count > 0)
+-			return AP_SM_WAIT_INTERRUPT;
++			return aq->interrupt ?
++				AP_SM_WAIT_INTERRUPT : AP_SM_WAIT_TIMEOUT;
+ 		aq->sm_state = AP_SM_STATE_IDLE;
+ 		return AP_SM_WAIT_NONE;
+ 	default:
+@@ -252,7 +253,8 @@ static enum ap_sm_wait ap_sm_write(struct ap_queue *aq)
+ 		fallthrough;
+ 	case AP_RESPONSE_Q_FULL:
+ 		aq->sm_state = AP_SM_STATE_QUEUE_FULL;
+-		return AP_SM_WAIT_INTERRUPT;
++		return aq->interrupt ?
++			AP_SM_WAIT_INTERRUPT : AP_SM_WAIT_TIMEOUT;
+ 	case AP_RESPONSE_RESET_IN_PROGRESS:
+ 		aq->sm_state = AP_SM_STATE_RESET_WAIT;
+ 		return AP_SM_WAIT_TIMEOUT;
+@@ -302,7 +304,7 @@ static enum ap_sm_wait ap_sm_reset(struct ap_queue *aq)
+ 	case AP_RESPONSE_NORMAL:
+ 	case AP_RESPONSE_RESET_IN_PROGRESS:
+ 		aq->sm_state = AP_SM_STATE_RESET_WAIT;
+-		aq->interrupt = AP_INTR_DISABLED;
++		aq->interrupt = false;
+ 		return AP_SM_WAIT_TIMEOUT;
+ 	default:
+ 		aq->dev_state = AP_DEV_STATE_ERROR;
+@@ -335,7 +337,7 @@ static enum ap_sm_wait ap_sm_reset_wait(struct ap_queue *aq)
+ 	switch (status.response_code) {
+ 	case AP_RESPONSE_NORMAL:
+ 		lsi_ptr = ap_airq_ptr();
+-		if (lsi_ptr && ap_queue_enable_interruption(aq, lsi_ptr) == 0)
++		if (lsi_ptr && ap_queue_enable_irq(aq, lsi_ptr) == 0)
+ 			aq->sm_state = AP_SM_STATE_SETIRQ_WAIT;
+ 		else
+ 			aq->sm_state = (aq->queue_count > 0) ?
+@@ -376,7 +378,7 @@ static enum ap_sm_wait ap_sm_setirq_wait(struct ap_queue *aq)
+ 
+ 	if (status.irq_enabled == 1) {
+ 		/* Irqs are now enabled */
+-		aq->interrupt = AP_INTR_ENABLED;
++		aq->interrupt = true;
+ 		aq->sm_state = (aq->queue_count > 0) ?
+ 			AP_SM_STATE_WORKING : AP_SM_STATE_IDLE;
+ 	}
+@@ -566,7 +568,7 @@ static ssize_t interrupt_show(struct device *dev,
+ 	spin_lock_bh(&aq->lock);
+ 	if (aq->sm_state == AP_SM_STATE_SETIRQ_WAIT)
+ 		rc = scnprintf(buf, PAGE_SIZE, "Enable Interrupt pending.\n");
+-	else if (aq->interrupt == AP_INTR_ENABLED)
++	else if (aq->interrupt)
+ 		rc = scnprintf(buf, PAGE_SIZE, "Interrupts enabled.\n");
+ 	else
+ 		rc = scnprintf(buf, PAGE_SIZE, "Interrupts disabled.\n");
+@@ -747,7 +749,7 @@ struct ap_queue *ap_queue_create(ap_qid_t qid, int device_type)
+ 	aq->ap_dev.device.type = &ap_queue_type;
+ 	aq->ap_dev.device_type = device_type;
+ 	aq->qid = qid;
+-	aq->interrupt = AP_INTR_DISABLED;
++	aq->interrupt = false;
+ 	spin_lock_init(&aq->lock);
+ 	INIT_LIST_HEAD(&aq->pendingq);
+ 	INIT_LIST_HEAD(&aq->requestq);
+diff --git a/drivers/s390/crypto/zcrypt_ccamisc.c b/drivers/s390/crypto/zcrypt_ccamisc.c
+index d68c0ed5e0dd8..f5d0212fb4fe4 100644
+--- a/drivers/s390/crypto/zcrypt_ccamisc.c
++++ b/drivers/s390/crypto/zcrypt_ccamisc.c
+@@ -1724,10 +1724,10 @@ static int fetch_cca_info(u16 cardnr, u16 domain, struct cca_info *ci)
+ 	rlen = vlen = PAGE_SIZE/2;
+ 	rc = cca_query_crypto_facility(cardnr, domain, "STATICSB",
+ 				       rarray, &rlen, varray, &vlen);
+-	if (rc == 0 && rlen >= 10*8 && vlen >= 240) {
+-		ci->new_apka_mk_state = (char) rarray[7*8];
+-		ci->cur_apka_mk_state = (char) rarray[8*8];
+-		ci->old_apka_mk_state = (char) rarray[9*8];
++	if (rc == 0 && rlen >= 13*8 && vlen >= 240) {
++		ci->new_apka_mk_state = (char) rarray[10*8];
++		ci->cur_apka_mk_state = (char) rarray[11*8];
++		ci->old_apka_mk_state = (char) rarray[12*8];
+ 		if (ci->old_apka_mk_state == '2')
+ 			memcpy(&ci->old_apka_mkvp, varray + 208, 8);
+ 		if (ci->cur_apka_mk_state == '2')
+diff --git a/drivers/soc/mediatek/mt8183-mmsys.h b/drivers/soc/mediatek/mt8183-mmsys.h
+index 579dfc8dc8fc9..9dee485807c94 100644
+--- a/drivers/soc/mediatek/mt8183-mmsys.h
++++ b/drivers/soc/mediatek/mt8183-mmsys.h
+@@ -28,25 +28,32 @@
+ static const struct mtk_mmsys_routes mmsys_mt8183_routing_table[] = {
+ 	{
+ 		DDP_COMPONENT_OVL0, DDP_COMPONENT_OVL_2L0,
+-		MT8183_DISP_OVL0_MOUT_EN, MT8183_OVL0_MOUT_EN_OVL0_2L
++		MT8183_DISP_OVL0_MOUT_EN, MT8183_OVL0_MOUT_EN_OVL0_2L,
++		MT8183_OVL0_MOUT_EN_OVL0_2L
+ 	}, {
+ 		DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0,
+-		MT8183_DISP_OVL0_2L_MOUT_EN, MT8183_OVL0_2L_MOUT_EN_DISP_PATH0
++		MT8183_DISP_OVL0_2L_MOUT_EN, MT8183_OVL0_2L_MOUT_EN_DISP_PATH0,
++		MT8183_OVL0_2L_MOUT_EN_DISP_PATH0
+ 	}, {
+ 		DDP_COMPONENT_OVL_2L1, DDP_COMPONENT_RDMA1,
+-		MT8183_DISP_OVL1_2L_MOUT_EN, MT8183_OVL1_2L_MOUT_EN_RDMA1
++		MT8183_DISP_OVL1_2L_MOUT_EN, MT8183_OVL1_2L_MOUT_EN_RDMA1,
++		MT8183_OVL1_2L_MOUT_EN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_DITHER, DDP_COMPONENT_DSI0,
+-		MT8183_DISP_DITHER0_MOUT_EN, MT8183_DITHER0_MOUT_IN_DSI0
++		MT8183_DISP_DITHER0_MOUT_EN, MT8183_DITHER0_MOUT_IN_DSI0,
++		MT8183_DITHER0_MOUT_IN_DSI0
+ 	}, {
+ 		DDP_COMPONENT_OVL_2L0, DDP_COMPONENT_RDMA0,
+-		MT8183_DISP_PATH0_SEL_IN, MT8183_DISP_PATH0_SEL_IN_OVL0_2L
++		MT8183_DISP_PATH0_SEL_IN, MT8183_DISP_PATH0_SEL_IN_OVL0_2L,
++		MT8183_DISP_PATH0_SEL_IN_OVL0_2L
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+-		MT8183_DISP_DPI0_SEL_IN, MT8183_DPI0_SEL_IN_RDMA1
++		MT8183_DISP_DPI0_SEL_IN, MT8183_DPI0_SEL_IN_RDMA1,
++		MT8183_DPI0_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0,
+-		MT8183_DISP_RDMA0_SOUT_SEL_IN, MT8183_RDMA0_SOUT_COLOR0
++		MT8183_DISP_RDMA0_SOUT_SEL_IN, MT8183_RDMA0_SOUT_COLOR0,
++		MT8183_RDMA0_SOUT_COLOR0
+ 	}
+ };
+ 
+diff --git a/drivers/soc/mediatek/mtk-mmsys.c b/drivers/soc/mediatek/mtk-mmsys.c
+index 080660ef11bfa..0f949896fd064 100644
+--- a/drivers/soc/mediatek/mtk-mmsys.c
++++ b/drivers/soc/mediatek/mtk-mmsys.c
+@@ -68,7 +68,9 @@ void mtk_mmsys_ddp_connect(struct device *dev,
+ 
+ 	for (i = 0; i < mmsys->data->num_routes; i++)
+ 		if (cur == routes[i].from_comp && next == routes[i].to_comp) {
+-			reg = readl_relaxed(mmsys->regs + routes[i].addr) | routes[i].val;
++			reg = readl_relaxed(mmsys->regs + routes[i].addr);
++			reg &= ~routes[i].mask;
++			reg |= routes[i].val;
+ 			writel_relaxed(reg, mmsys->regs + routes[i].addr);
+ 		}
+ }
+@@ -85,7 +87,8 @@ void mtk_mmsys_ddp_disconnect(struct device *dev,
+ 
+ 	for (i = 0; i < mmsys->data->num_routes; i++)
+ 		if (cur == routes[i].from_comp && next == routes[i].to_comp) {
+-			reg = readl_relaxed(mmsys->regs + routes[i].addr) & ~routes[i].val;
++			reg = readl_relaxed(mmsys->regs + routes[i].addr);
++			reg &= ~routes[i].mask;
+ 			writel_relaxed(reg, mmsys->regs + routes[i].addr);
+ 		}
+ }
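
With the new mask column, mtk_mmsys_ddp_connect() becomes a true read-modify-write: the whole routing field is cleared with the mask before the new value is OR-ed in, so re-routing a mux can no longer leave bits of the previous route behind, and disconnect clears the field by mask rather than by value. A minimal model:

#include <stdint.h>
#include <stdio.h>

static uint32_t route_connect(uint32_t reg, uint32_t mask, uint32_t val)
{
	reg &= ~mask;                 /* wipe the old routing field */
	reg |= val;                   /* install the new route */
	return reg;
}

int main(void)
{
	uint32_t reg = 0x5;           /* old route occupying bits 0-2 */

	/* Before the fix this was "reg |= 0x2", yielding 0x7 (garbage);
	 * with the mask the field becomes exactly 0x2. */
	printf("0x%x\n", route_connect(reg, 0x7, 0x2));
	return 0;
}
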
+diff --git a/drivers/soc/mediatek/mtk-mmsys.h b/drivers/soc/mediatek/mtk-mmsys.h
+index a760a34e6eca8..5f3e2bf0c40bc 100644
+--- a/drivers/soc/mediatek/mtk-mmsys.h
++++ b/drivers/soc/mediatek/mtk-mmsys.h
+@@ -35,41 +35,54 @@
+ #define RDMA0_SOUT_DSI1				0x1
+ #define RDMA0_SOUT_DSI2				0x4
+ #define RDMA0_SOUT_DSI3				0x5
++#define RDMA0_SOUT_MASK				0x7
+ #define RDMA1_SOUT_DPI0				0x2
+ #define RDMA1_SOUT_DPI1				0x3
+ #define RDMA1_SOUT_DSI1				0x1
+ #define RDMA1_SOUT_DSI2				0x4
+ #define RDMA1_SOUT_DSI3				0x5
++#define RDMA1_SOUT_MASK				0x7
+ #define RDMA2_SOUT_DPI0				0x2
+ #define RDMA2_SOUT_DPI1				0x3
+ #define RDMA2_SOUT_DSI1				0x1
+ #define RDMA2_SOUT_DSI2				0x4
+ #define RDMA2_SOUT_DSI3				0x5
++#define RDMA2_SOUT_MASK				0x7
+ #define DPI0_SEL_IN_RDMA1			0x1
+ #define DPI0_SEL_IN_RDMA2			0x3
++#define DPI0_SEL_IN_MASK			0x3
+ #define DPI1_SEL_IN_RDMA1			(0x1 << 8)
+ #define DPI1_SEL_IN_RDMA2			(0x3 << 8)
++#define DPI1_SEL_IN_MASK			(0x3 << 8)
+ #define DSI0_SEL_IN_RDMA1			0x1
+ #define DSI0_SEL_IN_RDMA2			0x4
++#define DSI0_SEL_IN_MASK			0x7
+ #define DSI1_SEL_IN_RDMA1			0x1
+ #define DSI1_SEL_IN_RDMA2			0x4
++#define DSI1_SEL_IN_MASK			0x7
+ #define DSI2_SEL_IN_RDMA1			(0x1 << 16)
+ #define DSI2_SEL_IN_RDMA2			(0x4 << 16)
++#define DSI2_SEL_IN_MASK			(0x7 << 16)
+ #define DSI3_SEL_IN_RDMA1			(0x1 << 16)
+ #define DSI3_SEL_IN_RDMA2			(0x4 << 16)
++#define DSI3_SEL_IN_MASK			(0x7 << 16)
+ #define COLOR1_SEL_IN_OVL1			0x1
+ 
+ #define OVL_MOUT_EN_RDMA			0x1
+ #define BLS_TO_DSI_RDMA1_TO_DPI1		0x8
+ #define BLS_TO_DPI_RDMA1_TO_DSI			0x2
++#define BLS_RDMA1_DSI_DPI_MASK			0xf
+ #define DSI_SEL_IN_BLS				0x0
+ #define DPI_SEL_IN_BLS				0x0
++#define DPI_SEL_IN_MASK				0x1
+ #define DSI_SEL_IN_RDMA				0x1
++#define DSI_SEL_IN_MASK				0x1
+ 
+ struct mtk_mmsys_routes {
+ 	u32 from_comp;
+ 	u32 to_comp;
+ 	u32 addr;
++	u32 mask;
+ 	u32 val;
+ };
+ 
+@@ -91,124 +104,164 @@ struct mtk_mmsys_driver_data {
+ static const struct mtk_mmsys_routes mmsys_default_routing_table[] = {
+ 	{
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DSI0,
+-		DISP_REG_CONFIG_OUT_SEL, BLS_TO_DSI_RDMA1_TO_DPI1
++		DISP_REG_CONFIG_OUT_SEL, BLS_RDMA1_DSI_DPI_MASK,
++		BLS_TO_DSI_RDMA1_TO_DPI1
+ 	}, {
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DSI0,
+-		DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_BLS
++		DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_MASK,
++		DSI_SEL_IN_BLS
+ 	}, {
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_OUT_SEL, BLS_TO_DPI_RDMA1_TO_DSI
++		DISP_REG_CONFIG_OUT_SEL, BLS_RDMA1_DSI_DPI_MASK,
++		BLS_TO_DPI_RDMA1_TO_DSI
+ 	}, {
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_RDMA
++		DISP_REG_CONFIG_DSI_SEL, DSI_SEL_IN_MASK,
++		DSI_SEL_IN_RDMA
+ 	}, {
+ 		DDP_COMPONENT_BLS, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DPI_SEL, DPI_SEL_IN_BLS
++		DISP_REG_CONFIG_DPI_SEL, DPI_SEL_IN_MASK,
++		DPI_SEL_IN_BLS
+ 	}, {
+ 		DDP_COMPONENT_GAMMA, DDP_COMPONENT_RDMA1,
+-		DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN, GAMMA_MOUT_EN_RDMA1
++		DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN, GAMMA_MOUT_EN_RDMA1,
++		GAMMA_MOUT_EN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_OD0, DDP_COMPONENT_RDMA0,
+-		DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD_MOUT_EN_RDMA0
++		DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD_MOUT_EN_RDMA0,
++		OD_MOUT_EN_RDMA0
+ 	}, {
+ 		DDP_COMPONENT_OD1, DDP_COMPONENT_RDMA1,
+-		DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD1_MOUT_EN_RDMA1
++		DISP_REG_CONFIG_DISP_OD_MOUT_EN, OD1_MOUT_EN_RDMA1,
++		OD1_MOUT_EN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0,
+-		DISP_REG_CONFIG_DISP_OVL0_MOUT_EN, OVL0_MOUT_EN_COLOR0
++		DISP_REG_CONFIG_DISP_OVL0_MOUT_EN, OVL0_MOUT_EN_COLOR0,
++		OVL0_MOUT_EN_COLOR0
+ 	}, {
+ 		DDP_COMPONENT_OVL0, DDP_COMPONENT_COLOR0,
+-		DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, COLOR0_SEL_IN_OVL0
++		DISP_REG_CONFIG_DISP_COLOR0_SEL_IN, COLOR0_SEL_IN_OVL0,
++		COLOR0_SEL_IN_OVL0
+ 	}, {
+ 		DDP_COMPONENT_OVL0, DDP_COMPONENT_RDMA0,
+-		DISP_REG_CONFIG_DISP_OVL_MOUT_EN, OVL_MOUT_EN_RDMA
++		DISP_REG_CONFIG_DISP_OVL_MOUT_EN, OVL_MOUT_EN_RDMA,
++		OVL_MOUT_EN_RDMA
+ 	}, {
+ 		DDP_COMPONENT_OVL1, DDP_COMPONENT_COLOR1,
+-		DISP_REG_CONFIG_DISP_OVL1_MOUT_EN, OVL1_MOUT_EN_COLOR1
++		DISP_REG_CONFIG_DISP_OVL1_MOUT_EN, OVL1_MOUT_EN_COLOR1,
++		OVL1_MOUT_EN_COLOR1
+ 	}, {
+ 		DDP_COMPONENT_OVL1, DDP_COMPONENT_COLOR1,
+-		DISP_REG_CONFIG_DISP_COLOR1_SEL_IN, COLOR1_SEL_IN_OVL1
++		DISP_REG_CONFIG_DISP_COLOR1_SEL_IN, COLOR1_SEL_IN_OVL1,
++		COLOR1_SEL_IN_OVL1
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DPI0
++		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
++		RDMA0_SOUT_DPI0
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DPI1,
+-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DPI1
++		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
++		RDMA0_SOUT_DPI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI1,
+-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DSI1
++		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
++		RDMA0_SOUT_DSI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI2,
+-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DSI2
++		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
++		RDMA0_SOUT_DSI2
+ 	}, {
+ 		DDP_COMPONENT_RDMA0, DDP_COMPONENT_DSI3,
+-		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_DSI3
++		DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN, RDMA0_SOUT_MASK,
++		RDMA0_SOUT_DSI3
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DPI0
++		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
++		RDMA1_SOUT_DPI0
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_RDMA1
++		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_MASK,
++		DPI0_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI1,
+-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DPI1
++		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
++		RDMA1_SOUT_DPI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DPI1,
+-		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_RDMA1
++		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_MASK,
++		DPI1_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI0,
+-		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_RDMA1
++		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_MASK,
++		DSI0_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI1,
+-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DSI1
++		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
++		RDMA1_SOUT_DSI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI1,
+-		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_RDMA1
++		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_MASK,
++		DSI1_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI2,
+-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DSI2
++		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
++		RDMA1_SOUT_DSI2
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI2,
+-		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_RDMA1
++		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_MASK,
++		DSI2_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI3,
+-		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_DSI3
++		DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN, RDMA1_SOUT_MASK,
++		RDMA1_SOUT_DSI3
+ 	}, {
+ 		DDP_COMPONENT_RDMA1, DDP_COMPONENT_DSI3,
+-		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_RDMA1
++		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_MASK,
++		DSI3_SEL_IN_RDMA1
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DPI0
++		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
++		RDMA2_SOUT_DPI0
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI0,
+-		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_RDMA2
++		DISP_REG_CONFIG_DPI_SEL_IN, DPI0_SEL_IN_MASK,
++		DPI0_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI1,
+-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DPI1
++		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
++		RDMA2_SOUT_DPI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DPI1,
+-		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_RDMA2
++		DISP_REG_CONFIG_DPI_SEL_IN, DPI1_SEL_IN_MASK,
++		DPI1_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI0,
+-		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_RDMA2
++		DISP_REG_CONFIG_DSIE_SEL_IN, DSI0_SEL_IN_MASK,
++		DSI0_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI1,
+-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DSI1
++		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
++		RDMA2_SOUT_DSI1
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI1,
+-		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_RDMA2
++		DISP_REG_CONFIG_DSIO_SEL_IN, DSI1_SEL_IN_MASK,
++		DSI1_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI2,
+-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DSI2
++		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
++		RDMA2_SOUT_DSI2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI2,
+-		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_RDMA2
++		DISP_REG_CONFIG_DSIE_SEL_IN, DSI2_SEL_IN_MASK,
++		DSI2_SEL_IN_RDMA2
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI3,
+-		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_DSI3
++		DISP_REG_CONFIG_DISP_RDMA2_SOUT, RDMA2_SOUT_MASK,
++		RDMA2_SOUT_DSI3
+ 	}, {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI3,
+-		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_RDMA2
++		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_MASK,
++		DSI3_SEL_IN_RDMA2
+ 	}
+ };
+ 
+diff --git a/drivers/soc/qcom/rpmhpd.c b/drivers/soc/qcom/rpmhpd.c
+index bb21c4f1c0c4b..90d2e58173719 100644
+--- a/drivers/soc/qcom/rpmhpd.c
++++ b/drivers/soc/qcom/rpmhpd.c
+@@ -382,12 +382,11 @@ static int rpmhpd_power_on(struct generic_pm_domain *domain)
+ static int rpmhpd_power_off(struct generic_pm_domain *domain)
+ {
+ 	struct rpmhpd *pd = domain_to_rpmhpd(domain);
+-	int ret = 0;
++	int ret;
+ 
+ 	mutex_lock(&rpmhpd_lock);
+ 
+-	ret = rpmhpd_aggregate_corner(pd, pd->level[0]);
+-
++	ret = rpmhpd_aggregate_corner(pd, 0);
+ 	if (!ret)
+ 		pd->enabled = false;
+ 
+diff --git a/drivers/soc/qcom/smsm.c b/drivers/soc/qcom/smsm.c
+index 1d3d5e3ec2b07..6e9a9cd28b178 100644
+--- a/drivers/soc/qcom/smsm.c
++++ b/drivers/soc/qcom/smsm.c
+@@ -109,7 +109,7 @@ struct smsm_entry {
+ 	DECLARE_BITMAP(irq_enabled, 32);
+ 	DECLARE_BITMAP(irq_rising, 32);
+ 	DECLARE_BITMAP(irq_falling, 32);
+-	u32 last_value;
++	unsigned long last_value;
+ 
+ 	u32 *remote_state;
+ 	u32 *subscription;
+@@ -204,8 +204,7 @@ static irqreturn_t smsm_intr(int irq, void *data)
+ 	u32 val;
+ 
+ 	val = readl(entry->remote_state);
+-	changed = val ^ entry->last_value;
+-	entry->last_value = val;
++	changed = val ^ xchg(&entry->last_value, val);
+ 
+ 	for_each_set_bit(i, entry->irq_enabled, 32) {
+ 		if (!(changed & BIT(i)))
+@@ -264,6 +263,12 @@ static void smsm_unmask_irq(struct irq_data *irqd)
+ 	struct qcom_smsm *smsm = entry->smsm;
+ 	u32 val;
+ 
++	/* Make sure our last cached state is up-to-date */
++	if (readl(entry->remote_state) & BIT(irq))
++		set_bit(irq, &entry->last_value);
++	else
++		clear_bit(irq, &entry->last_value);
++
+ 	set_bit(irq, entry->irq_enabled);
+ 
+ 	if (entry->subscription) {
+diff --git a/drivers/soc/rockchip/Kconfig b/drivers/soc/rockchip/Kconfig
+index 2c13bf4dd5dbe..25eb2c1e31bb2 100644
+--- a/drivers/soc/rockchip/Kconfig
++++ b/drivers/soc/rockchip/Kconfig
+@@ -6,8 +6,8 @@ if ARCH_ROCKCHIP || COMPILE_TEST
+ #
+ 
+ config ROCKCHIP_GRF
+-	bool
+-	default y
++	bool "Rockchip General Register Files support" if COMPILE_TEST
++	default y if ARCH_ROCKCHIP
+ 	help
+ 	  The General Register Files are a central component providing
+ 	  special additional settings registers for a lot of soc-components.
+diff --git a/drivers/spi/spi-coldfire-qspi.c b/drivers/spi/spi-coldfire-qspi.c
+index 8996115ce736a..263ce90473277 100644
+--- a/drivers/spi/spi-coldfire-qspi.c
++++ b/drivers/spi/spi-coldfire-qspi.c
+@@ -444,7 +444,7 @@ static int mcfqspi_remove(struct platform_device *pdev)
+ 	mcfqspi_wr_qmr(mcfqspi, MCFQSPI_QMR_MSTR);
+ 
+ 	mcfqspi_cs_teardown(mcfqspi);
+-	clk_disable(mcfqspi->clk);
++	clk_disable_unprepare(mcfqspi->clk);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/spi/spi-davinci.c b/drivers/spi/spi-davinci.c
+index e114e6fe5ea5b..d112c2cac042b 100644
+--- a/drivers/spi/spi-davinci.c
++++ b/drivers/spi/spi-davinci.c
+@@ -213,12 +213,6 @@ static void davinci_spi_chipselect(struct spi_device *spi, int value)
+ 	 * line for the controller
+ 	 */
+ 	if (spi->cs_gpiod) {
+-		/*
+-		 * FIXME: is this code ever executed? This host does not
+-		 * set SPI_MASTER_GPIO_SS so this chipselect callback should
+-		 * not get called from the SPI core when we are using
+-		 * GPIOs for chip select.
+-		 */
+ 		if (value == BITBANG_CS_ACTIVE)
+ 			gpiod_set_value(spi->cs_gpiod, 1);
+ 		else
+@@ -945,7 +939,7 @@ static int davinci_spi_probe(struct platform_device *pdev)
+ 	master->bus_num = pdev->id;
+ 	master->num_chipselect = pdata->num_chipselect;
+ 	master->bits_per_word_mask = SPI_BPW_RANGE_MASK(2, 16);
+-	master->flags = SPI_MASTER_MUST_RX;
++	master->flags = SPI_MASTER_MUST_RX | SPI_MASTER_GPIO_SS;
+ 	master->setup = davinci_spi_setup;
+ 	master->cleanup = davinci_spi_cleanup;
+ 	master->can_dma = davinci_spi_can_dma;
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index fb45e6af66381..fd004c9db9dc0 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -530,6 +530,7 @@ static int dspi_request_dma(struct fsl_dspi *dspi, phys_addr_t phy_addr)
+ 		goto err_rx_dma_buf;
+ 	}
+ 
++	memset(&cfg, 0, sizeof(cfg));
+ 	cfg.src_addr = phy_addr + SPI_POPR;
+ 	cfg.dst_addr = phy_addr + SPI_PUSHR;
+ 	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+diff --git a/drivers/spi/spi-pic32.c b/drivers/spi/spi-pic32.c
+index 104bde153efd2..5eb7b61bbb4d8 100644
+--- a/drivers/spi/spi-pic32.c
++++ b/drivers/spi/spi-pic32.c
+@@ -361,6 +361,7 @@ static int pic32_spi_dma_config(struct pic32_spi *pic32s, u32 dma_width)
+ 	struct dma_slave_config cfg;
+ 	int ret;
+ 
++	memset(&cfg, 0, sizeof(cfg));
+ 	cfg.device_fc = true;
+ 	cfg.src_addr = pic32s->dma_base + buf_offset;
+ 	cfg.dst_addr = pic32s->dma_base + buf_offset;
+diff --git a/drivers/spi/spi-sprd-adi.c b/drivers/spi/spi-sprd-adi.c
+index ab19068be8675..98ef17389952a 100644
+--- a/drivers/spi/spi-sprd-adi.c
++++ b/drivers/spi/spi-sprd-adi.c
+@@ -103,7 +103,7 @@
+ #define HWRST_STATUS_WATCHDOG		0xf0
+ 
+ /* Use default timeout 50 ms that converts to watchdog values */
+-#define WDG_LOAD_VAL			((50 * 1000) / 32768)
++#define WDG_LOAD_VAL			((50 * 32768) / 1000)
+ #define WDG_LOAD_MASK			GENMASK(15, 0)
+ #define WDG_UNLOCK_KEY			0xe551
+ 
+diff --git a/drivers/spi/spi-zynq-qspi.c b/drivers/spi/spi-zynq-qspi.c
+index 9262c6418463b..cfa222c9bd5e7 100644
+--- a/drivers/spi/spi-zynq-qspi.c
++++ b/drivers/spi/spi-zynq-qspi.c
+@@ -545,7 +545,7 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 		zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true);
+ 		zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET,
+ 				ZYNQ_QSPI_IXR_RXTX_MASK);
+-		if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion,
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
+ 							       msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 	}
+@@ -563,7 +563,7 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 		zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true);
+ 		zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET,
+ 				ZYNQ_QSPI_IXR_RXTX_MASK);
+-		if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion,
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
+ 							       msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 	}
+@@ -579,7 +579,7 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 		zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true);
+ 		zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET,
+ 				ZYNQ_QSPI_IXR_RXTX_MASK);
+-		if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion,
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
+ 							       msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 
+@@ -603,7 +603,7 @@ static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
+ 		zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true);
+ 		zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET,
+ 				ZYNQ_QSPI_IXR_RXTX_MASK);
+-		if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion,
++		if (!wait_for_completion_timeout(&xqspi->data_completion,
+ 							       msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 	}
+diff --git a/drivers/staging/clocking-wizard/Kconfig b/drivers/staging/clocking-wizard/Kconfig
+index 69cf51445f082..2324b5d737886 100644
+--- a/drivers/staging/clocking-wizard/Kconfig
++++ b/drivers/staging/clocking-wizard/Kconfig
+@@ -5,6 +5,6 @@
+ 
+ config COMMON_CLK_XLNX_CLKWZRD
+ 	tristate "Xilinx Clocking Wizard"
+-	depends on COMMON_CLK && OF && IOMEM
++	depends on COMMON_CLK && OF && HAS_IOMEM
+ 	help
+ 	  Support for the Xilinx Clocking Wizard IP core clock generator.
+diff --git a/drivers/staging/media/atomisp/i2c/atomisp-mt9m114.c b/drivers/staging/media/atomisp/i2c/atomisp-mt9m114.c
+index f5de81132177d..77293579a1348 100644
+--- a/drivers/staging/media/atomisp/i2c/atomisp-mt9m114.c
++++ b/drivers/staging/media/atomisp/i2c/atomisp-mt9m114.c
+@@ -1533,16 +1533,19 @@ static struct v4l2_ctrl_config mt9m114_controls[] = {
+ static int mt9m114_detect(struct mt9m114_device *dev, struct i2c_client *client)
+ {
+ 	struct i2c_adapter *adapter = client->adapter;
+-	u32 retvalue;
++	u32 model;
++	int ret;
+ 
+ 	if (!i2c_check_functionality(adapter, I2C_FUNC_I2C)) {
+ 		dev_err(&client->dev, "%s: i2c error", __func__);
+ 		return -ENODEV;
+ 	}
+-	mt9m114_read_reg(client, MISENSOR_16BIT, (u32)MT9M114_PID, &retvalue);
+-	dev->real_model_id = retvalue;
++	ret = mt9m114_read_reg(client, MISENSOR_16BIT, MT9M114_PID, &model);
++	if (ret)
++		return ret;
++	dev->real_model_id = model;
+ 
+-	if (retvalue != MT9M114_MOD_ID) {
++	if (model != MT9M114_MOD_ID) {
+ 		dev_err(&client->dev, "%s: failed: client->addr = %x\n",
+ 			__func__, client->addr);
+ 		return -ENODEV;
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 0d7ea144a4a6d..1ed4e33cc8cf0 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2595,7 +2595,7 @@ static int lpuart_probe(struct platform_device *pdev)
+ 		return PTR_ERR(sport->port.membase);
+ 
+ 	sport->port.membase += sdata->reg_off;
+-	sport->port.mapbase = res->start;
++	sport->port.mapbase = res->start + sdata->reg_off;
+ 	sport->port.dev = &pdev->dev;
+ 	sport->port.type = PORT_LPUART;
+ 	sport->devtype = sdata->devtype;
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 5b5e99604989d..87e3f20e120b0 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -2294,8 +2294,6 @@ static int tty_fasync(int fd, struct file *filp, int on)
+  *	Locking:
+  *		Called functions take tty_ldiscs_lock
+  *		current->signal->tty check is safe without locks
+- *
+- *	FIXME: may race normal receive processing
+  */
+ 
+ static int tiocsti(struct tty_struct *tty, char __user *p)
+@@ -2311,8 +2309,10 @@ static int tiocsti(struct tty_struct *tty, char __user *p)
+ 	ld = tty_ldisc_ref_wait(tty);
+ 	if (!ld)
+ 		return -EIO;
++	tty_buffer_lock_exclusive(tty->port);
+ 	if (ld->ops->receive_buf)
+ 		ld->ops->receive_buf(tty, &ch, &mbz, 1);
++	tty_buffer_unlock_exclusive(tty->port);
+ 	tty_ldisc_deref(ld);
+ 	return 0;
+ }
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index ffe301d6ea359..d0f9b7c296b0d 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -598,6 +598,8 @@ static int dwc3_meson_g12a_otg_init(struct platform_device *pdev,
+ 				   USB_R5_ID_DIG_IRQ, 0);
+ 
+ 		irq = platform_get_irq(pdev, 0);
++		if (irq < 0)
++			return irq;
+ 		ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
+ 						dwc3_meson_g12a_irq_thread,
+ 						IRQF_ONESHOT, pdev->name, priv);
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 49e6ca94486dd..cfbb96f6627e4 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -614,6 +614,10 @@ static int dwc3_qcom_acpi_register_core(struct platform_device *pdev)
+ 		qcom->acpi_pdata->dwc3_core_base_size;
+ 
+ 	irq = platform_get_irq(pdev_irq, 0);
++	if (irq < 0) {
++		ret = irq;
++		goto out;
++	}
+ 	child_res[1].flags = IORESOURCE_IRQ;
+ 	child_res[1].start = child_res[1].end = irq;
+ 
+diff --git a/drivers/usb/gadget/udc/at91_udc.c b/drivers/usb/gadget/udc/at91_udc.c
+index eede5cedacb4a..d9ad9adf7348f 100644
+--- a/drivers/usb/gadget/udc/at91_udc.c
++++ b/drivers/usb/gadget/udc/at91_udc.c
+@@ -1876,7 +1876,9 @@ static int at91udc_probe(struct platform_device *pdev)
+ 	clk_disable(udc->iclk);
+ 
+ 	/* request UDC and maybe VBUS irqs */
+-	udc->udp_irq = platform_get_irq(pdev, 0);
++	udc->udp_irq = retval = platform_get_irq(pdev, 0);
++	if (retval < 0)
++		goto err_unprepare_iclk;
+ 	retval = devm_request_irq(dev, udc->udp_irq, at91_udc_irq, 0,
+ 				  driver_name, udc);
+ 	if (retval) {
+diff --git a/drivers/usb/gadget/udc/bdc/bdc_core.c b/drivers/usb/gadget/udc/bdc/bdc_core.c
+index 0bef6b3f049b9..fa1a3908ec3bb 100644
+--- a/drivers/usb/gadget/udc/bdc/bdc_core.c
++++ b/drivers/usb/gadget/udc/bdc/bdc_core.c
+@@ -488,27 +488,14 @@ static int bdc_probe(struct platform_device *pdev)
+ 	int irq;
+ 	u32 temp;
+ 	struct device *dev = &pdev->dev;
+-	struct clk *clk;
+ 	int phy_num;
+ 
+ 	dev_dbg(dev, "%s()\n", __func__);
+ 
+-	clk = devm_clk_get_optional(dev, "sw_usbd");
+-	if (IS_ERR(clk))
+-		return PTR_ERR(clk);
+-
+-	ret = clk_prepare_enable(clk);
+-	if (ret) {
+-		dev_err(dev, "could not enable clock\n");
+-		return ret;
+-	}
+-
+ 	bdc = devm_kzalloc(dev, sizeof(*bdc), GFP_KERNEL);
+ 	if (!bdc)
+ 		return -ENOMEM;
+ 
+-	bdc->clk = clk;
+-
+ 	bdc->regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(bdc->regs))
+ 		return PTR_ERR(bdc->regs);
+@@ -545,10 +532,20 @@ static int bdc_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	bdc->clk = devm_clk_get_optional(dev, "sw_usbd");
++	if (IS_ERR(bdc->clk))
++		return PTR_ERR(bdc->clk);
++
++	ret = clk_prepare_enable(bdc->clk);
++	if (ret) {
++		dev_err(dev, "could not enable clock\n");
++		return ret;
++	}
++
+ 	ret = bdc_phy_init(bdc);
+ 	if (ret) {
+ 		dev_err(bdc->dev, "BDC phy init failure:%d\n", ret);
+-		return ret;
++		goto disable_clk;
+ 	}
+ 
+ 	temp = bdc_readl(bdc->regs, BDC_BDCCAP1);
+@@ -560,7 +557,8 @@ static int bdc_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(dev,
+ 				"No suitable DMA config available, abort\n");
+-			return -ENOTSUPP;
++			ret = -ENOTSUPP;
++			goto phycleanup;
+ 		}
+ 		dev_dbg(dev, "Using 32-bit address\n");
+ 	}
+@@ -580,6 +578,8 @@ cleanup:
+ 	bdc_hw_exit(bdc);
+ phycleanup:
+ 	bdc_phy_exit(bdc);
++disable_clk:
++	clk_disable_unprepare(bdc->clk);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/gadget/udc/mv_u3d_core.c b/drivers/usb/gadget/udc/mv_u3d_core.c
+index 5486f5a708681..0db97fecf99e8 100644
+--- a/drivers/usb/gadget/udc/mv_u3d_core.c
++++ b/drivers/usb/gadget/udc/mv_u3d_core.c
+@@ -1921,14 +1921,6 @@ static int mv_u3d_probe(struct platform_device *dev)
+ 		goto err_get_irq;
+ 	}
+ 	u3d->irq = r->start;
+-	if (request_irq(u3d->irq, mv_u3d_irq,
+-		IRQF_SHARED, driver_name, u3d)) {
+-		u3d->irq = 0;
+-		dev_err(&dev->dev, "Request irq %d for u3d failed\n",
+-			u3d->irq);
+-		retval = -ENODEV;
+-		goto err_request_irq;
+-	}
+ 
+ 	/* initialize gadget structure */
+ 	u3d->gadget.ops = &mv_u3d_ops;	/* usb_gadget_ops */
+@@ -1941,6 +1933,15 @@ static int mv_u3d_probe(struct platform_device *dev)
+ 
+ 	mv_u3d_eps_init(u3d);
+ 
++	if (request_irq(u3d->irq, mv_u3d_irq,
++		IRQF_SHARED, driver_name, u3d)) {
++		u3d->irq = 0;
++		dev_err(&dev->dev, "Request irq %d for u3d failed\n",
++			u3d->irq);
++		retval = -ENODEV;
++		goto err_request_irq;
++	}
++
+ 	/* external vbus detection */
+ 	if (u3d->vbus) {
+ 		u3d->clock_gating = 1;
+@@ -1964,8 +1965,8 @@ static int mv_u3d_probe(struct platform_device *dev)
+ 
+ err_unregister:
+ 	free_irq(u3d->irq, u3d);
+-err_request_irq:
+ err_get_irq:
++err_request_irq:
+ 	kfree(u3d->status_req);
+ err_alloc_status_req:
+ 	kfree(u3d->eps);
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index f1b35a39d1ba8..57d417a7c3e0a 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2707,10 +2707,15 @@ static const struct renesas_usb3_priv renesas_usb3_priv_r8a77990 = {
+ 
+ static const struct of_device_id usb3_of_match[] = {
+ 	{
++		.compatible = "renesas,r8a774c0-usb3-peri",
++		.data = &renesas_usb3_priv_r8a77990,
++	}, {
+ 		.compatible = "renesas,r8a7795-usb3-peri",
+ 		.data = &renesas_usb3_priv_gen3,
+-	},
+-	{
++	}, {
++		.compatible = "renesas,r8a77990-usb3-peri",
++		.data = &renesas_usb3_priv_r8a77990,
++	}, {
+ 		.compatible = "renesas,rcar-gen3-usb3-peri",
+ 		.data = &renesas_usb3_priv_gen3,
+ 	},
+@@ -2719,18 +2724,10 @@ static const struct of_device_id usb3_of_match[] = {
+ MODULE_DEVICE_TABLE(of, usb3_of_match);
+ 
+ static const struct soc_device_attribute renesas_usb3_quirks_match[] = {
+-	{
+-		.soc_id = "r8a774c0",
+-		.data = &renesas_usb3_priv_r8a77990,
+-	},
+ 	{
+ 		.soc_id = "r8a7795", .revision = "ES1.*",
+ 		.data = &renesas_usb3_priv_r8a7795_es1,
+ 	},
+-	{
+-		.soc_id = "r8a77990",
+-		.data = &renesas_usb3_priv_r8a77990,
+-	},
+ 	{ /* sentinel */ },
+ };
+ 
+diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c
+index b154b62abefa1..82c4f3fb2daec 100644
+--- a/drivers/usb/gadget/udc/s3c2410_udc.c
++++ b/drivers/usb/gadget/udc/s3c2410_udc.c
+@@ -1784,6 +1784,10 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
+ 	s3c2410_udc_reinit(udc);
+ 
+ 	irq_usbd = platform_get_irq(pdev, 0);
++	if (irq_usbd < 0) {
++		retval = irq_usbd;
++		goto err_udc_clk;
++	}
+ 
+ 	/* irq setup after old hardware state is cleaned up */
+ 	retval = request_irq(irq_usbd, s3c2410_udc_irq,
+diff --git a/drivers/usb/host/ehci-orion.c b/drivers/usb/host/ehci-orion.c
+index a319b1df3011c..3626758b3e2aa 100644
+--- a/drivers/usb/host/ehci-orion.c
++++ b/drivers/usb/host/ehci-orion.c
+@@ -264,8 +264,11 @@ static int ehci_orion_drv_probe(struct platform_device *pdev)
+ 	 * the clock does not exists.
+ 	 */
+ 	priv->clk = devm_clk_get(&pdev->dev, NULL);
+-	if (!IS_ERR(priv->clk))
+-		clk_prepare_enable(priv->clk);
++	if (!IS_ERR(priv->clk)) {
++		err = clk_prepare_enable(priv->clk);
++		if (err)
++			goto err_put_hcd;
++	}
+ 
+ 	priv->phy = devm_phy_optional_get(&pdev->dev, "usb");
+ 	if (IS_ERR(priv->phy)) {
+@@ -311,6 +314,7 @@ static int ehci_orion_drv_probe(struct platform_device *pdev)
+ err_dis_clk:
+ 	if (!IS_ERR(priv->clk))
+ 		clk_disable_unprepare(priv->clk);
++err_put_hcd:
+ 	usb_put_hcd(hcd);
+ err:
+ 	dev_err(&pdev->dev, "init %s fail, %d\n",
+diff --git a/drivers/usb/host/ohci-tmio.c b/drivers/usb/host/ohci-tmio.c
+index 7f857bad9e95b..08ec2ab0d95a5 100644
+--- a/drivers/usb/host/ohci-tmio.c
++++ b/drivers/usb/host/ohci-tmio.c
+@@ -202,6 +202,9 @@ static int ohci_hcd_tmio_drv_probe(struct platform_device *dev)
+ 	if (!cell)
+ 		return -EINVAL;
+ 
++	if (irq < 0)
++		return irq;
++
+ 	hcd = usb_create_hcd(&ohci_tmio_hc_driver, &dev->dev, dev_name(&dev->dev));
+ 	if (!hcd) {
+ 		ret = -ENOMEM;
+diff --git a/drivers/usb/misc/brcmstb-usb-pinmap.c b/drivers/usb/misc/brcmstb-usb-pinmap.c
+index 336653091e3b3..2b2019c19cdeb 100644
+--- a/drivers/usb/misc/brcmstb-usb-pinmap.c
++++ b/drivers/usb/misc/brcmstb-usb-pinmap.c
+@@ -293,6 +293,8 @@ static int __init brcmstb_usb_pinmap_probe(struct platform_device *pdev)
+ 
+ 		/* Enable interrupt for out pins */
+ 		irq = platform_get_irq(pdev, 0);
++		if (irq < 0)
++			return irq;
+ 		err = devm_request_irq(&pdev->dev, irq,
+ 				       brcmstb_usb_pinmap_ovr_isr,
+ 				       IRQF_TRIGGER_RISING,
+diff --git a/drivers/usb/phy/phy-fsl-usb.c b/drivers/usb/phy/phy-fsl-usb.c
+index f34c9437a182c..972704262b02b 100644
+--- a/drivers/usb/phy/phy-fsl-usb.c
++++ b/drivers/usb/phy/phy-fsl-usb.c
+@@ -873,6 +873,8 @@ int usb_otg_start(struct platform_device *pdev)
+ 
+ 	/* request irq */
+ 	p_otg->irq = platform_get_irq(pdev, 0);
++	if (p_otg->irq < 0)
++		return p_otg->irq;
+ 	status = request_irq(p_otg->irq, fsl_otg_isr,
+ 				IRQF_SHARED, driver_name, p_otg);
+ 	if (status) {
+diff --git a/drivers/usb/phy/phy-tahvo.c b/drivers/usb/phy/phy-tahvo.c
+index baebb1f5a9737..a3e043e3e4aae 100644
+--- a/drivers/usb/phy/phy-tahvo.c
++++ b/drivers/usb/phy/phy-tahvo.c
+@@ -393,7 +393,9 @@ static int tahvo_usb_probe(struct platform_device *pdev)
+ 
+ 	dev_set_drvdata(&pdev->dev, tu);
+ 
+-	tu->irq = platform_get_irq(pdev, 0);
++	tu->irq = ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		return ret;
+ 	ret = request_threaded_irq(tu->irq, NULL, tahvo_usb_vbus_interrupt,
+ 				   IRQF_ONESHOT,
+ 				   "tahvo-vbus", tu);
+diff --git a/drivers/usb/phy/phy-twl6030-usb.c b/drivers/usb/phy/phy-twl6030-usb.c
+index 8ba6c5a915570..ab3c38a7d8ac0 100644
+--- a/drivers/usb/phy/phy-twl6030-usb.c
++++ b/drivers/usb/phy/phy-twl6030-usb.c
+@@ -348,6 +348,11 @@ static int twl6030_usb_probe(struct platform_device *pdev)
+ 	twl->irq2		= platform_get_irq(pdev, 1);
+ 	twl->linkstat		= MUSB_UNKNOWN;
+ 
++	if (twl->irq1 < 0)
++		return twl->irq1;
++	if (twl->irq2 < 0)
++		return twl->irq2;
++
+ 	twl->comparator.set_vbus	= twl6030_set_vbus;
+ 	twl->comparator.start_srp	= twl6030_start_srp;
+ 
+diff --git a/drivers/video/backlight/pwm_bl.c b/drivers/video/backlight/pwm_bl.c
+index e48fded3e414c..8d8959a70e440 100644
+--- a/drivers/video/backlight/pwm_bl.c
++++ b/drivers/video/backlight/pwm_bl.c
+@@ -409,6 +409,33 @@ static bool pwm_backlight_is_linear(struct platform_pwm_backlight_data *data)
+ static int pwm_backlight_initial_power_state(const struct pwm_bl_data *pb)
+ {
+ 	struct device_node *node = pb->dev->of_node;
++	bool active = true;
++
++	/*
++	 * If the enable GPIO is present, observable (either as input
++	 * or output) and off, then the backlight is not currently active.
++	 */
++	if (pb->enable_gpio && gpiod_get_value_cansleep(pb->enable_gpio) == 0)
++		active = false;
++
++	if (!regulator_is_enabled(pb->power_supply))
++		active = false;
++
++	if (!pwm_is_enabled(pb->pwm))
++		active = false;
++
++	/*
++	 * Synchronize the enable_gpio with the observed state of the
++	 * hardware.
++	 */
++	if (pb->enable_gpio)
++		gpiod_direction_output(pb->enable_gpio, active);
++
++	/*
++	 * Do not change pb->enabled here! pb->enabled essentially
++	 * tells us if we own one of the regulator's use counts and
++	 * right now we do not.
++	 */
+ 
+ 	/* Not booted with device tree or no phandle link to the node */
+ 	if (!node || !node->phandle)
+@@ -420,20 +447,7 @@ static int pwm_backlight_initial_power_state(const struct pwm_bl_data *pb)
+ 	 * assume that another driver will enable the backlight at the
+ 	 * appropriate time. Therefore, if it is disabled, keep it so.
+ 	 */
+-
+-	/* if the enable GPIO is disabled, do not enable the backlight */
+-	if (pb->enable_gpio && gpiod_get_value_cansleep(pb->enable_gpio) == 0)
+-		return FB_BLANK_POWERDOWN;
+-
+-	/* The regulator is disabled, do not enable the backlight */
+-	if (!regulator_is_enabled(pb->power_supply))
+-		return FB_BLANK_POWERDOWN;
+-
+-	/* The PWM is disabled, keep it like this */
+-	if (!pwm_is_enabled(pb->pwm))
+-		return FB_BLANK_POWERDOWN;
+-
+-	return FB_BLANK_UNBLANK;
++	return active ? FB_BLANK_UNBLANK : FB_BLANK_POWERDOWN;
+ }
+ 
+ static int pwm_backlight_probe(struct platform_device *pdev)
+@@ -486,18 +500,6 @@ static int pwm_backlight_probe(struct platform_device *pdev)
+ 		goto err_alloc;
+ 	}
+ 
+-	/*
+-	 * If the GPIO is not known to be already configured as output, that
+-	 * is, if gpiod_get_direction returns either 1 or -EINVAL, change the
+-	 * direction to output and set the GPIO as active.
+-	 * Do not force the GPIO to active when it was already output as it
+-	 * could cause backlight flickering or we would enable the backlight too
+-	 * early. Leave the decision of the initial backlight state for later.
+-	 */
+-	if (pb->enable_gpio &&
+-	    gpiod_get_direction(pb->enable_gpio) != 0)
+-		gpiod_direction_output(pb->enable_gpio, 1);
+-
+ 	pb->power_supply = devm_regulator_get(&pdev->dev, "power");
+ 	if (IS_ERR(pb->power_supply)) {
+ 		ret = PTR_ERR(pb->power_supply);
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 1c855145711ba..63e2f17f3c619 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -962,6 +962,7 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ 	struct fb_var_screeninfo old_var;
+ 	struct fb_videomode mode;
+ 	struct fb_event event;
++	u32 unused;
+ 
+ 	if (var->activate & FB_ACTIVATE_INV_MODE) {
+ 		struct fb_videomode mode1, mode2;
+@@ -1008,6 +1009,11 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var)
+ 	if (var->xres < 8 || var->yres < 8)
+ 		return -EINVAL;
+ 
++	/* An overly large resolution causes multiplication overflow. */
++	if (check_mul_overflow(var->xres, var->yres, &unused) ||
++	    check_mul_overflow(var->xres_virtual, var->yres_virtual, &unused))
++		return -EINVAL;
++
+ 	ret = info->fbops->fb_check_var(var, info);
+ 
+ 	if (ret)
+diff --git a/fs/cifs/cifs_unicode.c b/fs/cifs/cifs_unicode.c
+index 9bd03a2310328..171ad8b42107e 100644
+--- a/fs/cifs/cifs_unicode.c
++++ b/fs/cifs/cifs_unicode.c
+@@ -358,14 +358,9 @@ cifs_strndup_from_utf16(const char *src, const int maxlen,
+ 		if (!dst)
+ 			return NULL;
+ 		cifs_from_utf16(dst, (__le16 *) src, len, maxlen, codepage,
+-			       NO_MAP_UNI_RSVD);
++				NO_MAP_UNI_RSVD);
+ 	} else {
+-		len = strnlen(src, maxlen);
+-		len++;
+-		dst = kmalloc(len, GFP_KERNEL);
+-		if (!dst)
+-			return NULL;
+-		strlcpy(dst, src, len);
++		dst = kstrndup(src, maxlen, GFP_KERNEL);
+ 	}
+ 
+ 	return dst;
+diff --git a/fs/cifs/fs_context.c b/fs/cifs/fs_context.c
+index 72742eb1df4a7..626bb7c552065 100644
+--- a/fs/cifs/fs_context.c
++++ b/fs/cifs/fs_context.c
+@@ -1259,10 +1259,17 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 			ctx->posix_paths = 1;
+ 		break;
+ 	case Opt_unix:
+-		if (result.negated)
++		if (result.negated) {
++			if (ctx->linux_ext == 1)
++				pr_warn_once("conflicting posix mount options specified\n");
+ 			ctx->linux_ext = 0;
+-		else
+ 			ctx->no_linux_ext = 1;
++		} else {
++			if (ctx->no_linux_ext == 1)
++				pr_warn_once("conflicting posix mount options specified\n");
++			ctx->linux_ext = 1;
++			ctx->no_linux_ext = 0;
++		}
+ 		break;
+ 	case Opt_nocase:
+ 		ctx->nocase = 1;
+diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c
+index 63bfc533c9fb6..6927b68fe8528 100644
+--- a/fs/cifs/readdir.c
++++ b/fs/cifs/readdir.c
+@@ -381,7 +381,7 @@ int get_symlink_reparse_path(char *full_path, struct cifs_sb_info *cifs_sb,
+  */
+ 
+ static int
+-initiate_cifs_search(const unsigned int xid, struct file *file,
++_initiate_cifs_search(const unsigned int xid, struct file *file,
+ 		     const char *full_path)
+ {
+ 	__u16 search_flags;
+@@ -463,6 +463,27 @@ error_exit:
+ 	return rc;
+ }
+ 
++static int
++initiate_cifs_search(const unsigned int xid, struct file *file,
++		     const char *full_path)
++{
++	int rc, retry_count = 0;
++
++	do {
++		rc = _initiate_cifs_search(xid, file, full_path);
++		/*
++		 * If we don't have enough credits to start reading the
++		 * directory just try again after short wait.
++		 */
++		if (rc != -EDEADLK)
++			break;
++
++		usleep_range(512, 2048);
++	} while (retry_count++ < 5);
++
++	return rc;
++}
++
+ /* return length of unicode string in bytes */
+ static int cifs_unicode_bytelen(const char *str)
+ {
+diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
+index ba7c01cd9a5d2..36f2dbe6061fe 100644
+--- a/fs/debugfs/file.c
++++ b/fs/debugfs/file.c
+@@ -179,8 +179,10 @@ static int open_proxy_open(struct inode *inode, struct file *filp)
+ 	if (!fops_get(real_fops)) {
+ #ifdef CONFIG_MODULES
+ 		if (real_fops->owner &&
+-		    real_fops->owner->state == MODULE_STATE_GOING)
++		    real_fops->owner->state == MODULE_STATE_GOING) {
++			r = -ENXIO;
+ 			goto out;
++		}
+ #endif
+ 
+ 		/* Huh? Module did not clean up after itself at exit? */
+@@ -314,8 +316,10 @@ static int full_proxy_open(struct inode *inode, struct file *filp)
+ 	if (!fops_get(real_fops)) {
+ #ifdef CONFIG_MODULES
+ 		if (real_fops->owner &&
+-		    real_fops->owner->state == MODULE_STATE_GOING)
++		    real_fops->owner->state == MODULE_STATE_GOING) {
++			r = -ENXIO;
+ 			goto out;
++		}
+ #endif
+ 
+ 		/* Huh? Module did not cleanup after itself at exit? */
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index ceb575f99048c..fb27d49e4da72 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -263,8 +263,7 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
+ 	};
+ 	unsigned int seq_id = 0;
+ 
+-	if (unlikely(f2fs_readonly(inode->i_sb) ||
+-				is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
++	if (unlikely(f2fs_readonly(inode->i_sb)))
+ 		return 0;
+ 
+ 	trace_f2fs_sync_file_enter(inode);
+@@ -278,7 +277,7 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
+ 	ret = file_write_and_wait_range(file, start, end);
+ 	clear_inode_flag(inode, FI_NEED_IPU);
+ 
+-	if (ret) {
++	if (ret || is_sbi_flag_set(sbi, SBI_CP_DISABLED)) {
+ 		trace_f2fs_sync_file_exit(inode, cp_reason, datasync, ret);
+ 		return ret;
+ 	}
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index b29de80ab60e8..8553e8e5de0da 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1923,8 +1923,17 @@ restore_flag:
+ 
+ static void f2fs_enable_checkpoint(struct f2fs_sb_info *sbi)
+ {
++	int retry = DEFAULT_RETRY_IO_COUNT;
++
+ 	/* we should flush all the data to keep data consistency */
+-	sync_inodes_sb(sbi->sb);
++	do {
++		sync_inodes_sb(sbi->sb);
++		cond_resched();
++		congestion_wait(BLK_RW_ASYNC, DEFAULT_IO_TIMEOUT);
++	} while (get_pages(sbi, F2FS_DIRTY_DATA) && retry--);
++
++	if (unlikely(retry < 0))
++		f2fs_warn(sbi, "checkpoint=enable has some unwritten data.");
+ 
+ 	down_write(&sbi->gc_lock);
+ 	f2fs_dirty_to_prefree(sbi);
+diff --git a/fs/fcntl.c b/fs/fcntl.c
+index dfc72f15be7fc..887db4918a899 100644
+--- a/fs/fcntl.c
++++ b/fs/fcntl.c
+@@ -150,7 +150,8 @@ void f_delown(struct file *filp)
+ pid_t f_getown(struct file *filp)
+ {
+ 	pid_t pid = 0;
+-	read_lock(&filp->f_owner.lock);
++
++	read_lock_irq(&filp->f_owner.lock);
+ 	rcu_read_lock();
+ 	if (pid_task(filp->f_owner.pid, filp->f_owner.pid_type)) {
+ 		pid = pid_vnr(filp->f_owner.pid);
+@@ -158,7 +159,7 @@ pid_t f_getown(struct file *filp)
+ 			pid = -pid;
+ 	}
+ 	rcu_read_unlock();
+-	read_unlock(&filp->f_owner.lock);
++	read_unlock_irq(&filp->f_owner.lock);
+ 	return pid;
+ }
+ 
+@@ -208,7 +209,7 @@ static int f_getown_ex(struct file *filp, unsigned long arg)
+ 	struct f_owner_ex owner = {};
+ 	int ret = 0;
+ 
+-	read_lock(&filp->f_owner.lock);
++	read_lock_irq(&filp->f_owner.lock);
+ 	rcu_read_lock();
+ 	if (pid_task(filp->f_owner.pid, filp->f_owner.pid_type))
+ 		owner.pid = pid_vnr(filp->f_owner.pid);
+@@ -231,7 +232,7 @@ static int f_getown_ex(struct file *filp, unsigned long arg)
+ 		ret = -EINVAL;
+ 		break;
+ 	}
+-	read_unlock(&filp->f_owner.lock);
++	read_unlock_irq(&filp->f_owner.lock);
+ 
+ 	if (!ret) {
+ 		ret = copy_to_user(owner_p, &owner, sizeof(owner));
+@@ -249,10 +250,10 @@ static int f_getowner_uids(struct file *filp, unsigned long arg)
+ 	uid_t src[2];
+ 	int err;
+ 
+-	read_lock(&filp->f_owner.lock);
++	read_lock_irq(&filp->f_owner.lock);
+ 	src[0] = from_kuid(user_ns, filp->f_owner.uid);
+ 	src[1] = from_kuid(user_ns, filp->f_owner.euid);
+-	read_unlock(&filp->f_owner.lock);
++	read_unlock_irq(&filp->f_owner.lock);
+ 
+ 	err  = put_user(src[0], &dst[0]);
+ 	err |= put_user(src[1], &dst[1]);
+@@ -1003,13 +1004,14 @@ static void kill_fasync_rcu(struct fasync_struct *fa, int sig, int band)
+ {
+ 	while (fa) {
+ 		struct fown_struct *fown;
++		unsigned long flags;
+ 
+ 		if (fa->magic != FASYNC_MAGIC) {
+ 			printk(KERN_ERR "kill_fasync: bad magic number in "
+ 			       "fasync_struct!\n");
+ 			return;
+ 		}
+-		read_lock(&fa->fa_lock);
++		read_lock_irqsave(&fa->fa_lock, flags);
+ 		if (fa->fa_file) {
+ 			fown = &fa->fa_file->f_owner;
+ 			/* Don't send SIGURG to processes which have not set a
+@@ -1018,7 +1020,7 @@ static void kill_fasync_rcu(struct fasync_struct *fa, int sig, int band)
+ 			if (!(sig == SIGURG && fown->signum == 0))
+ 				send_sigio(fown, fa->fa_fd, band);
+ 		}
+-		read_unlock(&fa->fa_lock);
++		read_unlock_irqrestore(&fa->fa_lock, flags);
+ 		fa = rcu_dereference(fa->fa_next);
+ 	}
+ }
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 09ef2a4d25edc..f4e1a6387f90d 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -198,12 +198,11 @@ void fuse_finish_open(struct inode *inode, struct file *file)
+ 	struct fuse_file *ff = file->private_data;
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 
+-	if (!(ff->open_flags & FOPEN_KEEP_CACHE))
+-		invalidate_inode_pages2(inode->i_mapping);
+ 	if (ff->open_flags & FOPEN_STREAM)
+ 		stream_open(inode, file);
+ 	else if (ff->open_flags & FOPEN_NONSEEKABLE)
+ 		nonseekable_open(inode, file);
++
+ 	if (fc->atomic_o_trunc && (file->f_flags & O_TRUNC)) {
+ 		struct fuse_inode *fi = get_fuse_inode(inode);
+ 
+@@ -211,10 +210,14 @@ void fuse_finish_open(struct inode *inode, struct file *file)
+ 		fi->attr_version = atomic64_inc_return(&fc->attr_version);
+ 		i_size_write(inode, 0);
+ 		spin_unlock(&fi->lock);
++		truncate_pagecache(inode, 0);
+ 		fuse_invalidate_attr(inode);
+ 		if (fc->writeback_cache)
+ 			file_update_time(file);
++	} else if (!(ff->open_flags & FOPEN_KEEP_CACHE)) {
++		invalidate_inode_pages2(inode->i_mapping);
+ 	}
++
+ 	if ((file->f_mode & FMODE_WRITE) && fc->writeback_cache)
+ 		fuse_link_write_file(file);
+ }
+@@ -389,6 +392,7 @@ struct fuse_writepage_args {
+ 	struct list_head queue_entry;
+ 	struct fuse_writepage_args *next;
+ 	struct inode *inode;
++	struct fuse_sync_bucket *bucket;
+ };
+ 
+ static struct fuse_writepage_args *fuse_find_writeback(struct fuse_inode *fi,
+@@ -1610,6 +1614,9 @@ static void fuse_writepage_free(struct fuse_writepage_args *wpa)
+ 	struct fuse_args_pages *ap = &wpa->ia.ap;
+ 	int i;
+ 
++	if (wpa->bucket)
++		fuse_sync_bucket_dec(wpa->bucket);
++
+ 	for (i = 0; i < ap->num_pages; i++)
+ 		__free_page(ap->pages[i]);
+ 
+@@ -1873,6 +1880,20 @@ static struct fuse_writepage_args *fuse_writepage_args_alloc(void)
+ 
+ }
+ 
++static void fuse_writepage_add_to_bucket(struct fuse_conn *fc,
++					 struct fuse_writepage_args *wpa)
++{
++	if (!fc->sync_fs)
++		return;
++
++	rcu_read_lock();
++	/* Prevent resurrection of dead bucket in unlikely race with syncfs */
++	do {
++		wpa->bucket = rcu_dereference(fc->curr_bucket);
++	} while (unlikely(!atomic_inc_not_zero(&wpa->bucket->count)));
++	rcu_read_unlock();
++}
++
+ static int fuse_writepage_locked(struct page *page)
+ {
+ 	struct address_space *mapping = page->mapping;
+@@ -1900,6 +1921,7 @@ static int fuse_writepage_locked(struct page *page)
+ 	if (!wpa->ia.ff)
+ 		goto err_nofile;
+ 
++	fuse_writepage_add_to_bucket(fc, wpa);
+ 	fuse_write_args_fill(&wpa->ia, wpa->ia.ff, page_offset(page), 0);
+ 
+ 	copy_highpage(tmp_page, page);
+@@ -2150,6 +2172,8 @@ static int fuse_writepages_fill(struct page *page,
+ 			__free_page(tmp_page);
+ 			goto out_unlock;
+ 		}
++		fuse_writepage_add_to_bucket(fc, wpa);
++
+ 		data->max_pages = 1;
+ 
+ 		ap = &wpa->ia.ap;
+@@ -2883,7 +2907,7 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 
+ static int fuse_writeback_range(struct inode *inode, loff_t start, loff_t end)
+ {
+-	int err = filemap_write_and_wait_range(inode->i_mapping, start, end);
++	int err = filemap_write_and_wait_range(inode->i_mapping, start, -1);
+ 
+ 	if (!err)
+ 		fuse_sync_writes(inode);
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 120f9c5908d19..e840c4a5f9f5b 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -515,6 +515,13 @@ struct fuse_fs_context {
+ 	void **fudptr;
+ };
+ 
++struct fuse_sync_bucket {
++	/* count is a possible scalability bottleneck */
++	atomic_t count;
++	wait_queue_head_t waitq;
++	struct rcu_head rcu;
++};
++
+ /**
+  * A Fuse connection.
+  *
+@@ -807,6 +814,9 @@ struct fuse_conn {
+ 
+ 	/** List of filesystems using this connection */
+ 	struct list_head mounts;
++
++	/* New writepages go into this bucket */
++	struct fuse_sync_bucket __rcu *curr_bucket;
+ };
+ 
+ /*
+@@ -910,6 +920,15 @@ static inline void fuse_page_descs_length_init(struct fuse_page_desc *descs,
+ 		descs[i].length = PAGE_SIZE - descs[i].offset;
+ }
+ 
++static inline void fuse_sync_bucket_dec(struct fuse_sync_bucket *bucket)
++{
++	/* Need RCU protection to prevent use after free after the decrement */
++	rcu_read_lock();
++	if (atomic_dec_and_test(&bucket->count))
++		wake_up(&bucket->waitq);
++	rcu_read_unlock();
++}
++
+ /** Device operations */
+ extern const struct file_operations fuse_dev_operations;
+ 
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index cf16d6d3a6038..eda92e3d26b87 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -506,6 +506,57 @@ static int fuse_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	return err;
+ }
+ 
++static struct fuse_sync_bucket *fuse_sync_bucket_alloc(void)
++{
++	struct fuse_sync_bucket *bucket;
++
++	bucket = kzalloc(sizeof(*bucket), GFP_KERNEL | __GFP_NOFAIL);
++	if (bucket) {
++		init_waitqueue_head(&bucket->waitq);
++		/* Initial active count */
++		atomic_set(&bucket->count, 1);
++	}
++	return bucket;
++}
++
++static void fuse_sync_fs_writes(struct fuse_conn *fc)
++{
++	struct fuse_sync_bucket *bucket, *new_bucket;
++	int count;
++
++	new_bucket = fuse_sync_bucket_alloc();
++	spin_lock(&fc->lock);
++	bucket = rcu_dereference_protected(fc->curr_bucket, 1);
++	count = atomic_read(&bucket->count);
++	WARN_ON(count < 1);
++	/* No outstanding writes? */
++	if (count == 1) {
++		spin_unlock(&fc->lock);
++		kfree(new_bucket);
++		return;
++	}
++
++	/*
++	 * Completion of new bucket depends on completion of this bucket, so add
++	 * one more count.
++	 */
++	atomic_inc(&new_bucket->count);
++	rcu_assign_pointer(fc->curr_bucket, new_bucket);
++	spin_unlock(&fc->lock);
++	/*
++	 * Drop initial active count.  At this point if all writes in this and
++	 * ancestor buckets complete, the count will go to zero and this task
++	 * will be woken up.
++	 */
++	atomic_dec(&bucket->count);
++
++	wait_event(bucket->waitq, atomic_read(&bucket->count) == 0);
++
++	/* Drop temp count on descendant bucket */
++	fuse_sync_bucket_dec(new_bucket);
++	kfree_rcu(bucket, rcu);
++}
++
+ static int fuse_sync_fs(struct super_block *sb, int wait)
+ {
+ 	struct fuse_mount *fm = get_fuse_mount_super(sb);
+@@ -528,6 +579,8 @@ static int fuse_sync_fs(struct super_block *sb, int wait)
+ 	if (!fc->sync_fs)
+ 		return 0;
+ 
++	fuse_sync_fs_writes(fc);
++
+ 	memset(&inarg, 0, sizeof(inarg));
+ 	args.in_numargs = 1;
+ 	args.in_args[0].size = sizeof(inarg);
+@@ -763,6 +816,7 @@ void fuse_conn_put(struct fuse_conn *fc)
+ {
+ 	if (refcount_dec_and_test(&fc->count)) {
+ 		struct fuse_iqueue *fiq = &fc->iq;
++		struct fuse_sync_bucket *bucket;
+ 
+ 		if (IS_ENABLED(CONFIG_FUSE_DAX))
+ 			fuse_dax_conn_free(fc);
+@@ -770,6 +824,11 @@ void fuse_conn_put(struct fuse_conn *fc)
+ 			fiq->ops->release(fiq);
+ 		put_pid_ns(fc->pid_ns);
+ 		put_user_ns(fc->user_ns);
++		bucket = rcu_dereference_protected(fc->curr_bucket, 1);
++		if (bucket) {
++			WARN_ON(atomic_read(&bucket->count) != 1);
++			kfree(bucket);
++		}
+ 		fc->release(fc);
+ 	}
+ }
+@@ -1366,6 +1425,7 @@ int fuse_fill_super_common(struct super_block *sb, struct fuse_fs_context *ctx)
+ 	if (sb->s_flags & SB_MANDLOCK)
+ 		goto err;
+ 
++	rcu_assign_pointer(fc->curr_bucket, fuse_sync_bucket_alloc());
+ 	fuse_sb_defaults(sb);
+ 
+ 	if (ctx->is_bdev) {
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index 5f4504dd0875a..ca76e3b8792ce 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -677,6 +677,7 @@ static int init_statfs(struct gfs2_sbd *sdp)
+ 			error = PTR_ERR(lsi->si_sc_inode);
+ 			fs_err(sdp, "can't find local \"sc\" file#%u: %d\n",
+ 			       jd->jd_jid, error);
++			kfree(lsi);
+ 			goto free_local;
+ 		}
+ 		lsi->si_jid = jd->jd_jid;
+@@ -1088,6 +1089,34 @@ void gfs2_online_uevent(struct gfs2_sbd *sdp)
+ 	kobject_uevent_env(&sdp->sd_kobj, KOBJ_ONLINE, envp);
+ }
+ 
++static int init_threads(struct gfs2_sbd *sdp)
++{
++	struct task_struct *p;
++	int error = 0;
++
++	p = kthread_run(gfs2_logd, sdp, "gfs2_logd");
++	if (IS_ERR(p)) {
++		error = PTR_ERR(p);
++		fs_err(sdp, "can't start logd thread: %d\n", error);
++		return error;
++	}
++	sdp->sd_logd_process = p;
++
++	p = kthread_run(gfs2_quotad, sdp, "gfs2_quotad");
++	if (IS_ERR(p)) {
++		error = PTR_ERR(p);
++		fs_err(sdp, "can't start quotad thread: %d\n", error);
++		goto fail;
++	}
++	sdp->sd_quotad_process = p;
++	return 0;
++
++fail:
++	kthread_stop(sdp->sd_logd_process);
++	sdp->sd_logd_process = NULL;
++	return error;
++}
++
+ /**
+  * gfs2_fill_super - Read in superblock
+  * @sb: The VFS superblock
+@@ -1216,6 +1245,14 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 		goto fail_per_node;
+ 	}
+ 
++	if (!sb_rdonly(sb)) {
++		error = init_threads(sdp);
++		if (error) {
++			gfs2_withdraw_delayed(sdp);
++			goto fail_per_node;
++		}
++	}
++
+ 	error = gfs2_freeze_lock(sdp, &freeze_gh, 0);
+ 	if (error)
+ 		goto fail_per_node;
+@@ -1225,6 +1262,12 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ 
+ 	gfs2_freeze_unlock(&freeze_gh);
+ 	if (error) {
++		if (sdp->sd_quotad_process)
++			kthread_stop(sdp->sd_quotad_process);
++		sdp->sd_quotad_process = NULL;
++		if (sdp->sd_logd_process)
++			kthread_stop(sdp->sd_logd_process);
++		sdp->sd_logd_process = NULL;
+ 		fs_err(sdp, "can't make FS RW: %d\n", error);
+ 		goto fail_per_node;
+ 	}
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 4d4ceb0b69031..2bdbba5ea8d79 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -119,34 +119,6 @@ int gfs2_jdesc_check(struct gfs2_jdesc *jd)
+ 	return 0;
+ }
+ 
+-static int init_threads(struct gfs2_sbd *sdp)
+-{
+-	struct task_struct *p;
+-	int error = 0;
+-
+-	p = kthread_run(gfs2_logd, sdp, "gfs2_logd");
+-	if (IS_ERR(p)) {
+-		error = PTR_ERR(p);
+-		fs_err(sdp, "can't start logd thread: %d\n", error);
+-		return error;
+-	}
+-	sdp->sd_logd_process = p;
+-
+-	p = kthread_run(gfs2_quotad, sdp, "gfs2_quotad");
+-	if (IS_ERR(p)) {
+-		error = PTR_ERR(p);
+-		fs_err(sdp, "can't start quotad thread: %d\n", error);
+-		goto fail;
+-	}
+-	sdp->sd_quotad_process = p;
+-	return 0;
+-
+-fail:
+-	kthread_stop(sdp->sd_logd_process);
+-	sdp->sd_logd_process = NULL;
+-	return error;
+-}
+-
+ /**
+  * gfs2_make_fs_rw - Turn a Read-Only FS into a Read-Write one
+  * @sdp: the filesystem
+@@ -161,26 +133,17 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ 	struct gfs2_log_header_host head;
+ 	int error;
+ 
+-	error = init_threads(sdp);
+-	if (error) {
+-		gfs2_withdraw_delayed(sdp);
+-		return error;
+-	}
+-
+ 	j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
+-	if (gfs2_withdrawn(sdp)) {
+-		error = -EIO;
+-		goto fail;
+-	}
++	if (gfs2_withdrawn(sdp))
++		return -EIO;
+ 
+ 	error = gfs2_find_jhead(sdp->sd_jdesc, &head, false);
+ 	if (error || gfs2_withdrawn(sdp))
+-		goto fail;
++		return error;
+ 
+ 	if (!(head.lh_flags & GFS2_LOG_HEAD_UNMOUNT)) {
+ 		gfs2_consist(sdp);
+-		error = -EIO;
+-		goto fail;
++		return -EIO;
+ 	}
+ 
+ 	/*  Initialize some head of the log stuff  */
+@@ -188,20 +151,8 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
+ 	gfs2_log_pointers_init(sdp, head.lh_blkno);
+ 
+ 	error = gfs2_quota_init(sdp);
+-	if (error || gfs2_withdrawn(sdp))
+-		goto fail;
+-
+-	set_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
+-
+-	return 0;
+-
+-fail:
+-	if (sdp->sd_quotad_process)
+-		kthread_stop(sdp->sd_quotad_process);
+-	sdp->sd_quotad_process = NULL;
+-	if (sdp->sd_logd_process)
+-		kthread_stop(sdp->sd_logd_process);
+-	sdp->sd_logd_process = NULL;
++	if (!error && !gfs2_withdrawn(sdp))
++		set_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
+ 	return error;
+ }
+ 
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index 91b0d1fb90eb3..c7171d9758968 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -53,6 +53,10 @@ struct io_worker {
+ 
+ 	struct completion ref_done;
+ 
++	unsigned long create_state;
++	struct callback_head create_work;
++	int create_index;
++
+ 	struct rcu_head rcu;
+ };
+ 
+@@ -273,24 +277,18 @@ static void io_wqe_inc_running(struct io_worker *worker)
+ 	atomic_inc(&acct->nr_running);
+ }
+ 
+-struct create_worker_data {
+-	struct callback_head work;
+-	struct io_wqe *wqe;
+-	int index;
+-};
+-
+ static void create_worker_cb(struct callback_head *cb)
+ {
+-	struct create_worker_data *cwd;
++	struct io_worker *worker;
+ 	struct io_wq *wq;
+ 	struct io_wqe *wqe;
+ 	struct io_wqe_acct *acct;
+ 	bool do_create = false, first = false;
+ 
+-	cwd = container_of(cb, struct create_worker_data, work);
+-	wqe = cwd->wqe;
++	worker = container_of(cb, struct io_worker, create_work);
++	wqe = worker->wqe;
+ 	wq = wqe->wq;
+-	acct = &wqe->acct[cwd->index];
++	acct = &wqe->acct[worker->create_index];
+ 	raw_spin_lock_irq(&wqe->lock);
+ 	if (acct->nr_workers < acct->max_workers) {
+ 		if (!acct->nr_workers)
+@@ -300,33 +298,42 @@ static void create_worker_cb(struct callback_head *cb)
+ 	}
+ 	raw_spin_unlock_irq(&wqe->lock);
+ 	if (do_create) {
+-		create_io_worker(wq, wqe, cwd->index, first);
++		create_io_worker(wq, wqe, worker->create_index, first);
+ 	} else {
+ 		atomic_dec(&acct->nr_running);
+ 		io_worker_ref_put(wq);
+ 	}
+-	kfree(cwd);
++	clear_bit_unlock(0, &worker->create_state);
++	io_worker_release(worker);
+ }
+ 
+-static void io_queue_worker_create(struct io_wqe *wqe, struct io_wqe_acct *acct)
++static void io_queue_worker_create(struct io_wqe *wqe, struct io_worker *worker,
++				   struct io_wqe_acct *acct)
+ {
+-	struct create_worker_data *cwd;
+ 	struct io_wq *wq = wqe->wq;
+ 
+ 	/* raced with exit, just ignore create call */
+ 	if (test_bit(IO_WQ_BIT_EXIT, &wq->state))
+ 		goto fail;
++	if (!io_worker_get(worker))
++		goto fail;
++	/*
++	 * create_state manages ownership of create_work/index. We should
++	 * only need one entry per worker, as the worker going to sleep
++	 * will trigger the condition, and waking will clear it once it
++	 * runs the task_work.
++	 */
++	if (test_bit(0, &worker->create_state) ||
++	    test_and_set_bit_lock(0, &worker->create_state))
++		goto fail_release;
+ 
+-	cwd = kmalloc(sizeof(*cwd), GFP_ATOMIC);
+-	if (cwd) {
+-		init_task_work(&cwd->work, create_worker_cb);
+-		cwd->wqe = wqe;
+-		cwd->index = acct->index;
+-		if (!task_work_add(wq->task, &cwd->work, TWA_SIGNAL))
+-			return;
+-
+-		kfree(cwd);
+-	}
++	init_task_work(&worker->create_work, create_worker_cb);
++	worker->create_index = acct->index;
++	if (!task_work_add(wq->task, &worker->create_work, TWA_SIGNAL))
++		return;
++	clear_bit_unlock(0, &worker->create_state);
++fail_release:
++	io_worker_release(worker);
+ fail:
+ 	atomic_dec(&acct->nr_running);
+ 	io_worker_ref_put(wq);
+@@ -344,7 +351,7 @@ static void io_wqe_dec_running(struct io_worker *worker)
+ 	if (atomic_dec_and_test(&acct->nr_running) && io_wqe_run_queue(wqe)) {
+ 		atomic_inc(&acct->nr_running);
+ 		atomic_inc(&wqe->wq->worker_refs);
+-		io_queue_worker_create(wqe, acct);
++		io_queue_worker_create(wqe, worker, acct);
+ 	}
+ }
+ 
+@@ -417,7 +424,28 @@ static void io_wait_on_hash(struct io_wqe *wqe, unsigned int hash)
+ 	spin_unlock(&wq->hash->wait.lock);
+ }
+ 
+-static struct io_wq_work *io_get_next_work(struct io_wqe *wqe)
++/*
++ * We can always run the work if the worker is currently the same type as
++ * the work (e.g. both are bound, or both are unbound). If they are not the
++ * same, only allow it if incrementing the worker count would be allowed.
++ */
++static bool io_worker_can_run_work(struct io_worker *worker,
++				   struct io_wq_work *work)
++{
++	struct io_wqe_acct *acct;
++
++	if (!(worker->flags & IO_WORKER_F_BOUND) !=
++	    !(work->flags & IO_WQ_WORK_UNBOUND))
++		return true;
++
++	/* not the same type, check if we'd go over the limit */
++	acct = io_work_get_acct(worker->wqe, work);
++	return acct->nr_workers < acct->max_workers;
++}
++
++static struct io_wq_work *io_get_next_work(struct io_wqe *wqe,
++					   struct io_worker *worker,
++					   bool *stalled)
+ 	__must_hold(wqe->lock)
+ {
+ 	struct io_wq_work_node *node, *prev;
+@@ -429,6 +457,9 @@ static struct io_wq_work *io_get_next_work(struct io_wqe *wqe)
+ 
+ 		work = container_of(node, struct io_wq_work, list);
+ 
++		if (!io_worker_can_run_work(worker, work))
++			break;
++
+ 		/* not hashed, can run anytime */
+ 		if (!io_wq_is_hashed(work)) {
+ 			wq_list_del(&wqe->work_list, node, prev);
+@@ -455,6 +486,7 @@ static struct io_wq_work *io_get_next_work(struct io_wqe *wqe)
+ 		raw_spin_unlock(&wqe->lock);
+ 		io_wait_on_hash(wqe, stall_hash);
+ 		raw_spin_lock(&wqe->lock);
++		*stalled = true;
+ 	}
+ 
+ 	return NULL;
+@@ -494,6 +526,7 @@ static void io_worker_handle_work(struct io_worker *worker)
+ 
+ 	do {
+ 		struct io_wq_work *work;
++		bool stalled;
+ get_next:
+ 		/*
+ 		 * If we got some work, mark us as busy. If we didn't, but
+@@ -502,10 +535,11 @@ get_next:
+ 		 * can't make progress, any work completion or insertion will
+ 		 * clear the stalled flag.
+ 		 */
+-		work = io_get_next_work(wqe);
++		stalled = false;
++		work = io_get_next_work(wqe, worker, &stalled);
+ 		if (work)
+ 			__io_worker_busy(wqe, worker, work);
+-		else if (!wq_list_empty(&wqe->work_list))
++		else if (stalled)
+ 			wqe->flags |= IO_WQE_FLAG_STALLED;
+ 
+ 		raw_spin_unlock_irq(&wqe->lock);
+@@ -1010,12 +1044,12 @@ err_wq:
+ 
+ static bool io_task_work_match(struct callback_head *cb, void *data)
+ {
+-	struct create_worker_data *cwd;
++	struct io_worker *worker;
+ 
+ 	if (cb->func != create_worker_cb)
+ 		return false;
+-	cwd = container_of(cb, struct create_worker_data, work);
+-	return cwd->wqe->wq == data;
++	worker = container_of(cb, struct io_worker, create_work);
++	return worker->wqe->wq == data;
+ }
+ 
+ void io_wq_exit_start(struct io_wq *wq)
+@@ -1032,12 +1066,13 @@ static void io_wq_exit_workers(struct io_wq *wq)
+ 		return;
+ 
+ 	while ((cb = task_work_cancel_match(wq->task, io_task_work_match, wq)) != NULL) {
+-		struct create_worker_data *cwd;
++		struct io_worker *worker;
+ 
+-		cwd = container_of(cb, struct create_worker_data, work);
+-		atomic_dec(&cwd->wqe->acct[cwd->index].nr_running);
++		worker = container_of(cb, struct io_worker, create_work);
++		atomic_dec(&worker->wqe->acct[worker->create_index].nr_running);
+ 		io_worker_ref_put(wq);
+-		kfree(cwd);
++		clear_bit_unlock(0, &worker->create_state);
++		io_worker_release(worker);
+ 	}
+ 
+ 	rcu_read_lock();
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index f6ddc7182943d..58ae2eab99efa 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -990,6 +990,7 @@ static const struct io_op_def io_op_defs[] = {
+ 	},
+ 	[IORING_OP_WRITE] = {
+ 		.needs_file		= 1,
++		.hash_reg_file		= 1,
+ 		.unbound_nonreg_file	= 1,
+ 		.pollout		= 1,
+ 		.plug			= 1,
+@@ -1060,8 +1061,7 @@ static void __io_queue_sqe(struct io_kiocb *req);
+ static void io_rsrc_put_work(struct work_struct *work);
+ 
+ static void io_req_task_queue(struct io_kiocb *req);
+-static void io_submit_flush_completions(struct io_comp_state *cs,
+-					struct io_ring_ctx *ctx);
++static void io_submit_flush_completions(struct io_ring_ctx *ctx);
+ static bool io_poll_remove_waitqs(struct io_kiocb *req);
+ static int io_req_prep_async(struct io_kiocb *req);
+ 
+@@ -1901,7 +1901,7 @@ static void ctx_flush_and_put(struct io_ring_ctx *ctx)
+ 		return;
+ 	if (ctx->submit_state.comp.nr) {
+ 		mutex_lock(&ctx->uring_lock);
+-		io_submit_flush_completions(&ctx->submit_state.comp, ctx);
++		io_submit_flush_completions(ctx);
+ 		mutex_unlock(&ctx->uring_lock);
+ 	}
+ 	percpu_ref_put(&ctx->refs);
+@@ -2147,9 +2147,9 @@ static void io_req_free_batch(struct req_batch *rb, struct io_kiocb *req,
+ 		list_add(&req->compl.list, &state->comp.free_list);
+ }
+ 
+-static void io_submit_flush_completions(struct io_comp_state *cs,
+-					struct io_ring_ctx *ctx)
++static void io_submit_flush_completions(struct io_ring_ctx *ctx)
+ {
++	struct io_comp_state *cs = &ctx->submit_state.comp;
+ 	int i, nr = cs->nr;
+ 	struct io_kiocb *req;
+ 	struct req_batch rb;
+@@ -6462,7 +6462,7 @@ static void __io_queue_sqe(struct io_kiocb *req)
+ 
+ 			cs->reqs[cs->nr++] = req;
+ 			if (cs->nr == ARRAY_SIZE(cs->reqs))
+-				io_submit_flush_completions(cs, ctx);
++				io_submit_flush_completions(ctx);
+ 		} else {
+ 			io_put_req(req);
+ 		}
+@@ -6676,7 +6676,7 @@ static void io_submit_state_end(struct io_submit_state *state,
+ 	if (state->link.head)
+ 		io_queue_sqe(state->link.head);
+ 	if (state->comp.nr)
+-		io_submit_flush_completions(&state->comp, ctx);
++		io_submit_flush_completions(ctx);
+ 	if (state->plug_started)
+ 		blk_finish_plug(&state->plug);
+ 	io_state_file_put(state);
+@@ -7670,6 +7670,8 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
+ 		return -EINVAL;
+ 	if (nr_args > IORING_MAX_FIXED_FILES)
+ 		return -EMFILE;
++	if (nr_args > rlimit(RLIMIT_NOFILE))
++		return -EMFILE;
+ 	ret = io_rsrc_node_switch_start(ctx);
+ 	if (ret)
+ 		return ret;
+diff --git a/fs/iomap/swapfile.c b/fs/iomap/swapfile.c
+index 6250ca6a1f851..4ecf4e1f68ef9 100644
+--- a/fs/iomap/swapfile.c
++++ b/fs/iomap/swapfile.c
+@@ -31,11 +31,16 @@ static int iomap_swapfile_add_extent(struct iomap_swapfile_info *isi)
+ {
+ 	struct iomap *iomap = &isi->iomap;
+ 	unsigned long nr_pages;
++	unsigned long max_pages;
+ 	uint64_t first_ppage;
+ 	uint64_t first_ppage_reported;
+ 	uint64_t next_ppage;
+ 	int error;
+ 
++	if (unlikely(isi->nr_pages >= isi->sis->max))
++		return 0;
++	max_pages = isi->sis->max - isi->nr_pages;
++
+ 	/*
+ 	 * Round the start up and the end down so that the physical
+ 	 * extent aligns to a page boundary.
+@@ -48,6 +53,7 @@ static int iomap_swapfile_add_extent(struct iomap_swapfile_info *isi)
+ 	if (first_ppage >= next_ppage)
+ 		return 0;
+ 	nr_pages = next_ppage - first_ppage;
++	nr_pages = min(nr_pages, max_pages);
+ 
+ 	/*
+ 	 * Calculate how much swap space we're adding; the first page contains
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index 21edc423b79fa..678e2c51b855c 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -155,7 +155,6 @@ struct iso9660_options{
+ 	unsigned int overriderockperm:1;
+ 	unsigned int uid_set:1;
+ 	unsigned int gid_set:1;
+-	unsigned int utf8:1;
+ 	unsigned char map;
+ 	unsigned char check;
+ 	unsigned int blocksize;
+@@ -356,7 +355,6 @@ static int parse_options(char *options, struct iso9660_options *popt)
+ 	popt->gid = GLOBAL_ROOT_GID;
+ 	popt->uid = GLOBAL_ROOT_UID;
+ 	popt->iocharset = NULL;
+-	popt->utf8 = 0;
+ 	popt->overriderockperm = 0;
+ 	popt->session=-1;
+ 	popt->sbsector=-1;
+@@ -389,10 +387,13 @@ static int parse_options(char *options, struct iso9660_options *popt)
+ 		case Opt_cruft:
+ 			popt->cruft = 1;
+ 			break;
++#ifdef CONFIG_JOLIET
+ 		case Opt_utf8:
+-			popt->utf8 = 1;
++			kfree(popt->iocharset);
++			popt->iocharset = kstrdup("utf8", GFP_KERNEL);
++			if (!popt->iocharset)
++				return 0;
+ 			break;
+-#ifdef CONFIG_JOLIET
+ 		case Opt_iocharset:
+ 			kfree(popt->iocharset);
+ 			popt->iocharset = match_strdup(&args[0]);
+@@ -495,7 +496,6 @@ static int isofs_show_options(struct seq_file *m, struct dentry *root)
+ 	if (sbi->s_nocompress)		seq_puts(m, ",nocompress");
+ 	if (sbi->s_overriderockperm)	seq_puts(m, ",overriderockperm");
+ 	if (sbi->s_showassoc)		seq_puts(m, ",showassoc");
+-	if (sbi->s_utf8)		seq_puts(m, ",utf8");
+ 
+ 	if (sbi->s_check)		seq_printf(m, ",check=%c", sbi->s_check);
+ 	if (sbi->s_mapping)		seq_printf(m, ",map=%c", sbi->s_mapping);
+@@ -518,9 +518,10 @@ static int isofs_show_options(struct seq_file *m, struct dentry *root)
+ 		seq_printf(m, ",fmode=%o", sbi->s_fmode);
+ 
+ #ifdef CONFIG_JOLIET
+-	if (sbi->s_nls_iocharset &&
+-	    strcmp(sbi->s_nls_iocharset->charset, CONFIG_NLS_DEFAULT) != 0)
++	if (sbi->s_nls_iocharset)
+ 		seq_printf(m, ",iocharset=%s", sbi->s_nls_iocharset->charset);
++	else
++		seq_puts(m, ",iocharset=utf8");
+ #endif
+ 	return 0;
+ }
+@@ -863,14 +864,13 @@ root_found:
+ 	sbi->s_nls_iocharset = NULL;
+ 
+ #ifdef CONFIG_JOLIET
+-	if (joliet_level && opt.utf8 == 0) {
++	if (joliet_level) {
+ 		char *p = opt.iocharset ? opt.iocharset : CONFIG_NLS_DEFAULT;
+-		sbi->s_nls_iocharset = load_nls(p);
+-		if (! sbi->s_nls_iocharset) {
+-			/* Fail only if explicit charset specified */
+-			if (opt.iocharset)
++		if (strcmp(p, "utf8") != 0) {
++			sbi->s_nls_iocharset = opt.iocharset ?
++				load_nls(opt.iocharset) : load_nls_default();
++			if (!sbi->s_nls_iocharset)
+ 				goto out_freesbi;
+-			sbi->s_nls_iocharset = load_nls_default();
+ 		}
+ 	}
+ #endif
+@@ -886,7 +886,6 @@ root_found:
+ 	sbi->s_gid = opt.gid;
+ 	sbi->s_uid_set = opt.uid_set;
+ 	sbi->s_gid_set = opt.gid_set;
+-	sbi->s_utf8 = opt.utf8;
+ 	sbi->s_nocompress = opt.nocompress;
+ 	sbi->s_overriderockperm = opt.overriderockperm;
+ 	/*
+diff --git a/fs/isofs/isofs.h b/fs/isofs/isofs.h
+index 055ec6c586f7f..dcdc191ed1834 100644
+--- a/fs/isofs/isofs.h
++++ b/fs/isofs/isofs.h
+@@ -44,7 +44,6 @@ struct isofs_sb_info {
+ 	unsigned char s_session;
+ 	unsigned int  s_high_sierra:1;
+ 	unsigned int  s_rock:2;
+-	unsigned int  s_utf8:1;
+ 	unsigned int  s_cruft:1; /* Broken disks with high byte of length
+ 				  * containing junk */
+ 	unsigned int  s_nocompress:1;
+diff --git a/fs/isofs/joliet.c b/fs/isofs/joliet.c
+index be8b6a9d0b926..c0f04a1e7f695 100644
+--- a/fs/isofs/joliet.c
++++ b/fs/isofs/joliet.c
+@@ -41,14 +41,12 @@ uni16_to_x8(unsigned char *ascii, __be16 *uni, int len, struct nls_table *nls)
+ int
+ get_joliet_filename(struct iso_directory_record * de, unsigned char *outname, struct inode * inode)
+ {
+-	unsigned char utf8;
+ 	struct nls_table *nls;
+ 	unsigned char len = 0;
+ 
+-	utf8 = ISOFS_SB(inode->i_sb)->s_utf8;
+ 	nls = ISOFS_SB(inode->i_sb)->s_nls_iocharset;
+ 
+-	if (utf8) {
++	if (!nls) {
+ 		len = utf16s_to_utf8s((const wchar_t *) de->name,
+ 				de->name_len[0] >> 1, UTF16_BIG_ENDIAN,
+ 				outname, PAGE_SIZE);
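
With the utf8 bit gone, these isofs hunks represent UTF-8 as the absence of an NLS table, so Joliet name decoding and option display only ever test one pointer. A userspace sketch of that convention, using strdup() as a stand-in for load_nls():

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * "utf8" is not a real NLS table in this scheme; it is encoded as the
 * *absence* of one, so every consumer just tests the pointer.
 */
static char *parse_iocharset(const char *arg)
{
	if (strcmp(arg, "utf8") == 0)
		return NULL;		/* NULL table == native UTF-8 path */
	return strdup(arg);		/* stand-in for load_nls(arg) */
}

int main(void)
{
	char *tbl = parse_iocharset("utf8");

	printf("charset in effect: %s\n", tbl ? tbl : "utf8 (no NLS table)");
	free(tbl);
	return 0;
}
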
+diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
+index 61d3cc2283dc8..498cb70c2c0d0 100644
+--- a/fs/lockd/svclock.c
++++ b/fs/lockd/svclock.c
+@@ -634,7 +634,7 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 	conflock->caller = "somehost";	/* FIXME */
+ 	conflock->len = strlen(conflock->caller);
+ 	conflock->oh.len = 0;		/* don't return OH info */
+-	conflock->svid = ((struct nlm_lockowner *)lock->fl.fl_owner)->pid;
++	conflock->svid = lock->fl.fl_pid;
+ 	conflock->fl.fl_type = lock->fl.fl_type;
+ 	conflock->fl.fl_start = lock->fl.fl_start;
+ 	conflock->fl.fl_end = lock->fl.fl_end;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 90e81f6491ff5..ab81e8ae32659 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -2665,9 +2665,9 @@ static void force_expire_client(struct nfs4_client *clp)
+ 	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+ 	bool already_expired;
+ 
+-	spin_lock(&clp->cl_lock);
++	spin_lock(&nn->client_lock);
+ 	clp->cl_time = 0;
+-	spin_unlock(&clp->cl_lock);
++	spin_unlock(&nn->client_lock);
+ 
+ 	wait_event(expiry_wq, atomic_read(&clp->cl_rpc_users) == 0);
+ 	spin_lock(&nn->client_lock);
+diff --git a/fs/udf/misc.c b/fs/udf/misc.c
+index eab94527340dc..1614d308d0f06 100644
+--- a/fs/udf/misc.c
++++ b/fs/udf/misc.c
+@@ -173,13 +173,22 @@ struct genericFormat *udf_get_extendedattr(struct inode *inode, uint32_t type,
+ 		else
+ 			offset = le32_to_cpu(eahd->appAttrLocation);
+ 
+-		while (offset < iinfo->i_lenEAttr) {
++		while (offset + sizeof(*gaf) < iinfo->i_lenEAttr) {
++			uint32_t attrLength;
++
+ 			gaf = (struct genericFormat *)&ea[offset];
++			attrLength = le32_to_cpu(gaf->attrLength);
++
++			/* Detect undersized elements and buffer overflows */
++			if ((attrLength < sizeof(*gaf)) ||
++			    (attrLength > (iinfo->i_lenEAttr - offset)))
++				break;
++
+ 			if (le32_to_cpu(gaf->attrType) == type &&
+ 					gaf->attrSubtype == subtype)
+ 				return gaf;
+ 			else
+-				offset += le32_to_cpu(gaf->attrLength);
++				offset += attrLength;
+ 		}
+ 	}
+ 
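
The udf_get_extendedattr() change rejects attribute elements whose attrLength is smaller than the header or runs past i_lenEAttr, so a crafted image can no longer push the walk out of the buffer. A compilable sketch of the same bounds-checked walk over a packed buffer (the struct layout and the <= loop bound are simplifications, not the on-disk format):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct attr_hdr {			/* stand-in for struct genericFormat */
	uint32_t type;
	uint32_t length;		/* total element length, header included */
};

/* Walk a packed attribute buffer; reject truncated or oversized elements. */
static const struct attr_hdr *find_attr(const uint8_t *buf, size_t len,
					uint32_t want)
{
	size_t off = 0;

	while (off + sizeof(struct attr_hdr) <= len) {
		struct attr_hdr h;

		memcpy(&h, buf + off, sizeof(h));	/* may be unaligned */
		/* undersized element or length past end of buffer: stop */
		if (h.length < sizeof(h) || h.length > len - off)
			return NULL;
		if (h.type == want)
			return (const struct attr_hdr *)(buf + off);
		off += h.length;
	}
	return NULL;
}

int main(void)
{
	uint8_t buf[16] = { 0 };
	struct attr_hdr h = { .type = 7, .length = 16 };

	memcpy(buf, &h, sizeof(h));
	printf("found: %s\n", find_attr(buf, sizeof(buf), 7) ? "yes" : "no");
	return 0;
}
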
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 2f83c1204e20c..b2d7c57d06881 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -108,16 +108,10 @@ struct logicalVolIntegrityDescImpUse *udf_sb_lvidiu(struct super_block *sb)
+ 		return NULL;
+ 	lvid = (struct logicalVolIntegrityDesc *)UDF_SB(sb)->s_lvid_bh->b_data;
+ 	partnum = le32_to_cpu(lvid->numOfPartitions);
+-	if ((sb->s_blocksize - sizeof(struct logicalVolIntegrityDescImpUse) -
+-	     offsetof(struct logicalVolIntegrityDesc, impUse)) /
+-	     (2 * sizeof(uint32_t)) < partnum) {
+-		udf_err(sb, "Logical volume integrity descriptor corrupted "
+-			"(numOfPartitions = %u)!\n", partnum);
+-		return NULL;
+-	}
+ 	/* The offset is to skip freeSpaceTable and sizeTable arrays */
+ 	offset = partnum * 2 * sizeof(uint32_t);
+-	return (struct logicalVolIntegrityDescImpUse *)&(lvid->impUse[offset]);
++	return (struct logicalVolIntegrityDescImpUse *)
++					(((uint8_t *)(lvid + 1)) + offset);
+ }
+ 
+ /* UDF filesystem type */
+@@ -349,10 +343,10 @@ static int udf_show_options(struct seq_file *seq, struct dentry *root)
+ 		seq_printf(seq, ",lastblock=%u", sbi->s_last_block);
+ 	if (sbi->s_anchor != 0)
+ 		seq_printf(seq, ",anchor=%u", sbi->s_anchor);
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_UTF8))
+-		seq_puts(seq, ",utf8");
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_NLS_MAP) && sbi->s_nls_map)
++	if (sbi->s_nls_map)
+ 		seq_printf(seq, ",iocharset=%s", sbi->s_nls_map->charset);
++	else
++		seq_puts(seq, ",iocharset=utf8");
+ 
+ 	return 0;
+ }
+@@ -558,19 +552,24 @@ static int udf_parse_options(char *options, struct udf_options *uopt,
+ 			/* Ignored (never implemented properly) */
+ 			break;
+ 		case Opt_utf8:
+-			uopt->flags |= (1 << UDF_FLAG_UTF8);
++			if (!remount) {
++				unload_nls(uopt->nls_map);
++				uopt->nls_map = NULL;
++			}
+ 			break;
+ 		case Opt_iocharset:
+ 			if (!remount) {
+-				if (uopt->nls_map)
+-					unload_nls(uopt->nls_map);
+-				/*
+-				 * load_nls() failure is handled later in
+-				 * udf_fill_super() after all options are
+-				 * parsed.
+-				 */
++				unload_nls(uopt->nls_map);
++				uopt->nls_map = NULL;
++			}
++			/* When nls_map is not loaded then UTF-8 is used */
++			if (!remount && strcmp(args[0].from, "utf8") != 0) {
+ 				uopt->nls_map = load_nls(args[0].from);
+-				uopt->flags |= (1 << UDF_FLAG_NLS_MAP);
++				if (!uopt->nls_map) {
++					pr_err("iocharset %s not found\n",
++						args[0].from);
++					return 0;
++				}
+ 			}
+ 			break;
+ 		case Opt_uforget:
+@@ -1542,6 +1541,7 @@ static void udf_load_logicalvolint(struct super_block *sb, struct kernel_extent_
+ 	struct udf_sb_info *sbi = UDF_SB(sb);
+ 	struct logicalVolIntegrityDesc *lvid;
+ 	int indirections = 0;
++	u32 parts, impuselen;
+ 
+ 	while (++indirections <= UDF_MAX_LVID_NESTING) {
+ 		final_bh = NULL;
+@@ -1568,15 +1568,27 @@ static void udf_load_logicalvolint(struct super_block *sb, struct kernel_extent_
+ 
+ 		lvid = (struct logicalVolIntegrityDesc *)final_bh->b_data;
+ 		if (lvid->nextIntegrityExt.extLength == 0)
+-			return;
++			goto check;
+ 
+ 		loc = leea_to_cpu(lvid->nextIntegrityExt);
+ 	}
+ 
+ 	udf_warn(sb, "Too many LVID indirections (max %u), ignoring.\n",
+ 		UDF_MAX_LVID_NESTING);
++out_err:
+ 	brelse(sbi->s_lvid_bh);
+ 	sbi->s_lvid_bh = NULL;
++	return;
++check:
++	parts = le32_to_cpu(lvid->numOfPartitions);
++	impuselen = le32_to_cpu(lvid->lengthOfImpUse);
++	if (parts >= sb->s_blocksize || impuselen >= sb->s_blocksize ||
++	    sizeof(struct logicalVolIntegrityDesc) + impuselen +
++	    2 * parts * sizeof(u32) > sb->s_blocksize) {
++		udf_warn(sb, "Corrupted LVID (parts=%u, impuselen=%u), "
++			 "ignoring.\n", parts, impuselen);
++		goto out_err;
++	}
+ }
+ 
+ /*
+@@ -2139,21 +2151,6 @@ static int udf_fill_super(struct super_block *sb, void *options, int silent)
+ 	if (!udf_parse_options((char *)options, &uopt, false))
+ 		goto parse_options_failure;
+ 
+-	if (uopt.flags & (1 << UDF_FLAG_UTF8) &&
+-	    uopt.flags & (1 << UDF_FLAG_NLS_MAP)) {
+-		udf_err(sb, "utf8 cannot be combined with iocharset\n");
+-		goto parse_options_failure;
+-	}
+-	if ((uopt.flags & (1 << UDF_FLAG_NLS_MAP)) && !uopt.nls_map) {
+-		uopt.nls_map = load_nls_default();
+-		if (!uopt.nls_map)
+-			uopt.flags &= ~(1 << UDF_FLAG_NLS_MAP);
+-		else
+-			udf_debug("Using default NLS map\n");
+-	}
+-	if (!(uopt.flags & (1 << UDF_FLAG_NLS_MAP)))
+-		uopt.flags |= (1 << UDF_FLAG_UTF8);
+-
+ 	fileset.logicalBlockNum = 0xFFFFFFFF;
+ 	fileset.partitionReferenceNum = 0xFFFF;
+ 
+@@ -2308,8 +2305,7 @@ static int udf_fill_super(struct super_block *sb, void *options, int silent)
+ error_out:
+ 	iput(sbi->s_vat_inode);
+ parse_options_failure:
+-	if (uopt.nls_map)
+-		unload_nls(uopt.nls_map);
++	unload_nls(uopt.nls_map);
+ 	if (lvid_open)
+ 		udf_close_lvid(sb);
+ 	brelse(sbi->s_lvid_bh);
+@@ -2359,8 +2355,7 @@ static void udf_put_super(struct super_block *sb)
+ 	sbi = UDF_SB(sb);
+ 
+ 	iput(sbi->s_vat_inode);
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_NLS_MAP))
+-		unload_nls(sbi->s_nls_map);
++	unload_nls(sbi->s_nls_map);
+ 	if (!sb_rdonly(sb))
+ 		udf_close_lvid(sb);
+ 	brelse(sbi->s_lvid_bh);
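
udf_load_logicalvolint() now validates numOfPartitions and lengthOfImpUse before anything indexes into the LVID block; checking each count against the block size first keeps the summed expression from overflowing. A standalone model of that check (LVID_HDR_SIZE is a made-up placeholder for the real structure size):

#include <stdint.h>
#include <stdio.h>

#define LVID_HDR_SIZE 80	/* illustrative, not the real struct size */

static int lvid_counts_sane(uint32_t parts, uint32_t impuselen,
			    uint32_t blocksize)
{
	/* each field alone must fit; this also bounds the math below */
	if (parts >= blocksize || impuselen >= blocksize)
		return 0;
	if ((uint64_t)LVID_HDR_SIZE + impuselen +
	    2ull * parts * sizeof(uint32_t) > blocksize)
		return 0;
	return 1;
}

int main(void)
{
	printf("%d\n", lvid_counts_sane(4, 32, 2048));		/* 1: fits */
	printf("%d\n", lvid_counts_sane(0xffffffff, 0, 2048));	/* 0: bogus */
	return 0;
}
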
+diff --git a/fs/udf/udf_sb.h b/fs/udf/udf_sb.h
+index 758efe557a199..4fa620543d302 100644
+--- a/fs/udf/udf_sb.h
++++ b/fs/udf/udf_sb.h
+@@ -20,8 +20,6 @@
+ #define UDF_FLAG_UNDELETE		6
+ #define UDF_FLAG_UNHIDE			7
+ #define UDF_FLAG_VARCONV		8
+-#define UDF_FLAG_NLS_MAP		9
+-#define UDF_FLAG_UTF8			10
+ #define UDF_FLAG_UID_FORGET     11    /* save -1 for uid to disk */
+ #define UDF_FLAG_GID_FORGET     12
+ #define UDF_FLAG_UID_SET	13
+diff --git a/fs/udf/unicode.c b/fs/udf/unicode.c
+index 5fcfa96463ebb..622569007b530 100644
+--- a/fs/udf/unicode.c
++++ b/fs/udf/unicode.c
+@@ -177,7 +177,7 @@ static int udf_name_from_CS0(struct super_block *sb,
+ 		return 0;
+ 	}
+ 
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_NLS_MAP))
++	if (UDF_SB(sb)->s_nls_map)
+ 		conv_f = UDF_SB(sb)->s_nls_map->uni2char;
+ 	else
+ 		conv_f = NULL;
+@@ -285,7 +285,7 @@ static int udf_name_to_CS0(struct super_block *sb,
+ 	if (ocu_max_len <= 0)
+ 		return 0;
+ 
+-	if (UDF_QUERY_FLAG(sb, UDF_FLAG_NLS_MAP))
++	if (UDF_SB(sb)->s_nls_map)
+ 		conv_f = UDF_SB(sb)->s_nls_map->char2uni;
+ 	else
+ 		conv_f = NULL;
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index f69c75bd6d276..9bfb2f65534b0 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -1531,6 +1531,22 @@ static inline int queue_limit_discard_alignment(struct queue_limits *lim, sector
+ 	return offset << SECTOR_SHIFT;
+ }
+ 
++/*
++ * Two cases of handling DISCARD merge:
++ * If max_discard_segments > 1, the driver takes every bio
++ * as a range and send them to controller together. The ranges
++ * needn't to be contiguous.
++ * Otherwise, the bios/requests will be handled as same as
++ * others which should be contiguous.
++ */
++static inline bool blk_discard_mergable(struct request *req)
++{
++	if (req_op(req) == REQ_OP_DISCARD &&
++	    queue_max_discard_segments(req->q) > 1)
++		return true;
++	return false;
++}
++
+ static inline int bdev_discard_alignment(struct block_device *bdev)
+ {
+ 	struct request_queue *q = bdev_get_queue(bdev);
+diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
+index 757fc60658fa6..d1e9823d99c82 100644
+--- a/include/linux/energy_model.h
++++ b/include/linux/energy_model.h
+@@ -53,6 +53,22 @@ struct em_perf_domain {
+ #ifdef CONFIG_ENERGY_MODEL
+ #define EM_MAX_POWER 0xFFFF
+ 
++/*
++ * Increase resolution of energy estimation calculations for 64-bit
++ * architectures. The extra resolution improves decision made by EAS for the
++ * task placement when two Performance Domains might provide similar energy
++ * estimation values (w/o better resolution the values could be equal).
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ */
++#ifdef CONFIG_64BIT
++#define em_scale_power(p) ((p) * 1000)
++#else
++#define em_scale_power(p) (p)
++#endif
++
+ struct em_data_callback {
+ 	/**
+ 	 * active_power() - Provide power at the next performance state of
+diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
+index bb5e7b0a42746..77295af724264 100644
+--- a/include/linux/hrtimer.h
++++ b/include/linux/hrtimer.h
+@@ -318,16 +318,12 @@ struct clock_event_device;
+ 
+ extern void hrtimer_interrupt(struct clock_event_device *dev);
+ 
+-extern void clock_was_set_delayed(void);
+-
+ extern unsigned int hrtimer_resolution;
+ 
+ #else
+ 
+ #define hrtimer_resolution	(unsigned int)LOW_RES_NSEC
+ 
+-static inline void clock_was_set_delayed(void) { }
+-
+ #endif
+ 
+ static inline ktime_t
+@@ -351,7 +347,6 @@ hrtimer_expires_remaining_adjusted(const struct hrtimer *timer)
+ 						    timer->base->get_time());
+ }
+ 
+-extern void clock_was_set(void);
+ #ifdef CONFIG_TIMERFD
+ extern void timerfd_clock_was_set(void);
+ #else
+diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
+index ded90b097e6e8..3f02b818625ef 100644
+--- a/include/linux/local_lock_internal.h
++++ b/include/linux/local_lock_internal.h
+@@ -14,29 +14,14 @@ typedef struct {
+ } local_lock_t;
+ 
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+-# define LL_DEP_MAP_INIT(lockname)			\
++# define LOCAL_LOCK_DEBUG_INIT(lockname)		\
+ 	.dep_map = {					\
+ 		.name = #lockname,			\
+ 		.wait_type_inner = LD_WAIT_CONFIG,	\
+-		.lock_type = LD_LOCK_PERCPU,			\
+-	}
+-#else
+-# define LL_DEP_MAP_INIT(lockname)
+-#endif
+-
+-#define INIT_LOCAL_LOCK(lockname)	{ LL_DEP_MAP_INIT(lockname) }
+-
+-#define __local_lock_init(lock)					\
+-do {								\
+-	static struct lock_class_key __key;			\
+-								\
+-	debug_check_no_locks_freed((void *)lock, sizeof(*lock));\
+-	lockdep_init_map_type(&(lock)->dep_map, #lock, &__key, 0, \
+-			      LD_WAIT_CONFIG, LD_WAIT_INV,	\
+-			      LD_LOCK_PERCPU);			\
+-} while (0)
++		.lock_type = LD_LOCK_PERCPU,		\
++	},						\
++	.owner = NULL,
+ 
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+ static inline void local_lock_acquire(local_lock_t *l)
+ {
+ 	lock_map_acquire(&l->dep_map);
+@@ -51,11 +36,30 @@ static inline void local_lock_release(local_lock_t *l)
+ 	lock_map_release(&l->dep_map);
+ }
+ 
++static inline void local_lock_debug_init(local_lock_t *l)
++{
++	l->owner = NULL;
++}
+ #else /* CONFIG_DEBUG_LOCK_ALLOC */
++# define LOCAL_LOCK_DEBUG_INIT(lockname)
+ static inline void local_lock_acquire(local_lock_t *l) { }
+ static inline void local_lock_release(local_lock_t *l) { }
++static inline void local_lock_debug_init(local_lock_t *l) { }
+ #endif /* !CONFIG_DEBUG_LOCK_ALLOC */
+ 
++#define INIT_LOCAL_LOCK(lockname)	{ LOCAL_LOCK_DEBUG_INIT(lockname) }
++
++#define __local_lock_init(lock)					\
++do {								\
++	static struct lock_class_key __key;			\
++								\
++	debug_check_no_locks_freed((void *)lock, sizeof(*lock));\
++	lockdep_init_map_type(&(lock)->dep_map, #lock, &__key,  \
++			      0, LD_WAIT_CONFIG, LD_WAIT_INV,	\
++			      LD_LOCK_PERCPU);			\
++	local_lock_debug_init(lock);				\
++} while (0)
++
+ #define __local_lock(lock)					\
+ 	do {							\
+ 		preempt_disable();				\
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index eb86e80e4643f..857529a5568d7 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -918,7 +918,8 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
+ 	u8         scatter_fcs[0x1];
+ 	u8         enhanced_multi_pkt_send_wqe[0x1];
+ 	u8         tunnel_lso_const_out_ip_id[0x1];
+-	u8         reserved_at_1c[0x2];
++	u8         tunnel_lro_gre[0x1];
++	u8         tunnel_lro_vxlan[0x1];
+ 	u8         tunnel_stateless_gre[0x1];
+ 	u8         tunnel_stateless_vxlan[0x1];
+ 
+diff --git a/include/linux/power/max17042_battery.h b/include/linux/power/max17042_battery.h
+index d55c746ac56e2..e00ad1cfb1f1d 100644
+--- a/include/linux/power/max17042_battery.h
++++ b/include/linux/power/max17042_battery.h
+@@ -69,7 +69,7 @@ enum max17042_register {
+ 	MAX17042_RelaxCFG	= 0x2A,
+ 	MAX17042_MiscCFG	= 0x2B,
+ 	MAX17042_TGAIN		= 0x2C,
+-	MAx17042_TOFF		= 0x2D,
++	MAX17042_TOFF		= 0x2D,
+ 	MAX17042_CGAIN		= 0x2E,
+ 	MAX17042_COFF		= 0x2F,
+ 
+diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
+index e91d51ea028bb..65185d1e07ea6 100644
+--- a/include/linux/sunrpc/svc.h
++++ b/include/linux/sunrpc/svc.h
+@@ -523,6 +523,7 @@ void		   svc_wake_up(struct svc_serv *);
+ void		   svc_reserve(struct svc_rqst *rqstp, int space);
+ struct svc_pool *  svc_pool_for_cpu(struct svc_serv *serv, int cpu);
+ char *		   svc_print_addr(struct svc_rqst *, char *, size_t);
++const char *	   svc_proc_name(const struct svc_rqst *rqstp);
+ int		   svc_encode_result_payload(struct svc_rqst *rqstp,
+ 					     unsigned int offset,
+ 					     unsigned int length);
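
The new svc_proc_name() declaration gives the sunrpc tracepoints a single accessor instead of chained rq_procinfo dereferences. The hedged sketch below shows the general shape such a NULL-safe accessor can take ("unknown" is an assumed fallback string, not necessarily what the kernel prints):

#include <stddef.h>
#include <stdio.h>

struct proc_info { const char *pc_name; };
struct rqst { const struct proc_info *rq_procinfo; };

/* One place decides how a missing procedure is rendered in traces. */
static const char *proc_name(const struct rqst *r)
{
	return (r && r->rq_procinfo && r->rq_procinfo->pc_name) ?
		r->rq_procinfo->pc_name : "unknown";
}

int main(void)
{
	struct rqst bad = { .rq_procinfo = NULL };

	printf("%s\n", proc_name(&bad));	/* prints "unknown" */
	return 0;
}
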
+diff --git a/include/linux/time64.h b/include/linux/time64.h
+index 5117cb5b56561..81b9686a20799 100644
+--- a/include/linux/time64.h
++++ b/include/linux/time64.h
+@@ -25,7 +25,9 @@ struct itimerspec64 {
+ #define TIME64_MIN			(-TIME64_MAX - 1)
+ 
+ #define KTIME_MAX			((s64)~((u64)1 << 63))
++#define KTIME_MIN			(-KTIME_MAX - 1)
+ #define KTIME_SEC_MAX			(KTIME_MAX / NSEC_PER_SEC)
++#define KTIME_SEC_MIN			(KTIME_MIN / NSEC_PER_SEC)
+ 
+ /*
+  * Limits for settimeofday():
+@@ -124,10 +126,13 @@ static inline bool timespec64_valid_settod(const struct timespec64 *ts)
+  */
+ static inline s64 timespec64_to_ns(const struct timespec64 *ts)
+ {
+-	/* Prevent multiplication overflow */
+-	if ((unsigned long long)ts->tv_sec >= KTIME_SEC_MAX)
++	/* Prevent multiplication overflow / underflow */
++	if (ts->tv_sec >= KTIME_SEC_MAX)
+ 		return KTIME_MAX;
+ 
++	if (ts->tv_sec <= KTIME_SEC_MIN)
++		return KTIME_MIN;
++
+ 	return ((s64) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
+ }
+ 
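
KTIME_MIN/KTIME_SEC_MIN let timespec64_to_ns() saturate on the negative side too, instead of wrapping when tv_sec * NSEC_PER_SEC underflows. A self-contained equivalent (a local struct is used so the example does not depend on the platform's time_t width):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC	1000000000LL
#define KTIME_MAX	INT64_MAX
#define KTIME_MIN	(-KTIME_MAX - 1)
#define KTIME_SEC_MAX	(KTIME_MAX / NSEC_PER_SEC)
#define KTIME_SEC_MIN	(KTIME_MIN / NSEC_PER_SEC)

struct ts64 { int64_t tv_sec; long tv_nsec; };

/* Saturate in both directions instead of letting the multiply wrap. */
static int64_t ts_to_ns(const struct ts64 *ts)
{
	if (ts->tv_sec >= KTIME_SEC_MAX)
		return KTIME_MAX;
	if (ts->tv_sec <= KTIME_SEC_MIN)	/* the new underflow case */
		return KTIME_MIN;
	return ts->tv_sec * NSEC_PER_SEC + ts->tv_nsec;
}

int main(void)
{
	struct ts64 far_past = { .tv_sec = INT64_MIN, .tv_nsec = 0 };

	printf("%" PRId64 "\n", ts_to_ns(&far_past));	/* KTIME_MIN, no UB */
	return 0;
}
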
+diff --git a/include/net/dsa.h b/include/net/dsa.h
+index e1a2610a0e06e..f91317d2df9d6 100644
+--- a/include/net/dsa.h
++++ b/include/net/dsa.h
+@@ -643,8 +643,6 @@ struct dsa_switch_ops {
+ 	int	(*port_bridge_flags)(struct dsa_switch *ds, int port,
+ 				     struct switchdev_brport_flags flags,
+ 				     struct netlink_ext_ack *extack);
+-	int	(*port_set_mrouter)(struct dsa_switch *ds, int port, bool mrouter,
+-				    struct netlink_ext_ack *extack);
+ 
+ 	/*
+ 	 * VLAN support
+diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
+index ec7823921bd26..b420f905f4f6e 100644
+--- a/include/net/pkt_cls.h
++++ b/include/net/pkt_cls.h
+@@ -820,10 +820,9 @@ enum tc_htb_command {
+ struct tc_htb_qopt_offload {
+ 	struct netlink_ext_ack *extack;
+ 	enum tc_htb_command command;
+-	u16 classid;
+ 	u32 parent_classid;
++	u16 classid;
+ 	u16 qid;
+-	u16 moved_qid;
+ 	u64 rate;
+ 	u64 ceil;
+ };
+diff --git a/include/trace/events/io_uring.h b/include/trace/events/io_uring.h
+index abb8b24744fdb..08e65dcbd1e35 100644
+--- a/include/trace/events/io_uring.h
++++ b/include/trace/events/io_uring.h
+@@ -295,14 +295,14 @@ TRACE_EVENT(io_uring_fail_link,
+  */
+ TRACE_EVENT(io_uring_complete,
+ 
+-	TP_PROTO(void *ctx, u64 user_data, long res, unsigned cflags),
++	TP_PROTO(void *ctx, u64 user_data, int res, unsigned cflags),
+ 
+ 	TP_ARGS(ctx, user_data, res, cflags),
+ 
+ 	TP_STRUCT__entry (
+ 		__field(  void *,	ctx		)
+ 		__field(  u64,		user_data	)
+-		__field(  long,		res		)
++		__field(  int,		res		)
+ 		__field(  unsigned,	cflags		)
+ 	),
+ 
+@@ -313,7 +313,7 @@ TRACE_EVENT(io_uring_complete,
+ 		__entry->cflags		= cflags;
+ 	),
+ 
+-	TP_printk("ring %p, user_data 0x%llx, result %ld, cflags %x",
++	TP_printk("ring %p, user_data 0x%llx, result %d, cflags %x",
+ 			  __entry->ctx, (unsigned long long)__entry->user_data,
+ 			  __entry->res, __entry->cflags)
+ );
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index d02e01a27b690..87569dbf2fe78 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -1642,7 +1642,7 @@ TRACE_EVENT(svc_process,
+ 		__field(u32, vers)
+ 		__field(u32, proc)
+ 		__string(service, name)
+-		__string(procedure, rqst->rq_procinfo->pc_name)
++		__string(procedure, svc_proc_name(rqst))
+ 		__string(addr, rqst->rq_xprt ?
+ 			 rqst->rq_xprt->xpt_remotebuf : "(null)")
+ 	),
+@@ -1652,7 +1652,7 @@ TRACE_EVENT(svc_process,
+ 		__entry->vers = rqst->rq_vers;
+ 		__entry->proc = rqst->rq_proc;
+ 		__assign_str(service, name);
+-		__assign_str(procedure, rqst->rq_procinfo->pc_name);
++		__assign_str(procedure, svc_proc_name(rqst));
+ 		__assign_str(addr, rqst->rq_xprt ?
+ 			     rqst->rq_xprt->xpt_remotebuf : "(null)");
+ 	),
+@@ -1918,7 +1918,7 @@ TRACE_EVENT(svc_stats_latency,
+ 	TP_STRUCT__entry(
+ 		__field(u32, xid)
+ 		__field(unsigned long, execute)
+-		__string(procedure, rqst->rq_procinfo->pc_name)
++		__string(procedure, svc_proc_name(rqst))
+ 		__string(addr, rqst->rq_xprt->xpt_remotebuf)
+ 	),
+ 
+@@ -1926,7 +1926,7 @@ TRACE_EVENT(svc_stats_latency,
+ 		__entry->xid = be32_to_cpu(rqst->rq_xid);
+ 		__entry->execute = ktime_to_us(ktime_sub(ktime_get(),
+ 							 rqst->rq_stime));
+-		__assign_str(procedure, rqst->rq_procinfo->pc_name);
++		__assign_str(procedure, svc_proc_name(rqst));
+ 		__assign_str(addr, rqst->rq_xprt->xpt_remotebuf);
+ 	),
+ 
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index ec6d85a817449..353f06cf210e9 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -3222,7 +3222,7 @@ union bpf_attr {
+  * long bpf_sk_select_reuseport(struct sk_reuseport_md *reuse, struct bpf_map *map, void *key, u64 flags)
+  *	Description
+  *		Select a **SO_REUSEPORT** socket from a
+- *		**BPF_MAP_TYPE_REUSEPORT_ARRAY** *map*.
++ *		**BPF_MAP_TYPE_REUSEPORT_SOCKARRAY** *map*.
+  *		It checks the selected socket is matching the incoming
+  *		request in the socket buffer.
+  *	Return
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 6b58fd978b703..d810f9e0ed9d5 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -11383,10 +11383,11 @@ static void convert_pseudo_ld_imm64(struct bpf_verifier_env *env)
+  * insni[off, off + cnt).  Adjust corresponding insn_aux_data by copying
+  * [0, off) and [off, end) to new locations, so the patched range stays zero
+  */
+-static int adjust_insn_aux_data(struct bpf_verifier_env *env,
+-				struct bpf_prog *new_prog, u32 off, u32 cnt)
++static void adjust_insn_aux_data(struct bpf_verifier_env *env,
++				 struct bpf_insn_aux_data *new_data,
++				 struct bpf_prog *new_prog, u32 off, u32 cnt)
+ {
+-	struct bpf_insn_aux_data *new_data, *old_data = env->insn_aux_data;
++	struct bpf_insn_aux_data *old_data = env->insn_aux_data;
+ 	struct bpf_insn *insn = new_prog->insnsi;
+ 	u32 old_seen = old_data[off].seen;
+ 	u32 prog_len;
+@@ -11399,12 +11400,9 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env,
+ 	old_data[off].zext_dst = insn_has_def32(env, insn + off + cnt - 1);
+ 
+ 	if (cnt == 1)
+-		return 0;
++		return;
+ 	prog_len = new_prog->len;
+-	new_data = vzalloc(array_size(prog_len,
+-				      sizeof(struct bpf_insn_aux_data)));
+-	if (!new_data)
+-		return -ENOMEM;
++
+ 	memcpy(new_data, old_data, sizeof(struct bpf_insn_aux_data) * off);
+ 	memcpy(new_data + off + cnt - 1, old_data + off,
+ 	       sizeof(struct bpf_insn_aux_data) * (prog_len - off - cnt + 1));
+@@ -11415,7 +11413,6 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env,
+ 	}
+ 	env->insn_aux_data = new_data;
+ 	vfree(old_data);
+-	return 0;
+ }
+ 
+ static void adjust_subprog_starts(struct bpf_verifier_env *env, u32 off, u32 len)
+@@ -11450,6 +11447,14 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
+ 					    const struct bpf_insn *patch, u32 len)
+ {
+ 	struct bpf_prog *new_prog;
++	struct bpf_insn_aux_data *new_data = NULL;
++
++	if (len > 1) {
++		new_data = vzalloc(array_size(env->prog->len + len - 1,
++					      sizeof(struct bpf_insn_aux_data)));
++		if (!new_data)
++			return NULL;
++	}
+ 
+ 	new_prog = bpf_patch_insn_single(env->prog, off, patch, len);
+ 	if (IS_ERR(new_prog)) {
+@@ -11457,10 +11462,10 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
+ 			verbose(env,
+ 				"insn %d cannot be patched due to 16-bit range\n",
+ 				env->insn_aux_data[off].orig_idx);
++		vfree(new_data);
+ 		return NULL;
+ 	}
+-	if (adjust_insn_aux_data(env, new_prog, off, len))
+-		return NULL;
++	adjust_insn_aux_data(env, new_data, new_prog, off, len);
+ 	adjust_subprog_starts(env, off, len);
+ 	adjust_poke_descs(new_prog, off, len);
+ 	return new_prog;
+@@ -11977,6 +11982,10 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ 		if (is_narrower_load && size < target_size) {
+ 			u8 shift = bpf_ctx_narrow_access_offset(
+ 				off, size, size_default) * 8;
++			if (shift && cnt + 1 >= ARRAY_SIZE(insn_buf)) {
++				verbose(env, "bpf verifier narrow ctx load misconfigured\n");
++				return -EINVAL;
++			}
+ 			if (ctx_field_size <= 4) {
+ 				if (shift)
+ 					insn_buf[cnt++] = BPF_ALU32_IMM(BPF_RSH,
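
bpf_patch_insn_data() now allocates the replacement aux array up front, so an allocation failure can no longer strike after the program has already been rewritten. The sketch below shows the same allocate-before-mutate pattern on a plain int array; it is a simplified analog, not the verifier code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct prog { int *insns; size_t len; };

/* Replace the instruction at off with cnt new ones, failing atomically. */
static int patch_prog(struct prog *p, size_t off, const int *patch, size_t cnt)
{
	size_t new_len = p->len + cnt - 1;
	int *ni = malloc(new_len * sizeof(*ni));

	if (!ni)
		return -1;	/* nothing was touched yet: safe to fail */

	memcpy(ni, p->insns, off * sizeof(*ni));
	memcpy(ni + off, patch, cnt * sizeof(*ni));
	memcpy(ni + off + cnt, p->insns + off + 1,
	       (p->len - off - 1) * sizeof(*ni));

	free(p->insns);		/* commit point: swap only after success */
	p->insns = ni;
	p->len = new_len;
	return 0;
}

int main(void)
{
	int start[] = { 1, 2, 3 };
	int patch[] = { 8, 9 };
	struct prog p = { .insns = malloc(sizeof(start)), .len = 3 };

	memcpy(p.insns, start, sizeof(start));
	patch_prog(&p, 1, patch, 2);		/* {1,2,3} -> {1,8,9,3} */
	for (size_t i = 0; i < p.len; i++)
		printf("%d ", p.insns[i]);
	printf("\n");
	free(p.insns);
	return 0;
}
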
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index adb5190c44296..13b5be6df4da2 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1114,7 +1114,7 @@ enum subparts_cmd {
+  * cpus_allowed can be granted or an error code will be returned.
+  *
+  * For partcmd_disable, the cpuset is being transofrmed from a partition
+- * root back to a non-partition root. any CPUs in cpus_allowed that are in
++ * root back to a non-partition root. Any CPUs in cpus_allowed that are in
+  * parent's subparts_cpus will be taken away from that cpumask and put back
+  * into parent's effective_cpus. 0 should always be returned.
+  *
+@@ -1148,6 +1148,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 	struct cpuset *parent = parent_cs(cpuset);
+ 	int adding;	/* Moving cpus from effective_cpus to subparts_cpus */
+ 	int deleting;	/* Moving cpus from subparts_cpus to effective_cpus */
++	int new_prs;
+ 	bool part_error = false;	/* Partition error? */
+ 
+ 	percpu_rwsem_assert_held(&cpuset_rwsem);
+@@ -1183,6 +1184,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 	 * A cpumask update cannot make parent's effective_cpus become empty.
+ 	 */
+ 	adding = deleting = false;
++	new_prs = cpuset->partition_root_state;
+ 	if (cmd == partcmd_enable) {
+ 		cpumask_copy(tmp->addmask, cpuset->cpus_allowed);
+ 		adding = true;
+@@ -1225,7 +1227,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 		/*
+ 		 * partcmd_update w/o newmask:
+ 		 *
+-		 * addmask = cpus_allowed & parent->effectiveb_cpus
++		 * addmask = cpus_allowed & parent->effective_cpus
+ 		 *
+ 		 * Note that parent's subparts_cpus may have been
+ 		 * pre-shrunk in case there is a change in the cpu list.
+@@ -1247,11 +1249,11 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 		switch (cpuset->partition_root_state) {
+ 		case PRS_ENABLED:
+ 			if (part_error)
+-				cpuset->partition_root_state = PRS_ERROR;
++				new_prs = PRS_ERROR;
+ 			break;
+ 		case PRS_ERROR:
+ 			if (!part_error)
+-				cpuset->partition_root_state = PRS_ENABLED;
++				new_prs = PRS_ENABLED;
+ 			break;
+ 		}
+ 		/*
+@@ -1260,10 +1262,10 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 		part_error = (prev_prs == PRS_ERROR);
+ 	}
+ 
+-	if (!part_error && (cpuset->partition_root_state == PRS_ERROR))
++	if (!part_error && (new_prs == PRS_ERROR))
+ 		return 0;	/* Nothing need to be done */
+ 
+-	if (cpuset->partition_root_state == PRS_ERROR) {
++	if (new_prs == PRS_ERROR) {
+ 		/*
+ 		 * Remove all its cpus from parent's subparts_cpus.
+ 		 */
+@@ -1272,7 +1274,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 				       parent->subparts_cpus);
+ 	}
+ 
+-	if (!adding && !deleting)
++	if (!adding && !deleting && (new_prs == cpuset->partition_root_state))
+ 		return 0;
+ 
+ 	/*
+@@ -1299,6 +1301,9 @@ static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,
+ 	}
+ 
+ 	parent->nr_subparts_cpus = cpumask_weight(parent->subparts_cpus);
++
++	if (cpuset->partition_root_state != new_prs)
++		cpuset->partition_root_state = new_prs;
+ 	spin_unlock_irq(&callback_lock);
+ 
+ 	return cmd == partcmd_update;
+@@ -1321,6 +1326,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 	struct cpuset *cp;
+ 	struct cgroup_subsys_state *pos_css;
+ 	bool need_rebuild_sched_domains = false;
++	int new_prs;
+ 
+ 	rcu_read_lock();
+ 	cpuset_for_each_descendant_pre(cp, pos_css, cs) {
+@@ -1360,17 +1366,18 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 		 * update_tasks_cpumask() again for tasks in the parent
+ 		 * cpuset if the parent's subparts_cpus changes.
+ 		 */
+-		if ((cp != cs) && cp->partition_root_state) {
++		new_prs = cp->partition_root_state;
++		if ((cp != cs) && new_prs) {
+ 			switch (parent->partition_root_state) {
+ 			case PRS_DISABLED:
+ 				/*
+ 				 * If parent is not a partition root or an
+-				 * invalid partition root, clear the state
+-				 * state and the CS_CPU_EXCLUSIVE flag.
++				 * invalid partition root, clear its state
++				 * and its CS_CPU_EXCLUSIVE flag.
+ 				 */
+ 				WARN_ON_ONCE(cp->partition_root_state
+ 					     != PRS_ERROR);
+-				cp->partition_root_state = 0;
++				new_prs = PRS_DISABLED;
+ 
+ 				/*
+ 				 * clear_bit() is an atomic operation and
+@@ -1391,11 +1398,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 				/*
+ 				 * When parent is invalid, it has to be too.
+ 				 */
+-				cp->partition_root_state = PRS_ERROR;
+-				if (cp->nr_subparts_cpus) {
+-					cp->nr_subparts_cpus = 0;
+-					cpumask_clear(cp->subparts_cpus);
+-				}
++				new_prs = PRS_ERROR;
+ 				break;
+ 			}
+ 		}
+@@ -1407,8 +1410,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 		spin_lock_irq(&callback_lock);
+ 
+ 		cpumask_copy(cp->effective_cpus, tmp->new_cpus);
+-		if (cp->nr_subparts_cpus &&
+-		   (cp->partition_root_state != PRS_ENABLED)) {
++		if (cp->nr_subparts_cpus && (new_prs != PRS_ENABLED)) {
+ 			cp->nr_subparts_cpus = 0;
+ 			cpumask_clear(cp->subparts_cpus);
+ 		} else if (cp->nr_subparts_cpus) {
+@@ -1435,6 +1437,10 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
+ 					= cpumask_weight(cp->subparts_cpus);
+ 			}
+ 		}
++
++		if (new_prs != cp->partition_root_state)
++			cp->partition_root_state = new_prs;
++
+ 		spin_unlock_irq(&callback_lock);
+ 
+ 		WARN_ON(!is_in_v2_mode() &&
+@@ -1937,34 +1943,32 @@ out:
+ 
+ /*
+  * update_prstate - update partititon_root_state
+- * cs:	the cpuset to update
+- * val: 0 - disabled, 1 - enabled
++ * cs: the cpuset to update
++ * new_prs: new partition root state
+  *
+  * Call with cpuset_mutex held.
+  */
+-static int update_prstate(struct cpuset *cs, int val)
++static int update_prstate(struct cpuset *cs, int new_prs)
+ {
+-	int err;
++	int err, old_prs = cs->partition_root_state;
+ 	struct cpuset *parent = parent_cs(cs);
+-	struct tmpmasks tmp;
++	struct tmpmasks tmpmask;
+ 
+-	if ((val != 0) && (val != 1))
+-		return -EINVAL;
+-	if (val == cs->partition_root_state)
++	if (old_prs == new_prs)
+ 		return 0;
+ 
+ 	/*
+ 	 * Cannot force a partial or invalid partition root to a full
+ 	 * partition root.
+ 	 */
+-	if (val && cs->partition_root_state)
++	if (new_prs && (old_prs == PRS_ERROR))
+ 		return -EINVAL;
+ 
+-	if (alloc_cpumasks(NULL, &tmp))
++	if (alloc_cpumasks(NULL, &tmpmask))
+ 		return -ENOMEM;
+ 
+ 	err = -EINVAL;
+-	if (!cs->partition_root_state) {
++	if (!old_prs) {
+ 		/*
+ 		 * Turning on partition root requires setting the
+ 		 * CS_CPU_EXCLUSIVE bit implicitly as well and cpus_allowed
+@@ -1978,31 +1982,27 @@ static int update_prstate(struct cpuset *cs, int val)
+ 			goto out;
+ 
+ 		err = update_parent_subparts_cpumask(cs, partcmd_enable,
+-						     NULL, &tmp);
++						     NULL, &tmpmask);
+ 		if (err) {
+ 			update_flag(CS_CPU_EXCLUSIVE, cs, 0);
+ 			goto out;
+ 		}
+-		cs->partition_root_state = PRS_ENABLED;
+ 	} else {
+ 		/*
+ 		 * Turning off partition root will clear the
+ 		 * CS_CPU_EXCLUSIVE bit.
+ 		 */
+-		if (cs->partition_root_state == PRS_ERROR) {
+-			cs->partition_root_state = 0;
++		if (old_prs == PRS_ERROR) {
+ 			update_flag(CS_CPU_EXCLUSIVE, cs, 0);
+ 			err = 0;
+ 			goto out;
+ 		}
+ 
+ 		err = update_parent_subparts_cpumask(cs, partcmd_disable,
+-						     NULL, &tmp);
++						     NULL, &tmpmask);
+ 		if (err)
+ 			goto out;
+ 
+-		cs->partition_root_state = 0;
+-
+ 		/* Turning off CS_CPU_EXCLUSIVE will not return error */
+ 		update_flag(CS_CPU_EXCLUSIVE, cs, 0);
+ 	}
+@@ -2015,11 +2015,17 @@ static int update_prstate(struct cpuset *cs, int val)
+ 		update_tasks_cpumask(parent);
+ 
+ 	if (parent->child_ecpus_count)
+-		update_sibling_cpumasks(parent, cs, &tmp);
++		update_sibling_cpumasks(parent, cs, &tmpmask);
+ 
+ 	rebuild_sched_domains_locked();
+ out:
+-	free_cpumasks(NULL, &tmp);
++	if (!err) {
++		spin_lock_irq(&callback_lock);
++		cs->partition_root_state = new_prs;
++		spin_unlock_irq(&callback_lock);
++	}
++
++	free_cpumasks(NULL, &tmpmask);
+ 	return err;
+ }
+ 
+@@ -3060,7 +3066,7 @@ retry:
+ 		goto retry;
+ 	}
+ 
+-	parent =  parent_cs(cs);
++	parent = parent_cs(cs);
+ 	compute_effective_cpumask(&new_cpus, cs, parent);
+ 	nodes_and(new_mems, cs->mems_allowed, parent->effective_mems);
+ 
+@@ -3082,8 +3088,10 @@ retry:
+ 	if (is_partition_root(cs) && (cpumask_empty(&new_cpus) ||
+ 	   (parent->partition_root_state == PRS_ERROR))) {
+ 		if (cs->nr_subparts_cpus) {
++			spin_lock_irq(&callback_lock);
+ 			cs->nr_subparts_cpus = 0;
+ 			cpumask_clear(cs->subparts_cpus);
++			spin_unlock_irq(&callback_lock);
+ 			compute_effective_cpumask(&new_cpus, cs, parent);
+ 		}
+ 
+@@ -3097,7 +3105,9 @@ retry:
+ 		     cpumask_empty(&new_cpus)) {
+ 			update_parent_subparts_cpumask(cs, partcmd_disable,
+ 						       NULL, tmp);
++			spin_lock_irq(&callback_lock);
+ 			cs->partition_root_state = PRS_ERROR;
++			spin_unlock_irq(&callback_lock);
+ 		}
+ 		cpuset_force_rebuild();
+ 	}
+@@ -3168,6 +3178,13 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
+ 	cpus_updated = !cpumask_equal(top_cpuset.effective_cpus, &new_cpus);
+ 	mems_updated = !nodes_equal(top_cpuset.effective_mems, new_mems);
+ 
++	/*
++	 * In the rare case that hotplug removes all the cpus in subparts_cpus,
++	 * we assumed that cpus are updated.
++	 */
++	if (!cpus_updated && top_cpuset.nr_subparts_cpus)
++		cpus_updated = true;
++
+ 	/* synchronize cpus_allowed to cpu_active_mask */
+ 	if (cpus_updated) {
+ 		spin_lock_irq(&callback_lock);
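
Throughout the cpuset hunks, the partition state is computed into a local new_prs and only published under callback_lock, so concurrent readers never observe an intermediate value. A tiny pthread model of that stage-then-commit pattern (a rough analog, not the cpuset locking scheme itself):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static int partition_state;	/* read under state_lock elsewhere */

static int update_state(int requested)
{
	int new_state = requested;	/* all derivation happens unlocked */

	if (new_state < 0)
		return -1;	/* reject without touching shared state */

	pthread_mutex_lock(&state_lock);
	if (partition_state != new_state)	/* publish in one step */
		partition_state = new_state;
	pthread_mutex_unlock(&state_lock);
	return 0;
}

int main(void)
{
	update_state(1);
	printf("state=%d\n", partition_state);
	return 0;
}
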
+diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
+index f7e1d0eccdbc6..246efc74e3f34 100644
+--- a/kernel/cpu_pm.c
++++ b/kernel/cpu_pm.c
+@@ -13,19 +13,32 @@
+ #include <linux/spinlock.h>
+ #include <linux/syscore_ops.h>
+ 
+-static ATOMIC_NOTIFIER_HEAD(cpu_pm_notifier_chain);
++/*
++ * atomic_notifiers use a spinlock_t, which can block under PREEMPT_RT.
++ * Notifications for cpu_pm will be issued by the idle task itself, which can
++ * never block, IOW it requires using a raw_spinlock_t.
++ */
++static struct {
++	struct raw_notifier_head chain;
++	raw_spinlock_t lock;
++} cpu_pm_notifier = {
++	.chain = RAW_NOTIFIER_INIT(cpu_pm_notifier.chain),
++	.lock  = __RAW_SPIN_LOCK_UNLOCKED(cpu_pm_notifier.lock),
++};
+ 
+ static int cpu_pm_notify(enum cpu_pm_event event)
+ {
+ 	int ret;
+ 
+ 	/*
+-	 * atomic_notifier_call_chain has a RCU read critical section, which
+-	 * could be disfunctional in cpu idle. Copy RCU_NONIDLE code to let
+-	 * RCU know this.
++	 * This introduces a RCU read critical section, which could be
++	 * disfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know
++	 * this.
+ 	 */
+ 	rcu_irq_enter_irqson();
+-	ret = atomic_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL);
++	rcu_read_lock();
++	ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL);
++	rcu_read_unlock();
+ 	rcu_irq_exit_irqson();
+ 
+ 	return notifier_to_errno(ret);
+@@ -33,10 +46,13 @@ static int cpu_pm_notify(enum cpu_pm_event event)
+ 
+ static int cpu_pm_notify_robust(enum cpu_pm_event event_up, enum cpu_pm_event event_down)
+ {
++	unsigned long flags;
+ 	int ret;
+ 
+ 	rcu_irq_enter_irqson();
+-	ret = atomic_notifier_call_chain_robust(&cpu_pm_notifier_chain, event_up, event_down, NULL);
++	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
++	ret = raw_notifier_call_chain_robust(&cpu_pm_notifier.chain, event_up, event_down, NULL);
++	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
+ 	rcu_irq_exit_irqson();
+ 
+ 	return notifier_to_errno(ret);
+@@ -49,12 +65,17 @@ static int cpu_pm_notify_robust(enum cpu_pm_event event_up, enum cpu_pm_event ev
+  * Add a driver to a list of drivers that are notified about
+  * CPU and CPU cluster low power entry and exit.
+  *
+- * This function may sleep, and has the same return conditions as
+- * raw_notifier_chain_register.
++ * This function has the same return conditions as raw_notifier_chain_register.
+  */
+ int cpu_pm_register_notifier(struct notifier_block *nb)
+ {
+-	return atomic_notifier_chain_register(&cpu_pm_notifier_chain, nb);
++	unsigned long flags;
++	int ret;
++
++	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
++	ret = raw_notifier_chain_register(&cpu_pm_notifier.chain, nb);
++	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
+ 
+@@ -64,12 +85,17 @@ EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
+  *
+  * Remove a driver from the CPU PM notifier list.
+  *
+- * This function may sleep, and has the same return conditions as
+- * raw_notifier_chain_unregister.
++ * This function has the same return conditions as raw_notifier_chain_unregister.
+  */
+ int cpu_pm_unregister_notifier(struct notifier_block *nb)
+ {
+-	return atomic_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
++	unsigned long flags;
++	int ret;
++
++	raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
++	ret = raw_notifier_chain_unregister(&cpu_pm_notifier.chain, nb);
++	raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier);
+ 
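
The cpu_pm rewrite bundles the notifier chain with its own lock and switches to a raw spinlock, because atomic notifier chains use a spinlock_t that can sleep under PREEMPT_RT, which the idle path cannot tolerate. A loose userspace analog of the bundled-lock-and-chain shape (a pthread mutex stands in for raw_spinlock_t, and a fixed array for the kernel's linked chain):

#include <pthread.h>
#include <stdio.h>

#define MAX_NOTIFIERS 8

typedef int (*notifier_fn)(int event);

/* Chain and lock live together, as in the patch's cpu_pm_notifier. */
static struct {
	notifier_fn chain[MAX_NOTIFIERS];
	int count;
	pthread_mutex_t lock;
} pm_notifier = { .lock = PTHREAD_MUTEX_INITIALIZER };

static int register_notifier(notifier_fn fn)
{
	int ret = -1;

	pthread_mutex_lock(&pm_notifier.lock);
	if (pm_notifier.count < MAX_NOTIFIERS) {
		pm_notifier.chain[pm_notifier.count++] = fn;
		ret = 0;
	}
	pthread_mutex_unlock(&pm_notifier.lock);
	return ret;
}

static int demo_cb(int event)
{
	printf("event %d\n", event);
	return 0;
}

int main(void)
{
	register_notifier(demo_cb);
	pm_notifier.chain[0](1);
	return 0;
}
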
+diff --git a/kernel/irq/timings.c b/kernel/irq/timings.c
+index 4d2a702d7aa95..c43e2ac2f8def 100644
+--- a/kernel/irq/timings.c
++++ b/kernel/irq/timings.c
+@@ -799,12 +799,14 @@ static int __init irq_timings_test_irqs(struct timings_intervals *ti)
+ 
+ 		__irq_timings_store(irq, irqs, ti->intervals[i]);
+ 		if (irqs->circ_timings[i & IRQ_TIMINGS_MASK] != index) {
++			ret = -EBADSLT;
+ 			pr_err("Failed to store in the circular buffer\n");
+ 			goto out;
+ 		}
+ 	}
+ 
+ 	if (irqs->count != ti->count) {
++		ret = -ERANGE;
+ 		pr_err("Count differs\n");
+ 		goto out;
+ 	}
+diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
+index 013e1b08a1bfb..a03d3d3ff8866 100644
+--- a/kernel/locking/mutex.c
++++ b/kernel/locking/mutex.c
+@@ -928,7 +928,6 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 		    struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
+ {
+ 	struct mutex_waiter waiter;
+-	bool first = false;
+ 	struct ww_mutex *ww;
+ 	int ret;
+ 
+@@ -1007,6 +1006,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 
+ 	set_current_state(state);
+ 	for (;;) {
++		bool first;
++
+ 		/*
+ 		 * Once we hold wait_lock, we're serialized against
+ 		 * mutex_unlock() handing the lock off to us, do a trylock
+@@ -1035,15 +1036,9 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
+ 		spin_unlock(&lock->wait_lock);
+ 		schedule_preempt_disabled();
+ 
+-		/*
+-		 * ww_mutex needs to always recheck its position since its waiter
+-		 * list is not FIFO ordered.
+-		 */
+-		if (ww_ctx || !first) {
+-			first = __mutex_waiter_is_first(lock, &waiter);
+-			if (first)
+-				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+-		}
++		first = __mutex_waiter_is_first(lock, &waiter);
++		if (first)
++			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+ 
+ 		set_current_state(state);
+ 		/*
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index 0f4530b3a8cd9..a332ccd829e24 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -170,7 +170,9 @@ static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,
+ 	/* Compute the cost of each performance state. */
+ 	fmax = (u64) table[nr_states - 1].frequency;
+ 	for (i = 0; i < nr_states; i++) {
+-		table[i].cost = div64_u64(fmax * table[i].power,
++		unsigned long power_res = em_scale_power(table[i].power);
++
++		table[i].cost = div64_u64(fmax * power_res,
+ 					  table[i].frequency);
+ 	}
+ 
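
em_create_perf_table() now routes each state's power through em_scale_power(), trading a cheap 64-bit multiply for finer-grained EAS cost comparisons. A compilable model of the scaled computation (the pointer-width test below is only a stand-in for CONFIG_64BIT):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Extra resolution only where 64-bit math is cheap. */
#if UINTPTR_MAX > 0xffffffffu
#define em_scale_power(p)	((p) * 1000)
#else
#define em_scale_power(p)	(p)
#endif

int main(void)
{
	uint64_t fmax = 3000000;		/* kHz */
	uint64_t power = 250, freq = 1500000;	/* one perf state */

	/* cost = fmax * scaled_power / freq, as in em_create_perf_table() */
	uint64_t cost = fmax * em_scale_power(power) / freq;

	printf("cost=%" PRIu64 "\n", cost);
	return 0;
}
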
+diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
+index 59b95cc5cbdf1..c615fd153cb29 100644
+--- a/kernel/rcu/tree_stall.h
++++ b/kernel/rcu/tree_stall.h
+@@ -7,6 +7,8 @@
+  * Author: Paul E. McKenney <paulmck@linux.ibm.com>
+  */
+ 
++#include <linux/kvm_para.h>
++
+ //////////////////////////////////////////////////////////////////////////////
+ //
+ // Controlling CPU stall warnings, including delay calculation.
+@@ -267,8 +269,10 @@ static int rcu_print_task_stall(struct rcu_node *rnp, unsigned long flags)
+ 	struct task_struct *ts[8];
+ 
+ 	lockdep_assert_irqs_disabled();
+-	if (!rcu_preempt_blocked_readers_cgp(rnp))
++	if (!rcu_preempt_blocked_readers_cgp(rnp)) {
++		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ 		return 0;
++	}
+ 	pr_err("\tTasks blocked on level-%d rcu_node (CPUs %d-%d):",
+ 	       rnp->level, rnp->grplo, rnp->grphi);
+ 	t = list_entry(rnp->gp_tasks->prev,
+@@ -280,8 +284,8 @@ static int rcu_print_task_stall(struct rcu_node *rnp, unsigned long flags)
+ 			break;
+ 	}
+ 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+-	for (i--; i; i--) {
+-		t = ts[i];
++	while (i) {
++		t = ts[--i];
+ 		if (!try_invoke_on_locked_down_task(t, check_slow_task, &rscr))
+ 			pr_cont(" P%d", t->pid);
+ 		else
+@@ -695,6 +699,14 @@ static void check_cpu_stall(struct rcu_data *rdp)
+ 	    (READ_ONCE(rnp->qsmask) & rdp->grpmask) &&
+ 	    cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
+ 
++		/*
++		 * If a virtual machine is stopped by the host it can look to
++		 * the watchdog like an RCU stall. Check to see if the host
++		 * stopped the vm.
++		 */
++		if (kvm_check_and_clear_guest_paused())
++			return;
++
+ 		/* We haven't checked in, so go dump stack. */
+ 		print_cpu_stall(gps);
+ 		if (READ_ONCE(rcu_cpu_stall_ftrace_dump))
+@@ -704,6 +716,14 @@ static void check_cpu_stall(struct rcu_data *rdp)
+ 		   ULONG_CMP_GE(j, js + RCU_STALL_RAT_DELAY) &&
+ 		   cmpxchg(&rcu_state.jiffies_stall, js, jn) == js) {
+ 
++		/*
++		 * If a virtual machine is stopped by the host it can look to
++		 * the watchdog like an RCU stall. Check to see if the host
++		 * stopped the vm.
++		 */
++		if (kvm_check_and_clear_guest_paused())
++			return;
++
+ 		/* They had a few time units to dump stack, so complain. */
+ 		print_other_cpu_stall(gs2, gps);
+ 		if (READ_ONCE(rcu_cpu_stall_ftrace_dump))
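
Besides the kvm_check_and_clear_guest_paused() additions, the stall printer's drain loop changes from "for (i--; i; i--)", which never visited ts[0], to a pre-decrementing while loop. The behavior in miniature:

#include <stdio.h>

int main(void)
{
	int ts[] = { 10, 20, 30 };
	int i = 3;			/* number of valid entries */

	/* "while (i)" with a pre-decrement covers every index, 0 included */
	while (i) {
		int t = ts[--i];

		printf("P%d ", t);	/* prints P30 P20 P10 */
	}
	printf("\n");
	return 0;
}
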
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 15b4d2fb6be38..1e9672d609f7e 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1281,6 +1281,23 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
+ 		uclamp_rq_dec_id(rq, p, clamp_id);
+ }
+ 
++static inline void uclamp_rq_reinc_id(struct rq *rq, struct task_struct *p,
++				      enum uclamp_id clamp_id)
++{
++	if (!p->uclamp[clamp_id].active)
++		return;
++
++	uclamp_rq_dec_id(rq, p, clamp_id);
++	uclamp_rq_inc_id(rq, p, clamp_id);
++
++	/*
++	 * Make sure to clear the idle flag if we've transiently reached 0
++	 * active tasks on rq.
++	 */
++	if (clamp_id == UCLAMP_MAX && (rq->uclamp_flags & UCLAMP_FLAG_IDLE))
++		rq->uclamp_flags &= ~UCLAMP_FLAG_IDLE;
++}
++
+ static inline void
+ uclamp_update_active(struct task_struct *p)
+ {
+@@ -1304,12 +1321,8 @@ uclamp_update_active(struct task_struct *p)
+ 	 * affecting a valid clamp bucket, the next time it's enqueued,
+ 	 * it will already see the updated clamp bucket value.
+ 	 */
+-	for_each_clamp_id(clamp_id) {
+-		if (p->uclamp[clamp_id].active) {
+-			uclamp_rq_dec_id(rq, p, clamp_id);
+-			uclamp_rq_inc_id(rq, p, clamp_id);
+-		}
+-	}
++	for_each_clamp_id(clamp_id)
++		uclamp_rq_reinc_id(rq, p, clamp_id);
+ 
+ 	task_rq_unlock(rq, p, &rf);
+ }
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 2f9964b467e03..fa29a69e14c9f 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1733,6 +1733,7 @@ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused
+ 	 */
+ 	raw_spin_lock(&rq->lock);
+ 	if (p->dl.dl_non_contending) {
++		update_rq_clock(rq);
+ 		sub_running_bw(&p->dl, &rq->dl);
+ 		p->dl.dl_non_contending = 0;
+ 		/*
+@@ -2729,7 +2730,7 @@ void __setparam_dl(struct task_struct *p, const struct sched_attr *attr)
+ 	dl_se->dl_runtime = attr->sched_runtime;
+ 	dl_se->dl_deadline = attr->sched_deadline;
+ 	dl_se->dl_period = attr->sched_period ?: dl_se->dl_deadline;
+-	dl_se->flags = attr->sched_flags;
++	dl_se->flags = attr->sched_flags & SCHED_DL_FLAGS;
+ 	dl_se->dl_bw = to_ratio(dl_se->dl_period, dl_se->dl_runtime);
+ 	dl_se->dl_density = to_ratio(dl_se->dl_deadline, dl_se->dl_runtime);
+ }
+@@ -2742,7 +2743,8 @@ void __getparam_dl(struct task_struct *p, struct sched_attr *attr)
+ 	attr->sched_runtime = dl_se->dl_runtime;
+ 	attr->sched_deadline = dl_se->dl_deadline;
+ 	attr->sched_period = dl_se->dl_period;
+-	attr->sched_flags = dl_se->flags;
++	attr->sched_flags &= ~SCHED_DL_FLAGS;
++	attr->sched_flags |= dl_se->flags;
+ }
+ 
+ /*
+@@ -2839,7 +2841,7 @@ bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr)
+ 	if (dl_se->dl_runtime != attr->sched_runtime ||
+ 	    dl_se->dl_deadline != attr->sched_deadline ||
+ 	    dl_se->dl_period != attr->sched_period ||
+-	    dl_se->flags != attr->sched_flags)
++	    dl_se->flags != (attr->sched_flags & SCHED_DL_FLAGS))
+ 		return true;
+ 
+ 	return false;
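
__getparam_dl() now splices only the SCHED_DL_FLAGS bits into the caller's sched_flags, and dl_param_changed() masks before comparing, so unrelated flag bits survive a get/set round trip. A small demonstration of that read-modify-write masking (flag values here are invented for the example):

#include <stdio.h>

#define FLAG_RECLAIM	0x1
#define FLAG_OVERRUN	0x2
#define FLAG_SUGOV	0x4
#define DL_FLAGS	(FLAG_RECLAIM | FLAG_OVERRUN | FLAG_SUGOV)

/* Update only the owned bits; leave everything else in caller intact. */
static unsigned int merge_flags(unsigned int caller, unsigned int se_flags)
{
	caller &= ~DL_FLAGS;		 /* drop stale deadline bits */
	caller |= (se_flags & DL_FLAGS); /* splice in the current ones */
	return caller;
}

int main(void)
{
	/* the caller's unrelated 0x100 bit must survive the merge */
	printf("0x%x\n", merge_flags(0x100 | FLAG_SUGOV, FLAG_RECLAIM));
	return 0;
}
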
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index c5aacbd492a19..f8c94bebd17db 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -388,6 +388,13 @@ void update_sched_domain_debugfs(void)
+ {
+ 	int cpu, i;
+ 
++	/*
++	 * This can unfortunately be invoked before sched_debug_init() creates
++	 * the debug directory. Don't touch sd_sysctl_cpus until then.
++	 */
++	if (!debugfs_sched)
++		return;
++
+ 	if (!cpumask_available(sd_sysctl_cpus)) {
+ 		if (!alloc_cpumask_var(&sd_sysctl_cpus, GFP_KERNEL))
+ 			return;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index f60ef0b4ec330..3889fee98d11c 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -1529,7 +1529,7 @@ static inline bool is_core_idle(int cpu)
+ 		if (cpu == sibling)
+ 			continue;
+ 
+-		if (!idle_cpu(cpu))
++		if (!idle_cpu(sibling))
+ 			return false;
+ 	}
+ #endif
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index f2bc99ca01e57..9aa157c20722a 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -227,6 +227,8 @@ static inline void update_avg(u64 *avg, u64 sample)
+  */
+ #define SCHED_FLAG_SUGOV	0x10000000
+ 
++#define SCHED_DL_FLAGS (SCHED_FLAG_RECLAIM | SCHED_FLAG_DL_OVERRUN | SCHED_FLAG_SUGOV)
++
+ static inline bool dl_entity_is_special(struct sched_dl_entity *dl_se)
+ {
+ #ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 55a0a243e8718..41f778f3db057 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -1372,6 +1372,8 @@ int				sched_max_numa_distance;
+ static int			*sched_domains_numa_distance;
+ static struct cpumask		***sched_domains_numa_masks;
+ int __read_mostly		node_reclaim_distance = RECLAIM_DISTANCE;
++
++static unsigned long __read_mostly *sched_numa_onlined_nodes;
+ #endif
+ 
+ /*
+@@ -1719,6 +1721,16 @@ void sched_init_numa(void)
+ 			sched_domains_numa_masks[i][j] = mask;
+ 
+ 			for_each_node(k) {
++				/*
++				 * Distance information can be unreliable for
++				 * offline nodes, defer building the node
++				 * masks to its bringup.
++				 * This relies on all unique distance values
++				 * still being visible at init time.
++				 */
++				if (!node_online(j))
++					continue;
++
+ 				if (sched_debug() && (node_distance(j, k) != node_distance(k, j)))
+ 					sched_numa_warn("Node-distance not symmetric");
+ 
+@@ -1772,6 +1784,53 @@ void sched_init_numa(void)
+ 	sched_max_numa_distance = sched_domains_numa_distance[nr_levels - 1];
+ 
+ 	init_numa_topology_type();
++
++	sched_numa_onlined_nodes = bitmap_alloc(nr_node_ids, GFP_KERNEL);
++	if (!sched_numa_onlined_nodes)
++		return;
++
++	bitmap_zero(sched_numa_onlined_nodes, nr_node_ids);
++	for_each_online_node(i)
++		bitmap_set(sched_numa_onlined_nodes, i, 1);
++}
++
++static void __sched_domains_numa_masks_set(unsigned int node)
++{
++	int i, j;
++
++	/*
++	 * NUMA masks are not built for offline nodes in sched_init_numa().
++	 * Thus, when a CPU of a never-onlined-before node gets plugged in,
++	 * adding that new CPU to the right NUMA masks is not sufficient: the
++	 * masks of that CPU's node must also be updated.
++	 */
++	if (test_bit(node, sched_numa_onlined_nodes))
++		return;
++
++	bitmap_set(sched_numa_onlined_nodes, node, 1);
++
++	for (i = 0; i < sched_domains_numa_levels; i++) {
++		for (j = 0; j < nr_node_ids; j++) {
++			if (!node_online(j) || node == j)
++				continue;
++
++			if (node_distance(j, node) > sched_domains_numa_distance[i])
++				continue;
++
++			/* Add remote nodes in our masks */
++			cpumask_or(sched_domains_numa_masks[i][node],
++				   sched_domains_numa_masks[i][node],
++				   sched_domains_numa_masks[0][j]);
++		}
++	}
++
++	/*
++	 * A new node has been brought up, potentially changing the topology
++	 * classification.
++	 *
++	 * Note that this is racy vs any use of sched_numa_topology_type :/
++	 */
++	init_numa_topology_type();
+ }
+ 
+ void sched_domains_numa_masks_set(unsigned int cpu)
+@@ -1779,8 +1838,14 @@ void sched_domains_numa_masks_set(unsigned int cpu)
+ 	int node = cpu_to_node(cpu);
+ 	int i, j;
+ 
++	__sched_domains_numa_masks_set(node);
++
+ 	for (i = 0; i < sched_domains_numa_levels; i++) {
+ 		for (j = 0; j < nr_node_ids; j++) {
++			if (!node_online(j))
++				continue;
++
++			/* Set ourselves in the remote node's masks */
+ 			if (node_distance(j, node) <= sched_domains_numa_distance[i])
+ 				cpumask_set_cpu(cpu, sched_domains_numa_masks[i][j]);
+ 		}
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 4a66725b1d4ac..5af7584734888 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -758,22 +758,6 @@ static void hrtimer_switch_to_hres(void)
+ 	retrigger_next_event(NULL);
+ }
+ 
+-static void clock_was_set_work(struct work_struct *work)
+-{
+-	clock_was_set();
+-}
+-
+-static DECLARE_WORK(hrtimer_work, clock_was_set_work);
+-
+-/*
+- * Called from timekeeping and resume code to reprogram the hrtimer
+- * interrupt device on all cpus.
+- */
+-void clock_was_set_delayed(void)
+-{
+-	schedule_work(&hrtimer_work);
+-}
+-
+ #else
+ 
+ static inline int hrtimer_is_hres_enabled(void) { return 0; }
+@@ -891,6 +875,22 @@ void clock_was_set(void)
+ 	timerfd_clock_was_set();
+ }
+ 
++static void clock_was_set_work(struct work_struct *work)
++{
++	clock_was_set();
++}
++
++static DECLARE_WORK(hrtimer_work, clock_was_set_work);
++
++/*
++ * Called from timekeeping and resume code to reprogram the hrtimer
++ * interrupt device on all cpus and to notify timerfd.
++ */
++void clock_was_set_delayed(void)
++{
++	schedule_work(&hrtimer_work);
++}
++
+ /*
+  * During resume we might have to reprogram the high resolution timer
+  * interrupt on all online CPUs.  However, all other CPUs will be
+@@ -1030,12 +1030,13 @@ static void __remove_hrtimer(struct hrtimer *timer,
+  * remove hrtimer, called with base lock held
+  */
+ static inline int
+-remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base, bool restart)
++remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base,
++	       bool restart, bool keep_local)
+ {
+ 	u8 state = timer->state;
+ 
+ 	if (state & HRTIMER_STATE_ENQUEUED) {
+-		int reprogram;
++		bool reprogram;
+ 
+ 		/*
+ 		 * Remove the timer and force reprogramming when high
+@@ -1048,8 +1049,16 @@ remove_hrtimer(struct hrtimer *timer, struct hrtimer_clock_base *base, bool rest
+ 		debug_deactivate(timer);
+ 		reprogram = base->cpu_base == this_cpu_ptr(&hrtimer_bases);
+ 
++		/*
++		 * If the timer is not restarted then reprogramming is
++		 * required if the timer is local. If it is local and about
++		 * to be restarted, avoid programming it twice (on removal
++		 * and a moment later when it's requeued).
++		 */
+ 		if (!restart)
+ 			state = HRTIMER_STATE_INACTIVE;
++		else
++			reprogram &= !keep_local;
+ 
+ 		__remove_hrtimer(timer, base, state, reprogram);
+ 		return 1;
+@@ -1103,9 +1112,31 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ 				    struct hrtimer_clock_base *base)
+ {
+ 	struct hrtimer_clock_base *new_base;
++	bool force_local, first;
++
++	/*
++	 * If the timer is on the local cpu base and is the first expiring
++	 * timer then this might end up reprogramming the hardware twice
++	 * (on removal and on enqueue). To avoid that by prevent the
++	 * reprogram on removal, keep the timer local to the current CPU
++	 * and enforce reprogramming after it is queued no matter whether
++	 * it is the new first expiring timer again or not.
++	 */
++	force_local = base->cpu_base == this_cpu_ptr(&hrtimer_bases);
++	force_local &= base->cpu_base->next_timer == timer;
+ 
+-	/* Remove an active timer from the queue: */
+-	remove_hrtimer(timer, base, true);
++	/*
++	 * Remove an active timer from the queue. In case it is not queued
++	 * on the current CPU, make sure that remove_hrtimer() updates the
++	 * remote data correctly.
++	 *
++	 * If it's on the current CPU and the first expiring timer, then
++	 * skip reprogramming, keep the timer local and enforce
++	 * reprogramming later if it was the first expiring timer.  This
++	 * avoids programming the underlying clock event twice (once at
++	 * removal and once after enqueue).
++	 */
++	remove_hrtimer(timer, base, true, force_local);
+ 
+ 	if (mode & HRTIMER_MODE_REL)
+ 		tim = ktime_add_safe(tim, base->get_time());
+@@ -1115,9 +1146,24 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ 	hrtimer_set_expires_range_ns(timer, tim, delta_ns);
+ 
+ 	/* Switch the timer base, if necessary: */
+-	new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
++	if (!force_local) {
++		new_base = switch_hrtimer_base(timer, base,
++					       mode & HRTIMER_MODE_PINNED);
++	} else {
++		new_base = base;
++	}
++
++	first = enqueue_hrtimer(timer, new_base, mode);
++	if (!force_local)
++		return first;
+ 
+-	return enqueue_hrtimer(timer, new_base, mode);
++	/*
++	 * Timer was forced to stay on the current CPU to avoid
++	 * reprogramming on removal and enqueue. Force reprogram the
++	 * hardware by evaluating the new first expiring timer.
++	 */
++	hrtimer_force_reprogram(new_base->cpu_base, 1);
++	return 0;
+ }
+ 
+ /**
+@@ -1183,7 +1229,7 @@ int hrtimer_try_to_cancel(struct hrtimer *timer)
+ 	base = lock_hrtimer_base(timer, &flags);
+ 
+ 	if (!hrtimer_callback_running(timer))
+-		ret = remove_hrtimer(timer, base, false);
++		ret = remove_hrtimer(timer, base, false, false);
+ 
+ 	unlock_hrtimer_base(timer, &flags);
+ 
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index aa52fc85dbcbf..a9f8d25220b1a 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -1346,8 +1346,6 @@ void set_process_cpu_timer(struct task_struct *tsk, unsigned int clkid,
+ 			}
+ 		}
+ 
+-		if (!*newval)
+-			return;
+ 		*newval += now;
+ 	}
+ 
+diff --git a/kernel/time/tick-internal.h b/kernel/time/tick-internal.h
+index 7a981c9e87a4a..e61c1244e7d46 100644
+--- a/kernel/time/tick-internal.h
++++ b/kernel/time/tick-internal.h
+@@ -164,3 +164,6 @@ DECLARE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases);
+ 
+ extern u64 get_next_timer_interrupt(unsigned long basej, u64 basem);
+ void timer_clear_idle(void);
++
++void clock_was_set(void);
++void clock_was_set_delayed(void);
+diff --git a/lib/mpi/mpiutil.c b/lib/mpi/mpiutil.c
+index 3c63710c20c69..e6c4b3180ab1d 100644
+--- a/lib/mpi/mpiutil.c
++++ b/lib/mpi/mpiutil.c
+@@ -148,7 +148,7 @@ int mpi_resize(MPI a, unsigned nlimbs)
+ 		return 0;	/* no need to do it */
+ 
+ 	if (a->d) {
+-		p = kmalloc_array(nlimbs, sizeof(mpi_limb_t), GFP_KERNEL);
++		p = kcalloc(nlimbs, sizeof(mpi_limb_t), GFP_KERNEL);
+ 		if (!p)
+ 			return -ENOMEM;
+ 		memcpy(p, a->d, a->alloced * sizeof(mpi_limb_t));
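The mpi_resize() change above is subtle: only the first a->alloced limbs are
copied into the larger buffer, so with kmalloc_array() the tail limbs held
uninitialized heap data. kcalloc() zeroes the whole allocation first. A hedged
userspace model of the grow-and-copy, with calloc standing in for kcalloc:

#include <stdlib.h>
#include <string.h>

typedef unsigned long limb_t;

/* Grow a limb array from old_n to new_n entries; the tail beyond the
 * copied region must read as zero, which is what calloc guarantees. */
static limb_t *grow_limbs(limb_t *old, size_t old_n, size_t new_n)
{
	limb_t *p = calloc(new_n, sizeof(*p));

	if (!p)
		return NULL;
	memcpy(p, old, old_n * sizeof(*p)); /* limbs old_n..new_n-1 stay 0 */
	free(old);
	return p;
}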
+diff --git a/net/6lowpan/debugfs.c b/net/6lowpan/debugfs.c
+index 1c140af06d527..600b9563bfc53 100644
+--- a/net/6lowpan/debugfs.c
++++ b/net/6lowpan/debugfs.c
+@@ -170,7 +170,8 @@ static void lowpan_dev_debugfs_ctx_init(struct net_device *dev,
+ 	struct dentry *root;
+ 	char buf[32];
+ 
+-	WARN_ON_ONCE(id > LOWPAN_IPHC_CTX_TABLE_SIZE);
++	if (WARN_ON_ONCE(id >= LOWPAN_IPHC_CTX_TABLE_SIZE))
++		return;
+ 
+ 	sprintf(buf, "%d", id);
+ 
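Two fixes ride in the 6LoWPAN hunk above: the comparison becomes >= (an id
equal to the table size is already out of range), and the function now returns
instead of warning and then indexing anyway. In the kernel, WARN_ON_ONCE(cond)
evaluates to cond, which is what makes the one-line
`if (WARN_ON_ONCE(...)) return;` idiom work. A plain-C sketch of the same
bound check, with hypothetical names:

#include <stdio.h>

#define CTX_TABLE_SIZE 16 /* stand-in for LOWPAN_IPHC_CTX_TABLE_SIZE */

static int init_ctx(unsigned int id)
{
	if (id >= CTX_TABLE_SIZE) /* kernel: if (WARN_ON_ONCE(id >= ...)) */
		return -1;        /* bail out before touching table[id] */
	printf("init slot %u\n", id);
	return 0;
}

int main(void)
{
	init_ctx(CTX_TABLE_SIZE); /* rejected: the first invalid index */
	return init_ctx(3);       /* accepted */
}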
+diff --git a/net/bluetooth/cmtp/cmtp.h b/net/bluetooth/cmtp/cmtp.h
+index c32638dddbf94..f6b9dc4e408f2 100644
+--- a/net/bluetooth/cmtp/cmtp.h
++++ b/net/bluetooth/cmtp/cmtp.h
+@@ -26,7 +26,7 @@
+ #include <linux/types.h>
+ #include <net/bluetooth/bluetooth.h>
+ 
+-#define BTNAMSIZ 18
++#define BTNAMSIZ 21
+ 
+ /* CMTP ioctl defines */
+ #define CMTPCONNADD	_IOW('C', 200, int)
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index ee59d1c7f1f6c..bf1bb08b94aad 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1343,6 +1343,12 @@ int hci_inquiry(void __user *arg)
+ 		goto done;
+ 	}
+ 
++	/* Restrict maximum inquiry length to 60 seconds */
++	if (ir.length > 60) {
++		err = -EINVAL;
++		goto done;
++	}
++
+ 	hci_dev_lock(hdev);
+ 	if (inquiry_cache_age(hdev) > INQUIRY_CACHE_AGE_MAX ||
+ 	    inquiry_cache_empty(hdev) || ir.flags & IREQ_CACHE_FLUSH) {
+@@ -1734,6 +1740,14 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 	hci_request_cancel_all(hdev);
+ 	hci_req_sync_lock(hdev);
+ 
++	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
++	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
++	    test_bit(HCI_UP, &hdev->flags)) {
++		/* Execute vendor specific shutdown routine */
++		if (hdev->shutdown)
++			hdev->shutdown(hdev);
++	}
++
+ 	if (!test_and_clear_bit(HCI_UP, &hdev->flags)) {
+ 		cancel_delayed_work_sync(&hdev->cmd_timer);
+ 		hci_req_sync_unlock(hdev);
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 470eaabb021f9..6a9826164fd7f 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -7725,7 +7725,7 @@ static int add_advertising(struct sock *sk, struct hci_dev *hdev,
+ 	 * advertising.
+ 	 */
+ 	if (hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY))
+-		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_ADVERTISING,
++		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_ADD_ADVERTISING,
+ 				       MGMT_STATUS_NOT_SUPPORTED);
+ 
+ 	if (cp->instance < 1 || cp->instance > hdev->le_num_of_adv_sets)
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 3bd41563f118a..9769a7ceb6898 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -85,7 +85,6 @@ static void sco_sock_timeout(struct timer_list *t)
+ 	sk->sk_state_change(sk);
+ 	bh_unlock_sock(sk);
+ 
+-	sco_sock_kill(sk);
+ 	sock_put(sk);
+ }
+ 
+@@ -177,7 +176,6 @@ static void sco_conn_del(struct hci_conn *hcon, int err)
+ 		sco_sock_clear_timer(sk);
+ 		sco_chan_del(sk, err);
+ 		bh_unlock_sock(sk);
+-		sco_sock_kill(sk);
+ 		sock_put(sk);
+ 	}
+ 
+@@ -394,8 +392,7 @@ static void sco_sock_cleanup_listen(struct sock *parent)
+  */
+ static void sco_sock_kill(struct sock *sk)
+ {
+-	if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket ||
+-	    sock_flag(sk, SOCK_DEAD))
++	if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket)
+ 		return;
+ 
+ 	BT_DBG("sk %p state %d", sk, sk->sk_state);
+@@ -447,7 +444,6 @@ static void sco_sock_close(struct sock *sk)
+ 	lock_sock(sk);
+ 	__sco_sock_close(sk);
+ 	release_sock(sk);
+-	sco_sock_kill(sk);
+ }
+ 
+ static void sco_skb_put_cmsg(struct sk_buff *skb, struct msghdr *msg,
+@@ -773,6 +769,11 @@ static void sco_conn_defer_accept(struct hci_conn *conn, u16 setting)
+ 			cp.max_latency = cpu_to_le16(0xffff);
+ 			cp.retrans_effort = 0xff;
+ 			break;
++		default:
++			/* use CVSD settings as fallback */
++			cp.max_latency = cpu_to_le16(0xffff);
++			cp.retrans_effort = 0xff;
++			break;
+ 		}
+ 
+ 		hci_send_cmd(hdev, HCI_OP_ACCEPT_SYNC_CONN_REQ,
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 051432ea4f69e..5d01bebffacab 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -3283,10 +3283,12 @@ static void devlink_param_notify(struct devlink *devlink,
+ 				 struct devlink_param_item *param_item,
+ 				 enum devlink_command cmd);
+ 
+-static void devlink_reload_netns_change(struct devlink *devlink,
+-					struct net *dest_net)
++static void devlink_ns_change_notify(struct devlink *devlink,
++				     struct net *dest_net, struct net *curr_net,
++				     bool new)
+ {
+ 	struct devlink_param_item *param_item;
++	enum devlink_command cmd;
+ 
+ 	/* Userspace needs to be notified about devlink objects
+ 	 * removed from original and entering new network namespace.
+@@ -3294,17 +3296,18 @@ static void devlink_reload_netns_change(struct devlink *devlink,
+ 	 * reload process so the notifications are generated separately.
+ 	 */
+ 
+-	list_for_each_entry(param_item, &devlink->param_list, list)
+-		devlink_param_notify(devlink, 0, param_item,
+-				     DEVLINK_CMD_PARAM_DEL);
+-	devlink_notify(devlink, DEVLINK_CMD_DEL);
++	if (!dest_net || net_eq(dest_net, curr_net))
++		return;
+ 
+-	__devlink_net_set(devlink, dest_net);
++	if (new)
++		devlink_notify(devlink, DEVLINK_CMD_NEW);
+ 
+-	devlink_notify(devlink, DEVLINK_CMD_NEW);
++	cmd = new ? DEVLINK_CMD_PARAM_NEW : DEVLINK_CMD_PARAM_DEL;
+ 	list_for_each_entry(param_item, &devlink->param_list, list)
+-		devlink_param_notify(devlink, 0, param_item,
+-				     DEVLINK_CMD_PARAM_NEW);
++		devlink_param_notify(devlink, 0, param_item, cmd);
++
++	if (!new)
++		devlink_notify(devlink, DEVLINK_CMD_DEL);
+ }
+ 
+ static bool devlink_reload_supported(const struct devlink_ops *ops)
+@@ -3384,6 +3387,7 @@ static int devlink_reload(struct devlink *devlink, struct net *dest_net,
+ 			  u32 *actions_performed, struct netlink_ext_ack *extack)
+ {
+ 	u32 remote_reload_stats[DEVLINK_RELOAD_STATS_ARRAY_SIZE];
++	struct net *curr_net;
+ 	int err;
+ 
+ 	if (!devlink->reload_enabled)
+@@ -3391,18 +3395,22 @@ static int devlink_reload(struct devlink *devlink, struct net *dest_net,
+ 
+ 	memcpy(remote_reload_stats, devlink->stats.remote_reload_stats,
+ 	       sizeof(remote_reload_stats));
++
++	curr_net = devlink_net(devlink);
++	devlink_ns_change_notify(devlink, dest_net, curr_net, false);
+ 	err = devlink->ops->reload_down(devlink, !!dest_net, action, limit, extack);
+ 	if (err)
+ 		return err;
+ 
+-	if (dest_net && !net_eq(dest_net, devlink_net(devlink)))
+-		devlink_reload_netns_change(devlink, dest_net);
++	if (dest_net && !net_eq(dest_net, curr_net))
++		__devlink_net_set(devlink, dest_net);
+ 
+ 	err = devlink->ops->reload_up(devlink, action, limit, actions_performed, extack);
+ 	devlink_reload_failed_set(devlink, !!err);
+ 	if (err)
+ 		return err;
+ 
++	devlink_ns_change_notify(devlink, dest_net, curr_net, true);
+ 	WARN_ON(!(*actions_performed & BIT(action)));
+ 	/* Catch driver on updating the remote action within devlink reload */
+ 	WARN_ON(memcmp(remote_reload_stats, devlink->stats.remote_reload_stats,
+@@ -3599,7 +3607,7 @@ out_free_msg:
+ 
+ static void devlink_flash_update_begin_notify(struct devlink *devlink)
+ {
+-	struct devlink_flash_notify params = { 0 };
++	struct devlink_flash_notify params = {};
+ 
+ 	__devlink_flash_update_notify(devlink,
+ 				      DEVLINK_CMD_FLASH_UPDATE,
+@@ -3608,7 +3616,7 @@ static void devlink_flash_update_begin_notify(struct devlink *devlink)
+ 
+ static void devlink_flash_update_end_notify(struct devlink *devlink)
+ {
+-	struct devlink_flash_notify params = { 0 };
++	struct devlink_flash_notify params = {};
+ 
+ 	__devlink_flash_update_notify(devlink,
+ 				      DEVLINK_CMD_FLASH_UPDATE_END,
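The devlink rework above changes both the gating and the ordering of the
netns-move notifications: nothing is emitted unless the namespace really
changes, DEL events (params first, then the devlink object) go out before
reload_down(), and NEW events (object first, then params) only after
reload_up() succeeds. A small model of that gating, with opaque handles
standing in for struct net:

#include <stdbool.h>
#include <stdio.h>

/* new_side=false is called before reload_down(), true after reload_up(). */
static void ns_change_notify(const void *dest_net, const void *curr_net,
			     bool new_side)
{
	if (!dest_net || dest_net == curr_net)
		return; /* no namespace move: suppress all notifications */

	if (new_side) {
		printf("DEVLINK_CMD_NEW\n");       /* object appears first */
		printf("DEVLINK_CMD_PARAM_NEW per param\n");
	} else {
		printf("DEVLINK_CMD_PARAM_DEL per param\n");
		printf("DEVLINK_CMD_DEL\n");       /* object disappears last */
	}
}

int main(void)
{
	int a, b;

	ns_change_notify(&b, &a, false); /* before reload_down() */
	ns_change_notify(&b, &a, true);  /* after successful reload_up() */
	return 0;
}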
+diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
+index 92282de54230f..1bf602f30ce4c 100644
+--- a/net/dsa/dsa_priv.h
++++ b/net/dsa/dsa_priv.h
+@@ -211,8 +211,6 @@ int dsa_port_pre_bridge_flags(const struct dsa_port *dp,
+ int dsa_port_bridge_flags(const struct dsa_port *dp,
+ 			  struct switchdev_brport_flags flags,
+ 			  struct netlink_ext_ack *extack);
+-int dsa_port_mrouter(struct dsa_port *dp, bool mrouter,
+-		     struct netlink_ext_ack *extack);
+ int dsa_port_vlan_add(struct dsa_port *dp,
+ 		      const struct switchdev_obj_port_vlan *vlan,
+ 		      struct netlink_ext_ack *extack);
+diff --git a/net/dsa/port.c b/net/dsa/port.c
+index 6379d66a6bb32..c3ffbd41331a9 100644
+--- a/net/dsa/port.c
++++ b/net/dsa/port.c
+@@ -186,10 +186,6 @@ static int dsa_port_switchdev_sync(struct dsa_port *dp,
+ 	if (err && err != -EOPNOTSUPP)
+ 		return err;
+ 
+-	err = dsa_port_mrouter(dp->cpu_dp, br_multicast_router(br), extack);
+-	if (err && err != -EOPNOTSUPP)
+-		return err;
+-
+ 	err = dsa_port_ageing_time(dp, br_get_ageing_time(br));
+ 	if (err && err != -EOPNOTSUPP)
+ 		return err;
+@@ -235,12 +231,6 @@ static void dsa_port_switchdev_unsync(struct dsa_port *dp)
+ 
+ 	/* VLAN filtering is handled by dsa_switch_bridge_leave */
+ 
+-	/* Some drivers treat the notification for having a local multicast
+-	 * router by allowing multicast to be flooded to the CPU, so we should
+-	 * allow this in standalone mode too.
+-	 */
+-	dsa_port_mrouter(dp->cpu_dp, true, NULL);
+-
+ 	/* Ageing time may be global to the switch chip, so don't change it
+ 	 * here because we have no good reason (or value) to change it to.
+ 	 */
+@@ -555,17 +545,6 @@ int dsa_port_bridge_flags(const struct dsa_port *dp,
+ 	return ds->ops->port_bridge_flags(ds, dp->index, flags, extack);
+ }
+ 
+-int dsa_port_mrouter(struct dsa_port *dp, bool mrouter,
+-		     struct netlink_ext_ack *extack)
+-{
+-	struct dsa_switch *ds = dp->ds;
+-
+-	if (!ds->ops->port_set_mrouter)
+-		return -EOPNOTSUPP;
+-
+-	return ds->ops->port_set_mrouter(ds, dp->index, mrouter, extack);
+-}
+-
+ int dsa_port_mtu_change(struct dsa_port *dp, int new_mtu,
+ 			bool propagate_upstream)
+ {
+diff --git a/net/dsa/slave.c b/net/dsa/slave.c
+index d4756b9201089..5882159137eaf 100644
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -311,12 +311,6 @@ static int dsa_slave_port_attr_set(struct net_device *dev,
+ 
+ 		ret = dsa_port_bridge_flags(dp, attr->u.brport_flags, extack);
+ 		break;
+-	case SWITCHDEV_ATTR_ID_BRIDGE_MROUTER:
+-		if (!dsa_port_offloads_bridge(dp, attr->orig_dev))
+-			return -EOPNOTSUPP;
+-
+-		ret = dsa_port_mrouter(dp->cpu_dp, attr->u.mrouter, extack);
+-		break;
+ 	default:
+ 		ret = -EOPNOTSUPP;
+ 		break;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index d8811e1fbd6c8..f495fad73be90 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -586,18 +586,25 @@ static void fnhe_flush_routes(struct fib_nh_exception *fnhe)
+ 	}
+ }
+ 
+-static struct fib_nh_exception *fnhe_oldest(struct fnhe_hash_bucket *hash)
++static void fnhe_remove_oldest(struct fnhe_hash_bucket *hash)
+ {
+-	struct fib_nh_exception *fnhe, *oldest;
++	struct fib_nh_exception __rcu **fnhe_p, **oldest_p;
++	struct fib_nh_exception *fnhe, *oldest = NULL;
+ 
+-	oldest = rcu_dereference(hash->chain);
+-	for (fnhe = rcu_dereference(oldest->fnhe_next); fnhe;
+-	     fnhe = rcu_dereference(fnhe->fnhe_next)) {
+-		if (time_before(fnhe->fnhe_stamp, oldest->fnhe_stamp))
++	for (fnhe_p = &hash->chain; ; fnhe_p = &fnhe->fnhe_next) {
++		fnhe = rcu_dereference_protected(*fnhe_p,
++						 lockdep_is_held(&fnhe_lock));
++		if (!fnhe)
++			break;
++		if (!oldest ||
++		    time_before(fnhe->fnhe_stamp, oldest->fnhe_stamp)) {
+ 			oldest = fnhe;
++			oldest_p = fnhe_p;
++		}
+ 	}
+ 	fnhe_flush_routes(oldest);
+-	return oldest;
++	*oldest_p = oldest->fnhe_next;
++	kfree_rcu(oldest, rcu);
+ }
+ 
+ static u32 fnhe_hashfun(__be32 daddr)
+@@ -676,16 +683,21 @@ static void update_or_create_fnhe(struct fib_nh_common *nhc, __be32 daddr,
+ 		if (rt)
+ 			fill_route_from_fnhe(rt, fnhe);
+ 	} else {
+-		if (depth > FNHE_RECLAIM_DEPTH)
+-			fnhe = fnhe_oldest(hash);
+-		else {
+-			fnhe = kzalloc(sizeof(*fnhe), GFP_ATOMIC);
+-			if (!fnhe)
+-				goto out_unlock;
+-
+-			fnhe->fnhe_next = hash->chain;
+-			rcu_assign_pointer(hash->chain, fnhe);
++		/* Randomize max depth to avoid some side-channel attacks. */
++		int max_depth = FNHE_RECLAIM_DEPTH +
++				prandom_u32_max(FNHE_RECLAIM_DEPTH);
++
++		while (depth > max_depth) {
++			fnhe_remove_oldest(hash);
++			depth--;
+ 		}
++
++		fnhe = kzalloc(sizeof(*fnhe), GFP_ATOMIC);
++		if (!fnhe)
++			goto out_unlock;
++
++		fnhe->fnhe_next = hash->chain;
++
+ 		fnhe->fnhe_genid = genid;
+ 		fnhe->fnhe_daddr = daddr;
+ 		fnhe->fnhe_gw = gw;
+@@ -693,6 +705,8 @@ static void update_or_create_fnhe(struct fib_nh_common *nhc, __be32 daddr,
+ 		fnhe->fnhe_mtu_locked = lock;
+ 		fnhe->fnhe_expires = max(1UL, expires);
+ 
++		rcu_assign_pointer(hash->chain, fnhe);
++
+ 		/* Exception created; mark the cached routes for the nexthop
+ 		 * stale, so anyone caching it rechecks if this exception
+ 		 * applies to them.
+@@ -3047,7 +3061,7 @@ static struct sk_buff *inet_rtm_getroute_build_skb(__be32 src, __be32 dst,
+ 		udph = skb_put_zero(skb, sizeof(struct udphdr));
+ 		udph->source = sport;
+ 		udph->dest = dport;
+-		udph->len = sizeof(struct udphdr);
++		udph->len = htons(sizeof(struct udphdr));
+ 		udph->check = 0;
+ 		break;
+ 	}
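The route.c changes above replace a fixed eviction threshold with a randomized
one: evicting exactly when a bucket reaches FNHE_RECLAIM_DEPTH lets an
attacker probe how many exceptions a bucket holds, so the limit now varies per
insertion (the same file also gains the missing htons() on udph->len, a
straightforward byte-order fix). A userspace model of the randomized bound,
where rand_below() stands in for the kernel's prandom_u32_max():

#include <stdint.h>
#include <stdlib.h>

#define RECLAIM_DEPTH 5 /* stand-in for FNHE_RECLAIM_DEPTH */

static uint32_t rand_below(uint32_t n)
{
	return (uint32_t)rand() % n; /* kernel: prandom_u32_max(n) */
}

/* Trim the bucket to a randomized bound, then account the new entry. */
static void insert_exception(unsigned int *depth)
{
	unsigned int max_depth = RECLAIM_DEPTH + rand_below(RECLAIM_DEPTH);

	while (*depth > max_depth) {
		/* fnhe_remove_oldest(hash) in the real code */
		(*depth)--;
	}
	(*depth)++;
}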
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 8bb5f7f51dae7..7f3c7d57a39d4 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2440,6 +2440,7 @@ static void *tcp_get_idx(struct seq_file *seq, loff_t pos)
+ static void *tcp_seek_last_pos(struct seq_file *seq)
+ {
+ 	struct tcp_iter_state *st = seq->private;
++	int bucket = st->bucket;
+ 	int offset = st->offset;
+ 	int orig_num = st->num;
+ 	void *rc = NULL;
+@@ -2450,7 +2451,7 @@ static void *tcp_seek_last_pos(struct seq_file *seq)
+ 			break;
+ 		st->state = TCP_SEQ_STATE_LISTENING;
+ 		rc = listening_get_next(seq, NULL);
+-		while (offset-- && rc)
++		while (offset-- && rc && bucket == st->bucket)
+ 			rc = listening_get_next(seq, rc);
+ 		if (rc)
+ 			break;
+@@ -2461,7 +2462,7 @@ static void *tcp_seek_last_pos(struct seq_file *seq)
+ 		if (st->bucket > tcp_hashinfo.ehash_mask)
+ 			break;
+ 		rc = established_get_first(seq);
+-		while (offset-- && rc)
++		while (offset-- && rc && bucket == st->bucket)
+ 			rc = established_get_next(seq, rc);
+ 	}
+ 
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 67c74469503a5..cd99de5b6882c 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1657,6 +1657,7 @@ static int rt6_insert_exception(struct rt6_info *nrt,
+ 	struct in6_addr *src_key = NULL;
+ 	struct rt6_exception *rt6_ex;
+ 	struct fib6_nh *nh = res->nh;
++	int max_depth;
+ 	int err = 0;
+ 
+ 	spin_lock_bh(&rt6_exception_lock);
+@@ -1711,7 +1712,9 @@ static int rt6_insert_exception(struct rt6_info *nrt,
+ 	bucket->depth++;
+ 	net->ipv6.rt6_stats->fib_rt_cache++;
+ 
+-	if (bucket->depth > FIB6_MAX_DEPTH)
++	/* Randomize max depth to avoid some side-channel attacks. */
++	max_depth = FIB6_MAX_DEPTH + prandom_u32_max(FIB6_MAX_DEPTH);
++	while (bucket->depth > max_depth)
+ 		rt6_exception_remove_oldest(bucket);
+ 
+ out:
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 2651498d05e8e..e61c320974ba7 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3210,7 +3210,9 @@ static bool ieee80211_amsdu_prepare_head(struct ieee80211_sub_if_data *sdata,
+ 	if (info->control.flags & IEEE80211_TX_CTRL_AMSDU)
+ 		return true;
+ 
+-	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr)))
++	if (!ieee80211_amsdu_realloc_pad(local, skb,
++					 sizeof(*amsdu_hdr) +
++					 local->hw.extra_tx_headroom))
+ 		return false;
+ 
+ 	data = skb_push(skb, sizeof(*amsdu_hdr));
+diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c
+index 4f50a64315cf0..50f40943c8153 100644
+--- a/net/netlabel/netlabel_cipso_v4.c
++++ b/net/netlabel/netlabel_cipso_v4.c
+@@ -187,14 +187,14 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
+ 		}
+ 	doi_def->map.std->lvl.local = kcalloc(doi_def->map.std->lvl.local_size,
+ 					      sizeof(u32),
+-					      GFP_KERNEL);
++					      GFP_KERNEL | __GFP_NOWARN);
+ 	if (doi_def->map.std->lvl.local == NULL) {
+ 		ret_val = -ENOMEM;
+ 		goto add_std_failure;
+ 	}
+ 	doi_def->map.std->lvl.cipso = kcalloc(doi_def->map.std->lvl.cipso_size,
+ 					      sizeof(u32),
+-					      GFP_KERNEL);
++					      GFP_KERNEL | __GFP_NOWARN);
+ 	if (doi_def->map.std->lvl.cipso == NULL) {
+ 		ret_val = -ENOMEM;
+ 		goto add_std_failure;
+@@ -263,7 +263,7 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
+ 		doi_def->map.std->cat.local = kcalloc(
+ 					      doi_def->map.std->cat.local_size,
+ 					      sizeof(u32),
+-					      GFP_KERNEL);
++					      GFP_KERNEL | __GFP_NOWARN);
+ 		if (doi_def->map.std->cat.local == NULL) {
+ 			ret_val = -ENOMEM;
+ 			goto add_std_failure;
+@@ -271,7 +271,7 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
+ 		doi_def->map.std->cat.cipso = kcalloc(
+ 					      doi_def->map.std->cat.cipso_size,
+ 					      sizeof(u32),
+-					      GFP_KERNEL);
++					      GFP_KERNEL | __GFP_NOWARN);
+ 		if (doi_def->map.std->cat.cipso == NULL) {
+ 			ret_val = -ENOMEM;
+ 			goto add_std_failure;
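The netlabel sizes being allocated above (lvl.local_size and friends) arrive
from userspace over netlink, so a huge request is an expected, recoverable
failure; __GFP_NOWARN keeps kcalloc() from dumping an allocation-failure
backtrace while the -ENOMEM path still reports the error. A rough userspace
analogue of the contract, with a hypothetical helper name:

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* User-controlled counts should fail quietly with -ENOMEM rather than
 * treating the oversized request as an internal kernel error. */
static int alloc_level_map(uint32_t **out, size_t user_count)
{
	*out = calloc(user_count, sizeof(uint32_t));
	return *out ? 0 : -ENOMEM;
}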
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 52b7f6490d248..2e732ea2b82fc 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -493,7 +493,7 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ 		goto err;
+ 	}
+ 
+-	if (!size || len != ALIGN(size, 4) + hdrlen)
++	if (!size || size & 3 || len != size + hdrlen)
+ 		goto err;
+ 
+ 	if (cb->dst_port != QRTR_PORT_CTRL && cb->type != QRTR_TYPE_DATA &&
+@@ -506,8 +506,12 @@ int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len)
+ 
+ 	if (cb->type == QRTR_TYPE_NEW_SERVER) {
+ 		/* Remote node endpoint can bridge other distant nodes */
+-		const struct qrtr_ctrl_pkt *pkt = data + hdrlen;
++		const struct qrtr_ctrl_pkt *pkt;
+ 
++		if (size < sizeof(*pkt))
++			goto err;
++
++		pkt = data + hdrlen;
+ 		qrtr_node_assign(node, le32_to_cpu(pkt->server.node));
+ 	}
+ 
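The qrtr check above is tightened in two ways: the payload size must itself be
4-byte aligned (the old ALIGN() form accepted a misaligned size whose padded
value happened to match len), and a NEW_SERVER control packet must be large
enough to contain the structure that is dereferenced right after. A userspace
model of the validation, with a stand-in packet layout:

#include <stdbool.h>
#include <stddef.h>

struct ctrl_pkt { unsigned int cmd, server_node; }; /* stand-in layout */

static bool payload_ok(size_t size, size_t len, size_t hdrlen,
		       bool is_new_server)
{
	if (!size || (size & 3) || len != size + hdrlen)
		return false; /* exact, aligned payload size only */
	if (is_new_server && size < sizeof(struct ctrl_pkt))
		return false; /* enough bytes before parsing the packet */
	return true;
}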
+diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
+index b79a7e27bb315..38a3a8394bbda 100644
+--- a/net/sched/sch_cbq.c
++++ b/net/sched/sch_cbq.c
+@@ -1614,7 +1614,7 @@ cbq_change_class(struct Qdisc *sch, u32 classid, u32 parentid, struct nlattr **t
+ 	err = tcf_block_get(&cl->block, &cl->filter_list, sch, extack);
+ 	if (err) {
+ 		kfree(cl);
+-		return err;
++		goto failure;
+ 	}
+ 
+ 	if (tca[TCA_RATE]) {
+diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
+index 8827987ba9034..11bc6bf35f845 100644
+--- a/net/sched/sch_htb.c
++++ b/net/sched/sch_htb.c
+@@ -125,6 +125,7 @@ struct htb_class {
+ 		struct htb_class_leaf {
+ 			int		deficit[TC_HTB_MAXDEPTH];
+ 			struct Qdisc	*q;
++			struct netdev_queue *offload_queue;
+ 		} leaf;
+ 		struct htb_class_inner {
+ 			struct htb_prio clprio[TC_HTB_NUMPRIO];
+@@ -1376,24 +1377,47 @@ htb_graft_helper(struct netdev_queue *dev_queue, struct Qdisc *new_q)
+ 	return old_q;
+ }
+ 
+-static void htb_offload_move_qdisc(struct Qdisc *sch, u16 qid_old, u16 qid_new)
++static struct netdev_queue *htb_offload_get_queue(struct htb_class *cl)
++{
++	struct netdev_queue *queue;
++
++	queue = cl->leaf.offload_queue;
++	if (!(cl->leaf.q->flags & TCQ_F_BUILTIN))
++		WARN_ON(cl->leaf.q->dev_queue != queue);
++
++	return queue;
++}
++
++static void htb_offload_move_qdisc(struct Qdisc *sch, struct htb_class *cl_old,
++				   struct htb_class *cl_new, bool destroying)
+ {
+ 	struct netdev_queue *queue_old, *queue_new;
+ 	struct net_device *dev = qdisc_dev(sch);
+-	struct Qdisc *qdisc;
+ 
+-	queue_old = netdev_get_tx_queue(dev, qid_old);
+-	queue_new = netdev_get_tx_queue(dev, qid_new);
++	queue_old = htb_offload_get_queue(cl_old);
++	queue_new = htb_offload_get_queue(cl_new);
+ 
+-	if (dev->flags & IFF_UP)
+-		dev_deactivate(dev);
+-	qdisc = dev_graft_qdisc(queue_old, NULL);
+-	qdisc->dev_queue = queue_new;
+-	qdisc = dev_graft_qdisc(queue_new, qdisc);
+-	if (dev->flags & IFF_UP)
+-		dev_activate(dev);
++	if (!destroying) {
++		struct Qdisc *qdisc;
+ 
+-	WARN_ON(!(qdisc->flags & TCQ_F_BUILTIN));
++		if (dev->flags & IFF_UP)
++			dev_deactivate(dev);
++		qdisc = dev_graft_qdisc(queue_old, NULL);
++		WARN_ON(qdisc != cl_old->leaf.q);
++	}
++
++	if (!(cl_old->leaf.q->flags & TCQ_F_BUILTIN))
++		cl_old->leaf.q->dev_queue = queue_new;
++	cl_old->leaf.offload_queue = queue_new;
++
++	if (!destroying) {
++		struct Qdisc *qdisc;
++
++		qdisc = dev_graft_qdisc(queue_new, cl_old->leaf.q);
++		if (dev->flags & IFF_UP)
++			dev_activate(dev);
++		WARN_ON(!(qdisc->flags & TCQ_F_BUILTIN));
++	}
+ }
+ 
+ static int htb_graft(struct Qdisc *sch, unsigned long arg, struct Qdisc *new,
+@@ -1407,10 +1431,8 @@ static int htb_graft(struct Qdisc *sch, unsigned long arg, struct Qdisc *new,
+ 	if (cl->level)
+ 		return -EINVAL;
+ 
+-	if (q->offload) {
+-		dev_queue = new->dev_queue;
+-		WARN_ON(dev_queue != cl->leaf.q->dev_queue);
+-	}
++	if (q->offload)
++		dev_queue = htb_offload_get_queue(cl);
+ 
+ 	if (!new) {
+ 		new = qdisc_create_dflt(dev_queue, &pfifo_qdisc_ops,
+@@ -1479,6 +1501,8 @@ static void htb_parent_to_leaf(struct Qdisc *sch, struct htb_class *cl,
+ 	parent->ctokens = parent->cbuffer;
+ 	parent->t_c = ktime_get_ns();
+ 	parent->cmode = HTB_CAN_SEND;
++	if (q->offload)
++		parent->leaf.offload_queue = cl->leaf.offload_queue;
+ }
+ 
+ static void htb_parent_to_leaf_offload(struct Qdisc *sch,
+@@ -1499,6 +1523,7 @@ static int htb_destroy_class_offload(struct Qdisc *sch, struct htb_class *cl,
+ 				     struct netlink_ext_ack *extack)
+ {
+ 	struct tc_htb_qopt_offload offload_opt;
++	struct netdev_queue *dev_queue;
+ 	struct Qdisc *q = cl->leaf.q;
+ 	struct Qdisc *old = NULL;
+ 	int err;
+@@ -1507,16 +1532,15 @@ static int htb_destroy_class_offload(struct Qdisc *sch, struct htb_class *cl,
+ 		return -EINVAL;
+ 
+ 	WARN_ON(!q);
+-	if (!destroying) {
+-		/* On destroy of HTB, two cases are possible:
+-		 * 1. q is a normal qdisc, but q->dev_queue has noop qdisc.
+-		 * 2. q is a noop qdisc (for nodes that were inner),
+-		 *    q->dev_queue is noop_netdev_queue.
++	dev_queue = htb_offload_get_queue(cl);
++	old = htb_graft_helper(dev_queue, NULL);
++	if (destroying)
++		/* Before HTB is destroyed, the kernel grafts noop_qdisc to
++		 * all queues.
+ 		 */
+-		old = htb_graft_helper(q->dev_queue, NULL);
+-		WARN_ON(!old);
++		WARN_ON(!(old->flags & TCQ_F_BUILTIN));
++	else
+ 		WARN_ON(old != q);
+-	}
+ 
+ 	if (cl->parent) {
+ 		cl->parent->bstats_bias.bytes += q->bstats.bytes;
+@@ -1535,18 +1559,17 @@ static int htb_destroy_class_offload(struct Qdisc *sch, struct htb_class *cl,
+ 	if (!err || destroying)
+ 		qdisc_put(old);
+ 	else
+-		htb_graft_helper(q->dev_queue, old);
++		htb_graft_helper(dev_queue, old);
+ 
+ 	if (last_child)
+ 		return err;
+ 
+-	if (!err && offload_opt.moved_qid != 0) {
+-		if (destroying)
+-			q->dev_queue = netdev_get_tx_queue(qdisc_dev(sch),
+-							   offload_opt.qid);
+-		else
+-			htb_offload_move_qdisc(sch, offload_opt.moved_qid,
+-					       offload_opt.qid);
++	if (!err && offload_opt.classid != TC_H_MIN(cl->common.classid)) {
++		u32 classid = TC_H_MAJ(sch->handle) |
++			      TC_H_MIN(offload_opt.classid);
++		struct htb_class *moved_cl = htb_find(classid, sch);
++
++		htb_offload_move_qdisc(sch, moved_cl, cl, destroying);
+ 	}
+ 
+ 	return err;
+@@ -1669,9 +1692,11 @@ static int htb_delete(struct Qdisc *sch, unsigned long arg,
+ 	}
+ 
+ 	if (last_child) {
+-		struct netdev_queue *dev_queue;
++		struct netdev_queue *dev_queue = sch->dev_queue;
++
++		if (q->offload)
++			dev_queue = htb_offload_get_queue(cl);
+ 
+-		dev_queue = q->offload ? cl->leaf.q->dev_queue : sch->dev_queue;
+ 		new_q = qdisc_create_dflt(dev_queue, &pfifo_qdisc_ops,
+ 					  cl->parent->common.classid,
+ 					  NULL);
+@@ -1843,7 +1868,7 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
+ 			}
+ 			dev_queue = netdev_get_tx_queue(dev, offload_opt.qid);
+ 		} else { /* First child. */
+-			dev_queue = parent->leaf.q->dev_queue;
++			dev_queue = htb_offload_get_queue(parent);
+ 			old_q = htb_graft_helper(dev_queue, NULL);
+ 			WARN_ON(old_q != parent->leaf.q);
+ 			offload_opt = (struct tc_htb_qopt_offload) {
+@@ -1900,6 +1925,8 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
+ 
+ 		/* leaf (we) needs elementary qdisc */
+ 		cl->leaf.q = new_q ? new_q : &noop_qdisc;
++		if (q->offload)
++			cl->leaf.offload_queue = dev_queue;
+ 
+ 		cl->parent = parent;
+ 
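The thread running through the sch_htb changes: with offload, a class's
attached qdisc can temporarily be the built-in noop qdisc, whose dev_queue
does not point at the TX queue the class owns, so cl->leaf.q->dev_queue
stopped being a reliable lookup. Each leaf now records its queue directly, and
the accessor only cross-checks agreement when a real qdisc is attached. A
simplified model of that invariant, with stand-in types:

struct txq { int id; };

struct qdisc {
	unsigned int flags;
	struct txq *dev_queue;
};

#define F_BUILTIN 0x1 /* stand-in for TCQ_F_BUILTIN */

struct leaf {
	struct qdisc *q;
	struct txq *offload_queue; /* owned queue, valid even when q is noop */
};

/* Return the recorded queue; the kernel additionally does
 * WARN_ON(q->dev_queue != offload_queue) when q is not a builtin. */
static struct txq *leaf_queue(const struct leaf *l)
{
	return l->offload_queue;
}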
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index 0de918cb3d90d..a47e290b0668e 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -1629,6 +1629,21 @@ u32 svc_max_payload(const struct svc_rqst *rqstp)
+ }
+ EXPORT_SYMBOL_GPL(svc_max_payload);
+ 
++/**
++ * svc_proc_name - Return RPC procedure name in string form
++ * @rqstp: svc_rqst to operate on
++ *
++ * Return value:
++ *   Pointer to a NUL-terminated string
++ */
++const char *svc_proc_name(const struct svc_rqst *rqstp)
++{
++	if (rqstp && rqstp->rq_procinfo)
++		return rqstp->rq_procinfo->pc_name;
++	return "unknown";
++}
++
++
+ /**
+  * svc_encode_result_payload - mark a range of bytes as a result payload
+  * @rqstp: svc_rqst to operate on
+diff --git a/samples/bpf/xdp_redirect_cpu_user.c b/samples/bpf/xdp_redirect_cpu_user.c
+index 5764116125237..c7d7d35867302 100644
+--- a/samples/bpf/xdp_redirect_cpu_user.c
++++ b/samples/bpf/xdp_redirect_cpu_user.c
+@@ -831,7 +831,7 @@ int main(int argc, char **argv)
+ 	memset(cpu, 0, n_cpus * sizeof(int));
+ 
+ 	/* Parse command line args */
+-	while ((opt = getopt_long(argc, argv, "hSd:s:p:q:c:xzFf:e:r:m:",
++	while ((opt = getopt_long(argc, argv, "hSd:s:p:q:c:xzFf:e:r:m:n",
+ 				  long_options, &longindex)) != -1) {
+ 		switch (opt) {
+ 		case 'd':
+diff --git a/samples/pktgen/pktgen_sample04_many_flows.sh b/samples/pktgen/pktgen_sample04_many_flows.sh
+index ddce876635aa0..507c1143eb96b 100755
+--- a/samples/pktgen/pktgen_sample04_many_flows.sh
++++ b/samples/pktgen/pktgen_sample04_many_flows.sh
+@@ -13,13 +13,15 @@ root_check_run_with_sudo "$@"
+ # Parameter parsing via include
+ source ${basedir}/parameters.sh
+ # Set some default params, if they didn't get set
+-[ -z "$DEST_IP" ]   && DEST_IP="198.18.0.42"
++if [ -z "$DEST_IP" ]; then
++    [ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"
++fi
+ [ -z "$DST_MAC" ]   && DST_MAC="90:e2:ba:ff:ff:ff"
+ [ -z "$CLONE_SKB" ] && CLONE_SKB="0"
+ [ -z "$COUNT" ]     && COUNT="0" # Zero means indefinitely
+ if [ -n "$DEST_IP" ]; then
+-    validate_addr $DEST_IP
+-    read -r DST_MIN DST_MAX <<< $(parse_addr $DEST_IP)
++    validate_addr${IP6} $DEST_IP
++    read -r DST_MIN DST_MAX <<< $(parse_addr${IP6} $DEST_IP)
+ fi
+ if [ -n "$DST_PORT" ]; then
+     read -r UDP_DST_MIN UDP_DST_MAX <<< $(parse_ports $DST_PORT)
+@@ -62,8 +64,8 @@ for ((thread = $F_THREAD; thread <= $L_THREAD; thread++)); do
+ 
+     # Single destination
+     pg_set $dev "dst_mac $DST_MAC"
+-    pg_set $dev "dst_min $DST_MIN"
+-    pg_set $dev "dst_max $DST_MAX"
++    pg_set $dev "dst${IP6}_min $DST_MIN"
++    pg_set $dev "dst${IP6}_max $DST_MAX"
+ 
+     if [ -n "$DST_PORT" ]; then
+ 	# Single destination port or random port range
+diff --git a/samples/pktgen/pktgen_sample05_flow_per_thread.sh b/samples/pktgen/pktgen_sample05_flow_per_thread.sh
+index 4a65fe2fcee92..160143ebcdd08 100755
+--- a/samples/pktgen/pktgen_sample05_flow_per_thread.sh
++++ b/samples/pktgen/pktgen_sample05_flow_per_thread.sh
+@@ -17,14 +17,16 @@ root_check_run_with_sudo "$@"
+ # Parameter parsing via include
+ source ${basedir}/parameters.sh
+ # Set some default params, if they didn't get set
+-[ -z "$DEST_IP" ]   && DEST_IP="198.18.0.42"
++if [ -z "$DEST_IP" ]; then
++    [ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"
++fi
+ [ -z "$DST_MAC" ]   && DST_MAC="90:e2:ba:ff:ff:ff"
+ [ -z "$CLONE_SKB" ] && CLONE_SKB="0"
+ [ -z "$BURST" ]     && BURST=32
+ [ -z "$COUNT" ]     && COUNT="0" # Zero means indefinitely
+ if [ -n "$DEST_IP" ]; then
+-    validate_addr $DEST_IP
+-    read -r DST_MIN DST_MAX <<< $(parse_addr $DEST_IP)
++    validate_addr${IP6} $DEST_IP
++    read -r DST_MIN DST_MAX <<< $(parse_addr${IP6} $DEST_IP)
+ fi
+ if [ -n "$DST_PORT" ]; then
+     read -r UDP_DST_MIN UDP_DST_MAX <<< $(parse_ports $DST_PORT)
+@@ -52,8 +54,8 @@ for ((thread = $F_THREAD; thread <= $L_THREAD; thread++)); do
+ 
+     # Single destination
+     pg_set $dev "dst_mac $DST_MAC"
+-    pg_set $dev "dst_min $DST_MIN"
+-    pg_set $dev "dst_max $DST_MAX"
++    pg_set $dev "dst${IP6}_min $DST_MIN"
++    pg_set $dev "dst${IP6}_max $DST_MAX"
+ 
+     if [ -n "$DST_PORT" ]; then
+ 	# Single destination port or random port range
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index 12e9250c1bec6..9e72edb8d31af 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -6,7 +6,6 @@ config IMA
+ 	select SECURITYFS
+ 	select CRYPTO
+ 	select CRYPTO_HMAC
+-	select CRYPTO_MD5
+ 	select CRYPTO_SHA1
+ 	select CRYPTO_HASH_INFO
+ 	select TCG_TPM if HAS_IOMEM && !UML
+diff --git a/security/integrity/ima/ima_mok.c b/security/integrity/ima/ima_mok.c
+index 1e5c019161738..95cc31525c573 100644
+--- a/security/integrity/ima/ima_mok.c
++++ b/security/integrity/ima/ima_mok.c
+@@ -21,7 +21,7 @@ struct key *ima_blacklist_keyring;
+ /*
+  * Allocate the IMA blacklist keyring
+  */
+-__init int ima_mok_init(void)
++static __init int ima_mok_init(void)
+ {
+ 	struct key_restriction *restriction;
+ 
+diff --git a/sound/soc/codecs/rt5682-i2c.c b/sound/soc/codecs/rt5682-i2c.c
+index cd964e023d96e..b9d5d7a0975b3 100644
+--- a/sound/soc/codecs/rt5682-i2c.c
++++ b/sound/soc/codecs/rt5682-i2c.c
+@@ -117,6 +117,13 @@ static struct snd_soc_dai_driver rt5682_dai[] = {
+ 	},
+ };
+ 
++static void rt5682_i2c_disable_regulators(void *data)
++{
++	struct rt5682_priv *rt5682 = data;
++
++	regulator_bulk_disable(ARRAY_SIZE(rt5682->supplies), rt5682->supplies);
++}
++
+ static int rt5682_i2c_probe(struct i2c_client *i2c,
+ 		const struct i2c_device_id *id)
+ {
+@@ -157,6 +164,11 @@ static int rt5682_i2c_probe(struct i2c_client *i2c,
+ 		return ret;
+ 	}
+ 
++	ret = devm_add_action_or_reset(&i2c->dev, rt5682_i2c_disable_regulators,
++				       rt5682);
++	if (ret)
++		return ret;
++
+ 	ret = regulator_bulk_enable(ARRAY_SIZE(rt5682->supplies),
+ 				    rt5682->supplies);
+ 	if (ret) {
+@@ -280,6 +292,13 @@ static void rt5682_i2c_shutdown(struct i2c_client *client)
+ 	rt5682_reset(rt5682);
+ }
+ 
++static int rt5682_i2c_remove(struct i2c_client *client)
++{
++	rt5682_i2c_shutdown(client);
++
++	return 0;
++}
++
+ static const struct of_device_id rt5682_of_match[] = {
+ 	{.compatible = "realtek,rt5682i"},
+ 	{},
+@@ -306,6 +325,7 @@ static struct i2c_driver rt5682_i2c_driver = {
+ 		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
+ 	},
+ 	.probe = rt5682_i2c_probe,
++	.remove = rt5682_i2c_remove,
+ 	.shutdown = rt5682_i2c_shutdown,
+ 	.id_table = rt5682_i2c_id,
+ };
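devm_add_action_or_reset() above ties the regulator disable to the device's
managed-cleanup list: every later error return from probe, and the eventual
unbind, now disables the supplies without an explicit remove-path call, and if
registering the action itself fails the callback is invoked immediately. The
canonical shape of the pattern is sketched below (kernel-style fragment;
my_priv and my_disable_regulators are hypothetical, and this ordering
registers the action only after a successful enable):

static void my_disable_regulators(void *data)
{
	struct my_priv *priv = data;

	regulator_bulk_disable(ARRAY_SIZE(priv->supplies), priv->supplies);
}

/* inside a probe function: */
ret = regulator_bulk_enable(ARRAY_SIZE(priv->supplies), priv->supplies);
if (ret)
	return ret;

ret = devm_add_action_or_reset(dev, my_disable_regulators, priv);
if (ret)
	return ret; /* the action already ran, so the enable is undone */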
+diff --git a/sound/soc/codecs/wcd9335.c b/sound/soc/codecs/wcd9335.c
+index 86c92e03ea5d4..d885ced34f606 100644
+--- a/sound/soc/codecs/wcd9335.c
++++ b/sound/soc/codecs/wcd9335.c
+@@ -4076,6 +4076,16 @@ static int wcd9335_setup_irqs(struct wcd9335_codec *wcd)
+ 	return ret;
+ }
+ 
++static void wcd9335_teardown_irqs(struct wcd9335_codec *wcd)
++{
++	int i;
++
++	/* disable interrupts on all slave ports */
++	for (i = 0; i < WCD9335_SLIM_NUM_PORT_REG; i++)
++		regmap_write(wcd->if_regmap, WCD9335_SLIM_PGD_PORT_INT_EN0 + i,
++			     0x00);
++}
++
+ static void wcd9335_cdc_sido_ccl_enable(struct wcd9335_codec *wcd,
+ 					bool ccl_flag)
+ {
+@@ -4844,6 +4854,7 @@ static void wcd9335_codec_init(struct snd_soc_component *component)
+ static int wcd9335_codec_probe(struct snd_soc_component *component)
+ {
+ 	struct wcd9335_codec *wcd = dev_get_drvdata(component->dev);
++	int ret;
+ 	int i;
+ 
+ 	snd_soc_component_init_regmap(component, wcd->regmap);
+@@ -4861,7 +4872,15 @@ static int wcd9335_codec_probe(struct snd_soc_component *component)
+ 	for (i = 0; i < NUM_CODEC_DAIS; i++)
+ 		INIT_LIST_HEAD(&wcd->dai[i].slim_ch_list);
+ 
+-	return wcd9335_setup_irqs(wcd);
++	ret = wcd9335_setup_irqs(wcd);
++	if (ret)
++		goto free_clsh_ctrl;
++
++	return 0;
++
++free_clsh_ctrl:
++	wcd_clsh_ctrl_free(wcd->clsh_ctrl);
++	return ret;
+ }
+ 
+ static void wcd9335_codec_remove(struct snd_soc_component *comp)
+@@ -4869,7 +4888,7 @@ static void wcd9335_codec_remove(struct snd_soc_component *comp)
+ 	struct wcd9335_codec *wcd = dev_get_drvdata(comp->dev);
+ 
+ 	wcd_clsh_ctrl_free(wcd->clsh_ctrl);
+-	free_irq(regmap_irq_get_virq(wcd->irq_data, WCD9335_IRQ_SLIMBUS), wcd);
++	wcd9335_teardown_irqs(wcd);
+ }
+ 
+ static int wcd9335_codec_set_sysclk(struct snd_soc_component *comp,
+diff --git a/sound/soc/fsl/fsl_rpmsg.c b/sound/soc/fsl/fsl_rpmsg.c
+index ea5c973e2e846..d60f4dac6c1b3 100644
+--- a/sound/soc/fsl/fsl_rpmsg.c
++++ b/sound/soc/fsl/fsl_rpmsg.c
+@@ -165,25 +165,25 @@ static int fsl_rpmsg_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* Get the optional clocks */
+-	rpmsg->ipg = devm_clk_get(&pdev->dev, "ipg");
++	rpmsg->ipg = devm_clk_get_optional(&pdev->dev, "ipg");
+ 	if (IS_ERR(rpmsg->ipg))
+-		rpmsg->ipg = NULL;
++		return PTR_ERR(rpmsg->ipg);
+ 
+-	rpmsg->mclk = devm_clk_get(&pdev->dev, "mclk");
++	rpmsg->mclk = devm_clk_get_optional(&pdev->dev, "mclk");
+ 	if (IS_ERR(rpmsg->mclk))
+-		rpmsg->mclk = NULL;
++		return PTR_ERR(rpmsg->mclk);
+ 
+-	rpmsg->dma = devm_clk_get(&pdev->dev, "dma");
++	rpmsg->dma = devm_clk_get_optional(&pdev->dev, "dma");
+ 	if (IS_ERR(rpmsg->dma))
+-		rpmsg->dma = NULL;
++		return PTR_ERR(rpmsg->dma);
+ 
+-	rpmsg->pll8k = devm_clk_get(&pdev->dev, "pll8k");
++	rpmsg->pll8k = devm_clk_get_optional(&pdev->dev, "pll8k");
+ 	if (IS_ERR(rpmsg->pll8k))
+-		rpmsg->pll8k = NULL;
++		return PTR_ERR(rpmsg->pll8k);
+ 
+-	rpmsg->pll11k = devm_clk_get(&pdev->dev, "pll11k");
++	rpmsg->pll11k = devm_clk_get_optional(&pdev->dev, "pll11k");
+ 	if (IS_ERR(rpmsg->pll11k))
+-		rpmsg->pll11k = NULL;
++		return PTR_ERR(rpmsg->pll11k);
+ 
+ 	platform_set_drvdata(pdev, rpmsg);
+ 	pm_runtime_enable(&pdev->dev);
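The fsl_rpmsg conversion above fixes error conflation: devm_clk_get() returns
an error both for a clock that simply is not described in the device tree and
for real failures such as -EPROBE_DEFER, and the old `= NULL` fallback
silently ate both. devm_clk_get_optional() returns NULL when the clock is
genuinely absent and an ERR_PTR only for real errors, so the latter can be
propagated. A kernel-style sketch of the contract (not the driver's exact
code):

priv->mclk = devm_clk_get_optional(dev, "mclk");
if (IS_ERR(priv->mclk))
	return PTR_ERR(priv->mclk); /* real failure, incl. -EPROBE_DEFER */

/* priv->mclk may be NULL past this point; the clk API accepts that:
 * clk_prepare_enable(NULL) is a successful no-op. */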
+diff --git a/sound/soc/intel/boards/kbl_da7219_max98927.c b/sound/soc/intel/boards/kbl_da7219_max98927.c
+index 4b7b4a044f81c..255f8df09d84c 100644
+--- a/sound/soc/intel/boards/kbl_da7219_max98927.c
++++ b/sound/soc/intel/boards/kbl_da7219_max98927.c
+@@ -199,7 +199,7 @@ static int kabylake_ssp0_hw_params(struct snd_pcm_substream *substream,
+ 		}
+ 		if (!strcmp(codec_dai->component->name, MAX98373_DEV0_NAME)) {
+ 			ret = snd_soc_dai_set_tdm_slot(codec_dai,
+-							0x03, 3, 8, 24);
++							0x30, 3, 8, 16);
+ 			if (ret < 0) {
+ 				dev_err(runtime->dev,
+ 						"DEV0 TDM slot err:%d\n", ret);
+@@ -208,10 +208,10 @@ static int kabylake_ssp0_hw_params(struct snd_pcm_substream *substream,
+ 		}
+ 		if (!strcmp(codec_dai->component->name, MAX98373_DEV1_NAME)) {
+ 			ret = snd_soc_dai_set_tdm_slot(codec_dai,
+-							0x0C, 3, 8, 24);
++							0xC0, 3, 8, 16);
+ 			if (ret < 0) {
+ 				dev_err(runtime->dev,
+-						"DEV0 TDM slot err:%d\n", ret);
++						"DEV1 TDM slot err:%d\n", ret);
+ 				return ret;
+ 			}
+ 		}
+@@ -311,24 +311,6 @@ static int kabylake_ssp_fixup(struct snd_soc_pcm_runtime *rtd,
+ 	 * The above 2 loops are mutually exclusive based on the stream direction,
+ 	 * thus rtd_dpcm variable will never be overwritten
+ 	 */
+-	/*
+-	 * Topology for kblda7219m98373 & kblmax98373 supports only S24_LE,
+-	 * where as kblda7219m98927 & kblmax98927 supports S16_LE by default.
+-	 * Skipping the port wise FE and BE configuration for kblda7219m98373 &
+-	 * kblmax98373 as the topology (FE & BE) supports S24_LE only.
+-	 */
+-
+-	if (!strcmp(rtd->card->name, "kblda7219m98373") ||
+-		!strcmp(rtd->card->name, "kblmax98373")) {
+-		/* The ADSP will convert the FE rate to 48k, stereo */
+-		rate->min = rate->max = 48000;
+-		chan->min = chan->max = DUAL_CHANNEL;
+-
+-		/* set SSP to 24 bit */
+-		snd_mask_none(fmt);
+-		snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S24_LE);
+-		return 0;
+-	}
+ 
+ 	/*
+ 	 * The ADSP will convert the FE rate to 48k, stereo, 24 bit
+@@ -479,31 +461,20 @@ static struct snd_pcm_hw_constraint_list constraints_channels_quad = {
+ static int kbl_fe_startup(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_soc_pcm_runtime *soc_rt = asoc_substream_to_rtd(substream);
+ 
+ 	/*
+ 	 * On this platform for PCM device we support,
+ 	 * 48Khz
+ 	 * stereo
++	 * 16 bit audio
+ 	 */
+ 
+ 	runtime->hw.channels_max = DUAL_CHANNEL;
+ 	snd_pcm_hw_constraint_list(runtime, 0, SNDRV_PCM_HW_PARAM_CHANNELS,
+ 					   &constraints_channels);
+-	/*
+-	 * Setup S24_LE (32 bit container and 24 bit valid data) for
+-	 * kblda7219m98373 & kblmax98373. For kblda7219m98927 &
+-	 * kblmax98927 keeping it as 16/16 due to topology FW dependency.
+-	 */
+-	if (!strcmp(soc_rt->card->name, "kblda7219m98373") ||
+-		!strcmp(soc_rt->card->name, "kblmax98373")) {
+-		runtime->hw.formats = SNDRV_PCM_FMTBIT_S24_LE;
+-		snd_pcm_hw_constraint_msbits(runtime, 0, 32, 24);
+-
+-	} else {
+-		runtime->hw.formats = SNDRV_PCM_FMTBIT_S16_LE;
+-		snd_pcm_hw_constraint_msbits(runtime, 0, 16, 16);
+-	}
++
++	runtime->hw.formats = SNDRV_PCM_FMTBIT_S16_LE;
++	snd_pcm_hw_constraint_msbits(runtime, 0, 16, 16);
+ 
+ 	snd_pcm_hw_constraint_list(runtime, 0,
+ 				SNDRV_PCM_HW_PARAM_RATE, &constraints_rates);
+@@ -536,23 +507,11 @@ static int kabylake_dmic_fixup(struct snd_soc_pcm_runtime *rtd,
+ static int kabylake_dmic_startup(struct snd_pcm_substream *substream)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	struct snd_soc_pcm_runtime *soc_rt = asoc_substream_to_rtd(substream);
+ 
+ 	runtime->hw.channels_min = runtime->hw.channels_max = QUAD_CHANNEL;
+ 	snd_pcm_hw_constraint_list(runtime, 0, SNDRV_PCM_HW_PARAM_CHANNELS,
+ 			&constraints_channels_quad);
+ 
+-	/*
+-	 * Topology for kblda7219m98373 & kblmax98373 supports only S24_LE.
+-	 * The DMIC also configured for S24_LE. Forcing the DMIC format to
+-	 * S24_LE due to the topology FW dependency.
+-	 */
+-	if (!strcmp(soc_rt->card->name, "kblda7219m98373") ||
+-		!strcmp(soc_rt->card->name, "kblmax98373")) {
+-		runtime->hw.formats = SNDRV_PCM_FMTBIT_S24_LE;
+-		snd_pcm_hw_constraint_msbits(runtime, 0, 32, 24);
+-	}
+-
+ 	return snd_pcm_hw_constraint_list(substream->runtime, 0,
+ 			SNDRV_PCM_HW_PARAM_RATE, &constraints_rates);
+ }
+diff --git a/sound/soc/intel/common/soc-acpi-intel-cml-match.c b/sound/soc/intel/common/soc-acpi-intel-cml-match.c
+index 7f6ef82299698..4d7a181fb8e6b 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-cml-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-cml-match.c
+@@ -75,7 +75,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_cml_machines[] = {
+ 	},
+ 	{
+ 		.id = "DLGS7219",
+-		.drv_name = "cml_da7219_max98357a",
++		.drv_name = "cml_da7219_mx98357a",
+ 		.machine_quirk = snd_soc_acpi_codec_list,
+ 		.quirk_data = &max98390_spk_codecs,
+ 		.sof_fw_filename = "sof-cml.ri",
+diff --git a/sound/soc/intel/common/soc-acpi-intel-kbl-match.c b/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
+index ba5ff468c265a..741bf2f9e081f 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-kbl-match.c
+@@ -87,7 +87,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_kbl_machines[] = {
+ 	},
+ 	{
+ 		.id = "DLGS7219",
+-		.drv_name = "kbl_da7219_max98357a",
++		.drv_name = "kbl_da7219_mx98357a",
+ 		.fw_filename = "intel/dsp_fw_kbl.bin",
+ 		.machine_quirk = snd_soc_acpi_codec_list,
+ 		.quirk_data = &kbl_7219_98357_codecs,
+diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
+index c0fdab39e7c28..09037d555ec49 100644
+--- a/sound/soc/intel/skylake/skl-topology.c
++++ b/sound/soc/intel/skylake/skl-topology.c
+@@ -113,7 +113,7 @@ static int is_skl_dsp_widget_type(struct snd_soc_dapm_widget *w,
+ 
+ static void skl_dump_mconfig(struct skl_dev *skl, struct skl_module_cfg *mcfg)
+ {
+-	struct skl_module_iface *iface = &mcfg->module->formats[0];
++	struct skl_module_iface *iface = &mcfg->module->formats[mcfg->fmt_idx];
+ 
+ 	dev_dbg(skl->dev, "Dumping config\n");
+ 	dev_dbg(skl->dev, "Input Format:\n");
+@@ -195,8 +195,8 @@ static void skl_tplg_update_params_fixup(struct skl_module_cfg *m_cfg,
+ 	struct skl_module_fmt *in_fmt, *out_fmt;
+ 
+ 	/* Fixups will be applied to pin 0 only */
+-	in_fmt = &m_cfg->module->formats[0].inputs[0].fmt;
+-	out_fmt = &m_cfg->module->formats[0].outputs[0].fmt;
++	in_fmt = &m_cfg->module->formats[m_cfg->fmt_idx].inputs[0].fmt;
++	out_fmt = &m_cfg->module->formats[m_cfg->fmt_idx].outputs[0].fmt;
+ 
+ 	if (params->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ 		if (is_fe) {
+@@ -239,9 +239,9 @@ static void skl_tplg_update_buffer_size(struct skl_dev *skl,
+ 	/* Since fixups is applied to pin 0 only, ibs, obs needs
+ 	 * change for pin 0 only
+ 	 */
+-	res = &mcfg->module->resources[0];
+-	in_fmt = &mcfg->module->formats[0].inputs[0].fmt;
+-	out_fmt = &mcfg->module->formats[0].outputs[0].fmt;
++	res = &mcfg->module->resources[mcfg->res_idx];
++	in_fmt = &mcfg->module->formats[mcfg->fmt_idx].inputs[0].fmt;
++	out_fmt = &mcfg->module->formats[mcfg->fmt_idx].outputs[0].fmt;
+ 
+ 	if (mcfg->m_type == SKL_MODULE_TYPE_SRCINT)
+ 		multiplier = 5;
+@@ -1463,12 +1463,6 @@ static int skl_tplg_tlv_control_set(struct snd_kcontrol *kcontrol,
+ 	struct skl_dev *skl = get_skl_ctx(w->dapm->dev);
+ 
+ 	if (ac->params) {
+-		/*
+-		 * Widget data is expected to be stripped of T and L
+-		 */
+-		size -= 2 * sizeof(unsigned int);
+-		data += 2;
+-
+ 		if (size > ac->max)
+ 			return -EINVAL;
+ 		ac->size = size;
+@@ -1637,11 +1631,12 @@ int skl_tplg_update_pipe_params(struct device *dev,
+ 			struct skl_module_cfg *mconfig,
+ 			struct skl_pipe_params *params)
+ {
+-	struct skl_module_res *res = &mconfig->module->resources[0];
++	struct skl_module_res *res;
+ 	struct skl_dev *skl = get_skl_ctx(dev);
+ 	struct skl_module_fmt *format = NULL;
+ 	u8 cfg_idx = mconfig->pipe->cur_config_idx;
+ 
++	res = &mconfig->module->resources[mconfig->res_idx];
+ 	skl_tplg_fill_dma_id(mconfig, params);
+ 	mconfig->fmt_idx = mconfig->mod_cfg[cfg_idx].fmt_idx;
+ 	mconfig->res_idx = mconfig->mod_cfg[cfg_idx].res_idx;
+@@ -1650,9 +1645,9 @@ int skl_tplg_update_pipe_params(struct device *dev,
+ 		return 0;
+ 
+ 	if (params->stream == SNDRV_PCM_STREAM_PLAYBACK)
+-		format = &mconfig->module->formats[0].inputs[0].fmt;
++		format = &mconfig->module->formats[mconfig->fmt_idx].inputs[0].fmt;
+ 	else
+-		format = &mconfig->module->formats[0].outputs[0].fmt;
++		format = &mconfig->module->formats[mconfig->fmt_idx].outputs[0].fmt;
+ 
+ 	/* set the hw_params */
+ 	format->s_freq = params->s_freq;
+diff --git a/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c b/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
+index c4a598cbbdaa1..14e77df06b011 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
++++ b/sound/soc/mediatek/mt8183/mt8183-afe-pcm.c
+@@ -1119,25 +1119,26 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->regmap = syscon_node_to_regmap(dev->parent->of_node);
+ 	if (IS_ERR(afe->regmap)) {
+ 		dev_err(dev, "could not get regmap from parent\n");
+-		return PTR_ERR(afe->regmap);
++		ret = PTR_ERR(afe->regmap);
++		goto err_pm_disable;
+ 	}
+ 	ret = regmap_attach_dev(dev, afe->regmap, &mt8183_afe_regmap_config);
+ 	if (ret) {
+ 		dev_warn(dev, "regmap_attach_dev fail, ret %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	rstc = devm_reset_control_get(dev, "audiosys");
+ 	if (IS_ERR(rstc)) {
+ 		ret = PTR_ERR(rstc);
+ 		dev_err(dev, "could not get audiosys reset:%d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	ret = reset_control_reset(rstc);
+ 	if (ret) {
+ 		dev_err(dev, "failed to trigger audio reset:%d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* enable clock for regcache get default value from hw */
+@@ -1147,7 +1148,7 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	ret = regmap_reinit_cache(afe->regmap, &mt8183_afe_regmap_config);
+ 	if (ret) {
+ 		dev_err(dev, "regmap_reinit_cache fail, ret %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	pm_runtime_put_sync(&pdev->dev);
+@@ -1160,8 +1161,10 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->memif_size = MT8183_MEMIF_NUM;
+ 	afe->memif = devm_kcalloc(dev, afe->memif_size, sizeof(*afe->memif),
+ 				  GFP_KERNEL);
+-	if (!afe->memif)
+-		return -ENOMEM;
++	if (!afe->memif) {
++		ret = -ENOMEM;
++		goto err_pm_disable;
++	}
+ 
+ 	for (i = 0; i < afe->memif_size; i++) {
+ 		afe->memif[i].data = &memif_data[i];
+@@ -1178,22 +1181,26 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->irqs_size = MT8183_IRQ_NUM;
+ 	afe->irqs = devm_kcalloc(dev, afe->irqs_size, sizeof(*afe->irqs),
+ 				 GFP_KERNEL);
+-	if (!afe->irqs)
+-		return -ENOMEM;
++	if (!afe->irqs) {
++		ret = -ENOMEM;
++		goto err_pm_disable;
++	}
+ 
+ 	for (i = 0; i < afe->irqs_size; i++)
+ 		afe->irqs[i].irq_data = &irq_data[i];
+ 
+ 	/* request irq */
+ 	irq_id = platform_get_irq(pdev, 0);
+-	if (irq_id < 0)
+-		return irq_id;
++	if (irq_id < 0) {
++		ret = irq_id;
++		goto err_pm_disable;
++	}
+ 
+ 	ret = devm_request_irq(dev, irq_id, mt8183_afe_irq_handler,
+ 			       IRQF_TRIGGER_NONE, "asys-isr", (void *)afe);
+ 	if (ret) {
+ 		dev_err(dev, "could not request_irq for asys-isr\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* init sub_dais */
+@@ -1204,7 +1211,7 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_warn(afe->dev, "dai register i %d fail, ret %d\n",
+ 				 i, ret);
+-			return ret;
++			goto err_pm_disable;
+ 		}
+ 	}
+ 
+@@ -1213,7 +1220,7 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_warn(afe->dev, "mtk_afe_combine_sub_dai fail, ret %d\n",
+ 			 ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	afe->mtk_afe_hardware = &mt8183_afe_hardware;
+@@ -1229,7 +1236,7 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 					      NULL, 0);
+ 	if (ret) {
+ 		dev_warn(dev, "err_platform\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	ret = devm_snd_soc_register_component(afe->dev,
+@@ -1238,10 +1245,14 @@ static int mt8183_afe_pcm_dev_probe(struct platform_device *pdev)
+ 					      afe->num_dai_drivers);
+ 	if (ret) {
+ 		dev_warn(dev, "err_dai_component\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	return ret;
++
++err_pm_disable:
++	pm_runtime_disable(&pdev->dev);
++	return ret;
+ }
+ 
+ static int mt8183_afe_pcm_dev_remove(struct platform_device *pdev)
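Both MediaTek AFE probe functions had the same leak: pm_runtime_enable() runs
early, and every subsequent `return ret;` on failure left runtime PM enabled,
which trips "Unbalanced pm_runtime_enable!" on the next probe attempt. The fix
funnels all failures through one unwind label, the usual kernel single-exit
shape, sketched here with hypothetical step_one/step_two helpers:

static int my_probe(struct platform_device *pdev)
{
	int ret;

	pm_runtime_enable(&pdev->dev);

	ret = step_one(pdev);
	if (ret)
		goto err_pm_disable;

	ret = step_two(pdev);
	if (ret)
		goto err_pm_disable;

	return 0;

err_pm_disable:
	pm_runtime_disable(&pdev->dev); /* undo the early enable */
	return ret;
}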
+diff --git a/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c b/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c
+index 7a1724f5ff4c6..31c280339c503 100644
+--- a/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c
++++ b/sound/soc/mediatek/mt8192/mt8192-afe-pcm.c
+@@ -2229,12 +2229,13 @@ static int mt8192_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->regmap = syscon_node_to_regmap(dev->parent->of_node);
+ 	if (IS_ERR(afe->regmap)) {
+ 		dev_err(dev, "could not get regmap from parent\n");
+-		return PTR_ERR(afe->regmap);
++		ret = PTR_ERR(afe->regmap);
++		goto err_pm_disable;
+ 	}
+ 	ret = regmap_attach_dev(dev, afe->regmap, &mt8192_afe_regmap_config);
+ 	if (ret) {
+ 		dev_warn(dev, "regmap_attach_dev fail, ret %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* enable clock for regcache get default value from hw */
+@@ -2244,7 +2245,7 @@ static int mt8192_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	ret = regmap_reinit_cache(afe->regmap, &mt8192_afe_regmap_config);
+ 	if (ret) {
+ 		dev_err(dev, "regmap_reinit_cache fail, ret %d\n", ret);
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	pm_runtime_put_sync(&pdev->dev);
+@@ -2257,8 +2258,10 @@ static int mt8192_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->memif_size = MT8192_MEMIF_NUM;
+ 	afe->memif = devm_kcalloc(dev, afe->memif_size, sizeof(*afe->memif),
+ 				  GFP_KERNEL);
+-	if (!afe->memif)
+-		return -ENOMEM;
++	if (!afe->memif) {
++		ret = -ENOMEM;
++		goto err_pm_disable;
++	}
+ 
+ 	for (i = 0; i < afe->memif_size; i++) {
+ 		afe->memif[i].data = &memif_data[i];
+@@ -2272,22 +2275,26 @@ static int mt8192_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	afe->irqs_size = MT8192_IRQ_NUM;
+ 	afe->irqs = devm_kcalloc(dev, afe->irqs_size, sizeof(*afe->irqs),
+ 				 GFP_KERNEL);
+-	if (!afe->irqs)
+-		return -ENOMEM;
++	if (!afe->irqs) {
++		ret = -ENOMEM;
++		goto err_pm_disable;
++	}
+ 
+ 	for (i = 0; i < afe->irqs_size; i++)
+ 		afe->irqs[i].irq_data = &irq_data[i];
+ 
+ 	/* request irq */
+ 	irq_id = platform_get_irq(pdev, 0);
+-	if (irq_id < 0)
+-		return irq_id;
++	if (irq_id < 0) {
++		ret = irq_id;
++		goto err_pm_disable;
++	}
+ 
+ 	ret = devm_request_irq(dev, irq_id, mt8192_afe_irq_handler,
+ 			       IRQF_TRIGGER_NONE, "asys-isr", (void *)afe);
+ 	if (ret) {
+ 		dev_err(dev, "could not request_irq for Afe_ISR_Handle\n");
+-		return ret;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* init sub_dais */
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index da4846c9856af..a51bf4c698799 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -778,6 +778,8 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ 		kernel_syms_destroy(&dd);
+ 	}
+ 
++	btf__free(btf);
++
+ 	return 0;
+ }
+ 
+@@ -1897,8 +1899,8 @@ static char *profile_target_name(int tgt_fd)
+ 	struct bpf_prog_info_linear *info_linear;
+ 	struct bpf_func_info *func_info;
+ 	const struct btf_type *t;
++	struct btf *btf = NULL;
+ 	char *name = NULL;
+-	struct btf *btf;
+ 
+ 	info_linear = bpf_program__get_prog_info_linear(
+ 		tgt_fd, 1UL << BPF_PROG_INFO_FUNC_INFO);
+@@ -1922,6 +1924,7 @@ static char *profile_target_name(int tgt_fd)
+ 	}
+ 	name = strdup(btf__name_by_offset(btf, t->name_off));
+ out:
++	btf__free(btf);
+ 	free(info_linear);
+ 	return name;
+ }
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index ec6d85a817449..353f06cf210e9 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -3222,7 +3222,7 @@ union bpf_attr {
+  * long bpf_sk_select_reuseport(struct sk_reuseport_md *reuse, struct bpf_map *map, void *key, u64 flags)
+  *	Description
+  *		Select a **SO_REUSEPORT** socket from a
+- *		**BPF_MAP_TYPE_REUSEPORT_ARRAY** *map*.
++ *		**BPF_MAP_TYPE_REUSEPORT_SOCKARRAY** *map*.
+  *		It checks the selected socket is matching the incoming
+  *		request in the socket buffer.
+  *	Return
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index e43e1896cb4be..0d9d8ed6512b0 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -4,8 +4,9 @@
+ RM ?= rm
+ srctree = $(abs_srctree)
+ 
++VERSION_SCRIPT := libbpf.map
+ LIBBPF_VERSION := $(shell \
+-	grep -oE '^LIBBPF_([0-9.]+)' libbpf.map | \
++	grep -oE '^LIBBPF_([0-9.]+)' $(VERSION_SCRIPT) | \
+ 	sort -rV | head -n1 | cut -d'_' -f2)
+ LIBBPF_MAJOR_VERSION := $(firstword $(subst ., ,$(LIBBPF_VERSION)))
+ 
+@@ -110,7 +111,6 @@ SHARED_OBJDIR	:= $(OUTPUT)sharedobjs/
+ STATIC_OBJDIR	:= $(OUTPUT)staticobjs/
+ BPF_IN_SHARED	:= $(SHARED_OBJDIR)libbpf-in.o
+ BPF_IN_STATIC	:= $(STATIC_OBJDIR)libbpf-in.o
+-VERSION_SCRIPT	:= libbpf.map
+ BPF_HELPER_DEFS	:= $(OUTPUT)bpf_helper_defs.h
+ 
+ LIB_TARGET	:= $(addprefix $(OUTPUT),$(LIB_TARGET))
+@@ -163,10 +163,10 @@ $(BPF_HELPER_DEFS): $(srctree)/tools/include/uapi/linux/bpf.h
+ 
+ $(OUTPUT)libbpf.so: $(OUTPUT)libbpf.so.$(LIBBPF_VERSION)
+ 
+-$(OUTPUT)libbpf.so.$(LIBBPF_VERSION): $(BPF_IN_SHARED)
++$(OUTPUT)libbpf.so.$(LIBBPF_VERSION): $(BPF_IN_SHARED) $(VERSION_SCRIPT)
+ 	$(QUIET_LINK)$(CC) $(LDFLAGS) \
+ 		--shared -Wl,-soname,libbpf.so.$(LIBBPF_MAJOR_VERSION) \
+-		-Wl,--version-script=$(VERSION_SCRIPT) $^ -lelf -lz -o $@
++		-Wl,--version-script=$(VERSION_SCRIPT) $< -lelf -lz -o $@
+ 	@ln -sf $(@F) $(OUTPUT)libbpf.so
+ 	@ln -sf $(@F) $(OUTPUT)libbpf.so.$(LIBBPF_MAJOR_VERSION)
+ 
+@@ -181,7 +181,7 @@ $(OUTPUT)libbpf.pc:
+ 
+ check: check_abi
+ 
+-check_abi: $(OUTPUT)libbpf.so
++check_abi: $(OUTPUT)libbpf.so $(VERSION_SCRIPT)
+ 	@if [ "$(GLOBAL_SYM_COUNT)" != "$(VERSIONED_SYM_COUNT)" ]; then	 \
+ 		echo "Warning: Num of global symbols in $(BPF_IN_SHARED)"	 \
+ 		     "($(GLOBAL_SYM_COUNT)) does NOT match with num of"	 \
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index c41d9b2b59ace..f6ebda75b0306 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -4409,6 +4409,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map)
+ {
+ 	struct bpf_create_map_attr create_attr;
+ 	struct bpf_map_def *def = &map->def;
++	int err = 0;
+ 
+ 	memset(&create_attr, 0, sizeof(create_attr));
+ 
+@@ -4451,8 +4452,6 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map)
+ 
+ 	if (bpf_map_type__is_map_in_map(def->type)) {
+ 		if (map->inner_map) {
+-			int err;
+-
+ 			err = bpf_object__create_map(obj, map->inner_map);
+ 			if (err) {
+ 				pr_warn("map '%s': failed to create inner map: %d\n",
+@@ -4469,8 +4468,8 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map)
+ 	if (map->fd < 0 && (create_attr.btf_key_type_id ||
+ 			    create_attr.btf_value_type_id)) {
+ 		char *cp, errmsg[STRERR_BUFSIZE];
+-		int err = -errno;
+ 
++		err = -errno;
+ 		cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
+ 		pr_warn("Error in bpf_create_map_xattr(%s):%s(%d). Retrying without BTF.\n",
+ 			map->name, cp, err);
+@@ -4482,15 +4481,14 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map)
+ 		map->fd = bpf_create_map_xattr(&create_attr);
+ 	}
+ 
+-	if (map->fd < 0)
+-		return -errno;
++	err = map->fd < 0 ? -errno : 0;
+ 
+ 	if (bpf_map_type__is_map_in_map(def->type) && map->inner_map) {
+ 		bpf_map__destroy(map->inner_map);
+ 		zfree(&map->inner_map);
+ 	}
+ 
+-	return 0;
++	return err;
+ }
+ 
+ static int init_map_slots(struct bpf_map *map)
+@@ -7365,8 +7363,10 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
+ 	kconfig = OPTS_GET(opts, kconfig, NULL);
+ 	if (kconfig) {
+ 		obj->kconfig = strdup(kconfig);
+-		if (!obj->kconfig)
+-			return ERR_PTR(-ENOMEM);
++		if (!obj->kconfig) {
++			err = -ENOMEM;
++			goto out;
++		}
+ 	}
+ 
+ 	err = bpf_object__elf_init(obj);
+diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
+index cdecda1ddd36e..17a9844e4fbf8 100644
+--- a/tools/perf/util/bpf-event.c
++++ b/tools/perf/util/bpf-event.c
+@@ -296,7 +296,7 @@ static int perf_event__synthesize_one_bpf_prog(struct perf_session *session,
+ 
+ out:
+ 	free(info_linear);
+-	free(btf);
++	btf__free(btf);
+ 	return err ? -1 : 0;
+ }
+ 
+@@ -486,7 +486,7 @@ static void perf_env__add_bpf_info(struct perf_env *env, u32 id)
+ 	perf_env__fetch_btf(env, btf_id, btf);
+ 
+ out:
+-	free(btf);
++	btf__free(btf);
+ 	close(fd);
+ }
+ 
+diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
+index 5ed674a2f55e8..a14f0098b343d 100644
+--- a/tools/perf/util/bpf_counter.c
++++ b/tools/perf/util/bpf_counter.c
+@@ -74,8 +74,8 @@ static char *bpf_target_prog_name(int tgt_fd)
+ 	struct bpf_prog_info_linear *info_linear;
+ 	struct bpf_func_info *func_info;
+ 	const struct btf_type *t;
++	struct btf *btf = NULL;
+ 	char *name = NULL;
+-	struct btf *btf;
+ 
+ 	info_linear = bpf_program__get_prog_info_linear(
+ 		tgt_fd, 1UL << BPF_PROG_INFO_FUNC_INFO);
+@@ -99,6 +99,7 @@ static char *bpf_target_prog_name(int tgt_fd)
+ 	}
+ 	name = strdup(btf__name_by_offset(btf, t->name_off));
+ out:
++	btf__free(btf);
+ 	free(info_linear);
+ 	return name;
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
+index 0457ae32b2702..5b7dd3227b785 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf.c
+@@ -4384,6 +4384,7 @@ skip:
+ 	fprintf(stderr, "OK");
+ 
+ done:
++	btf__free(btf);
+ 	free(func_info);
+ 	bpf_object__close(obj);
+ }
+diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c b/tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c
+index 54380c5e10692..aa96b604b2b31 100644
+--- a/tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c
++++ b/tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c
+@@ -122,7 +122,7 @@ static int dump_tcp_sock(struct seq_file *seq, struct tcp_sock *tp,
+ 	}
+ 
+ 	BPF_SEQ_PRINTF(seq, "%4d: %08X:%04X %08X:%04X ",
+-		       seq_num, src, srcp, destp, destp);
++		       seq_num, src, srcp, dest, destp);
+ 	BPF_SEQ_PRINTF(seq, "%02X %08X:%08X %02X:%08lX %08X %5u %8d %lu %d ",
+ 		       state,
+ 		       tp->write_seq - tp->snd_una, rx_queue,
+diff --git a/tools/testing/selftests/bpf/progs/test_core_autosize.c b/tools/testing/selftests/bpf/progs/test_core_autosize.c
+index 44f5aa2e8956f..9a7829c5e4a72 100644
+--- a/tools/testing/selftests/bpf/progs/test_core_autosize.c
++++ b/tools/testing/selftests/bpf/progs/test_core_autosize.c
+@@ -125,6 +125,16 @@ int handle_downsize(void *ctx)
+ 	return 0;
+ }
+ 
++#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
++#define bpf_core_read_int bpf_core_read
++#else
++#define bpf_core_read_int(dst, sz, src) ({ \
++	/* Prevent "subtraction from stack pointer prohibited" */ \
++	volatile long __off = sizeof(*dst) - (sz); \
++	bpf_core_read((char *)(dst) + __off, sz, src); \
++})
++#endif
++
+ SEC("raw_tp/sys_enter")
+ int handle_probed(void *ctx)
+ {
+@@ -132,23 +142,23 @@ int handle_probed(void *ctx)
+ 	__u64 tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->ptr), &in->ptr);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->ptr), &in->ptr);
+ 	ptr_probed = tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->val1), &in->val1);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->val1), &in->val1);
+ 	val1_probed = tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->val2), &in->val2);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->val2), &in->val2);
+ 	val2_probed = tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->val3), &in->val3);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->val3), &in->val3);
+ 	val3_probed = tmp;
+ 
+ 	tmp = 0;
+-	bpf_core_read(&tmp, bpf_core_field_size(in->val4), &in->val4);
++	bpf_core_read_int(&tmp, bpf_core_field_size(in->val4), &in->val4);
+ 	val4_probed = tmp;
+ 
+ 	return 0;
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index 51adc42b2b40e..aa38dc4a5e85f 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -747,8 +747,8 @@ static void test_sockmap(unsigned int tasks, void *data)
+ 	udp = socket(AF_INET, SOCK_DGRAM, 0);
+ 	i = 0;
+ 	err = bpf_map_update_elem(fd, &i, &udp, BPF_ANY);
+-	if (!err) {
+-		printf("Failed socket SOCK_DGRAM allowed '%i:%i'\n",
++	if (err) {
++		printf("Failed socket update SOCK_DGRAM '%i:%i'\n",
+ 		       i, udp);
+ 		goto out_sockmap;
+ 	}


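For context on the test_core_autosize.c hunk above: reading an sz-byte field into a wider scratch variable only works byte-for-byte on little-endian; on big-endian the bytes must land at the top of the destination or the value comes out shifted. A standalone sketch of that idiom follows (the function name and the main() harness are illustrative, not part of the kernel patch):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Copy an sz-byte field into a wider integer, endian-correctly. */
static void read_narrow(void *dst, size_t dst_sz, const void *src, size_t sz)
{
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
	memcpy(dst, src, sz);				/* low bytes come first */
#else
	memcpy((char *)dst + (dst_sz - sz), src, sz);	/* align to the top end */
#endif
}

int main(void)
{
	uint16_t field = 0x1234;
	uint64_t tmp = 0;

	read_narrow(&tmp, sizeof(tmp), &field, sizeof(field));
	printf("0x%llx\n", (unsigned long long)tmp);	/* 0x1234 either way */
	return 0;
}

The bpf_core_read_int macro in the patch computes the same offset as sizeof(*dst) - sz, stashed in a volatile temporary to avoid the verifier's stack-pointer-arithmetic restriction noted in its comment.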

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-09-16 11:02 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-09-16 11:02 UTC (permalink / raw
  To: gentoo-commits

commit:     527c78593c55fa24f36571a4d13286fc2c1a947e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 16 11:02:15 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Sep 16 11:02:15 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=527c7859

Linux patch 5.13.18

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 1017_linux-5.13.18.patch | 75 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/1017_linux-5.13.18.patch b/1017_linux-5.13.18.patch
new file mode 100644
index 0000000..f9aefb6
--- /dev/null
+++ b/1017_linux-5.13.18.patch
@@ -0,0 +1,75 @@
+diff --git a/Makefile b/Makefile
+index c79a2c70a22ba..ddbd64b92a723 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+diff --git a/include/linux/time64.h b/include/linux/time64.h
+index 81b9686a20799..5117cb5b56561 100644
+--- a/include/linux/time64.h
++++ b/include/linux/time64.h
+@@ -25,9 +25,7 @@ struct itimerspec64 {
+ #define TIME64_MIN			(-TIME64_MAX - 1)
+ 
+ #define KTIME_MAX			((s64)~((u64)1 << 63))
+-#define KTIME_MIN			(-KTIME_MAX - 1)
+ #define KTIME_SEC_MAX			(KTIME_MAX / NSEC_PER_SEC)
+-#define KTIME_SEC_MIN			(KTIME_MIN / NSEC_PER_SEC)
+ 
+ /*
+  * Limits for settimeofday():
+@@ -126,13 +124,10 @@ static inline bool timespec64_valid_settod(const struct timespec64 *ts)
+  */
+ static inline s64 timespec64_to_ns(const struct timespec64 *ts)
+ {
+-	/* Prevent multiplication overflow / underflow */
+-	if (ts->tv_sec >= KTIME_SEC_MAX)
++	/* Prevent multiplication overflow */
++	if ((unsigned long long)ts->tv_sec >= KTIME_SEC_MAX)
+ 		return KTIME_MAX;
+ 
+-	if (ts->tv_sec <= KTIME_SEC_MIN)
+-		return KTIME_MIN;
+-
+ 	return ((s64) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
+ }
+ 
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index a9f8d25220b1a..aa52fc85dbcbf 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -1346,6 +1346,8 @@ void set_process_cpu_timer(struct task_struct *tsk, unsigned int clkid,
+ 			}
+ 		}
+ 
++		if (!*newval)
++			return;
+ 		*newval += now;
+ 	}
+ 
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index bf1bb08b94aad..8d455c2321545 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1740,14 +1740,6 @@ int hci_dev_do_close(struct hci_dev *hdev)
+ 	hci_request_cancel_all(hdev);
+ 	hci_req_sync_lock(hdev);
+ 
+-	if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) &&
+-	    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
+-	    test_bit(HCI_UP, &hdev->flags)) {
+-		/* Execute vendor specific shutdown routine */
+-		if (hdev->shutdown)
+-			hdev->shutdown(hdev);
+-	}
+-
+ 	if (!test_and_clear_bit(HCI_UP, &hdev->flags)) {
+ 		cancel_delayed_work_sync(&hdev->cmd_timer);
+ 		hci_req_sync_unlock(hdev);


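The include/linux/time64.h hunk above replaces the two-sided KTIME_SEC_MIN/KTIME_SEC_MAX clamp with a single unsigned comparison: casting the signed seconds value to unsigned makes any negative input compare above KTIME_SEC_MAX, so one test saturates both out-of-range directions to KTIME_MAX. A minimal standalone illustration of that trick (constants restated here for a self-contained build; the main() harness is not from the patch):

#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC	1000000000LL
#define KTIME_MAX	((int64_t)~((uint64_t)1 << 63))
#define KTIME_SEC_MAX	(KTIME_MAX / NSEC_PER_SEC)

static int64_t secs_to_ns_saturating(int64_t sec, int64_t nsec)
{
	/* one unsigned compare catches both huge and negative seconds */
	if ((uint64_t)sec >= KTIME_SEC_MAX)
		return KTIME_MAX;
	return sec * NSEC_PER_SEC + nsec;
}

int main(void)
{
	printf("%lld\n", (long long)secs_to_ns_saturating(1, 500));	/* 1000000500 */
	printf("%lld\n", (long long)secs_to_ns_saturating(-1, 0));	/* KTIME_MAX */
	return 0;
}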

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-09-17 12:42 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-09-17 12:42 UTC (permalink / raw
  To: gentoo-commits

commit:     38ba0813f3ef0cb59369255cd684992da6ded9ee
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 17 12:42:25 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep 17 12:42:25 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=38ba0813

Update CPU Opt Patch 2021-09-14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5010_enable-cpu-optimizations-universal.patch | 158 +++++++++++++-------------
 1 file changed, 76 insertions(+), 82 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index e37528f..b9e8ebb 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,7 +1,7 @@
-From 4af44fbc97bc51eb742f0d6555bde23cf580d4e3 Mon Sep 17 00:00:00 2001
+From d31d2b0747ab55e65c2366d51149a0ec9896155e Mon Sep 17 00:00:00 2001
 From: graysky <graysky@archlinux.us>
-Date: Sun, 6 Jun 2021 09:41:36 -0400
-Subject: [PATCH] more uarches for kernel 5.8+
+Date: Tue, 14 Sep 2021 15:35:34 -0400
+Subject: [PATCH] more uarches for kernel 5.15+
 MIME-Version: 1.0
 Content-Type: text/plain; charset=UTF-8
 Content-Transfer-Encoding: 8bit
@@ -86,7 +86,7 @@ See the following experimental evidence supporting this statement:
 https://github.com/graysky2/kernel_gcc_patch
 
 REQUIREMENTS
-linux version >=5.8
+linux version >=5.15
 gcc version >=9.0 or clang version >=9.0
 
 ACKNOWLEDGMENTS
@@ -102,17 +102,17 @@ REFERENCES
 Signed-off-by: graysky <graysky@archlinux.us>
 ---
  arch/x86/Kconfig.cpu            | 332 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile               |  47 ++++-
+ arch/x86/Makefile               |  40 +++-
  arch/x86/include/asm/vermagic.h |  66 +++++++
- 3 files changed, 428 insertions(+), 17 deletions(-)
+ 3 files changed, 424 insertions(+), 14 deletions(-)
 
 diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 814fe0d349b0..8acf6519d279 100644
+index 814fe0d349b0..61f0d7757499 100644
 --- a/arch/x86/Kconfig.cpu
 +++ b/arch/x86/Kconfig.cpu
 @@ -157,7 +157,7 @@ config MPENTIUM4
- 
- 
+
+
  config MK6
 -	bool "K6/K6-II/K6-III"
 +	bool "AMD K6/K6-II/K6-III"
@@ -121,7 +121,7 @@ index 814fe0d349b0..8acf6519d279 100644
  	  Select this for an AMD K6-family processor.  Enables use of
 @@ -165,7 +165,7 @@ config MK6
  	  flags to GCC.
- 
+
  config MK7
 -	bool "Athlon/Duron/K7"
 +	bool "AMD Athlon/Duron/K7"
@@ -130,7 +130,7 @@ index 814fe0d349b0..8acf6519d279 100644
  	  Select this for an AMD Athlon K7-family processor.  Enables use of
 @@ -173,12 +173,98 @@ config MK7
  	  flags to GCC.
- 
+
  config MK8
 -	bool "Opteron/Athlon64/Hammer/K8"
 +	bool "AMD Opteron/Athlon64/Hammer/K8"
@@ -138,7 +138,7 @@ index 814fe0d349b0..8acf6519d279 100644
  	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
  	  Enables use of some extended instructions, and passes appropriate
  	  optimization flags to GCC.
- 
+
 +config MK8SSE3
 +	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
 +	help
@@ -230,17 +230,17 @@ index 814fe0d349b0..8acf6519d279 100644
  	depends on X86_32
 @@ -270,7 +356,7 @@ config MPSC
  	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
- 
+
  config MCORE2
 -	bool "Core 2/newer Xeon"
 +	bool "Intel Core 2"
  	help
- 
+
  	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
 @@ -278,6 +364,8 @@ config MCORE2
  	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
  	  (not a typo)
- 
+
 +	  Enables -march=core2
 +
  config MATOM
@@ -249,7 +249,7 @@ index 814fe0d349b0..8acf6519d279 100644
 @@ -287,6 +375,182 @@ config MATOM
  	  accordingly optimized code. Use a recent GCC with specific Atom
  	  support in order to fully benefit from selecting this option.
- 
+
 +config MNEHALEM
 +	bool "Intel Nehalem"
 +	select X86_P6_NOP
@@ -432,7 +432,7 @@ index 814fe0d349b0..8acf6519d279 100644
 @@ -294,6 +558,50 @@ config GENERIC_CPU
  	  Generic x86-64 CPU.
  	  Run equally well on all x86-64 CPUs.
- 
+
 +config GENERIC_CPU2
 +	bool "Generic-x86-64-v2"
 +	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
@@ -478,7 +478,7 @@ index 814fe0d349b0..8acf6519d279 100644
 +	  Enables -march=native
 +
  endchoice
- 
+
  config X86_GENERIC
 @@ -318,7 +626,7 @@ config X86_INTERNODE_CACHE_SHIFT
  config X86_L1_CACHE_SHIFT
@@ -488,19 +488,19 @@ index 814fe0d349b0..8acf6519d279 100644
 +	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 || GENERIC_CPU3 || GENERIC_CPU4
  	default "4" if MELAN || M486SX || M486 || MGEODEGX1
  	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
- 
+
 @@ -336,11 +644,11 @@ config X86_ALIGNMENT_16
- 
+
  config X86_INTEL_USERCOPY
  	def_bool y
 -	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
 +	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL
- 
+
  config X86_USE_PPRO_CHECKSUM
  	def_bool y
 -	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
 +	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
- 
+
  config X86_USE_3DNOW
  	def_bool y
 @@ -360,26 +668,26 @@ config X86_USE_3DNOW
@@ -509,24 +509,24 @@ index 814fe0d349b0..8acf6519d279 100644
  	depends on X86_64
 -	depends on (MCORE2 || MPENTIUM4 || MPSC)
 +	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL)
- 
+
  config X86_TSC
  	def_bool y
 -	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
 +	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
- 
+
  config X86_CMPXCHG64
  	def_bool y
 -	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
 +	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
- 
+
  # this should be set for all -march=.. options where the compiler
  # generates cmov.
  config X86_CMOV
  	def_bool y
 -	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
 +	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
- 
+
  config X86_MINIMUM_CPU_FAMILY
  	int
  	default "64" if X86_64
@@ -534,65 +534,58 @@ index 814fe0d349b0..8acf6519d279 100644
 +	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
  	default "5" if X86_32 && X86_CMPXCHG64
  	default "4"
- 
+
 diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 78faf9c7e3ae..ee0cd507af8b 100644
+index 7488cfbbd2f6..01876b6fb8e1 100644
 --- a/arch/x86/Makefile
 +++ b/arch/x86/Makefile
-@@ -114,11 +114,48 @@ else
+@@ -119,8 +119,44 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
-         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
--
--        cflags-$(CONFIG_MCORE2) += \
--                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
--	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
--		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
-+        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3)
-+        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
-+        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
-+        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
-+        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
-+        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
-+        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
-+        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
-+        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
-+        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
-+        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
-+        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
-+        cflags-$(CONFIG_MZEN3) += $(call cc-option,-march=znver3)
-+
-+        cflags-$(CONFIG_MNATIVE_INTEL) += $(call cc-option,-march=native)
-+        cflags-$(CONFIG_MNATIVE_AMD) += $(call cc-option,-march=native)
-+        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell)
-+        cflags-$(CONFIG_MCORE2) += $(call cc-option,-march=core2)
-+        cflags-$(CONFIG_MNEHALEM) += $(call cc-option,-march=nehalem)
-+        cflags-$(CONFIG_MWESTMERE) += $(call cc-option,-march=westmere)
-+        cflags-$(CONFIG_MSILVERMONT) += $(call cc-option,-march=silvermont)
-+        cflags-$(CONFIG_MGOLDMONT) += $(call cc-option,-march=goldmont)
-+        cflags-$(CONFIG_MGOLDMONTPLUS) += $(call cc-option,-march=goldmont-plus)
-+        cflags-$(CONFIG_MSANDYBRIDGE) += $(call cc-option,-march=sandybridge)
-+        cflags-$(CONFIG_MIVYBRIDGE) += $(call cc-option,-march=ivybridge)
-+        cflags-$(CONFIG_MHASWELL) += $(call cc-option,-march=haswell)
-+        cflags-$(CONFIG_MBROADWELL) += $(call cc-option,-march=broadwell)
-+        cflags-$(CONFIG_MSKYLAKE) += $(call cc-option,-march=skylake)
-+        cflags-$(CONFIG_MSKYLAKEX) += $(call cc-option,-march=skylake-avx512)
-+        cflags-$(CONFIG_MCANNONLAKE) += $(call cc-option,-march=cannonlake)
-+        cflags-$(CONFIG_MICELAKE) += $(call cc-option,-march=icelake-client)
-+        cflags-$(CONFIG_MCASCADELAKE) += $(call cc-option,-march=cascadelake)
-+        cflags-$(CONFIG_MCOOPERLAKE) += $(call cc-option,-march=cooperlake)
-+        cflags-$(CONFIG_MTIGERLAKE) += $(call cc-option,-march=tigerlake)
-+        cflags-$(CONFIG_MSAPPHIRERAPIDS) += $(call cc-option,-march=sapphirerapids)
-+        cflags-$(CONFIG_MROCKETLAKE) += $(call cc-option,-march=rocketlake)
-+        cflags-$(CONFIG_MALDERLAKE) += $(call cc-option,-march=alderlake)
-+        cflags-$(CONFIG_GENERIC_CPU2) += $(call cc-option,-march=x86-64-v2)
-+        cflags-$(CONFIG_GENERIC_CPU3) += $(call cc-option,-march=x86-64-v3)
-+        cflags-$(CONFIG_GENERIC_CPU4) += $(call cc-option,-march=x86-64-v4)
-         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         cflags-$(CONFIG_MK8)		+= -march=k8
+         cflags-$(CONFIG_MPSC)		+= -march=nocona
+-        cflags-$(CONFIG_MCORE2)		+= -march=core2
+-        cflags-$(CONFIG_MATOM)		+= -march=atom
++        cflags-$(CONFIG_MK8SSE3)	+= -march=k8-sse3
++        cflags-$(CONFIG_MK10) 		+= -march=amdfam10
++        cflags-$(CONFIG_MBARCELONA) 	+= -march=barcelona
++        cflags-$(CONFIG_MBOBCAT) 	+= -march=btver1
++        cflags-$(CONFIG_MJAGUAR) 	+= -march=btver2
++        cflags-$(CONFIG_MBULLDOZER) 	+= -march=bdver1
++        cflags-$(CONFIG_MPILEDRIVER)	+= -march=bdver2
++        cflags-$(CONFIG_MSTEAMROLLER) 	+= -march=bdver3
++        cflags-$(CONFIG_MEXCAVATOR) 	+= -march=bdver4
++        cflags-$(CONFIG_MZEN) 		+= -march=znver1
++        cflags-$(CONFIG_MZEN2) 	+= -march=znver2
++        cflags-$(CONFIG_MZEN3) 	+= -march=znver3
++        cflags-$(CONFIG_MNATIVE_INTEL) += -march=native
++        cflags-$(CONFIG_MNATIVE_AMD) 	+= -march=native
++        cflags-$(CONFIG_MATOM) 	+= -march=bonnell
++        cflags-$(CONFIG_MCORE2) 	+= -march=core2
++        cflags-$(CONFIG_MNEHALEM) 	+= -march=nehalem
++        cflags-$(CONFIG_MWESTMERE) 	+= -march=westmere
++        cflags-$(CONFIG_MSILVERMONT) 	+= -march=silvermont
++        cflags-$(CONFIG_MGOLDMONT) 	+= -march=goldmont
++        cflags-$(CONFIG_MGOLDMONTPLUS) += -march=goldmont-plus
++        cflags-$(CONFIG_MSANDYBRIDGE) 	+= -march=sandybridge
++        cflags-$(CONFIG_MIVYBRIDGE) 	+= -march=ivybridge
++        cflags-$(CONFIG_MHASWELL) 	+= -march=haswell
++        cflags-$(CONFIG_MBROADWELL) 	+= -march=broadwell
++        cflags-$(CONFIG_MSKYLAKE) 	+= -march=skylake
++        cflags-$(CONFIG_MSKYLAKEX) 	+= -march=skylake-avx512
++        cflags-$(CONFIG_MCANNONLAKE) 	+= -march=cannonlake
++        cflags-$(CONFIG_MICELAKE) 	+= -march=icelake-client
++        cflags-$(CONFIG_MCASCADELAKE) 	+= -march=cascadelake
++        cflags-$(CONFIG_MCOOPERLAKE) 	+= -march=cooperlake
++        cflags-$(CONFIG_MTIGERLAKE) 	+= -march=tigerlake
++        cflags-$(CONFIG_MSAPPHIRERAPIDS) += -march=sapphirerapids
++        cflags-$(CONFIG_MROCKETLAKE) 	+= -march=rocketlake
++        cflags-$(CONFIG_MALDERLAKE) 	+= -march=alderlake
++        cflags-$(CONFIG_GENERIC_CPU2) 	+= -march=x86-64-v2
++        cflags-$(CONFIG_GENERIC_CPU3) 	+= -march=x86-64-v3
++        cflags-$(CONFIG_GENERIC_CPU4) 	+= -march=x86-64-v4
+         cflags-$(CONFIG_GENERIC_CPU)	+= -mtune=generic
          KBUILD_CFLAGS += $(cflags-y)
- 
+
 diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
 index 75884d2cdec3..4e6a08d4c7e5 100644
 --- a/arch/x86/include/asm/vermagic.h
@@ -677,6 +670,7 @@ index 75884d2cdec3..4e6a08d4c7e5 100644
  #elif defined CONFIG_MELAN
  #define MODULE_PROC_FAMILY "ELAN "
  #elif defined CONFIG_MCRUSOE
--- 
-2.31.1
+--
+2.33.0
+
 


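The GENERIC_CPU2/3/4 choices this patch adds correspond to the x86-64 psABI micro-architecture levels (x86-64-v2, v3, v4) that -march accepts on newer toolchains. As a rough userspace illustration, recent compilers can test those levels at run time; note that __builtin_cpu_supports() accepting level names is an assumption about the compiler (GCC 12 or newer), not something the kernel patch itself relies on:

#include <stdio.h>

int main(void)
{
	__builtin_cpu_init();
	if (__builtin_cpu_supports("x86-64-v4"))
		puts("meets x86-64-v4 (AVX-512 group)");
	else if (__builtin_cpu_supports("x86-64-v3"))
		puts("meets x86-64-v3 (AVX2, BMI2, FMA, ...)");
	else if (__builtin_cpu_supports("x86-64-v2"))
		puts("meets x86-64-v2 (SSE4.2, POPCNT, CX16, ...)");
	else
		puts("baseline x86-64 only");
	return 0;
}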

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-09-17 12:49 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-09-17 12:49 UTC (permalink / raw
  To: gentoo-commits

commit:     4c4b6ebf4892aace9e870f86902208e82bf1db9a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 17 12:49:09 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Sep 17 12:49:09 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4c4b6ebf

Add correct CPU OPT Patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 5010_enable-cpu-optimizations-universal.patch | 158 +++++++++++++-------------
 1 file changed, 82 insertions(+), 76 deletions(-)

diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index b9e8ebb..d437e1a 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,7 +1,7 @@
-From d31d2b0747ab55e65c2366d51149a0ec9896155e Mon Sep 17 00:00:00 2001
+From 4af44fbc97bc51eb742f0d6555bde23cf580d4e3 Mon Sep 17 00:00:00 2001
 From: graysky <graysky@archlinux.us>
-Date: Tue, 14 Sep 2021 15:35:34 -0400
-Subject: [PATCH] more uarches for kernel 5.15+
+Date: Sun, 6 Jun 2021 09:41:36 -0400
+Subject: [PATCH] more uarches for kernel 5.8-5.14
 MIME-Version: 1.0
 Content-Type: text/plain; charset=UTF-8
 Content-Transfer-Encoding: 8bit
@@ -86,7 +86,7 @@ See the following experimental evidence supporting this statement:
 https://github.com/graysky2/kernel_gcc_patch
 
 REQUIREMENTS
-linux version >=5.15
+linux version 5.8-5.14
 gcc version >=9.0 or clang version >=9.0
 
 ACKNOWLEDGMENTS
@@ -102,17 +102,17 @@ REFERENCES
 Signed-off-by: graysky <graysky@archlinux.us>
 ---
  arch/x86/Kconfig.cpu            | 332 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile               |  40 +++-
+ arch/x86/Makefile               |  47 ++++-
  arch/x86/include/asm/vermagic.h |  66 +++++++
- 3 files changed, 424 insertions(+), 14 deletions(-)
+ 3 files changed, 428 insertions(+), 17 deletions(-)
 
 diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 814fe0d349b0..61f0d7757499 100644
+index 814fe0d349b0..8acf6519d279 100644
 --- a/arch/x86/Kconfig.cpu
 +++ b/arch/x86/Kconfig.cpu
 @@ -157,7 +157,7 @@ config MPENTIUM4
-
-
+ 
+ 
  config MK6
 -	bool "K6/K6-II/K6-III"
 +	bool "AMD K6/K6-II/K6-III"
@@ -121,7 +121,7 @@ index 814fe0d349b0..61f0d7757499 100644
  	  Select this for an AMD K6-family processor.  Enables use of
 @@ -165,7 +165,7 @@ config MK6
  	  flags to GCC.
-
+ 
  config MK7
 -	bool "Athlon/Duron/K7"
 +	bool "AMD Athlon/Duron/K7"
@@ -130,7 +130,7 @@ index 814fe0d349b0..61f0d7757499 100644
  	  Select this for an AMD Athlon K7-family processor.  Enables use of
 @@ -173,12 +173,98 @@ config MK7
  	  flags to GCC.
-
+ 
  config MK8
 -	bool "Opteron/Athlon64/Hammer/K8"
 +	bool "AMD Opteron/Athlon64/Hammer/K8"
@@ -138,7 +138,7 @@ index 814fe0d349b0..61f0d7757499 100644
  	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
  	  Enables use of some extended instructions, and passes appropriate
  	  optimization flags to GCC.
-
+ 
 +config MK8SSE3
 +	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
 +	help
@@ -230,17 +230,17 @@ index 814fe0d349b0..61f0d7757499 100644
  	depends on X86_32
 @@ -270,7 +356,7 @@ config MPSC
  	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
-
+ 
  config MCORE2
 -	bool "Core 2/newer Xeon"
 +	bool "Intel Core 2"
  	help
-
+ 
  	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
 @@ -278,6 +364,8 @@ config MCORE2
  	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
  	  (not a typo)
-
+ 
 +	  Enables -march=core2
 +
  config MATOM
@@ -249,7 +249,7 @@ index 814fe0d349b0..61f0d7757499 100644
 @@ -287,6 +375,182 @@ config MATOM
  	  accordingly optimized code. Use a recent GCC with specific Atom
  	  support in order to fully benefit from selecting this option.
-
+ 
 +config MNEHALEM
 +	bool "Intel Nehalem"
 +	select X86_P6_NOP
@@ -432,7 +432,7 @@ index 814fe0d349b0..61f0d7757499 100644
 @@ -294,6 +558,50 @@ config GENERIC_CPU
  	  Generic x86-64 CPU.
  	  Run equally well on all x86-64 CPUs.
-
+ 
 +config GENERIC_CPU2
 +	bool "Generic-x86-64-v2"
 +	depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
@@ -478,7 +478,7 @@ index 814fe0d349b0..61f0d7757499 100644
 +	  Enables -march=native
 +
  endchoice
-
+ 
  config X86_GENERIC
 @@ -318,7 +626,7 @@ config X86_INTERNODE_CACHE_SHIFT
  config X86_L1_CACHE_SHIFT
@@ -488,19 +488,19 @@ index 814fe0d349b0..61f0d7757499 100644
 +	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 || GENERIC_CPU3 || GENERIC_CPU4
  	default "4" if MELAN || M486SX || M486 || MGEODEGX1
  	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-
+ 
 @@ -336,11 +644,11 @@ config X86_ALIGNMENT_16
-
+ 
  config X86_INTEL_USERCOPY
  	def_bool y
 -	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
 +	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL
-
+ 
  config X86_USE_PPRO_CHECKSUM
  	def_bool y
 -	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
 +	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
-
+ 
  config X86_USE_3DNOW
  	def_bool y
 @@ -360,26 +668,26 @@ config X86_USE_3DNOW
@@ -509,24 +509,24 @@ index 814fe0d349b0..61f0d7757499 100644
  	depends on X86_64
 -	depends on (MCORE2 || MPENTIUM4 || MPSC)
 +	depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL)
-
+ 
  config X86_TSC
  	def_bool y
 -	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
 +	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
-
+ 
  config X86_CMPXCHG64
  	def_bool y
 -	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
 +	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
-
+ 
  # this should be set for all -march=.. options where the compiler
  # generates cmov.
  config X86_CMOV
  	def_bool y
 -	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
 +	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
-
+ 
  config X86_MINIMUM_CPU_FAMILY
  	int
  	default "64" if X86_64
@@ -534,58 +534,65 @@ index 814fe0d349b0..61f0d7757499 100644
 +	default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 ||  MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
  	default "5" if X86_32 && X86_CMPXCHG64
  	default "4"
-
+ 
 diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 7488cfbbd2f6..01876b6fb8e1 100644
+index 78faf9c7e3ae..ee0cd507af8b 100644
 --- a/arch/x86/Makefile
 +++ b/arch/x86/Makefile
-@@ -119,8 +119,44 @@ else
+@@ -114,11 +114,48 @@ else
          # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
-         cflags-$(CONFIG_MK8)		+= -march=k8
-         cflags-$(CONFIG_MPSC)		+= -march=nocona
--        cflags-$(CONFIG_MCORE2)		+= -march=core2
--        cflags-$(CONFIG_MATOM)		+= -march=atom
-+        cflags-$(CONFIG_MK8SSE3)	+= -march=k8-sse3
-+        cflags-$(CONFIG_MK10) 		+= -march=amdfam10
-+        cflags-$(CONFIG_MBARCELONA) 	+= -march=barcelona
-+        cflags-$(CONFIG_MBOBCAT) 	+= -march=btver1
-+        cflags-$(CONFIG_MJAGUAR) 	+= -march=btver2
-+        cflags-$(CONFIG_MBULLDOZER) 	+= -march=bdver1
-+        cflags-$(CONFIG_MPILEDRIVER)	+= -march=bdver2
-+        cflags-$(CONFIG_MSTEAMROLLER) 	+= -march=bdver3
-+        cflags-$(CONFIG_MEXCAVATOR) 	+= -march=bdver4
-+        cflags-$(CONFIG_MZEN) 		+= -march=znver1
-+        cflags-$(CONFIG_MZEN2) 	+= -march=znver2
-+        cflags-$(CONFIG_MZEN3) 	+= -march=znver3
-+        cflags-$(CONFIG_MNATIVE_INTEL) += -march=native
-+        cflags-$(CONFIG_MNATIVE_AMD) 	+= -march=native
-+        cflags-$(CONFIG_MATOM) 	+= -march=bonnell
-+        cflags-$(CONFIG_MCORE2) 	+= -march=core2
-+        cflags-$(CONFIG_MNEHALEM) 	+= -march=nehalem
-+        cflags-$(CONFIG_MWESTMERE) 	+= -march=westmere
-+        cflags-$(CONFIG_MSILVERMONT) 	+= -march=silvermont
-+        cflags-$(CONFIG_MGOLDMONT) 	+= -march=goldmont
-+        cflags-$(CONFIG_MGOLDMONTPLUS) += -march=goldmont-plus
-+        cflags-$(CONFIG_MSANDYBRIDGE) 	+= -march=sandybridge
-+        cflags-$(CONFIG_MIVYBRIDGE) 	+= -march=ivybridge
-+        cflags-$(CONFIG_MHASWELL) 	+= -march=haswell
-+        cflags-$(CONFIG_MBROADWELL) 	+= -march=broadwell
-+        cflags-$(CONFIG_MSKYLAKE) 	+= -march=skylake
-+        cflags-$(CONFIG_MSKYLAKEX) 	+= -march=skylake-avx512
-+        cflags-$(CONFIG_MCANNONLAKE) 	+= -march=cannonlake
-+        cflags-$(CONFIG_MICELAKE) 	+= -march=icelake-client
-+        cflags-$(CONFIG_MCASCADELAKE) 	+= -march=cascadelake
-+        cflags-$(CONFIG_MCOOPERLAKE) 	+= -march=cooperlake
-+        cflags-$(CONFIG_MTIGERLAKE) 	+= -march=tigerlake
-+        cflags-$(CONFIG_MSAPPHIRERAPIDS) += -march=sapphirerapids
-+        cflags-$(CONFIG_MROCKETLAKE) 	+= -march=rocketlake
-+        cflags-$(CONFIG_MALDERLAKE) 	+= -march=alderlake
-+        cflags-$(CONFIG_GENERIC_CPU2) 	+= -march=x86-64-v2
-+        cflags-$(CONFIG_GENERIC_CPU3) 	+= -march=x86-64-v3
-+        cflags-$(CONFIG_GENERIC_CPU4) 	+= -march=x86-64-v4
-         cflags-$(CONFIG_GENERIC_CPU)	+= -mtune=generic
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+-
+-        cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-mno-tbm)
++        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
++        cflags-$(CONFIG_MZEN2) += $(call cc-option,-march=znver2)
++        cflags-$(CONFIG_MZEN3) += $(call cc-option,-march=znver3)
++
++        cflags-$(CONFIG_MNATIVE_INTEL) += $(call cc-option,-march=native)
++        cflags-$(CONFIG_MNATIVE_AMD) += $(call cc-option,-march=native)
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell)
++        cflags-$(CONFIG_MCORE2) += $(call cc-option,-march=core2)
++        cflags-$(CONFIG_MNEHALEM) += $(call cc-option,-march=nehalem)
++        cflags-$(CONFIG_MWESTMERE) += $(call cc-option,-march=westmere)
++        cflags-$(CONFIG_MSILVERMONT) += $(call cc-option,-march=silvermont)
++        cflags-$(CONFIG_MGOLDMONT) += $(call cc-option,-march=goldmont)
++        cflags-$(CONFIG_MGOLDMONTPLUS) += $(call cc-option,-march=goldmont-plus)
++        cflags-$(CONFIG_MSANDYBRIDGE) += $(call cc-option,-march=sandybridge)
++        cflags-$(CONFIG_MIVYBRIDGE) += $(call cc-option,-march=ivybridge)
++        cflags-$(CONFIG_MHASWELL) += $(call cc-option,-march=haswell)
++        cflags-$(CONFIG_MBROADWELL) += $(call cc-option,-march=broadwell)
++        cflags-$(CONFIG_MSKYLAKE) += $(call cc-option,-march=skylake)
++        cflags-$(CONFIG_MSKYLAKEX) += $(call cc-option,-march=skylake-avx512)
++        cflags-$(CONFIG_MCANNONLAKE) += $(call cc-option,-march=cannonlake)
++        cflags-$(CONFIG_MICELAKE) += $(call cc-option,-march=icelake-client)
++        cflags-$(CONFIG_MCASCADELAKE) += $(call cc-option,-march=cascadelake)
++        cflags-$(CONFIG_MCOOPERLAKE) += $(call cc-option,-march=cooperlake)
++        cflags-$(CONFIG_MTIGERLAKE) += $(call cc-option,-march=tigerlake)
++        cflags-$(CONFIG_MSAPPHIRERAPIDS) += $(call cc-option,-march=sapphirerapids)
++        cflags-$(CONFIG_MROCKETLAKE) += $(call cc-option,-march=rocketlake)
++        cflags-$(CONFIG_MALDERLAKE) += $(call cc-option,-march=alderlake)
++        cflags-$(CONFIG_GENERIC_CPU2) += $(call cc-option,-march=x86-64-v2)
++        cflags-$(CONFIG_GENERIC_CPU3) += $(call cc-option,-march=x86-64-v3)
++        cflags-$(CONFIG_GENERIC_CPU4) += $(call cc-option,-march=x86-64-v4)
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
          KBUILD_CFLAGS += $(cflags-y)
-
+ 
 diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
 index 75884d2cdec3..4e6a08d4c7e5 100644
 --- a/arch/x86/include/asm/vermagic.h
@@ -670,7 +677,6 @@ index 75884d2cdec3..4e6a08d4c7e5 100644
  #elif defined CONFIG_MELAN
  #define MODULE_PROC_FAMILY "ELAN "
  #elif defined CONFIG_MCRUSOE
---
-2.33.0
-
+-- 
+2.31.1
 


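The variant restored here (for kernels 5.8-5.14) keeps each -march flag wrapped in $(call cc-option,...), which probes the compiler and silently drops flags it does not understand, whereas the 5.15+ variant committed just before passes the flags unconditionally. Both variants also extend arch/x86/include/asm/vermagic.h so every new CPU choice contributes its own MODULE_PROC_FAMILY token; that string becomes part of a module's vermagic, letting the loader refuse modules built for a different processor option. A compressed sketch of the mechanism (the version string and SMP flag below are placeholders, only two families are shown, and the CONFIG define is hardcoded to keep the sketch self-contained):

#include <stdio.h>

/* Normally set by Kconfig; defined here only for this standalone sketch. */
#define CONFIG_MZEN3 1

#if defined(CONFIG_MZEN3)
#define MODULE_PROC_FAMILY "ZEN3 "
#elif defined(CONFIG_MCORE2)
#define MODULE_PROC_FAMILY "CORE2 "
#else
#define MODULE_PROC_FAMILY ""
#endif

int main(void)
{
	/* the kernel concatenates this into each module's vermagic string */
	printf("vermagic: 5.13.19 SMP " MODULE_PROC_FAMILY "\n");
	return 0;
}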

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-09-18 16:08 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-09-18 16:08 UTC (permalink / raw
  To: gentoo-commits

commit:     9721ae214fbeef46bcc22f4e89853505717a4596
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 18 16:08:39 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep 18 16:08:39 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9721ae21

Linux patch 5.13.19

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     8 +
 1018_linux-5.13.19.patch | 17110 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 17118 insertions(+)

diff --git a/0000_README b/0000_README
index ab01fc2..2dcc6e6 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,14 @@ Patch:  1016_linux-5.13.17.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.13.17
 
+Patch:  1017_linux-5.13.18.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.18
+
+Patch:  1018_linux-5.13.19.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.13.19
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1018_linux-5.13.19.patch b/1018_linux-5.13.19.patch
new file mode 100644
index 0000000..5f873ad
--- /dev/null
+++ b/1018_linux-5.13.19.patch
@@ -0,0 +1,17110 @@
+diff --git a/Documentation/admin-guide/devices.txt b/Documentation/admin-guide/devices.txt
+index 9c2be821c2254..922c23bb4372a 100644
+--- a/Documentation/admin-guide/devices.txt
++++ b/Documentation/admin-guide/devices.txt
+@@ -2993,10 +2993,10 @@
+ 		65 = /dev/infiniband/issm1     Second InfiniBand IsSM device
+ 		  ...
+ 		127 = /dev/infiniband/issm63    63rd InfiniBand IsSM device
+-		128 = /dev/infiniband/uverbs0   First InfiniBand verbs device
+-		129 = /dev/infiniband/uverbs1   Second InfiniBand verbs device
++		192 = /dev/infiniband/uverbs0   First InfiniBand verbs device
++		193 = /dev/infiniband/uverbs1   Second InfiniBand verbs device
+ 		  ...
+-		159 = /dev/infiniband/uverbs31  31st InfiniBand verbs device
++		223 = /dev/infiniband/uverbs31  31st InfiniBand verbs device
+ 
+  232 char	Biometric Devices
+ 		0 = /dev/biometric/sensor0/fingerprint	first fingerprint sensor on first device
+diff --git a/Documentation/devicetree/bindings/pinctrl/marvell,armada-37xx-pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/marvell,armada-37xx-pinctrl.txt
+index 38dc56a577604..ecec514b31550 100644
+--- a/Documentation/devicetree/bindings/pinctrl/marvell,armada-37xx-pinctrl.txt
++++ b/Documentation/devicetree/bindings/pinctrl/marvell,armada-37xx-pinctrl.txt
+@@ -43,19 +43,19 @@ group emmc_nb
+ 
+ group pwm0
+  - pin 11 (GPIO1-11)
+- - functions pwm, gpio
++ - functions pwm, led, gpio
+ 
+ group pwm1
+  - pin 12
+- - functions pwm, gpio
++ - functions pwm, led, gpio
+ 
+ group pwm2
+  - pin 13
+- - functions pwm, gpio
++ - functions pwm, led, gpio
+ 
+ group pwm3
+  - pin 14
+- - functions pwm, gpio
++ - functions pwm, led, gpio
+ 
+ group pmic1
+  - pin 7
+diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
+index 53d396650afbe..7f52d9079d764 100644
+--- a/Documentation/filesystems/f2fs.rst
++++ b/Documentation/filesystems/f2fs.rst
+@@ -185,6 +185,7 @@ fault_type=%d		 Support configuring fault injection type, should be
+ 			 FAULT_KVMALLOC		  0x000000002
+ 			 FAULT_PAGE_ALLOC	  0x000000004
+ 			 FAULT_PAGE_GET		  0x000000008
++			 FAULT_ALLOC_BIO	  0x000000010 (obsolete)
+ 			 FAULT_ALLOC_NID	  0x000000020
+ 			 FAULT_ORPHAN		  0x000000040
+ 			 FAULT_BLOCK		  0x000000080
+@@ -289,6 +290,9 @@ compress_mode=%s	 Control file compression mode. This supports "fs" and "user"
+ 			 choosing the target file and the timing. The user can do manual
+ 			 compression/decompression on the compression enabled files using
+ 			 ioctls.
++compress_cache		 Support to use address space of a filesystem managed inode to
++			 cache compressed block, in order to improve cache hit ratio of
++			 random read.
+ inlinecrypt		 When possible, encrypt/decrypt the contents of encrypted
+ 			 files using the blk-crypto framework rather than
+ 			 filesystem-layer encryption. This allows the use of
+diff --git a/Makefile b/Makefile
+index ddbd64b92a723..528a5c37bc8d2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 13
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+ NAME = Opossums on Parade
+ 
+@@ -404,6 +404,11 @@ ifeq ($(ARCH),sparc64)
+        SRCARCH := sparc
+ endif
+ 
++# Additional ARCH settings for parisc
++ifeq ($(ARCH),parisc64)
++       SRCARCH := parisc
++endif
++
+ export cross_compiling :=
+ ifneq ($(SRCARCH),$(SUBARCH))
+ cross_compiling := 1
+diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
+index 8eb70c1febce3..356f70cfcd3bb 100644
+--- a/arch/arm/boot/compressed/Makefile
++++ b/arch/arm/boot/compressed/Makefile
+@@ -85,6 +85,8 @@ compress-$(CONFIG_KERNEL_LZ4)  = lz4
+ libfdt_objs := fdt_rw.o fdt_ro.o fdt_wip.o fdt.o
+ 
+ ifeq ($(CONFIG_ARM_ATAG_DTB_COMPAT),y)
++CFLAGS_REMOVE_atags_to_fdt.o += -Wframe-larger-than=${CONFIG_FRAME_WARN}
++CFLAGS_atags_to_fdt.o += -Wframe-larger-than=1280
+ OBJS	+= $(libfdt_objs) atags_to_fdt.o
+ endif
+ ifeq ($(CONFIG_USE_OF),y)
+diff --git a/arch/arm/boot/dts/at91-kizbox3_common.dtsi b/arch/arm/boot/dts/at91-kizbox3_common.dtsi
+index c4b3750495da8..abe27adfa4d65 100644
+--- a/arch/arm/boot/dts/at91-kizbox3_common.dtsi
++++ b/arch/arm/boot/dts/at91-kizbox3_common.dtsi
+@@ -336,7 +336,7 @@
+ };
+ 
+ &shutdown_controller {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	atmel,wakeup-rtc-timer;
+ 
+ 	input@0 {
+diff --git a/arch/arm/boot/dts/at91-sam9x60ek.dts b/arch/arm/boot/dts/at91-sam9x60ek.dts
+index ebbc9b23aef1c..b1068cca42287 100644
+--- a/arch/arm/boot/dts/at91-sam9x60ek.dts
++++ b/arch/arm/boot/dts/at91-sam9x60ek.dts
+@@ -662,7 +662,7 @@
+ };
+ 
+ &shutdown_controller {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	status = "okay";
+ 
+ 	input@0 {
+diff --git a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+index a9e6fee55a2a8..8034e5dacc808 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d27_som1_ek.dts
+@@ -138,7 +138,7 @@
+ 			};
+ 
+ 			shdwc@f8048010 {
+-				atmel,shdwc-debouncer = <976>;
++				debounce-delay-us = <976>;
+ 				atmel,wakeup-rtc-timer;
+ 
+ 				input@0 {
+diff --git a/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts b/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
+index ff83967fd0082..c145c4e5ef582 100644
+--- a/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts
+@@ -205,7 +205,7 @@
+ };
+ 
+ &shutdown_controller {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	atmel,wakeup-rtc-timer;
+ 
+ 	input@0 {
+diff --git a/arch/arm/boot/dts/at91-sama5d2_icp.dts b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+index bd64721fa23ca..34faca597c352 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_icp.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_icp.dts
+@@ -693,7 +693,7 @@
+ };
+ 
+ &shutdown_controller {
+-	atmel,shdwc-debouncer = <976>;
++	debounce-delay-us = <976>;
+ 	atmel,wakeup-rtc-timer;
+ 
+ 	input@0 {
+diff --git a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+index dfd150eb0fd86..3f972a4086c37 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+@@ -203,7 +203,7 @@
+ 			};
+ 
+ 			shdwc@f8048010 {
+-				atmel,shdwc-debouncer = <976>;
++				debounce-delay-us = <976>;
+ 
+ 				input@0 {
+ 					reg = <0>;
+diff --git a/arch/arm/boot/dts/at91-sama5d2_xplained.dts b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
+index 509c732a0d8b4..627b7bf88d83b 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
+@@ -347,7 +347,7 @@
+ 			};
+ 
+ 			shdwc@f8048010 {
+-				atmel,shdwc-debouncer = <976>;
++				debounce-delay-us = <976>;
+ 				atmel,wakeup-rtc-timer;
+ 
+ 				input@0 {
+diff --git a/arch/arm/boot/dts/imx53-ppd.dts b/arch/arm/boot/dts/imx53-ppd.dts
+index be040b6a02fa8..1f3ee60fb102f 100644
+--- a/arch/arm/boot/dts/imx53-ppd.dts
++++ b/arch/arm/boot/dts/imx53-ppd.dts
+@@ -70,6 +70,12 @@
+ 		clock-frequency = <11289600>;
+ 	};
+ 
++	achc_24M: achc-clock {
++		compatible = "fixed-clock";
++		#clock-cells = <0>;
++		clock-frequency = <24000000>;
++	};
++
+ 	sgtlsound: sound {
+ 		compatible = "fsl,imx53-cpuvo-sgtl5000",
+ 			     "fsl,imx-audio-sgtl5000";
+@@ -314,16 +320,13 @@
+ 		    &gpio4 12 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ 
+-	spidev0: spi@0 {
+-		compatible = "ge,achc";
+-		reg = <0>;
+-		spi-max-frequency = <1000000>;
+-	};
+-
+-	spidev1: spi@1 {
+-		compatible = "ge,achc";
+-		reg = <1>;
+-		spi-max-frequency = <1000000>;
++	spidev0: spi@1 {
++		compatible = "ge,achc", "nxp,kinetis-k20";
++		reg = <1>, <0>;
++		vdd-supply = <&reg_3v3>;
++		vdda-supply = <&reg_3v3>;
++		clocks = <&achc_24M>;
++		reset-gpios = <&gpio3 6 GPIO_ACTIVE_LOW>;
+ 	};
+ 
+ 	gpioxra0: gpio@2 {
+diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
+index 2687c4e890ba8..e36d590e83732 100644
+--- a/arch/arm/boot/dts/qcom-apq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
+@@ -1262,9 +1262,9 @@
+ 				<&mmcc DSI1_BYTE_CLK>,
+ 				<&mmcc DSI_PIXEL_CLK>,
+ 				<&mmcc DSI1_ESC_CLK>;
+-			clock-names = "iface_clk", "bus_clk", "core_mmss_clk",
+-					"src_clk", "byte_clk", "pixel_clk",
+-					"core_clk";
++			clock-names = "iface", "bus", "core_mmss",
++					"src", "byte", "pixel",
++					"core";
+ 
+ 			assigned-clocks = <&mmcc DSI1_BYTE_SRC>,
+ 					<&mmcc DSI1_ESC_SRC>,
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+index 6cf1c8b4c6e28..c9577ba2973d3 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -172,15 +172,15 @@
+ 			sgtl5000_tx_endpoint: endpoint@0 {
+ 				reg = <0>;
+ 				remote-endpoint = <&sai2a_endpoint>;
+-				frame-master;
+-				bitclock-master;
++				frame-master = <&sgtl5000_tx_endpoint>;
++				bitclock-master = <&sgtl5000_tx_endpoint>;
+ 			};
+ 
+ 			sgtl5000_rx_endpoint: endpoint@1 {
+ 				reg = <1>;
+ 				remote-endpoint = <&sai2b_endpoint>;
+-				frame-master;
+-				bitclock-master;
++				frame-master = <&sgtl5000_rx_endpoint>;
++				bitclock-master = <&sgtl5000_rx_endpoint>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+index 64dca5b7f748d..6885948f3024e 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+@@ -220,8 +220,8 @@
+ &i2c4 {
+ 	hdmi-transmitter@3d {
+ 		compatible = "adi,adv7513";
+-		reg = <0x3d>, <0x2d>, <0x4d>, <0x5d>;
+-		reg-names = "main", "cec", "edid", "packet";
++		reg = <0x3d>, <0x4d>, <0x2d>, <0x5d>;
++		reg-names = "main", "edid", "cec", "packet";
+ 		clocks = <&cec_clock>;
+ 		clock-names = "cec";
+ 
+@@ -239,8 +239,6 @@
+ 		adi,input-depth = <8>;
+ 		adi,input-colorspace = "rgb";
+ 		adi,input-clock = "1x";
+-		adi,input-style = <1>;
+-		adi,input-justification = "evenly";
+ 
+ 		ports {
+ 			#address-cells = <1>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+index 59f18846cf5d0..586aac8a998c0 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+@@ -220,15 +220,15 @@
+ 			cs42l51_tx_endpoint: endpoint@0 {
+ 				reg = <0>;
+ 				remote-endpoint = <&sai2a_endpoint>;
+-				frame-master;
+-				bitclock-master;
++				frame-master = <&cs42l51_tx_endpoint>;
++				bitclock-master = <&cs42l51_tx_endpoint>;
+ 			};
+ 
+ 			cs42l51_rx_endpoint: endpoint@1 {
+ 				reg = <1>;
+ 				remote-endpoint = <&sai2b_endpoint>;
+-				frame-master;
+-				bitclock-master;
++				frame-master = <&cs42l51_rx_endpoint>;
++				bitclock-master = <&cs42l51_rx_endpoint>;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+index 14cd3238355b7..2c74993f1a9e8 100644
+--- a/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
++++ b/arch/arm/boot/dts/tegra20-acer-a500-picasso.dts
+@@ -716,7 +716,6 @@
+ 		nvidia,xcvr-setup-use-fuses;
+ 		nvidia,xcvr-lsfslew = <2>;
+ 		nvidia,xcvr-lsrslew = <2>;
+-		vbus-supply = <&vdd_vbus1>;
+ 	};
+ 
+ 	usb@c5008000 {
+@@ -728,7 +727,7 @@
+ 		nvidia,xcvr-setup-use-fuses;
+ 		nvidia,xcvr-lsfslew = <2>;
+ 		nvidia,xcvr-lsrslew = <2>;
+-		vbus-supply = <&vdd_vbus3>;
++		vbus-supply = <&vdd_5v0_sys>;
+ 	};
+ 
+ 	brcm_wifi_pwrseq: wifi-pwrseq {
+@@ -988,28 +987,6 @@
+ 		vin-supply = <&vdd_5v0_sys>;
+ 	};
+ 
+-	vdd_vbus1: regulator@4 {
+-		compatible = "regulator-fixed";
+-		regulator-name = "vdd_usb1_vbus";
+-		regulator-min-microvolt = <5000000>;
+-		regulator-max-microvolt = <5000000>;
+-		regulator-always-on;
+-		gpio = <&gpio TEGRA_GPIO(D, 0) GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
+-		vin-supply = <&vdd_5v0_sys>;
+-	};
+-
+-	vdd_vbus3: regulator@5 {
+-		compatible = "regulator-fixed";
+-		regulator-name = "vdd_usb3_vbus";
+-		regulator-min-microvolt = <5000000>;
+-		regulator-max-microvolt = <5000000>;
+-		regulator-always-on;
+-		gpio = <&gpio TEGRA_GPIO(D, 3) GPIO_ACTIVE_HIGH>;
+-		enable-active-high;
+-		vin-supply = <&vdd_5v0_sys>;
+-	};
+-
+ 	sound {
+ 		compatible = "nvidia,tegra-audio-wm8903-picasso",
+ 			     "nvidia,tegra-audio-wm8903";
+diff --git a/arch/arm/boot/dts/tegra20-tamonten.dtsi b/arch/arm/boot/dts/tegra20-tamonten.dtsi
+index 95e6bccdb4f6e..dd4d506683de7 100644
+--- a/arch/arm/boot/dts/tegra20-tamonten.dtsi
++++ b/arch/arm/boot/dts/tegra20-tamonten.dtsi
+@@ -185,8 +185,9 @@
+ 				nvidia,pins = "ata", "atb", "atc", "atd", "ate",
+ 					"cdev1", "cdev2", "dap1", "dtb", "gma",
+ 					"gmb", "gmc", "gmd", "gme", "gpu7",
+-					"gpv", "i2cp", "pta", "rm", "slxa",
+-					"slxk", "spia", "spib", "uac";
++					"gpv", "i2cp", "irrx", "irtx", "pta",
++					"rm", "slxa", "slxk", "spia", "spib",
++					"uac";
+ 				nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ 				nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ 			};
+@@ -211,7 +212,7 @@
+ 			conf_ddc {
+ 				nvidia,pins = "ddc", "dta", "dtd", "kbca",
+ 					"kbcb", "kbcc", "kbcd", "kbce", "kbcf",
+-					"sdc";
++					"sdc", "uad", "uca";
+ 				nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ 				nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ 			};
+@@ -221,10 +222,9 @@
+ 					"lvp0", "owc", "sdb";
+ 				nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ 			};
+-			conf_irrx {
+-				nvidia,pins = "irrx", "irtx", "sdd", "spic",
+-					"spie", "spih", "uaa", "uab", "uad",
+-					"uca", "ucb";
++			conf_sdd {
++				nvidia,pins = "sdd", "spic", "spie", "spih",
++					"uaa", "uab", "ucb";
+ 				nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ 				nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ 			};
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix-tx6.dts b/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix-tx6.dts
+index be81330db14f6..02641191682e0 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix-tx6.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6-tanix-tx6.dts
+@@ -32,14 +32,14 @@
+ 		};
+ 	};
+ 
+-	reg_vcc3v3: vcc3v3 {
++	reg_vcc3v3: regulator-vcc3v3 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vcc3v3";
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
+ 	};
+ 
+-	reg_vdd_cpu_gpu: vdd-cpu-gpu {
++	reg_vdd_cpu_gpu: regulator-vdd-cpu-gpu {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "vdd-cpu-gpu";
+ 		regulator-min-microvolt = <1135000>;
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts b/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts
+index db3d303093f61..6d22efbd645cb 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts
+@@ -83,15 +83,9 @@
+ 			};
+ 
+ 			eeprom@52 {
+-				compatible = "atmel,24c512";
++				compatible = "onnn,cat24c04", "atmel,24c04";
+ 				reg = <0x52>;
+ 			};
+-
+-			eeprom@53 {
+-				compatible = "atmel,24c512";
+-				reg = <0x53>;
+-			};
+-
+ 		};
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts b/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts
+index 60acdf0b689ee..7025aad8ae897 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts
+@@ -59,14 +59,9 @@
+ 	};
+ 
+ 	eeprom@52 {
+-		compatible = "atmel,24c512";
++		compatible = "onnn,cat24c05", "atmel,24c04";
+ 		reg = <0x52>;
+ 	};
+-
+-	eeprom@53 {
+-		compatible = "atmel,24c512";
+-		reg = <0x53>;
+-	};
+ };
+ 
+ &i2c3 {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw700x.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw700x.dtsi
+index c769fadbd008f..00f86cada30d2 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw700x.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw700x.dtsi
+@@ -278,70 +278,86 @@
+ 
+ 	pmic@69 {
+ 		compatible = "mps,mp5416";
+-		pinctrl-names = "default";
+-		pinctrl-0 = <&pinctrl_pmic>;
+ 		reg = <0x69>;
+ 
+ 		regulators {
++			/* vdd_0p95: DRAM/GPU/VPU */
+ 			buck1 {
+-				regulator-name = "vdd_0p95";
+-				regulator-min-microvolt = <805000>;
++				regulator-name = "buck1";
++				regulator-min-microvolt = <800000>;
+ 				regulator-max-microvolt = <1000000>;
+-				regulator-max-microamp = <2500000>;
++				regulator-min-microamp  = <3800000>;
++				regulator-max-microamp  = <6800000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 
++			/* vdd_soc */
+ 			buck2 {
+-				regulator-name = "vdd_soc";
+-				regulator-min-microvolt = <805000>;
++				regulator-name = "buck2";
++				regulator-min-microvolt = <800000>;
+ 				regulator-max-microvolt = <900000>;
+-				regulator-max-microamp = <1000000>;
++				regulator-min-microamp  = <2200000>;
++				regulator-max-microamp  = <5200000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 
++			/* vdd_arm */
+ 			buck3_reg: buck3 {
+-				regulator-name = "vdd_arm";
+-				regulator-min-microvolt = <805000>;
++				regulator-name = "buck3";
++				regulator-min-microvolt = <800000>;
+ 				regulator-max-microvolt = <1000000>;
+-				regulator-max-microamp = <2200000>;
+-				regulator-boot-on;
++				regulator-min-microamp  = <3800000>;
++				regulator-max-microamp  = <6800000>;
++				regulator-always-on;
+ 			};
+ 
++			/* vdd_1p8 */
+ 			buck4 {
+-				regulator-name = "vdd_1p8";
++				regulator-name = "buck4";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+-				regulator-max-microamp = <500000>;
++				regulator-min-microamp  = <2200000>;
++				regulator-max-microamp  = <5200000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 
++			/* nvcc_snvs_1p8 */
+ 			ldo1 {
+-				regulator-name = "nvcc_snvs_1p8";
++				regulator-name = "ldo1";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+-				regulator-max-microamp = <300000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 
++			/* vdd_snvs_0p8 */
+ 			ldo2 {
+-				regulator-name = "vdd_snvs_0p8";
++				regulator-name = "ldo2";
+ 				regulator-min-microvolt = <800000>;
+ 				regulator-max-microvolt = <800000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 
++			/* vdd_0p9 */
+ 			ldo3 {
+-				regulator-name = "vdd_0p95";
+-				regulator-min-microvolt = <800000>;
+-				regulator-max-microvolt = <800000>;
++				regulator-name = "ldo3";
++				regulator-min-microvolt = <900000>;
++				regulator-max-microvolt = <900000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 
++			/* vdd_1p8 */
+ 			ldo4 {
+-				regulator-name = "vdd_1p8";
++				regulator-name = "ldo4";
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <1800000>;
+ 				regulator-boot-on;
++				regulator-always-on;
+ 			};
+ 		};
+ 	};
+@@ -426,12 +442,6 @@
+ 		>;
+ 	};
+ 
+-	pinctrl_pmic: pmicgrp {
+-		fsl,pins = <
+-			MX8MM_IOMUXC_GPIO1_IO03_GPIO1_IO3	0x41
+-		>;
+-	};
+-
+ 	pinctrl_uart2: uart2grp {
+ 		fsl,pins = <
+ 			MX8MM_IOMUXC_UART2_RXD_UART2_DCE_RX	0x140
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi
+index 905b68a3daa5a..8e4a0ce99790b 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi
+@@ -46,7 +46,7 @@
+ 		pinctrl-0 = <&pinctrl_reg_usb1_en>;
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "usb_otg1_vbus";
+-		gpio = <&gpio1 12 GPIO_ACTIVE_HIGH>;
++		gpio = <&gpio1 10 GPIO_ACTIVE_HIGH>;
+ 		enable-active-high;
+ 		regulator-min-microvolt = <5000000>;
+ 		regulator-max-microvolt = <5000000>;
+@@ -156,7 +156,8 @@
+ 
+ 	pinctrl_reg_usb1_en: regusb1grp {
+ 		fsl,pins = <
+-			MX8MM_IOMUXC_GPIO1_IO12_GPIO1_IO12	0x41
++			MX8MM_IOMUXC_GPIO1_IO10_GPIO1_IO10	0x41
++			MX8MM_IOMUXC_GPIO1_IO12_GPIO1_IO12	0x141
+ 			MX8MM_IOMUXC_GPIO1_IO13_USB1_OTG_OC	0x41
+ 		>;
+ 	};
+diff --git a/arch/arm64/boot/dts/nvidia/tegra132.dtsi b/arch/arm64/boot/dts/nvidia/tegra132.dtsi
+index 9928a87f593a5..b0bcda8cc51f4 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra132.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra132.dtsi
+@@ -1227,13 +1227,13 @@
+ 
+ 		cpu@0 {
+ 			device_type = "cpu";
+-			compatible = "nvidia,denver";
++			compatible = "nvidia,tegra132-denver";
+ 			reg = <0>;
+ 		};
+ 
+ 		cpu@1 {
+ 			device_type = "cpu";
+-			compatible = "nvidia,denver";
++			compatible = "nvidia,tegra132-denver";
+ 			reg = <1>;
+ 		};
+ 	};
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index 2e40b60472833..203318aa660f7 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -2005,7 +2005,7 @@
+ 	};
+ 
+ 	pcie_ep@14160000 {
+-		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
++		compatible = "nvidia,tegra194-pcie-ep";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX4A>;
+ 		reg = <0x00 0x14160000 0x0 0x00020000>, /* appl registers (128K)      */
+ 		      <0x00 0x36040000 0x0 0x00040000>, /* iATU_DMA reg space (256K)  */
+@@ -2037,7 +2037,7 @@
+ 	};
+ 
+ 	pcie_ep@14180000 {
+-		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
++		compatible = "nvidia,tegra194-pcie-ep";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
+ 		reg = <0x00 0x14180000 0x0 0x00020000>, /* appl registers (128K)      */
+ 		      <0x00 0x38040000 0x0 0x00040000>, /* iATU_DMA reg space (256K)  */
+@@ -2069,7 +2069,7 @@
+ 	};
+ 
+ 	pcie_ep@141a0000 {
+-		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
++		compatible = "nvidia,tegra194-pcie-ep";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8A>;
+ 		reg = <0x00 0x141a0000 0x0 0x00020000>, /* appl registers (128K)      */
+ 		      <0x00 0x3a040000 0x0 0x00040000>, /* iATU_DMA reg space (256K)  */
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 9fa5b028e4f39..23ee1bfa43189 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -151,7 +151,7 @@
+ 		#size-cells = <2>;
+ 		ranges;
+ 
+-		rpm_msg_ram: memory@0x60000 {
++		rpm_msg_ram: memory@60000 {
+ 			reg = <0x0 0x60000 0x0 0x6000>;
+ 			no-map;
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts b/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
+index e8c37a1693d3b..cc08dc4eb56a5 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
++++ b/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts
+@@ -20,7 +20,7 @@
+ 		stdout-path = "serial0";
+ 	};
+ 
+-	memory {
++	memory@40000000 {
+ 		device_type = "memory";
+ 		reg = <0x0 0x40000000 0x0 0x20000000>;
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index a32e5e79ab0b7..e8db62470b230 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -567,10 +567,10 @@
+ 
+ 		pcie1: pci@10000000 {
+ 			compatible = "qcom,pcie-ipq8074";
+-			reg =  <0x10000000 0xf1d
+-				0x10000f20 0xa8
+-				0x00088000 0x2000
+-				0x10100000 0x1000>;
++			reg =  <0x10000000 0xf1d>,
++			       <0x10000f20 0xa8>,
++			       <0x00088000 0x2000>,
++			       <0x10100000 0x1000>;
+ 			reg-names = "dbi", "elbi", "parf", "config";
+ 			device_type = "pci";
+ 			linux,pci-domain = <1>;
+@@ -629,10 +629,10 @@
+ 
+ 		pcie0: pci@20000000 {
+ 			compatible = "qcom,pcie-ipq8074";
+-			reg =  <0x20000000 0xf1d
+-				0x20000f20 0xa8
+-				0x00080000 0x2000
+-				0x20100000 0x1000>;
++			reg = <0x20000000 0xf1d>,
++			      <0x20000f20 0xa8>,
++			      <0x00080000 0x2000>,
++			      <0x20100000 0x1000>;
+ 			reg-names = "dbi", "elbi", "parf", "config";
+ 			device_type = "pci";
+ 			linux,pci-domain = <0>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index f9f0b5aa6a266..87a3217e88efa 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -15,16 +15,18 @@
+ 	chosen { };
+ 
+ 	clocks {
+-		xo_board: xo_board {
++		xo_board: xo-board {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <19200000>;
++			clock-output-names = "xo_board";
+ 		};
+ 
+-		sleep_clk: sleep_clk {
++		sleep_clk: sleep-clk {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <32768>;
++			clock-output-names = "sleep_clk";
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index ce430ba9c1183..957487f84eadc 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -17,14 +17,14 @@
+ 	chosen { };
+ 
+ 	clocks {
+-		xo_board: xo_board {
++		xo_board: xo-board {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <19200000>;
+ 			clock-output-names = "xo_board";
+ 		};
+ 
+-		sleep_clk: sleep_clk {
++		sleep_clk: sleep-clk {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <32764>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm630.dtsi b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+index f91a928466c3b..06a0ae773ad50 100644
+--- a/arch/arm64/boot/dts/qcom/sdm630.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+@@ -17,14 +17,14 @@
+ 	chosen { };
+ 
+ 	clocks {
+-		xo_board: xo_board {
++		xo_board: xo-board {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <19200000>;
+ 			clock-output-names = "xo_board";
+ 		};
+ 
+-		sleep_clk: sleep_clk {
++		sleep_clk: sleep-clk {
+ 			compatible = "fixed-clock";
+ 			#clock-cells = <0>;
+ 			clock-frequency = <32764>;
+@@ -343,10 +343,19 @@
+ 		};
+ 
+ 		qhee_code: qhee-code@85800000 {
+-			reg = <0x0 0x85800000 0x0 0x3700000>;
++			reg = <0x0 0x85800000 0x0 0x600000>;
+ 			no-map;
+ 		};
+ 
++		rmtfs_mem: memory@85e00000 {
++			compatible = "qcom,rmtfs-mem";
++			reg = <0x0 0x85e00000 0x0 0x200000>;
++			no-map;
++
++			qcom,client-id = <1>;
++			qcom,vmid = <15>;
++		};
++
+ 		smem_region: smem-mem@86000000 {
+ 			reg = <0 0x86000000 0 0x200000>;
+ 			no-map;
+@@ -357,58 +366,44 @@
+ 			no-map;
+ 		};
+ 
+-		modem_fw_mem: modem-fw-region@8ac00000 {
++		mpss_region: mpss@8ac00000 {
+ 			reg = <0x0 0x8ac00000 0x0 0x7e00000>;
+ 			no-map;
+ 		};
+ 
+-		adsp_fw_mem: adsp-fw-region@92a00000 {
++		adsp_region: adsp@92a00000 {
+ 			reg = <0x0 0x92a00000 0x0 0x1e00000>;
+ 			no-map;
+ 		};
+ 
+-		pil_mba_mem: pil-mba-region@94800000 {
++		mba_region: mba@94800000 {
+ 			reg = <0x0 0x94800000 0x0 0x200000>;
+ 			no-map;
+ 		};
+ 
+-		buffer_mem: buffer-region@94a00000 {
++		buffer_mem: tzbuffer@94a00000 {
+ 			reg = <0x0 0x94a00000 0x0 0x100000>;
+ 			no-map;
+ 		};
+ 
+-		venus_fw_mem: venus-fw-region@9f800000 {
++		venus_region: venus@9f800000 {
+ 			reg = <0x0 0x9f800000 0x0 0x800000>;
+ 			no-map;
+ 		};
+ 
+-		secure_region2: secure-region2@f7c00000 {
+-			reg = <0x0 0xf7c00000 0x0 0x5c00000>;
+-			no-map;
+-		};
+-
+ 		adsp_mem: adsp-region@f6000000 {
+ 			reg = <0x0 0xf6000000 0x0 0x800000>;
+ 			no-map;
+ 		};
+ 
+-		qseecom_ta_mem: qseecom-ta-region@fec00000 {
+-			reg = <0x0 0xfec00000 0x0 0x1000000>;
+-			no-map;
+-		};
+-
+ 		qseecom_mem: qseecom-region@f6800000 {
+ 			reg = <0x0 0xf6800000 0x0 0x1400000>;
+ 			no-map;
+ 		};
+ 
+-		secure_display_memory: secure-region@f5c00000 {
+-			reg = <0x0 0xf5c00000 0x0 0x5c00000>;
+-			no-map;
+-		};
+-
+-		cont_splash_mem: cont-splash-region@9d400000 {
+-			reg = <0x0 0x9d400000 0x0 0x23ff000>;
++		zap_shader_region: gpu@fed00000 {
++			compatible = "shared-dma-pool";
++			reg = <0x0 0xfed00000 0x0 0xa00000>;
+ 			no-map;
+ 		};
+ 	};
+@@ -527,14 +522,18 @@
+ 			reg = <0x01f40000 0x20000>;
+ 		};
+ 
+-		tlmm: pinctrl@3000000 {
++		tlmm: pinctrl@3100000 {
+ 			compatible = "qcom,sdm630-pinctrl";
+-			reg = <0x03000000 0xc00000>;
++			reg = <0x03100000 0x400000>,
++				  <0x03500000 0x400000>,
++				  <0x03900000 0x400000>;
++			reg-names = "south", "center", "north";
+ 			interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
+ 			gpio-controller;
+-			#gpio-cells = <0x2>;
++			gpio-ranges = <&tlmm 0 0 114>;
++			#gpio-cells = <2>;
+ 			interrupt-controller;
+-			#interrupt-cells = <0x2>;
++			#interrupt-cells = <2>;
+ 
+ 			blsp1_uart1_default: blsp1-uart1-default {
+ 				pins = "gpio0", "gpio1", "gpio2", "gpio3";
+@@ -554,40 +553,48 @@
+ 				bias-disable;
+ 			};
+ 
+-			blsp2_uart1_tx_active: blsp2-uart1-tx-active {
+-				pins = "gpio16";
+-				drive-strength = <2>;
+-				bias-disable;
+-			};
+-
+-			blsp2_uart1_tx_sleep: blsp2-uart1-tx-sleep {
+-				pins = "gpio16";
+-				drive-strength = <2>;
+-				bias-pull-up;
+-			};
++			blsp2_uart1_default: blsp2-uart1-active {
++				tx-rts {
++					pins = "gpio16", "gpio19";
++					function = "blsp_uart5";
++					drive-strength = <2>;
++					bias-disable;
++				};
+ 
+-			blsp2_uart1_rxcts_active: blsp2-uart1-rxcts-active {
+-				pins = "gpio17", "gpio18";
+-				drive-strength = <2>;
+-				bias-disable;
+-			};
++				rx {
++					/*
++					 * Avoid garbage data while BT module
++					 * is powered off or not driving signal
++					 */
++					pins = "gpio17";
++					function = "blsp_uart5";
++					drive-strength = <2>;
++					bias-pull-up;
++				};
+ 
+-			blsp2_uart1_rxcts_sleep: blsp2-uart1-rxcts-sleep {
+-				pins = "gpio17", "gpio18";
+-				drive-strength = <2>;
+-				bias-no-pull;
++				cts {
++					/* Match the pull of the BT module */
++					pins = "gpio18";
++					function = "blsp_uart5";
++					drive-strength = <2>;
++					bias-pull-down;
++				};
+ 			};
+ 
+-			blsp2_uart1_rfr_active: blsp2-uart1-rfr-active {
+-				pins = "gpio19";
+-				drive-strength = <2>;
+-				bias-disable;
+-			};
++			blsp2_uart1_sleep: blsp2-uart1-sleep {
++				tx {
++					pins = "gpio16";
++					function = "gpio";
++					drive-strength = <2>;
++					bias-pull-up;
++				};
+ 
+-			blsp2_uart1_rfr_sleep: blsp2-uart1-rfr-sleep {
+-				pins = "gpio19";
+-				drive-strength = <2>;
+-				bias-no-pull;
++				rx-cts-rts {
++					pins = "gpio17", "gpio18", "gpio19";
++					function = "gpio";
++					drive-strength = <2>;
++					bias-no-pull;
++				};
+ 			};
+ 
+ 			i2c1_default: i2c1-default {
+@@ -686,50 +693,106 @@
+ 				bias-pull-up;
+ 			};
+ 
+-			sdc1_clk_on: sdc1-clk-on {
+-				pins = "sdc1_clk";
+-				bias-disable;
+-				drive-strength = <16>;
+-			};
++			sdc1_state_on: sdc1-on {
++				clk {
++					pins = "sdc1_clk";
++					bias-disable;
++					drive-strength = <16>;
++				};
+ 
+-			sdc1_clk_off: sdc1-clk-off {
+-				pins = "sdc1_clk";
+-				bias-disable;
+-				drive-strength = <2>;
+-			};
++				cmd {
++					pins = "sdc1_cmd";
++					bias-pull-up;
++					drive-strength = <10>;
++				};
+ 
+-			sdc1_cmd_on: sdc1-cmd-on {
+-				pins = "sdc1_cmd";
+-				bias-pull-up;
+-				drive-strength = <10>;
+-			};
++				data {
++					pins = "sdc1_data";
++					bias-pull-up;
++					drive-strength = <10>;
++				};
+ 
+-			sdc1_cmd_off: sdc1-cmd-off {
+-				pins = "sdc1_cmd";
+-				bias-pull-up;
+-				drive-strength = <2>;
++				rclk {
++					pins = "sdc1_rclk";
++					bias-pull-down;
++				};
+ 			};
+ 
+-			sdc1_data_on: sdc1-data-on {
+-				pins = "sdc1_data";
+-				bias-pull-up;
+-				drive-strength = <8>;
+-			};
++			sdc1_state_off: sdc1-off {
++				clk {
++					pins = "sdc1_clk";
++					bias-disable;
++					drive-strength = <2>;
++				};
+ 
+-			sdc1_data_off: sdc1-data-off {
+-				pins = "sdc1_data";
+-				bias-pull-up;
+-				drive-strength = <2>;
++				cmd {
++					pins = "sdc1_cmd";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
++
++				data {
++					pins = "sdc1_data";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
++
++				rclk {
++					pins = "sdc1_rclk";
++					bias-pull-down;
++				};
+ 			};
+ 
+-			sdc1_rclk_on: sdc1-rclk-on {
+-				pins = "sdc1_rclk";
+-				bias-pull-down;
++			sdc2_state_on: sdc2-on {
++				clk {
++					pins = "sdc2_clk";
++					bias-disable;
++					drive-strength = <16>;
++				};
++
++				cmd {
++					pins = "sdc2_cmd";
++					bias-pull-up;
++					drive-strength = <10>;
++				};
++
++				data {
++					pins = "sdc2_data";
++					bias-pull-up;
++					drive-strength = <10>;
++				};
++
++				sd-cd {
++					pins = "gpio54";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
+ 			};
+ 
+-			sdc1_rclk_off: sdc1-rclk-off {
+-				pins = "sdc1_rclk";
+-				bias-pull-down;
++			sdc2_state_off: sdc2-off {
++				clk {
++					pins = "sdc2_clk";
++					bias-disable;
++					drive-strength = <2>;
++				};
++
++				cmd {
++					pins = "sdc2_cmd";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
++
++				data {
++					pins = "sdc2_data";
++					bias-pull-up;
++					drive-strength = <2>;
++				};
++
++				sd-cd {
++					pins = "gpio54";
++					bias-disable;
++					drive-strength = <2>;
++				};
+ 			};
+ 		};
+ 
+@@ -823,8 +886,8 @@
+ 			clock-names = "core", "iface", "xo", "ice";
+ 
+ 			pinctrl-names = "default", "sleep";
+-			pinctrl-0 = <&sdc1_clk_on &sdc1_cmd_on &sdc1_data_on &sdc1_rclk_on>;
+-			pinctrl-1 = <&sdc1_clk_off &sdc1_cmd_off &sdc1_data_off &sdc1_rclk_off>;
++			pinctrl-0 = <&sdc1_state_on>;
++			pinctrl-1 = <&sdc1_state_off>;
+ 
+ 			bus-width = <8>;
+ 			non-removable;
+@@ -969,10 +1032,8 @@
+ 			dmas = <&blsp2_dma 0>, <&blsp2_dma 1>;
+ 			dma-names = "tx", "rx";
+ 			pinctrl-names = "default", "sleep";
+-			pinctrl-0 = <&blsp2_uart1_tx_active &blsp2_uart1_rxcts_active
+-				&blsp2_uart1_rfr_active>;
+-			pinctrl-1 = <&blsp2_uart1_tx_sleep &blsp2_uart1_rxcts_sleep
+-				&blsp2_uart1_rfr_sleep>;
++			pinctrl-0 = <&blsp2_uart1_default>;
++			pinctrl-1 = <&blsp2_uart1_sleep>;
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 1316bea3eab52..6d28bfd9a8f59 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -3773,7 +3773,7 @@
+ 			};
+ 		};
+ 
+-		epss_l3: interconnect@18591000 {
++		epss_l3: interconnect@18590000 {
+ 			compatible = "qcom,sm8250-epss-l3";
+ 			reg = <0 0x18590000 0 0x1000>;
+ 
+diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
+index b83fb24954b77..3198acb2aad8c 100644
+--- a/arch/arm64/include/asm/el2_setup.h
++++ b/arch/arm64/include/asm/el2_setup.h
+@@ -149,8 +149,17 @@
+ 	ubfx	x1, x1, #ID_AA64MMFR0_FGT_SHIFT, #4
+ 	cbz	x1, .Lskip_fgt_\@
+ 
+-	msr_s	SYS_HDFGRTR_EL2, xzr
+-	msr_s	SYS_HDFGWTR_EL2, xzr
++	mov	x0, xzr
++	mrs	x1, id_aa64dfr0_el1
++	ubfx	x1, x1, #ID_AA64DFR0_PMSVER_SHIFT, #4
++	cmp	x1, #3
++	b.lt	.Lset_fgt_\@
++	/* Disable PMSNEVFR_EL1 read and write traps */
++	orr	x0, x0, #(1 << 62)
++
++.Lset_fgt_\@:
++	msr_s	SYS_HDFGRTR_EL2, x0
++	msr_s	SYS_HDFGWTR_EL2, x0
+ 	msr_s	SYS_HFGRTR_EL2, xzr
+ 	msr_s	SYS_HFGWTR_EL2, xzr
+ 	msr_s	SYS_HFGITR_EL2, xzr
+diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
+index d44df9d62fc9c..a54ce2646cba2 100644
+--- a/arch/arm64/include/asm/kernel-pgtable.h
++++ b/arch/arm64/include/asm/kernel-pgtable.h
+@@ -65,8 +65,8 @@
+ #define EARLY_KASLR	(0)
+ #endif
+ 
+-#define EARLY_ENTRIES(vstart, vend, shift) (((vend) >> (shift)) \
+-					- ((vstart) >> (shift)) + 1 + EARLY_KASLR)
++#define EARLY_ENTRIES(vstart, vend, shift) \
++	((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + EARLY_KASLR)
+ 
+ #define EARLY_PGDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT))
+ 
+diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
+index 75beffe2ee8a8..e9c30859f80cd 100644
+--- a/arch/arm64/include/asm/mmu.h
++++ b/arch/arm64/include/asm/mmu.h
+@@ -27,11 +27,32 @@ typedef struct {
+ } mm_context_t;
+ 
+ /*
+- * This macro is only used by the TLBI and low-level switch_mm() code,
+- * neither of which can race with an ASID change. We therefore don't
+- * need to reload the counter using atomic64_read().
++ * We use atomic64_read() here because the ASID for an 'mm_struct' can
++ * be reallocated when scheduling one of its threads following a
++ * rollover event (see new_context() and flush_context()). In this case,
++ * a concurrent TLBI (e.g. via try_to_unmap_one() and ptep_clear_flush())
++ * may use a stale ASID. This is fine in principle as the new ASID is
++ * guaranteed to be clean in the TLB, but the TLBI routines have to take
++ * care to handle the following race:
++ *
++ *    CPU 0                    CPU 1                          CPU 2
++ *
++ *    // ptep_clear_flush(mm)
++ *    xchg_relaxed(pte, 0)
++ *    DSB ISHST
++ *    old = ASID(mm)
++ *         |                                                  <rollover>
++ *         |                   new = new_context(mm)
++ *         \-----------------> atomic_set(mm->context.id, new)
++ *                             cpu_switch_mm(mm)
++ *                             // Hardware walk of pte using new ASID
++ *    TLBI(old)
++ *
++ * In this scenario, the barrier on CPU 0 and the dependency on CPU 1
++ * ensure that the page-table walker on CPU 1 *must* see the invalid PTE
++ * written by CPU 0.
+  */
+-#define ASID(mm)	((mm)->context.id.counter & 0xffff)
++#define ASID(mm)	(atomic64_read(&(mm)->context.id) & 0xffff)
+ 
+ static inline bool arm64_kernel_unmapped_at_el0(void)
+ {
+diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
+index cc3f5a33ff9c5..36f02892e1df8 100644
+--- a/arch/arm64/include/asm/tlbflush.h
++++ b/arch/arm64/include/asm/tlbflush.h
+@@ -245,9 +245,10 @@ static inline void flush_tlb_all(void)
+ 
+ static inline void flush_tlb_mm(struct mm_struct *mm)
+ {
+-	unsigned long asid = __TLBI_VADDR(0, ASID(mm));
++	unsigned long asid;
+ 
+ 	dsb(ishst);
++	asid = __TLBI_VADDR(0, ASID(mm));
+ 	__tlbi(aside1is, asid);
+ 	__tlbi_user(aside1is, asid);
+ 	dsb(ish);
+@@ -256,9 +257,10 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
+ static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
+ 					 unsigned long uaddr)
+ {
+-	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
++	unsigned long addr;
+ 
+ 	dsb(ishst);
++	addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
+ 	__tlbi(vale1is, addr);
+ 	__tlbi_user(vale1is, addr);
+ }
+@@ -283,9 +285,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
+ {
+ 	int num = 0;
+ 	int scale = 0;
+-	unsigned long asid = ASID(vma->vm_mm);
+-	unsigned long addr;
+-	unsigned long pages;
++	unsigned long asid, addr, pages;
+ 
+ 	start = round_down(start, stride);
+ 	end = round_up(end, stride);
+@@ -305,6 +305,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
+ 	}
+ 
+ 	dsb(ishst);
++	asid = ASID(vma->vm_mm);
+ 
+ 	/*
+ 	 * When the CPU does not support TLB range operations, flush the TLB
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index 96873dfa67fd5..3374bbd18fc66 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -176,7 +176,7 @@ SYM_CODE_END(preserve_boot_args)
+  * to be composed of multiple pages. (This effectively scales the end index).
+  *
+  *	vstart:	virtual address of start of range
+- *	vend:	virtual address of end of range
++ *	vend:	virtual address of end of range - we map [vstart, vend]
+  *	shift:	shift used to transform virtual address into index
+  *	ptrs:	number of entries in page table
+  *	istart:	index in table corresponding to vstart
+@@ -213,17 +213,18 @@ SYM_CODE_END(preserve_boot_args)
+  *
+  *	tbl:	location of page table
+  *	rtbl:	address to be used for first level page table entry (typically tbl + PAGE_SIZE)
+- *	vstart:	start address to map
+- *	vend:	end address to map - we map [vstart, vend]
++ *	vstart:	virtual address of start of range
++ *	vend:	virtual address of end of range - we map [vstart, vend - 1]
+  *	flags:	flags to use to map last level entries
+  *	phys:	physical address corresponding to vstart - physical memory is contiguous
+  *	pgds:	the number of pgd entries
+  *
+  * Temporaries:	istart, iend, tmp, count, sv - these need to be different registers
+- * Preserves:	vstart, vend, flags
+- * Corrupts:	tbl, rtbl, istart, iend, tmp, count, sv
++ * Preserves:	vstart, flags
++ * Corrupts:	tbl, rtbl, vend, istart, iend, tmp, count, sv
+  */
+ 	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, pgds, istart, iend, tmp, count, sv
++	sub \vend, \vend, #1
+ 	add \rtbl, \tbl, #PAGE_SIZE
+ 	mov \sv, \rtbl
+ 	mov \count, #0
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 709d2c433c5e9..f6b1a88245db2 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -181,6 +181,8 @@ SECTIONS
+ 	/* everything from this point to __init_begin will be marked RO NX */
+ 	RO_DATA(PAGE_SIZE)
+ 
++	HYPERVISOR_DATA_SECTIONS
++
+ 	idmap_pg_dir = .;
+ 	. += IDMAP_DIR_SIZE;
+ 	idmap_pg_end = .;
+@@ -260,8 +262,6 @@ SECTIONS
+ 	_sdata = .;
+ 	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
+ 
+-	HYPERVISOR_DATA_SECTIONS
+-
+ 	/*
+ 	 * Data written with the MMU off but read with the MMU on requires
+ 	 * cache lines to be invalidated, discarding up to a Cache Writeback
+diff --git a/arch/m68k/Kconfig.bus b/arch/m68k/Kconfig.bus
+index f1be832e2b746..d1e93a39cd3bc 100644
+--- a/arch/m68k/Kconfig.bus
++++ b/arch/m68k/Kconfig.bus
+@@ -63,7 +63,7 @@ source "drivers/zorro/Kconfig"
+ 
+ endif
+ 
+-if !MMU
++if COLDFIRE
+ 
+ config ISA_DMA_API
+ 	def_bool !M5272
+diff --git a/arch/mips/mti-malta/malta-dtshim.c b/arch/mips/mti-malta/malta-dtshim.c
+index 0ddf03df62688..f451268f6c384 100644
+--- a/arch/mips/mti-malta/malta-dtshim.c
++++ b/arch/mips/mti-malta/malta-dtshim.c
+@@ -22,7 +22,7 @@
+ #define  ROCIT_CONFIG_GEN1_MEMMAP_SHIFT	8
+ #define  ROCIT_CONFIG_GEN1_MEMMAP_MASK	(0xf << 8)
+ 
+-static unsigned char fdt_buf[16 << 10] __initdata;
++static unsigned char fdt_buf[16 << 10] __initdata __aligned(8);
+ 
+ /* determined physical memory size, not overridden by command line args	 */
+ extern unsigned long physical_memsize;
+diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S
+index bc657e55c15f8..98e4f97db5159 100644
+--- a/arch/openrisc/kernel/entry.S
++++ b/arch/openrisc/kernel/entry.S
+@@ -547,6 +547,7 @@ EXCEPTION_ENTRY(_external_irq_handler)
+ 	l.bnf	1f			// ext irq enabled, all ok.
+ 	l.nop
+ 
++#ifdef CONFIG_PRINTK
+ 	l.addi  r1,r1,-0x8
+ 	l.movhi r3,hi(42f)
+ 	l.ori	r3,r3,lo(42f)
+@@ -560,6 +561,7 @@ EXCEPTION_ENTRY(_external_irq_handler)
+ 		.string "\n\rESR interrupt bug: in _external_irq_handler (ESR %x)\n\r"
+ 		.align 4
+ 	.previous
++#endif
+ 
+ 	l.ori	r4,r4,SPR_SR_IEE	// fix the bug
+ //	l.sw	PT_SR(r1),r4
+diff --git a/arch/parisc/Makefile b/arch/parisc/Makefile
+index aed8ea29268bb..2d019aa73b8f0 100644
+--- a/arch/parisc/Makefile
++++ b/arch/parisc/Makefile
+@@ -25,18 +25,18 @@ CHECKFLAGS	+= -D__hppa__=1
+ ifdef CONFIG_64BIT
+ UTS_MACHINE	:= parisc64
+ CHECKFLAGS	+= -D__LP64__=1
+-CC_ARCHES	= hppa64
+ LD_BFD		:= elf64-hppa-linux
+ else # 32-bit
+-CC_ARCHES	= hppa hppa2.0 hppa1.1
+ LD_BFD		:= elf32-hppa-linux
+ endif
+ 
+ # select defconfig based on actual architecture
+-ifeq ($(shell uname -m),parisc64)
++ifeq ($(ARCH),parisc64)
+ 	KBUILD_DEFCONFIG := generic-64bit_defconfig
++	CC_ARCHES := hppa64
+ else
+ 	KBUILD_DEFCONFIG := generic-32bit_defconfig
++	CC_ARCHES := hppa hppa2.0 hppa1.1
+ endif
+ 
+ export LD_BFD
+diff --git a/arch/parisc/kernel/signal.c b/arch/parisc/kernel/signal.c
+index fb1e94a3982bc..db1a47cf424dd 100644
+--- a/arch/parisc/kernel/signal.c
++++ b/arch/parisc/kernel/signal.c
+@@ -237,6 +237,12 @@ setup_rt_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs,
+ #endif
+ 	
+ 	usp = (regs->gr[30] & ~(0x01UL));
++#ifdef CONFIG_64BIT
++	if (is_compat_task()) {
++		/* The gcc alloca implementation leaves garbage in the upper 32 bits of sp */
++		usp = (compat_uint_t)usp;
++	}
++#endif
+ 	/*FIXME: frame_size parameter is unused, remove it. */
+ 	frame = get_sigframe(&ksig->ka, usp, sizeof(*frame));
+ 
+diff --git a/arch/powerpc/configs/mpc885_ads_defconfig b/arch/powerpc/configs/mpc885_ads_defconfig
+index 949ff9ccda5e7..dbf3ff8adc654 100644
+--- a/arch/powerpc/configs/mpc885_ads_defconfig
++++ b/arch/powerpc/configs/mpc885_ads_defconfig
+@@ -34,6 +34,7 @@ CONFIG_MTD_CFI_GEOMETRY=y
+ # CONFIG_MTD_CFI_I2 is not set
+ CONFIG_MTD_CFI_I4=y
+ CONFIG_MTD_CFI_AMDSTD=y
++CONFIG_MTD_PHYSMAP=y
+ CONFIG_MTD_PHYSMAP_OF=y
+ # CONFIG_BLK_DEV is not set
+ CONFIG_NETDEVICES=y
+diff --git a/arch/powerpc/include/asm/pmc.h b/arch/powerpc/include/asm/pmc.h
+index c6bbe9778d3cd..3c09109e708ef 100644
+--- a/arch/powerpc/include/asm/pmc.h
++++ b/arch/powerpc/include/asm/pmc.h
+@@ -34,6 +34,13 @@ static inline void ppc_set_pmu_inuse(int inuse)
+ #endif
+ }
+ 
++#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
++static inline int ppc_get_pmu_inuse(void)
++{
++	return get_paca()->pmcregs_in_use;
++}
++#endif
++
+ extern void power4_enable_pmcs(void);
+ 
+ #else /* CONFIG_PPC64 */
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index df6b468976d53..fe505d8ed55bc 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -1085,7 +1085,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
+ 	}
+ 
+ 	if (cpu_to_chip_id(boot_cpuid) != -1) {
+-		int idx = num_possible_cpus() / threads_per_core;
++		int idx = DIV_ROUND_UP(num_possible_cpus(), threads_per_core);
+ 
+ 		/*
+ 		 * All threads of a core will all belong to the same core,
+@@ -1503,6 +1503,7 @@ static void add_cpu_to_masks(int cpu)
+ 	 * add it to it's own thread sibling mask.
+ 	 */
+ 	cpumask_set_cpu(cpu, cpu_sibling_mask(cpu));
++	cpumask_set_cpu(cpu, cpu_core_mask(cpu));
+ 
+ 	for (i = first_thread; i < first_thread + threads_per_core; i++)
+ 		if (cpu_online(i))
+@@ -1520,11 +1521,6 @@ static void add_cpu_to_masks(int cpu)
+ 	if (chip_id_lookup_table && ret)
+ 		chip_id = cpu_to_chip_id(cpu);
+ 
+-	if (chip_id == -1) {
+-		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
+-		goto out;
+-	}
+-
+ 	if (shared_caches)
+ 		submask_fn = cpu_l2_cache_mask;
+ 
+@@ -1534,6 +1530,10 @@ static void add_cpu_to_masks(int cpu)
+ 	/* Skip all CPUs already part of current CPU core mask */
+ 	cpumask_andnot(mask, cpu_online_mask, cpu_core_mask(cpu));
+ 
++	/* If chip_id is -1; limit the cpu_core_mask to within DIE*/
++	if (chip_id == -1)
++		cpumask_and(mask, mask, cpu_cpu_mask(cpu));
++
+ 	for_each_cpu(i, mask) {
+ 		if (chip_id == cpu_to_chip_id(i)) {
+ 			or_cpumasks_related(cpu, i, submask_fn, cpu_core_mask);
+@@ -1543,7 +1543,6 @@ static void add_cpu_to_masks(int cpu)
+ 		}
+ 	}
+ 
+-out:
+ 	free_cpumask_var(mask);
+ }
+ 
+diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
+index ea0d9c36e177c..b64b734a5030f 100644
+--- a/arch/powerpc/kernel/stacktrace.c
++++ b/arch/powerpc/kernel/stacktrace.c
+@@ -8,6 +8,7 @@
+  * Copyright 2018 Nick Piggin, Michael Ellerman, IBM Corp.
+  */
+ 
++#include <linux/delay.h>
+ #include <linux/export.h>
+ #include <linux/kallsyms.h>
+ #include <linux/module.h>
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index d909c069363e0..e7924664a9445 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -64,10 +64,12 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
+ 	}
+ 	isync();
+ 
++	pagefault_disable();
+ 	if (is_load)
+-		ret = copy_from_user_nofault(to, (const void __user *)from, n);
++		ret = __copy_from_user_inatomic(to, (const void __user *)from, n);
+ 	else
+-		ret = copy_to_user_nofault((void __user *)to, from, n);
++		ret = __copy_to_user_inatomic((void __user *)to, from, n);
++	pagefault_enable();
+ 
+ 	/* switch the pid first to avoid running host with unallocated pid */
+ 	if (quadrant == 1 && pid != old_pid)
+diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
+index 083a4e037718d..e5ba96c41f3fc 100644
+--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
++++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
+@@ -173,10 +173,13 @@ static void kvmppc_rm_tce_put(struct kvmppc_spapr_tce_table *stt,
+ 	idx -= stt->offset;
+ 	page = stt->pages[idx / TCES_PER_PAGE];
+ 	/*
+-	 * page must not be NULL in real mode,
+-	 * kvmppc_rm_ioba_validate() must have taken care of this.
++	 * kvmppc_rm_ioba_validate() allows pages not be allocated if TCE is
++	 * being cleared, otherwise it returns H_TOO_HARD and we skip this.
+ 	 */
+-	WARN_ON_ONCE_RM(!page);
++	if (!page) {
++		WARN_ON_ONCE_RM(tce != 0);
++		return;
++	}
+ 	tbl = kvmppc_page_address(page);
+ 
+ 	tbl[idx % TCES_PER_PAGE] = tce;
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 395f98158e81e..890fbf4baf15e 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -59,6 +59,7 @@
+ #include <asm/kvm_book3s.h>
+ #include <asm/mmu_context.h>
+ #include <asm/lppaca.h>
++#include <asm/pmc.h>
+ #include <asm/processor.h>
+ #include <asm/cputhreads.h>
+ #include <asm/page.h>
+@@ -3687,6 +3688,18 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	    cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
+ 		kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true);
+ 
++#ifdef CONFIG_PPC_PSERIES
++	if (kvmhv_on_pseries()) {
++		barrier();
++		if (vcpu->arch.vpa.pinned_addr) {
++			struct lppaca *lp = vcpu->arch.vpa.pinned_addr;
++			get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use;
++		} else {
++			get_lppaca()->pmcregs_in_use = 1;
++		}
++		barrier();
++	}
++#endif
+ 	kvmhv_load_guest_pmu(vcpu);
+ 
+ 	msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX);
+@@ -3823,6 +3836,13 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	save_pmu |= nesting_enabled(vcpu->kvm);
+ 
+ 	kvmhv_save_guest_pmu(vcpu, save_pmu);
++#ifdef CONFIG_PPC_PSERIES
++	if (kvmhv_on_pseries()) {
++		barrier();
++		get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse();
++		barrier();
++	}
++#endif
+ 
+ 	vc->entry_exit_map = 0x101;
+ 	vc->in_guest = 0;
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index f2bf98bdcea28..094a1076fd1fe 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -893,7 +893,7 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
+ static void __init find_possible_nodes(void)
+ {
+ 	struct device_node *rtas;
+-	const __be32 *domains;
++	const __be32 *domains = NULL;
+ 	int prop_length, max_nodes;
+ 	u32 i;
+ 
+@@ -909,9 +909,14 @@ static void __init find_possible_nodes(void)
+ 	 * it doesn't exist, then fallback on ibm,max-associativity-domains.
+ 	 * Current denotes what the platform can support compared to max
+ 	 * which denotes what the Hypervisor can support.
++	 *
++	 * If the LPAR is migratable, new nodes might be activated after a LPM,
++	 * so we should consider the max number in that case.
+ 	 */
+-	domains = of_get_property(rtas, "ibm,current-associativity-domains",
+-					&prop_length);
++	if (!of_get_property(of_root, "ibm,migratable-partition", NULL))
++		domains = of_get_property(rtas,
++					  "ibm,current-associativity-domains",
++					  &prop_length);
+ 	if (!domains) {
+ 		domains = of_get_property(rtas, "ibm,max-associativity-domains",
+ 					&prop_length);
+@@ -920,6 +925,8 @@ static void __init find_possible_nodes(void)
+ 	}
+ 
+ 	max_nodes = of_read_number(&domains[min_common_depth], 1);
++	pr_info("Partition configured for %d NUMA nodes.\n", max_nodes);
++
+ 	for (i = 0; i < max_nodes; i++) {
+ 		if (!node_possible(i))
+ 			node_set(i, node_possible_map);
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index 51622411a7ccd..35658b963d5ab 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -2251,18 +2251,10 @@ unsigned long perf_misc_flags(struct pt_regs *regs)
+  */
+ unsigned long perf_instruction_pointer(struct pt_regs *regs)
+ {
+-	bool use_siar = regs_use_siar(regs);
+ 	unsigned long siar = mfspr(SPRN_SIAR);
+ 
+-	if (ppmu && (ppmu->flags & PPMU_P10_DD1)) {
+-		if (siar)
+-			return siar;
+-		else
+-			return regs->nip;
+-	} else if (use_siar && siar_valid(regs))
+-		return mfspr(SPRN_SIAR) + perf_ip_adjust(regs);
+-	else if (use_siar)
+-		return 0;		// no valid instruction pointer
++	if (regs_use_siar(regs) && siar_valid(regs) && siar)
++		return siar + perf_ip_adjust(regs);
+ 	else
+ 		return regs->nip;
+ }
+diff --git a/arch/powerpc/perf/hv-gpci.c b/arch/powerpc/perf/hv-gpci.c
+index d48413e28c39e..c756228a081fb 100644
+--- a/arch/powerpc/perf/hv-gpci.c
++++ b/arch/powerpc/perf/hv-gpci.c
+@@ -175,7 +175,7 @@ static unsigned long single_gpci_request(u32 req, u32 starting_index,
+ 	 */
+ 	count = 0;
+ 	for (i = offset; i < offset + length; i++)
+-		count |= arg->bytes[i] << (i - offset);
++		count |= (u64)(arg->bytes[i]) << ((length - 1 - (i - offset)) * 8);
+ 
+ 	*value = count;
+ out:
+diff --git a/arch/s390/include/asm/setup.h b/arch/s390/include/asm/setup.h
+index 3e388fa208d4f..519d517fedf4e 100644
+--- a/arch/s390/include/asm/setup.h
++++ b/arch/s390/include/asm/setup.h
+@@ -36,6 +36,7 @@
+ #define MACHINE_FLAG_NX		BIT(15)
+ #define MACHINE_FLAG_GS		BIT(16)
+ #define MACHINE_FLAG_SCC	BIT(17)
++#define MACHINE_FLAG_PCI_MIO	BIT(18)
+ 
+ #define LPP_MAGIC		BIT(31)
+ #define LPP_PID_MASK		_AC(0xffffffff, UL)
+@@ -109,6 +110,7 @@ extern unsigned long mio_wb_bit_mask;
+ #define MACHINE_HAS_NX		(S390_lowcore.machine_flags & MACHINE_FLAG_NX)
+ #define MACHINE_HAS_GS		(S390_lowcore.machine_flags & MACHINE_FLAG_GS)
+ #define MACHINE_HAS_SCC		(S390_lowcore.machine_flags & MACHINE_FLAG_SCC)
++#define MACHINE_HAS_PCI_MIO	(S390_lowcore.machine_flags & MACHINE_FLAG_PCI_MIO)
+ 
+ /*
+  * Console mode. Override with conmode=
+diff --git a/arch/s390/include/asm/smp.h b/arch/s390/include/asm/smp.h
+index e317fd4866c15..f16f4d054ae25 100644
+--- a/arch/s390/include/asm/smp.h
++++ b/arch/s390/include/asm/smp.h
+@@ -18,6 +18,7 @@ extern struct mutex smp_cpu_state_mutex;
+ extern unsigned int smp_cpu_mt_shift;
+ extern unsigned int smp_cpu_mtid;
+ extern __vector128 __initdata boot_cpu_vector_save_area[__NUM_VXRS];
++extern cpumask_t cpu_setup_mask;
+ 
+ extern int __cpu_up(unsigned int cpu, struct task_struct *tidle);
+ 
+diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
+index a361d2e70025c..661585587cbee 100644
+--- a/arch/s390/kernel/early.c
++++ b/arch/s390/kernel/early.c
+@@ -236,6 +236,10 @@ static __init void detect_machine_facilities(void)
+ 		clock_comparator_max = -1ULL >> 1;
+ 		__ctl_set_bit(0, 53);
+ 	}
++	if (IS_ENABLED(CONFIG_PCI) && test_facility(153)) {
++		S390_lowcore.machine_flags |= MACHINE_FLAG_PCI_MIO;
++		/* the control bit is set during PCI initialization */
++	}
+ }
+ 
+ static inline void save_vector_registers(void)
+diff --git a/arch/s390/kernel/jump_label.c b/arch/s390/kernel/jump_label.c
+index ab584e8e35275..9156653b56f69 100644
+--- a/arch/s390/kernel/jump_label.c
++++ b/arch/s390/kernel/jump_label.c
+@@ -36,7 +36,7 @@ static void jump_label_bug(struct jump_entry *entry, struct insn *expected,
+ 	unsigned char *ipe = (unsigned char *)expected;
+ 	unsigned char *ipn = (unsigned char *)new;
+ 
+-	pr_emerg("Jump label code mismatch at %pS [%p]\n", ipc, ipc);
++	pr_emerg("Jump label code mismatch at %pS [%px]\n", ipc, ipc);
+ 	pr_emerg("Found:    %6ph\n", ipc);
+ 	pr_emerg("Expected: %6ph\n", ipe);
+ 	pr_emerg("New:      %6ph\n", ipn);
+diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
+index 1fb483e06a647..926ba86f645e3 100644
+--- a/arch/s390/kernel/smp.c
++++ b/arch/s390/kernel/smp.c
+@@ -96,6 +96,7 @@ __vector128 __initdata boot_cpu_vector_save_area[__NUM_VXRS];
+ #endif
+ 
+ static unsigned int smp_max_threads __initdata = -1U;
++cpumask_t cpu_setup_mask;
+ 
+ static int __init early_nosmt(char *s)
+ {
+@@ -883,13 +884,14 @@ static void smp_init_secondary(void)
+ 	vtime_init();
+ 	vdso_getcpu_init();
+ 	pfault_init();
++	cpumask_set_cpu(cpu, &cpu_setup_mask);
++	update_cpu_masks();
+ 	notify_cpu_starting(cpu);
+ 	if (topology_cpu_dedicated(cpu))
+ 		set_cpu_flag(CIF_DEDICATED_CPU);
+ 	else
+ 		clear_cpu_flag(CIF_DEDICATED_CPU);
+ 	set_cpu_online(cpu, true);
+-	update_cpu_masks();
+ 	inc_irq_stat(CPU_RST);
+ 	local_irq_enable();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+@@ -945,10 +947,13 @@ early_param("possible_cpus", _setup_possible_cpus);
+ int __cpu_disable(void)
+ {
+ 	unsigned long cregs[16];
++	int cpu;
+ 
+ 	/* Handle possible pending IPIs */
+ 	smp_handle_ext_call();
+-	set_cpu_online(smp_processor_id(), false);
++	cpu = smp_processor_id();
++	set_cpu_online(cpu, false);
++	cpumask_clear_cpu(cpu, &cpu_setup_mask);
+ 	update_cpu_masks();
+ 	/* Disable pseudo page faults on this cpu. */
+ 	pfault_fini();
+diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c
+index 26aa2614ee352..eb4047c9da9a3 100644
+--- a/arch/s390/kernel/topology.c
++++ b/arch/s390/kernel/topology.c
+@@ -67,7 +67,7 @@ static void cpu_group_map(cpumask_t *dst, struct mask_info *info, unsigned int c
+ 	static cpumask_t mask;
+ 
+ 	cpumask_clear(&mask);
+-	if (!cpu_online(cpu))
++	if (!cpumask_test_cpu(cpu, &cpu_setup_mask))
+ 		goto out;
+ 	cpumask_set_cpu(cpu, &mask);
+ 	switch (topology_mode) {
+@@ -88,7 +88,7 @@ static void cpu_group_map(cpumask_t *dst, struct mask_info *info, unsigned int c
+ 	case TOPOLOGY_MODE_SINGLE:
+ 		break;
+ 	}
+-	cpumask_and(&mask, &mask, cpu_online_mask);
++	cpumask_and(&mask, &mask, &cpu_setup_mask);
+ out:
+ 	cpumask_copy(dst, &mask);
+ }
+@@ -99,16 +99,16 @@ static void cpu_thread_map(cpumask_t *dst, unsigned int cpu)
+ 	int i;
+ 
+ 	cpumask_clear(&mask);
+-	if (!cpu_online(cpu))
++	if (!cpumask_test_cpu(cpu, &cpu_setup_mask))
+ 		goto out;
+ 	cpumask_set_cpu(cpu, &mask);
+ 	if (topology_mode != TOPOLOGY_MODE_HW)
+ 		goto out;
+ 	cpu -= cpu % (smp_cpu_mtid + 1);
+-	for (i = 0; i <= smp_cpu_mtid; i++)
+-		if (cpu_present(cpu + i))
++	for (i = 0; i <= smp_cpu_mtid; i++) {
++		if (cpumask_test_cpu(cpu + i, &cpu_setup_mask))
+ 			cpumask_set_cpu(cpu + i, &mask);
+-	cpumask_and(&mask, &mask, cpu_online_mask);
++	}
+ out:
+ 	cpumask_copy(dst, &mask);
+ }
+@@ -569,6 +569,7 @@ void __init topology_init_early(void)
+ 	alloc_masks(info, &book_info, 2);
+ 	alloc_masks(info, &drawer_info, 3);
+ out:
++	cpumask_set_cpu(0, &cpu_setup_mask);
+ 	__arch_update_cpu_topology();
+ 	__arch_update_dedicated_flag(NULL);
+ }
+diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
+index 8ac710de1ab1b..07bbee9b7320d 100644
+--- a/arch/s390/mm/init.c
++++ b/arch/s390/mm/init.c
+@@ -186,9 +186,9 @@ static void pv_init(void)
+ 		return;
+ 
+ 	/* make sure bounce buffers are shared */
++	swiotlb_force = SWIOTLB_FORCE;
+ 	swiotlb_init(1);
+ 	swiotlb_update_mem_attributes();
+-	swiotlb_force = SWIOTLB_FORCE;
+ }
+ 
+ void __init mem_init(void)
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 77cd965cffefa..34839bad33e4d 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -893,7 +893,6 @@ static void zpci_mem_exit(void)
+ }
+ 
+ static unsigned int s390_pci_probe __initdata = 1;
+-static unsigned int s390_pci_no_mio __initdata;
+ unsigned int s390_pci_force_floating __initdata;
+ static unsigned int s390_pci_initialized;
+ 
+@@ -904,7 +903,7 @@ char * __init pcibios_setup(char *str)
+ 		return NULL;
+ 	}
+ 	if (!strcmp(str, "nomio")) {
+-		s390_pci_no_mio = 1;
++		S390_lowcore.machine_flags &= ~MACHINE_FLAG_PCI_MIO;
+ 		return NULL;
+ 	}
+ 	if (!strcmp(str, "force_floating")) {
+@@ -935,7 +934,7 @@ static int __init pci_base_init(void)
+ 		return 0;
+ 	}
+ 
+-	if (test_facility(153) && !s390_pci_no_mio) {
++	if (MACHINE_HAS_PCI_MIO) {
+ 		static_branch_enable(&have_mio);
+ 		ctl_set_bit(2, 5);
+ 	}
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index 4fa0a42808951..ea87d9ed77e97 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -370,8 +370,6 @@ static void __init ms_hyperv_init_platform(void)
+ 	if (ms_hyperv.features & HV_ACCESS_TSC_INVARIANT) {
+ 		wrmsrl(HV_X64_MSR_TSC_INVARIANT_CONTROL, 0x1);
+ 		setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
+-	} else {
+-		mark_tsc_unstable("running on Hyper-V");
+ 	}
+ 
+ 	/*
+@@ -432,6 +430,13 @@ static void __init ms_hyperv_init_platform(void)
+ 	/* Register Hyper-V specific clocksource */
+ 	hv_init_clocksource();
+ #endif
++	/*
++	 * TSC should be marked as unstable only after Hyper-V
++	 * clocksource has been initialized. This ensures that the
++	 * stability of the sched_clock is not altered.
++	 */
++	if (!(ms_hyperv.features & HV_ACCESS_TSC_INVARIANT))
++		mark_tsc_unstable("running on Hyper-V");
+ }
+ 
+ static bool __init ms_hyperv_x2apic_available(void)
+diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
+index ac06ca32e9ef7..5e6e236977c75 100644
+--- a/arch/x86/xen/p2m.c
++++ b/arch/x86/xen/p2m.c
+@@ -618,8 +618,8 @@ int xen_alloc_p2m_entry(unsigned long pfn)
+ 	}
+ 
+ 	/* Expanded the p2m? */
+-	if (pfn > xen_p2m_last_pfn) {
+-		xen_p2m_last_pfn = pfn;
++	if (pfn >= xen_p2m_last_pfn) {
++		xen_p2m_last_pfn = ALIGN(pfn + 1, P2M_PER_PAGE);
+ 		HYPERVISOR_shared_info->arch.max_pfn = xen_p2m_last_pfn;
+ 	}
+ 
+diff --git a/arch/xtensa/platforms/iss/console.c b/arch/xtensa/platforms/iss/console.c
+index a3dda25a4e45e..eed02cf3d6b03 100644
+--- a/arch/xtensa/platforms/iss/console.c
++++ b/arch/xtensa/platforms/iss/console.c
+@@ -143,9 +143,13 @@ static const struct tty_operations serial_ops = {
+ 
+ static int __init rs_init(void)
+ {
+-	tty_port_init(&serial_port);
++	int ret;
+ 
+ 	serial_driver = alloc_tty_driver(SERIAL_MAX_NUM_LINES);
++	if (!serial_driver)
++		return -ENOMEM;
++
++	tty_port_init(&serial_port);
+ 
+ 	/* Initialize the tty_driver structure */
+ 
+@@ -163,8 +167,15 @@ static int __init rs_init(void)
+ 	tty_set_operations(serial_driver, &serial_ops);
+ 	tty_port_link_device(&serial_port, serial_driver, 0);
+ 
+-	if (tty_register_driver(serial_driver))
+-		panic("Couldn't register serial driver\n");
++	ret = tty_register_driver(serial_driver);
++	if (ret) {
++		pr_err("Couldn't register serial driver\n");
++		tty_driver_kref_put(serial_driver);
++		tty_port_destroy(&serial_port);
++
++		return ret;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 4df33cc08eee0..6dfda57349cc0 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -5258,7 +5258,7 @@ bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
+ 	if (bfqq->new_ioprio >= IOPRIO_BE_NR) {
+ 		pr_crit("bfq_set_next_ioprio_data: new_ioprio %d\n",
+ 			bfqq->new_ioprio);
+-		bfqq->new_ioprio = IOPRIO_BE_NR;
++		bfqq->new_ioprio = IOPRIO_BE_NR - 1;
+ 	}
+ 
+ 	bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->new_ioprio);
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 250cb76ee6153..457eceabed2ec 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -288,9 +288,6 @@ int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
+ 	if (!blk_queue_is_zoned(q))
+ 		return -ENOTTY;
+ 
+-	if (!capable(CAP_SYS_ADMIN))
+-		return -EACCES;
+-
+ 	if (copy_from_user(&rep, argp, sizeof(struct blk_zone_report)))
+ 		return -EFAULT;
+ 
+@@ -349,9 +346,6 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
+ 	if (!blk_queue_is_zoned(q))
+ 		return -ENOTTY;
+ 
+-	if (!capable(CAP_SYS_ADMIN))
+-		return -EACCES;
+-
+ 	if (!(mode & FMODE_WRITE))
+ 		return -EBADF;
+ 
+diff --git a/block/bsg.c b/block/bsg.c
+index bd10922d5cbb4..4d0ad5846ccfa 100644
+--- a/block/bsg.c
++++ b/block/bsg.c
+@@ -371,10 +371,13 @@ static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	case SG_GET_RESERVED_SIZE:
+ 	case SG_SET_RESERVED_SIZE:
+ 	case SG_EMULATED_HOST:
+-	case SCSI_IOCTL_SEND_COMMAND:
+ 		return scsi_cmd_ioctl(bd->queue, NULL, file->f_mode, cmd, uarg);
+ 	case SG_IO:
+ 		return bsg_sg_io(bd->queue, file->f_mode, uarg);
++	case SCSI_IOCTL_SEND_COMMAND:
++		pr_warn_ratelimited("%s: calling unsupported SCSI_IOCTL_SEND_COMMAND\n",
++				current->comm);
++		return -EINVAL;
+ 	default:
+ 		return -ENOTTY;
+ 	}
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 44f434acfce08..0e6e73b8023fc 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -3950,6 +3950,10 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "Samsung SSD 850*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
++	{ "Samsung SSD 860*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
++						ATA_HORKAGE_ZERO_AFTER_TRIM, },
++	{ "Samsung SSD 870*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
++						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "FCCT*M500*",			NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 
+diff --git a/drivers/ata/sata_dwc_460ex.c b/drivers/ata/sata_dwc_460ex.c
+index f0ef844428bb4..338c2e50f7591 100644
+--- a/drivers/ata/sata_dwc_460ex.c
++++ b/drivers/ata/sata_dwc_460ex.c
+@@ -1259,24 +1259,20 @@ static int sata_dwc_probe(struct platform_device *ofdev)
+ 	irq = irq_of_parse_and_map(np, 0);
+ 	if (irq == NO_IRQ) {
+ 		dev_err(&ofdev->dev, "no SATA DMA irq\n");
+-		err = -ENODEV;
+-		goto error_out;
++		return -ENODEV;
+ 	}
+ 
+ #ifdef CONFIG_SATA_DWC_OLD_DMA
+ 	if (!of_find_property(np, "dmas", NULL)) {
+ 		err = sata_dwc_dma_init_old(ofdev, hsdev);
+ 		if (err)
+-			goto error_out;
++			return err;
+ 	}
+ #endif
+ 
+ 	hsdev->phy = devm_phy_optional_get(hsdev->dev, "sata-phy");
+-	if (IS_ERR(hsdev->phy)) {
+-		err = PTR_ERR(hsdev->phy);
+-		hsdev->phy = NULL;
+-		goto error_out;
+-	}
++	if (IS_ERR(hsdev->phy))
++		return PTR_ERR(hsdev->phy);
+ 
+ 	err = phy_init(hsdev->phy);
+ 	if (err)
+diff --git a/drivers/bus/fsl-mc/fsl-mc-bus.c b/drivers/bus/fsl-mc/fsl-mc-bus.c
+index 380ad1fdb7456..57f78d1cc9d84 100644
+--- a/drivers/bus/fsl-mc/fsl-mc-bus.c
++++ b/drivers/bus/fsl-mc/fsl-mc-bus.c
+@@ -67,6 +67,8 @@ struct fsl_mc_addr_translation_range {
+ #define MC_FAPR_PL	BIT(18)
+ #define MC_FAPR_BMT	BIT(17)
+ 
++static phys_addr_t mc_portal_base_phys_addr;
++
+ /**
+  * fsl_mc_bus_match - device to driver matching callback
+  * @dev: the fsl-mc device to match against
+@@ -219,7 +221,7 @@ static int scan_fsl_mc_bus(struct device *dev, void *data)
+ 	root_mc_dev = to_fsl_mc_device(dev);
+ 	root_mc_bus = to_fsl_mc_bus(root_mc_dev);
+ 	mutex_lock(&root_mc_bus->scan_mutex);
+-	dprc_scan_objects(root_mc_dev, NULL);
++	dprc_scan_objects(root_mc_dev, false);
+ 	mutex_unlock(&root_mc_bus->scan_mutex);
+ 
+ exit:
+@@ -702,14 +704,30 @@ static int fsl_mc_device_get_mmio_regions(struct fsl_mc_device *mc_dev,
+ 		 * If base address is in the region_desc use it otherwise
+ 		 * revert to old mechanism
+ 		 */
+-		if (region_desc.base_address)
++		if (region_desc.base_address) {
+ 			regions[i].start = region_desc.base_address +
+ 						region_desc.base_offset;
+-		else
++		} else {
+ 			error = translate_mc_addr(mc_dev, mc_region_type,
+ 					  region_desc.base_offset,
+ 					  &regions[i].start);
+ 
++			/*
++			 * Some versions of the MC firmware wrongly report
++			 * 0 for register base address of the DPMCP associated
++			 * with child DPRC objects thus rendering them unusable.
++			 * This is particularly troublesome in ACPI boot
++			 * scenarios where the legacy way of extracting this
++			 * base address from the device tree does not apply.
++			 * Given that DPMCPs share the same base address,
++			 * workaround this by using the base address extracted
++			 * from the root DPRC container.
++			 */
++			if (is_fsl_mc_bus_dprc(mc_dev) &&
++			    regions[i].start == region_desc.base_offset)
++				regions[i].start += mc_portal_base_phys_addr;
++		}
++
+ 		if (error < 0) {
+ 			dev_err(parent_dev,
+ 				"Invalid MC offset: %#x (for %s.%d\'s region %d)\n",
+@@ -1125,6 +1143,8 @@ static int fsl_mc_bus_probe(struct platform_device *pdev)
+ 	plat_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	mc_portal_phys_addr = plat_res->start;
+ 	mc_portal_size = resource_size(plat_res);
++	mc_portal_base_phys_addr = mc_portal_phys_addr & ~0x3ffffff;
++
+ 	error = fsl_create_mc_io(&pdev->dev, mc_portal_phys_addr,
+ 				 mc_portal_size, NULL,
+ 				 FSL_MC_IO_ATOMIC_CONTEXT_PORTAL, &mc_io);
+diff --git a/drivers/clk/at91/clk-generated.c b/drivers/clk/at91/clk-generated.c
+index b4fc8d71daf20..b656d25a97678 100644
+--- a/drivers/clk/at91/clk-generated.c
++++ b/drivers/clk/at91/clk-generated.c
+@@ -128,6 +128,12 @@ static int clk_generated_determine_rate(struct clk_hw *hw,
+ 	int i;
+ 	u32 div;
+ 
++	/* do not look for a rate that is outside of our range */
++	if (gck->range.max && req->rate > gck->range.max)
++		req->rate = gck->range.max;
++	if (gck->range.min && req->rate < gck->range.min)
++		req->rate = gck->range.min;
++
+ 	for (i = 0; i < clk_hw_get_num_parents(hw); i++) {
+ 		if (gck->chg_pid == i)
+ 			continue;
+diff --git a/drivers/clk/imx/clk-composite-8m.c b/drivers/clk/imx/clk-composite-8m.c
+index 2c309e3dc8e34..04e728538cefe 100644
+--- a/drivers/clk/imx/clk-composite-8m.c
++++ b/drivers/clk/imx/clk-composite-8m.c
+@@ -216,7 +216,8 @@ struct clk_hw *imx8m_clk_hw_composite_flags(const char *name,
+ 		div->width = PCG_PREDIV_WIDTH;
+ 		divider_ops = &imx8m_clk_composite_divider_ops;
+ 		mux_ops = &clk_mux_ops;
+-		flags |= CLK_SET_PARENT_GATE;
++		if (!(composite_flags & IMX_COMPOSITE_FW_MANAGED))
++			flags |= CLK_SET_PARENT_GATE;
+ 	}
+ 
+ 	div->lock = &imx_ccm_lock;
+diff --git a/drivers/clk/imx/clk-imx8mm.c b/drivers/clk/imx/clk-imx8mm.c
+index f1919fafb1247..e92621fa8b9cd 100644
+--- a/drivers/clk/imx/clk-imx8mm.c
++++ b/drivers/clk/imx/clk-imx8mm.c
+@@ -407,10 +407,10 @@ static int imx8mm_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MM_SYS_PLL2_500M] = imx_clk_hw_fixed_factor("sys_pll2_500m", "sys_pll2_500m_cg", 1, 2);
+ 	hws[IMX8MM_SYS_PLL2_1000M] = imx_clk_hw_fixed_factor("sys_pll2_1000m", "sys_pll2_out", 1, 1);
+ 
+-	hws[IMX8MM_CLK_CLKOUT1_SEL] = imx_clk_hw_mux("clkout1_sel", base + 0x128, 4, 4, clkout_sels, ARRAY_SIZE(clkout_sels));
++	hws[IMX8MM_CLK_CLKOUT1_SEL] = imx_clk_hw_mux2("clkout1_sel", base + 0x128, 4, 4, clkout_sels, ARRAY_SIZE(clkout_sels));
+ 	hws[IMX8MM_CLK_CLKOUT1_DIV] = imx_clk_hw_divider("clkout1_div", "clkout1_sel", base + 0x128, 0, 4);
+ 	hws[IMX8MM_CLK_CLKOUT1] = imx_clk_hw_gate("clkout1", "clkout1_div", base + 0x128, 8);
+-	hws[IMX8MM_CLK_CLKOUT2_SEL] = imx_clk_hw_mux("clkout2_sel", base + 0x128, 20, 4, clkout_sels, ARRAY_SIZE(clkout_sels));
++	hws[IMX8MM_CLK_CLKOUT2_SEL] = imx_clk_hw_mux2("clkout2_sel", base + 0x128, 20, 4, clkout_sels, ARRAY_SIZE(clkout_sels));
+ 	hws[IMX8MM_CLK_CLKOUT2_DIV] = imx_clk_hw_divider("clkout2_div", "clkout2_sel", base + 0x128, 16, 4);
+ 	hws[IMX8MM_CLK_CLKOUT2] = imx_clk_hw_gate("clkout2", "clkout2_div", base + 0x128, 24);
+ 
+@@ -470,10 +470,11 @@ static int imx8mm_clocks_probe(struct platform_device *pdev)
+ 
+ 	/*
+ 	 * DRAM clocks are manipulated from TF-A outside clock framework.
+-	 * Mark with GET_RATE_NOCACHE to always read div value from hardware
++	 * The fw_managed helper sets GET_RATE_NOCACHE and clears SET_PARENT_GATE
++	 * as the div value should always be read from hardware
+ 	 */
+-	hws[IMX8MM_CLK_DRAM_ALT] = __imx8m_clk_hw_composite("dram_alt", imx8mm_dram_alt_sels, base + 0xa000, CLK_GET_RATE_NOCACHE);
+-	hws[IMX8MM_CLK_DRAM_APB] = __imx8m_clk_hw_composite("dram_apb", imx8mm_dram_apb_sels, base + 0xa080, CLK_IS_CRITICAL | CLK_GET_RATE_NOCACHE);
++	hws[IMX8MM_CLK_DRAM_ALT] = imx8m_clk_hw_fw_managed_composite("dram_alt", imx8mm_dram_alt_sels, base + 0xa000);
++	hws[IMX8MM_CLK_DRAM_APB] = imx8m_clk_hw_fw_managed_composite_critical("dram_apb", imx8mm_dram_apb_sels, base + 0xa080);
+ 
+ 	/* IP */
+ 	hws[IMX8MM_CLK_VPU_G1] = imx8m_clk_hw_composite("vpu_g1", imx8mm_vpu_g1_sels, base + 0xa100);
+diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
+index 88f6630cd472f..0a76f969b28b3 100644
+--- a/drivers/clk/imx/clk-imx8mn.c
++++ b/drivers/clk/imx/clk-imx8mn.c
+@@ -453,10 +453,11 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ 
+ 	/*
+ 	 * DRAM clocks are manipulated from TF-A outside clock framework.
+-	 * Mark with GET_RATE_NOCACHE to always read div value from hardware
++	 * The fw_managed helper sets GET_RATE_NOCACHE and clears SET_PARENT_GATE
++	 * as the div value should always be read from hardware
+ 	 */
+-	hws[IMX8MN_CLK_DRAM_ALT] = __imx8m_clk_hw_composite("dram_alt", imx8mn_dram_alt_sels, base + 0xa000, CLK_GET_RATE_NOCACHE);
+-	hws[IMX8MN_CLK_DRAM_APB] = __imx8m_clk_hw_composite("dram_apb", imx8mn_dram_apb_sels, base + 0xa080, CLK_IS_CRITICAL | CLK_GET_RATE_NOCACHE);
++	hws[IMX8MN_CLK_DRAM_ALT] = imx8m_clk_hw_fw_managed_composite("dram_alt", imx8mn_dram_alt_sels, base + 0xa000);
++	hws[IMX8MN_CLK_DRAM_APB] = imx8m_clk_hw_fw_managed_composite_critical("dram_apb", imx8mn_dram_apb_sels, base + 0xa080);
+ 
+ 	hws[IMX8MN_CLK_DISP_PIXEL] = imx8m_clk_hw_composite("disp_pixel", imx8mn_disp_pixel_sels, base + 0xa500);
+ 	hws[IMX8MN_CLK_SAI2] = imx8m_clk_hw_composite("sai2", imx8mn_sai2_sels, base + 0xa600);
+diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c
+index c491bc9c61ce7..83cc2b1c32947 100644
+--- a/drivers/clk/imx/clk-imx8mq.c
++++ b/drivers/clk/imx/clk-imx8mq.c
+@@ -449,11 +449,12 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ 
+ 	/*
+ 	 * DRAM clocks are manipulated from TF-A outside clock framework.
+-	 * Mark with GET_RATE_NOCACHE to always read div value from hardware
++	 * The fw_managed helper sets GET_RATE_NOCACHE and clears SET_PARENT_GATE
++	 * as the div value should always be read from hardware
+ 	 */
+ 	hws[IMX8MQ_CLK_DRAM_CORE] = imx_clk_hw_mux2_flags("dram_core_clk", base + 0x9800, 24, 1, imx8mq_dram_core_sels, ARRAY_SIZE(imx8mq_dram_core_sels), CLK_IS_CRITICAL);
+-	hws[IMX8MQ_CLK_DRAM_ALT] = __imx8m_clk_hw_composite("dram_alt", imx8mq_dram_alt_sels, base + 0xa000, CLK_GET_RATE_NOCACHE);
+-	hws[IMX8MQ_CLK_DRAM_APB] = __imx8m_clk_hw_composite("dram_apb", imx8mq_dram_apb_sels, base + 0xa080, CLK_IS_CRITICAL | CLK_GET_RATE_NOCACHE);
++	hws[IMX8MQ_CLK_DRAM_ALT] = imx8m_clk_hw_fw_managed_composite("dram_alt", imx8mq_dram_alt_sels, base + 0xa000);
++	hws[IMX8MQ_CLK_DRAM_APB] = imx8m_clk_hw_fw_managed_composite_critical("dram_apb", imx8mq_dram_apb_sels, base + 0xa080);
+ 
+ 	/* IP */
+ 	hws[IMX8MQ_CLK_VPU_G1] = imx8m_clk_hw_composite("vpu_g1", imx8mq_vpu_g1_sels, base + 0xa100);
+diff --git a/drivers/clk/imx/clk.h b/drivers/clk/imx/clk.h
+index 7571603bee23b..e144f983fd8ce 100644
+--- a/drivers/clk/imx/clk.h
++++ b/drivers/clk/imx/clk.h
+@@ -530,8 +530,9 @@ struct clk_hw *imx_clk_hw_cpu(const char *name, const char *parent_name,
+ 		struct clk *div, struct clk *mux, struct clk *pll,
+ 		struct clk *step);
+ 
+-#define IMX_COMPOSITE_CORE	BIT(0)
+-#define IMX_COMPOSITE_BUS	BIT(1)
++#define IMX_COMPOSITE_CORE		BIT(0)
++#define IMX_COMPOSITE_BUS		BIT(1)
++#define IMX_COMPOSITE_FW_MANAGED	BIT(2)
+ 
+ struct clk_hw *imx8m_clk_hw_composite_flags(const char *name,
+ 					    const char * const *parent_names,
+@@ -567,6 +568,17 @@ struct clk_hw *imx8m_clk_hw_composite_flags(const char *name,
+ 		ARRAY_SIZE(parent_names), reg, 0, \
+ 		flags | CLK_SET_RATE_NO_REPARENT | CLK_OPS_PARENT_ENABLE)
+ 
++#define __imx8m_clk_hw_fw_managed_composite(name, parent_names, reg, flags) \
++	imx8m_clk_hw_composite_flags(name, parent_names, \
++		ARRAY_SIZE(parent_names), reg, IMX_COMPOSITE_FW_MANAGED, \
++		flags | CLK_GET_RATE_NOCACHE | CLK_SET_RATE_NO_REPARENT | CLK_OPS_PARENT_ENABLE)
++
++#define imx8m_clk_hw_fw_managed_composite(name, parent_names, reg) \
++	__imx8m_clk_hw_fw_managed_composite(name, parent_names, reg, 0)
++
++#define imx8m_clk_hw_fw_managed_composite_critical(name, parent_names, reg) \
++	__imx8m_clk_hw_fw_managed_composite(name, parent_names, reg, CLK_IS_CRITICAL)
++
+ #define __imx8m_clk_composite(name, parent_names, reg, flags) \
+ 	to_clk(__imx8m_clk_hw_composite(name, parent_names, reg, flags))
+ 
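
The net effect of the new fw_managed helpers: IMX_COMPOSITE_FW_MANAGED suppresses CLK_SET_PARENT_GATE inside imx8m_clk_hw_composite_flags(), while the wrapper macro always adds CLK_GET_RATE_NOCACHE. A self-contained model of that flag composition, with invented bit values (the kernel's real flag constants differ):

    #include <stdio.h>

    /* Bit values are illustrative, not the kernel's actual positions. */
    #define CLK_SET_PARENT_GATE       (1UL << 0)
    #define CLK_GET_RATE_NOCACHE      (1UL << 1)
    #define IMX_COMPOSITE_FW_MANAGED  (1UL << 2)

    /* Mirrors the imx8m_clk_hw_composite_flags() change above. */
    static unsigned long composite_flags(unsigned long composite,
                                         unsigned long flags)
    {
        if (!(composite & IMX_COMPOSITE_FW_MANAGED))
            flags |= CLK_SET_PARENT_GATE;
        return flags;
    }

    int main(void)
    {
        printf("plain:      %#lx\n", composite_flags(0, 0));
        printf("fw-managed: %#lx\n",
               composite_flags(IMX_COMPOSITE_FW_MANAGED,
                               CLK_GET_RATE_NOCACHE));
        return 0;
    }
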
+diff --git a/drivers/clk/ralink/clk-mt7621.c b/drivers/clk/ralink/clk-mt7621.c
+index 857da1e274be9..a2c045390f008 100644
+--- a/drivers/clk/ralink/clk-mt7621.c
++++ b/drivers/clk/ralink/clk-mt7621.c
+@@ -131,14 +131,7 @@ static int mt7621_gate_ops_init(struct device *dev,
+ 				struct mt7621_gate *sclk)
+ {
+ 	struct clk_init_data init = {
+-		/*
+-		 * Until now no clock driver existed so
+-		 * these SoC drivers are not prepared
+-		 * yet for the clock. We don't want kernel to
+-		 * disable anything so we add CLK_IS_CRITICAL
+-		 * flag here.
+-		 */
+-		.flags = CLK_SET_RATE_PARENT | CLK_IS_CRITICAL,
++		.flags = CLK_SET_RATE_PARENT,
+ 		.num_parents = 1,
+ 		.parent_names = &sclk->parent_name,
+ 		.ops = &mt7621_gate_ops,
+diff --git a/drivers/clk/rockchip/clk-pll.c b/drivers/clk/rockchip/clk-pll.c
+index fe937bcdb4876..f7827b3b7fc1c 100644
+--- a/drivers/clk/rockchip/clk-pll.c
++++ b/drivers/clk/rockchip/clk-pll.c
+@@ -940,7 +940,7 @@ struct clk *rockchip_clk_register_pll(struct rockchip_clk_provider *ctx,
+ 	switch (pll_type) {
+ 	case pll_rk3036:
+ 	case pll_rk3328:
+-		if (!pll->rate_table || IS_ERR(ctx->grf))
++		if (!pll->rate_table)
+ 			init.ops = &rockchip_rk3036_pll_clk_norate_ops;
+ 		else
+ 			init.ops = &rockchip_rk3036_pll_clk_ops;
+diff --git a/drivers/clk/socfpga/clk-agilex.c b/drivers/clk/socfpga/clk-agilex.c
+index 1cb21ea79c640..242e94c0cf8a3 100644
+--- a/drivers/clk/socfpga/clk-agilex.c
++++ b/drivers/clk/socfpga/clk-agilex.c
+@@ -107,10 +107,10 @@ static const struct clk_parent_data gpio_db_free_mux[] = {
+ };
+ 
+ static const struct clk_parent_data psi_ref_free_mux[] = {
+-	{ .fw_name = "main_pll_c3",
+-	  .name = "main_pll_c3", },
+-	{ .fw_name = "peri_pll_c3",
+-	  .name = "peri_pll_c3", },
++	{ .fw_name = "main_pll_c2",
++	  .name = "main_pll_c2", },
++	{ .fw_name = "peri_pll_c2",
++	  .name = "peri_pll_c2", },
+ 	{ .fw_name = "osc1",
+ 	  .name = "osc1", },
+ 	{ .fw_name = "cb-intosc-hs-div2-clk",
+@@ -195,6 +195,13 @@ static const struct clk_parent_data sdmmc_mux[] = {
+ 	  .name = "boot_clk", },
+ };
+ 
++static const struct clk_parent_data s2f_user0_mux[] = {
++	{ .fw_name = "s2f_user0_free_clk",
++	  .name = "s2f_user0_free_clk", },
++	{ .fw_name = "boot_clk",
++	  .name = "boot_clk", },
++};
++
+ static const struct clk_parent_data s2f_user1_mux[] = {
+ 	{ .fw_name = "s2f_user1_free_clk",
+ 	  .name = "s2f_user1_free_clk", },
+@@ -273,7 +280,7 @@ static const struct stratix10_perip_cnt_clock agilex_main_perip_cnt_clks[] = {
+ 	{ AGILEX_SDMMC_FREE_CLK, "sdmmc_free_clk", NULL, sdmmc_free_mux,
+ 	  ARRAY_SIZE(sdmmc_free_mux), 0, 0xE4, 0, 0, 0},
+ 	{ AGILEX_S2F_USER0_FREE_CLK, "s2f_user0_free_clk", NULL, s2f_usr0_free_mux,
+-	  ARRAY_SIZE(s2f_usr0_free_mux), 0, 0xE8, 0, 0, 0},
++	  ARRAY_SIZE(s2f_usr0_free_mux), 0, 0xE8, 0, 0x30, 2},
+ 	{ AGILEX_S2F_USER1_FREE_CLK, "s2f_user1_free_clk", NULL, s2f_usr1_free_mux,
+ 	  ARRAY_SIZE(s2f_usr1_free_mux), 0, 0xEC, 0, 0x88, 5},
+ 	{ AGILEX_PSI_REF_FREE_CLK, "psi_ref_free_clk", NULL, psi_ref_free_mux,
+@@ -319,6 +326,8 @@ static const struct stratix10_gate_clock agilex_gate_clks[] = {
+ 	  4, 0x98, 0, 16, 0x88, 3, 0},
+ 	{ AGILEX_SDMMC_CLK, "sdmmc_clk", NULL, sdmmc_mux, ARRAY_SIZE(sdmmc_mux), 0, 0x7C,
+ 	  5, 0, 0, 0, 0x88, 4, 4},
++	{ AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_user0_mux, ARRAY_SIZE(s2f_user0_mux), 0, 0x24,
++	  6, 0, 0, 0, 0x30, 2, 0},
+ 	{ AGILEX_S2F_USER1_CLK, "s2f_user1_clk", NULL, s2f_user1_mux, ARRAY_SIZE(s2f_user1_mux), 0, 0x7C,
+ 	  6, 0, 0, 0, 0x88, 5, 0},
+ 	{ AGILEX_PSI_REF_CLK, "psi_ref_clk", NULL, psi_mux, ARRAY_SIZE(psi_mux), 0, 0x7C,
+diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
+index e439b43c19ebe..8977e4de59157 100644
+--- a/drivers/cpufreq/powernv-cpufreq.c
++++ b/drivers/cpufreq/powernv-cpufreq.c
+@@ -36,6 +36,7 @@
+ #define MAX_PSTATE_SHIFT	32
+ #define LPSTATE_SHIFT		48
+ #define GPSTATE_SHIFT		56
++#define MAX_NR_CHIPS		32
+ 
+ #define MAX_RAMP_DOWN_TIME				5120
+ /*
+@@ -1051,12 +1052,20 @@ static int init_chip_info(void)
+ 	unsigned int *chip;
+ 	unsigned int cpu, i;
+ 	unsigned int prev_chip_id = UINT_MAX;
++	cpumask_t *chip_cpu_mask;
+ 	int ret = 0;
+ 
+ 	chip = kcalloc(num_possible_cpus(), sizeof(*chip), GFP_KERNEL);
+ 	if (!chip)
+ 		return -ENOMEM;
+ 
++	/* Allocate a chip cpu mask large enough to hold a mask for every chip */
++	chip_cpu_mask = kcalloc(MAX_NR_CHIPS, sizeof(cpumask_t), GFP_KERNEL);
++	if (!chip_cpu_mask) {
++		ret = -ENOMEM;
++		goto free_and_return;
++	}
++
+ 	for_each_possible_cpu(cpu) {
+ 		unsigned int id = cpu_to_chip_id(cpu);
+ 
+@@ -1064,22 +1073,25 @@ static int init_chip_info(void)
+ 			prev_chip_id = id;
+ 			chip[nr_chips++] = id;
+ 		}
++		cpumask_set_cpu(cpu, &chip_cpu_mask[nr_chips-1]);
+ 	}
+ 
+ 	chips = kcalloc(nr_chips, sizeof(struct chip), GFP_KERNEL);
+ 	if (!chips) {
+ 		ret = -ENOMEM;
+-		goto free_and_return;
++		goto out_free_chip_cpu_mask;
+ 	}
+ 
+ 	for (i = 0; i < nr_chips; i++) {
+ 		chips[i].id = chip[i];
+-		cpumask_copy(&chips[i].mask, cpumask_of_node(chip[i]));
++		cpumask_copy(&chips[i].mask, &chip_cpu_mask[i]);
+ 		INIT_WORK(&chips[i].throttle, powernv_cpufreq_work_fn);
+ 		for_each_cpu(cpu, &chips[i].mask)
+ 			per_cpu(chip_info, cpu) =  &chips[i];
+ 	}
+ 
++out_free_chip_cpu_mask:
++	kfree(chip_cpu_mask);
+ free_and_return:
+ 	kfree(chip);
+ 	return ret;
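
The powernv fix replaces cpumask_of_node(), which returns a NUMA-node mask, with a mask built directly from cpu_to_chip_id(), since node and chip numbering need not coincide. A standalone model of the grouping loop, with the kernel helper stubbed and an invented CPU-to-chip mapping; like init_chip_info() it assumes a chip's CPUs are contiguous in enumeration order:

    #include <stdio.h>

    #define NR_CPUS   8
    #define MAX_CHIPS 4

    /* Stand-in for cpu_to_chip_id(); the mapping is invented. */
    static int cpu_to_chip_id(int cpu) { return cpu / 4; }

    int main(void)
    {
        unsigned long chip_cpu_mask[MAX_CHIPS] = { 0 };
        int chip_ids[MAX_CHIPS];
        int nr_chips = 0, prev = -1;

        /* Chips are discovered in CPU order; each CPU joins the mask of
         * the most recently discovered chip. */
        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
            int id = cpu_to_chip_id(cpu);

            if (prev != id) {
                prev = id;
                chip_ids[nr_chips++] = id;
            }
            chip_cpu_mask[nr_chips - 1] |= 1UL << cpu;
        }

        for (int i = 0; i < nr_chips; i++)
            printf("chip %d: cpus %#lx\n", chip_ids[i], chip_cpu_mask[i]);
        return 0;
    }
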
+diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
+index a2b5c6f60cf0e..ff164dec8422e 100644
+--- a/drivers/cpuidle/cpuidle-pseries.c
++++ b/drivers/cpuidle/cpuidle-pseries.c
+@@ -402,7 +402,7 @@ static void __init fixup_cede0_latency(void)
+  * pseries_idle_probe()
+  * Choose state table for shared versus dedicated partition
+  */
+-static int pseries_idle_probe(void)
++static int __init pseries_idle_probe(void)
+ {
+ 
+ 	if (cpuidle_disable != IDLE_NO_OVERRIDE)
+@@ -419,7 +419,21 @@ static int pseries_idle_probe(void)
+ 			cpuidle_state_table = shared_states;
+ 			max_idle_state = ARRAY_SIZE(shared_states);
+ 		} else {
+-			fixup_cede0_latency();
++			/*
++			 * Use firmware provided latency values
++			 * starting with POWER10 platforms. In the
++			 * case that we are running on a POWER10
++			 * platform but in an earlier compat mode, we
++			 * can still use the firmware provided values.
++			 *
++			 * However, on platforms prior to POWER10, we
++			 * cannot rely on the accuracy of the firmware
++			 * provided latency values. On such platforms,
++			 * go with the conservative default estimate
++			 * of 10us.
++			 */
++			if (cpu_has_feature(CPU_FTR_ARCH_31) || pvr_version_is(PVR_POWER10))
++				fixup_cede0_latency();
+ 			cpuidle_state_table = dedicated_states;
+ 			max_idle_state = NR_DEDICATED_STATES;
+ 		}
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 91808402e0bf2..2ecb0e1f65d8d 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -300,6 +300,9 @@ static int __sev_platform_shutdown_locked(int *error)
+ 	struct sev_device *sev = psp_master->sev_data;
+ 	int ret;
+ 
++	if (sev->state == SEV_STATE_UNINIT)
++		return 0;
++
+ 	ret = __sev_do_cmd_locked(SEV_CMD_SHUTDOWN, NULL, error);
+ 	if (ret)
+ 		return ret;
+@@ -1019,6 +1022,20 @@ e_err:
+ 	return ret;
+ }
+ 
++static void sev_firmware_shutdown(struct sev_device *sev)
++{
++	sev_platform_shutdown(NULL);
++
++	if (sev_es_tmr) {
++		/* The TMR area was encrypted, flush it from the cache */
++		wbinvd_on_all_cpus();
++
++		free_pages((unsigned long)sev_es_tmr,
++			   get_order(SEV_ES_TMR_SIZE));
++		sev_es_tmr = NULL;
++	}
++}
++
+ void sev_dev_destroy(struct psp_device *psp)
+ {
+ 	struct sev_device *sev = psp->sev_data;
+@@ -1026,6 +1043,8 @@ void sev_dev_destroy(struct psp_device *psp)
+ 	if (!sev)
+ 		return;
+ 
++	sev_firmware_shutdown(sev);
++
+ 	if (sev->misc)
+ 		kref_put(&misc_dev->refcount, sev_exit);
+ 
+@@ -1056,21 +1075,6 @@ void sev_pci_init(void)
+ 	if (sev_get_api_version())
+ 		goto err;
+ 
+-	/*
+-	 * If platform is not in UNINIT state then firmware upgrade and/or
+-	 * platform INIT command will fail. These command require UNINIT state.
+-	 *
+-	 * In a normal boot we should never run into case where the firmware
+-	 * is not in UNINIT state on boot. But in case of kexec boot, a reboot
+-	 * may not go through a typical shutdown sequence and may leave the
+-	 * firmware in INIT or WORKING state.
+-	 */
+-
+-	if (sev->state != SEV_STATE_UNINIT) {
+-		sev_platform_shutdown(NULL);
+-		sev->state = SEV_STATE_UNINIT;
+-	}
+-
+ 	if (sev_version_greater_or_equal(0, 15) &&
+ 	    sev_update_firmware(sev->dev) == 0)
+ 		sev_get_api_version();
+@@ -1115,17 +1119,10 @@ err:
+ 
+ void sev_pci_exit(void)
+ {
+-	if (!psp_master->sev_data)
+-		return;
+-
+-	sev_platform_shutdown(NULL);
++	struct sev_device *sev = psp_master->sev_data;
+ 
+-	if (sev_es_tmr) {
+-		/* The TMR area was encrypted, flush it from the cache */
+-		wbinvd_on_all_cpus();
++	if (!sev)
++		return;
+ 
+-		free_pages((unsigned long)sev_es_tmr,
+-			   get_order(SEV_ES_TMR_SIZE));
+-		sev_es_tmr = NULL;
+-	}
++	sev_firmware_shutdown(sev);
+ }
+diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
+index 6fb6ba35f89d4..9bcc1884c06a1 100644
+--- a/drivers/crypto/ccp/sp-pci.c
++++ b/drivers/crypto/ccp/sp-pci.c
+@@ -241,6 +241,17 @@ e_err:
+ 	return ret;
+ }
+ 
++static void sp_pci_shutdown(struct pci_dev *pdev)
++{
++	struct device *dev = &pdev->dev;
++	struct sp_device *sp = dev_get_drvdata(dev);
++
++	if (!sp)
++		return;
++
++	sp_destroy(sp);
++}
++
+ static void sp_pci_remove(struct pci_dev *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+@@ -371,6 +382,7 @@ static struct pci_driver sp_pci_driver = {
+ 	.id_table = sp_pci_table,
+ 	.probe = sp_pci_probe,
+ 	.remove = sp_pci_remove,
++	.shutdown = sp_pci_shutdown,
+ 	.driver.pm = &sp_pci_pm_ops,
+ };
+ 
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index f397cc5bf1021..d19e5ffb5104b 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -300,21 +300,20 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
+ 
+ 	struct scatterlist *dst = req->dst;
+ 	struct scatterlist *src = req->src;
+-	const int nents = sg_nents(req->src);
++	int dst_nents = sg_nents(dst);
+ 
+ 	const int out_off = DCP_BUF_SZ;
+ 	uint8_t *in_buf = sdcp->coh->aes_in_buf;
+ 	uint8_t *out_buf = sdcp->coh->aes_out_buf;
+ 
+-	uint8_t *out_tmp, *src_buf, *dst_buf = NULL;
+ 	uint32_t dst_off = 0;
++	uint8_t *src_buf = NULL;
+ 	uint32_t last_out_len = 0;
+ 
+ 	uint8_t *key = sdcp->coh->aes_key;
+ 
+ 	int ret = 0;
+-	int split = 0;
+-	unsigned int i, len, clen, rem = 0, tlen = 0;
++	unsigned int i, len, clen, tlen = 0;
+ 	int init = 0;
+ 	bool limit_hit = false;
+ 
+@@ -332,7 +331,7 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
+ 		memset(key + AES_KEYSIZE_128, 0, AES_KEYSIZE_128);
+ 	}
+ 
+-	for_each_sg(req->src, src, nents, i) {
++	for_each_sg(req->src, src, sg_nents(src), i) {
+ 		src_buf = sg_virt(src);
+ 		len = sg_dma_len(src);
+ 		tlen += len;
+@@ -357,34 +356,17 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
+ 			 * submit the buffer.
+ 			 */
+ 			if (actx->fill == out_off || sg_is_last(src) ||
+-				limit_hit) {
++			    limit_hit) {
+ 				ret = mxs_dcp_run_aes(actx, req, init);
+ 				if (ret)
+ 					return ret;
+ 				init = 0;
+ 
+-				out_tmp = out_buf;
++				sg_pcopy_from_buffer(dst, dst_nents, out_buf,
++						     actx->fill, dst_off);
++				dst_off += actx->fill;
+ 				last_out_len = actx->fill;
+-				while (dst && actx->fill) {
+-					if (!split) {
+-						dst_buf = sg_virt(dst);
+-						dst_off = 0;
+-					}
+-					rem = min(sg_dma_len(dst) - dst_off,
+-						  actx->fill);
+-
+-					memcpy(dst_buf + dst_off, out_tmp, rem);
+-					out_tmp += rem;
+-					dst_off += rem;
+-					actx->fill -= rem;
+-
+-					if (dst_off == sg_dma_len(dst)) {
+-						dst = sg_next(dst);
+-						split = 0;
+-					} else {
+-						split = 1;
+-					}
+-				}
++				actx->fill = 0;
+ 			}
+ 		} while (len);
+ 
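
sg_pcopy_from_buffer() (declared in <linux/scatterlist.h>) copies buflen bytes from a linear buffer into a scatterlist starting at byte offset skip, walking segment boundaries internally; that is exactly the bookkeeping the removed loop did by hand. A userspace model of those semantics over plain buffers, with invented segment types:

    #include <stdio.h>
    #include <string.h>

    struct seg { char *buf; size_t len; };

    /* Model of sg_pcopy_from_buffer(): copy buflen bytes from a linear
     * buffer into the segment chain, starting skip bytes in. */
    static size_t pcopy_from_buffer(struct seg *sgl, unsigned int nents,
                                    const char *buf, size_t buflen,
                                    size_t skip)
    {
        size_t copied = 0;

        for (unsigned int i = 0; i < nents && copied < buflen; i++) {
            if (skip >= sgl[i].len) {  /* segment fully skipped */
                skip -= sgl[i].len;
                continue;
            }
            size_t n = sgl[i].len - skip;
            if (n > buflen - copied)
                n = buflen - copied;
            memcpy(sgl[i].buf + skip, buf + copied, n);
            copied += n;
            skip = 0;
        }
        return copied;
    }

    int main(void)
    {
        char a[4] = "....", b[4] = "....";
        struct seg sg[2] = { { a, sizeof(a) }, { b, sizeof(b) } };

        pcopy_from_buffer(sg, 2, "XYZ", 3, 2);
        printf("%.4s|%.4s\n", a, b);  /* ..XY|Z... */
        return 0;
    }
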
+diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
+index d5590c08db51e..1c636d287112e 100644
+--- a/drivers/dma/imx-sdma.c
++++ b/drivers/dma/imx-sdma.c
+@@ -379,7 +379,6 @@ struct sdma_channel {
+ 	unsigned long			watermark_level;
+ 	u32				shp_addr, per_addr;
+ 	enum dma_status			status;
+-	bool				context_loaded;
+ 	struct imx_dma_data		data;
+ 	struct work_struct		terminate_worker;
+ };
+@@ -954,9 +953,6 @@ static int sdma_load_context(struct sdma_channel *sdmac)
+ 	int ret;
+ 	unsigned long flags;
+ 
+-	if (sdmac->context_loaded)
+-		return 0;
+-
+ 	if (sdmac->direction == DMA_DEV_TO_MEM)
+ 		load_address = sdmac->pc_from_device;
+ 	else if (sdmac->direction == DMA_DEV_TO_DEV)
+@@ -999,8 +995,6 @@ static int sdma_load_context(struct sdma_channel *sdmac)
+ 
+ 	spin_unlock_irqrestore(&sdma->channel_0_lock, flags);
+ 
+-	sdmac->context_loaded = true;
+-
+ 	return ret;
+ }
+ 
+@@ -1039,7 +1033,6 @@ static void sdma_channel_terminate_work(struct work_struct *work)
+ 	vchan_get_all_descriptors(&sdmac->vc, &head);
+ 	spin_unlock_irqrestore(&sdmac->vc.lock, flags);
+ 	vchan_dma_desc_free_list(&sdmac->vc, &head);
+-	sdmac->context_loaded = false;
+ }
+ 
+ static int sdma_terminate_all(struct dma_chan *chan)
+@@ -1114,7 +1107,6 @@ static void sdma_set_watermarklevel_for_p2p(struct sdma_channel *sdmac)
+ static int sdma_config_channel(struct dma_chan *chan)
+ {
+ 	struct sdma_channel *sdmac = to_sdma_chan(chan);
+-	int ret;
+ 
+ 	sdma_disable_channel(chan);
+ 
+@@ -1154,9 +1146,7 @@ static int sdma_config_channel(struct dma_chan *chan)
+ 		sdmac->watermark_level = 0; /* FIXME: M3_BASE_ADDRESS */
+ 	}
+ 
+-	ret = sdma_load_context(sdmac);
+-
+-	return ret;
++	return 0;
+ }
+ 
+ static int sdma_set_channel_priority(struct sdma_channel *sdmac,
+@@ -1307,7 +1297,6 @@ static void sdma_free_chan_resources(struct dma_chan *chan)
+ 
+ 	sdmac->event_id0 = 0;
+ 	sdmac->event_id1 = 0;
+-	sdmac->context_loaded = false;
+ 
+ 	sdma_set_channel_priority(sdmac, 0);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+index 311bcdc59eda6..3f4b03a2588b6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+@@ -277,21 +277,18 @@ retry:
+ 	r = amdgpu_gem_object_create(adev, size, args->in.alignment,
+ 				     initial_domain,
+ 				     flags, ttm_bo_type_device, resv, &gobj);
+-	if (r) {
+-		if (r != -ERESTARTSYS) {
+-			if (flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) {
+-				flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+-				goto retry;
+-			}
++	if (r && r != -ERESTARTSYS) {
++		if (flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) {
++			flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
++			goto retry;
++		}
+ 
+-			if (initial_domain == AMDGPU_GEM_DOMAIN_VRAM) {
+-				initial_domain |= AMDGPU_GEM_DOMAIN_GTT;
+-				goto retry;
+-			}
+-			DRM_DEBUG("Failed to allocate GEM object (%llu, %d, %llu, %d)\n",
+-				  size, initial_domain, args->in.alignment, r);
++		if (initial_domain == AMDGPU_GEM_DOMAIN_VRAM) {
++			initial_domain |= AMDGPU_GEM_DOMAIN_GTT;
++			goto retry;
+ 		}
+-		return r;
++		DRM_DEBUG("Failed to allocate GEM object (%llu, %d, %llu, %d)\n",
++				size, initial_domain, args->in.alignment, r);
+ 	}
+ 
+ 	if (flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c
+index bca4dddd5a15b..82608df433964 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c
+@@ -339,7 +339,7 @@ static void amdgpu_i2c_put_byte(struct amdgpu_i2c_chan *i2c_bus,
+ void
+ amdgpu_i2c_router_select_ddc_port(const struct amdgpu_connector *amdgpu_connector)
+ {
+-	u8 val;
++	u8 val = 0;
+ 
+ 	if (!amdgpu_connector->router.ddc_valid)
+ 		return;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 3933a42f8d811..67b6eda21529e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -202,7 +202,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
+ 		c++;
+ 	}
+ 
+-	BUG_ON(c >= AMDGPU_BO_MAX_PLACEMENTS);
++	BUG_ON(c > AMDGPU_BO_MAX_PLACEMENTS);
+ 
+ 	placement->num_placement = c;
+ 	placement->placement = places;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+index f40c871da0c62..fb701c4fd5c5f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
+@@ -321,7 +321,7 @@ int amdgpu_ras_eeprom_init(struct amdgpu_ras_eeprom_control *control,
+ 		return ret;
+ 	}
+ 
+-	__decode_table_header_from_buff(hdr, &buff[2]);
++	__decode_table_header_from_buff(hdr, buff);
+ 
+ 	if (hdr->header == EEPROM_TABLE_HDR_VAL) {
+ 		control->num_recs = (hdr->tbl_size - EEPROM_TABLE_HEADER_SIZE) /
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+index 27b1ced145d2c..14ae2bfad59da 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+@@ -119,7 +119,7 @@ static int vcn_v1_0_sw_init(void *handle)
+ 		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].fw = adev->vcn.fw;
+ 		adev->firmware.fw_size +=
+ 			ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+-		DRM_INFO("PSP loading VCN firmware\n");
++		dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
+ 	}
+ 
+ 	r = amdgpu_vcn_resume(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+index 8af567c546dbc..f4686e918e0d1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+@@ -122,7 +122,7 @@ static int vcn_v2_0_sw_init(void *handle)
+ 		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].fw = adev->vcn.fw;
+ 		adev->firmware.fw_size +=
+ 			ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+-		DRM_INFO("PSP loading VCN firmware\n");
++		dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
+ 	}
+ 
+ 	r = amdgpu_vcn_resume(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+index 888b17d84691c..e0c0c3734432e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+@@ -152,7 +152,7 @@ static int vcn_v2_5_sw_init(void *handle)
+ 			adev->firmware.fw_size +=
+ 				ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+ 		}
+-		DRM_INFO("PSP loading VCN firmware\n");
++		dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
+ 	}
+ 
+ 	r = amdgpu_vcn_resume(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index 3b23de996db22..c2c5c4af51d2e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -152,7 +152,7 @@ static int vcn_v3_0_sw_init(void *handle)
+ 			adev->firmware.fw_size +=
+ 				ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+ 		}
+-		DRM_INFO("PSP loading VCN firmware\n");
++		dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
+ 	}
+ 
+ 	r = amdgpu_vcn_resume(adev);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+index 88813dad731fa..c021519af8106 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+@@ -98,36 +98,78 @@ void mqd_symmetrically_map_cu_mask(struct mqd_manager *mm,
+ 		uint32_t *se_mask)
+ {
+ 	struct kfd_cu_info cu_info;
+-	uint32_t cu_per_se[KFD_MAX_NUM_SE] = {0};
+-	int i, se, sh, cu = 0;
+-
++	uint32_t cu_per_sh[KFD_MAX_NUM_SE][KFD_MAX_NUM_SH_PER_SE] = {0};
++	int i, se, sh, cu;
+ 	amdgpu_amdkfd_get_cu_info(mm->dev->kgd, &cu_info);
+ 
+ 	if (cu_mask_count > cu_info.cu_active_number)
+ 		cu_mask_count = cu_info.cu_active_number;
+ 
++	/* Exceeding these bounds corrupts the stack and indicates a coding error.
++	 * Returning with no CUs enabled will hang the queue, which should be
++	 * attention-grabbing.
++	 */
++	if (cu_info.num_shader_engines > KFD_MAX_NUM_SE) {
++		pr_err("Exceeded KFD_MAX_NUM_SE, chip reports %d\n", cu_info.num_shader_engines);
++		return;
++	}
++	if (cu_info.num_shader_arrays_per_engine > KFD_MAX_NUM_SH_PER_SE) {
++		pr_err("Exceeded KFD_MAX_NUM_SH, chip reports %d\n",
++			cu_info.num_shader_arrays_per_engine * cu_info.num_shader_engines);
++		return;
++	}
++	/* Count active CUs per SH.
++	 *
++	 * Some CUs in an SH may be disabled. HW expects disabled CUs to be
++	 * represented in the high bits of each SH's enable mask (the upper and lower
++	 * 16 bits of se_mask) and will take care of the actual distribution of
++	 * disabled CUs within each SH automatically.
++	 * Each half of se_mask must be filled only in bits 0 to cu_per_sh[se][sh]-1.
++	 *
++	 * See note on Arcturus cu_bitmap layout in gfx_v9_0_get_cu_info.
++	 */
+ 	for (se = 0; se < cu_info.num_shader_engines; se++)
+ 		for (sh = 0; sh < cu_info.num_shader_arrays_per_engine; sh++)
+-			cu_per_se[se] += hweight32(cu_info.cu_bitmap[se % 4][sh + (se / 4)]);
+-
+-	/* Symmetrically map cu_mask to all SEs:
+-	 * cu_mask[0] bit0 -> se_mask[0] bit0;
+-	 * cu_mask[0] bit1 -> se_mask[1] bit0;
+-	 * ... (if # SE is 4)
+-	 * cu_mask[0] bit4 -> se_mask[0] bit1;
++			cu_per_sh[se][sh] = hweight32(cu_info.cu_bitmap[se % 4][sh + (se / 4)]);
++
++	/* Symmetrically map cu_mask to all SEs & SHs:
++	 * se_mask programs up to 2 SH in the upper and lower 16 bits.
++	 *
++	 * Examples
++	 * Assuming 1 SH/SE, 4 SEs:
++	 * cu_mask[0] bit0 -> se_mask[0] bit0
++	 * cu_mask[0] bit1 -> se_mask[1] bit0
++	 * ...
++	 * cu_mask[0] bit4 -> se_mask[0] bit1
++	 * ...
++	 *
++	 * Assuming 2 SH/SE, 4 SEs
++	 * cu_mask[0] bit0 -> se_mask[0] bit0 (SE0,SH0,CU0)
++	 * cu_mask[0] bit1 -> se_mask[1] bit0 (SE1,SH0,CU0)
++	 * ...
++	 * cu_mask[0] bit4 -> se_mask[0] bit16 (SE0,SH1,CU0)
++	 * cu_mask[0] bit5 -> se_mask[1] bit16 (SE1,SH1,CU0)
++	 * ...
++	 * cu_mask[0] bit8 -> se_mask[0] bit1 (SE0,SH0,CU1)
+ 	 * ...
++	 *
++	 * First ensure all CUs are disabled, then enable user-specified CUs.
+ 	 */
+-	se = 0;
+-	for (i = 0; i < cu_mask_count; i++) {
+-		if (cu_mask[i / 32] & (1 << (i % 32)))
+-			se_mask[se] |= 1 << cu;
+-
+-		do {
+-			se++;
+-			if (se == cu_info.num_shader_engines) {
+-				se = 0;
+-				cu++;
++	for (i = 0; i < cu_info.num_shader_engines; i++)
++		se_mask[i] = 0;
++
++	i = 0;
++	for (cu = 0; cu < 16; cu++) {
++		for (sh = 0; sh < cu_info.num_shader_arrays_per_engine; sh++) {
++			for (se = 0; se < cu_info.num_shader_engines; se++) {
++				if (cu_per_sh[se][sh] > cu) {
++					if (cu_mask[i / 32] & (1 << (i % 32)))
++						se_mask[se] |= 1 << (cu + sh * 16);
++					i++;
++					if (i == cu_mask_count)
++						return;
++				}
+ 			}
+-		} while (cu >= cu_per_se[se] && cu < 32);
++		}
+ 	}
+ }
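
The rewritten mapping walks CU index outermost, then SH, then SE, so consecutive cu_mask bits land on different engines first; the enable bit for (se, sh, cu) is 1 << (cu + sh * 16). A standalone walk with invented sizes (4 SEs, 2 SHs, 5 CUs per SH) showing where the first six user bits land:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SE    4
    #define NUM_SH    2
    #define CU_PER_SH 5   /* pretend every SH reports 5 usable CUs */

    int main(void)
    {
        uint32_t se_mask[NUM_SE] = { 0 };
        uint32_t cu_mask = 0x3f;  /* user enables the first 6 CUs */
        int cu_mask_count = 6, i = 0;

        for (int cu = 0; cu < 16; cu++)
            for (int sh = 0; sh < NUM_SH; sh++)
                for (int se = 0; se < NUM_SE; se++) {
                    if (cu >= CU_PER_SH)
                        continue;
                    if (cu_mask & (1u << i))
                        se_mask[se] |= 1u << (cu + sh * 16);
                    if (++i == cu_mask_count)
                        goto done;
                }
    done:
        /* Prints 0x10001 0x10001 0x1 0x1: bits spread across SE0..SE3
         * first, then across SH0/SH1 (the upper 16 bits). */
        for (int se = 0; se < NUM_SE; se++)
            printf("se_mask[%d] = %#x\n", se, se_mask[se]);
        return 0;
    }
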
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h
+index b5e2ea7550d41..6e6918ccedfdb 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h
+@@ -27,6 +27,7 @@
+ #include "kfd_priv.h"
+ 
+ #define KFD_MAX_NUM_SE 8
++#define KFD_MAX_NUM_SH_PER_SE 2
+ 
+ /**
+  * struct mqd_manager
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index ed221f815a1fa..8c345f0319b84 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1176,7 +1176,7 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ 	dc_hardware_init(adev->dm.dc);
+ 
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
+-	if (adev->apu_flags) {
++	if ((adev->flags & AMD_IS_APU) && (adev->asic_type >= CHIP_CARRIZO)) {
+ 		struct dc_phy_addr_space_config pa_config;
+ 
+ 		mmhub_read_system_context(adev, &pa_config);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index 1b6b15708b96a..08ff1166ffc89 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -197,29 +197,29 @@ static ssize_t dp_link_settings_read(struct file *f, char __user *buf,
+ 
+ 	rd_buf_ptr = rd_buf;
+ 
+-	str_len = strlen("Current:  %d  %d  %d  ");
+-	snprintf(rd_buf_ptr, str_len, "Current:  %d  %d  %d  ",
++	str_len = strlen("Current:  %d  0x%x  %d  ");
++	snprintf(rd_buf_ptr, str_len, "Current:  %d  0x%x  %d  ",
+ 			link->cur_link_settings.lane_count,
+ 			link->cur_link_settings.link_rate,
+ 			link->cur_link_settings.link_spread);
+ 	rd_buf_ptr += str_len;
+ 
+-	str_len = strlen("Verified:  %d  %d  %d  ");
+-	snprintf(rd_buf_ptr, str_len, "Verified:  %d  %d  %d  ",
++	str_len = strlen("Verified:  %d  0x%x  %d  ");
++	snprintf(rd_buf_ptr, str_len, "Verified:  %d  0x%x  %d  ",
+ 			link->verified_link_cap.lane_count,
+ 			link->verified_link_cap.link_rate,
+ 			link->verified_link_cap.link_spread);
+ 	rd_buf_ptr += str_len;
+ 
+-	str_len = strlen("Reported:  %d  %d  %d  ");
+-	snprintf(rd_buf_ptr, str_len, "Reported:  %d  %d  %d  ",
++	str_len = strlen("Reported:  %d  0x%x  %d  ");
++	snprintf(rd_buf_ptr, str_len, "Reported:  %d  0x%x  %d  ",
+ 			link->reported_link_cap.lane_count,
+ 			link->reported_link_cap.link_rate,
+ 			link->reported_link_cap.link_spread);
+ 	rd_buf_ptr += str_len;
+ 
+-	str_len = strlen("Preferred:  %d  %d  %d  ");
+-	snprintf(rd_buf_ptr, str_len, "Preferred:  %d  %d  %d\n",
++	str_len = strlen("Preferred:  %d  0x%x  %d  ");
++	snprintf(rd_buf_ptr, str_len, "Preferred:  %d  0x%x  %d\n",
+ 			link->preferred_link_setting.lane_count,
+ 			link->preferred_link_setting.link_rate,
+ 			link->preferred_link_setting.link_spread);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 7c939c0a977b3..29f61a8d3e291 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -3938,13 +3938,12 @@ enum dc_status dcn10_set_clock(struct dc *dc,
+ 	struct dc_clock_config clock_cfg = {0};
+ 	struct dc_clocks *current_clocks = &context->bw_ctx.bw.dcn.clk;
+ 
+-	if (dc->clk_mgr && dc->clk_mgr->funcs->get_clock)
+-				dc->clk_mgr->funcs->get_clock(dc->clk_mgr,
+-						context, clock_type, &clock_cfg);
+-
+-	if (!dc->clk_mgr->funcs->get_clock)
++	if (!dc->clk_mgr || !dc->clk_mgr->funcs->get_clock)
+ 		return DC_FAIL_UNSUPPORTED_1;
+ 
++	dc->clk_mgr->funcs->get_clock(dc->clk_mgr,
++		context, clock_type, &clock_cfg);
++
+ 	if (clk_khz > clock_cfg.max_clock_khz)
+ 		return DC_FAIL_CLK_EXCEED_MAX;
+ 
+@@ -3962,7 +3961,7 @@ enum dc_status dcn10_set_clock(struct dc *dc,
+ 	else
+ 		return DC_ERROR_UNEXPECTED;
+ 
+-	if (dc->clk_mgr && dc->clk_mgr->funcs->update_clocks)
++	if (dc->clk_mgr->funcs->update_clocks)
+ 				dc->clk_mgr->funcs->update_clocks(dc->clk_mgr,
+ 				context, true);
+ 	return DC_OK;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index 793554e61c520..03b941e76de2a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1703,13 +1703,15 @@ void dcn20_program_front_end_for_ctx(
+ 				dcn20_program_pipe(dc, pipe, context);
+ 				pipe = pipe->bottom_pipe;
+ 			}
+-			/* Program secondary blending tree and writeback pipes */
+-			pipe = &context->res_ctx.pipe_ctx[i];
+-			if (!pipe->prev_odm_pipe && pipe->stream->num_wb_info > 0
+-					&& (pipe->update_flags.raw || pipe->plane_state->update_flags.raw || pipe->stream->update_flags.raw)
+-					&& hws->funcs.program_all_writeback_pipes_in_tree)
+-				hws->funcs.program_all_writeback_pipes_in_tree(dc, pipe->stream, context);
+ 		}
++		/* Program secondary blending tree and writeback pipes */
++		pipe = &context->res_ctx.pipe_ctx[i];
++		if (!pipe->top_pipe && !pipe->prev_odm_pipe
++				&& pipe->stream && pipe->stream->num_wb_info > 0
++				&& (pipe->update_flags.raw || (pipe->plane_state && pipe->plane_state->update_flags.raw)
++					|| pipe->stream->update_flags.raw)
++				&& hws->funcs.program_all_writeback_pipes_in_tree)
++			hws->funcs.program_all_writeback_pipes_in_tree(dc, pipe->stream, context);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 81f583733fa87..12e92f6204833 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -2461,7 +2461,7 @@ void dcn20_set_mcif_arb_params(
+ 				wb_arb_params->cli_watermark[k] = get_wm_writeback_urgent(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+ 				wb_arb_params->pstate_watermark[k] = get_wm_writeback_dram_clock_change(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+ 			}
+-			wb_arb_params->time_per_pixel = 16.0 / context->res_ctx.pipe_ctx[i].stream->phy_pix_clk; /* 4 bit fraction, ms */
++			wb_arb_params->time_per_pixel = 16.0 * 1000 / (context->res_ctx.pipe_ctx[i].stream->phy_pix_clk / 1000); /* 4 bit fraction, ms */
+ 			wb_arb_params->slice_lines = 32;
+ 			wb_arb_params->arbitration_slice = 2;
+ 			wb_arb_params->max_scaled_time = dcn20_calc_max_scaled_time(wb_arb_params->time_per_pixel,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dwb_cm.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dwb_cm.c
+index 3fe9e41e4dbd7..6a3d3a0ec0a36 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dwb_cm.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dwb_cm.c
+@@ -49,6 +49,11 @@
+ static void dwb3_get_reg_field_ogam(struct dcn30_dwbc *dwbc30,
+ 	struct dcn3_xfer_func_reg *reg)
+ {
++	reg->shifts.field_region_start_base = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION_START_BASE_B;
++	reg->masks.field_region_start_base = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION_START_BASE_B;
++	reg->shifts.field_offset = dwbc30->dwbc_shift->DWB_OGAM_RAMA_OFFSET_B;
++	reg->masks.field_offset = dwbc30->dwbc_mask->DWB_OGAM_RAMA_OFFSET_B;
++
+ 	reg->shifts.exp_region0_lut_offset = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION0_LUT_OFFSET;
+ 	reg->masks.exp_region0_lut_offset = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION0_LUT_OFFSET;
+ 	reg->shifts.exp_region0_num_segments = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION0_NUM_SEGMENTS;
+@@ -66,8 +71,6 @@ static void dwb3_get_reg_field_ogam(struct dcn30_dwbc *dwbc30,
+ 	reg->masks.field_region_end_base = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION_END_BASE_B;
+ 	reg->shifts.field_region_linear_slope = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION_START_SLOPE_B;
+ 	reg->masks.field_region_linear_slope = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION_START_SLOPE_B;
+-	reg->masks.field_offset = dwbc30->dwbc_mask->DWB_OGAM_RAMA_OFFSET_B;
+-	reg->shifts.field_offset = dwbc30->dwbc_shift->DWB_OGAM_RAMA_OFFSET_B;
+ 	reg->shifts.exp_region_start = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION_START_B;
+ 	reg->masks.exp_region_start = dwbc30->dwbc_mask->DWB_OGAM_RAMA_EXP_REGION_START_B;
+ 	reg->shifts.exp_resion_start_segment = dwbc30->dwbc_shift->DWB_OGAM_RAMA_EXP_REGION_START_SEGMENT_B;
+@@ -147,18 +150,19 @@ static enum dc_lut_mode dwb3_get_ogam_current(
+ 	uint32_t state_mode;
+ 	uint32_t ram_select;
+ 
+-	REG_GET(DWB_OGAM_CONTROL,
+-		DWB_OGAM_MODE, &state_mode);
+-	REG_GET(DWB_OGAM_CONTROL,
+-		DWB_OGAM_SELECT, &ram_select);
++	REG_GET_2(DWB_OGAM_CONTROL,
++		DWB_OGAM_MODE_CURRENT, &state_mode,
++		DWB_OGAM_SELECT_CURRENT, &ram_select);
+ 
+ 	if (state_mode == 0) {
+ 		mode = LUT_BYPASS;
+ 	} else if (state_mode == 2) {
+ 		if (ram_select == 0)
+ 			mode = LUT_RAM_A;
+-		else
++		else if (ram_select == 1)
+ 			mode = LUT_RAM_B;
++		else
++			mode = LUT_BYPASS;
+ 	} else {
+ 		// Reserved value
+ 		mode = LUT_BYPASS;
+@@ -172,10 +176,10 @@ static void dwb3_configure_ogam_lut(
+ 	struct dcn30_dwbc *dwbc30,
+ 	bool is_ram_a)
+ {
+-	REG_UPDATE(DWB_OGAM_LUT_CONTROL,
+-		DWB_OGAM_LUT_READ_COLOR_SEL, 7);
+-	REG_UPDATE(DWB_OGAM_CONTROL,
+-		DWB_OGAM_SELECT, is_ram_a == true ? 0 : 1);
++	REG_UPDATE_2(DWB_OGAM_LUT_CONTROL,
++		DWB_OGAM_LUT_WRITE_COLOR_MASK, 7,
++		DWB_OGAM_LUT_HOST_SEL, (is_ram_a == true) ? 0 : 1);
++
+ 	REG_SET(DWB_OGAM_LUT_INDEX, 0, DWB_OGAM_LUT_INDEX, 0);
+ }
+ 
+@@ -185,17 +189,45 @@ static void dwb3_program_ogam_pwl(struct dcn30_dwbc *dwbc30,
+ {
+ 	uint32_t i;
+ 
+-    // triple base implementation
+-	for (i = 0; i < num/2; i++) {
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+0].red_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+0].green_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+0].blue_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+1].red_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+1].green_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+1].blue_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+2].red_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+2].green_reg);
+-		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[2*i+2].blue_reg);
++	uint32_t last_base_value_red = rgb[num-1].red_reg + rgb[num-1].delta_red_reg;
++	uint32_t last_base_value_green = rgb[num-1].green_reg + rgb[num-1].delta_green_reg;
++	uint32_t last_base_value_blue = rgb[num-1].blue_reg + rgb[num-1].delta_blue_reg;
++
++	if (is_rgb_equal(rgb,  num)) {
++		for (i = 0 ; i < num; i++)
++			REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[i].red_reg);
++
++		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, last_base_value_red);
++
++	} else {
++
++		REG_UPDATE(DWB_OGAM_LUT_CONTROL,
++				DWB_OGAM_LUT_WRITE_COLOR_MASK, 4);
++
++		for (i = 0 ; i < num; i++)
++			REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[i].red_reg);
++
++		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, last_base_value_red);
++
++		REG_SET(DWB_OGAM_LUT_INDEX, 0, DWB_OGAM_LUT_INDEX, 0);
++
++		REG_UPDATE(DWB_OGAM_LUT_CONTROL,
++				DWB_OGAM_LUT_WRITE_COLOR_MASK, 2);
++
++		for (i = 0 ; i < num; i++)
++			REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[i].green_reg);
++
++		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, last_base_value_green);
++
++		REG_SET(DWB_OGAM_LUT_INDEX, 0, DWB_OGAM_LUT_INDEX, 0);
++
++		REG_UPDATE(DWB_OGAM_LUT_CONTROL,
++				DWB_OGAM_LUT_WRITE_COLOR_MASK, 1);
++
++		for (i = 0 ; i < num; i++)
++			REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, rgb[i].blue_reg);
++
++		REG_SET(DWB_OGAM_LUT_DATA, 0, DWB_OGAM_LUT_DATA, last_base_value_blue);
+ 	}
+ }
+ 
+@@ -211,6 +243,8 @@ static bool dwb3_program_ogam_lut(
+ 		return false;
+ 	}
+ 
++	REG_SET(DWB_OGAM_CONTROL, 0, DWB_OGAM_MODE, 2);
++
+ 	current_mode = dwb3_get_ogam_current(dwbc30);
+ 	if (current_mode == LUT_BYPASS || current_mode == LUT_RAM_A)
+ 		next_mode = LUT_RAM_B;
+@@ -227,8 +261,7 @@ static bool dwb3_program_ogam_lut(
+ 	dwb3_program_ogam_pwl(
+ 		dwbc30, params->rgb_resulted, params->hw_points_num);
+ 
+-	REG_SET(DWB_OGAM_CONTROL, 0, DWB_OGAM_MODE, 2);
+-	REG_SET(DWB_OGAM_CONTROL, 0, DWB_OGAM_SELECT, next_mode == LUT_RAM_A ? 0 : 1);
++	REG_UPDATE(DWB_OGAM_CONTROL, DWB_OGAM_SELECT, next_mode == LUT_RAM_A ? 0 : 1);
+ 
+ 	return true;
+ }
+@@ -271,14 +304,19 @@ static void dwb3_program_gamut_remap(
+ 
+ 	struct color_matrices_reg gam_regs;
+ 
+-	REG_UPDATE(DWB_GAMUT_REMAP_COEF_FORMAT, DWB_GAMUT_REMAP_COEF_FORMAT, coef_format);
+-
+ 	if (regval == NULL || select == CM_GAMUT_REMAP_MODE_BYPASS) {
+ 		REG_SET(DWB_GAMUT_REMAP_MODE, 0,
+ 				DWB_GAMUT_REMAP_MODE, 0);
+ 		return;
+ 	}
+ 
++	REG_UPDATE(DWB_GAMUT_REMAP_COEF_FORMAT, DWB_GAMUT_REMAP_COEF_FORMAT, coef_format);
++
++	gam_regs.shifts.csc_c11 = dwbc30->dwbc_shift->DWB_GAMUT_REMAPA_C11;
++	gam_regs.masks.csc_c11  = dwbc30->dwbc_mask->DWB_GAMUT_REMAPA_C11;
++	gam_regs.shifts.csc_c12 = dwbc30->dwbc_shift->DWB_GAMUT_REMAPA_C12;
++	gam_regs.masks.csc_c12 = dwbc30->dwbc_mask->DWB_GAMUT_REMAPA_C12;
++
+ 	switch (select) {
+ 	case CM_GAMUT_REMAP_MODE_RAMA_COEFF:
+ 		gam_regs.csc_c11_c12 = REG(DWB_GAMUT_REMAPA_C11_C12);
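
The LUT rework programs the red, green, and blue planes in separate passes by flipping DWB_OGAM_LUT_WRITE_COLOR_MASK between 4, 2, and 1 (7 writes all three at once, used on the rgb-equal fast path), resetting the LUT index between passes. A toy model of that masking idea; the R=4/G=2/B=1 bit meanings are inferred from the hunk, not from register documentation:

    #include <stdio.h>

    #define MASK_R 4   /* inferred: bit2 = red, bit1 = green, bit0 = blue */
    #define MASK_G 2
    #define MASK_B 1

    struct lut_entry { unsigned r, g, b; };

    /* A write lands only in the channels enabled by the color mask. */
    static void lut_write(struct lut_entry *e, unsigned mask, unsigned val)
    {
        if (mask & MASK_R) e->r = val;
        if (mask & MASK_G) e->g = val;
        if (mask & MASK_B) e->b = val;
    }

    int main(void)
    {
        struct lut_entry e = { 0 };

        lut_write(&e, MASK_R | MASK_G | MASK_B, 7);  /* rgb-equal path */
        lut_write(&e, MASK_B, 9);                    /* per-channel pass */
        printf("r=%u g=%u b=%u\n", e.r, e.g, e.b);   /* r=7 g=7 b=9 */
        return 0;
    }
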
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+index d53f8b39699b3..37944f94c6931 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+@@ -396,12 +396,22 @@ void dcn30_program_all_writeback_pipes_in_tree(
+ 			for (i_pipe = 0; i_pipe < dc->res_pool->pipe_count; i_pipe++) {
+ 				struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i_pipe];
+ 
++				if (!pipe_ctx->plane_state)
++					continue;
++
+ 				if (pipe_ctx->plane_state == wb_info.writeback_source_plane) {
+ 					wb_info.mpcc_inst = pipe_ctx->plane_res.mpcc_inst;
+ 					break;
+ 				}
+ 			}
+-			ASSERT(wb_info.mpcc_inst != -1);
++
++			if (wb_info.mpcc_inst == -1) {
++				/* Disable writeback pipe and disconnect from MPCC
++				 * if source plane has been removed
++				 */
++				dc->hwss.disable_writeback(dc, wb_info.dwb_pipe_inst);
++				continue;
++			}
+ 
+ 			ASSERT(wb_info.dwb_pipe_inst < dc->res_pool->res_cap->num_dwb);
+ 			dwb = dc->res_pool->dwbc[wb_info.dwb_pipe_inst];
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+index a5a1cb62f967f..393447ebff6e7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+@@ -2398,16 +2398,37 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
+ 	dc->dml.soc.dispclk_dppclk_vco_speed_mhz = dc->clk_mgr->dentist_vco_freq_khz / 1000.0;
+ 
+ 	if (bw_params->clk_table.entries[0].memclk_mhz) {
++		int max_dcfclk_mhz = 0, max_dispclk_mhz = 0, max_dppclk_mhz = 0, max_phyclk_mhz = 0;
++
++		for (i = 0; i < MAX_NUM_DPM_LVL; i++) {
++			if (bw_params->clk_table.entries[i].dcfclk_mhz > max_dcfclk_mhz)
++				max_dcfclk_mhz = bw_params->clk_table.entries[i].dcfclk_mhz;
++			if (bw_params->clk_table.entries[i].dispclk_mhz > max_dispclk_mhz)
++				max_dispclk_mhz = bw_params->clk_table.entries[i].dispclk_mhz;
++			if (bw_params->clk_table.entries[i].dppclk_mhz > max_dppclk_mhz)
++				max_dppclk_mhz = bw_params->clk_table.entries[i].dppclk_mhz;
++			if (bw_params->clk_table.entries[i].phyclk_mhz > max_phyclk_mhz)
++				max_phyclk_mhz = bw_params->clk_table.entries[i].phyclk_mhz;
++		}
++
++		if (!max_dcfclk_mhz)
++			max_dcfclk_mhz = dcn3_0_soc.clock_limits[0].dcfclk_mhz;
++		if (!max_dispclk_mhz)
++			max_dispclk_mhz = dcn3_0_soc.clock_limits[0].dispclk_mhz;
++		if (!max_dppclk_mhz)
++			max_dppclk_mhz = dcn3_0_soc.clock_limits[0].dppclk_mhz;
++		if (!max_phyclk_mhz)
++			max_phyclk_mhz = dcn3_0_soc.clock_limits[0].phyclk_mhz;
+ 
+-		if (bw_params->clk_table.entries[1].dcfclk_mhz > dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
++		if (max_dcfclk_mhz > dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
+ 			// If max DCFCLK is greater than the max DCFCLK STA target, insert into the DCFCLK STA target array
+-			dcfclk_sta_targets[num_dcfclk_sta_targets] = bw_params->clk_table.entries[1].dcfclk_mhz;
++			dcfclk_sta_targets[num_dcfclk_sta_targets] = max_dcfclk_mhz;
+ 			num_dcfclk_sta_targets++;
+-		} else if (bw_params->clk_table.entries[1].dcfclk_mhz < dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
++		} else if (max_dcfclk_mhz < dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
+ 			// If max DCFCLK is less than the max DCFCLK STA target, cap values and remove duplicates
+ 			for (i = 0; i < num_dcfclk_sta_targets; i++) {
+-				if (dcfclk_sta_targets[i] > bw_params->clk_table.entries[1].dcfclk_mhz) {
+-					dcfclk_sta_targets[i] = bw_params->clk_table.entries[1].dcfclk_mhz;
++				if (dcfclk_sta_targets[i] > max_dcfclk_mhz) {
++					dcfclk_sta_targets[i] = max_dcfclk_mhz;
+ 					break;
+ 				}
+ 			}
+@@ -2447,7 +2468,7 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
+ 				dcfclk_mhz[num_states] = dcfclk_sta_targets[i];
+ 				dram_speed_mts[num_states++] = optimal_uclk_for_dcfclk_sta_targets[i++];
+ 			} else {
+-				if (j < num_uclk_states && optimal_dcfclk_for_uclk[j] <= bw_params->clk_table.entries[1].dcfclk_mhz) {
++				if (j < num_uclk_states && optimal_dcfclk_for_uclk[j] <= max_dcfclk_mhz) {
+ 					dcfclk_mhz[num_states] = optimal_dcfclk_for_uclk[j];
+ 					dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+ 				} else {
+@@ -2462,11 +2483,12 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
+ 		}
+ 
+ 		while (j < num_uclk_states && num_states < DC__VOLTAGE_STATES &&
+-				optimal_dcfclk_for_uclk[j] <= bw_params->clk_table.entries[1].dcfclk_mhz) {
++				optimal_dcfclk_for_uclk[j] <= max_dcfclk_mhz) {
+ 			dcfclk_mhz[num_states] = optimal_dcfclk_for_uclk[j];
+ 			dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+ 		}
+ 
++		dcn3_0_soc.num_states = num_states;
+ 		for (i = 0; i < dcn3_0_soc.num_states; i++) {
+ 			dcn3_0_soc.clock_limits[i].state = i;
+ 			dcn3_0_soc.clock_limits[i].dcfclk_mhz = dcfclk_mhz[i];
+@@ -2474,9 +2496,9 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
+ 			dcn3_0_soc.clock_limits[i].dram_speed_mts = dram_speed_mts[i];
+ 
+ 			/* Fill all states with max values of all other clocks */
+-			dcn3_0_soc.clock_limits[i].dispclk_mhz = bw_params->clk_table.entries[1].dispclk_mhz;
+-			dcn3_0_soc.clock_limits[i].dppclk_mhz  = bw_params->clk_table.entries[1].dppclk_mhz;
+-			dcn3_0_soc.clock_limits[i].phyclk_mhz  = bw_params->clk_table.entries[1].phyclk_mhz;
++			dcn3_0_soc.clock_limits[i].dispclk_mhz = max_dispclk_mhz;
++			dcn3_0_soc.clock_limits[i].dppclk_mhz  = max_dppclk_mhz;
++			dcn3_0_soc.clock_limits[i].phyclk_mhz  = max_phyclk_mhz;
+ 			dcn3_0_soc.clock_limits[i].dtbclk_mhz = dcn3_0_soc.clock_limits[0].dtbclk_mhz;
+ 			/* These clocks cannot come from bw_params, always fill from dcn3_0_soc[1] */
+ 			/* FCLK, PHYCLK_D18, SOCCLK, DSCCLK */
+diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
+index 911f9f4147741..39ca338eb80b3 100644
+--- a/drivers/gpu/drm/ast/ast_drv.h
++++ b/drivers/gpu/drm/ast/ast_drv.h
+@@ -337,6 +337,11 @@ int ast_mode_config_init(struct ast_private *ast);
+ #define AST_DP501_LINKRATE	0xf014
+ #define AST_DP501_EDID_DATA	0xf020
+ 
++/* Define for Soc scratched reg */
++#define AST_VRAM_INIT_STATUS_MASK	GENMASK(7, 6)
++//#define AST_VRAM_INIT_BY_BMC		BIT(7)
++//#define AST_VRAM_INIT_READY		BIT(6)
++
+ int ast_mm_init(struct ast_private *ast);
+ 
+ /* ast post */
+@@ -346,6 +351,7 @@ bool ast_is_vga_enabled(struct drm_device *dev);
+ void ast_post_gpu(struct drm_device *dev);
+ u32 ast_mindwm(struct ast_private *ast, u32 r);
+ void ast_moutdwm(struct ast_private *ast, u32 r, u32 v);
++void ast_patch_ahb_2500(struct ast_private *ast);
+ /* ast dp501 */
+ void ast_set_dp501_video_output(struct drm_device *dev, u8 mode);
+ bool ast_backup_fw(struct drm_device *dev, u8 *addr, u32 size);
+diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
+index 2aff2e6cf450c..79a3618679554 100644
+--- a/drivers/gpu/drm/ast/ast_main.c
++++ b/drivers/gpu/drm/ast/ast_main.c
+@@ -97,6 +97,11 @@ static void ast_detect_config_mode(struct drm_device *dev, u32 *scu_rev)
+ 	jregd0 = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd0, 0xff);
+ 	jregd1 = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd1, 0xff);
+ 	if (!(jregd0 & 0x80) || !(jregd1 & 0x10)) {
++		/* Patch AST2500 */
++		if (((pdev->revision & 0xF0) == 0x40)
++			&& ((jregd0 & AST_VRAM_INIT_STATUS_MASK) == 0))
++			ast_patch_ahb_2500(ast);
++
+ 		/* Double check it's actually working */
+ 		data = ast_read32(ast, 0xf004);
+ 		if ((data != 0xFFFFFFFF) && (data != 0x00)) {
+diff --git a/drivers/gpu/drm/ast/ast_post.c b/drivers/gpu/drm/ast/ast_post.c
+index 0607658dde51b..b5d92f652fd85 100644
+--- a/drivers/gpu/drm/ast/ast_post.c
++++ b/drivers/gpu/drm/ast/ast_post.c
+@@ -2028,6 +2028,40 @@ static bool ast_dram_init_2500(struct ast_private *ast)
+ 	return true;
+ }
+ 
++void ast_patch_ahb_2500(struct ast_private *ast)
++{
++	u32	data;
++
++	/* Clear bus lock condition */
++	ast_moutdwm(ast, 0x1e600000, 0xAEED1A03);
++	ast_moutdwm(ast, 0x1e600084, 0x00010000);
++	ast_moutdwm(ast, 0x1e600088, 0x00000000);
++	ast_moutdwm(ast, 0x1e6e2000, 0x1688A8A8);
++	data = ast_mindwm(ast, 0x1e6e2070);
++	if (data & 0x08000000) {					/* check fast reset */
++		/*
++		 * If "Fast reset" is enabled for the ARM-ICE debugger,
++		 * then the WDT needs to be enabled, where
++		 * WDT04 is WDT#1 Reload reg.
++		 * WDT08 is WDT#1 counter restart reg to avoid system deadlock
++		 * WDT0C is WDT#1 control reg
++		 *	[6:5]:= 01:Full chip
++		 *	[4]:= 1:1MHz clock source
++		 *	[1]:= 1:WDT will be cleared and disabled after timeout occurs
++		 *	[0]:= 1:WDT enable
++		 */
++		ast_moutdwm(ast, 0x1E785004, 0x00000010);
++		ast_moutdwm(ast, 0x1E785008, 0x00004755);
++		ast_moutdwm(ast, 0x1E78500c, 0x00000033);
++		udelay(1000);
++	}
++	do {
++		ast_moutdwm(ast, 0x1e6e2000, 0x1688A8A8);
++		data = ast_mindwm(ast, 0x1e6e2000);
++	}	while (data != 1);
++	ast_moutdwm(ast, 0x1e6e207c, 0x08000000);	/* clear fast reset */
++}
++
+ void ast_post_chip_2500(struct drm_device *dev)
+ {
+ 	struct ast_private *ast = to_ast_private(dev);
+@@ -2035,39 +2069,44 @@ void ast_post_chip_2500(struct drm_device *dev)
+ 	u8 reg;
+ 
+ 	reg = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd0, 0xff);
+-	if ((reg & 0x80) == 0) {/* vga only */
++	if ((reg & AST_VRAM_INIT_STATUS_MASK) == 0) {/* vga only */
+ 		/* Clear bus lock condition */
+-		ast_moutdwm(ast, 0x1e600000, 0xAEED1A03);
+-		ast_moutdwm(ast, 0x1e600084, 0x00010000);
+-		ast_moutdwm(ast, 0x1e600088, 0x00000000);
+-		ast_moutdwm(ast, 0x1e6e2000, 0x1688A8A8);
+-		ast_write32(ast, 0xf004, 0x1e6e0000);
+-		ast_write32(ast, 0xf000, 0x1);
+-		ast_write32(ast, 0x12000, 0x1688a8a8);
+-		while (ast_read32(ast, 0x12000) != 0x1)
+-			;
+-
+-		ast_write32(ast, 0x10000, 0xfc600309);
+-		while (ast_read32(ast, 0x10000) != 0x1)
+-			;
++		ast_patch_ahb_2500(ast);
++
++		/* Disable watchdog */
++		ast_moutdwm(ast, 0x1E78502C, 0x00000000);
++		ast_moutdwm(ast, 0x1E78504C, 0x00000000);
++
++		/*
++		 * Reset the USB port to work around the USB unknown-device issue
++		 * SCU90 is Multi-function Pin Control #5
++		 *	[29]:= 1:Enable USB2.0 Host port#1 (i.e. the mutually shared USB2.0 Hub
++		 *				port).
++		 * SCU94 is Multi-function Pin Control #6
++		 *	[14:13]:= 1x:USB2.0 Host2 controller
++		 * SCU70 is Hardware Strap reg
++		 *	[23]:= 1:CLKIN is 25MHz and USBCK1 = 24/48 MHz (determined by
++		 *				[18]: 0(24)/1(48) MHz)
++		 * SCU7C is Write clear reg to SCU70
++		 *	[23]:= write 1 and then SCU70[23] will be cleared to 0b.
++		 */
++		ast_moutdwm(ast, 0x1E6E2090, 0x20000000);
++		ast_moutdwm(ast, 0x1E6E2094, 0x00004000);
++		if (ast_mindwm(ast, 0x1E6E2070) & 0x00800000) {
++			ast_moutdwm(ast, 0x1E6E207C, 0x00800000);
++			mdelay(100);
++			ast_moutdwm(ast, 0x1E6E2070, 0x00800000);
++		}
++		/* Modify eSPI reset pin */
++		temp = ast_mindwm(ast, 0x1E6E2070);
++		if (temp & 0x02000000)
++			ast_moutdwm(ast, 0x1E6E207C, 0x00004000);
+ 
+ 		/* Slow down CPU/AHB CLK in VGA only mode */
+ 		temp = ast_read32(ast, 0x12008);
+ 		temp |= 0x73;
+ 		ast_write32(ast, 0x12008, temp);
+ 
+-		/* Reset USB port to patch USB unknown device issue */
+-		ast_moutdwm(ast, 0x1e6e2090, 0x20000000);
+-		temp  = ast_mindwm(ast, 0x1e6e2094);
+-		temp |= 0x00004000;
+-		ast_moutdwm(ast, 0x1e6e2094, temp);
+-		temp  = ast_mindwm(ast, 0x1e6e2070);
+-		if (temp & 0x00800000) {
+-			ast_moutdwm(ast, 0x1e6e207c, 0x00800000);
+-			mdelay(100);
+-			ast_moutdwm(ast, 0x1e6e2070, 0x00800000);
+-		}
+-
+ 		if (!ast_dram_init_2500(ast))
+ 			drm_err(dev, "DRAM init failed !\n");
+ 
+diff --git a/drivers/gpu/drm/bridge/nwl-dsi.c b/drivers/gpu/drm/bridge/nwl-dsi.c
+index c65ca860712d2..6cac2e58cd15f 100644
+--- a/drivers/gpu/drm/bridge/nwl-dsi.c
++++ b/drivers/gpu/drm/bridge/nwl-dsi.c
+@@ -196,7 +196,7 @@ static u32 ps2bc(struct nwl_dsi *dsi, unsigned long long ps)
+ 	u32 bpp = mipi_dsi_pixel_format_to_bpp(dsi->format);
+ 
+ 	return DIV64_U64_ROUND_UP(ps * dsi->mode.clock * bpp,
+-				  dsi->lanes * 8 * NSEC_PER_SEC);
++				  dsi->lanes * 8ULL * NSEC_PER_SEC);
+ }
+ 
+ /*
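
The nwl-dsi one-liner is a classic integer-promotion fix: on targets where the intermediate math is 32-bit, lanes * 8 * NSEC_PER_SEC wraps before DIV64_U64_ROUND_UP ever sees it, and making one factor 8ULL promotes the whole product to 64 bits. A minimal demonstration with explicit 32-bit types:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t lanes = 4;

        /* 32-bit arithmetic wraps: 4 * 8 * 1e9 = 3.2e10 > UINT32_MAX */
        uint32_t bad = lanes * 8 * 1000000000u;

        /* One ULL factor promotes the whole product to 64 bits */
        uint64_t good = lanes * 8ULL * 1000000000u;

        printf("%u vs %llu\n", bad, (unsigned long long)good);
        return 0;
    }
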
+diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
+index 232abbba36868..c7adbeaf10b1b 100644
+--- a/drivers/gpu/drm/drm_auth.c
++++ b/drivers/gpu/drm/drm_auth.c
+@@ -135,16 +135,18 @@ static void drm_set_master(struct drm_device *dev, struct drm_file *fpriv,
+ static int drm_new_set_master(struct drm_device *dev, struct drm_file *fpriv)
+ {
+ 	struct drm_master *old_master;
++	struct drm_master *new_master;
+ 
+ 	lockdep_assert_held_once(&dev->master_mutex);
+ 
+ 	WARN_ON(fpriv->is_master);
+ 	old_master = fpriv->master;
+-	fpriv->master = drm_master_create(dev);
+-	if (!fpriv->master) {
+-		fpriv->master = old_master;
++	new_master = drm_master_create(dev);
++	if (!new_master)
+ 		return -ENOMEM;
+-	}
++	spin_lock(&fpriv->master_lookup_lock);
++	fpriv->master = new_master;
++	spin_unlock(&fpriv->master_lookup_lock);
+ 
+ 	fpriv->is_master = 1;
+ 	fpriv->authenticated = 1;
+@@ -302,10 +304,13 @@ int drm_master_open(struct drm_file *file_priv)
+ 	/* if there is no current master make this fd it, but do not create
+ 	 * any master object for render clients */
+ 	mutex_lock(&dev->master_mutex);
+-	if (!dev->master)
++	if (!dev->master) {
+ 		ret = drm_new_set_master(dev, file_priv);
+-	else
++	} else {
++		spin_lock(&file_priv->master_lookup_lock);
+ 		file_priv->master = drm_master_get(dev->master);
++		spin_unlock(&file_priv->master_lookup_lock);
++	}
+ 	mutex_unlock(&dev->master_mutex);
+ 
+ 	return ret;
+@@ -371,6 +376,31 @@ struct drm_master *drm_master_get(struct drm_master *master)
+ }
+ EXPORT_SYMBOL(drm_master_get);
+ 
++/**
++ * drm_file_get_master - reference &drm_file.master of @file_priv
++ * @file_priv: DRM file private
++ *
++ * Increments the reference count of @file_priv's &drm_file.master and returns
++ * the &drm_file.master. If @file_priv has no &drm_file.master, returns NULL.
++ *
++ * Master pointers returned from this function should be unreferenced using
++ * drm_master_put().
++ */
++struct drm_master *drm_file_get_master(struct drm_file *file_priv)
++{
++	struct drm_master *master = NULL;
++
++	spin_lock(&file_priv->master_lookup_lock);
++	if (!file_priv->master)
++		goto unlock;
++	master = drm_master_get(file_priv->master);
++
++unlock:
++	spin_unlock(&file_priv->master_lookup_lock);
++	return master;
++}
++EXPORT_SYMBOL(drm_file_get_master);
++
+ static void drm_master_destroy(struct kref *kref)
+ {
+ 	struct drm_master *master = container_of(kref, struct drm_master, refcount);
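
drm_file_get_master() establishes the pattern the later drm_lease.c hunks follow: the new master_lookup_lock spinlock is held only long enough to take a reference, never across use. A sketch of the intended caller shape (kernel context, not standalone; "fallback" is a placeholder):

    struct drm_master *master = drm_file_get_master(file_priv);

    if (!master)
        return fallback;           /* file has no master */

    /* ... use master without holding master_lookup_lock ... */

    drm_master_put(&master);       /* drop the reference taken above */
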
+diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
+index 3d7182001004d..b0a8264894885 100644
+--- a/drivers/gpu/drm/drm_debugfs.c
++++ b/drivers/gpu/drm/drm_debugfs.c
+@@ -91,6 +91,7 @@ static int drm_clients_info(struct seq_file *m, void *data)
+ 	mutex_lock(&dev->filelist_mutex);
+ 	list_for_each_entry_reverse(priv, &dev->filelist, lhead) {
+ 		struct task_struct *task;
++		bool is_current_master = drm_is_current_master(priv);
+ 
+ 		rcu_read_lock(); /* locks pid_task()->comm */
+ 		task = pid_task(priv->pid, PIDTYPE_PID);
+@@ -99,7 +100,7 @@ static int drm_clients_info(struct seq_file *m, void *data)
+ 			   task ? task->comm : "<unknown>",
+ 			   pid_vnr(priv->pid),
+ 			   priv->minor->index,
+-			   drm_is_current_master(priv) ? 'y' : 'n',
++			   is_current_master ? 'y' : 'n',
+ 			   priv->authenticated ? 'y' : 'n',
+ 			   from_kuid_munged(seq_user_ns(m), uid),
+ 			   priv->magic);
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index a68dc25a19c6d..04e7a8d20f259 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -2867,11 +2867,13 @@ static int process_single_tx_qlock(struct drm_dp_mst_topology_mgr *mgr,
+ 	idx += tosend + 1;
+ 
+ 	ret = drm_dp_send_sideband_msg(mgr, up, chunk, idx);
+-	if (unlikely(ret) && drm_debug_enabled(DRM_UT_DP)) {
+-		struct drm_printer p = drm_debug_printer(DBG_PREFIX);
++	if (ret) {
++		if (drm_debug_enabled(DRM_UT_DP)) {
++			struct drm_printer p = drm_debug_printer(DBG_PREFIX);
+ 
+-		drm_printf(&p, "sideband msg failed to send\n");
+-		drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
++			drm_printf(&p, "sideband msg failed to send\n");
++			drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
++		}
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index 7efbccffc2eaf..c6feeb5651b0c 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -176,6 +176,7 @@ struct drm_file *drm_file_alloc(struct drm_minor *minor)
+ 	init_waitqueue_head(&file->event_wait);
+ 	file->event_space = 4096; /* set aside 4k for event buffer */
+ 
++	spin_lock_init(&file->master_lookup_lock);
+ 	mutex_init(&file->event_read_lock);
+ 
+ 	if (drm_core_check_feature(dev, DRIVER_GEM))
+diff --git a/drivers/gpu/drm/drm_lease.c b/drivers/gpu/drm/drm_lease.c
+index da4f085fc09e7..aef22634005ef 100644
+--- a/drivers/gpu/drm/drm_lease.c
++++ b/drivers/gpu/drm/drm_lease.c
+@@ -107,10 +107,19 @@ static bool _drm_has_leased(struct drm_master *master, int id)
+  */
+ bool _drm_lease_held(struct drm_file *file_priv, int id)
+ {
+-	if (!file_priv || !file_priv->master)
++	bool ret;
++	struct drm_master *master;
++
++	if (!file_priv)
+ 		return true;
+ 
+-	return _drm_lease_held_master(file_priv->master, id);
++	master = drm_file_get_master(file_priv);
++	if (!master)
++		return true;
++	ret = _drm_lease_held_master(master, id);
++	drm_master_put(&master);
++
++	return ret;
+ }
+ 
+ /**
+@@ -129,13 +138,22 @@ bool drm_lease_held(struct drm_file *file_priv, int id)
+ 	struct drm_master *master;
+ 	bool ret;
+ 
+-	if (!file_priv || !file_priv->master || !file_priv->master->lessor)
++	if (!file_priv)
+ 		return true;
+ 
+-	master = file_priv->master;
++	master = drm_file_get_master(file_priv);
++	if (!master)
++		return true;
++	if (!master->lessor) {
++		ret = true;
++		goto out;
++	}
+ 	mutex_lock(&master->dev->mode_config.idr_mutex);
+ 	ret = _drm_lease_held_master(master, id);
+ 	mutex_unlock(&master->dev->mode_config.idr_mutex);
++
++out:
++	drm_master_put(&master);
+ 	return ret;
+ }
+ 
+@@ -155,10 +173,16 @@ uint32_t drm_lease_filter_crtcs(struct drm_file *file_priv, uint32_t crtcs_in)
+ 	int count_in, count_out;
+ 	uint32_t crtcs_out = 0;
+ 
+-	if (!file_priv || !file_priv->master || !file_priv->master->lessor)
++	if (!file_priv)
+ 		return crtcs_in;
+ 
+-	master = file_priv->master;
++	master = drm_file_get_master(file_priv);
++	if (!master)
++		return crtcs_in;
++	if (!master->lessor) {
++		crtcs_out = crtcs_in;
++		goto out;
++	}
+ 	dev = master->dev;
+ 
+ 	count_in = count_out = 0;
+@@ -177,6 +201,9 @@ uint32_t drm_lease_filter_crtcs(struct drm_file *file_priv, uint32_t crtcs_in)
+ 		count_in++;
+ 	}
+ 	mutex_unlock(&master->dev->mode_config.idr_mutex);
++
++out:
++	drm_master_put(&master);
+ 	return crtcs_out;
+ }
+ 
+@@ -490,7 +517,7 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	size_t object_count;
+ 	int ret = 0;
+ 	struct idr leases;
+-	struct drm_master *lessor = lessor_priv->master;
++	struct drm_master *lessor;
+ 	struct drm_master *lessee = NULL;
+ 	struct file *lessee_file = NULL;
+ 	struct file *lessor_file = lessor_priv->filp;
+@@ -502,12 +529,6 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+ 
+-	/* Do not allow sub-leases */
+-	if (lessor->lessor) {
+-		DRM_DEBUG_LEASE("recursive leasing not allowed\n");
+-		return -EINVAL;
+-	}
+-
+ 	/* need some objects */
+ 	if (cl->object_count == 0) {
+ 		DRM_DEBUG_LEASE("no objects in lease\n");
+@@ -519,12 +540,22 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 		return -EINVAL;
+ 	}
+ 
++	lessor = drm_file_get_master(lessor_priv);
++	/* Do not allow sub-leases */
++	if (lessor->lessor) {
++		DRM_DEBUG_LEASE("recursive leasing not allowed\n");
++		ret = -EINVAL;
++		goto out_lessor;
++	}
++
+ 	object_count = cl->object_count;
+ 
+ 	object_ids = memdup_user(u64_to_user_ptr(cl->object_ids),
+ 			array_size(object_count, sizeof(__u32)));
+-	if (IS_ERR(object_ids))
+-		return PTR_ERR(object_ids);
++	if (IS_ERR(object_ids)) {
++		ret = PTR_ERR(object_ids);
++		goto out_lessor;
++	}
+ 
+ 	idr_init(&leases);
+ 
+@@ -535,14 +566,15 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	if (ret) {
+ 		DRM_DEBUG_LEASE("lease object lookup failed: %i\n", ret);
+ 		idr_destroy(&leases);
+-		return ret;
++		goto out_lessor;
+ 	}
+ 
+ 	/* Allocate a file descriptor for the lease */
+ 	fd = get_unused_fd_flags(cl->flags & (O_CLOEXEC | O_NONBLOCK));
+ 	if (fd < 0) {
+ 		idr_destroy(&leases);
+-		return fd;
++		ret = fd;
++		goto out_lessor;
+ 	}
+ 
+ 	DRM_DEBUG_LEASE("Creating lease\n");
+@@ -578,6 +610,7 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	/* Hook up the fd */
+ 	fd_install(fd, lessee_file);
+ 
++	drm_master_put(&lessor);
+ 	DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl succeeded\n");
+ 	return 0;
+ 
+@@ -587,6 +620,8 @@ out_lessee:
+ out_leases:
+ 	put_unused_fd(fd);
+ 
++out_lessor:
++	drm_master_put(&lessor);
+ 	DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl failed: %d\n", ret);
+ 	return ret;
+ }
+@@ -609,7 +644,7 @@ int drm_mode_list_lessees_ioctl(struct drm_device *dev,
+ 	struct drm_mode_list_lessees *arg = data;
+ 	__u32 __user *lessee_ids = (__u32 __user *) (uintptr_t) (arg->lessees_ptr);
+ 	__u32 count_lessees = arg->count_lessees;
+-	struct drm_master *lessor = lessor_priv->master, *lessee;
++	struct drm_master *lessor, *lessee;
+ 	int count;
+ 	int ret = 0;
+ 
+@@ -620,6 +655,7 @@ int drm_mode_list_lessees_ioctl(struct drm_device *dev,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+ 
++	lessor = drm_file_get_master(lessor_priv);
+ 	DRM_DEBUG_LEASE("List lessees for %d\n", lessor->lessee_id);
+ 
+ 	mutex_lock(&dev->mode_config.idr_mutex);
+@@ -643,6 +679,7 @@ int drm_mode_list_lessees_ioctl(struct drm_device *dev,
+ 		arg->count_lessees = count;
+ 
+ 	mutex_unlock(&dev->mode_config.idr_mutex);
++	drm_master_put(&lessor);
+ 
+ 	return ret;
+ }
+@@ -662,7 +699,7 @@ int drm_mode_get_lease_ioctl(struct drm_device *dev,
+ 	struct drm_mode_get_lease *arg = data;
+ 	__u32 __user *object_ids = (__u32 __user *) (uintptr_t) (arg->objects_ptr);
+ 	__u32 count_objects = arg->count_objects;
+-	struct drm_master *lessee = lessee_priv->master;
++	struct drm_master *lessee;
+ 	struct idr *object_idr;
+ 	int count;
+ 	void *entry;
+@@ -676,6 +713,7 @@ int drm_mode_get_lease_ioctl(struct drm_device *dev,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+ 
++	lessee = drm_file_get_master(lessee_priv);
+ 	DRM_DEBUG_LEASE("get lease for %d\n", lessee->lessee_id);
+ 
+ 	mutex_lock(&dev->mode_config.idr_mutex);
+@@ -703,6 +741,7 @@ int drm_mode_get_lease_ioctl(struct drm_device *dev,
+ 		arg->count_objects = count;
+ 
+ 	mutex_unlock(&dev->mode_config.idr_mutex);
++	drm_master_put(&lessee);
+ 
+ 	return ret;
+ }
+@@ -721,7 +760,7 @@ int drm_mode_revoke_lease_ioctl(struct drm_device *dev,
+ 				void *data, struct drm_file *lessor_priv)
+ {
+ 	struct drm_mode_revoke_lease *arg = data;
+-	struct drm_master *lessor = lessor_priv->master;
++	struct drm_master *lessor;
+ 	struct drm_master *lessee;
+ 	int ret = 0;
+ 
+@@ -731,6 +770,7 @@ int drm_mode_revoke_lease_ioctl(struct drm_device *dev,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EOPNOTSUPP;
+ 
++	lessor = drm_file_get_master(lessor_priv);
+ 	mutex_lock(&dev->mode_config.idr_mutex);
+ 
+ 	lessee = _drm_find_lessee(lessor, arg->lessee_id);
+@@ -751,6 +791,7 @@ int drm_mode_revoke_lease_ioctl(struct drm_device *dev,
+ 
+ fail:
+ 	mutex_unlock(&dev->mode_config.idr_mutex);
++	drm_master_put(&lessor);
+ 
+ 	return ret;
+ }
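+/*
+ * Editor's sketch, not part of the upstream patch: every lease ioctl
+ * above now follows the same reference-counted lookup pattern instead of
+ * dereferencing file_priv->master directly. The function name below is
+ * illustrative; the helpers are the ones introduced in drm_auth.c.
+ */
+static bool example_lease_query(struct drm_file *file_priv, int id)
+{
+	struct drm_master *master;
+	bool held;
+
+	master = drm_file_get_master(file_priv);	/* takes a reference */
+	if (!master)
+		return true;	/* no master: treat the object as unleased */
+
+	mutex_lock(&master->dev->mode_config.idr_mutex);
+	held = _drm_lease_held_master(master, id);
+	mutex_unlock(&master->dev->mode_config.idr_mutex);
+
+	drm_master_put(&master);	/* drops the reference, NULLs master */
+	return held;
+}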
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_dma.c b/drivers/gpu/drm/exynos/exynos_drm_dma.c
+index 0644936afee26..bf33c3084cb41 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_dma.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_dma.c
+@@ -115,6 +115,8 @@ int exynos_drm_register_dma(struct drm_device *drm, struct device *dev,
+ 				EXYNOS_DEV_ADDR_START, EXYNOS_DEV_ADDR_SIZE);
+ 		else if (IS_ENABLED(CONFIG_IOMMU_DMA))
+ 			mapping = iommu_get_domain_for_dev(priv->dma_dev);
++		else
++			mapping = ERR_PTR(-ENODEV);
+ 
+ 		if (IS_ERR(mapping))
+ 			return PTR_ERR(mapping);
+diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.h b/drivers/gpu/drm/mgag200/mgag200_drv.h
+index 749a075fe9e4c..d1b51c133e27a 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_drv.h
++++ b/drivers/gpu/drm/mgag200/mgag200_drv.h
+@@ -43,6 +43,22 @@
+ #define ATTR_INDEX 0x1fc0
+ #define ATTR_DATA 0x1fc1
+ 
++#define WREG_MISC(v)						\
++	WREG8(MGA_MISC_OUT, v)
++
++#define RREG_MISC(v)						\
++	((v) = RREG8(MGA_MISC_IN))
++
++#define WREG_MISC_MASKED(v, mask)				\
++	do {							\
++		u8 misc_;					\
++		u8 mask_ = (mask);				\
++		RREG_MISC(misc_);				\
++		misc_ &= ~mask_;				\
++		misc_ |= ((v) & mask_);				\
++		WREG_MISC(misc_);				\
++	} while (0)
++
+ #define WREG_ATTR(reg, v)					\
+ 	do {							\
+ 		RREG8(0x1fda);					\
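+/*
+ * Editor's note, illustrative only: expanding
+ * WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK) by
+ * hand shows the read-modify-write it performs on the MISC register:
+ *
+ *	u8 misc;
+ *	RREG_MISC(misc);			// read current MISC value
+ *	misc &= ~MGAREG_MISC_CLKSEL_MASK;	// clear clock-select bits 3:2
+ *	misc |= MGAREG_MISC_CLKSEL_MGA;		// select the MGA pixel clock
+ *	WREG_MISC(misc);			// write the result back
+ *
+ * Only the clock-select field changes; the other MISC bits (sync
+ * polarity, RAM map enable, ...) are preserved, which is why each
+ * *_set_plls() path below can call it before programming the PLL.
+ */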
+diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
+index cece3e57fb273..45d71d65b6d3e 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
+@@ -174,6 +174,8 @@ static int mgag200_g200_set_plls(struct mga_device *mdev, long clock)
+ 	drm_dbg_kms(dev, "clock: %ld vco: %ld m: %d n: %d p: %d s: %d\n",
+ 		    clock, f_vco, m, n, p, s);
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	WREG_DAC(MGA1064_PIX_PLLC_M, m);
+ 	WREG_DAC(MGA1064_PIX_PLLC_N, n);
+ 	WREG_DAC(MGA1064_PIX_PLLC_P, (p | (s << 3)));
+@@ -289,6 +291,8 @@ static int mga_g200se_set_plls(struct mga_device *mdev, long clock)
+ 		return 1;
+ 	}
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	WREG_DAC(MGA1064_PIX_PLLC_M, m);
+ 	WREG_DAC(MGA1064_PIX_PLLC_N, n);
+ 	WREG_DAC(MGA1064_PIX_PLLC_P, p);
+@@ -385,6 +389,8 @@ static int mga_g200wb_set_plls(struct mga_device *mdev, long clock)
+ 		}
+ 	}
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	for (i = 0; i <= 32 && pll_locked == false; i++) {
+ 		if (i > 0) {
+ 			WREG8(MGAREG_CRTC_INDEX, 0x1e);
+@@ -522,6 +528,8 @@ static int mga_g200ev_set_plls(struct mga_device *mdev, long clock)
+ 		}
+ 	}
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);
+ 	tmp = RREG8(DAC_DATA);
+ 	tmp |= MGA1064_PIX_CLK_CTL_CLK_DIS;
+@@ -654,6 +662,9 @@ static int mga_g200eh_set_plls(struct mga_device *mdev, long clock)
+ 			}
+ 		}
+ 	}
++
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	for (i = 0; i <= 32 && pll_locked == false; i++) {
+ 		WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);
+ 		tmp = RREG8(DAC_DATA);
+@@ -754,6 +765,8 @@ static int mga_g200er_set_plls(struct mga_device *mdev, long clock)
+ 		}
+ 	}
+ 
++	WREG_MISC_MASKED(MGAREG_MISC_CLKSEL_MGA, MGAREG_MISC_CLKSEL_MASK);
++
+ 	WREG8(DAC_INDEX, MGA1064_PIX_CLK_CTL);
+ 	tmp = RREG8(DAC_DATA);
+ 	tmp |= MGA1064_PIX_CLK_CTL_CLK_DIS;
+@@ -787,8 +800,6 @@ static int mga_g200er_set_plls(struct mga_device *mdev, long clock)
+ 
+ static int mgag200_crtc_set_plls(struct mga_device *mdev, long clock)
+ {
+-	u8 misc;
+-
+ 	switch(mdev->type) {
+ 	case G200_PCI:
+ 	case G200_AGP:
+@@ -808,11 +819,6 @@ static int mgag200_crtc_set_plls(struct mga_device *mdev, long clock)
+ 		return mga_g200er_set_plls(mdev, clock);
+ 	}
+ 
+-	misc = RREG8(MGA_MISC_IN);
+-	misc &= ~MGAREG_MISC_CLK_SEL_MASK;
+-	misc |= MGAREG_MISC_CLK_SEL_MGA_MSK;
+-	WREG8(MGA_MISC_OUT, misc);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/mgag200/mgag200_reg.h b/drivers/gpu/drm/mgag200/mgag200_reg.h
+index 977be0565c061..60e705283fe84 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_reg.h
++++ b/drivers/gpu/drm/mgag200/mgag200_reg.h
+@@ -222,11 +222,10 @@
+ 
+ #define MGAREG_MISC_IOADSEL	(0x1 << 0)
+ #define MGAREG_MISC_RAMMAPEN	(0x1 << 1)
+-#define MGAREG_MISC_CLK_SEL_MASK	GENMASK(3, 2)
+-#define MGAREG_MISC_CLK_SEL_VGA25	(0x0 << 2)
+-#define MGAREG_MISC_CLK_SEL_VGA28	(0x1 << 2)
+-#define MGAREG_MISC_CLK_SEL_MGA_PIX	(0x2 << 2)
+-#define MGAREG_MISC_CLK_SEL_MGA_MSK	(0x3 << 2)
++#define MGAREG_MISC_CLKSEL_MASK		GENMASK(3, 2)
++#define MGAREG_MISC_CLKSEL_VGA25	(0x0 << 2)
++#define MGAREG_MISC_CLKSEL_VGA28	(0x1 << 2)
++#define MGAREG_MISC_CLKSEL_MGA		(0x3 << 2)
+ #define MGAREG_MISC_VIDEO_DIS	(0x1 << 4)
+ #define MGAREG_MISC_HIGH_PG_SEL	(0x1 << 5)
+ #define MGAREG_MISC_HSYNCPOL		BIT(6)
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index 2daf81f630764..b057295574361 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -902,6 +902,7 @@ static const struct dpu_perf_cfg sdm845_perf_data = {
+ 	.amortizable_threshold = 25,
+ 	.min_prefill_lines = 24,
+ 	.danger_lut_tbl = {0xf, 0xffff, 0x0},
++	.safe_lut_tbl = {0xfff0, 0xf000, 0xffff},
+ 	.qos_lut_tbl = {
+ 		{.nentry = ARRAY_SIZE(sdm845_qos_linear),
+ 		.entries = sdm845_qos_linear
+@@ -929,6 +930,7 @@ static const struct dpu_perf_cfg sc7180_perf_data = {
+ 	.min_dram_ib = 1600000,
+ 	.min_prefill_lines = 24,
+ 	.danger_lut_tbl = {0xff, 0xffff, 0x0},
++	.safe_lut_tbl = {0xfff0, 0xff00, 0xffff},
+ 	.qos_lut_tbl = {
+ 		{.nentry = ARRAY_SIZE(sc7180_qos_linear),
+ 		.entries = sc7180_qos_linear
+@@ -956,6 +958,7 @@ static const struct dpu_perf_cfg sm8150_perf_data = {
+ 	.min_dram_ib = 800000,
+ 	.min_prefill_lines = 24,
+ 	.danger_lut_tbl = {0xf, 0xffff, 0x0},
++	.safe_lut_tbl = {0xfff8, 0xf000, 0xffff},
+ 	.qos_lut_tbl = {
+ 		{.nentry = ARRAY_SIZE(sm8150_qos_linear),
+ 		.entries = sm8150_qos_linear
+@@ -984,6 +987,7 @@ static const struct dpu_perf_cfg sm8250_perf_data = {
+ 	.min_dram_ib = 800000,
+ 	.min_prefill_lines = 35,
+ 	.danger_lut_tbl = {0xf, 0xffff, 0x0},
++	.safe_lut_tbl = {0xfff0, 0xff00, 0xffff},
+ 	.qos_lut_tbl = {
+ 		{.nentry = ARRAY_SIZE(sc7180_qos_linear),
+ 		.entries = sc7180_qos_linear
+@@ -1012,6 +1016,7 @@ static const struct dpu_perf_cfg sc7280_perf_data = {
+ 	.min_dram_ib = 1600000,
+ 	.min_prefill_lines = 24,
+ 	.danger_lut_tbl = {0xffff, 0xffff, 0x0},
++	.safe_lut_tbl = {0xff00, 0xff00, 0xffff},
+ 	.qos_lut_tbl = {
+ 		{.nentry = ARRAY_SIZE(sc7180_qos_macrotile),
+ 		.entries = sc7180_qos_macrotile
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+index 0712752742f4f..cdcaf470f1480 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+@@ -89,13 +89,6 @@ static void mdp4_disable_commit(struct msm_kms *kms)
+ 
+ static void mdp4_prepare_commit(struct msm_kms *kms, struct drm_atomic_state *state)
+ {
+-	int i;
+-	struct drm_crtc *crtc;
+-	struct drm_crtc_state *crtc_state;
+-
+-	/* see 119ecb7fd */
+-	for_each_new_crtc_in_state(state, crtc, crtc_state, i)
+-		drm_crtc_vblank_get(crtc);
+ }
+ 
+ static void mdp4_flush_commit(struct msm_kms *kms, unsigned crtc_mask)
+@@ -114,12 +107,6 @@ static void mdp4_wait_flush(struct msm_kms *kms, unsigned crtc_mask)
+ 
+ static void mdp4_complete_commit(struct msm_kms *kms, unsigned crtc_mask)
+ {
+-	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
+-	struct drm_crtc *crtc;
+-
+-	/* see 119ecb7fd */
+-	for_each_crtc_mask(mdp4_kms->dev, crtc, crtc_mask)
+-		drm_crtc_vblank_put(crtc);
+ }
+ 
+ static long mdp4_round_pixclk(struct msm_kms *kms, unsigned long rate,
+@@ -412,6 +399,7 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev->dev);
+ 	struct mdp4_platform_config *config = mdp4_get_config(pdev);
++	struct msm_drm_private *priv = dev->dev_private;
+ 	struct mdp4_kms *mdp4_kms;
+ 	struct msm_kms *kms = NULL;
+ 	struct msm_gem_address_space *aspace;
+@@ -431,7 +419,8 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
+ 		goto fail;
+ 	}
+ 
+-	kms = &mdp4_kms->base.base;
++	priv->kms = &mdp4_kms->base.base;
++	kms = priv->kms;
+ 
+ 	mdp4_kms->dev = dev;
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 6856223e91e12..c1514f2cb409c 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -83,13 +83,6 @@ struct dp_ctrl_private {
+ 	struct completion video_comp;
+ };
+ 
+-struct dp_cr_status {
+-	u8 lane_0_1;
+-	u8 lane_2_3;
+-};
+-
+-#define DP_LANE0_1_CR_DONE	0x11
+-
+ static int dp_aux_link_configure(struct drm_dp_aux *aux,
+ 					struct dp_link_info *link)
+ {
+@@ -1080,7 +1073,7 @@ static int dp_ctrl_read_link_status(struct dp_ctrl_private *ctrl,
+ }
+ 
+ static int dp_ctrl_link_train_1(struct dp_ctrl_private *ctrl,
+-		struct dp_cr_status *cr, int *training_step)
++			int *training_step)
+ {
+ 	int tries, old_v_level, ret = 0;
+ 	u8 link_status[DP_LINK_STATUS_SIZE];
+@@ -1109,9 +1102,6 @@ static int dp_ctrl_link_train_1(struct dp_ctrl_private *ctrl,
+ 		if (ret)
+ 			return ret;
+ 
+-		cr->lane_0_1 = link_status[0];
+-		cr->lane_2_3 = link_status[1];
+-
+ 		if (drm_dp_clock_recovery_ok(link_status,
+ 			ctrl->link->link_params.num_lanes)) {
+ 			return 0;
+@@ -1188,7 +1178,7 @@ static void dp_ctrl_clear_training_pattern(struct dp_ctrl_private *ctrl)
+ }
+ 
+ static int dp_ctrl_link_train_2(struct dp_ctrl_private *ctrl,
+-		struct dp_cr_status *cr, int *training_step)
++			int *training_step)
+ {
+ 	int tries = 0, ret = 0;
+ 	char pattern;
+@@ -1204,10 +1194,6 @@ static int dp_ctrl_link_train_2(struct dp_ctrl_private *ctrl,
+ 	else
+ 		pattern = DP_TRAINING_PATTERN_2;
+ 
+-	ret = dp_ctrl_update_vx_px(ctrl);
+-	if (ret)
+-		return ret;
+-
+ 	ret = dp_catalog_ctrl_set_pattern(ctrl->catalog, pattern);
+ 	if (ret)
+ 		return ret;
+@@ -1220,8 +1206,6 @@ static int dp_ctrl_link_train_2(struct dp_ctrl_private *ctrl,
+ 		ret = dp_ctrl_read_link_status(ctrl, link_status);
+ 		if (ret)
+ 			return ret;
+-		cr->lane_0_1 = link_status[0];
+-		cr->lane_2_3 = link_status[1];
+ 
+ 		if (drm_dp_channel_eq_ok(link_status,
+ 			ctrl->link->link_params.num_lanes)) {
+@@ -1241,7 +1225,7 @@ static int dp_ctrl_link_train_2(struct dp_ctrl_private *ctrl,
+ static int dp_ctrl_reinitialize_mainlink(struct dp_ctrl_private *ctrl);
+ 
+ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl,
+-		struct dp_cr_status *cr, int *training_step)
++			int *training_step)
+ {
+ 	int ret = 0;
+ 	u8 encoding = DP_SET_ANSI_8B10B;
+@@ -1257,7 +1241,7 @@ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl,
+ 	drm_dp_dpcd_write(ctrl->aux, DP_MAIN_LINK_CHANNEL_CODING_SET,
+ 				&encoding, 1);
+ 
+-	ret = dp_ctrl_link_train_1(ctrl, cr, training_step);
++	ret = dp_ctrl_link_train_1(ctrl, training_step);
+ 	if (ret) {
+ 		DRM_ERROR("link training #1 failed. ret=%d\n", ret);
+ 		goto end;
+@@ -1266,7 +1250,7 @@ static int dp_ctrl_link_train(struct dp_ctrl_private *ctrl,
+ 	/* print success info as this is a result of user initiated action */
+ 	DRM_DEBUG_DP("link training #1 successful\n");
+ 
+-	ret = dp_ctrl_link_train_2(ctrl, cr, training_step);
++	ret = dp_ctrl_link_train_2(ctrl, training_step);
+ 	if (ret) {
+ 		DRM_ERROR("link training #2 failed. ret=%d\n", ret);
+ 		goto end;
+@@ -1282,7 +1266,7 @@ end:
+ }
+ 
+ static int dp_ctrl_setup_main_link(struct dp_ctrl_private *ctrl,
+-		struct dp_cr_status *cr, int *training_step)
++			int *training_step)
+ {
+ 	int ret = 0;
+ 
+@@ -1297,7 +1281,7 @@ static int dp_ctrl_setup_main_link(struct dp_ctrl_private *ctrl,
+ 	 * a link training pattern, we have to first do soft reset.
+ 	 */
+ 
+-	ret = dp_ctrl_link_train(ctrl, cr, training_step);
++	ret = dp_ctrl_link_train(ctrl, training_step);
+ 
+ 	return ret;
+ }
+@@ -1494,14 +1478,16 @@ static int dp_ctrl_deinitialize_mainlink(struct dp_ctrl_private *ctrl)
+ static int dp_ctrl_link_maintenance(struct dp_ctrl_private *ctrl)
+ {
+ 	int ret = 0;
+-	struct dp_cr_status cr;
+ 	int training_step = DP_TRAINING_NONE;
+ 
+ 	dp_ctrl_push_idle(&ctrl->dp_ctrl);
+ 
++	ctrl->link->phy_params.p_level = 0;
++	ctrl->link->phy_params.v_level = 0;
++
+ 	ctrl->dp_ctrl.pixel_rate = ctrl->panel->dp_mode.drm_mode.clock;
+ 
+-	ret = dp_ctrl_setup_main_link(ctrl, &cr, &training_step);
++	ret = dp_ctrl_setup_main_link(ctrl, &training_step);
+ 	if (ret)
+ 		goto end;
+ 
+@@ -1632,6 +1618,35 @@ void dp_ctrl_handle_sink_request(struct dp_ctrl *dp_ctrl)
+ 	}
+ }
+ 
++static bool dp_ctrl_clock_recovery_any_ok(
++			const u8 link_status[DP_LINK_STATUS_SIZE],
++			int lane_count)
++{
++	int reduced_cnt;
++
++	if (lane_count <= 1)
++		return false;
++
++	/*
++	 * Only the lanes that remain after the lane-count reduction matter:
++	 * lane_count = 4: only the first 2 lanes are checked
++	 * lane_count = 2: only the first lane is checked
++	 */
++	reduced_cnt = lane_count >> 1;
++
++	return drm_dp_clock_recovery_ok(link_status, reduced_cnt);
++}
++
++static bool dp_ctrl_channel_eq_ok(struct dp_ctrl_private *ctrl)
++{
++	u8 link_status[DP_LINK_STATUS_SIZE];
++	int num_lanes = ctrl->link->link_params.num_lanes;
++
++	dp_ctrl_read_link_status(ctrl, link_status);
++
++	return drm_dp_channel_eq_ok(link_status, num_lanes);
++}
++
+ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ {
+ 	int rc = 0;
+@@ -1639,7 +1654,7 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 	u32 rate = 0;
+ 	int link_train_max_retries = 5;
+ 	u32 const phy_cts_pixel_clk_khz = 148500;
+-	struct dp_cr_status cr;
++	u8 link_status[DP_LINK_STATUS_SIZE];
+ 	unsigned int training_step;
+ 
+ 	if (!dp_ctrl)
+@@ -1666,6 +1681,9 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 		ctrl->link->link_params.rate,
+ 		ctrl->link->link_params.num_lanes, ctrl->dp_ctrl.pixel_rate);
+ 
++	ctrl->link->phy_params.p_level = 0;
++	ctrl->link->phy_params.v_level = 0;
++
+ 	rc = dp_ctrl_enable_mainlink_clocks(ctrl);
+ 	if (rc)
+ 		return rc;
+@@ -1679,19 +1697,21 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 		}
+ 
+ 		training_step = DP_TRAINING_NONE;
+-		rc = dp_ctrl_setup_main_link(ctrl, &cr, &training_step);
++		rc = dp_ctrl_setup_main_link(ctrl, &training_step);
+ 		if (rc == 0) {
+ 			/* training completed successfully */
+ 			break;
+ 		} else if (training_step == DP_TRAINING_1) {
+ 			/* link train_1 failed */
+-			if (!dp_catalog_link_is_connected(ctrl->catalog)) {
++			if (!dp_catalog_link_is_connected(ctrl->catalog))
+ 				break;
+-			}
++
++			dp_ctrl_read_link_status(ctrl, link_status);
+ 
+ 			rc = dp_ctrl_link_rate_down_shift(ctrl);
+ 			if (rc < 0) { /* already in RBR = 1.6G */
+-				if (cr.lane_0_1 & DP_LANE0_1_CR_DONE) {
++				if (dp_ctrl_clock_recovery_any_ok(link_status,
++					ctrl->link->link_params.num_lanes)) {
+ 					/*
+ 					 * some lanes are ready,
+ 					 * reduce lane number
+@@ -1707,12 +1727,18 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 				}
+ 			}
+ 		} else if (training_step == DP_TRAINING_2) {
+-			/* link train_2 failed, lower lane rate */
+-			if (!dp_catalog_link_is_connected(ctrl->catalog)) {
++			/* link train_2 failed */
++			if (!dp_catalog_link_is_connected(ctrl->catalog))
+ 				break;
+-			}
+ 
+-			rc = dp_ctrl_link_lane_down_shift(ctrl);
++			dp_ctrl_read_link_status(ctrl, link_status);
++
++			if (!drm_dp_clock_recovery_ok(link_status,
++					ctrl->link->link_params.num_lanes))
++				rc = dp_ctrl_link_rate_down_shift(ctrl);
++			else
++				rc = dp_ctrl_link_lane_down_shift(ctrl);
++
+ 			if (rc < 0) {
+ 				/* end with failure */
+ 				break; /* lane == 1 already */
+@@ -1723,17 +1749,19 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 	if (ctrl->link->sink_request & DP_TEST_LINK_PHY_TEST_PATTERN)
+ 		return rc;
+ 
+-	/* stop txing train pattern */
+-	dp_ctrl_clear_training_pattern(ctrl);
++	if (rc == 0) {  /* link training succeeded */
++		/*
++		 * Do not stop the training pattern here; link
++		 * training is stopped in dp_ctrl_on_stream() to
++		 * pass the compliance test.
++		 */
++	} else {
++		/*
++		 * Link training failed, so stop transmitting the
++		 * training pattern here.
++		 */
++		dp_ctrl_clear_training_pattern(ctrl);
+ 
+-	/*
+-	 * keep transmitting idle pattern until video ready
+-	 * to avoid main link from loss of sync
+-	 */
+-	if (rc == 0)  /* link train successfully */
+-		dp_ctrl_push_idle(dp_ctrl);
+-	else  {
+-		/* link training failed */
+ 		dp_ctrl_deinitialize_mainlink(ctrl);
+ 		rc = -ECONNRESET;
+ 	}
+@@ -1741,9 +1769,15 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 	return rc;
+ }
+ 
++static int dp_ctrl_link_retrain(struct dp_ctrl_private *ctrl)
++{
++	int training_step = DP_TRAINING_NONE;
++
++	return dp_ctrl_setup_main_link(ctrl, &training_step);
++}
++
+ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
+ {
+-	u32 rate = 0;
+ 	int ret = 0;
+ 	bool mainlink_ready = false;
+ 	struct dp_ctrl_private *ctrl;
+@@ -1753,10 +1787,6 @@ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
+ 
+ 	ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
+ 
+-	rate = ctrl->panel->link_info.rate;
+-
+-	ctrl->link->link_params.rate = rate;
+-	ctrl->link->link_params.num_lanes = ctrl->panel->link_info.num_lanes;
+ 	ctrl->dp_ctrl.pixel_rate = ctrl->panel->dp_mode.drm_mode.clock;
+ 
+ 	DRM_DEBUG_DP("rate=%d, num_lanes=%d, pixel_rate=%d\n",
+@@ -1771,6 +1801,12 @@ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
+ 		}
+ 	}
+ 
++	if (!dp_ctrl_channel_eq_ok(ctrl))
++		dp_ctrl_link_retrain(ctrl);
++
++	/* stop transmitting the training pattern to end link training */
++	dp_ctrl_clear_training_pattern(ctrl);
++
+ 	ret = dp_ctrl_enable_stream_clocks(ctrl);
+ 	if (ret) {
+ 		DRM_ERROR("Failed to start pixel clocks. ret=%d\n", ret);
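+/*
+ * Editor's sketch, not part of the upstream patch: the new
+ * dp_ctrl_clock_recovery_any_ok() asks whether the lanes that would
+ * survive a lane-count reduction finished clock recovery. Hypothetical
+ * DPCD status with lanes 0/1 trained and lanes 2/3 not:
+ *
+ *	u8 status[DP_LINK_STATUS_SIZE] = { 0x11, 0x00 };	// rest zero
+ *	drm_dp_clock_recovery_ok(status, 4);		// false, lanes 2/3 bad
+ *	dp_ctrl_clock_recovery_any_ok(status, 4);	// true, checks 2 lanes
+ *
+ * so dp_ctrl_on_link() retries at half the lane count instead of giving
+ * up once the link rate is already at its minimum.
+ */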
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 9cc8166636686..6eeb9a14b5846 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -272,7 +272,7 @@ static u8 dp_panel_get_edid_checksum(struct edid *edid)
+ {
+ 	struct edid *last_block;
+ 	u8 *raw_edid;
+-	bool is_edid_corrupt;
++	bool is_edid_corrupt = false;
+ 
+ 	if (!edid) {
+ 		DRM_ERROR("invalid edid input\n");
+@@ -304,7 +304,12 @@ void dp_panel_handle_sink_request(struct dp_panel *dp_panel)
+ 	panel = container_of(dp_panel, struct dp_panel_private, dp_panel);
+ 
+ 	if (panel->link->sink_request & DP_TEST_LINK_EDID_READ) {
+-		u8 checksum = dp_panel_get_edid_checksum(dp_panel->edid);
++		u8 checksum;
++
++		if (dp_panel->edid)
++			checksum = dp_panel_get_edid_checksum(dp_panel->edid);
++		else
++			checksum = dp_panel->connector->real_edid_checksum;
+ 
+ 		dp_link_send_edid_checksum(panel->link, checksum);
+ 		dp_link_send_test_response(panel->link);
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_cfg.c b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+index f3f1c03c7db95..763f127e46213 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_cfg.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+@@ -154,7 +154,6 @@ static const struct msm_dsi_config sdm660_dsi_cfg = {
+ 	.reg_cfg = {
+ 		.num = 2,
+ 		.regs = {
+-			{"vdd", 73400, 32 },	/* 0.9 V */
+ 			{"vdda", 12560, 4 },	/* 1.2 V */
+ 		},
+ 	},
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+index 65d68eb9e3cb4..c96fd752fa1d7 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+@@ -1049,7 +1049,7 @@ const struct msm_dsi_phy_cfg dsi_phy_14nm_660_cfgs = {
+ 	.reg_cfg = {
+ 		.num = 1,
+ 		.regs = {
+-			{"vcca", 17000, 32},
++			{"vcca", 73400, 32},
+ 		},
+ 	},
+ 	.ops = {
+diff --git a/drivers/gpu/drm/omapdrm/omap_plane.c b/drivers/gpu/drm/omapdrm/omap_plane.c
+index 801da917507d5..512af976b7e90 100644
+--- a/drivers/gpu/drm/omapdrm/omap_plane.c
++++ b/drivers/gpu/drm/omapdrm/omap_plane.c
+@@ -6,6 +6,7 @@
+ 
+ #include <drm/drm_atomic.h>
+ #include <drm/drm_atomic_helper.h>
++#include <drm/drm_gem_atomic_helper.h>
+ #include <drm/drm_plane_helper.h>
+ 
+ #include "omap_dmm_tiler.h"
+@@ -29,6 +30,8 @@ static int omap_plane_prepare_fb(struct drm_plane *plane,
+ 	if (!new_state->fb)
+ 		return 0;
+ 
++	drm_gem_plane_helper_prepare_fb(plane, new_state);
++
+ 	return omap_framebuffer_pin(new_state->fb);
+ }
+ 
+diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
+index 597cf1459b0a8..4c6bdea5537b9 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_device.h
++++ b/drivers/gpu/drm/panfrost/panfrost_device.h
+@@ -120,8 +120,12 @@ struct panfrost_device {
+ };
+ 
+ struct panfrost_mmu {
++	struct panfrost_device *pfdev;
++	struct kref refcount;
+ 	struct io_pgtable_cfg pgtbl_cfg;
+ 	struct io_pgtable_ops *pgtbl_ops;
++	struct drm_mm mm;
++	spinlock_t mm_lock;
+ 	int as;
+ 	atomic_t as_count;
+ 	struct list_head list;
+@@ -132,9 +136,7 @@ struct panfrost_file_priv {
+ 
+ 	struct drm_sched_entity sched_entity[NUM_JOB_SLOTS];
+ 
+-	struct panfrost_mmu mmu;
+-	struct drm_mm mm;
+-	spinlock_t mm_lock;
++	struct panfrost_mmu *mmu;
+ };
+ 
+ static inline struct panfrost_device *to_panfrost_device(struct drm_device *ddev)
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index 83a461bdeea84..b2aa8e0503147 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -417,7 +417,7 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
+ 		 * anyway, so let's not bother.
+ 		 */
+ 		if (!list_is_singular(&bo->mappings.list) ||
+-		    WARN_ON_ONCE(first->mmu != &priv->mmu)) {
++		    WARN_ON_ONCE(first->mmu != priv->mmu)) {
+ 			ret = -EINVAL;
+ 			goto out_unlock_mappings;
+ 		}
+@@ -449,32 +449,6 @@ int panfrost_unstable_ioctl_check(void)
+ 	return 0;
+ }
+ 
+-#define PFN_4G		(SZ_4G >> PAGE_SHIFT)
+-#define PFN_4G_MASK	(PFN_4G - 1)
+-#define PFN_16M		(SZ_16M >> PAGE_SHIFT)
+-
+-static void panfrost_drm_mm_color_adjust(const struct drm_mm_node *node,
+-					 unsigned long color,
+-					 u64 *start, u64 *end)
+-{
+-	/* Executable buffers can't start or end on a 4GB boundary */
+-	if (!(color & PANFROST_BO_NOEXEC)) {
+-		u64 next_seg;
+-
+-		if ((*start & PFN_4G_MASK) == 0)
+-			(*start)++;
+-
+-		if ((*end & PFN_4G_MASK) == 0)
+-			(*end)--;
+-
+-		next_seg = ALIGN(*start, PFN_4G);
+-		if (next_seg - *start <= PFN_16M)
+-			*start = next_seg + 1;
+-
+-		*end = min(*end, ALIGN(*start, PFN_4G) - 1);
+-	}
+-}
+-
+ static int
+ panfrost_open(struct drm_device *dev, struct drm_file *file)
+ {
+@@ -489,15 +463,11 @@ panfrost_open(struct drm_device *dev, struct drm_file *file)
+ 	panfrost_priv->pfdev = pfdev;
+ 	file->driver_priv = panfrost_priv;
+ 
+-	spin_lock_init(&panfrost_priv->mm_lock);
+-
+-	/* 4G enough for now. can be 48-bit */
+-	drm_mm_init(&panfrost_priv->mm, SZ_32M >> PAGE_SHIFT, (SZ_4G - SZ_32M) >> PAGE_SHIFT);
+-	panfrost_priv->mm.color_adjust = panfrost_drm_mm_color_adjust;
+-
+-	ret = panfrost_mmu_pgtable_alloc(panfrost_priv);
+-	if (ret)
+-		goto err_pgtable;
++	panfrost_priv->mmu = panfrost_mmu_ctx_create(pfdev);
++	if (IS_ERR(panfrost_priv->mmu)) {
++		ret = PTR_ERR(panfrost_priv->mmu);
++		goto err_free;
++	}
+ 
+ 	ret = panfrost_job_open(panfrost_priv);
+ 	if (ret)
+@@ -506,9 +476,8 @@ panfrost_open(struct drm_device *dev, struct drm_file *file)
+ 	return 0;
+ 
+ err_job:
+-	panfrost_mmu_pgtable_free(panfrost_priv);
+-err_pgtable:
+-	drm_mm_takedown(&panfrost_priv->mm);
++	panfrost_mmu_ctx_put(panfrost_priv->mmu);
++err_free:
+ 	kfree(panfrost_priv);
+ 	return ret;
+ }
+@@ -521,8 +490,7 @@ panfrost_postclose(struct drm_device *dev, struct drm_file *file)
+ 	panfrost_perfcnt_close(file);
+ 	panfrost_job_close(panfrost_priv);
+ 
+-	panfrost_mmu_pgtable_free(panfrost_priv);
+-	drm_mm_takedown(&panfrost_priv->mm);
++	panfrost_mmu_ctx_put(panfrost_priv->mmu);
+ 	kfree(panfrost_priv);
+ }
+ 
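+/*
+ * Editor's sketch, not part of the upstream patch: the per-file MMU
+ * context is now a kref-counted object rather than a struct embedded in
+ * panfrost_file_priv, so its lifetime follows the references:
+ *
+ *	priv->mmu = panfrost_mmu_ctx_create(pfdev);	// refcount = 1
+ *	mapping->mmu = panfrost_mmu_ctx_get(priv->mmu);	// +1 per mapping
+ *	panfrost_mmu_ctx_put(mapping->mmu);	// mapping torn down
+ *	panfrost_mmu_ctx_put(priv->mmu);	// file closed
+ *
+ * The page tables, drm_mm and address-space slot are freed only when the
+ * last reference drops, so a GEM mapping can safely outlive the DRM file
+ * that created it.
+ */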
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
+index 3e0723bc36bda..23377481f4e31 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
+@@ -60,7 +60,7 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
+ 
+ 	mutex_lock(&bo->mappings.lock);
+ 	list_for_each_entry(iter, &bo->mappings.list, node) {
+-		if (iter->mmu == &priv->mmu) {
++		if (iter->mmu == priv->mmu) {
+ 			kref_get(&iter->refcount);
+ 			mapping = iter;
+ 			break;
+@@ -74,16 +74,13 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
+ static void
+ panfrost_gem_teardown_mapping(struct panfrost_gem_mapping *mapping)
+ {
+-	struct panfrost_file_priv *priv;
+-
+ 	if (mapping->active)
+ 		panfrost_mmu_unmap(mapping);
+ 
+-	priv = container_of(mapping->mmu, struct panfrost_file_priv, mmu);
+-	spin_lock(&priv->mm_lock);
++	spin_lock(&mapping->mmu->mm_lock);
+ 	if (drm_mm_node_allocated(&mapping->mmnode))
+ 		drm_mm_remove_node(&mapping->mmnode);
+-	spin_unlock(&priv->mm_lock);
++	spin_unlock(&mapping->mmu->mm_lock);
+ }
+ 
+ static void panfrost_gem_mapping_release(struct kref *kref)
+@@ -94,6 +91,7 @@ static void panfrost_gem_mapping_release(struct kref *kref)
+ 
+ 	panfrost_gem_teardown_mapping(mapping);
+ 	drm_gem_object_put(&mapping->obj->base.base);
++	panfrost_mmu_ctx_put(mapping->mmu);
+ 	kfree(mapping);
+ }
+ 
+@@ -143,11 +141,11 @@ int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv)
+ 	else
+ 		align = size >= SZ_2M ? SZ_2M >> PAGE_SHIFT : 0;
+ 
+-	mapping->mmu = &priv->mmu;
+-	spin_lock(&priv->mm_lock);
+-	ret = drm_mm_insert_node_generic(&priv->mm, &mapping->mmnode,
++	mapping->mmu = panfrost_mmu_ctx_get(priv->mmu);
++	spin_lock(&mapping->mmu->mm_lock);
++	ret = drm_mm_insert_node_generic(&mapping->mmu->mm, &mapping->mmnode,
+ 					 size >> PAGE_SHIFT, align, color, 0);
+-	spin_unlock(&priv->mm_lock);
++	spin_unlock(&mapping->mmu->mm_lock);
+ 	if (ret)
+ 		goto err;
+ 
+@@ -176,7 +174,7 @@ void panfrost_gem_close(struct drm_gem_object *obj, struct drm_file *file_priv)
+ 
+ 	mutex_lock(&bo->mappings.lock);
+ 	list_for_each_entry(iter, &bo->mappings.list, node) {
+-		if (iter->mmu == &priv->mmu) {
++		if (iter->mmu == priv->mmu) {
+ 			mapping = iter;
+ 			list_del(&iter->node);
+ 			break;
+diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
+index 6003cfeb13221..682f2161b9999 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_job.c
++++ b/drivers/gpu/drm/panfrost/panfrost_job.c
+@@ -165,7 +165,7 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
+ 		return;
+ 	}
+ 
+-	cfg = panfrost_mmu_as_get(pfdev, &job->file_priv->mmu);
++	cfg = panfrost_mmu_as_get(pfdev, job->file_priv->mmu);
+ 
+ 	job_write(pfdev, JS_HEAD_NEXT_LO(js), jc_head & 0xFFFFFFFF);
+ 	job_write(pfdev, JS_HEAD_NEXT_HI(js), jc_head >> 32);
+@@ -527,7 +527,7 @@ static irqreturn_t panfrost_job_irq_handler(int irq, void *data)
+ 			if (job) {
+ 				pfdev->jobs[j] = NULL;
+ 
+-				panfrost_mmu_as_put(pfdev, &job->file_priv->mmu);
++				panfrost_mmu_as_put(pfdev, job->file_priv->mmu);
+ 				panfrost_devfreq_record_idle(&pfdev->pfdevfreq);
+ 
+ 				dma_fence_signal_locked(job->done_fence);
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index 0581186ebfb3a..eea6ade902cb4 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -1,5 +1,8 @@
+ // SPDX-License-Identifier:	GPL-2.0
+ /* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */
++
++#include <drm/panfrost_drm.h>
++
+ #include <linux/atomic.h>
+ #include <linux/bitfield.h>
+ #include <linux/delay.h>
+@@ -52,25 +55,16 @@ static int write_cmd(struct panfrost_device *pfdev, u32 as_nr, u32 cmd)
+ }
+ 
+ static void lock_region(struct panfrost_device *pfdev, u32 as_nr,
+-			u64 iova, size_t size)
++			u64 iova, u64 size)
+ {
+ 	u8 region_width;
+ 	u64 region = iova & PAGE_MASK;
+-	/*
+-	 * fls returns:
+-	 * 1 .. 32
+-	 *
+-	 * 10 + fls(num_pages)
+-	 * results in the range (11 .. 42)
+-	 */
+-
+-	size = round_up(size, PAGE_SIZE);
+ 
+-	region_width = 10 + fls(size >> PAGE_SHIFT);
+-	if ((size >> PAGE_SHIFT) != (1ul << (region_width - 11))) {
+-		/* not pow2, so must go up to the next pow2 */
+-		region_width += 1;
+-	}
++	/* The size is encoded as ceil(log2(size)) minus 1, which may be
++	 * computed with fls64. The size must be clamped to hardware bounds.
++	 */
++	size = max_t(u64, size, AS_LOCK_REGION_MIN_SIZE);
++	region_width = fls64(size - 1) - 1;
+ 	region |= region_width;
+ 
+ 	/* Lock the region that needs to be updated */
+@@ -81,7 +75,7 @@ static void lock_region(struct panfrost_device *pfdev, u32 as_nr,
+ 
+ 
+ static int mmu_hw_do_operation_locked(struct panfrost_device *pfdev, int as_nr,
+-				      u64 iova, size_t size, u32 op)
++				      u64 iova, u64 size, u32 op)
+ {
+ 	if (as_nr < 0)
+ 		return 0;
+@@ -98,7 +92,7 @@ static int mmu_hw_do_operation_locked(struct panfrost_device *pfdev, int as_nr,
+ 
+ static int mmu_hw_do_operation(struct panfrost_device *pfdev,
+ 			       struct panfrost_mmu *mmu,
+-			       u64 iova, size_t size, u32 op)
++			       u64 iova, u64 size, u32 op)
+ {
+ 	int ret;
+ 
+@@ -115,7 +109,7 @@ static void panfrost_mmu_enable(struct panfrost_device *pfdev, struct panfrost_m
+ 	u64 transtab = cfg->arm_mali_lpae_cfg.transtab;
+ 	u64 memattr = cfg->arm_mali_lpae_cfg.memattr;
+ 
+-	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0UL, AS_COMMAND_FLUSH_MEM);
++	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0ULL, AS_COMMAND_FLUSH_MEM);
+ 
+ 	mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), transtab & 0xffffffffUL);
+ 	mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), transtab >> 32);
+@@ -131,7 +125,7 @@ static void panfrost_mmu_enable(struct panfrost_device *pfdev, struct panfrost_m
+ 
+ static void panfrost_mmu_disable(struct panfrost_device *pfdev, u32 as_nr)
+ {
+-	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0UL, AS_COMMAND_FLUSH_MEM);
++	mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0ULL, AS_COMMAND_FLUSH_MEM);
+ 
+ 	mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), 0);
+ 	mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), 0);
+@@ -231,7 +225,7 @@ static size_t get_pgsize(u64 addr, size_t size)
+ 
+ static void panfrost_mmu_flush_range(struct panfrost_device *pfdev,
+ 				     struct panfrost_mmu *mmu,
+-				     u64 iova, size_t size)
++				     u64 iova, u64 size)
+ {
+ 	if (mmu->as < 0)
+ 		return;
+@@ -337,7 +331,7 @@ static void mmu_tlb_inv_context_s1(void *cookie)
+ 
+ static void mmu_tlb_sync_context(void *cookie)
+ {
+-	//struct panfrost_device *pfdev = cookie;
++	//struct panfrost_mmu *mmu = cookie;
+ 	// TODO: Wait 1000 GPU cycles for HW_ISSUE_6367/T60X
+ }
+ 
+@@ -352,57 +346,10 @@ static const struct iommu_flush_ops mmu_tlb_ops = {
+ 	.tlb_flush_walk = mmu_tlb_flush_walk,
+ };
+ 
+-int panfrost_mmu_pgtable_alloc(struct panfrost_file_priv *priv)
+-{
+-	struct panfrost_mmu *mmu = &priv->mmu;
+-	struct panfrost_device *pfdev = priv->pfdev;
+-
+-	INIT_LIST_HEAD(&mmu->list);
+-	mmu->as = -1;
+-
+-	mmu->pgtbl_cfg = (struct io_pgtable_cfg) {
+-		.pgsize_bitmap	= SZ_4K | SZ_2M,
+-		.ias		= FIELD_GET(0xff, pfdev->features.mmu_features),
+-		.oas		= FIELD_GET(0xff00, pfdev->features.mmu_features),
+-		.coherent_walk	= pfdev->coherent,
+-		.tlb		= &mmu_tlb_ops,
+-		.iommu_dev	= pfdev->dev,
+-	};
+-
+-	mmu->pgtbl_ops = alloc_io_pgtable_ops(ARM_MALI_LPAE, &mmu->pgtbl_cfg,
+-					      priv);
+-	if (!mmu->pgtbl_ops)
+-		return -EINVAL;
+-
+-	return 0;
+-}
+-
+-void panfrost_mmu_pgtable_free(struct panfrost_file_priv *priv)
+-{
+-	struct panfrost_device *pfdev = priv->pfdev;
+-	struct panfrost_mmu *mmu = &priv->mmu;
+-
+-	spin_lock(&pfdev->as_lock);
+-	if (mmu->as >= 0) {
+-		pm_runtime_get_noresume(pfdev->dev);
+-		if (pm_runtime_active(pfdev->dev))
+-			panfrost_mmu_disable(pfdev, mmu->as);
+-		pm_runtime_put_autosuspend(pfdev->dev);
+-
+-		clear_bit(mmu->as, &pfdev->as_alloc_mask);
+-		clear_bit(mmu->as, &pfdev->as_in_use_mask);
+-		list_del(&mmu->list);
+-	}
+-	spin_unlock(&pfdev->as_lock);
+-
+-	free_io_pgtable_ops(mmu->pgtbl_ops);
+-}
+-
+ static struct panfrost_gem_mapping *
+ addr_to_mapping(struct panfrost_device *pfdev, int as, u64 addr)
+ {
+ 	struct panfrost_gem_mapping *mapping = NULL;
+-	struct panfrost_file_priv *priv;
+ 	struct drm_mm_node *node;
+ 	u64 offset = addr >> PAGE_SHIFT;
+ 	struct panfrost_mmu *mmu;
+@@ -415,11 +362,10 @@ addr_to_mapping(struct panfrost_device *pfdev, int as, u64 addr)
+ 	goto out;
+ 
+ found_mmu:
+-	priv = container_of(mmu, struct panfrost_file_priv, mmu);
+ 
+-	spin_lock(&priv->mm_lock);
++	spin_lock(&mmu->mm_lock);
+ 
+-	drm_mm_for_each_node(node, &priv->mm) {
++	drm_mm_for_each_node(node, &mmu->mm) {
+ 		if (offset >= node->start &&
+ 		    offset < (node->start + node->size)) {
+ 			mapping = drm_mm_node_to_panfrost_mapping(node);
+@@ -429,7 +375,7 @@ found_mmu:
+ 		}
+ 	}
+ 
+-	spin_unlock(&priv->mm_lock);
++	spin_unlock(&mmu->mm_lock);
+ out:
+ 	spin_unlock(&pfdev->as_lock);
+ 	return mapping;
+@@ -542,6 +488,107 @@ err_bo:
+ 	return ret;
+ }
+ 
++static void panfrost_mmu_release_ctx(struct kref *kref)
++{
++	struct panfrost_mmu *mmu = container_of(kref, struct panfrost_mmu,
++						refcount);
++	struct panfrost_device *pfdev = mmu->pfdev;
++
++	spin_lock(&pfdev->as_lock);
++	if (mmu->as >= 0) {
++		pm_runtime_get_noresume(pfdev->dev);
++		if (pm_runtime_active(pfdev->dev))
++			panfrost_mmu_disable(pfdev, mmu->as);
++		pm_runtime_put_autosuspend(pfdev->dev);
++
++		clear_bit(mmu->as, &pfdev->as_alloc_mask);
++		clear_bit(mmu->as, &pfdev->as_in_use_mask);
++		list_del(&mmu->list);
++	}
++	spin_unlock(&pfdev->as_lock);
++
++	free_io_pgtable_ops(mmu->pgtbl_ops);
++	drm_mm_takedown(&mmu->mm);
++	kfree(mmu);
++}
++
++void panfrost_mmu_ctx_put(struct panfrost_mmu *mmu)
++{
++	kref_put(&mmu->refcount, panfrost_mmu_release_ctx);
++}
++
++struct panfrost_mmu *panfrost_mmu_ctx_get(struct panfrost_mmu *mmu)
++{
++	kref_get(&mmu->refcount);
++
++	return mmu;
++}
++
++#define PFN_4G		(SZ_4G >> PAGE_SHIFT)
++#define PFN_4G_MASK	(PFN_4G - 1)
++#define PFN_16M		(SZ_16M >> PAGE_SHIFT)
++
++static void panfrost_drm_mm_color_adjust(const struct drm_mm_node *node,
++					 unsigned long color,
++					 u64 *start, u64 *end)
++{
++	/* Executable buffers can't start or end on a 4GB boundary */
++	if (!(color & PANFROST_BO_NOEXEC)) {
++		u64 next_seg;
++
++		if ((*start & PFN_4G_MASK) == 0)
++			(*start)++;
++
++		if ((*end & PFN_4G_MASK) == 0)
++			(*end)--;
++
++		next_seg = ALIGN(*start, PFN_4G);
++		if (next_seg - *start <= PFN_16M)
++			*start = next_seg + 1;
++
++		*end = min(*end, ALIGN(*start, PFN_4G) - 1);
++	}
++}
++
++struct panfrost_mmu *panfrost_mmu_ctx_create(struct panfrost_device *pfdev)
++{
++	struct panfrost_mmu *mmu;
++
++	mmu = kzalloc(sizeof(*mmu), GFP_KERNEL);
++	if (!mmu)
++		return ERR_PTR(-ENOMEM);
++
++	mmu->pfdev = pfdev;
++	spin_lock_init(&mmu->mm_lock);
++
++	/* 4G is enough for now; this can be extended to 48 bits. */
++	drm_mm_init(&mmu->mm, SZ_32M >> PAGE_SHIFT, (SZ_4G - SZ_32M) >> PAGE_SHIFT);
++	mmu->mm.color_adjust = panfrost_drm_mm_color_adjust;
++
++	INIT_LIST_HEAD(&mmu->list);
++	mmu->as = -1;
++
++	mmu->pgtbl_cfg = (struct io_pgtable_cfg) {
++		.pgsize_bitmap	= SZ_4K | SZ_2M,
++		.ias		= FIELD_GET(0xff, pfdev->features.mmu_features),
++		.oas		= FIELD_GET(0xff00, pfdev->features.mmu_features),
++		.coherent_walk	= pfdev->coherent,
++		.tlb		= &mmu_tlb_ops,
++		.iommu_dev	= pfdev->dev,
++	};
++
++	mmu->pgtbl_ops = alloc_io_pgtable_ops(ARM_MALI_LPAE, &mmu->pgtbl_cfg,
++					      mmu);
++	if (!mmu->pgtbl_ops) {
++		kfree(mmu);
++		return ERR_PTR(-EINVAL);
++	}
++
++	kref_init(&mmu->refcount);
++
++	return mmu;
++}
++
+ static const char *access_type_name(struct panfrost_device *pfdev,
+ 		u32 fault_status)
+ {
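+/*
+ * Editor's note, illustrative only: the new lock_region() encoding
+ * computes ceil(log2(size)) - 1 with fls64(), where a width w encodes a
+ * region of 2^(w + 1) bytes:
+ *
+ *	size = SZ_4K	-> clamped to AS_LOCK_REGION_MIN_SIZE (32K) first
+ *	size = 32768	-> fls64(32767) - 1 = 14, i.e. a 2^15 = 32K region
+ *	size = 40000	-> fls64(39999) - 1 = 15, i.e. a 2^16 = 64K region
+ *
+ * so the locked window is always the smallest power of two covering the
+ * requested range, unlike the page-count math it replaces.
+ */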
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.h b/drivers/gpu/drm/panfrost/panfrost_mmu.h
+index 44fc2edf63ce6..cc2a0d307febc 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.h
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.h
+@@ -18,7 +18,8 @@ void panfrost_mmu_reset(struct panfrost_device *pfdev);
+ u32 panfrost_mmu_as_get(struct panfrost_device *pfdev, struct panfrost_mmu *mmu);
+ void panfrost_mmu_as_put(struct panfrost_device *pfdev, struct panfrost_mmu *mmu);
+ 
+-int panfrost_mmu_pgtable_alloc(struct panfrost_file_priv *priv);
+-void panfrost_mmu_pgtable_free(struct panfrost_file_priv *priv);
++struct panfrost_mmu *panfrost_mmu_ctx_get(struct panfrost_mmu *mmu);
++void panfrost_mmu_ctx_put(struct panfrost_mmu *mmu);
++struct panfrost_mmu *panfrost_mmu_ctx_create(struct panfrost_device *pfdev);
+ 
+ #endif
+diff --git a/drivers/gpu/drm/panfrost/panfrost_regs.h b/drivers/gpu/drm/panfrost/panfrost_regs.h
+index eddaa62ad8b0e..2ae3a4d301d39 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_regs.h
++++ b/drivers/gpu/drm/panfrost/panfrost_regs.h
+@@ -318,6 +318,8 @@
+ #define AS_FAULTSTATUS_ACCESS_TYPE_READ		(0x2 << 8)
+ #define AS_FAULTSTATUS_ACCESS_TYPE_WRITE	(0x3 << 8)
+ 
++#define AS_LOCK_REGION_MIN_SIZE                 (1ULL << 15)
++
+ #define gpu_write(dev, reg, data) writel(data, dev->iomem + reg)
+ #define gpu_read(dev, reg) readl(dev->iomem + reg)
+ 
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.c b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+index c22551c2facb1..2a06ec1cbefb0 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_drv.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+@@ -559,6 +559,13 @@ static int rcar_du_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static void rcar_du_shutdown(struct platform_device *pdev)
++{
++	struct rcar_du_device *rcdu = platform_get_drvdata(pdev);
++
++	drm_atomic_helper_shutdown(&rcdu->ddev);
++}
++
+ static int rcar_du_probe(struct platform_device *pdev)
+ {
+ 	struct rcar_du_device *rcdu;
+@@ -615,6 +622,7 @@ error:
+ static struct platform_driver rcar_du_platform_driver = {
+ 	.probe		= rcar_du_probe,
+ 	.remove		= rcar_du_remove,
++	.shutdown	= rcar_du_shutdown,
+ 	.driver		= {
+ 		.name	= "rcar-du",
+ 		.pm	= &rcar_du_pm_ops,
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index edee565334d8e..155f305e7c4e5 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1205,7 +1205,9 @@ static int vc4_hdmi_audio_trigger(struct snd_pcm_substream *substream, int cmd,
+ 		HDMI_WRITE(HDMI_MAI_CTL,
+ 			   VC4_SET_FIELD(vc4_hdmi->audio.channels,
+ 					 VC4_HD_MAI_CTL_CHNUM) |
+-			   VC4_HD_MAI_CTL_ENABLE);
++					 VC4_HD_MAI_CTL_WHOLSMP |
++					 VC4_HD_MAI_CTL_CHALIGN |
++					 VC4_HD_MAI_CTL_ENABLE);
+ 		break;
+ 	case SNDRV_PCM_TRIGGER_STOP:
+ 		HDMI_WRITE(HDMI_MAI_CTL,
+diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
+index 6d310d31b75d4..1b10ab2b80a31 100644
+--- a/drivers/gpu/drm/vkms/vkms_plane.c
++++ b/drivers/gpu/drm/vkms/vkms_plane.c
+@@ -8,7 +8,6 @@
+ #include <drm/drm_gem_atomic_helper.h>
+ #include <drm/drm_gem_framebuffer_helper.h>
+ #include <drm/drm_plane_helper.h>
+-#include <drm/drm_gem_shmem_helper.h>
+ 
+ #include "vkms_drv.h"
+ 
+@@ -150,45 +149,10 @@ static int vkms_plane_atomic_check(struct drm_plane *plane,
+ 	return 0;
+ }
+ 
+-static int vkms_prepare_fb(struct drm_plane *plane,
+-			   struct drm_plane_state *state)
+-{
+-	struct drm_gem_object *gem_obj;
+-	struct dma_buf_map map;
+-	int ret;
+-
+-	if (!state->fb)
+-		return 0;
+-
+-	gem_obj = drm_gem_fb_get_obj(state->fb, 0);
+-	ret = drm_gem_shmem_vmap(gem_obj, &map);
+-	if (ret)
+-		DRM_ERROR("vmap failed: %d\n", ret);
+-
+-	return drm_gem_plane_helper_prepare_fb(plane, state);
+-}
+-
+-static void vkms_cleanup_fb(struct drm_plane *plane,
+-			    struct drm_plane_state *old_state)
+-{
+-	struct drm_gem_object *gem_obj;
+-	struct drm_gem_shmem_object *shmem_obj;
+-	struct dma_buf_map map;
+-
+-	if (!old_state->fb)
+-		return;
+-
+-	gem_obj = drm_gem_fb_get_obj(old_state->fb, 0);
+-	shmem_obj = to_drm_gem_shmem_obj(drm_gem_fb_get_obj(old_state->fb, 0));
+-	dma_buf_map_set_vaddr(&map, shmem_obj->vaddr);
+-	drm_gem_shmem_vunmap(gem_obj, &map);
+-}
+-
+ static const struct drm_plane_helper_funcs vkms_primary_helper_funcs = {
+ 	.atomic_update		= vkms_plane_atomic_update,
+ 	.atomic_check		= vkms_plane_atomic_check,
+-	.prepare_fb		= vkms_prepare_fb,
+-	.cleanup_fb		= vkms_cleanup_fb,
++	DRM_GEM_SHADOW_PLANE_HELPER_FUNCS,
+ };
+ 
+ struct drm_plane *vkms_plane_init(struct vkms_device *vkmsdev,
+diff --git a/drivers/gpu/drm/vmwgfx/ttm_memory.c b/drivers/gpu/drm/vmwgfx/ttm_memory.c
+index aeb0a22a2c347..edd17c30d5a51 100644
+--- a/drivers/gpu/drm/vmwgfx/ttm_memory.c
++++ b/drivers/gpu/drm/vmwgfx/ttm_memory.c
+@@ -435,8 +435,10 @@ int ttm_mem_global_init(struct ttm_mem_global *glob, struct device *dev)
+ 
+ 	si_meminfo(&si);
+ 
++	spin_lock(&glob->lock);
+ 	/* set it as 0 by default to keep original behavior of OOM */
+ 	glob->lower_mem_limit = 0;
++	spin_unlock(&glob->lock);
+ 
+ 	ret = ttm_mem_init_kernel_zone(glob, &si);
+ 	if (unlikely(ret != 0))
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c b/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c
+index 81f525a82b77f..4e7de45407c81 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c
+@@ -715,7 +715,7 @@ static int vmw_binding_scrub_cb(struct vmw_ctx_bindinfo *bi, bool rebind)
+  * without checking which bindings actually need to be emitted
+  *
+  * @cbs: Pointer to the context's struct vmw_ctx_binding_state
+- * @bi: Pointer to where the binding info array is stored in @cbs
++ * @biv: Pointer to where the binding info array is stored in @cbs
+  * @max_num: Maximum number of entries in the @bi array.
+  *
+  * Scans the @bi array for bindings and builds a buffer of view id data.
+@@ -725,11 +725,9 @@ static int vmw_binding_scrub_cb(struct vmw_ctx_bindinfo *bi, bool rebind)
+  * contains the command data.
+  */
+ static void vmw_collect_view_ids(struct vmw_ctx_binding_state *cbs,
+-				 const struct vmw_ctx_bindinfo *bi,
++				 const struct vmw_ctx_bindinfo_view *biv,
+ 				 u32 max_num)
+ {
+-	const struct vmw_ctx_bindinfo_view *biv =
+-		container_of(bi, struct vmw_ctx_bindinfo_view, bi);
+ 	unsigned long i;
+ 
+ 	cbs->bind_cmd_count = 0;
+@@ -838,7 +836,7 @@ static int vmw_emit_set_sr(struct vmw_ctx_binding_state *cbs,
+  */
+ static int vmw_emit_set_rt(struct vmw_ctx_binding_state *cbs)
+ {
+-	const struct vmw_ctx_bindinfo *loc = &cbs->render_targets[0].bi;
++	const struct vmw_ctx_bindinfo_view *loc = &cbs->render_targets[0];
+ 	struct {
+ 		SVGA3dCmdHeader header;
+ 		SVGA3dCmdDXSetRenderTargets body;
+@@ -874,7 +872,7 @@ static int vmw_emit_set_rt(struct vmw_ctx_binding_state *cbs)
+  * without checking which bindings actually need to be emitted
+  *
+  * @cbs: Pointer to the context's struct vmw_ctx_binding_state
+- * @bi: Pointer to where the binding info array is stored in @cbs
++ * @biso: Pointer to where the binding info array is stored in @cbs
+  * @max_num: Maximum number of entries in the @bi array.
+  *
+  * Scans the @bi array for bindings and builds a buffer of SVGA3dSoTarget data.
+@@ -884,11 +882,9 @@ static int vmw_emit_set_rt(struct vmw_ctx_binding_state *cbs)
+  * contains the command data.
+  */
+ static void vmw_collect_so_targets(struct vmw_ctx_binding_state *cbs,
+-				   const struct vmw_ctx_bindinfo *bi,
++				   const struct vmw_ctx_bindinfo_so_target *biso,
+ 				   u32 max_num)
+ {
+-	const struct vmw_ctx_bindinfo_so_target *biso =
+-		container_of(bi, struct vmw_ctx_bindinfo_so_target, bi);
+ 	unsigned long i;
+ 	SVGA3dSoTarget *so_buffer = (SVGA3dSoTarget *) cbs->bind_cmd_buffer;
+ 
+@@ -919,7 +915,7 @@ static void vmw_collect_so_targets(struct vmw_ctx_binding_state *cbs,
+  */
+ static int vmw_emit_set_so_target(struct vmw_ctx_binding_state *cbs)
+ {
+-	const struct vmw_ctx_bindinfo *loc = &cbs->so_targets[0].bi;
++	const struct vmw_ctx_bindinfo_so_target *loc = &cbs->so_targets[0];
+ 	struct {
+ 		SVGA3dCmdHeader header;
+ 		SVGA3dCmdDXSetSOTargets body;
+@@ -1066,7 +1062,7 @@ static int vmw_emit_set_vb(struct vmw_ctx_binding_state *cbs)
+ 
+ static int vmw_emit_set_uav(struct vmw_ctx_binding_state *cbs)
+ {
+-	const struct vmw_ctx_bindinfo *loc = &cbs->ua_views[0].views[0].bi;
++	const struct vmw_ctx_bindinfo_view *loc = &cbs->ua_views[0].views[0];
+ 	struct {
+ 		SVGA3dCmdHeader header;
+ 		SVGA3dCmdDXSetUAViews body;
+@@ -1096,7 +1092,7 @@ static int vmw_emit_set_uav(struct vmw_ctx_binding_state *cbs)
+ 
+ static int vmw_emit_set_cs_uav(struct vmw_ctx_binding_state *cbs)
+ {
+-	const struct vmw_ctx_bindinfo *loc = &cbs->ua_views[1].views[0].bi;
++	const struct vmw_ctx_bindinfo_view *loc = &cbs->ua_views[1].views[0];
+ 	struct {
+ 		SVGA3dCmdHeader header;
+ 		SVGA3dCmdDXSetCSUAViews body;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
+index 2e23e537cdf52..dac4624c5dc16 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
+@@ -516,7 +516,7 @@ static void vmw_cmdbuf_work_func(struct work_struct *work)
+ 	struct vmw_cmdbuf_man *man =
+ 		container_of(work, struct vmw_cmdbuf_man, work);
+ 	struct vmw_cmdbuf_header *entry, *next;
+-	uint32_t dummy;
++	uint32_t dummy = 0;
+ 	bool send_fence = false;
+ 	struct list_head restart_head[SVGA_CB_CONTEXT_MAX];
+ 	int i;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c
+index b262d61d839d5..9487faff52293 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c
+@@ -159,6 +159,7 @@ void vmw_cmdbuf_res_commit(struct list_head *list)
+ void vmw_cmdbuf_res_revert(struct list_head *list)
+ {
+ 	struct vmw_cmdbuf_res *entry, *next;
++	int ret;
+ 
+ 	list_for_each_entry_safe(entry, next, list, head) {
+ 		switch (entry->state) {
+@@ -166,7 +167,8 @@ void vmw_cmdbuf_res_revert(struct list_head *list)
+ 			vmw_cmdbuf_res_free(entry->man, entry);
+ 			break;
+ 		case VMW_CMDBUF_RES_DEL:
+-			drm_ht_insert_item(&entry->man->resources, &entry->hash);
++			ret = drm_ht_insert_item(&entry->man->resources, &entry->hash);
++			BUG_ON(ret);
+ 			list_del(&entry->head);
+ 			list_add_tail(&entry->head, &entry->man->list);
+ 			entry->state = VMW_CMDBUF_RES_COMMITTED;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+index d6a6d8a3387a9..319ecca5d1cb8 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+@@ -2546,6 +2546,8 @@ static int vmw_cmd_dx_so_define(struct vmw_private *dev_priv,
+ 
+ 	so_type = vmw_so_cmd_to_type(header->id);
+ 	res = vmw_context_cotable(ctx_node->ctx, vmw_so_cotables[so_type]);
++	if (IS_ERR(res))
++		return PTR_ERR(res);
+ 	cmd = container_of(header, typeof(*cmd), header);
+ 	ret = vmw_cotable_notify(res, cmd->defined_id);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
+index f2d6254154585..2d8caf09f1727 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
+@@ -506,11 +506,13 @@ static void vmw_mob_pt_setup(struct vmw_mob *mob,
+ {
+ 	unsigned long num_pt_pages = 0;
+ 	struct ttm_buffer_object *bo = mob->pt_bo;
+-	struct vmw_piter save_pt_iter;
++	struct vmw_piter save_pt_iter = {0};
+ 	struct vmw_piter pt_iter;
+ 	const struct vmw_sg_table *vsgt;
+ 	int ret;
+ 
++	BUG_ON(num_data_pages == 0);
++
+ 	ret = ttm_bo_reserve(bo, false, true, NULL);
+ 	BUG_ON(ret != 0);
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+index 609269625468d..e90fd3d16697e 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+@@ -154,6 +154,7 @@ static unsigned long vmw_port_hb_out(struct rpc_channel *channel,
+ 	/* HB port can't access encrypted memory. */
+ 	if (hb && !mem_encrypt_active()) {
+ 		unsigned long bp = channel->cookie_high;
++		u32 channel_id = (channel->channel_id << 16);
+ 
+ 		si = (uintptr_t) msg;
+ 		di = channel->cookie_low;
+@@ -161,7 +162,7 @@ static unsigned long vmw_port_hb_out(struct rpc_channel *channel,
+ 		VMW_PORT_HB_OUT(
+ 			(MESSAGE_STATUS_SUCCESS << 16) | VMW_PORT_CMD_HB_MSG,
+ 			msg_len, si, di,
+-			VMWARE_HYPERVISOR_HB | (channel->channel_id << 16) |
++			VMWARE_HYPERVISOR_HB | channel_id |
+ 			VMWARE_HYPERVISOR_OUT,
+ 			VMW_HYPERVISOR_MAGIC, bp,
+ 			eax, ebx, ecx, edx, si, di);
+@@ -209,6 +210,7 @@ static unsigned long vmw_port_hb_in(struct rpc_channel *channel, char *reply,
+ 	/* HB port can't access encrypted memory */
+ 	if (hb && !mem_encrypt_active()) {
+ 		unsigned long bp = channel->cookie_low;
++		u32 channel_id = (channel->channel_id << 16);
+ 
+ 		si = channel->cookie_high;
+ 		di = (uintptr_t) reply;
+@@ -216,7 +218,7 @@ static unsigned long vmw_port_hb_in(struct rpc_channel *channel, char *reply,
+ 		VMW_PORT_HB_IN(
+ 			(MESSAGE_STATUS_SUCCESS << 16) | VMW_PORT_CMD_HB_MSG,
+ 			reply_len, si, di,
+-			VMWARE_HYPERVISOR_HB | (channel->channel_id << 16),
++			VMWARE_HYPERVISOR_HB | channel_id,
+ 			VMW_HYPERVISOR_MAGIC, bp,
+ 			eax, ebx, ecx, edx, si, di);
+ 
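Both hunks above hoist (channel->channel_id << 16) into a local before it reaches the VMW_PORT_HB_OUT/IN macros, which expand to inline assembly; keeping the macro arguments to simple locals means the shifted value is computed once, outside the asm operand list. A plain-C sketch of the hoisting pattern (HYPERVISOR_CALL and the flag names are illustrative stand-ins, not the real macros):

#include <stdio.h>

#define FLAG_HB  0x1u
#define FLAG_OUT 0x2u

/* stands in for an inline-asm wrapper such as VMW_PORT_HB_OUT */
#define HYPERVISOR_CALL(flags) printf("call flags=0x%x\n", (flags))

struct rpc_channel { unsigned int channel_id; };

int main(void)
{
	struct rpc_channel ch = { .channel_id = 5 };
	unsigned int channel_id = ch.channel_id << 16;	/* computed once */

	HYPERVISOR_CALL(FLAG_HB | channel_id | FLAG_OUT);
	return 0;
}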
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+index 35f02958ee2cc..f275a08999ef1 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+@@ -114,6 +114,7 @@ static void vmw_resource_release(struct kref *kref)
+ 	    container_of(kref, struct vmw_resource, kref);
+ 	struct vmw_private *dev_priv = res->dev_priv;
+ 	int id;
++	int ret;
+ 	struct idr *idr = &dev_priv->res_idr[res->func->res_type];
+ 
+ 	spin_lock(&dev_priv->resource_lock);
+@@ -122,7 +123,8 @@ static void vmw_resource_release(struct kref *kref)
+ 	if (res->backup) {
+ 		struct ttm_buffer_object *bo = &res->backup->base;
+ 
+-		ttm_bo_reserve(bo, false, false, NULL);
++		ret = ttm_bo_reserve(bo, false, false, NULL);
++		BUG_ON(ret);
+ 		if (vmw_resource_mob_attached(res) &&
+ 		    res->func->unbind != NULL) {
+ 			struct ttm_validate_buffer val_buf;
+@@ -1002,7 +1004,9 @@ int vmw_resource_pin(struct vmw_resource *res, bool interruptible)
+ 		if (res->backup) {
+ 			vbo = res->backup;
+ 
+-			ttm_bo_reserve(&vbo->base, interruptible, false, NULL);
++			ret = ttm_bo_reserve(&vbo->base, interruptible, false, NULL);
++			if (ret)
++				goto out_no_validate;
+ 			if (!vbo->base.pin_count) {
+ 				ret = ttm_bo_validate
+ 					(&vbo->base,
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_so.c b/drivers/gpu/drm/vmwgfx/vmwgfx_so.c
+index 2877c7b43bd78..615bf9ca03d78 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_so.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_so.c
+@@ -539,7 +539,8 @@ const SVGACOTableType vmw_so_cotables[] = {
+ 	[vmw_so_ds] = SVGA_COTABLE_DEPTHSTENCIL,
+ 	[vmw_so_rs] = SVGA_COTABLE_RASTERIZERSTATE,
+ 	[vmw_so_ss] = SVGA_COTABLE_SAMPLER,
+-	[vmw_so_so] = SVGA_COTABLE_STREAMOUTPUT
++	[vmw_so_so] = SVGA_COTABLE_STREAMOUTPUT,
++	[vmw_so_max] = SVGA_COTABLE_MAX
+ };
+ 
+ 
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+index beab3e19d8e21..0c62cd400b64c 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+@@ -869,7 +869,7 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
+ 	user_srf->prime.base.shareable = false;
+ 	user_srf->prime.base.tfile = NULL;
+ 	if (drm_is_primary_client(file_priv))
+-		user_srf->master = drm_master_get(file_priv->master);
++		user_srf->master = drm_file_get_master(file_priv);
+ 
+ 	/**
+ 	 * From this point, the generic resource management functions
+@@ -1540,7 +1540,7 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
+ 
+ 	user_srf = container_of(srf, struct vmw_user_surface, srf);
+ 	if (drm_is_primary_client(file_priv))
+-		user_srf->master = drm_master_get(file_priv->master);
++		user_srf->master = drm_file_get_master(file_priv);
+ 
+ 	ret = ttm_read_lock(&dev_priv->reservation_sem, true);
+ 	if (unlikely(ret != 0))
+@@ -1883,7 +1883,6 @@ static void vmw_surface_dirty_range_add(struct vmw_resource *res, size_t start,
+ static int vmw_surface_dirty_sync(struct vmw_resource *res)
+ {
+ 	struct vmw_private *dev_priv = res->dev_priv;
+-	bool has_dx = 0;
+ 	u32 i, num_dirty;
+ 	struct vmw_surface_dirty *dirty =
+ 		(struct vmw_surface_dirty *) res->dirty;
+@@ -1910,7 +1909,7 @@ static int vmw_surface_dirty_sync(struct vmw_resource *res)
+ 	if (!num_dirty)
+ 		goto out;
+ 
+-	alloc_size = num_dirty * ((has_dx) ? sizeof(*cmd1) : sizeof(*cmd2));
++	alloc_size = num_dirty * ((has_sm4_context(dev_priv)) ? sizeof(*cmd1) : sizeof(*cmd2));
+ 	cmd = VMW_CMD_RESERVE(dev_priv, alloc_size);
+ 	if (!cmd)
+ 		return -ENOMEM;
+@@ -1928,7 +1927,7 @@ static int vmw_surface_dirty_sync(struct vmw_resource *res)
+ 		 * DX_UPDATE_SUBRESOURCE is aware of array surfaces.
+ 		 * UPDATE_GB_IMAGE is not.
+ 		 */
+-		if (has_dx) {
++		if (has_sm4_context(dev_priv)) {
+ 			cmd1->header.id = SVGA_3D_CMD_DX_UPDATE_SUBRESOURCE;
+ 			cmd1->header.size = sizeof(cmd1->body);
+ 			cmd1->body.sid = res->id;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c
+index e7570f422400d..bf20ca9f3a245 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.c
+@@ -586,13 +586,13 @@ int vmw_validation_bo_validate(struct vmw_validation_context *ctx, bool intr)
+ 			container_of(entry->base.bo, typeof(*vbo), base);
+ 
+ 		if (entry->cpu_blit) {
+-			struct ttm_operation_ctx ctx = {
++			struct ttm_operation_ctx ttm_ctx = {
+ 				.interruptible = intr,
+ 				.no_wait_gpu = false
+ 			};
+ 
+ 			ret = ttm_bo_validate(entry->base.bo,
+-					      &vmw_nonfixed_placement, &ctx);
++					      &vmw_nonfixed_placement, &ttm_ctx);
+ 		} else {
+ 			ret = vmw_validation_bo_validate_single
+ 			(entry->base.bo, intr, entry->as_mob);
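The rename above exists because the inner struct ttm_operation_ctx reused the name of the function's own ctx argument, shadowing it. A self-contained sketch of why shadowing is dangerous (gcc -Wshadow flags this; the names are illustrative):

#include <stdio.h>

static int validate(int ctx /* outer parameter */)
{
	{
		int ctx = 0;	/* shadows the parameter ... */
		/* ... so anything in this block uses the wrong variable: */
		printf("inner ctx = %d\n", ctx);
	}
	return ctx;		/* the parameter is visible again here */
}

int main(void)
{
	printf("returned %d\n", validate(42));
	return 0;
}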
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_disp.c b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+index 109d627968ac0..01c6ce7784ddb 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_disp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+@@ -1452,9 +1452,10 @@ zynqmp_disp_crtc_atomic_enable(struct drm_crtc *crtc,
+ 	struct drm_display_mode *adjusted_mode = &crtc->state->adjusted_mode;
+ 	int ret, vrefresh;
+ 
++	pm_runtime_get_sync(disp->dev);
++
+ 	zynqmp_disp_crtc_setup_clock(crtc, adjusted_mode);
+ 
+-	pm_runtime_get_sync(disp->dev);
+ 	ret = clk_prepare_enable(disp->pclk);
+ 	if (ret) {
+ 		dev_err(disp->dev, "failed to enable a pixel clock\n");
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_dp.c b/drivers/gpu/drm/xlnx/zynqmp_dp.c
+index 59d1fb017da01..13811332b349f 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_dp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_dp.c
+@@ -402,10 +402,6 @@ static int zynqmp_dp_phy_init(struct zynqmp_dp *dp)
+ 		}
+ 	}
+ 
+-	ret = zynqmp_dp_reset(dp, false);
+-	if (ret < 0)
+-		return ret;
+-
+ 	zynqmp_dp_clr(dp, ZYNQMP_DP_PHY_RESET, ZYNQMP_DP_PHY_RESET_ALL_RESET);
+ 
+ 	/*
+@@ -441,8 +437,6 @@ static void zynqmp_dp_phy_exit(struct zynqmp_dp *dp)
+ 				ret);
+ 	}
+ 
+-	zynqmp_dp_reset(dp, true);
+-
+ 	for (i = 0; i < dp->num_lanes; i++) {
+ 		ret = phy_exit(dp->phy[i]);
+ 		if (ret)
+@@ -1682,9 +1676,13 @@ int zynqmp_dp_probe(struct zynqmp_dpsub *dpsub, struct drm_device *drm)
+ 		return PTR_ERR(dp->reset);
+ 	}
+ 
++	ret = zynqmp_dp_reset(dp, false);
++	if (ret < 0)
++		return ret;
++
+ 	ret = zynqmp_dp_phy_probe(dp);
+ 	if (ret)
+-		return ret;
++		goto err_reset;
+ 
+ 	/* Initialize the hardware. */
+ 	zynqmp_dp_write(dp, ZYNQMP_DP_TX_PHY_POWER_DOWN,
+@@ -1696,7 +1694,7 @@ int zynqmp_dp_probe(struct zynqmp_dpsub *dpsub, struct drm_device *drm)
+ 
+ 	ret = zynqmp_dp_phy_init(dp);
+ 	if (ret)
+-		return ret;
++		goto err_reset;
+ 
+ 	zynqmp_dp_write(dp, ZYNQMP_DP_TRANSMITTER_ENABLE, 1);
+ 
+@@ -1708,15 +1706,18 @@ int zynqmp_dp_probe(struct zynqmp_dpsub *dpsub, struct drm_device *drm)
+ 					zynqmp_dp_irq_handler, IRQF_ONESHOT,
+ 					dev_name(dp->dev), dp);
+ 	if (ret < 0)
+-		goto error;
++		goto err_phy_exit;
+ 
+ 	dev_dbg(dp->dev, "ZynqMP DisplayPort Tx probed with %u lanes\n",
+ 		dp->num_lanes);
+ 
+ 	return 0;
+ 
+-error:
++err_phy_exit:
+ 	zynqmp_dp_phy_exit(dp);
++err_reset:
++	zynqmp_dp_reset(dp, true);
++
+ 	return ret;
+ }
+ 
+@@ -1734,4 +1735,5 @@ void zynqmp_dp_remove(struct zynqmp_dpsub *dpsub)
+ 	zynqmp_dp_write(dp, ZYNQMP_DP_INT_DS, 0xffffffff);
+ 
+ 	zynqmp_dp_phy_exit(dp);
++	zynqmp_dp_reset(dp, true);
+ }
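The probe rework above is the standard kernel "goto ladder": each failure jumps to a label that undoes exactly the steps already completed, in reverse order, and the new err_reset label keeps the controller in reset on any failure. A self-contained userspace sketch of the shape:

#include <stdio.h>
#include <stdlib.h>

static int probe(void)
{
	char *a, *b;
	int ret;

	a = malloc(16);			/* step 1 */
	if (!a)
		return -1;

	b = malloc(16);			/* step 2 */
	if (!b) {
		ret = -1;
		goto err_free_a;
	}

	ret = -1;			/* pretend step 3 failed */
	if (ret)
		goto err_free_b;

	return 0;

err_free_b:				/* unwind in reverse order */
	free(b);
err_free_a:
	free(a);
	return ret;
}

int main(void)
{
	printf("probe: %d\n", probe());
	return 0;
}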
+diff --git a/drivers/hid/Makefile b/drivers/hid/Makefile
+index 1ea1a7c0b20fe..e29efcb1c0402 100644
+--- a/drivers/hid/Makefile
++++ b/drivers/hid/Makefile
+@@ -115,7 +115,6 @@ obj-$(CONFIG_HID_STEELSERIES)	+= hid-steelseries.o
+ obj-$(CONFIG_HID_SUNPLUS)	+= hid-sunplus.o
+ obj-$(CONFIG_HID_GREENASIA)	+= hid-gaff.o
+ obj-$(CONFIG_HID_THRUSTMASTER)	+= hid-tmff.o hid-thrustmaster.o
+-obj-$(CONFIG_HID_TMINIT)	+= hid-tminit.o
+ obj-$(CONFIG_HID_TIVO)		+= hid-tivo.o
+ obj-$(CONFIG_HID_TOPSEED)	+= hid-topseed.o
+ obj-$(CONFIG_HID_TWINHAN)	+= hid-twinhan.o
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_client.c b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+index 3589d9945da1c..9c7b64e5357ad 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+@@ -186,7 +186,7 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata)
+ 			rc = -ENOMEM;
+ 			goto cleanup;
+ 		}
+-		info.period = msecs_to_jiffies(AMD_SFH_IDLE_LOOP);
++		info.period = AMD_SFH_IDLE_LOOP;
+ 		info.sensor_idx = cl_idx;
+ 		info.dma_address = cl_data->sensor_dma_addr[i];
+ 
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 68c8644234a4a..f43b40450e97c 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -419,8 +419,6 @@ static int hidinput_get_battery_property(struct power_supply *psy,
+ 
+ 		if (dev->battery_status == HID_BATTERY_UNKNOWN)
+ 			val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
+-		else if (dev->battery_capacity == 100)
+-			val->intval = POWER_SUPPLY_STATUS_FULL;
+ 		else
+ 			val->intval = POWER_SUPPLY_STATUS_DISCHARGING;
+ 		break;
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 51b39bda9a9d2..2e104682c22b9 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -662,8 +662,6 @@ static const struct hid_device_id hid_have_special_driver[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb653) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb654) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb65a) },
+-#endif
+-#if IS_ENABLED(CONFIG_HID_TMINIT)
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb65d) },
+ #endif
+ #if IS_ENABLED(CONFIG_HID_TIVO)
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 46474612e73c6..517141138b007 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -171,8 +171,6 @@ static const struct i2c_hid_quirks {
+ 		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
+ 	{ I2C_VENDOR_ID_RAYDIUM, I2C_PRODUCT_ID_RAYDIUM_3118,
+ 		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
+-	{ USB_VENDOR_ID_ELAN, HID_ANY_ID,
+-		 I2C_HID_QUIRK_BOGUS_IRQ },
+ 	{ USB_VENDOR_ID_ALPS_JP, HID_ANY_ID,
+ 		 I2C_HID_QUIRK_RESET_ON_RESUME },
+ 	{ I2C_VENDOR_ID_SYNAPTICS, I2C_PRODUCT_ID_SYNAPTICS_SYNA2393,
+@@ -183,7 +181,8 @@ static const struct i2c_hid_quirks {
+ 	 * Sending the wakeup after reset actually break ELAN touchscreen controller
+ 	 */
+ 	{ USB_VENDOR_ID_ELAN, HID_ANY_ID,
+-		 I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET },
++		 I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET |
++		 I2C_HID_QUIRK_BOGUS_IRQ },
+ 	{ 0, 0 }
+ };
+ 
+diff --git a/drivers/hwmon/pmbus/ibm-cffps.c b/drivers/hwmon/pmbus/ibm-cffps.c
+index 5668d8305b78e..df712ce4b164d 100644
+--- a/drivers/hwmon/pmbus/ibm-cffps.c
++++ b/drivers/hwmon/pmbus/ibm-cffps.c
+@@ -50,9 +50,9 @@
+ #define CFFPS_MFR_VAUX_FAULT			BIT(6)
+ #define CFFPS_MFR_CURRENT_SHARE_WARNING		BIT(7)
+ 
+-#define CFFPS_LED_BLINK				BIT(0)
+-#define CFFPS_LED_ON				BIT(1)
+-#define CFFPS_LED_OFF				BIT(2)
++#define CFFPS_LED_BLINK				(BIT(0) | BIT(6))
++#define CFFPS_LED_ON				(BIT(1) | BIT(6))
++#define CFFPS_LED_OFF				(BIT(2) | BIT(6))
+ #define CFFPS_BLINK_RATE_MS			250
+ 
+ enum {
+diff --git a/drivers/iio/dac/ad5624r_spi.c b/drivers/iio/dac/ad5624r_spi.c
+index 9bde869829121..530529feebb51 100644
+--- a/drivers/iio/dac/ad5624r_spi.c
++++ b/drivers/iio/dac/ad5624r_spi.c
+@@ -229,7 +229,7 @@ static int ad5624r_probe(struct spi_device *spi)
+ 	if (!indio_dev)
+ 		return -ENOMEM;
+ 	st = iio_priv(indio_dev);
+-	st->reg = devm_regulator_get(&spi->dev, "vcc");
++	st->reg = devm_regulator_get_optional(&spi->dev, "vref");
+ 	if (!IS_ERR(st->reg)) {
+ 		ret = regulator_enable(st->reg);
+ 		if (ret)
+@@ -240,6 +240,22 @@ static int ad5624r_probe(struct spi_device *spi)
+ 			goto error_disable_reg;
+ 
+ 		voltage_uv = ret;
++	} else {
++		if (PTR_ERR(st->reg) != -ENODEV)
++			return PTR_ERR(st->reg);
++		/* Backwards compatibility. This naming is not correct */
++		st->reg = devm_regulator_get_optional(&spi->dev, "vcc");
++		if (!IS_ERR(st->reg)) {
++			ret = regulator_enable(st->reg);
++			if (ret)
++				return ret;
++
++			ret = regulator_get_voltage(st->reg);
++			if (ret < 0)
++				goto error_disable_reg;
++
++			voltage_uv = ret;
++		}
+ 	}
+ 
+ 	spi_set_drvdata(spi, indio_dev);
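The probe change above asks for the correctly named "vref" supply first and only falls back to the historical "vcc" name so old device trees keep working; devm_regulator_get_optional() reports a missing supply as -ENODEV, which is the non-fatal case. A hedged userspace sketch of the try-then-fall-back pattern (get_supply() is a made-up stand-in for the regulator call):

#include <stdio.h>
#include <string.h>
#include <errno.h>

/* stands in for devm_regulator_get_optional(); returns -ENODEV when
 * the supply simply isn't described, which is not a hard error */
static int get_supply(const char *name, int *microvolts)
{
	if (strcmp(name, "vcc") == 0) {	/* legacy DT only names "vcc" */
		*microvolts = 3300000;
		return 0;
	}
	return -ENODEV;
}

int main(void)
{
	int uv = 0;
	int ret = get_supply("vref", &uv);	/* preferred name */

	if (ret == -ENODEV)
		ret = get_supply("vcc", &uv);	/* backwards compatibility */
	if (ret)
		return 1;

	printf("supply voltage: %d uV\n", uv);
	return 0;
}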
+diff --git a/drivers/iio/temperature/ltc2983.c b/drivers/iio/temperature/ltc2983.c
+index 3b5ba26d7d867..3b4a0e60e6059 100644
+--- a/drivers/iio/temperature/ltc2983.c
++++ b/drivers/iio/temperature/ltc2983.c
+@@ -89,6 +89,8 @@
+ 
+ #define	LTC2983_STATUS_START_MASK	BIT(7)
+ #define	LTC2983_STATUS_START(x)		FIELD_PREP(LTC2983_STATUS_START_MASK, x)
++#define	LTC2983_STATUS_UP_MASK		GENMASK(7, 6)
++#define	LTC2983_STATUS_UP(reg)		FIELD_GET(LTC2983_STATUS_UP_MASK, reg)
+ 
+ #define	LTC2983_STATUS_CHAN_SEL_MASK	GENMASK(4, 0)
+ #define	LTC2983_STATUS_CHAN_SEL(x) \
+@@ -1362,17 +1364,16 @@ put_child:
+ 
+ static int ltc2983_setup(struct ltc2983_data *st, bool assign_iio)
+ {
+-	u32 iio_chan_t = 0, iio_chan_v = 0, chan, iio_idx = 0;
++	u32 iio_chan_t = 0, iio_chan_v = 0, chan, iio_idx = 0, status;
+ 	int ret;
+-	unsigned long time;
+-
+-	/* make sure the device is up */
+-	time = wait_for_completion_timeout(&st->completion,
+-					    msecs_to_jiffies(250));
+ 
+-	if (!time) {
++	/* make sure the device is up: start bit (7) is 0 and done bit (6) is 1 */
++	ret = regmap_read_poll_timeout(st->regmap, LTC2983_STATUS_REG, status,
++				       LTC2983_STATUS_UP(status) == 1, 25000,
++				       25000 * 10);
++	if (ret) {
+ 		dev_err(&st->spi->dev, "Device startup timed out\n");
+-		return -ETIMEDOUT;
++		return ret;
+ 	}
+ 
+ 	st->iio_chan = devm_kzalloc(&st->spi->dev,
+@@ -1492,10 +1493,11 @@ static int ltc2983_probe(struct spi_device *spi)
+ 	ret = ltc2983_parse_dt(st);
+ 	if (ret)
+ 		return ret;
+-	/*
+-	 * let's request the irq now so it is used to sync the device
+-	 * startup in ltc2983_setup()
+-	 */
++
++	ret = ltc2983_setup(st, true);
++	if (ret)
++		return ret;
++
+ 	ret = devm_request_irq(&spi->dev, spi->irq, ltc2983_irq_handler,
+ 			       IRQF_TRIGGER_RISING, name, st);
+ 	if (ret) {
+@@ -1503,10 +1505,6 @@ static int ltc2983_probe(struct spi_device *spi)
+ 		return ret;
+ 	}
+ 
+-	ret = ltc2983_setup(st, true);
+-	if (ret)
+-		return ret;
+-
+ 	indio_dev->name = name;
+ 	indio_dev->num_channels = st->iio_channels;
+ 	indio_dev->channels = st->iio_chan;
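The setup path above now polls the status register (via regmap_read_poll_timeout()) instead of waiting on an interrupt completion, since setup now runs before the IRQ is requested. A minimal sketch of poll-until-condition-or-timeout, with the register read simulated:

#include <stdio.h>
#include <unistd.h>

#define STATUS_UP(reg)	(((reg) >> 6) & 0x3)	/* bits [7:6] */

static unsigned int read_status(void)
{
	static int calls;
	/* device comes up on the 3rd poll: start=0, done=1 -> 0x40 */
	return ++calls >= 3 ? 0x40 : 0x80;
}

static int poll_until_up(unsigned int sleep_us, unsigned int timeout_us)
{
	unsigned int waited = 0;

	while (STATUS_UP(read_status()) != 1) {
		if (waited >= timeout_us)
			return -1;	/* -ETIMEDOUT in the driver */
		usleep(sleep_us);
		waited += sleep_us;
	}
	return 0;
}

int main(void)
{
	printf("device up: %d\n", poll_until_up(25000, 250000));
	return 0;
}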
+diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c
+index da8adadf47559..75b6da00065a3 100644
+--- a/drivers/infiniband/core/iwcm.c
++++ b/drivers/infiniband/core/iwcm.c
+@@ -1187,29 +1187,34 @@ static int __init iw_cm_init(void)
+ 
+ 	ret = iwpm_init(RDMA_NL_IWCM);
+ 	if (ret)
+-		pr_err("iw_cm: couldn't init iwpm\n");
+-	else
+-		rdma_nl_register(RDMA_NL_IWCM, iwcm_nl_cb_table);
++		return ret;
++
+ 	iwcm_wq = alloc_ordered_workqueue("iw_cm_wq", 0);
+ 	if (!iwcm_wq)
+-		return -ENOMEM;
++		goto err_alloc;
+ 
+ 	iwcm_ctl_table_hdr = register_net_sysctl(&init_net, "net/iw_cm",
+ 						 iwcm_ctl_table);
+ 	if (!iwcm_ctl_table_hdr) {
+ 		pr_err("iw_cm: couldn't register sysctl paths\n");
+-		destroy_workqueue(iwcm_wq);
+-		return -ENOMEM;
++		goto err_sysctl;
+ 	}
+ 
++	rdma_nl_register(RDMA_NL_IWCM, iwcm_nl_cb_table);
+ 	return 0;
++
++err_sysctl:
++	destroy_workqueue(iwcm_wq);
++err_alloc:
++	iwpm_exit(RDMA_NL_IWCM);
++	return -ENOMEM;
+ }
+ 
+ static void __exit iw_cm_cleanup(void)
+ {
++	rdma_nl_unregister(RDMA_NL_IWCM);
+ 	unregister_net_sysctl_table(iwcm_ctl_table_hdr);
+ 	destroy_workqueue(iwcm_wq);
+-	rdma_nl_unregister(RDMA_NL_IWCM);
+ 	iwpm_exit(RDMA_NL_IWCM);
+ }
+ 
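The reworked iw_cm_init()/iw_cm_cleanup() pair makes teardown the exact mirror of bring-up: the netlink handler is registered last, once everything it depends on exists, and unregistered first. A small sketch of the publish-last/unpublish-first invariant (the globals are illustrative):

#include <stdio.h>
#include <stdlib.h>

static char *state;			/* resource the handler uses */
static void (*handler)(void);		/* published entry point */

static void handle(void) { printf("handler sees: %s\n", state); }

static int init(void)
{
	state = malloc(8);
	if (!state)
		return -1;
	state[0] = '\0';
	handler = handle;	/* publish last: everything is ready */
	return 0;
}

static void cleanup(void)
{
	handler = NULL;		/* unpublish first ... */
	free(state);		/* ... then tear down what it used */
	state = NULL;
}

int main(void)
{
	if (init())
		return 1;
	if (handler)
		handler();
	cleanup();
	return 0;
}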
+diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
+index 51572f1dc6111..72621ecd81f70 100644
+--- a/drivers/infiniband/hw/efa/efa_verbs.c
++++ b/drivers/infiniband/hw/efa/efa_verbs.c
+@@ -717,7 +717,6 @@ struct ib_qp *efa_create_qp(struct ib_pd *ibpd,
+ 
+ 	qp->qp_handle = create_qp_resp.qp_handle;
+ 	qp->ibqp.qp_num = create_qp_resp.qp_num;
+-	qp->ibqp.qp_type = init_attr->qp_type;
+ 	qp->max_send_wr = init_attr->cap.max_send_wr;
+ 	qp->max_recv_wr = init_attr->cap.max_recv_wr;
+ 	qp->max_send_sge = init_attr->cap.max_send_sge;
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index e3a8a420c0455..c076eed9c3b77 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -650,12 +650,7 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd,
+ 
+ 	ppd->pkeys[default_pkey_idx] = DEFAULT_P_KEY;
+ 	ppd->part_enforce |= HFI1_PART_ENFORCE_IN;
+-
+-	if (loopback) {
+-		dd_dev_err(dd, "Faking data partition 0x8001 in idx %u\n",
+-			   !default_pkey_idx);
+-		ppd->pkeys[!default_pkey_idx] = 0x8001;
+-	}
++	ppd->pkeys[0] = 0x8001;
+ 
+ 	INIT_WORK(&ppd->link_vc_work, handle_verify_cap);
+ 	INIT_WORK(&ppd->link_up_work, handle_link_up);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index dcbe5e28a4f7a..90945e664f5da 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -4735,8 +4735,10 @@ static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ 	spin_lock_irqsave(&hr_dev->dip_list_lock, flags);
+ 
+ 	list_for_each_entry(hr_dip, &hr_dev->dip_list, node) {
+-		if (!memcmp(grh->dgid.raw, hr_dip->dgid, 16))
++		if (!memcmp(grh->dgid.raw, hr_dip->dgid, 16)) {
++			*dip_idx = hr_dip->dip_idx;
+ 			goto out;
++		}
+ 	}
+ 
+ 	/* If no dgid is found, a new dip and a mapping between dgid and
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index 23cf2f6bc7a54..d4da840dbc2ef 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -1784,7 +1784,7 @@ struct hns_roce_eq_context {
+ 
+ struct hns_roce_dip {
+ 	u8 dgid[GID_LEN_V2];
+-	u8 dip_idx;
++	u32 dip_idx;
+ 	struct list_head node;	/* all dips are on a list */
+ };
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index b8454dcb03183..39a085f8e6055 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -361,7 +361,9 @@ struct ib_mr *hns_roce_rereg_user_mr(struct ib_mr *ibmr, int flags, u64 start,
+ free_cmd_mbox:
+ 	hns_roce_free_cmd_mailbox(hr_dev, mailbox);
+ 
+-	return ERR_PTR(ret);
++	if (ret)
++		return ERR_PTR(ret);
++	return NULL;
+ }
+ 
+ int hns_roce_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 230a909ba9bcd..5d5dd0b5d5075 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -835,7 +835,6 @@ static int alloc_qp_db(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ 				goto err_out;
+ 			}
+ 			hr_qp->en_flags |= HNS_ROCE_QP_CAP_SQ_RECORD_DB;
+-			resp->cap_flags |= HNS_ROCE_QP_CAP_SQ_RECORD_DB;
+ 		}
+ 
+ 		if (user_qp_has_rdb(hr_dev, init_attr, udata, resp)) {
+@@ -848,7 +847,6 @@ static int alloc_qp_db(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ 				goto err_sdb;
+ 			}
+ 			hr_qp->en_flags |= HNS_ROCE_QP_CAP_RQ_RECORD_DB;
+-			resp->cap_flags |= HNS_ROCE_QP_CAP_RQ_RECORD_DB;
+ 		}
+ 	} else {
+ 		if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09)
+@@ -1060,6 +1058,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ 	}
+ 
+ 	if (udata) {
++		resp.cap_flags = hr_qp->en_flags;
+ 		ret = ib_copy_to_udata(udata, &resp,
+ 				       min(udata->outlen, sizeof(resp)));
+ 		if (ret) {
+@@ -1158,14 +1157,8 @@ struct ib_qp *hns_roce_create_qp(struct ib_pd *pd,
+ 	if (!hr_qp)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	if (init_attr->qp_type == IB_QPT_XRC_INI)
+-		init_attr->recv_cq = NULL;
+-
+-	if (init_attr->qp_type == IB_QPT_XRC_TGT) {
++	if (init_attr->qp_type == IB_QPT_XRC_TGT)
+ 		hr_qp->xrcdn = to_hr_xrcd(init_attr->xrcd)->xrcdn;
+-		init_attr->recv_cq = NULL;
+-		init_attr->send_cq = NULL;
+-	}
+ 
+ 	if (init_attr->qp_type == IB_QPT_GSI) {
+ 		hr_qp->port = init_attr->port_num - 1;
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 5851486c0d930..2471f48ea5f39 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -1896,7 +1896,6 @@ static int get_atomic_mode(struct mlx5_ib_dev *dev,
+ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+ 			     struct mlx5_create_qp_params *params)
+ {
+-	struct mlx5_ib_create_qp *ucmd = params->ucmd;
+ 	struct ib_qp_init_attr *attr = params->attr;
+ 	u32 uidx = params->uidx;
+ 	struct mlx5_ib_resources *devr = &dev->devr;
+@@ -1916,8 +1915,6 @@ static int create_xrc_tgt_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+ 	if (!in)
+ 		return -ENOMEM;
+ 
+-	if (MLX5_CAP_GEN(mdev, ece_support) && ucmd)
+-		MLX5_SET(create_qp_in, in, ece, ucmd->ece_options);
+ 	qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
+ 
+ 	MLX5_SET(qpc, qpc, st, MLX5_QP_ST_XRC);
+diff --git a/drivers/input/mouse/elan_i2c.h b/drivers/input/mouse/elan_i2c.h
+index dc4a240f44895..3c84deefa327d 100644
+--- a/drivers/input/mouse/elan_i2c.h
++++ b/drivers/input/mouse/elan_i2c.h
+@@ -55,8 +55,9 @@
+ #define ETP_FW_PAGE_SIZE_512	512
+ #define ETP_FW_SIGNATURE_SIZE	6
+ 
+-#define ETP_PRODUCT_ID_DELBIN	0x00C2
++#define ETP_PRODUCT_ID_WHITEBOX	0x00B8
+ #define ETP_PRODUCT_ID_VOXEL	0x00BF
++#define ETP_PRODUCT_ID_DELBIN	0x00C2
+ #define ETP_PRODUCT_ID_MAGPIE	0x0120
+ #define ETP_PRODUCT_ID_BOBBA	0x0121
+ 
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index dad22c1ea6a0f..47af62c122672 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -105,6 +105,7 @@ static u32 elan_i2c_lookup_quirks(u16 ic_type, u16 product_id)
+ 		u32 quirks;
+ 	} elan_i2c_quirks[] = {
+ 		{ 0x0D, ETP_PRODUCT_ID_DELBIN, ETP_QUIRK_QUICK_WAKEUP },
++		{ 0x0D, ETP_PRODUCT_ID_WHITEBOX, ETP_QUIRK_QUICK_WAKEUP },
+ 		{ 0x10, ETP_PRODUCT_ID_VOXEL, ETP_QUIRK_QUICK_WAKEUP },
+ 		{ 0x14, ETP_PRODUCT_ID_MAGPIE, ETP_QUIRK_QUICK_WAKEUP },
+ 		{ 0x14, ETP_PRODUCT_ID_BOBBA, ETP_QUIRK_QUICK_WAKEUP },
+diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
+index c11bc8b833b8e..d5552e2c160d2 100644
+--- a/drivers/iommu/intel/pasid.h
++++ b/drivers/iommu/intel/pasid.h
+@@ -28,12 +28,12 @@
+ #define VCMD_CMD_ALLOC			0x1
+ #define VCMD_CMD_FREE			0x2
+ #define VCMD_VRSP_IP			0x1
+-#define VCMD_VRSP_SC(e)			(((e) >> 1) & 0x3)
++#define VCMD_VRSP_SC(e)			(((e) & 0xff) >> 1)
+ #define VCMD_VRSP_SC_SUCCESS		0
+-#define VCMD_VRSP_SC_NO_PASID_AVAIL	2
+-#define VCMD_VRSP_SC_INVALID_PASID	2
+-#define VCMD_VRSP_RESULT_PASID(e)	(((e) >> 8) & 0xfffff)
+-#define VCMD_CMD_OPERAND(e)		((e) << 8)
++#define VCMD_VRSP_SC_NO_PASID_AVAIL	16
++#define VCMD_VRSP_SC_INVALID_PASID	16
++#define VCMD_VRSP_RESULT_PASID(e)	(((e) >> 16) & 0xfffff)
++#define VCMD_CMD_OPERAND(e)		((e) << 16)
+ /*
+  * Domain ID reserved for pasid entries programmed for first-level
+  * only and pass-through transfer modes.
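The corrected macros above place the status code in bits [7:1] and the allocated PASID in bits [35:16] of the VCMD response. A sketch that packs and unpacks a fake response to show the layout (the VRSP_*/CMD_OPERAND names mirror the patched VCMD_* macros but are local to this example):

#include <stdio.h>
#include <stdint.h>

#define VRSP_IP			0x1ULL			/* bit 0 */
#define VRSP_SC(e)		((unsigned)(((e) & 0xff) >> 1))
#define VRSP_RESULT_PASID(e)	((unsigned)(((e) >> 16) & 0xfffff))
#define CMD_OPERAND(x)		((uint64_t)(x) << 16)

int main(void)
{
	/* fake response: success (SC=0), PASID 0x1234 in bits [35:16] */
	uint64_t rsp = CMD_OPERAND(0x1234);

	printf("in progress: %llu\n", (unsigned long long)(rsp & VRSP_IP));
	printf("status:      %u\n", VRSP_SC(rsp));
	printf("pasid:       0x%x\n", VRSP_RESULT_PASID(rsp));
	return 0;
}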
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index 5665b6ea8119f..75378e35c3d66 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -168,7 +168,8 @@ static void cmdq_task_insert_into_thread(struct cmdq_task *task)
+ 	dma_sync_single_for_cpu(dev, prev_task->pa_base,
+ 				prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
+ 	prev_task_base[CMDQ_NUM_CMD(prev_task->pkt) - 1] =
+-		(u64)CMDQ_JUMP_BY_PA << 32 | task->pa_base;
++		(u64)CMDQ_JUMP_BY_PA << 32 |
++		(task->pa_base >> task->cmdq->shift_pa);
+ 	dma_sync_single_for_device(dev, prev_task->pa_base,
+ 				   prev_task->pkt->cmd_buf_size, DMA_TO_DEVICE);
+ 
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index b0ab080f25676..85f3a1a4fbb39 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -2661,7 +2661,12 @@ static void *crypt_page_alloc(gfp_t gfp_mask, void *pool_data)
+ 	struct crypt_config *cc = pool_data;
+ 	struct page *page;
+ 
+-	if (unlikely(percpu_counter_compare(&cc->n_allocated_pages, dm_crypt_pages_per_client) >= 0) &&
++	/*
++	 * Note, percpu_counter_read_positive() may over- (and under-)estimate
++	 * the current usage by at most (batch - 1) * num_online_cpus() pages,
++	 * but avoids the potential spinlock contention of an exact result.
++	 */
++	if (unlikely(percpu_counter_read_positive(&cc->n_allocated_pages) >= dm_crypt_pages_per_client) &&
+ 	    likely(gfp_mask & __GFP_NORETRY))
+ 		return NULL;
+ 
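The dm-crypt hunk above trades exactness for scalability: percpu_counter_compare() takes a spinlock and folds in every CPU's pending delta, while percpu_counter_read_positive() just reads the possibly stale global sum, which is good enough for a soft allocation limit. A sketch with simulated per-CPU deltas, under the stated error bound:

#include <stdio.h>

#define NR_CPUS	4
#define BATCH	32

static long global_count = 100;			/* folded-in total */
static long cpu_delta[NR_CPUS] = { 3, -5, 7, 1 };	/* not yet folded */

/* cheap, lock-free, may be off by up to (BATCH - 1) * NR_CPUS */
static long read_positive(void)
{
	return global_count > 0 ? global_count : 0;
}

/* exact, but would need a lock against concurrent updaters */
static long sum_exact(void)
{
	long sum = global_count;
	for (int i = 0; i < NR_CPUS; i++)
		sum += cpu_delta[i];
	return sum;
}

int main(void)
{
	printf("approximate: %ld (max error %d)\n",
	       read_positive(), (BATCH - 1) * NR_CPUS);
	printf("exact:       %ld\n", sum_exact());
	return 0;
}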
+diff --git a/drivers/media/cec/platform/stm32/stm32-cec.c b/drivers/media/cec/platform/stm32/stm32-cec.c
+index ea4b1ebfca991..0ffd89712536b 100644
+--- a/drivers/media/cec/platform/stm32/stm32-cec.c
++++ b/drivers/media/cec/platform/stm32/stm32-cec.c
+@@ -305,14 +305,16 @@ static int stm32_cec_probe(struct platform_device *pdev)
+ 
+ 	cec->clk_hdmi_cec = devm_clk_get(&pdev->dev, "hdmi-cec");
+ 	if (IS_ERR(cec->clk_hdmi_cec) &&
+-	    PTR_ERR(cec->clk_hdmi_cec) == -EPROBE_DEFER)
+-		return -EPROBE_DEFER;
++	    PTR_ERR(cec->clk_hdmi_cec) == -EPROBE_DEFER) {
++		ret = -EPROBE_DEFER;
++		goto err_unprepare_cec_clk;
++	}
+ 
+ 	if (!IS_ERR(cec->clk_hdmi_cec)) {
+ 		ret = clk_prepare(cec->clk_hdmi_cec);
+ 		if (ret) {
+ 			dev_err(&pdev->dev, "Can't prepare hdmi-cec clock\n");
+-			return ret;
++			goto err_unprepare_cec_clk;
+ 		}
+ 	}
+ 
+@@ -324,19 +326,27 @@ static int stm32_cec_probe(struct platform_device *pdev)
+ 			CEC_NAME, caps,	CEC_MAX_LOG_ADDRS);
+ 	ret = PTR_ERR_OR_ZERO(cec->adap);
+ 	if (ret)
+-		return ret;
++		goto err_unprepare_hdmi_cec_clk;
+ 
+ 	ret = cec_register_adapter(cec->adap, &pdev->dev);
+-	if (ret) {
+-		cec_delete_adapter(cec->adap);
+-		return ret;
+-	}
++	if (ret)
++		goto err_delete_adapter;
+ 
+ 	cec_hw_init(cec);
+ 
+ 	platform_set_drvdata(pdev, cec);
+ 
+ 	return 0;
++
++err_delete_adapter:
++	cec_delete_adapter(cec->adap);
++
++err_unprepare_hdmi_cec_clk:
++	clk_unprepare(cec->clk_hdmi_cec);
++
++err_unprepare_cec_clk:
++	clk_unprepare(cec->clk_cec);
++	return ret;
+ }
+ 
+ static int stm32_cec_remove(struct platform_device *pdev)
+diff --git a/drivers/media/cec/platform/tegra/tegra_cec.c b/drivers/media/cec/platform/tegra/tegra_cec.c
+index 1ac0c70a59818..5e907395ca2e5 100644
+--- a/drivers/media/cec/platform/tegra/tegra_cec.c
++++ b/drivers/media/cec/platform/tegra/tegra_cec.c
+@@ -366,7 +366,11 @@ static int tegra_cec_probe(struct platform_device *pdev)
+ 		return -ENOENT;
+ 	}
+ 
+-	clk_prepare_enable(cec->clk);
++	ret = clk_prepare_enable(cec->clk);
++	if (ret) {
++		dev_err(&pdev->dev, "Unable to prepare clock for CEC\n");
++		return ret;
++	}
+ 
+ 	/* set context info. */
+ 	cec->dev = &pdev->dev;
+@@ -446,9 +450,7 @@ static int tegra_cec_resume(struct platform_device *pdev)
+ 
+ 	dev_notice(&pdev->dev, "Resuming\n");
+ 
+-	clk_prepare_enable(cec->clk);
+-
+-	return 0;
++	return clk_prepare_enable(cec->clk);
+ }
+ #endif
+ 
+diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c
+index 082796534b0ae..bb02354a48b81 100644
+--- a/drivers/media/dvb-frontends/dib8000.c
++++ b/drivers/media/dvb-frontends/dib8000.c
+@@ -2107,32 +2107,55 @@ static void dib8000_load_ana_fe_coefs(struct dib8000_state *state, const s16 *an
+ 			dib8000_write_word(state, 117 + mode, ana_fe[mode]);
+ }
+ 
+-static const u16 lut_prbs_2k[14] = {
+-	0, 0x423, 0x009, 0x5C7, 0x7A6, 0x3D8, 0x527, 0x7FF, 0x79B, 0x3D6, 0x3A2, 0x53B, 0x2F4, 0x213
++static const u16 lut_prbs_2k[13] = {
++	0x423, 0x009, 0x5C7,
++	0x7A6, 0x3D8, 0x527,
++	0x7FF, 0x79B, 0x3D6,
++	0x3A2, 0x53B, 0x2F4,
++	0x213
+ };
+-static const u16 lut_prbs_4k[14] = {
+-	0, 0x208, 0x0C3, 0x7B9, 0x423, 0x5C7, 0x3D8, 0x7FF, 0x3D6, 0x53B, 0x213, 0x029, 0x0D0, 0x48E
++
++static const u16 lut_prbs_4k[13] = {
++	0x208, 0x0C3, 0x7B9,
++	0x423, 0x5C7, 0x3D8,
++	0x7FF, 0x3D6, 0x53B,
++	0x213, 0x029, 0x0D0,
++	0x48E
+ };
+-static const u16 lut_prbs_8k[14] = {
+-	0, 0x740, 0x069, 0x7DD, 0x208, 0x7B9, 0x5C7, 0x7FF, 0x53B, 0x029, 0x48E, 0x4C4, 0x367, 0x684
++
++static const u16 lut_prbs_8k[13] = {
++	0x740, 0x069, 0x7DD,
++	0x208, 0x7B9, 0x5C7,
++	0x7FF, 0x53B, 0x029,
++	0x48E, 0x4C4, 0x367,
++	0x684
+ };
+ 
+ static u16 dib8000_get_init_prbs(struct dib8000_state *state, u16 subchannel)
+ {
+ 	int sub_channel_prbs_group = 0;
++	int prbs_group;
+ 
+-	sub_channel_prbs_group = (subchannel / 3) + 1;
+-	dprintk("sub_channel_prbs_group = %d , subchannel =%d prbs = 0x%04x\n", sub_channel_prbs_group, subchannel, lut_prbs_8k[sub_channel_prbs_group]);
++	sub_channel_prbs_group = subchannel / 3;
++	if (sub_channel_prbs_group >= ARRAY_SIZE(lut_prbs_2k))
++		return 0;
+ 
+ 	switch (state->fe[0]->dtv_property_cache.transmission_mode) {
+ 	case TRANSMISSION_MODE_2K:
+-			return lut_prbs_2k[sub_channel_prbs_group];
++		prbs_group = lut_prbs_2k[sub_channel_prbs_group];
++		break;
+ 	case TRANSMISSION_MODE_4K:
+-			return lut_prbs_4k[sub_channel_prbs_group];
++		prbs_group = lut_prbs_4k[sub_channel_prbs_group];
++		break;
+ 	default:
+ 	case TRANSMISSION_MODE_8K:
+-			return lut_prbs_8k[sub_channel_prbs_group];
++		prbs_group = lut_prbs_8k[sub_channel_prbs_group];
+ 	}
++
++	dprintk("sub_channel_prbs_group = %d, subchannel = %d, prbs = 0x%04x\n",
++		sub_channel_prbs_group, subchannel, prbs_group);
++
++	return prbs_group;
+ }
+ 
+ static void dib8000_set_13seg_channel(struct dib8000_state *state)
+@@ -2409,10 +2432,8 @@ static void dib8000_set_isdbt_common_channel(struct dib8000_state *state, u8 seq
+ 	/* TSB or ISDBT ? apply it now */
+ 	if (c->isdbt_sb_mode) {
+ 		dib8000_set_sb_channel(state);
+-		if (c->isdbt_sb_subchannel < 14)
+-			init_prbs = dib8000_get_init_prbs(state, c->isdbt_sb_subchannel);
+-		else
+-			init_prbs = 0;
++		init_prbs = dib8000_get_init_prbs(state,
++						  c->isdbt_sb_subchannel);
+ 	} else {
+ 		dib8000_set_13seg_channel(state);
+ 		init_prbs = 0xfff;
+@@ -3004,6 +3025,7 @@ static int dib8000_tune(struct dvb_frontend *fe)
+ 
+ 	unsigned long *timeout = &state->timeout;
+ 	unsigned long now = jiffies;
++	u16 init_prbs;
+ #ifdef DIB8000_AGC_FREEZE
+ 	u16 agc1, agc2;
+ #endif
+@@ -3302,8 +3324,10 @@ static int dib8000_tune(struct dvb_frontend *fe)
+ 		break;
+ 
+ 	case CT_DEMOD_STEP_11:  /* 41 : init prbs autosearch */
+-		if (state->subchannel <= 41) {
+-			dib8000_set_subchannel_prbs(state, dib8000_get_init_prbs(state, state->subchannel));
++		init_prbs = dib8000_get_init_prbs(state, state->subchannel);
++
++		if (init_prbs) {
++			dib8000_set_subchannel_prbs(state, init_prbs);
+ 			*tune_state = CT_DEMOD_STEP_9;
+ 		} else {
+ 			*tune_state = CT_DEMOD_STOP;
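The dib8000 rework above drops the unused leading 0 entry from the PRBS tables and bounds-checks the index, returning 0 (an invalid PRBS that callers treat as "stop") instead of reading past the array. A self-contained sketch of the bounds-checked lookup:

#include <stdio.h>

static const unsigned short lut[13] = {
	0x423, 0x009, 0x5C7, 0x7A6, 0x3D8, 0x527, 0x7FF,
	0x79B, 0x3D6, 0x3A2, 0x53B, 0x2F4, 0x213
};

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static unsigned short get_init_prbs(unsigned int subchannel)
{
	unsigned int group = subchannel / 3;

	if (group >= ARRAY_SIZE(lut))
		return 0;	/* caller treats 0 as "stop" */
	return lut[group];
}

int main(void)
{
	printf("subchannel 10 -> 0x%03x\n", get_init_prbs(10));
	printf("subchannel 99 -> 0x%03x\n", get_init_prbs(99));
	return 0;
}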
+diff --git a/drivers/media/i2c/imx258.c b/drivers/media/i2c/imx258.c
+index a017ec4e0f504..cdeaaec318791 100644
+--- a/drivers/media/i2c/imx258.c
++++ b/drivers/media/i2c/imx258.c
+@@ -23,7 +23,7 @@
+ #define IMX258_CHIP_ID			0x0258
+ 
+ /* V_TIMING internal */
+-#define IMX258_VTS_30FPS		0x0c98
++#define IMX258_VTS_30FPS		0x0c50
+ #define IMX258_VTS_30FPS_2K		0x0638
+ #define IMX258_VTS_30FPS_VGA		0x034c
+ #define IMX258_VTS_MAX			0xffff
+@@ -47,7 +47,7 @@
+ /* Analog gain control */
+ #define IMX258_REG_ANALOG_GAIN		0x0204
+ #define IMX258_ANA_GAIN_MIN		0
+-#define IMX258_ANA_GAIN_MAX		0x1fff
++#define IMX258_ANA_GAIN_MAX		480
+ #define IMX258_ANA_GAIN_STEP		1
+ #define IMX258_ANA_GAIN_DEFAULT		0x0
+ 
+diff --git a/drivers/media/i2c/tda1997x.c b/drivers/media/i2c/tda1997x.c
+index 9554c8348c020..17cc69c3227f8 100644
+--- a/drivers/media/i2c/tda1997x.c
++++ b/drivers/media/i2c/tda1997x.c
+@@ -1695,14 +1695,15 @@ static int tda1997x_query_dv_timings(struct v4l2_subdev *sd,
+ 				     struct v4l2_dv_timings *timings)
+ {
+ 	struct tda1997x_state *state = to_state(sd);
++	int ret;
+ 
+ 	v4l_dbg(1, debug, state->client, "%s\n", __func__);
+ 	memset(timings, 0, sizeof(struct v4l2_dv_timings));
+ 	mutex_lock(&state->lock);
+-	tda1997x_detect_std(state, timings);
++	ret = tda1997x_detect_std(state, timings);
+ 	mutex_unlock(&state->lock);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static const struct v4l2_subdev_video_ops tda1997x_video_ops = {
+diff --git a/drivers/media/platform/ti-vpe/cal-camerarx.c b/drivers/media/platform/ti-vpe/cal-camerarx.c
+index cbe6114908de7..63d13bcc83b47 100644
+--- a/drivers/media/platform/ti-vpe/cal-camerarx.c
++++ b/drivers/media/platform/ti-vpe/cal-camerarx.c
+@@ -842,7 +842,9 @@ struct cal_camerarx *cal_camerarx_create(struct cal_dev *cal,
+ 	if (ret)
+ 		goto error;
+ 
+-	cal_camerarx_sd_init_cfg(sd, NULL);
++	ret = cal_camerarx_sd_init_cfg(sd, NULL);
++	if (ret)
++		goto error;
+ 
+ 	ret = v4l2_device_register_subdev(&cal->v4l2_dev, sd);
+ 	if (ret)
+diff --git a/drivers/media/platform/ti-vpe/cal-video.c b/drivers/media/platform/ti-vpe/cal-video.c
+index 7b7436a355ee3..b9405f70af9f5 100644
+--- a/drivers/media/platform/ti-vpe/cal-video.c
++++ b/drivers/media/platform/ti-vpe/cal-video.c
+@@ -694,7 +694,7 @@ static int cal_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 
+ 	spin_lock_irq(&ctx->dma.lock);
+ 	buf = list_first_entry(&ctx->dma.queue, struct cal_buffer, list);
+-	ctx->dma.pending = buf;
++	ctx->dma.active = buf;
+ 	list_del(&buf->list);
+ 	spin_unlock_irq(&ctx->dma.lock);
+ 
+diff --git a/drivers/media/rc/rc-loopback.c b/drivers/media/rc/rc-loopback.c
+index 1ba3f96ffa7dc..40ab66c850f23 100644
+--- a/drivers/media/rc/rc-loopback.c
++++ b/drivers/media/rc/rc-loopback.c
+@@ -42,7 +42,7 @@ static int loop_set_tx_mask(struct rc_dev *dev, u32 mask)
+ 
+ 	if ((mask & (RXMASK_REGULAR | RXMASK_LEARNING)) != mask) {
+ 		dprintk("invalid tx mask: %u\n", mask);
+-		return -EINVAL;
++		return 2;
+ 	}
+ 
+ 	dprintk("setting tx mask: %u\n", mask);
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index 252136cc885ce..6acb8013de08b 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -899,8 +899,8 @@ static int uvc_ioctl_g_input(struct file *file, void *fh, unsigned int *input)
+ {
+ 	struct uvc_fh *handle = fh;
+ 	struct uvc_video_chain *chain = handle->chain;
++	u8 *buf;
+ 	int ret;
+-	u8 i;
+ 
+ 	if (chain->selector == NULL ||
+ 	    (chain->dev->quirks & UVC_QUIRK_IGNORE_SELECTOR_UNIT)) {
+@@ -908,22 +908,27 @@ static int uvc_ioctl_g_input(struct file *file, void *fh, unsigned int *input)
+ 		return 0;
+ 	}
+ 
++	buf = kmalloc(1, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
+ 	ret = uvc_query_ctrl(chain->dev, UVC_GET_CUR, chain->selector->id,
+ 			     chain->dev->intfnum,  UVC_SU_INPUT_SELECT_CONTROL,
+-			     &i, 1);
+-	if (ret < 0)
+-		return ret;
++			     buf, 1);
++	if (!ret)
++		*input = *buf - 1;
+ 
+-	*input = i - 1;
+-	return 0;
++	kfree(buf);
++
++	return ret;
+ }
+ 
+ static int uvc_ioctl_s_input(struct file *file, void *fh, unsigned int input)
+ {
+ 	struct uvc_fh *handle = fh;
+ 	struct uvc_video_chain *chain = handle->chain;
++	u8 *buf;
+ 	int ret;
+-	u32 i;
+ 
+ 	ret = uvc_acquire_privileges(handle);
+ 	if (ret < 0)
+@@ -939,10 +944,17 @@ static int uvc_ioctl_s_input(struct file *file, void *fh, unsigned int input)
+ 	if (input >= chain->selector->bNrInPins)
+ 		return -EINVAL;
+ 
+-	i = input + 1;
+-	return uvc_query_ctrl(chain->dev, UVC_SET_CUR, chain->selector->id,
+-			      chain->dev->intfnum, UVC_SU_INPUT_SELECT_CONTROL,
+-			      &i, 1);
++	buf = kmalloc(1, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	*buf = input + 1;
++	ret = uvc_query_ctrl(chain->dev, UVC_SET_CUR, chain->selector->id,
++			     chain->dev->intfnum, UVC_SU_INPUT_SELECT_CONTROL,
++			     buf, 1);
++	kfree(buf);
++
++	return ret;
+ }
+ 
+ static int uvc_ioctl_queryctrl(struct file *file, void *fh,
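The two ioctl handlers above now pass uvc_query_ctrl() a kmalloc'ed one-byte buffer instead of a stack variable: the buffer ends up in a USB control transfer, and buffers handed to the USB core must be DMA-capable heap memory, which the stack is not guaranteed to be. A userspace sketch of the allocate/use/free shape (query_ctrl() is a made-up stand-in):

#include <stdio.h>
#include <stdlib.h>

/* stands in for uvc_query_ctrl(): writes one byte through buf */
static int query_ctrl(unsigned char *buf)
{
	*buf = 3;	/* device reports input 3 (1-based) */
	return 0;
}

int main(void)
{
	unsigned int input = 0;
	unsigned char *buf = malloc(1);
	int ret;

	if (!buf)
		return 1;

	ret = query_ctrl(buf);
	if (!ret)
		input = *buf - 1;	/* convert to 0-based, as the patch does */

	free(buf);
	if (ret)
		return 1;
	printf("selected input: %u\n", input);
	return 0;
}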
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index 230d65a642178..af48705c704f8 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -196,7 +196,7 @@ bool v4l2_find_dv_timings_cap(struct v4l2_dv_timings *t,
+ 	if (!v4l2_valid_dv_timings(t, cap, fnc, fnc_handle))
+ 		return false;
+ 
+-	for (i = 0; i < v4l2_dv_timings_presets[i].bt.width; i++) {
++	for (i = 0; v4l2_dv_timings_presets[i].bt.width; i++) {
+ 		if (v4l2_valid_dv_timings(v4l2_dv_timings_presets + i, cap,
+ 					  fnc, fnc_handle) &&
+ 		    v4l2_match_dv_timings(t, v4l2_dv_timings_presets + i,
+@@ -218,7 +218,7 @@ bool v4l2_find_dv_timings_cea861_vic(struct v4l2_dv_timings *t, u8 vic)
+ {
+ 	unsigned int i;
+ 
+-	for (i = 0; i < v4l2_dv_timings_presets[i].bt.width; i++) {
++	for (i = 0; v4l2_dv_timings_presets[i].bt.width; i++) {
+ 		const struct v4l2_bt_timings *bt =
+ 			&v4l2_dv_timings_presets[i].bt;
+ 
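The fixed loops above iterate until the sentinel entry whose .bt.width is zero; the old condition "i < v4l2_dv_timings_presets[i].bt.width" compared the index against the width field itself, so it could stop early or overrun. A minimal sketch of a sentinel-terminated table walk:

#include <stdio.h>

struct preset { unsigned int width, height; };

static const struct preset presets[] = {
	{ 1920, 1080 },
	{ 1280,  720 },
	{  720,  576 },
	{ 0 }			/* sentinel terminates the table */
};

int main(void)
{
	for (unsigned int i = 0; presets[i].width; i++)
		printf("%ux%u\n", presets[i].width, presets[i].height);
	return 0;
}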
+diff --git a/drivers/misc/pvpanic/pvpanic-pci.c b/drivers/misc/pvpanic/pvpanic-pci.c
+index 046ce4ecc1959..4a32505644428 100644
+--- a/drivers/misc/pvpanic/pvpanic-pci.c
++++ b/drivers/misc/pvpanic/pvpanic-pci.c
+@@ -119,4 +119,6 @@ static struct pci_driver pvpanic_pci_driver = {
+ 	},
+ };
+ 
++MODULE_DEVICE_TABLE(pci, pvpanic_pci_id_tbl);
++
+ module_pci_driver(pvpanic_pci_driver);
+diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+index 880c33ab9f47b..94ebf7f3fd58a 100644
+--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
++++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+@@ -2243,7 +2243,8 @@ int vmci_qp_broker_map(struct vmci_handle handle,
+ 
+ 	result = VMCI_SUCCESS;
+ 
+-	if (context_id != VMCI_HOST_CONTEXT_ID) {
++	if (context_id != VMCI_HOST_CONTEXT_ID &&
++	    !QPBROKERSTATE_HAS_MEM(entry)) {
+ 		struct vmci_qp_page_store page_store;
+ 
+ 		page_store.pages = guest_mem;
+@@ -2350,7 +2351,8 @@ int vmci_qp_broker_unmap(struct vmci_handle handle,
+ 		goto out;
+ 	}
+ 
+-	if (context_id != VMCI_HOST_CONTEXT_ID) {
++	if (context_id != VMCI_HOST_CONTEXT_ID &&
++	    QPBROKERSTATE_HAS_MEM(entry)) {
+ 		qp_acquire_queue_mutex(entry->produce_q);
+ 		result = qp_save_headers(entry);
+ 		if (result < VMCI_SUCCESS)
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 2518bc0856596..d47829b9fc0ff 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -542,6 +542,7 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 		return mmc_sanitize(card, idata->ic.cmd_timeout_ms);
+ 
+ 	mmc_wait_for_req(card->host, &mrq);
++	memcpy(&idata->ic.response, cmd.resp, sizeof(cmd.resp));
+ 
+ 	if (cmd.error) {
+ 		dev_err(mmc_dev(card->host), "%s: cmd error %d\n",
+@@ -591,8 +592,6 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 	if (idata->ic.postsleep_min_us)
+ 		usleep_range(idata->ic.postsleep_min_us, idata->ic.postsleep_max_us);
+ 
+-	memcpy(&(idata->ic.response), cmd.resp, sizeof(cmd.resp));
+-
+ 	if (idata->rpmb || (cmd.flags & MMC_RSP_R1B) == MMC_RSP_R1B) {
+ 		/*
+ 		 * Ensure RPMB/R1B command has completed by polling CMD13
+diff --git a/drivers/mmc/host/rtsx_pci_sdmmc.c b/drivers/mmc/host/rtsx_pci_sdmmc.c
+index 4ca9374157348..58cfaffa3c2d8 100644
+--- a/drivers/mmc/host/rtsx_pci_sdmmc.c
++++ b/drivers/mmc/host/rtsx_pci_sdmmc.c
+@@ -542,9 +542,22 @@ static int sd_write_long_data(struct realtek_pci_sdmmc *host,
+ 	return 0;
+ }
+ 
++static inline void sd_enable_initial_mode(struct realtek_pci_sdmmc *host)
++{
++	rtsx_pci_write_register(host->pcr, SD_CFG1,
++			SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_128);
++}
++
++static inline void sd_disable_initial_mode(struct realtek_pci_sdmmc *host)
++{
++	rtsx_pci_write_register(host->pcr, SD_CFG1,
++			SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_0);
++}
++
+ static int sd_rw_multi(struct realtek_pci_sdmmc *host, struct mmc_request *mrq)
+ {
+ 	struct mmc_data *data = mrq->data;
++	int err;
+ 
+ 	if (host->sg_count < 0) {
+ 		data->error = host->sg_count;
+@@ -553,22 +566,19 @@ static int sd_rw_multi(struct realtek_pci_sdmmc *host, struct mmc_request *mrq)
+ 		return data->error;
+ 	}
+ 
+-	if (data->flags & MMC_DATA_READ)
+-		return sd_read_long_data(host, mrq);
++	if (data->flags & MMC_DATA_READ) {
++		if (host->initial_mode)
++			sd_disable_initial_mode(host);
+ 
+-	return sd_write_long_data(host, mrq);
+-}
++		err = sd_read_long_data(host, mrq);
+ 
+-static inline void sd_enable_initial_mode(struct realtek_pci_sdmmc *host)
+-{
+-	rtsx_pci_write_register(host->pcr, SD_CFG1,
+-			SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_128);
+-}
++		if (host->initial_mode)
++			sd_enable_initial_mode(host);
+ 
+-static inline void sd_disable_initial_mode(struct realtek_pci_sdmmc *host)
+-{
+-	rtsx_pci_write_register(host->pcr, SD_CFG1,
+-			SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_0);
++		return err;
++	}
++
++	return sd_write_long_data(host, mrq);
+ }
+ 
+ static void sd_normal_rw(struct realtek_pci_sdmmc *host,
+diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
+index 839965f7c717f..9a630ba37484e 100644
+--- a/drivers/mmc/host/sdhci-of-arasan.c
++++ b/drivers/mmc/host/sdhci-of-arasan.c
+@@ -159,6 +159,12 @@ struct sdhci_arasan_data {
+ /* Controller immediately reports SDHCI_CLOCK_INT_STABLE after enabling the
+  * internal clock even when the clock isn't stable */
+ #define SDHCI_ARASAN_QUIRK_CLOCK_UNSTABLE BIT(1)
++/*
++ * Some of the Arasan variations might not have timing requirements
++ * met at 25MHz for Default Speed mode; those controllers work at
++ * 19MHz instead.
++ */
++#define SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN BIT(2)
+ };
+ 
+ struct sdhci_arasan_of_data {
+@@ -267,7 +273,12 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock)
+ 			 * through low speeds without power cycling.
+ 			 */
+ 			sdhci_set_clock(host, host->max_clk);
+-			phy_power_on(sdhci_arasan->phy);
++			if (phy_power_on(sdhci_arasan->phy)) {
++				pr_err("%s: Cannot power on phy.\n",
++				       mmc_hostname(host->mmc));
++				return;
++			}
++
+ 			sdhci_arasan->is_phy_on = true;
+ 
+ 			/*
+@@ -290,6 +301,16 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock)
+ 		sdhci_arasan->is_phy_on = false;
+ 	}
+ 
++	if (sdhci_arasan->quirks & SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN) {
++		/*
++		 * Some of the Arasan variations might not have timing
++		 * requirements met at 25MHz for Default Speed mode;
++		 * those controllers work at 19MHz instead.
++		 */
++		if (clock == DEFAULT_SPEED_MAX_DTR)
++			clock = (DEFAULT_SPEED_MAX_DTR * 19) / 25;
++	}
++
+ 	/* Set the Input and Output Clock Phase Delays */
+ 	if (clk_data->set_clk_delays)
+ 		clk_data->set_clk_delays(host);
+@@ -307,7 +328,12 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock)
+ 		msleep(20);
+ 
+ 	if (ctrl_phy) {
+-		phy_power_on(sdhci_arasan->phy);
++		if (phy_power_on(sdhci_arasan->phy)) {
++			pr_err("%s: Cannot power on phy.\n",
++			       mmc_hostname(host->mmc));
++			return;
++		}
++
+ 		sdhci_arasan->is_phy_on = true;
+ 	}
+ }
+@@ -463,7 +489,9 @@ static int sdhci_arasan_suspend(struct device *dev)
+ 		ret = phy_power_off(sdhci_arasan->phy);
+ 		if (ret) {
+ 			dev_err(dev, "Cannot power off phy.\n");
+-			sdhci_resume_host(host);
++			if (sdhci_resume_host(host))
++				dev_err(dev, "Cannot resume host.\n");
++
+ 			return ret;
+ 		}
+ 		sdhci_arasan->is_phy_on = false;
+@@ -1598,6 +1626,8 @@ static int sdhci_arasan_probe(struct platform_device *pdev)
+ 	if (of_device_is_compatible(np, "xlnx,zynqmp-8.9a")) {
+ 		host->mmc_host_ops.execute_tuning =
+ 			arasan_zynqmp_execute_tuning;
++
++		sdhci_arasan->quirks |= SDHCI_ARASAN_QUIRK_CLOCK_25_BROKEN;
+ 	}
+ 
+ 	arasan_dt_parse_clk_phases(dev, &sdhci_arasan->clk_data);
+diff --git a/drivers/mtd/nand/raw/intel-nand-controller.c b/drivers/mtd/nand/raw/intel-nand-controller.c
+index 8b49fd56cf964..29e8a546dcd60 100644
+--- a/drivers/mtd/nand/raw/intel-nand-controller.c
++++ b/drivers/mtd/nand/raw/intel-nand-controller.c
+@@ -631,19 +631,26 @@ static int ebu_nand_probe(struct platform_device *pdev)
+ 	ebu_host->clk_rate = clk_get_rate(ebu_host->clk);
+ 
+ 	ebu_host->dma_tx = dma_request_chan(dev, "tx");
+-	if (IS_ERR(ebu_host->dma_tx))
+-		return dev_err_probe(dev, PTR_ERR(ebu_host->dma_tx),
+-				     "failed to request DMA tx chan!.\n");
++	if (IS_ERR(ebu_host->dma_tx)) {
++		ret = dev_err_probe(dev, PTR_ERR(ebu_host->dma_tx),
++				    "failed to request DMA tx chan!.\n");
++		goto err_disable_unprepare_clk;
++	}
+ 
+ 	ebu_host->dma_rx = dma_request_chan(dev, "rx");
+-	if (IS_ERR(ebu_host->dma_rx))
+-		return dev_err_probe(dev, PTR_ERR(ebu_host->dma_rx),
+-				     "failed to request DMA rx chan!.\n");
++	if (IS_ERR(ebu_host->dma_rx)) {
++		ret = dev_err_probe(dev, PTR_ERR(ebu_host->dma_rx),
++				    "failed to request DMA rx chan!.\n");
++		ebu_host->dma_rx = NULL;
++		goto err_cleanup_dma;
++	}
+ 
+ 	resname = devm_kasprintf(dev, GFP_KERNEL, "addr_sel%d", cs);
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, resname);
+-	if (!res)
+-		return -EINVAL;
++	if (!res) {
++		ret = -EINVAL;
++		goto err_cleanup_dma;
++	}
+ 	ebu_host->cs[cs].addr_sel = res->start;
+ 	writel(ebu_host->cs[cs].addr_sel | EBU_ADDR_MASK(5) | EBU_ADDR_SEL_REGEN,
+ 	       ebu_host->ebu + EBU_ADDR_SEL(cs));
+@@ -653,7 +660,8 @@ static int ebu_nand_probe(struct platform_device *pdev)
+ 	mtd = nand_to_mtd(&ebu_host->chip);
+ 	if (!mtd->name) {
+ 		dev_err(ebu_host->dev, "NAND label property is mandatory\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_cleanup_dma;
+ 	}
+ 
+ 	mtd->dev.parent = dev;
+@@ -681,6 +689,7 @@ err_clean_nand:
+ 	nand_cleanup(&ebu_host->chip);
+ err_cleanup_dma:
+ 	ebu_dma_cleanup(ebu_host);
++err_disable_unprepare_clk:
+ 	clk_disable_unprepare(ebu_host->clk);
+ 
+ 	return ret;
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 9a184c99fbe44..3476ef2237360 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2245,7 +2245,6 @@ static int __bond_release_one(struct net_device *bond_dev,
+ 	/* recompute stats just before removing the slave */
+ 	bond_get_stats(bond->dev, &bond->bond_stats);
+ 
+-	bond_upper_dev_unlink(bond, slave);
+ 	/* unregister rx_handler early so bond_handle_frame wouldn't be called
+ 	 * for this slave anymore.
+ 	 */
+@@ -2254,6 +2253,8 @@ static int __bond_release_one(struct net_device *bond_dev,
+ 	if (BOND_MODE(bond) == BOND_MODE_8023AD)
+ 		bond_3ad_unbind_slave(slave);
+ 
++	bond_upper_dev_unlink(bond, slave);
++
+ 	if (bond_mode_can_use_xmit_hash(bond))
+ 		bond_update_slave_arr(bond, slave);
+ 
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index e78026ef6d8cc..64d6dfa831220 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -843,7 +843,8 @@ static int gswip_setup(struct dsa_switch *ds)
+ 
+ 	gswip_switch_mask(priv, 0, GSWIP_MAC_CTRL_2_MLEN,
+ 			  GSWIP_MAC_CTRL_2p(cpu_port));
+-	gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8, GSWIP_MAC_FLEN);
++	gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8 + ETH_FCS_LEN,
++		       GSWIP_MAC_FLEN);
+ 	gswip_switch_mask(priv, 0, GSWIP_BM_QUEUE_GCTRL_GL_MOD,
+ 			  GSWIP_BM_QUEUE_GCTRL);
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+index 58964d22cb17d..e3a3499ba7a23 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+@@ -3231,12 +3231,6 @@ static int dpaa2_switch_probe(struct fsl_mc_device *sw_dev)
+ 			       &ethsw->fq[i].napi, dpaa2_switch_poll,
+ 			       NAPI_POLL_WEIGHT);
+ 
+-	err = dpsw_enable(ethsw->mc_io, 0, ethsw->dpsw_handle);
+-	if (err) {
+-		dev_err(ethsw->dev, "dpsw_enable err %d\n", err);
+-		goto err_free_netdev;
+-	}
+-
+ 	/* Setup IRQs */
+ 	err = dpaa2_switch_setup_irqs(sw_dev);
+ 	if (err)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index 38b601031db46..95343f6d15e12 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -10,7 +10,14 @@
+ 
+ static u16 hclge_errno_to_resp(int errno)
+ {
+-	return abs(errno);
++	int resp = abs(errno);
++
++	/* The status for the PF-to-VF msg cmd is u16, constrained by HW.
++	 * We need to keep the same type as it.
++	 * The input errno is a standard error code, so it is safe to
++	 * use a u16 to store abs(errno).
++	 */
++	return (u16)resp;
+ }
+ 
+ /* hclge_gen_resp_to_vf: used to generate a synchronous response to VF when PF
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index 90793b36126e6..68c80f04113c8 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -186,12 +186,6 @@ enum iavf_state_t {
+ 	__IAVF_RUNNING,		/* opened, working */
+ };
+ 
+-enum iavf_critical_section_t {
+-	__IAVF_IN_CRITICAL_TASK,	/* cannot be interrupted */
+-	__IAVF_IN_CLIENT_TASK,
+-	__IAVF_IN_REMOVE_TASK,	/* device being removed */
+-};
+-
+ #define IAVF_CLOUD_FIELD_OMAC		0x01
+ #define IAVF_CLOUD_FIELD_IMAC		0x02
+ #define IAVF_CLOUD_FIELD_IVLAN	0x04
+@@ -236,6 +230,9 @@ struct iavf_adapter {
+ 	struct iavf_q_vector *q_vectors;
+ 	struct list_head vlan_filter_list;
+ 	struct list_head mac_filter_list;
++	struct mutex crit_lock;
++	struct mutex client_lock;
++	struct mutex remove_lock;
+ 	/* Lock to protect accesses to MAC and VLAN lists */
+ 	spinlock_t mac_vlan_list_lock;
+ 	char misc_vector_name[IFNAMSIZ + 9];
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index af43fbd8cb75e..edbeb27213f83 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -1352,8 +1352,7 @@ static int iavf_add_fdir_ethtool(struct iavf_adapter *adapter, struct ethtool_rx
+ 	if (!fltr)
+ 		return -ENOMEM;
+ 
+-	while (test_and_set_bit(__IAVF_IN_CRITICAL_TASK,
+-				&adapter->crit_section)) {
++	while (!mutex_trylock(&adapter->crit_lock)) {
+ 		if (--count == 0) {
+ 			kfree(fltr);
+ 			return -EINVAL;
+@@ -1378,7 +1377,7 @@ ret:
+ 	if (err && fltr)
+ 		kfree(fltr);
+ 
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 	return err;
+ }
+ 
+@@ -1563,8 +1562,7 @@ iavf_set_adv_rss_hash_opt(struct iavf_adapter *adapter,
+ 		return -EINVAL;
+ 	}
+ 
+-	while (test_and_set_bit(__IAVF_IN_CRITICAL_TASK,
+-				&adapter->crit_section)) {
++	while (!mutex_trylock(&adapter->crit_lock)) {
+ 		if (--count == 0) {
+ 			kfree(rss_new);
+ 			return -EINVAL;
+@@ -1600,7 +1598,7 @@ iavf_set_adv_rss_hash_opt(struct iavf_adapter *adapter,
+ 	if (!err)
+ 		mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0);
+ 
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	if (!rss_new_add)
+ 		kfree(rss_new);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 606a01ce40739..23762a7ef740b 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -131,6 +131,27 @@ enum iavf_status iavf_free_virt_mem_d(struct iavf_hw *hw,
+ 	return 0;
+ }
+ 
++/**
++ * iavf_lock_timeout - try to lock mutex but give up after timeout
++ * @lock: mutex that should be locked
++ * @msecs: timeout in msecs
++ *
++ * Returns 0 on success, negative on failure
++ **/
++static int iavf_lock_timeout(struct mutex *lock, unsigned int msecs)
++{
++	unsigned int wait, delay = 10;
++
++	for (wait = 0; wait < msecs; wait += delay) {
++		if (mutex_trylock(lock))
++			return 0;
++
++		msleep(delay);
++	}
++
++	return -1;
++}
++
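iavf_lock_timeout() above simply retries mutex_trylock() every 10 ms until the budget runs out, since the kernel has no mutex_timedlock(). A userspace analogue with pthreads (pthread_mutex_timedlock() exists there, but the retry loop mirrors the kernel code):

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

static int lock_timeout(pthread_mutex_t *lock, unsigned int msecs)
{
	const unsigned int delay = 10;	/* ms between attempts */

	for (unsigned int wait = 0; wait < msecs; wait += delay) {
		if (pthread_mutex_trylock(lock) == 0)
			return 0;
		usleep(delay * 1000);
	}
	return -1;
}

int main(void)
{
	pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

	if (lock_timeout(&m, 200) == 0) {
		printf("lock acquired\n");
		pthread_mutex_unlock(&m);
	}
	return 0;
}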
+ /**
+  * iavf_schedule_reset - Set the flags and schedule a reset event
+  * @adapter: board private structure
+@@ -1916,7 +1937,7 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	u32 reg_val;
+ 
+-	if (test_and_set_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section))
++	if (!mutex_trylock(&adapter->crit_lock))
+ 		goto restart_watchdog;
+ 
+ 	if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+@@ -1934,8 +1955,7 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 			adapter->state = __IAVF_STARTUP;
+ 			adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
+ 			queue_delayed_work(iavf_wq, &adapter->init_task, 10);
+-			clear_bit(__IAVF_IN_CRITICAL_TASK,
+-				  &adapter->crit_section);
++			mutex_unlock(&adapter->crit_lock);
+ 			/* Don't reschedule the watchdog, since we've restarted
+ 			 * the init task. When init_task contacts the PF and
+ 			 * gets everything set up again, it'll restart the
+@@ -1945,14 +1965,13 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		}
+ 		adapter->aq_required = 0;
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+-		clear_bit(__IAVF_IN_CRITICAL_TASK,
+-			  &adapter->crit_section);
++		mutex_unlock(&adapter->crit_lock);
+ 		queue_delayed_work(iavf_wq,
+ 				   &adapter->watchdog_task,
+ 				   msecs_to_jiffies(10));
+ 		goto watchdog_done;
+ 	case __IAVF_RESETTING:
+-		clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++		mutex_unlock(&adapter->crit_lock);
+ 		queue_delayed_work(iavf_wq, &adapter->watchdog_task, HZ * 2);
+ 		return;
+ 	case __IAVF_DOWN:
+@@ -1975,7 +1994,7 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		}
+ 		break;
+ 	case __IAVF_REMOVE:
+-		clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++		mutex_unlock(&adapter->crit_lock);
+ 		return;
+ 	default:
+ 		goto restart_watchdog;
+@@ -1984,7 +2003,6 @@ static void iavf_watchdog_task(struct work_struct *work)
+ 		/* check for hw reset */
+ 	reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK;
+ 	if (!reg_val) {
+-		adapter->state = __IAVF_RESETTING;
+ 		adapter->flags |= IAVF_FLAG_RESET_PENDING;
+ 		adapter->aq_required = 0;
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+@@ -1998,7 +2016,7 @@ watchdog_done:
+ 	if (adapter->state == __IAVF_RUNNING ||
+ 	    adapter->state == __IAVF_COMM_FAILED)
+ 		iavf_detect_recover_hung(&adapter->vsi);
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ restart_watchdog:
+ 	if (adapter->aq_required)
+ 		queue_delayed_work(iavf_wq, &adapter->watchdog_task,
+@@ -2062,7 +2080,7 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
+ 	memset(adapter->vf_res, 0, IAVF_VIRTCHNL_VF_RESOURCE_SIZE);
+ 	iavf_shutdown_adminq(&adapter->hw);
+ 	adapter->netdev->flags &= ~IFF_UP;
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 	adapter->flags &= ~IAVF_FLAG_RESET_PENDING;
+ 	adapter->state = __IAVF_DOWN;
+ 	wake_up(&adapter->down_waitqueue);
+@@ -2095,11 +2113,14 @@ static void iavf_reset_task(struct work_struct *work)
+ 	/* When device is being removed it doesn't make sense to run the reset
+ 	 * task, just return in such a case.
+ 	 */
+-	if (test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section))
++	if (mutex_is_locked(&adapter->remove_lock))
+ 		return;
+ 
+-	while (test_and_set_bit(__IAVF_IN_CLIENT_TASK,
+-				&adapter->crit_section))
++	if (iavf_lock_timeout(&adapter->crit_lock, 200)) {
++		schedule_work(&adapter->reset_task);
++		return;
++	}
++	while (!mutex_trylock(&adapter->client_lock))
+ 		usleep_range(500, 1000);
+ 	if (CLIENT_ENABLED(adapter)) {
+ 		adapter->flags &= ~(IAVF_FLAG_CLIENT_NEEDS_OPEN |
+@@ -2151,7 +2172,7 @@ static void iavf_reset_task(struct work_struct *work)
+ 		dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",
+ 			reg_val);
+ 		iavf_disable_vf(adapter);
+-		clear_bit(__IAVF_IN_CLIENT_TASK, &adapter->crit_section);
++		mutex_unlock(&adapter->client_lock);
+ 		return; /* Do not attempt to reinit. It's dead, Jim. */
+ 	}
+ 
+@@ -2278,13 +2299,13 @@ continue_reset:
+ 		adapter->state = __IAVF_DOWN;
+ 		wake_up(&adapter->down_waitqueue);
+ 	}
+-	clear_bit(__IAVF_IN_CLIENT_TASK, &adapter->crit_section);
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->client_lock);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	return;
+ reset_err:
+-	clear_bit(__IAVF_IN_CLIENT_TASK, &adapter->crit_section);
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->client_lock);
++	mutex_unlock(&adapter->crit_lock);
+ 	dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n");
+ 	iavf_close(netdev);
+ }
+@@ -2312,6 +2333,8 @@ static void iavf_adminq_task(struct work_struct *work)
+ 	if (!event.msg_buf)
+ 		goto out;
+ 
++	if (iavf_lock_timeout(&adapter->crit_lock, 200))
++		goto freedom;
+ 	do {
+ 		ret = iavf_clean_arq_element(hw, &event, &pending);
+ 		v_op = (enum virtchnl_ops)le32_to_cpu(event.desc.cookie_high);
+@@ -2325,6 +2348,7 @@ static void iavf_adminq_task(struct work_struct *work)
+ 		if (pending != 0)
+ 			memset(event.msg_buf, 0, IAVF_MAX_AQ_BUF_SIZE);
+ 	} while (pending);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	if ((adapter->flags &
+ 	     (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED)) ||
+@@ -2391,7 +2415,7 @@ static void iavf_client_task(struct work_struct *work)
+ 	 * later.
+ 	 */
+ 
+-	if (test_and_set_bit(__IAVF_IN_CLIENT_TASK, &adapter->crit_section))
++	if (!mutex_trylock(&adapter->client_lock))
+ 		return;
+ 
+ 	if (adapter->flags & IAVF_FLAG_SERVICE_CLIENT_REQUESTED) {
+@@ -2414,7 +2438,7 @@ static void iavf_client_task(struct work_struct *work)
+ 		adapter->flags &= ~IAVF_FLAG_CLIENT_NEEDS_OPEN;
+ 	}
+ out:
+-	clear_bit(__IAVF_IN_CLIENT_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->client_lock);
+ }
+ 
+ /**
+@@ -3017,8 +3041,7 @@ static int iavf_configure_clsflower(struct iavf_adapter *adapter,
+ 	if (!filter)
+ 		return -ENOMEM;
+ 
+-	while (test_and_set_bit(__IAVF_IN_CRITICAL_TASK,
+-				&adapter->crit_section)) {
++	while (!mutex_trylock(&adapter->crit_lock)) {
+ 		if (--count == 0)
+ 			goto err;
+ 		udelay(1);
+@@ -3049,7 +3072,7 @@ err:
+ 	if (err)
+ 		kfree(filter);
+ 
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 	return err;
+ }
+ 
+@@ -3196,8 +3219,7 @@ static int iavf_open(struct net_device *netdev)
+ 		return -EIO;
+ 	}
+ 
+-	while (test_and_set_bit(__IAVF_IN_CRITICAL_TASK,
+-				&adapter->crit_section))
++	while (!mutex_trylock(&adapter->crit_lock))
+ 		usleep_range(500, 1000);
+ 
+ 	if (adapter->state != __IAVF_DOWN) {
+@@ -3232,7 +3254,7 @@ static int iavf_open(struct net_device *netdev)
+ 
+ 	iavf_irq_enable(adapter, true);
+ 
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	return 0;
+ 
+@@ -3244,7 +3266,7 @@ err_setup_rx:
+ err_setup_tx:
+ 	iavf_free_all_tx_resources(adapter);
+ err_unlock:
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	return err;
+ }
+@@ -3268,8 +3290,7 @@ static int iavf_close(struct net_device *netdev)
+ 	if (adapter->state <= __IAVF_DOWN_PENDING)
+ 		return 0;
+ 
+-	while (test_and_set_bit(__IAVF_IN_CRITICAL_TASK,
+-				&adapter->crit_section))
++	while (!mutex_trylock(&adapter->crit_lock))
+ 		usleep_range(500, 1000);
+ 
+ 	set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
+@@ -3280,7 +3301,7 @@ static int iavf_close(struct net_device *netdev)
+ 	adapter->state = __IAVF_DOWN_PENDING;
+ 	iavf_free_traffic_irqs(adapter);
+ 
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	/* We explicitly don't free resources here because the hardware is
+ 	 * still active and can DMA into memory. Resources are cleared in
+@@ -3629,6 +3650,10 @@ static void iavf_init_task(struct work_struct *work)
+ 						    init_task.work);
+ 	struct iavf_hw *hw = &adapter->hw;
+ 
++	if (iavf_lock_timeout(&adapter->crit_lock, 5000)) {
++		dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__);
++		return;
++	}
+ 	switch (adapter->state) {
+ 	case __IAVF_STARTUP:
+ 		if (iavf_startup(adapter) < 0)
+@@ -3641,14 +3666,14 @@ static void iavf_init_task(struct work_struct *work)
+ 	case __IAVF_INIT_GET_RESOURCES:
+ 		if (iavf_init_get_resources(adapter) < 0)
+ 			goto init_failed;
+-		return;
++		goto out;
+ 	default:
+ 		goto init_failed;
+ 	}
+ 
+ 	queue_delayed_work(iavf_wq, &adapter->init_task,
+ 			   msecs_to_jiffies(30));
+-	return;
++	goto out;
+ init_failed:
+ 	if (++adapter->aq_wait_count > IAVF_AQ_MAX_ERR) {
+ 		dev_err(&adapter->pdev->dev,
+@@ -3657,9 +3682,11 @@ init_failed:
+ 		iavf_shutdown_adminq(hw);
+ 		adapter->state = __IAVF_STARTUP;
+ 		queue_delayed_work(iavf_wq, &adapter->init_task, HZ * 5);
+-		return;
++		goto out;
+ 	}
+ 	queue_delayed_work(iavf_wq, &adapter->init_task, HZ);
++out:
++	mutex_unlock(&adapter->crit_lock);
+ }
+ 
+ /**
+@@ -3676,9 +3703,12 @@ static void iavf_shutdown(struct pci_dev *pdev)
+ 	if (netif_running(netdev))
+ 		iavf_close(netdev);
+ 
++	if (iavf_lock_timeout(&adapter->crit_lock, 5000))
++		dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__);
+ 	/* Prevent the watchdog from running. */
+ 	adapter->state = __IAVF_REMOVE;
+ 	adapter->aq_required = 0;
++	mutex_unlock(&adapter->crit_lock);
+ 
+ #ifdef CONFIG_PM
+ 	pci_save_state(pdev);
+@@ -3772,6 +3802,9 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* set up the locks for the AQ, do this only once in probe
+ 	 * and destroy them only once in remove
+ 	 */
++	mutex_init(&adapter->crit_lock);
++	mutex_init(&adapter->client_lock);
++	mutex_init(&adapter->remove_lock);
+ 	mutex_init(&hw->aq.asq_mutex);
+ 	mutex_init(&hw->aq.arq_mutex);
+ 
+@@ -3823,8 +3856,7 @@ static int __maybe_unused iavf_suspend(struct device *dev_d)
+ 
+ 	netif_device_detach(netdev);
+ 
+-	while (test_and_set_bit(__IAVF_IN_CRITICAL_TASK,
+-				&adapter->crit_section))
++	while (!mutex_trylock(&adapter->crit_lock))
+ 		usleep_range(500, 1000);
+ 
+ 	if (netif_running(netdev)) {
+@@ -3835,7 +3867,7 @@ static int __maybe_unused iavf_suspend(struct device *dev_d)
+ 	iavf_free_misc_irq(adapter);
+ 	iavf_reset_interrupt_capability(adapter);
+ 
+-	clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section);
++	mutex_unlock(&adapter->crit_lock);
+ 
+ 	return 0;
+ }
+@@ -3897,7 +3929,7 @@ static void iavf_remove(struct pci_dev *pdev)
+ 	struct iavf_hw *hw = &adapter->hw;
+ 	int err;
+ 	/* Indicate we are in remove and not to run reset_task */
+-	set_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section);
++	mutex_lock(&adapter->remove_lock);
+ 	cancel_delayed_work_sync(&adapter->init_task);
+ 	cancel_work_sync(&adapter->reset_task);
+ 	cancel_delayed_work_sync(&adapter->client_task);
+@@ -3912,10 +3944,6 @@ static void iavf_remove(struct pci_dev *pdev)
+ 				 err);
+ 	}
+ 
+-	/* Shut down all the garbage mashers on the detention level */
+-	adapter->state = __IAVF_REMOVE;
+-	adapter->aq_required = 0;
+-	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+ 	iavf_request_reset(adapter);
+ 	msleep(50);
+ 	/* If the FW isn't responding, kick it once, but only once. */
+@@ -3923,6 +3951,13 @@ static void iavf_remove(struct pci_dev *pdev)
+ 		iavf_request_reset(adapter);
+ 		msleep(50);
+ 	}
++	if (iavf_lock_timeout(&adapter->crit_lock, 5000))
++		dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__);
++
++	/* Shut down all the garbage mashers on the detention level */
++	adapter->state = __IAVF_REMOVE;
++	adapter->aq_required = 0;
++	adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
+ 	iavf_free_all_tx_resources(adapter);
+ 	iavf_free_all_rx_resources(adapter);
+ 	iavf_misc_irq_disable(adapter);
+@@ -3942,6 +3977,11 @@ static void iavf_remove(struct pci_dev *pdev)
+ 	/* destroy the locks only once, here */
+ 	mutex_destroy(&hw->aq.arq_mutex);
+ 	mutex_destroy(&hw->aq.asq_mutex);
++	mutex_destroy(&adapter->client_lock);
++	mutex_unlock(&adapter->crit_lock);
++	mutex_destroy(&adapter->crit_lock);
++	mutex_unlock(&adapter->remove_lock);
++	mutex_destroy(&adapter->remove_lock);
+ 
+ 	iounmap(hw->hw_addr);
+ 	pci_release_regions(pdev);
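
The iavf hunks above replace the driver's open-coded test_and_set_bit() critical
sections with real mutexes (crit_lock, client_lock, remove_lock) plus a
bounded-acquire helper. The definition of iavf_lock_timeout() lives in an
earlier part of the patch not visible in this excerpt; a minimal sketch
consistent with how it is called here (a mutex plus a millisecond budget,
nonzero on failure) would be:

#include <linux/mutex.h>
#include <linux/delay.h>

static int iavf_lock_timeout(struct mutex *lock, unsigned int msecs)
{
	unsigned int wait, delay = 10;

	/* Poll with mutex_trylock() instead of blocking, so the
	 * watchdog/reset paths can give up and reschedule rather
	 * than deadlock against a task stuck holding the lock.
	 */
	for (wait = 0; wait < msecs; wait += delay) {
		if (mutex_trylock(lock))
			return 0;
		msleep(delay);
	}
	return -1;
}
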
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 9b85fdf012977..3e301c5c5270a 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -4402,6 +4402,7 @@ static irqreturn_t igc_msix_ring(int irq, void *data)
+  */
+ static int igc_request_msix(struct igc_adapter *adapter)
+ {
++	unsigned int num_q_vectors = adapter->num_q_vectors;
+ 	int i = 0, err = 0, vector = 0, free_vector = 0;
+ 	struct net_device *netdev = adapter->netdev;
+ 
+@@ -4410,7 +4411,13 @@ static int igc_request_msix(struct igc_adapter *adapter)
+ 	if (err)
+ 		goto err_out;
+ 
+-	for (i = 0; i < adapter->num_q_vectors; i++) {
++	if (num_q_vectors > MAX_Q_VECTORS) {
++		num_q_vectors = MAX_Q_VECTORS;
++		dev_warn(&adapter->pdev->dev,
++			 "The number of queue vectors (%d) is higher than max allowed (%d)\n",
++			 adapter->num_q_vectors, MAX_Q_VECTORS);
++	}
++	for (i = 0; i < num_q_vectors; i++) {
+ 		struct igc_q_vector *q_vector = adapter->q_vector[i];
+ 
+ 		vector++;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index e0d1af9e7770d..6c64fdbef0df1 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -1201,7 +1201,22 @@ static int otx2_aura_init(struct otx2_nic *pfvf, int aura_id,
+ 	/* Enable backpressure for RQ aura */
+ 	if (aura_id < pfvf->hw.rqpool_cnt && !is_otx2_lbkvf(pfvf->pdev)) {
+ 		aq->aura.bp_ena = 0;
++		/* If NIX1 LF is attached then specify NIX1_RX.
++		 *
++		 * Below NPA_AURA_S[BP_ENA] is set according to the
++		 * NPA_BPINTF_E enumeration given as:
++		 * 0x0 + a*0x1 where 'a' is 0 for NIX0_RX and 1 for NIX1_RX so
++		 * NIX0_RX is 0x0 + 0*0x1 = 0
++		 * NIX1_RX is 0x0 + 1*0x1 = 1
++		 * But in HRM it is given that
++		 * "NPA_AURA_S[BP_ENA](w1[33:32]) - Enable aura backpressure to
++		 * NIX-RX based on [BP] level. One bit per NIX-RX; index
++		 * enumerated by NPA_BPINTF_E."
++		 */
++		if (pfvf->nix_blkaddr == BLKADDR_NIX1)
++			aq->aura.bp_ena = 1;
+ 		aq->aura.nix0_bpid = pfvf->bpid[0];
++
+ 		/* Set backpressure level for RQ's Aura */
+ 		aq->aura.bp = RQ_BP_LVL_AURA;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 9d79c5ec31e9f..db5dfff585c99 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -877,7 +877,7 @@ static void cb_timeout_handler(struct work_struct *work)
+ 	ent->ret = -ETIMEDOUT;
+ 	mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) Async, timeout. Will cause a leak of a command resource\n",
+ 		       ent->idx, mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
+-	mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
++	mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true);
+ 
+ out:
+ 	cmd_ent_put(ent); /* for the cmd_ent_get() took on schedule delayed work */
+@@ -994,7 +994,7 @@ static void cmd_work_handler(struct work_struct *work)
+ 		MLX5_SET(mbox_out, ent->out, status, status);
+ 		MLX5_SET(mbox_out, ent->out, syndrome, drv_synd);
+ 
+-		mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
++		mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true);
+ 		return;
+ 	}
+ 
+@@ -1008,7 +1008,7 @@ static void cmd_work_handler(struct work_struct *work)
+ 		poll_timeout(ent);
+ 		/* make sure we read the descriptor after ownership is SW */
+ 		rmb();
+-		mlx5_cmd_comp_handler(dev, 1UL << ent->idx, (ent->ret == -ETIMEDOUT));
++		mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, (ent->ret == -ETIMEDOUT));
+ 	}
+ }
+ 
+@@ -1068,7 +1068,7 @@ static void wait_func_handle_exec_timeout(struct mlx5_core_dev *dev,
+ 		       mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in));
+ 
+ 	ent->ret = -ETIMEDOUT;
+-	mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
++	mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true);
+ }
+ 
+ static int wait_func(struct mlx5_core_dev *dev, struct mlx5_cmd_work_ent *ent)
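
The 1UL -> 1ULL changes in cmd.c matter because mlx5_cmd_comp_handler() takes a
64-bit completion vector while unsigned long is only 32 bits on ILP32 targets:
there, `1UL << ent->idx` truncates the vector type and is outright undefined
behaviour for any idx of 32 or more. A userspace illustration (40 is a
hypothetical slot index above bit 31):

#include <stdio.h>

int main(void)
{
	unsigned int idx = 40;	/* hypothetical slot above bit 31 */

	/* With a 32-bit unsigned long, (1UL << 40) is undefined and
	 * the completion bit is silently lost; 1ULL is guaranteed to
	 * be at least 64 bits wide on every target.
	 */
	printf("1ULL << %u = %#llx\n", idx, 1ULL << idx);
	return 0;
}
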
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
+index 43356fad53deb..ffdfb5a94b14b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
+@@ -846,9 +846,9 @@ again:
+ 			new_htbl = dr_rule_rehash(rule, nic_rule, cur_htbl,
+ 						  ste_location, send_ste_list);
+ 			if (!new_htbl) {
+-				mlx5dr_htbl_put(cur_htbl);
+ 				mlx5dr_err(dmn, "Failed creating rehash table, htbl-log_size: %d\n",
+ 					   cur_htbl->chunk_size);
++				mlx5dr_htbl_put(cur_htbl);
+ 			} else {
+ 				cur_htbl = new_htbl;
+ 			}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+index 9df0e73d1c358..69b49deb66b22 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+@@ -620,6 +620,7 @@ static int dr_cmd_modify_qp_rtr2rts(struct mlx5_core_dev *mdev,
+ 
+ 	MLX5_SET(qpc, qpc, retry_count, attr->retry_cnt);
+ 	MLX5_SET(qpc, qpc, rnr_retry, attr->rnr_retry);
++	MLX5_SET(qpc, qpc, primary_address_path.ack_timeout, 0x8); /* ~1ms */
+ 
+ 	MLX5_SET(rtr2rts_qp_in, in, opcode, MLX5_CMD_OP_RTR2RTS_QP);
+ 	MLX5_SET(rtr2rts_qp_in, in, qpn, dr_qp->qpn);
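
For the dr_send.c change: assuming the conventional QP local-ACK-timeout
encoding of 4.096 us * 2^value, the new 0x8 yields 4.096 us * 256, roughly
1.05 ms, which matches the /* ~1ms */ note; the previous implicit value of 0
conventionally means an unbounded wait on a lost ACK.
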
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index eeb30680b4dcf..0a0a26376bea0 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -1697,7 +1697,7 @@ nfp_net_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta,
+ 		case NFP_NET_META_RESYNC_INFO:
+ 			if (nfp_net_tls_rx_resync_req(netdev, data, pkt,
+ 						      pkt_len))
+-				return NULL;
++				return false;
+ 			data += sizeof(struct nfp_net_tls_resync_req);
+ 			break;
+ 		default:
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+index 28dd0ed85a824..f7dc8458cde86 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+@@ -289,10 +289,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 		val &= ~NSS_COMMON_GMAC_CTL_PHY_IFACE_SEL;
+ 		break;
+ 	default:
+-		dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n",
+-			phy_modes(gmac->phy_mode));
+-		err = -EINVAL;
+-		goto err_remove_config_dt;
++		goto err_unsupported_phy;
+ 	}
+ 	regmap_write(gmac->nss_common, NSS_COMMON_GMAC_CTL(gmac->id), val);
+ 
+@@ -309,10 +306,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 			NSS_COMMON_CLK_SRC_CTRL_OFFSET(gmac->id);
+ 		break;
+ 	default:
+-		dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n",
+-			phy_modes(gmac->phy_mode));
+-		err = -EINVAL;
+-		goto err_remove_config_dt;
++		goto err_unsupported_phy;
+ 	}
+ 	regmap_write(gmac->nss_common, NSS_COMMON_CLK_SRC_CTRL, val);
+ 
+@@ -329,8 +323,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 				NSS_COMMON_CLK_GATE_GMII_TX_EN(gmac->id);
+ 		break;
+ 	default:
+-		/* We don't get here; the switch above will have errored out */
+-		unreachable();
++		goto err_unsupported_phy;
+ 	}
+ 	regmap_write(gmac->nss_common, NSS_COMMON_CLK_GATE, val);
+ 
+@@ -361,6 +354,11 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++err_unsupported_phy:
++	dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n",
++		phy_modes(gmac->phy_mode));
++	err = -EINVAL;
++
+ err_remove_config_dt:
+ 	stmmac_remove_config_dt(pdev, plat_dat);
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index a5285a8a9eaeb..4d92fcfe703c9 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -5358,7 +5358,7 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
+ 	struct stmmac_channel *ch =
+ 		container_of(napi, struct stmmac_channel, rxtx_napi);
+ 	struct stmmac_priv *priv = ch->priv_data;
+-	int rx_done, tx_done;
++	int rx_done, tx_done, rxtx_done;
+ 	u32 chan = ch->index;
+ 
+ 	priv->xstats.napi_poll++;
+@@ -5368,14 +5368,16 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
+ 
+ 	rx_done = stmmac_rx_zc(priv, budget, chan);
+ 
++	rxtx_done = max(tx_done, rx_done);
++
+ 	/* If either TX or RX work is not complete, return budget
+ 	 * and keep polling
+ 	 */
+-	if (tx_done >= budget || rx_done >= budget)
++	if (rxtx_done >= budget)
+ 		return budget;
+ 
+ 	/* all work done, exit the polling mode */
+-	if (napi_complete_done(napi, rx_done)) {
++	if (napi_complete_done(napi, rxtx_done)) {
+ 		unsigned long flags;
+ 
+ 		spin_lock_irqsave(&ch->lock, flags);
+@@ -5386,7 +5388,7 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
+ 		spin_unlock_irqrestore(&ch->lock, flags);
+ 	}
+ 
+-	return min(rx_done, budget - 1);
++	return min(rxtx_done, budget - 1);
+ }
+ 
+ /**
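
The stmmac hunk brings the combined RX/TX poll in line with the NAPI contract
using one completion count: returning the full budget keeps the poll
scheduled, and napi_complete_done() must be told strictly less work than
budget, hence max(tx_done, rx_done) for the comparison and
min(rxtx_done, budget - 1) for the final return. Previously
napi_complete_done() could be handed rx_done even when tx_done was the larger
figure.
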
+diff --git a/drivers/net/ethernet/wiznet/w5100.c b/drivers/net/ethernet/wiznet/w5100.c
+index ec5db481c9cd0..15e13d6dc5db5 100644
+--- a/drivers/net/ethernet/wiznet/w5100.c
++++ b/drivers/net/ethernet/wiznet/w5100.c
+@@ -1052,6 +1052,8 @@ static int w5100_mmio_probe(struct platform_device *pdev)
+ 		mac_addr = data->mac_addr;
+ 
+ 	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!mem)
++		return -EINVAL;
+ 	if (resource_size(mem) < W5100_BUS_DIRECT_SIZE)
+ 		ops = &w5100_mmio_indirect_ops;
+ 	else
+diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
+index 525cdf28d9ea7..1e43748dcb401 100644
+--- a/drivers/net/ipa/ipa_cmd.c
++++ b/drivers/net/ipa/ipa_cmd.c
+@@ -159,35 +159,45 @@ static void ipa_cmd_validate_build(void)
+ 	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
+ #undef TABLE_COUNT_MAX
+ #undef TABLE_SIZE
+-}
+ 
+-#ifdef IPA_VALIDATE
++	/* Hashed and non-hashed fields are assumed to be the same size */
++	BUILD_BUG_ON(field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK) !=
++		     field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
++	BUILD_BUG_ON(field_max(IP_FLTRT_FLAGS_HASH_ADDR_FMASK) !=
++		     field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK));
++}
+ 
+ /* Validate a memory region holding a table */
+-bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+-			 bool route, bool ipv6, bool hashed)
++bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem, bool route)
+ {
++	u32 offset_max = field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
++	u32 size_max = field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
++	const char *table = route ? "route" : "filter";
+ 	struct device *dev = &ipa->pdev->dev;
+-	u32 offset_max;
+ 
+-	offset_max = hashed ? field_max(IP_FLTRT_FLAGS_HASH_ADDR_FMASK)
+-			    : field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
++	/* Size must fit in the immediate command field that holds it */
++	if (mem->size > size_max) {
++		dev_err(dev, "%s table region size too large\n", table);
++		dev_err(dev, "    (0x%04x > 0x%04x)\n",
++			mem->size, size_max);
++
++		return false;
++	}
++
++	/* Offset must fit in the immediate command field that holds it */
+ 	if (mem->offset > offset_max ||
+ 	    ipa->mem_offset > offset_max - mem->offset) {
+-		dev_err(dev, "IPv%c %s%s table region offset too large\n",
+-			ipv6 ? '6' : '4', hashed ? "hashed " : "",
+-			route ? "route" : "filter");
++		dev_err(dev, "%s table region offset too large\n", table);
+ 		dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
+ 			ipa->mem_offset, mem->offset, offset_max);
+ 
+ 		return false;
+ 	}
+ 
++	/* Entire memory range must fit within IPA-local memory */
+ 	if (mem->offset > ipa->mem_size ||
+ 	    mem->size > ipa->mem_size - mem->offset) {
+-		dev_err(dev, "IPv%c %s%s table region out of range\n",
+-			ipv6 ? '6' : '4', hashed ? "hashed " : "",
+-			route ? "route" : "filter");
++		dev_err(dev, "%s table region out of range\n", table);
+ 		dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
+ 			mem->offset, mem->size, ipa->mem_size);
+ 
+@@ -197,6 +207,8 @@ bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+ 	return true;
+ }
+ 
++#ifdef IPA_VALIDATE
++
+ /* Validate the memory region that holds headers */
+ static bool ipa_cmd_header_valid(struct ipa *ipa)
+ {
+diff --git a/drivers/net/ipa/ipa_cmd.h b/drivers/net/ipa/ipa_cmd.h
+index b99262281f41c..ea723419c826b 100644
+--- a/drivers/net/ipa/ipa_cmd.h
++++ b/drivers/net/ipa/ipa_cmd.h
+@@ -57,20 +57,18 @@ struct ipa_cmd_info {
+ 	enum dma_data_direction direction;
+ };
+ 
+-#ifdef IPA_VALIDATE
+-
+ /**
+  * ipa_cmd_table_valid() - Validate a memory region holding a table
+  * @ipa:	- IPA pointer
+  * @mem:	- IPA memory region descriptor
+  * @route:	- Whether the region holds a route or filter table
+- * @ipv6:	- Whether the table is for IPv6 or IPv4
+- * @hashed:	- Whether the table is hashed or non-hashed
+  *
+  * Return:	true if region is valid, false otherwise
+  */
+ bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+-			    bool route, bool ipv6, bool hashed);
++			    bool route);
++
++#ifdef IPA_VALIDATE
+ 
+ /**
+  * ipa_cmd_data_valid() - Validate command-related configuration is valid
+@@ -82,13 +80,6 @@ bool ipa_cmd_data_valid(struct ipa *ipa);
+ 
+ #else /* !IPA_VALIDATE */
+ 
+-static inline bool ipa_cmd_table_valid(struct ipa *ipa,
+-				       const struct ipa_mem *mem, bool route,
+-				       bool ipv6, bool hashed)
+-{
+-	return true;
+-}
+-
+ static inline bool ipa_cmd_data_valid(struct ipa *ipa)
+ {
+ 	return true;
+diff --git a/drivers/net/ipa/ipa_data-v4.11.c b/drivers/net/ipa/ipa_data-v4.11.c
+index 05806ceae8b54..157f8d47058b5 100644
+--- a/drivers/net/ipa/ipa_data-v4.11.c
++++ b/drivers/net/ipa/ipa_data-v4.11.c
+@@ -346,18 +346,13 @@ static const struct ipa_mem_data ipa_mem_data = {
+ static const struct ipa_interconnect_data ipa_interconnect_data[] = {
+ 	{
+ 		.name			= "memory",
+-		.peak_bandwidth		= 465000,	/* 465 MBps */
+-		.average_bandwidth	= 80000,	/* 80 MBps */
+-	},
+-	/* Average rate is unused for the next two interconnects */
+-	{
+-		.name			= "imem",
+-		.peak_bandwidth		= 68570,	/* 68.57 MBps */
+-		.average_bandwidth	= 80000,	/* 80 MBps (unused?) */
++		.peak_bandwidth		= 600000,	/* 600 MBps */
++		.average_bandwidth	= 150000,	/* 150 MBps */
+ 	},
++	/* Average rate is unused for the next interconnect */
+ 	{
+ 		.name			= "config",
+-		.peak_bandwidth		= 30000,	/* 30 MBps */
++		.peak_bandwidth		= 74000,	/* 74 MBps */
+ 		.average_bandwidth	= 0,		/* unused */
+ 	},
+ };
+diff --git a/drivers/net/ipa/ipa_data-v4.9.c b/drivers/net/ipa/ipa_data-v4.9.c
+index e41be790f45e5..75b50a50e3487 100644
+--- a/drivers/net/ipa/ipa_data-v4.9.c
++++ b/drivers/net/ipa/ipa_data-v4.9.c
+@@ -392,18 +392,13 @@ static const struct ipa_mem_data ipa_mem_data = {
+ /* Interconnect rates are in 1000 byte/second units */
+ static const struct ipa_interconnect_data ipa_interconnect_data[] = {
+ 	{
+-		.name			= "ipa_to_llcc",
++		.name			= "memory",
+ 		.peak_bandwidth		= 600000,	/* 600 MBps */
+ 		.average_bandwidth	= 150000,	/* 150 MBps */
+ 	},
+-	{
+-		.name			= "llcc_to_ebi1",
+-		.peak_bandwidth		= 1804000,	/* 1.804 GBps */
+-		.average_bandwidth	= 150000,	/* 150 MBps */
+-	},
+ 	/* Average rate is unused for the next interconnect */
+ 	{
+-		.name			= "appss_to_ipa",
++		.name			= "config",
+ 		.peak_bandwidth		= 74000,	/* 74 MBps */
+ 		.average_bandwidth	= 0,		/* unused */
+ 	},
+diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
+index 3168d72f42450..618a84cf669ac 100644
+--- a/drivers/net/ipa/ipa_table.c
++++ b/drivers/net/ipa/ipa_table.c
+@@ -174,7 +174,7 @@ ipa_table_valid_one(struct ipa *ipa, bool route, bool ipv6, bool hashed)
+ 		size = (1 + IPA_FILTER_COUNT_MAX) * sizeof(__le64);
+ 	}
+ 
+-	if (!ipa_cmd_table_valid(ipa, mem, route, ipv6, hashed))
++	if (!ipa_cmd_table_valid(ipa, mem, route))
+ 		return false;
+ 
+ 	/* mem->size >= size is sufficient, but we'll demand more */
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index f7a2ec150e542..211b5476a6f51 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -326,11 +326,9 @@ static irqreturn_t dp83822_handle_interrupt(struct phy_device *phydev)
+ 
+ static int dp8382x_disable_wol(struct phy_device *phydev)
+ {
+-	int value = DP83822_WOL_EN | DP83822_WOL_MAGIC_EN |
+-		    DP83822_WOL_SECURE_ON;
+-
+-	return phy_clear_bits_mmd(phydev, DP83822_DEVADDR,
+-				  MII_DP83822_WOL_CFG, value);
++	return phy_clear_bits_mmd(phydev, DP83822_DEVADDR, MII_DP83822_WOL_CFG,
++				  DP83822_WOL_EN | DP83822_WOL_MAGIC_EN |
++				  DP83822_WOL_SECURE_ON);
+ }
+ 
+ static int dp83822_read_status(struct phy_device *phydev)
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+index b4885a700296e..b0a4ca3559fd8 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
++++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+@@ -3351,7 +3351,8 @@ found:
+ 			"Found block at %x: code=%d ref=%d length=%d major=%d minor=%d\n",
+ 			cptr, code, reference, length, major, minor);
+ 		if ((!AR_SREV_9485(ah) && length >= 1024) ||
+-		    (AR_SREV_9485(ah) && length > EEPROM_DATA_LEN_9485)) {
++		    (AR_SREV_9485(ah) && length > EEPROM_DATA_LEN_9485) ||
++		    (length > cptr)) {
+ 			ath_dbg(common, EEPROM, "Skipping bad header\n");
+ 			cptr -= COMP_HDR_LEN;
+ 			continue;
+diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
+index 2ca3b86714a9d..172081ffe4774 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.c
++++ b/drivers/net/wireless/ath/ath9k/hw.c
+@@ -1621,7 +1621,6 @@ static void ath9k_hw_apply_gpio_override(struct ath_hw *ah)
+ 		ath9k_hw_gpio_request_out(ah, i, NULL,
+ 					  AR_GPIO_OUTPUT_MUX_AS_OUTPUT);
+ 		ath9k_hw_set_gpio(ah, i, !!(ah->gpio_val & BIT(i)));
+-		ath9k_hw_gpio_free(ah, i);
+ 	}
+ }
+ 
+@@ -2728,14 +2727,17 @@ static void ath9k_hw_gpio_cfg_output_mux(struct ath_hw *ah, u32 gpio, u32 type)
+ static void ath9k_hw_gpio_cfg_soc(struct ath_hw *ah, u32 gpio, bool out,
+ 				  const char *label)
+ {
++	int err;
++
+ 	if (ah->caps.gpio_requested & BIT(gpio))
+ 		return;
+ 
+-	/* may be requested by BSP, free anyway */
+-	gpio_free(gpio);
+-
+-	if (gpio_request_one(gpio, out ? GPIOF_OUT_INIT_LOW : GPIOF_IN, label))
++	err = gpio_request_one(gpio, out ? GPIOF_OUT_INIT_LOW : GPIOF_IN, label);
++	if (err) {
++		ath_err(ath9k_hw_common(ah), "request GPIO%d failed:%d\n",
++			gpio, err);
+ 		return;
++	}
+ 
+ 	ah->caps.gpio_requested |= BIT(gpio);
+ }
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index dabed4e3ca457..e8c772a671764 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -405,13 +405,14 @@ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
+ 		wcn36xx_dbg(WCN36XX_DBG_MAC, "wcn36xx_config channel switch=%d\n",
+ 			    ch);
+ 
+-		if (wcn->sw_scan_opchannel == ch) {
++		if (wcn->sw_scan_opchannel == ch && wcn->sw_scan_channel) {
+ 			/* If channel is the initial operating channel, we may
+ 			 * want to receive/transmit regular data packets, then
+ 			 * simply stop the scan session and exit PS mode.
+ 			 */
+ 			wcn36xx_smd_finish_scan(wcn, HAL_SYS_MODE_SCAN,
+ 						wcn->sw_scan_vif);
++			wcn->sw_scan_channel = 0;
+ 		} else if (wcn->sw_scan) {
+ 			/* A scan is ongoing, do not change the operating
+ 			 * channel, but start a scan session on the channel.
+@@ -419,6 +420,7 @@ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed)
+ 			wcn36xx_smd_init_scan(wcn, HAL_SYS_MODE_SCAN,
+ 					      wcn->sw_scan_vif);
+ 			wcn36xx_smd_start_scan(wcn, ch);
++			wcn->sw_scan_channel = ch;
+ 		} else {
+ 			wcn36xx_change_opchannel(wcn, ch);
+ 		}
+@@ -699,6 +701,7 @@ static void wcn36xx_sw_scan_start(struct ieee80211_hw *hw,
+ 
+ 	wcn->sw_scan = true;
+ 	wcn->sw_scan_vif = vif;
++	wcn->sw_scan_channel = 0;
+ 	if (vif_priv->sta_assoc)
+ 		wcn->sw_scan_opchannel = WCN36XX_HW_CHANNEL(wcn);
+ 	else
+diff --git a/drivers/net/wireless/ath/wcn36xx/txrx.c b/drivers/net/wireless/ath/wcn36xx/txrx.c
+index 1b831157ede17..cab196bb38cd4 100644
+--- a/drivers/net/wireless/ath/wcn36xx/txrx.c
++++ b/drivers/net/wireless/ath/wcn36xx/txrx.c
+@@ -287,6 +287,10 @@ int wcn36xx_rx_skb(struct wcn36xx *wcn, struct sk_buff *skb)
+ 		status.rate_idx = 0;
+ 	}
+ 
++	if (ieee80211_is_beacon(hdr->frame_control) ||
++	    ieee80211_is_probe_resp(hdr->frame_control))
++		status.boottime_ns = ktime_get_boottime_ns();
++
+ 	memcpy(IEEE80211_SKB_RXCB(skb), &status, sizeof(status));
+ 
+ 	if (ieee80211_is_beacon(hdr->frame_control)) {
+diff --git a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+index 71fa9992b118c..d0fcce86903ae 100644
+--- a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
++++ b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+@@ -232,6 +232,7 @@ struct wcn36xx {
+ 	struct cfg80211_scan_request *scan_req;
+ 	bool			sw_scan;
+ 	u8			sw_scan_opchannel;
++	u8			sw_scan_channel;
+ 	struct ieee80211_vif	*sw_scan_vif;
+ 	struct mutex		scan_lock;
+ 	bool			scan_aborted;
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
+index b2605aefc2909..8b200379f7c20 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /*
+- * Copyright (C) 2012-2014, 2018-2020 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2021 Intel Corporation
+  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+  * Copyright (C) 2016-2017 Intel Deutschland GmbH
+  */
+@@ -874,7 +874,7 @@ struct iwl_scan_probe_params_v3 {
+ 	u8 reserved;
+ 	struct iwl_ssid_ie direct_scan[PROBE_OPTION_MAX];
+ 	__le32 short_ssid[SCAN_SHORT_SSID_MAX_SIZE];
+-	u8 bssid_array[ETH_ALEN][SCAN_BSSID_MAX_SIZE];
++	u8 bssid_array[SCAN_BSSID_MAX_SIZE][ETH_ALEN];
+ } __packed; /* SCAN_PROBE_PARAMS_API_S_VER_3 */
+ 
+ /**
+@@ -894,7 +894,7 @@ struct iwl_scan_probe_params_v4 {
+ 	__le16 reserved;
+ 	struct iwl_ssid_ie direct_scan[PROBE_OPTION_MAX];
+ 	__le32 short_ssid[SCAN_SHORT_SSID_MAX_SIZE];
+-	u8 bssid_array[ETH_ALEN][SCAN_BSSID_MAX_SIZE];
++	u8 bssid_array[SCAN_BSSID_MAX_SIZE][ETH_ALEN];
+ } __packed; /* SCAN_PROBE_PARAMS_API_S_VER_4 */
+ 
+ #define SCAN_MAX_NUM_CHANS_V3 67
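
The scan.h fix swaps the bssid_array dimensions. Both orderings occupy
SCAN_BSSID_MAX_SIZE * ETH_ALEN bytes, so the bug compiled cleanly; only the
row stride differs, and an array of BSSIDs must stride by ETH_ALEN. A
standalone demonstration (16 is a stand-in for SCAN_BSSID_MAX_SIZE):

#include <stdio.h>

#define ETH_ALEN		6
#define SCAN_BSSID_MAX_SIZE	16	/* stand-in value */

int main(void)
{
	unsigned char wrong[ETH_ALEN][SCAN_BSSID_MAX_SIZE];
	unsigned char right[SCAN_BSSID_MAX_SIZE][ETH_ALEN];

	/* Same total size, different element stride: entry i of a
	 * BSSID list must advance by ETH_ALEN (6) bytes, not 16.
	 */
	printf("total %zu vs %zu, stride %zu vs %zu\n",
	       sizeof(wrong), sizeof(right),
	       sizeof(wrong[0]), sizeof(right[0]));
	return 0;
}
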
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index cc4e18ca95662..a27849419d29e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -2314,7 +2314,7 @@ static void iwl_fw_error_dump(struct iwl_fw_runtime *fwrt,
+ 		return;
+ 
+ 	if (dump_data->monitor_only)
+-		dump_mask &= IWL_FW_ERROR_DUMP_FW_MONITOR;
++		dump_mask &= BIT(IWL_FW_ERROR_DUMP_FW_MONITOR);
+ 
+ 	fw_error_dump.trans_ptr = iwl_trans_dump_data(fwrt->trans, dump_mask);
+ 	file_len = le32_to_cpu(dump_file->file_len);
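
In the dbg.c hunk, IWL_FW_ERROR_DUMP_FW_MONITOR is an enum index into the
dump-segment types while dump_mask is a bitmask, so masking with the raw enum
value kept an unrelated set of low bits instead of the monitor bit.
Schematically (9 is a stand-in for the real enum value):

#include <stdio.h>

#define BIT(n)	(1U << (n))

int main(void)
{
	unsigned int dump_mask = ~0U;
	unsigned int type = 9;	/* stand-in enum index */

	printf("mask & type      = %#x\n", dump_mask & type);      /* bits 0 and 3 */
	printf("mask & BIT(type) = %#x\n", dump_mask & BIT(type)); /* bit 9 only   */
	return 0;
}
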
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+index fd5e089616515..7f0c821898082 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+@@ -1005,8 +1005,10 @@ int iwl_mvm_mac_ctxt_beacon_changed(struct iwl_mvm *mvm,
+ 		return -ENOMEM;
+ 
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+-	if (mvm->beacon_inject_active)
++	if (mvm->beacon_inject_active) {
++		dev_kfree_skb(beacon);
+ 		return -EBUSY;
++	}
+ #endif
+ 
+ 	ret = iwl_mvm_mac_ctxt_send_beacon(mvm, vif, beacon);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 141d9fc299b01..6981608ef165a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -2987,16 +2987,20 @@ static void iwl_mvm_check_he_obss_narrow_bw_ru_iter(struct wiphy *wiphy,
+ 						    void *_data)
+ {
+ 	struct iwl_mvm_he_obss_narrow_bw_ru_data *data = _data;
++	const struct cfg80211_bss_ies *ies;
+ 	const struct element *elem;
+ 
+-	elem = cfg80211_find_elem(WLAN_EID_EXT_CAPABILITY, bss->ies->data,
+-				  bss->ies->len);
++	rcu_read_lock();
++	ies = rcu_dereference(bss->ies);
++	elem = cfg80211_find_elem(WLAN_EID_EXT_CAPABILITY, ies->data,
++				  ies->len);
+ 
+ 	if (!elem || elem->datalen < 10 ||
+ 	    !(elem->data[10] &
+ 	      WLAN_EXT_CAPA10_OBSS_NARROW_BW_RU_TOLERANCE_SUPPORT)) {
+ 		data->tolerated = false;
+ 	}
++	rcu_read_unlock();
+ }
+ 
+ static void iwl_mvm_check_he_obss_narrow_bw_ru(struct ieee80211_hw *hw,
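
The mac80211.c change closes an RCU hole: cfg80211 can replace bss->ies
underneath the iterator, so the pointer must be fetched once with
rcu_dereference() inside an rcu_read_lock()/rcu_read_unlock() pair and not
touched after the unlock. The old code dereferenced bss->ies with no read-side
section at all, so the IEs could be freed mid-walk.
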
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index ebed82c590e56..31611542e1aa0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -754,10 +754,26 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
+ 
+ 	mvm->fw_restart = iwlwifi_mod_params.fw_restart ? -1 : 0;
+ 
+-	mvm->aux_queue = IWL_MVM_DQA_AUX_QUEUE;
+-	mvm->snif_queue = IWL_MVM_DQA_INJECT_MONITOR_QUEUE;
+-	mvm->probe_queue = IWL_MVM_DQA_AP_PROBE_RESP_QUEUE;
+-	mvm->p2p_dev_queue = IWL_MVM_DQA_P2P_DEVICE_QUEUE;
++	if (iwl_mvm_has_new_tx_api(mvm)) {
++		/*
++		 * If we have the new TX/queue allocation API initialize them
++		 * all to invalid numbers. We'll rewrite the ones that we need
++		 * later, but that doesn't happen for all of them all of the
++		 * time (e.g. P2P Device is optional), and if a dynamic queue
++		 * ends up getting number 2 (IWL_MVM_DQA_P2P_DEVICE_QUEUE) then
++		 * iwl_mvm_is_static_queue() erroneously returns true, and we
++		 * might have things getting stuck.
++		 */
++		mvm->aux_queue = IWL_MVM_INVALID_QUEUE;
++		mvm->snif_queue = IWL_MVM_INVALID_QUEUE;
++		mvm->probe_queue = IWL_MVM_INVALID_QUEUE;
++		mvm->p2p_dev_queue = IWL_MVM_INVALID_QUEUE;
++	} else {
++		mvm->aux_queue = IWL_MVM_DQA_AUX_QUEUE;
++		mvm->snif_queue = IWL_MVM_DQA_INJECT_MONITOR_QUEUE;
++		mvm->probe_queue = IWL_MVM_DQA_AP_PROBE_RESP_QUEUE;
++		mvm->p2p_dev_queue = IWL_MVM_DQA_P2P_DEVICE_QUEUE;
++	}
+ 
+ 	mvm->sf_state = SF_UNINIT;
+ 	if (iwl_mvm_has_unified_ucode(mvm))
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index 5a0696c44f6df..ee3aff8bf7c25 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -1648,7 +1648,7 @@ iwl_mvm_umac_scan_cfg_channels_v6(struct iwl_mvm *mvm,
+ 		struct iwl_scan_channel_cfg_umac *cfg = &cp->channel_config[i];
+ 		u32 n_aps_flag =
+ 			iwl_mvm_scan_ch_n_aps_flag(vif_type,
+-						   cfg->v2.channel_num);
++						   channels[i]->hw_value);
+ 
+ 		cfg->flags = cpu_to_le32(flags | n_aps_flag);
+ 		cfg->v2.channel_num = channels[i]->hw_value;
+@@ -2368,14 +2368,17 @@ static int iwl_mvm_scan_umac_v14(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ 	if (ret)
+ 		return ret;
+ 
+-	iwl_mvm_scan_umac_fill_probe_p_v4(params, &scan_p->probe_params,
+-					  &bitmap_ssid);
+ 	if (!params->scan_6ghz) {
++		iwl_mvm_scan_umac_fill_probe_p_v4(params, &scan_p->probe_params,
++					  &bitmap_ssid);
+ 		iwl_mvm_scan_umac_fill_ch_p_v6(mvm, params, vif,
+-					       &scan_p->channel_params, bitmap_ssid);
++				       &scan_p->channel_params, bitmap_ssid);
+ 
+ 		return 0;
++	} else {
++		pb->preq = params->preq;
+ 	}
++
+ 	cp->flags = iwl_mvm_scan_umac_chan_flags_v2(mvm, params, vif);
+ 	cp->n_aps_override[0] = IWL_SCAN_ADWELL_N_APS_GO_FRIENDLY;
+ 	cp->n_aps_override[1] = IWL_SCAN_ADWELL_N_APS_SOCIAL_CHS;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index f618368eda832..c310c366c38e8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -316,8 +316,9 @@ static int iwl_mvm_invalidate_sta_queue(struct iwl_mvm *mvm, int queue,
+ }
+ 
+ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+-			       int queue, u8 tid, u8 flags)
++			       u16 *queueptr, u8 tid, u8 flags)
+ {
++	int queue = *queueptr;
+ 	struct iwl_scd_txq_cfg_cmd cmd = {
+ 		.scd_queue = queue,
+ 		.action = SCD_CFG_DISABLE_QUEUE,
+@@ -326,6 +327,7 @@ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 
+ 	if (iwl_mvm_has_new_tx_api(mvm)) {
+ 		iwl_trans_txq_free(mvm->trans, queue);
++		*queueptr = IWL_MVM_INVALID_QUEUE;
+ 
+ 		return 0;
+ 	}
+@@ -487,6 +489,7 @@ static int iwl_mvm_free_inactive_queue(struct iwl_mvm *mvm, int queue,
+ 	u8 sta_id, tid;
+ 	unsigned long disable_agg_tids = 0;
+ 	bool same_sta;
++	u16 queue_tmp = queue;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+@@ -509,7 +512,7 @@ static int iwl_mvm_free_inactive_queue(struct iwl_mvm *mvm, int queue,
+ 		iwl_mvm_invalidate_sta_queue(mvm, queue,
+ 					     disable_agg_tids, false);
+ 
+-	ret = iwl_mvm_disable_txq(mvm, old_sta, queue, tid, 0);
++	ret = iwl_mvm_disable_txq(mvm, old_sta, &queue_tmp, tid, 0);
+ 	if (ret) {
+ 		IWL_ERR(mvm,
+ 			"Failed to free inactive queue %d (ret=%d)\n",
+@@ -1184,6 +1187,7 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm,
+ 	unsigned int wdg_timeout =
+ 		iwl_mvm_get_wd_timeout(mvm, mvmsta->vif, false, false);
+ 	int queue = -1;
++	u16 queue_tmp;
+ 	unsigned long disable_agg_tids = 0;
+ 	enum iwl_mvm_agg_state queue_state;
+ 	bool shared_queue = false, inc_ssn;
+@@ -1332,7 +1336,8 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm,
+ 	return 0;
+ 
+ out_err:
+-	iwl_mvm_disable_txq(mvm, sta, queue, tid, 0);
++	queue_tmp = queue;
++	iwl_mvm_disable_txq(mvm, sta, &queue_tmp, tid, 0);
+ 
+ 	return ret;
+ }
+@@ -1779,7 +1784,7 @@ static void iwl_mvm_disable_sta_queues(struct iwl_mvm *mvm,
+ 		if (mvm_sta->tid_data[i].txq_id == IWL_MVM_INVALID_QUEUE)
+ 			continue;
+ 
+-		iwl_mvm_disable_txq(mvm, sta, mvm_sta->tid_data[i].txq_id, i,
++		iwl_mvm_disable_txq(mvm, sta, &mvm_sta->tid_data[i].txq_id, i,
+ 				    0);
+ 		mvm_sta->tid_data[i].txq_id = IWL_MVM_INVALID_QUEUE;
+ 	}
+@@ -1987,7 +1992,7 @@ static int iwl_mvm_add_int_sta_with_queue(struct iwl_mvm *mvm, int macidx,
+ 	ret = iwl_mvm_add_int_sta_common(mvm, sta, addr, macidx, maccolor);
+ 	if (ret) {
+ 		if (!iwl_mvm_has_new_tx_api(mvm))
+-			iwl_mvm_disable_txq(mvm, NULL, *queue,
++			iwl_mvm_disable_txq(mvm, NULL, queue,
+ 					    IWL_MAX_TID_COUNT, 0);
+ 		return ret;
+ 	}
+@@ -2060,7 +2065,7 @@ int iwl_mvm_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 	if (WARN_ON_ONCE(mvm->snif_sta.sta_id == IWL_MVM_INVALID_STA))
+ 		return -EINVAL;
+ 
+-	iwl_mvm_disable_txq(mvm, NULL, mvm->snif_queue, IWL_MAX_TID_COUNT, 0);
++	iwl_mvm_disable_txq(mvm, NULL, &mvm->snif_queue, IWL_MAX_TID_COUNT, 0);
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvm->snif_sta.sta_id);
+ 	if (ret)
+ 		IWL_WARN(mvm, "Failed sending remove station\n");
+@@ -2077,7 +2082,7 @@ int iwl_mvm_rm_aux_sta(struct iwl_mvm *mvm)
+ 	if (WARN_ON_ONCE(mvm->aux_sta.sta_id == IWL_MVM_INVALID_STA))
+ 		return -EINVAL;
+ 
+-	iwl_mvm_disable_txq(mvm, NULL, mvm->aux_queue, IWL_MAX_TID_COUNT, 0);
++	iwl_mvm_disable_txq(mvm, NULL, &mvm->aux_queue, IWL_MAX_TID_COUNT, 0);
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvm->aux_sta.sta_id);
+ 	if (ret)
+ 		IWL_WARN(mvm, "Failed sending remove station\n");
+@@ -2173,7 +2178,7 @@ static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
+ 					  struct ieee80211_vif *vif)
+ {
+ 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+-	int queue;
++	u16 *queueptr, queue;
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
+@@ -2182,10 +2187,10 @@ static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
+ 	switch (vif->type) {
+ 	case NL80211_IFTYPE_AP:
+ 	case NL80211_IFTYPE_ADHOC:
+-		queue = mvm->probe_queue;
++		queueptr = &mvm->probe_queue;
+ 		break;
+ 	case NL80211_IFTYPE_P2P_DEVICE:
+-		queue = mvm->p2p_dev_queue;
++		queueptr = &mvm->p2p_dev_queue;
+ 		break;
+ 	default:
+ 		WARN(1, "Can't free bcast queue on vif type %d\n",
+@@ -2193,7 +2198,8 @@ static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
+ 		return;
+ 	}
+ 
+-	iwl_mvm_disable_txq(mvm, NULL, queue, IWL_MAX_TID_COUNT, 0);
++	queue = *queueptr;
++	iwl_mvm_disable_txq(mvm, NULL, queueptr, IWL_MAX_TID_COUNT, 0);
+ 	if (iwl_mvm_has_new_tx_api(mvm))
+ 		return;
+ 
+@@ -2428,7 +2434,7 @@ int iwl_mvm_rm_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 
+ 	iwl_mvm_flush_sta(mvm, &mvmvif->mcast_sta, true);
+ 
+-	iwl_mvm_disable_txq(mvm, NULL, mvmvif->cab_queue, 0, 0);
++	iwl_mvm_disable_txq(mvm, NULL, &mvmvif->cab_queue, 0, 0);
+ 
+ 	ret = iwl_mvm_rm_sta_common(mvm, mvmvif->mcast_sta.sta_id);
+ 	if (ret)
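
The sta.c refactor passes the queue number by pointer so iwl_mvm_disable_txq()
can write IWL_MVM_INVALID_QUEUE back into the owner's bookkeeping at the
moment the hardware queue is freed, instead of relying on every caller to
remember to do so. Combined with the ops.c hunk above, which initializes the
static queue fields to IWL_MVM_INVALID_QUEUE on the new TX API, a freed queue
number can no longer be mistaken for a live static queue by
iwl_mvm_is_static_queue().
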
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index fb8491412be44..586c4104edf22 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -487,6 +487,9 @@ void iwl_pcie_free_rbs_pool(struct iwl_trans *trans)
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 	int i;
+ 
++	if (!trans_pcie->rx_pool)
++		return;
++
+ 	for (i = 0; i < RX_POOL_SIZE(trans_pcie->num_rx_bufs); i++) {
+ 		if (!trans_pcie->rx_pool[i].page)
+ 			continue;
+@@ -1093,7 +1096,7 @@ static int _iwl_pcie_rx_init(struct iwl_trans *trans)
+ 	INIT_LIST_HEAD(&rba->rbd_empty);
+ 	spin_unlock_bh(&rba->lock);
+ 
+-	/* free all first - we might be reconfigured for a different size */
++	/* free all first - we overwrite everything here */
+ 	iwl_pcie_free_rbs_pool(trans);
+ 
+ 	for (i = 0; i < RX_QUEUE_SIZE; i++)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 239bc177a3e5c..a7a495dbf64db 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1866,6 +1866,9 @@ static void iwl_trans_pcie_configure(struct iwl_trans *trans,
+ {
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 
++	/* free all first - we might be reconfigured for a different size */
++	iwl_pcie_free_rbs_pool(trans);
++
+ 	trans->txqs.cmd.q_id = trans_cfg->cmd_queue;
+ 	trans->txqs.cmd.fifo = trans_cfg->cmd_fifo;
+ 	trans->txqs.cmd.wdg_timeout = trans_cfg->cmd_q_wdg_timeout;
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+index 01735776345a9..7ddce3c3f0c48 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
+@@ -1378,6 +1378,8 @@ struct rtl8xxxu_priv {
+ 	u8 no_pape:1;
+ 	u8 int_buf[USB_INTR_CONTENT_LENGTH];
+ 	u8 rssi_level;
++	DECLARE_BITMAP(tx_aggr_started, IEEE80211_NUM_TIDS);
++	DECLARE_BITMAP(tid_tx_operational, IEEE80211_NUM_TIDS);
+ 	/*
+ 	 * Only one virtual interface permitted because only STA mode
+ 	 * is supported and no iface_combinations are provided.
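
The two new rtl8xxxu fields are kernel bitmaps: DECLARE_BITMAP(name, bits)
expands to an unsigned long array sized for `bits` bits, and since
IEEE80211_NUM_TIDS is 16 each map fits in a single long, driven with the
atomic set_bit()/clear_bit()/test_bit() helpers as in the _core.c hunks below.
A minimal sketch of the pattern:

#include <linux/types.h>	/* bool */
#include <linux/bitmap.h>	/* DECLARE_BITMAP() */
#include <linux/bitops.h>	/* set_bit(), test_bit(), clear_bit() */

static DECLARE_BITMAP(tx_aggr_started, 16);	/* one bit per TID */
static DECLARE_BITMAP(tid_tx_operational, 16);

static bool may_aggregate(unsigned int tid)
{
	/* aggregate only once mac80211 has reported the BlockAck
	 * session operational (IEEE80211_AMPDU_TX_OPERATIONAL)
	 */
	return test_bit(tid, tid_tx_operational);
}
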
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+index 9ff09cf7eb622..ce8e2438f86b0 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+@@ -4805,6 +4805,8 @@ rtl8xxxu_fill_txdesc_v1(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ 	struct ieee80211_rate *tx_rate = ieee80211_get_tx_rate(hw, tx_info);
+ 	struct rtl8xxxu_priv *priv = hw->priv;
+ 	struct device *dev = &priv->udev->dev;
++	u8 *qc = ieee80211_get_qos_ctl(hdr);
++	u8 tid = qc[0] & IEEE80211_QOS_CTL_TID_MASK;
+ 	u32 rate;
+ 	u16 rate_flags = tx_info->control.rates[0].flags;
+ 	u16 seq_number;
+@@ -4828,7 +4830,7 @@ rtl8xxxu_fill_txdesc_v1(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ 
+ 	tx_desc->txdw3 = cpu_to_le32((u32)seq_number << TXDESC32_SEQ_SHIFT);
+ 
+-	if (ampdu_enable)
++	if (ampdu_enable && test_bit(tid, priv->tid_tx_operational))
+ 		tx_desc->txdw1 |= cpu_to_le32(TXDESC32_AGG_ENABLE);
+ 	else
+ 		tx_desc->txdw1 |= cpu_to_le32(TXDESC32_AGG_BREAK);
+@@ -4876,6 +4878,8 @@ rtl8xxxu_fill_txdesc_v2(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ 	struct rtl8xxxu_priv *priv = hw->priv;
+ 	struct device *dev = &priv->udev->dev;
+ 	struct rtl8xxxu_txdesc40 *tx_desc40;
++	u8 *qc = ieee80211_get_qos_ctl(hdr);
++	u8 tid = qc[0] & IEEE80211_QOS_CTL_TID_MASK;
+ 	u32 rate;
+ 	u16 rate_flags = tx_info->control.rates[0].flags;
+ 	u16 seq_number;
+@@ -4902,7 +4906,7 @@ rtl8xxxu_fill_txdesc_v2(struct ieee80211_hw *hw, struct ieee80211_hdr *hdr,
+ 
+ 	tx_desc40->txdw9 = cpu_to_le32((u32)seq_number << TXDESC40_SEQ_SHIFT);
+ 
+-	if (ampdu_enable)
++	if (ampdu_enable && test_bit(tid, priv->tid_tx_operational))
+ 		tx_desc40->txdw2 |= cpu_to_le32(TXDESC40_AGG_ENABLE);
+ 	else
+ 		tx_desc40->txdw2 |= cpu_to_le32(TXDESC40_AGG_BREAK);
+@@ -5015,12 +5019,19 @@ static void rtl8xxxu_tx(struct ieee80211_hw *hw,
+ 	if (ieee80211_is_data_qos(hdr->frame_control) && sta) {
+ 		if (sta->ht_cap.ht_supported) {
+ 			u32 ampdu, val32;
++			u8 *qc = ieee80211_get_qos_ctl(hdr);
++			u8 tid = qc[0] & IEEE80211_QOS_CTL_TID_MASK;
+ 
+ 			ampdu = (u32)sta->ht_cap.ampdu_density;
+ 			val32 = ampdu << TXDESC_AMPDU_DENSITY_SHIFT;
+ 			tx_desc->txdw2 |= cpu_to_le32(val32);
+ 
+ 			ampdu_enable = true;
++
++			if (!test_bit(tid, priv->tx_aggr_started) &&
++			    !(skb->protocol == cpu_to_be16(ETH_P_PAE)))
++				if (!ieee80211_start_tx_ba_session(sta, tid, 0))
++					set_bit(tid, priv->tx_aggr_started);
+ 		}
+ 	}
+ 
+@@ -6089,6 +6100,7 @@ rtl8xxxu_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 	struct device *dev = &priv->udev->dev;
+ 	u8 ampdu_factor, ampdu_density;
+ 	struct ieee80211_sta *sta = params->sta;
++	u16 tid = params->tid;
+ 	enum ieee80211_ampdu_mlme_action action = params->action;
+ 
+ 	switch (action) {
+@@ -6101,17 +6113,20 @@ rtl8xxxu_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 		dev_dbg(dev,
+ 			"Changed HT: ampdu_factor %02x, ampdu_density %02x\n",
+ 			ampdu_factor, ampdu_density);
+-		break;
++		return IEEE80211_AMPDU_TX_START_IMMEDIATE;
++	case IEEE80211_AMPDU_TX_STOP_CONT:
+ 	case IEEE80211_AMPDU_TX_STOP_FLUSH:
+-		dev_dbg(dev, "%s: IEEE80211_AMPDU_TX_STOP_FLUSH\n", __func__);
+-		rtl8xxxu_set_ampdu_factor(priv, 0);
+-		rtl8xxxu_set_ampdu_min_space(priv, 0);
+-		break;
+ 	case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT:
+-		dev_dbg(dev, "%s: IEEE80211_AMPDU_TX_STOP_FLUSH_CONT\n",
+-			 __func__);
++		dev_dbg(dev, "%s: IEEE80211_AMPDU_TX_STOP\n", __func__);
+ 		rtl8xxxu_set_ampdu_factor(priv, 0);
+ 		rtl8xxxu_set_ampdu_min_space(priv, 0);
++		clear_bit(tid, priv->tx_aggr_started);
++		clear_bit(tid, priv->tid_tx_operational);
++		ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
++		break;
++	case IEEE80211_AMPDU_TX_OPERATIONAL:
++		dev_dbg(dev, "%s: IEEE80211_AMPDU_TX_OPERATIONAL\n", __func__);
++		set_bit(tid, priv->tid_tx_operational);
+ 		break;
+ 	case IEEE80211_AMPDU_RX_START:
+ 		dev_dbg(dev, "%s: IEEE80211_AMPDU_RX_START\n", __func__);
+diff --git a/drivers/net/wireless/realtek/rtw88/Makefile b/drivers/net/wireless/realtek/rtw88/Makefile
+index c0e4b111c8b4e..73d6807a8cdfb 100644
+--- a/drivers/net/wireless/realtek/rtw88/Makefile
++++ b/drivers/net/wireless/realtek/rtw88/Makefile
+@@ -15,9 +15,9 @@ rtw88_core-y += main.o \
+ 	   ps.o \
+ 	   sec.o \
+ 	   bf.o \
+-	   wow.o \
+ 	   regd.o
+ 
++rtw88_core-$(CONFIG_PM) += wow.o
+ 
+ obj-$(CONFIG_RTW88_8822B)	+= rtw88_8822b.o
+ rtw88_8822b-objs		:= rtw8822b.o rtw8822b_table.o
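
The Makefile hunk uses the standard Kbuild conditional:
rtw88_core-$(CONFIG_PM) += wow.o appends to rtw88_core-y when CONFIG_PM=y and
expands to the never-referenced rtw88_core- list otherwise, so wow.o simply
drops out of the link on !CONFIG_PM builds instead of failing to compile
against absent suspend/resume infrastructure.
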
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
+index ea2cd4db1d3ce..ce57932e38a44 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.c
++++ b/drivers/net/wireless/realtek/rtw88/fw.c
+@@ -715,7 +715,7 @@ static u16 rtw_get_rsvd_page_probe_req_size(struct rtw_dev *rtwdev,
+ 			continue;
+ 		if ((!ssid && !rsvd_pkt->ssid) ||
+ 		    rtw_ssid_equal(rsvd_pkt->ssid, ssid))
+-			size = rsvd_pkt->skb->len;
++			size = rsvd_pkt->probe_req_size;
+ 	}
+ 
+ 	return size;
+@@ -943,6 +943,8 @@ static struct sk_buff *rtw_get_rsvd_page_skb(struct ieee80211_hw *hw,
+ 							 ssid->ssid_len, 0);
+ 		else
+ 			skb_new = ieee80211_probereq_get(hw, vif->addr, NULL, 0, 0);
++		if (skb_new)
++			rsvd_pkt->probe_req_size = (u16)skb_new->len;
+ 		break;
+ 	case RSVD_NLO_INFO:
+ 		skb_new = rtw_nlo_info_get(hw);
+@@ -1539,6 +1541,7 @@ int rtw_fw_dump_fifo(struct rtw_dev *rtwdev, u8 fifo_sel, u32 addr, u32 size,
+ static void __rtw_fw_update_pkt(struct rtw_dev *rtwdev, u8 pkt_id, u16 size,
+ 				u8 location)
+ {
++	struct rtw_chip_info *chip = rtwdev->chip;
+ 	u8 h2c_pkt[H2C_PKT_SIZE] = {0};
+ 	u16 total_size = H2C_PKT_HDR_SIZE + H2C_PKT_UPDATE_PKT_LEN;
+ 
+@@ -1549,6 +1552,7 @@ static void __rtw_fw_update_pkt(struct rtw_dev *rtwdev, u8 pkt_id, u16 size,
+ 	UPDATE_PKT_SET_LOCATION(h2c_pkt, location);
+ 
+ 	/* include txdesc size */
++	size += chip->tx_pkt_desc_sz;
+ 	UPDATE_PKT_SET_SIZE(h2c_pkt, size);
+ 
+ 	rtw_fw_send_h2c_packet(rtwdev, h2c_pkt);
+@@ -1558,7 +1562,7 @@ void rtw_fw_update_pkt_probe_req(struct rtw_dev *rtwdev,
+ 				 struct cfg80211_ssid *ssid)
+ {
+ 	u8 loc;
+-	u32 size;
++	u16 size;
+ 
+ 	loc = rtw_get_rsvd_page_probe_req_location(rtwdev, ssid);
+ 	if (!loc) {
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.h b/drivers/net/wireless/realtek/rtw88/fw.h
+index 7c5b1d75e26f1..35bc9e10dcbaa 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.h
++++ b/drivers/net/wireless/realtek/rtw88/fw.h
+@@ -126,6 +126,7 @@ struct rtw_rsvd_page {
+ 	u8 page;
+ 	bool add_txdesc;
+ 	struct cfg80211_ssid *ssid;
++	u16 probe_req_size;
+ };
+ 
+ enum rtw_keep_alive_pkt_type {
+diff --git a/drivers/net/wireless/realtek/rtw88/wow.c b/drivers/net/wireless/realtek/rtw88/wow.c
+index fc9544f4e5e45..bdccfa70dddc7 100644
+--- a/drivers/net/wireless/realtek/rtw88/wow.c
++++ b/drivers/net/wireless/realtek/rtw88/wow.c
+@@ -283,15 +283,26 @@ static void rtw_wow_rx_dma_start(struct rtw_dev *rtwdev)
+ 
+ static int rtw_wow_check_fw_status(struct rtw_dev *rtwdev, bool wow_enable)
+ {
+-	/* wait 100ms for wow firmware to finish work */
+-	msleep(100);
++	int ret;
++	u8 check;
++	u32 check_dis;
+ 
+ 	if (wow_enable) {
+-		if (rtw_read8(rtwdev, REG_WOWLAN_WAKE_REASON))
++		ret = read_poll_timeout(rtw_read8, check, !check, 1000,
++					100000, true, rtwdev,
++					REG_WOWLAN_WAKE_REASON);
++		if (ret)
+ 			goto wow_fail;
+ 	} else {
+-		if (rtw_read32_mask(rtwdev, REG_FE1IMR, BIT_FS_RXDONE) ||
+-		    rtw_read32_mask(rtwdev, REG_RXPKT_NUM, BIT_RW_RELEASE))
++		ret = read_poll_timeout(rtw_read32_mask, check_dis,
++					!check_dis, 1000, 100000, true, rtwdev,
++					REG_FE1IMR, BIT_FS_RXDONE);
++		if (ret)
++			goto wow_fail;
++		ret = read_poll_timeout(rtw_read32_mask, check_dis,
++					!check_dis, 1000, 100000, false, rtwdev,
++					REG_RXPKT_NUM, BIT_RW_RELEASE);
++		if (ret)
+ 			goto wow_fail;
+ 	}
+ 
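
read_poll_timeout(), from <linux/iopoll.h>, takes (op, val, cond, sleep_us,
timeout_us, sleep_before_read, args...): it repeatedly evaluates
val = op(args...), sleeping sleep_us between reads, until cond holds or
timeout_us elapses, returning 0 on success or -ETIMEDOUT. The wow.c change
therefore swaps a blind msleep(100) plus a single register read for a 1 ms
poll bounded at 100 ms, keeping sleep_before_read=true where the firmware
needs a settling delay before the first read.
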
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index ed10a8b66068a..3c7c4f1d55cd9 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -449,11 +449,11 @@ static int pmem_attach_disk(struct device *dev,
+ 		pmem->pfn_flags |= PFN_MAP;
+ 		bb_range = pmem->pgmap.range;
+ 	} else {
++		addr = devm_memremap(dev, pmem->phys_addr,
++				pmem->size, ARCH_MEMREMAP_PMEM);
+ 		if (devm_add_action_or_reset(dev, pmem_release_queue,
+ 					&pmem->pgmap))
+ 			return -ENOMEM;
+-		addr = devm_memremap(dev, pmem->phys_addr,
+-				pmem->size, ARCH_MEMREMAP_PMEM);
+ 		bb_range.start =  res->start;
+ 		bb_range.end = res->end;
+ 	}
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 148e756857a89..a13eec2fca5aa 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1009,7 +1009,8 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req)
+ 		return BLK_STS_IOERR;
+ 	}
+ 
+-	cmd->common.command_id = req->tag;
++	nvme_req(req)->genctr++;
++	cmd->common.command_id = nvme_cid(req);
+ 	trace_nvme_setup_cmd(req, cmd);
+ 	return ret;
+ }
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 0015860ec12bf..632076b9c1c9d 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -158,6 +158,7 @@ enum nvme_quirks {
+ struct nvme_request {
+ 	struct nvme_command	*cmd;
+ 	union nvme_result	result;
++	u8			genctr;
+ 	u8			retries;
+ 	u8			flags;
+ 	u16			status;
+@@ -497,6 +498,49 @@ struct nvme_ctrl_ops {
+ 	int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
+ };
+ 
++/*
++ * nvme command_id is constructed as such:
++ * | xxxx | xxxxxxxxxxxx |
++ *   gen    request tag
++ */
++#define nvme_genctr_mask(gen)			(gen & 0xf)
++#define nvme_cid_install_genctr(gen)		(nvme_genctr_mask(gen) << 12)
++#define nvme_genctr_from_cid(cid)		((cid & 0xf000) >> 12)
++#define nvme_tag_from_cid(cid)			(cid & 0xfff)
++
++static inline u16 nvme_cid(struct request *rq)
++{
++	return nvme_cid_install_genctr(nvme_req(rq)->genctr) | rq->tag;
++}
++
++static inline struct request *nvme_find_rq(struct blk_mq_tags *tags,
++		u16 command_id)
++{
++	u8 genctr = nvme_genctr_from_cid(command_id);
++	u16 tag = nvme_tag_from_cid(command_id);
++	struct request *rq;
++
++	rq = blk_mq_tag_to_rq(tags, tag);
++	if (unlikely(!rq)) {
++		pr_err("could not locate request for tag %#x\n",
++			tag);
++		return NULL;
++	}
++	if (unlikely(nvme_genctr_mask(nvme_req(rq)->genctr) != genctr)) {
++		dev_err(nvme_req(rq)->ctrl->device,
++			"request %#x genctr mismatch (got %#x expected %#x)\n",
++			tag, genctr, nvme_genctr_mask(nvme_req(rq)->genctr));
++		return NULL;
++	}
++	return rq;
++}
++
++static inline struct request *nvme_cid_to_rq(struct blk_mq_tags *tags,
++                u16 command_id)
++{
++	return blk_mq_tag_to_rq(tags, nvme_tag_from_cid(command_id));
++}
++
+ #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
+ void nvme_fault_inject_init(struct nvme_fault_inject *fault_inj,
+ 			    const char *dev_name);
+@@ -594,7 +638,8 @@ static inline void nvme_put_ctrl(struct nvme_ctrl *ctrl)
+ 
+ static inline bool nvme_is_aen_req(u16 qid, __u16 command_id)
+ {
+-	return !qid && command_id >= NVME_AQ_BLK_MQ_DEPTH;
++	return !qid &&
++		nvme_tag_from_cid(command_id) >= NVME_AQ_BLK_MQ_DEPTH;
+ }
+ 
+ void nvme_complete_rq(struct request *req);
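
The helpers above guard against completions arriving for an already-reused
tag. A standalone model of the command_id layout they define, a 4-bit
generation counter in bits 15:12 over the 12-bit blk-mq tag:

#include <stdio.h>
#include <stdint.h>

static uint16_t cid_encode(uint8_t genctr, uint16_t tag)
{
	return (uint16_t)(((genctr & 0xf) << 12) | (tag & 0xfff));
}

int main(void)
{
	uint16_t cid = cid_encode(5, 0x2a);
	unsigned int gen = (cid & 0xf000) >> 12;	/* nvme_genctr_from_cid() */
	unsigned int tag = cid & 0xfff;			/* nvme_tag_from_cid()    */

	printf("cid=%#x gen=%u tag=%#x\n", (unsigned int)cid, gen, tag);

	/* A stale or corrupted completion carries the old generation,
	 * so the genctr comparison in nvme_find_rq() rejects it even
	 * though the 12-bit tag may already belong to a newer request.
	 */
	return 0;
}
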
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index d963f25fc7aed..01feb1c2278dc 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1017,7 +1017,7 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
+ 		return;
+ 	}
+ 
+-	req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), command_id);
++	req = nvme_find_rq(nvme_queue_tagset(nvmeq), command_id);
+ 	if (unlikely(!req)) {
+ 		dev_warn(nvmeq->dev->ctrl.device,
+ 			"invalid id %d completed on queue %d\n",
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index f80682f7df54d..b95945c58b3b4 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1731,10 +1731,10 @@ static void nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
+ 	struct request *rq;
+ 	struct nvme_rdma_request *req;
+ 
+-	rq = blk_mq_tag_to_rq(nvme_rdma_tagset(queue), cqe->command_id);
++	rq = nvme_find_rq(nvme_rdma_tagset(queue), cqe->command_id);
+ 	if (!rq) {
+ 		dev_err(queue->ctrl->ctrl.device,
+-			"tag 0x%x on QP %#x not found\n",
++			"got bad command_id %#x on QP %#x\n",
+ 			cqe->command_id, queue->qp->qp_num);
+ 		nvme_rdma_error_recovery(queue->ctrl);
+ 		return;
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index ab1ea5b0888ea..258d71807367a 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -487,11 +487,11 @@ static int nvme_tcp_process_nvme_cqe(struct nvme_tcp_queue *queue,
+ {
+ 	struct request *rq;
+ 
+-	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), cqe->command_id);
++	rq = nvme_find_rq(nvme_tcp_tagset(queue), cqe->command_id);
+ 	if (!rq) {
+ 		dev_err(queue->ctrl->ctrl.device,
+-			"queue %d tag 0x%x not found\n",
+-			nvme_tcp_queue_id(queue), cqe->command_id);
++			"got bad cqe.command_id %#x on queue %d\n",
++			cqe->command_id, nvme_tcp_queue_id(queue));
+ 		nvme_tcp_error_recovery(&queue->ctrl->ctrl);
+ 		return -EINVAL;
+ 	}
+@@ -508,11 +508,11 @@ static int nvme_tcp_handle_c2h_data(struct nvme_tcp_queue *queue,
+ {
+ 	struct request *rq;
+ 
+-	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
++	rq = nvme_find_rq(nvme_tcp_tagset(queue), pdu->command_id);
+ 	if (!rq) {
+ 		dev_err(queue->ctrl->ctrl.device,
+-			"queue %d tag %#x not found\n",
+-			nvme_tcp_queue_id(queue), pdu->command_id);
++			"got bad c2hdata.command_id %#x on queue %d\n",
++			pdu->command_id, nvme_tcp_queue_id(queue));
+ 		return -ENOENT;
+ 	}
+ 
+@@ -606,7 +606,7 @@ static int nvme_tcp_setup_h2c_data_pdu(struct nvme_tcp_request *req,
+ 	data->hdr.plen =
+ 		cpu_to_le32(data->hdr.hlen + hdgst + req->pdu_len + ddgst);
+ 	data->ttag = pdu->ttag;
+-	data->command_id = rq->tag;
++	data->command_id = nvme_cid(rq);
+ 	data->data_offset = cpu_to_le32(req->data_sent);
+ 	data->data_length = cpu_to_le32(req->pdu_len);
+ 	return 0;
+@@ -619,11 +619,11 @@ static int nvme_tcp_handle_r2t(struct nvme_tcp_queue *queue,
+ 	struct request *rq;
+ 	int ret;
+ 
+-	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
++	rq = nvme_find_rq(nvme_tcp_tagset(queue), pdu->command_id);
+ 	if (!rq) {
+ 		dev_err(queue->ctrl->ctrl.device,
+-			"queue %d tag %#x not found\n",
+-			nvme_tcp_queue_id(queue), pdu->command_id);
++			"got bad r2t.command_id %#x on queue %d\n",
++			pdu->command_id, nvme_tcp_queue_id(queue));
+ 		return -ENOENT;
+ 	}
+ 	req = blk_mq_rq_to_pdu(rq);
+@@ -702,17 +702,9 @@ static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb,
+ 			      unsigned int *offset, size_t *len)
+ {
+ 	struct nvme_tcp_data_pdu *pdu = (void *)queue->pdu;
+-	struct nvme_tcp_request *req;
+-	struct request *rq;
+-
+-	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
+-	if (!rq) {
+-		dev_err(queue->ctrl->ctrl.device,
+-			"queue %d tag %#x not found\n",
+-			nvme_tcp_queue_id(queue), pdu->command_id);
+-		return -ENOENT;
+-	}
+-	req = blk_mq_rq_to_pdu(rq);
++	struct request *rq =
++		nvme_cid_to_rq(nvme_tcp_tagset(queue), pdu->command_id);
++	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
+ 
+ 	while (true) {
+ 		int recv_len, ret;
+@@ -804,8 +796,8 @@ static int nvme_tcp_recv_ddgst(struct nvme_tcp_queue *queue,
+ 	}
+ 
+ 	if (pdu->hdr.flags & NVME_TCP_F_DATA_SUCCESS) {
+-		struct request *rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue),
+-						pdu->command_id);
++		struct request *rq = nvme_cid_to_rq(nvme_tcp_tagset(queue),
++					pdu->command_id);
+ 
+ 		nvme_tcp_end_request(rq, NVME_SC_SUCCESS);
+ 		queue->nr_cqe++;
+diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
+index a5c4a18650263..f6ee47de3038a 100644
+--- a/drivers/nvme/target/loop.c
++++ b/drivers/nvme/target/loop.c
+@@ -107,10 +107,10 @@ static void nvme_loop_queue_response(struct nvmet_req *req)
+ 	} else {
+ 		struct request *rq;
+ 
+-		rq = blk_mq_tag_to_rq(nvme_loop_tagset(queue), cqe->command_id);
++		rq = nvme_find_rq(nvme_loop_tagset(queue), cqe->command_id);
+ 		if (!rq) {
+ 			dev_err(queue->ctrl->ctrl.device,
+-				"tag 0x%x on queue %d not found\n",
++				"got bad command_id %#x on queue %d\n",
+ 				cqe->command_id, nvme_loop_queue_idx(queue));
+ 			return;
+ 		}
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index f9c9c98599197..30691b50731f3 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -818,8 +818,11 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 
+ 	if (nvmem->nkeepout) {
+ 		rval = nvmem_validate_keepouts(nvmem);
+-		if (rval)
+-			goto err_put_device;
++		if (rval) {
++			ida_free(&nvmem_ida, nvmem->id);
++			kfree(nvmem);
++			return ERR_PTR(rval);
++		}
+ 	}
+ 
+ 	dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
+diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c
+index d6d3f24685a85..f372eda2b2551 100644
+--- a/drivers/nvmem/qfprom.c
++++ b/drivers/nvmem/qfprom.c
+@@ -138,6 +138,9 @@ static void qfprom_disable_fuse_blowing(const struct qfprom_priv *priv,
+ {
+ 	int ret;
+ 
++	writel(old->timer_val, priv->qfpconf + QFPROM_BLOW_TIMER_OFFSET);
++	writel(old->accel_val, priv->qfpconf + QFPROM_ACCEL_OFFSET);
++
+ 	/*
+ 	 * This may be a shared rail and may be able to run at a lower rate
+ 	 * when we're not blowing fuses.  At the moment, the regulator framework
+@@ -158,9 +161,6 @@ static void qfprom_disable_fuse_blowing(const struct qfprom_priv *priv,
+ 			 "Failed to set clock rate for disable (ignoring)\n");
+ 
+ 	clk_disable_unprepare(priv->secclk);
+-
+-	writel(old->timer_val, priv->qfpconf + QFPROM_BLOW_TIMER_OFFSET);
+-	writel(old->accel_val, priv->qfpconf + QFPROM_ACCEL_OFFSET);
+ }
+ 
+ /**
+diff --git a/drivers/of/kobj.c b/drivers/of/kobj.c
+index a32e60b024b8d..6675b5e56960c 100644
+--- a/drivers/of/kobj.c
++++ b/drivers/of/kobj.c
+@@ -119,7 +119,7 @@ int __of_attach_node_sysfs(struct device_node *np)
+ 	struct property *pp;
+ 	int rc;
+ 
+-	if (!of_kset)
++	if (!IS_ENABLED(CONFIG_SYSFS) || !of_kset)
+ 		return 0;
+ 
+ 	np->kobj.kset = of_kset;
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 01feeba78426c..de550ee48e77c 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -95,15 +95,7 @@ static struct dev_pm_opp *_find_opp_of_np(struct opp_table *opp_table,
+ static struct device_node *of_parse_required_opp(struct device_node *np,
+ 						 int index)
+ {
+-	struct device_node *required_np;
+-
+-	required_np = of_parse_phandle(np, "required-opps", index);
+-	if (unlikely(!required_np)) {
+-		pr_err("%s: Unable to parse required-opps: %pOF, index: %d\n",
+-		       __func__, np, index);
+-	}
+-
+-	return required_np;
++	return of_parse_phandle(np, "required-opps", index);
+ }
+ 
+ /* The caller must call dev_pm_opp_put_opp_table() after the table is used */
+@@ -1349,7 +1341,7 @@ int of_get_required_opp_performance_state(struct device_node *np, int index)
+ 
+ 	required_np = of_parse_required_opp(np, index);
+ 	if (!required_np)
+-		return -EINVAL;
++		return -ENODEV;
+ 
+ 	opp_table = _find_table_of_opp_np(required_np);
+ 	if (IS_ERR(opp_table)) {
+diff --git a/drivers/parport/ieee1284_ops.c b/drivers/parport/ieee1284_ops.c
+index 2c11bd3fe1fd6..17061f1df0f44 100644
+--- a/drivers/parport/ieee1284_ops.c
++++ b/drivers/parport/ieee1284_ops.c
+@@ -518,7 +518,7 @@ size_t parport_ieee1284_ecp_read_data (struct parport *port,
+ 				goto out;
+ 
+ 			/* Yield the port for a while. */
+-			if (count && dev->port->irq != PARPORT_IRQ_NONE) {
++			if (dev->port->irq != PARPORT_IRQ_NONE) {
+ 				parport_release (dev);
+ 				schedule_timeout_interruptible(msecs_to_jiffies(40));
+ 				parport_claim_or_block (dev);
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index c95ebe808f92b..fdbf051586970 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -58,6 +58,7 @@
+ #define   PIO_COMPLETION_STATUS_CRS		2
+ #define   PIO_COMPLETION_STATUS_CA		4
+ #define   PIO_NON_POSTED_REQ			BIT(10)
++#define   PIO_ERR_STATUS			BIT(11)
+ #define PIO_ADDR_LS				(PIO_BASE_ADDR + 0x8)
+ #define PIO_ADDR_MS				(PIO_BASE_ADDR + 0xc)
+ #define PIO_WR_DATA				(PIO_BASE_ADDR + 0x10)
+@@ -118,6 +119,46 @@
+ #define PCIE_MSI_MASK_REG			(CONTROL_BASE_ADDR + 0x5C)
+ #define PCIE_MSI_PAYLOAD_REG			(CONTROL_BASE_ADDR + 0x9C)
+ 
++/* PCIe window configuration */
++#define OB_WIN_BASE_ADDR			0x4c00
++#define OB_WIN_BLOCK_SIZE			0x20
++#define OB_WIN_COUNT				8
++#define OB_WIN_REG_ADDR(win, offset)		(OB_WIN_BASE_ADDR + \
++						 OB_WIN_BLOCK_SIZE * (win) + \
++						 (offset))
++#define OB_WIN_MATCH_LS(win)			OB_WIN_REG_ADDR(win, 0x00)
++#define     OB_WIN_ENABLE			BIT(0)
++#define OB_WIN_MATCH_MS(win)			OB_WIN_REG_ADDR(win, 0x04)
++#define OB_WIN_REMAP_LS(win)			OB_WIN_REG_ADDR(win, 0x08)
++#define OB_WIN_REMAP_MS(win)			OB_WIN_REG_ADDR(win, 0x0c)
++#define OB_WIN_MASK_LS(win)			OB_WIN_REG_ADDR(win, 0x10)
++#define OB_WIN_MASK_MS(win)			OB_WIN_REG_ADDR(win, 0x14)
++#define OB_WIN_ACTIONS(win)			OB_WIN_REG_ADDR(win, 0x18)
++#define OB_WIN_DEFAULT_ACTIONS			(OB_WIN_ACTIONS(OB_WIN_COUNT-1) + 0x4)
++#define     OB_WIN_FUNC_NUM_MASK		GENMASK(31, 24)
++#define     OB_WIN_FUNC_NUM_SHIFT		24
++#define     OB_WIN_FUNC_NUM_ENABLE		BIT(23)
++#define     OB_WIN_BUS_NUM_BITS_MASK		GENMASK(22, 20)
++#define     OB_WIN_BUS_NUM_BITS_SHIFT		20
++#define     OB_WIN_MSG_CODE_ENABLE		BIT(22)
++#define     OB_WIN_MSG_CODE_MASK		GENMASK(21, 14)
++#define     OB_WIN_MSG_CODE_SHIFT		14
++#define     OB_WIN_MSG_PAYLOAD_LEN		BIT(12)
++#define     OB_WIN_ATTR_ENABLE			BIT(11)
++#define     OB_WIN_ATTR_TC_MASK			GENMASK(10, 8)
++#define     OB_WIN_ATTR_TC_SHIFT		8
++#define     OB_WIN_ATTR_RELAXED			BIT(7)
++#define     OB_WIN_ATTR_NOSNOOP			BIT(6)
++#define     OB_WIN_ATTR_POISON			BIT(5)
++#define     OB_WIN_ATTR_IDO			BIT(4)
++#define     OB_WIN_TYPE_MASK			GENMASK(3, 0)
++#define     OB_WIN_TYPE_SHIFT			0
++#define     OB_WIN_TYPE_MEM			0x0
++#define     OB_WIN_TYPE_IO			0x4
++#define     OB_WIN_TYPE_CONFIG_TYPE0		0x8
++#define     OB_WIN_TYPE_CONFIG_TYPE1		0x9
++#define     OB_WIN_TYPE_MSG			0xc
++
+ /* LMI registers base address and register offsets */
+ #define LMI_BASE_ADDR				0x6000
+ #define CFG_REG					(LMI_BASE_ADDR + 0x0)
+@@ -166,7 +207,7 @@
+ #define PCIE_CONFIG_WR_TYPE0			0xa
+ #define PCIE_CONFIG_WR_TYPE1			0xb
+ 
+-#define PIO_RETRY_CNT			500
++#define PIO_RETRY_CNT			750000 /* 1.5 s */
+ #define PIO_RETRY_DELAY			2 /* 2 us */
+ 
+ #define LINK_WAIT_MAX_RETRIES		10
+@@ -180,8 +221,16 @@
+ struct advk_pcie {
+ 	struct platform_device *pdev;
+ 	void __iomem *base;
++	struct {
++		phys_addr_t match;
++		phys_addr_t remap;
++		phys_addr_t mask;
++		u32 actions;
++	} wins[OB_WIN_COUNT];
++	u8 wins_count;
+ 	struct irq_domain *irq_domain;
+ 	struct irq_chip irq_chip;
++	raw_spinlock_t irq_lock;
+ 	struct irq_domain *msi_domain;
+ 	struct irq_domain *msi_inner_domain;
+ 	struct irq_chip msi_bottom_irq_chip;
+@@ -366,9 +415,39 @@ err:
+ 	dev_err(dev, "link never came up\n");
+ }
+ 
++/*
++ * Set a PCIe address window register, which can be used for memory
++ * mapping.
++ */
++static void advk_pcie_set_ob_win(struct advk_pcie *pcie, u8 win_num,
++				 phys_addr_t match, phys_addr_t remap,
++				 phys_addr_t mask, u32 actions)
++{
++	advk_writel(pcie, OB_WIN_ENABLE |
++			  lower_32_bits(match), OB_WIN_MATCH_LS(win_num));
++	advk_writel(pcie, upper_32_bits(match), OB_WIN_MATCH_MS(win_num));
++	advk_writel(pcie, lower_32_bits(remap), OB_WIN_REMAP_LS(win_num));
++	advk_writel(pcie, upper_32_bits(remap), OB_WIN_REMAP_MS(win_num));
++	advk_writel(pcie, lower_32_bits(mask), OB_WIN_MASK_LS(win_num));
++	advk_writel(pcie, upper_32_bits(mask), OB_WIN_MASK_MS(win_num));
++	advk_writel(pcie, actions, OB_WIN_ACTIONS(win_num));
++}
++
++static void advk_pcie_disable_ob_win(struct advk_pcie *pcie, u8 win_num)
++{
++	advk_writel(pcie, 0, OB_WIN_MATCH_LS(win_num));
++	advk_writel(pcie, 0, OB_WIN_MATCH_MS(win_num));
++	advk_writel(pcie, 0, OB_WIN_REMAP_LS(win_num));
++	advk_writel(pcie, 0, OB_WIN_REMAP_MS(win_num));
++	advk_writel(pcie, 0, OB_WIN_MASK_LS(win_num));
++	advk_writel(pcie, 0, OB_WIN_MASK_MS(win_num));
++	advk_writel(pcie, 0, OB_WIN_ACTIONS(win_num));
++}
++
+ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ {
+ 	u32 reg;
++	int i;
+ 
+ 	/* Enable TX */
+ 	reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG);
+@@ -447,15 +526,51 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	reg = PCIE_IRQ_ALL_MASK & (~PCIE_IRQ_ENABLE_INTS_MASK);
+ 	advk_writel(pcie, reg, HOST_CTRL_INT_MASK_REG);
+ 
++	/*
++	 * Enable AXI address window location generation:
++	 * When it is enabled, the default outbound window
++	 * configurations (Default User Field: 0xD0074CFC)
++	 * are used for transparent address translation of
++	 * the outbound transactions. Thus, PCIe address
++	 * windows are not required for transparent memory
++	 * access when the default outbound window
++	 * configuration is set for memory access.
++	 */
+ 	reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG);
+ 	reg |= PCIE_CORE_CTRL2_OB_WIN_ENABLE;
+ 	advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG);
+ 
+-	/* Bypass the address window mapping for PIO */
++	/*
++	 * Set memory access in the Default User Field so
++	 * that no PCIe address window configuration is
++	 * required for transparent memory access.
++	 */
++	advk_writel(pcie, OB_WIN_TYPE_MEM, OB_WIN_DEFAULT_ACTIONS);
++
++	/*
++	 * Bypass the address window mapping for PIO:
++	 * Since a PIO access already carries all required
++	 * info over the AXI interface via the PIO registers,
++	 * the address window is not required.
++	 */
+ 	reg = advk_readl(pcie, PIO_CTRL);
+ 	reg |= PIO_CTRL_ADDR_WIN_DISABLE;
+ 	advk_writel(pcie, reg, PIO_CTRL);
+ 
++	/*
++	 * Configure PCIe address windows for non-memory or
++	 * non-transparent access, since by default PCIe uses
++	 * transparent memory access.
++	 */
++	for (i = 0; i < pcie->wins_count; i++)
++		advk_pcie_set_ob_win(pcie, i,
++				     pcie->wins[i].match, pcie->wins[i].remap,
++				     pcie->wins[i].mask, pcie->wins[i].actions);
++
++	/* Disable remaining PCIe outbound windows */
++	for (i = pcie->wins_count; i < OB_WIN_COUNT; i++)
++		advk_pcie_disable_ob_win(pcie, i);
++
+ 	advk_pcie_train_link(pcie);
+ 
+ 	/*
+@@ -472,7 +587,7 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ 	advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG);
+ }
+ 
+-static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
++static int advk_pcie_check_pio_status(struct advk_pcie *pcie, u32 *val)
+ {
+ 	struct device *dev = &pcie->pdev->dev;
+ 	u32 reg;
+@@ -483,14 +598,49 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
+ 	status = (reg & PIO_COMPLETION_STATUS_MASK) >>
+ 		PIO_COMPLETION_STATUS_SHIFT;
+ 
+-	if (!status)
+-		return;
+-
++	/*
++	 * According to the HW spec, the PIO status check sequence is:
++	 * 1) even if COMPLETION_STATUS(bit9:7) indicates success, the
++	 *    Error Status(bit11) must still be checked; the operation
++	 *    succeeded only if that bit indicates no error.
++	 * 2) an Unsupported Request(1) value in COMPLETION_STATUS(bit9:7)
++	 *    means only a PIO write error; a PIO read still succeeds, with
++	 *    a read value of 0xFFFFFFFF.
++	 * 3) a Completion Retry Status(CRS) value in COMPLETION_STATUS(bit9:7)
++	 *    likewise means only a PIO write error; a PIO read succeeds,
++	 *    with a read value of 0xFFFF0001.
++	 * 4) a Completer Abort (CA) value in COMPLETION_STATUS(bit9:7) means
++	 *    an error for both PIO read and PIO write operations.
++	 * 5) other errors are reported as 'unknown'.
++	 */
+ 	switch (status) {
++	case PIO_COMPLETION_STATUS_OK:
++		if (reg & PIO_ERR_STATUS) {
++			strcomp_status = "COMP_ERR";
++			break;
++		}
++		/* Get the read result */
++		if (val)
++			*val = advk_readl(pcie, PIO_RD_DATA);
++		/* No error */
++		strcomp_status = NULL;
++		break;
+ 	case PIO_COMPLETION_STATUS_UR:
+ 		strcomp_status = "UR";
+ 		break;
+ 	case PIO_COMPLETION_STATUS_CRS:
++		/* PCIe r4.0, sec 2.3.2, says:
++		 * If CRS Software Visibility is not enabled, the Root Complex
++		 * must re-issue the Configuration Request as a new Request.
++		 * A Root Complex implementation may choose to limit the number
++		 * of Configuration Request/CRS Completion Status loops before
++		 * determining that something is wrong with the target of the
++		 * Request and taking appropriate action, e.g., complete the
++		 * Request to the host as a failed transaction.
++		 *
++		 * To simplify implementation do not re-issue the Configuration
++		 * Request and complete the Request as a failed transaction.
++		 */
+ 		strcomp_status = "CRS";
+ 		break;
+ 	case PIO_COMPLETION_STATUS_CA:
+@@ -501,6 +651,9 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
+ 		break;
+ 	}
+ 
++	if (!strcomp_status)
++		return 0;
++
+ 	if (reg & PIO_NON_POSTED_REQ)
+ 		str_posted = "Non-posted";
+ 	else
+@@ -508,6 +661,8 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie)
+ 
+ 	dev_err(dev, "%s PIO Response Status: %s, %#x @ %#x\n",
+ 		str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS));
++
++	return -EFAULT;
+ }
+ 
+ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
+@@ -745,10 +900,13 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
+ 		return PCIBIOS_SET_FAILED;
+ 	}
+ 
+-	advk_pcie_check_pio_status(pcie);
++	/* Check PIO status and get the read result */
++	ret = advk_pcie_check_pio_status(pcie, val);
++	if (ret < 0) {
++		*val = 0xffffffff;
++		return PCIBIOS_SET_FAILED;
++	}
+ 
+-	/* Get the read result */
+-	*val = advk_readl(pcie, PIO_RD_DATA);
+ 	if (size == 1)
+ 		*val = (*val >> (8 * (where & 3))) & 0xff;
+ 	else if (size == 2)
+@@ -812,7 +970,9 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
+ 	if (ret < 0)
+ 		return PCIBIOS_SET_FAILED;
+ 
+-	advk_pcie_check_pio_status(pcie);
++	ret = advk_pcie_check_pio_status(pcie, NULL);
++	if (ret < 0)
++		return PCIBIOS_SET_FAILED;
+ 
+ 	return PCIBIOS_SUCCESSFUL;
+ }
+@@ -886,22 +1046,28 @@ static void advk_pcie_irq_mask(struct irq_data *d)
+ {
+ 	struct advk_pcie *pcie = d->domain->host_data;
+ 	irq_hw_number_t hwirq = irqd_to_hwirq(d);
++	unsigned long flags;
+ 	u32 mask;
+ 
++	raw_spin_lock_irqsave(&pcie->irq_lock, flags);
+ 	mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
+ 	mask |= PCIE_ISR1_INTX_ASSERT(hwirq);
+ 	advk_writel(pcie, mask, PCIE_ISR1_MASK_REG);
++	raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
+ }
+ 
+ static void advk_pcie_irq_unmask(struct irq_data *d)
+ {
+ 	struct advk_pcie *pcie = d->domain->host_data;
+ 	irq_hw_number_t hwirq = irqd_to_hwirq(d);
++	unsigned long flags;
+ 	u32 mask;
+ 
++	raw_spin_lock_irqsave(&pcie->irq_lock, flags);
+ 	mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
+ 	mask &= ~PCIE_ISR1_INTX_ASSERT(hwirq);
+ 	advk_writel(pcie, mask, PCIE_ISR1_MASK_REG);
++	raw_spin_unlock_irqrestore(&pcie->irq_lock, flags);
+ }
+ 
+ static int advk_pcie_irq_map(struct irq_domain *h,
+@@ -985,6 +1151,8 @@ static int advk_pcie_init_irq_domain(struct advk_pcie *pcie)
+ 	struct irq_chip *irq_chip;
+ 	int ret = 0;
+ 
++	raw_spin_lock_init(&pcie->irq_lock);
++
+ 	pcie_intc_node =  of_get_next_child(node, NULL);
+ 	if (!pcie_intc_node) {
+ 		dev_err(dev, "No PCIe Intc node found\n");
+@@ -1162,6 +1330,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct advk_pcie *pcie;
+ 	struct pci_host_bridge *bridge;
++	struct resource_entry *entry;
+ 	int ret, irq;
+ 
+ 	bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct advk_pcie));
+@@ -1172,6 +1341,80 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ 	pcie->pdev = pdev;
+ 	platform_set_drvdata(pdev, pcie);
+ 
++	resource_list_for_each_entry(entry, &bridge->windows) {
++		resource_size_t start = entry->res->start;
++		resource_size_t size = resource_size(entry->res);
++		unsigned long type = resource_type(entry->res);
++		u64 win_size;
++
++		/*
++		 * Aardvark hardware also allows configuring a PCIe window
++		 * for config type 0 and type 1 mapping, but the driver uses
++		 * only PIO for issuing configuration transfers, which does
++		 * not use PCIe window configuration.
++		 */
++		if (type != IORESOURCE_MEM && type != IORESOURCE_MEM_64 &&
++		    type != IORESOURCE_IO)
++			continue;
++
++		/*
++		 * Skip transparent memory resources. The default outbound
++		 * access configuration is transparent memory access, so such
++		 * resources need no window configuration.
++		 */
++		if ((type == IORESOURCE_MEM || type == IORESOURCE_MEM_64) &&
++		    entry->offset == 0)
++			continue;
++
++		/*
++		 * The n-th PCIe window is configured by tuple (match, remap, mask)
++		 * and an access to address A uses this window if A matches the
++		 * match with given mask.
++		 * So every PCIe window size must be a power of two and every start
++		 * address must be aligned to window size. Minimal size is 64 KiB
++		 * because lower 16 bits of mask must be zero. Remapped address
++		 * may have set only bits from the mask.
++		 */
++		while (pcie->wins_count < OB_WIN_COUNT && size > 0) {
++			/* Calculate the largest aligned window size */
++			win_size = (1ULL << (fls64(size)-1)) |
++				   (start ? (1ULL << __ffs64(start)) : 0);
++			win_size = 1ULL << __ffs64(win_size);
++			if (win_size < 0x10000)
++				break;
++
++			dev_dbg(dev,
++				"Configuring PCIe window %d: [0x%llx-0x%llx] as %lu\n",
++				pcie->wins_count, (unsigned long long)start,
++				(unsigned long long)start + win_size, type);
++
++			if (type == IORESOURCE_IO) {
++				pcie->wins[pcie->wins_count].actions = OB_WIN_TYPE_IO;
++				pcie->wins[pcie->wins_count].match = pci_pio_to_address(start);
++			} else {
++				pcie->wins[pcie->wins_count].actions = OB_WIN_TYPE_MEM;
++				pcie->wins[pcie->wins_count].match = start;
++			}
++			pcie->wins[pcie->wins_count].remap = start - entry->offset;
++			pcie->wins[pcie->wins_count].mask = ~(win_size - 1);
++
++			if (pcie->wins[pcie->wins_count].remap & (win_size - 1))
++				break;
++
++			start += win_size;
++			size -= win_size;
++			pcie->wins_count++;
++		}
++
++		if (size > 0) {
++			dev_err(&pcie->pdev->dev,
++				"Invalid PCIe region [0x%llx-0x%llx]\n",
++				(unsigned long long)entry->res->start,
++				(unsigned long long)entry->res->end + 1);
++			return -EINVAL;
++		}
++	}
++
+ 	pcie->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(pcie->base))
+ 		return PTR_ERR(pcie->base);
+@@ -1252,6 +1495,7 @@ static int advk_pcie_remove(struct platform_device *pdev)
+ {
+ 	struct advk_pcie *pcie = platform_get_drvdata(pdev);
+ 	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
++	int i;
+ 
+ 	pci_lock_rescan_remove();
+ 	pci_stop_root_bus(bridge->bus);
+@@ -1261,6 +1505,10 @@ static int advk_pcie_remove(struct platform_device *pdev)
+ 	advk_pcie_remove_msi_irq_domain(pcie);
+ 	advk_pcie_remove_irq_domain(pcie);
+ 
++	/* Disable outbound address windows mapping */
++	for (i = 0; i < OB_WIN_COUNT; i++)
++		advk_pcie_disable_ob_win(pcie, i);
++
+ 	return 0;
+ }
+ 
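The window-sizing loop in advk_pcie_probe() above repeatedly carves off the largest power-of-two window that both fits in the remaining size and is aligned to the current start address. A standalone sketch of the same fls64/__ffs64 arithmetic using GCC builtins (size must be non-zero, which the loop condition guarantees):

#include <stdint.h>
#include <stdio.h>

/* Largest power-of-two window that fits in `size` and is aligned to
 * `start`; mirrors the two-step fls64/__ffs64 computation in the loop. */
static uint64_t win_size(uint64_t start, uint64_t size)
{
	uint64_t fit   = 1ULL << (63 - __builtin_clzll(size));	/* top bit of size */
	uint64_t align = start ? (1ULL << __builtin_ctzll(start)) : fit;

	return fit < align ? fit : align;
}

int main(void)
{
	/* start aligned to 64 KiB with 0x30000 bytes left: first window is 64 KiB. */
	printf("%#llx\n", (unsigned long long)win_size(0xe0010000, 0x30000));
	return 0;
}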
+diff --git a/drivers/pci/controller/pcie-xilinx-nwl.c b/drivers/pci/controller/pcie-xilinx-nwl.c
+index 8689311c5ef66..1c3d5b87ef20e 100644
+--- a/drivers/pci/controller/pcie-xilinx-nwl.c
++++ b/drivers/pci/controller/pcie-xilinx-nwl.c
+@@ -6,6 +6,7 @@
+  * (C) Copyright 2014 - 2015, Xilinx, Inc.
+  */
+ 
++#include <linux/clk.h>
+ #include <linux/delay.h>
+ #include <linux/interrupt.h>
+ #include <linux/irq.h>
+@@ -169,6 +170,7 @@ struct nwl_pcie {
+ 	u8 last_busno;
+ 	struct nwl_msi msi;
+ 	struct irq_domain *legacy_irq_domain;
++	struct clk *clk;
+ 	raw_spinlock_t leg_mask_lock;
+ };
+ 
+@@ -823,6 +825,16 @@ static int nwl_pcie_probe(struct platform_device *pdev)
+ 		return err;
+ 	}
+ 
++	pcie->clk = devm_clk_get(dev, NULL);
++	if (IS_ERR(pcie->clk))
++		return PTR_ERR(pcie->clk);
++
++	err = clk_prepare_enable(pcie->clk);
++	if (err) {
++		dev_err(dev, "can't enable PCIe ref clock\n");
++		return err;
++	}
++
+ 	err = nwl_pcie_bridge_init(pcie);
+ 	if (err) {
+ 		dev_err(dev, "HW Initialization failed\n");
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index 5516647e53a87..9f320fba2d9b0 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -776,6 +776,9 @@ static void msix_mask_all(void __iomem *base, int tsize)
+ 	u32 ctrl = PCI_MSIX_ENTRY_CTRL_MASKBIT;
+ 	int i;
+ 
++	if (pci_msi_ignore_mask)
++		return;
++
+ 	for (i = 0; i < tsize; i++, base += PCI_MSIX_ENTRY_SIZE)
+ 		writel(ctrl, base + PCI_MSIX_ENTRY_VECTOR_CTRL);
+ }
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index a9d0530b7846d..8b3cb62c63cca 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1906,11 +1906,7 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
+ 	 * so that things like MSI message writing will behave as expected
+ 	 * (e.g. if the device really is in D0 at enable time).
+ 	 */
+-	if (dev->pm_cap) {
+-		u16 pmcsr;
+-		pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
+-		dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK);
+-	}
++	pci_update_current_state(dev, dev->current_state);
+ 
+ 	if (atomic_inc_return(&dev->enable_cnt) > 1)
+ 		return 0;		/* already enabled */
+diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
+index e1fed6649c41f..3ee63968deaa5 100644
+--- a/drivers/pci/pcie/portdrv_core.c
++++ b/drivers/pci/pcie/portdrv_core.c
+@@ -257,8 +257,13 @@ static int get_port_device_capability(struct pci_dev *dev)
+ 		services |= PCIE_PORT_SERVICE_DPC;
+ 
+ 	if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
+-	    pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
+-		services |= PCIE_PORT_SERVICE_BWNOTIF;
++	    pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) {
++		u32 linkcap;
++
++		pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &linkcap);
++		if (linkcap & PCI_EXP_LNKCAP_LBNC)
++			services |= PCIE_PORT_SERVICE_BWNOTIF;
++	}
+ 
+ 	return services;
+ }
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 7b1c81b899cdf..1905ee0297a4c 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3241,6 +3241,7 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
+ 			PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE,
+ 			PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_ASMEDIA, 0x0612, fixup_mpss_256);
+ 
+ /*
+  * Intel 5000 and 5100 Memory controllers have an erratum with read completion
+diff --git a/drivers/pci/syscall.c b/drivers/pci/syscall.c
+index 8b003c890b87b..c9f03418e71e0 100644
+--- a/drivers/pci/syscall.c
++++ b/drivers/pci/syscall.c
+@@ -22,8 +22,10 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn,
+ 	long err;
+ 	int cfg_ret;
+ 
++	err = -EPERM;
++	dev = NULL;
+ 	if (!capable(CAP_SYS_ADMIN))
+-		return -EPERM;
++		goto error;
+ 
+ 	err = -ENODEV;
+ 	dev = pci_get_domain_bus_and_slot(0, bus, dfn);
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+index 5a68e242f6b34..5cb018f988003 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+@@ -167,10 +167,14 @@ static struct armada_37xx_pin_group armada_37xx_nb_groups[] = {
+ 	PIN_GRP_GPIO("jtag", 20, 5, BIT(0), "jtag"),
+ 	PIN_GRP_GPIO("sdio0", 8, 3, BIT(1), "sdio"),
+ 	PIN_GRP_GPIO("emmc_nb", 27, 9, BIT(2), "emmc"),
+-	PIN_GRP_GPIO("pwm0", 11, 1, BIT(3), "pwm"),
+-	PIN_GRP_GPIO("pwm1", 12, 1, BIT(4), "pwm"),
+-	PIN_GRP_GPIO("pwm2", 13, 1, BIT(5), "pwm"),
+-	PIN_GRP_GPIO("pwm3", 14, 1, BIT(6), "pwm"),
++	PIN_GRP_GPIO_3("pwm0", 11, 1, BIT(3) | BIT(20), 0, BIT(20), BIT(3),
++		       "pwm", "led"),
++	PIN_GRP_GPIO_3("pwm1", 12, 1, BIT(4) | BIT(21), 0, BIT(21), BIT(4),
++		       "pwm", "led"),
++	PIN_GRP_GPIO_3("pwm2", 13, 1, BIT(5) | BIT(22), 0, BIT(22), BIT(5),
++		       "pwm", "led"),
++	PIN_GRP_GPIO_3("pwm3", 14, 1, BIT(6) | BIT(23), 0, BIT(23), BIT(6),
++		       "pwm", "led"),
+ 	PIN_GRP_GPIO("pmic1", 7, 1, BIT(7), "pmic"),
+ 	PIN_GRP_GPIO("pmic0", 6, 1, BIT(8), "pmic"),
+ 	PIN_GRP_GPIO("i2c2", 2, 2, BIT(9), "i2c"),
+@@ -184,10 +188,6 @@ static struct armada_37xx_pin_group armada_37xx_nb_groups[] = {
+ 	PIN_GRP_EXTRA("uart2", 9, 2, BIT(1) | BIT(13) | BIT(14) | BIT(19),
+ 		      BIT(1) | BIT(13) | BIT(14), BIT(1) | BIT(19),
+ 		      18, 2, "gpio", "uart"),
+-	PIN_GRP_GPIO_2("led0_od", 11, 1, BIT(20), BIT(20), 0, "led"),
+-	PIN_GRP_GPIO_2("led1_od", 12, 1, BIT(21), BIT(21), 0, "led"),
+-	PIN_GRP_GPIO_2("led2_od", 13, 1, BIT(22), BIT(22), 0, "led"),
+-	PIN_GRP_GPIO_2("led3_od", 14, 1, BIT(23), BIT(23), 0, "led"),
+ };
+ 
+ static struct armada_37xx_pin_group armada_37xx_sb_groups[] = {
+diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c
+index 983ba9865f772..263498be8e319 100644
+--- a/drivers/pinctrl/pinctrl-ingenic.c
++++ b/drivers/pinctrl/pinctrl-ingenic.c
+@@ -710,7 +710,7 @@ static const struct ingenic_chip_info jz4755_chip_info = {
+ };
+ 
+ static const u32 jz4760_pull_ups[6] = {
+-	0xffffffff, 0xfffcf3ff, 0xffffffff, 0xffffcfff, 0xfffffb7c, 0xfffff00f,
++	0xffffffff, 0xfffcf3ff, 0xffffffff, 0xffffcfff, 0xfffffb7c, 0x0000000f,
+ };
+ 
+ static const u32 jz4760_pull_downs[6] = {
+@@ -936,11 +936,11 @@ static const struct ingenic_chip_info jz4760_chip_info = {
+ };
+ 
+ static const u32 jz4770_pull_ups[6] = {
+-	0x3fffffff, 0xfff0030c, 0xffffffff, 0xffff4fff, 0xfffffb7c, 0xffa7f00f,
++	0x3fffffff, 0xfff0f3fc, 0xffffffff, 0xffff4fff, 0xfffffb7c, 0x0024f00f,
+ };
+ 
+ static const u32 jz4770_pull_downs[6] = {
+-	0x00000000, 0x000f0c03, 0x00000000, 0x0000b000, 0x00000483, 0x00580ff0,
++	0x00000000, 0x000f0c03, 0x00000000, 0x0000b000, 0x00000483, 0x005b0ff0,
+ };
+ 
+ static int jz4770_uart0_data_pins[] = { 0xa0, 0xa3, };
+@@ -3441,17 +3441,17 @@ static void ingenic_set_bias(struct ingenic_pinctrl *jzpc,
+ {
+ 	if (jzpc->info->version >= ID_X2000) {
+ 		switch (bias) {
+-		case PIN_CONFIG_BIAS_PULL_UP:
++		case GPIO_PULL_UP:
+ 			ingenic_config_pin(jzpc, pin, X2000_GPIO_PEPD, false);
+ 			ingenic_config_pin(jzpc, pin, X2000_GPIO_PEPU, true);
+ 			break;
+ 
+-		case PIN_CONFIG_BIAS_PULL_DOWN:
++		case GPIO_PULL_DOWN:
+ 			ingenic_config_pin(jzpc, pin, X2000_GPIO_PEPU, false);
+ 			ingenic_config_pin(jzpc, pin, X2000_GPIO_PEPD, true);
+ 			break;
+ 
+-		case PIN_CONFIG_BIAS_DISABLE:
++		case GPIO_PULL_DIS:
+ 		default:
+ 			ingenic_config_pin(jzpc, pin, X2000_GPIO_PEPU, false);
+ 			ingenic_config_pin(jzpc, pin, X2000_GPIO_PEPD, false);
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index 2c9c9835f375e..b1f6e4e8bcbb5 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -1221,6 +1221,7 @@ static int pcs_parse_bits_in_pinctrl_entry(struct pcs_device *pcs,
+ 
+ 	if (PCS_HAS_PINCONF) {
+ 		dev_err(pcs->dev, "pinconf not supported\n");
++		res = -ENOTSUPP;
+ 		goto free_pingroups;
+ 	}
+ 
+diff --git a/drivers/pinctrl/pinctrl-stmfx.c b/drivers/pinctrl/pinctrl-stmfx.c
+index 008c83107a3ca..5fa2488fae87a 100644
+--- a/drivers/pinctrl/pinctrl-stmfx.c
++++ b/drivers/pinctrl/pinctrl-stmfx.c
+@@ -566,7 +566,7 @@ static irqreturn_t stmfx_pinctrl_irq_thread_fn(int irq, void *dev_id)
+ 	u8 pending[NR_GPIO_REGS];
+ 	u8 src[NR_GPIO_REGS] = {0, 0, 0};
+ 	unsigned long n, status;
+-	int ret;
++	int i, ret;
+ 
+ 	ret = regmap_bulk_read(pctl->stmfx->map, STMFX_REG_IRQ_GPI_PENDING,
+ 			       &pending, NR_GPIO_REGS);
+@@ -576,7 +576,9 @@ static irqreturn_t stmfx_pinctrl_irq_thread_fn(int irq, void *dev_id)
+ 	regmap_bulk_write(pctl->stmfx->map, STMFX_REG_IRQ_GPI_SRC,
+ 			  src, NR_GPIO_REGS);
+ 
+-	status = *(unsigned long *)pending;
++	BUILD_BUG_ON(NR_GPIO_REGS > sizeof(status));
++	for (i = 0, status = 0; i < NR_GPIO_REGS; i++)
++		status |= (unsigned long)pending[i] << (i * 8);
+ 	for_each_set_bit(n, &status, gc->ngpio) {
+ 		handle_nested_irq(irq_find_mapping(gc->irq.domain, n));
+ 		stmfx_pinctrl_irq_toggle_trigger(pctl, n);
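The stmfx change replaces a type-punned read of the u8 pending[] array, whose result depends on host endianness and on sizeof(unsigned long) actually covering NR_GPIO_REGS bytes, with an explicit byte-by-byte assembly. A standalone sketch of the endian-safe version (the register count is taken from the driver; everything else is illustrative):

#include <assert.h>
#include <stdint.h>

#define NR_GPIO_REGS 3	/* three 8-bit pending registers, as in the driver */

/* Assemble the per-register pending bytes into one bitmap, byte 0 in the
 * lowest bits, regardless of host endianness or word size. */
static unsigned long pending_to_status(const uint8_t pending[NR_GPIO_REGS])
{
	unsigned long status = 0;
	int i;

	for (i = 0; i < NR_GPIO_REGS; i++)
		status |= (unsigned long)pending[i] << (i * 8);
	return status;
}

int main(void)
{
	const uint8_t pending[NR_GPIO_REGS] = { 0x01, 0x80, 0x10 };

	assert(pending_to_status(pending) == 0x108001UL);
	return 0;
}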
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index 376876bd66058..2975b4369f32f 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -918,7 +918,7 @@ static int samsung_pinctrl_register(struct platform_device *pdev,
+ 		pin_bank->grange.pin_base = drvdata->pin_base
+ 						+ pin_bank->pin_base;
+ 		pin_bank->grange.base = pin_bank->grange.pin_base;
+-		pin_bank->grange.npins = pin_bank->gpio_chip.ngpio;
++		pin_bank->grange.npins = pin_bank->nr_pins;
+ 		pin_bank->grange.gc = &pin_bank->gpio_chip;
+ 		pinctrl_add_gpio_range(drvdata->pctl_dev, &pin_bank->grange);
+ 	}
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index aa7f7aa772971..a7404d69b2d32 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -279,6 +279,15 @@ static int cros_ec_host_command_proto_query(struct cros_ec_device *ec_dev,
+ 	msg->insize = sizeof(struct ec_response_get_protocol_info);
+ 
+ 	ret = send_command(ec_dev, msg);
++	/*
++	 * Send the command once again if a timeout occurred.
++	 * The fingerprint MCU (FPMCU) is restarted during system boot, which
++	 * opens a small window in which the FPMCU won't respond to any
++	 * messages sent by the kernel. There is no need to wait before the
++	 * next attempt because we already waited at least EC_MSG_DEADLINE_MS.
++	 */
++	if (ret == -ETIMEDOUT)
++		ret = send_command(ec_dev, msg);
+ 
+ 	if (ret < 0) {
+ 		dev_dbg(ec_dev->dev,
+diff --git a/drivers/platform/x86/dell/dell-smbios-wmi.c b/drivers/platform/x86/dell/dell-smbios-wmi.c
+index 33f8237727335..8e761991455af 100644
+--- a/drivers/platform/x86/dell/dell-smbios-wmi.c
++++ b/drivers/platform/x86/dell/dell-smbios-wmi.c
+@@ -69,6 +69,7 @@ static int run_smbios_call(struct wmi_device *wdev)
+ 		if (obj->type == ACPI_TYPE_INTEGER)
+ 			dev_dbg(&wdev->dev, "SMBIOS call failed: %llu\n",
+ 				obj->integer.value);
++		kfree(output.pointer);
+ 		return -EIO;
+ 	}
+ 	memcpy(&priv->buf->std, obj->buffer.pointer, obj->buffer.length);
+diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c
+index 215e77d3b6d93..622bdae6182c0 100644
+--- a/drivers/power/supply/max17042_battery.c
++++ b/drivers/power/supply/max17042_battery.c
+@@ -869,8 +869,12 @@ static irqreturn_t max17042_thread_handler(int id, void *dev)
+ {
+ 	struct max17042_chip *chip = dev;
+ 	u32 val;
++	int ret;
++
++	ret = regmap_read(chip->regmap, MAX17042_STATUS, &val);
++	if (ret)
++		return IRQ_HANDLED;
+ 
+-	regmap_read(chip->regmap, MAX17042_STATUS, &val);
+ 	if ((val & STATUS_INTR_SOCMIN_BIT) ||
+ 		(val & STATUS_INTR_SOCMAX_BIT)) {
+ 		dev_info(&chip->client->dev, "SOC threshold INTR\n");
+diff --git a/drivers/rtc/rtc-tps65910.c b/drivers/rtc/rtc-tps65910.c
+index bc89c62ccb9b5..75e4c2d777b9c 100644
+--- a/drivers/rtc/rtc-tps65910.c
++++ b/drivers/rtc/rtc-tps65910.c
+@@ -467,6 +467,6 @@ static struct platform_driver tps65910_rtc_driver = {
+ };
+ 
+ module_platform_driver(tps65910_rtc_driver);
+-MODULE_ALIAS("platform:rtc-tps65910");
++MODULE_ALIAS("platform:tps65910-rtc");
+ MODULE_AUTHOR("Venu Byravarasu <vbyravarasu@nvidia.com>");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
+index 307ce7ff5ca44..2d672aa27a549 100644
+--- a/drivers/s390/cio/qdio_main.c
++++ b/drivers/s390/cio/qdio_main.c
+@@ -886,6 +886,33 @@ static void qdio_shutdown_queues(struct qdio_irq *irq_ptr)
+ 	}
+ }
+ 
++static int qdio_cancel_ccw(struct qdio_irq *irq, int how)
++{
++	struct ccw_device *cdev = irq->cdev;
++	int rc;
++
++	spin_lock_irq(get_ccwdev_lock(cdev));
++	qdio_set_state(irq, QDIO_IRQ_STATE_CLEANUP);
++	if (how & QDIO_FLAG_CLEANUP_USING_CLEAR)
++		rc = ccw_device_clear(cdev, QDIO_DOING_CLEANUP);
++	else
++		/* default behaviour is halt */
++		rc = ccw_device_halt(cdev, QDIO_DOING_CLEANUP);
++	spin_unlock_irq(get_ccwdev_lock(cdev));
++	if (rc) {
++		DBF_ERROR("%4x SHUTD ERR", irq->schid.sch_no);
++		DBF_ERROR("rc:%4d", rc);
++		return rc;
++	}
++
++	wait_event_interruptible_timeout(cdev->private->wait_q,
++					 irq->state == QDIO_IRQ_STATE_INACTIVE ||
++					 irq->state == QDIO_IRQ_STATE_ERR,
++					 10 * HZ);
++
++	return 0;
++}
++
+ /**
+  * qdio_shutdown - shut down a qdio subchannel
+  * @cdev: associated ccw device
+@@ -923,27 +950,7 @@ int qdio_shutdown(struct ccw_device *cdev, int how)
+ 	qdio_shutdown_queues(irq_ptr);
+ 	qdio_shutdown_debug_entries(irq_ptr);
+ 
+-	/* cleanup subchannel */
+-	spin_lock_irq(get_ccwdev_lock(cdev));
+-	qdio_set_state(irq_ptr, QDIO_IRQ_STATE_CLEANUP);
+-	if (how & QDIO_FLAG_CLEANUP_USING_CLEAR)
+-		rc = ccw_device_clear(cdev, QDIO_DOING_CLEANUP);
+-	else
+-		/* default behaviour is halt */
+-		rc = ccw_device_halt(cdev, QDIO_DOING_CLEANUP);
+-	spin_unlock_irq(get_ccwdev_lock(cdev));
+-	if (rc) {
+-		DBF_ERROR("%4x SHUTD ERR", irq_ptr->schid.sch_no);
+-		DBF_ERROR("rc:%4d", rc);
+-		goto no_cleanup;
+-	}
+-
+-	wait_event_interruptible_timeout(cdev->private->wait_q,
+-		irq_ptr->state == QDIO_IRQ_STATE_INACTIVE ||
+-		irq_ptr->state == QDIO_IRQ_STATE_ERR,
+-		10 * HZ);
+-
+-no_cleanup:
++	rc = qdio_cancel_ccw(irq_ptr, how);
+ 	qdio_shutdown_thinint(irq_ptr);
+ 	qdio_shutdown_irq(irq_ptr);
+ 
+@@ -1079,6 +1086,7 @@ int qdio_establish(struct ccw_device *cdev,
+ {
+ 	struct qdio_irq *irq_ptr = cdev->private->qdio_data;
+ 	struct subchannel_id schid;
++	long timeout;
+ 	int rc;
+ 
+ 	ccw_device_get_schid(cdev, &schid);
+@@ -1107,11 +1115,8 @@ int qdio_establish(struct ccw_device *cdev,
+ 	qdio_setup_irq(irq_ptr, init_data);
+ 
+ 	rc = qdio_establish_thinint(irq_ptr);
+-	if (rc) {
+-		qdio_shutdown_irq(irq_ptr);
+-		mutex_unlock(&irq_ptr->setup_mutex);
+-		return rc;
+-	}
++	if (rc)
++		goto err_thinint;
+ 
+ 	/* establish q */
+ 	irq_ptr->ccw.cmd_code = irq_ptr->equeue.cmd;
+@@ -1127,15 +1132,16 @@ int qdio_establish(struct ccw_device *cdev,
+ 	if (rc) {
+ 		DBF_ERROR("%4x est IO ERR", irq_ptr->schid.sch_no);
+ 		DBF_ERROR("rc:%4x", rc);
+-		qdio_shutdown_thinint(irq_ptr);
+-		qdio_shutdown_irq(irq_ptr);
+-		mutex_unlock(&irq_ptr->setup_mutex);
+-		return rc;
++		goto err_ccw_start;
+ 	}
+ 
+-	wait_event_interruptible_timeout(cdev->private->wait_q,
+-		irq_ptr->state == QDIO_IRQ_STATE_ESTABLISHED ||
+-		irq_ptr->state == QDIO_IRQ_STATE_ERR, HZ);
++	timeout = wait_event_interruptible_timeout(cdev->private->wait_q,
++						   irq_ptr->state == QDIO_IRQ_STATE_ESTABLISHED ||
++						   irq_ptr->state == QDIO_IRQ_STATE_ERR, HZ);
++	if (timeout <= 0) {
++		rc = (timeout == -ERESTARTSYS) ? -EINTR : -ETIME;
++		goto err_ccw_timeout;
++	}
+ 
+ 	if (irq_ptr->state != QDIO_IRQ_STATE_ESTABLISHED) {
+ 		mutex_unlock(&irq_ptr->setup_mutex);
+@@ -1152,6 +1158,16 @@ int qdio_establish(struct ccw_device *cdev,
+ 	qdio_print_subchannel_info(irq_ptr);
+ 	qdio_setup_debug_entries(irq_ptr);
+ 	return 0;
++
++err_ccw_timeout:
++	qdio_cancel_ccw(irq_ptr, QDIO_FLAG_CLEANUP_USING_CLEAR);
++err_ccw_start:
++	qdio_shutdown_thinint(irq_ptr);
++err_thinint:
++	qdio_shutdown_irq(irq_ptr);
++	qdio_set_state(irq_ptr, QDIO_IRQ_STATE_INACTIVE);
++	mutex_unlock(&irq_ptr->setup_mutex);
++	return rc;
+ }
+ EXPORT_SYMBOL_GPL(qdio_establish);
+ 
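The qdio_establish() rework above turns repeated inline cleanup into a goto-unwind ladder: each failure path jumps to the label that tears down exactly what was set up before it, in reverse order. A schematic, runnable sketch of the pattern with hypothetical stand-in helpers (the timeout handling is simplified to a single error code):

#include <stdio.h>

/* Hypothetical stand-ins for the qdio setup/teardown helpers. */
static int  setup_thinint(void)    { return 0; }
static int  start_ccw(void)        { return 0; }
static long wait_established(void) { return 0; /* 0 == timed out */ }
static void cancel_ccw(void)       { puts("cancel ccw"); }
static void shutdown_thinint(void) { puts("shutdown thinint"); }
static void shutdown_irq(void)     { puts("shutdown irq"); }

static int establish(void)
{
	int rc;

	rc = setup_thinint();
	if (rc)
		goto err_thinint;

	rc = start_ccw();
	if (rc)
		goto err_ccw_start;

	if (wait_established() <= 0) {
		rc = -1;	/* simplified; the driver maps to -EINTR/-ETIME */
		goto err_ccw_timeout;
	}

	return 0;

err_ccw_timeout:
	cancel_ccw();
err_ccw_start:
	shutdown_thinint();
err_thinint:
	shutdown_irq();
	return rc;
}

int main(void)
{
	return establish() ? 1 : 0;
}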
+diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c
+index adddcd5899416..0df93d2cd3c36 100644
+--- a/drivers/scsi/BusLogic.c
++++ b/drivers/scsi/BusLogic.c
+@@ -1711,7 +1711,7 @@ static bool __init blogic_reportconfig(struct blogic_adapter *adapter)
+ 	if (adapter->adapter_bus_type != BLOGIC_PCI_BUS) {
+ 		blogic_info("  DMA Channel: None, ", adapter);
+ 		if (adapter->bios_addr > 0)
+-			blogic_info("BIOS Address: 0x%lX, ", adapter,
++			blogic_info("BIOS Address: 0x%X, ", adapter,
+ 					adapter->bios_addr);
+ 		else
+ 			blogic_info("BIOS Address: None, ", adapter);
+@@ -3451,7 +3451,7 @@ static void blogic_msg(enum blogic_msglevel msglevel, char *fmt,
+ 			if (buf[0] != '\n' || len > 1)
+ 				printk("%sscsi%d: %s", blogic_msglevelmap[msglevel], adapter->host_no, buf);
+ 		} else
+-			printk("%s", buf);
++			pr_cont("%s", buf);
+ 	} else {
+ 		if (begin) {
+ 			if (adapter != NULL && adapter->adapter_initd)
+@@ -3459,7 +3459,7 @@ static void blogic_msg(enum blogic_msglevel msglevel, char *fmt,
+ 			else
+ 				printk("%s%s", blogic_msglevelmap[msglevel], buf);
+ 		} else
+-			printk("%s", buf);
++			pr_cont("%s", buf);
+ 	}
+ 	begin = (buf[len - 1] == '\n');
+ }
+diff --git a/drivers/scsi/pcmcia/fdomain_cs.c b/drivers/scsi/pcmcia/fdomain_cs.c
+index e42acf314d068..33df6a9ba9b5f 100644
+--- a/drivers/scsi/pcmcia/fdomain_cs.c
++++ b/drivers/scsi/pcmcia/fdomain_cs.c
+@@ -45,8 +45,10 @@ static int fdomain_probe(struct pcmcia_device *link)
+ 		goto fail_disable;
+ 
+ 	if (!request_region(link->resource[0]->start, FDOMAIN_REGION_SIZE,
+-			    "fdomain_cs"))
++			    "fdomain_cs")) {
++		ret = -EBUSY;
+ 		goto fail_disable;
++	}
+ 
+ 	sh = fdomain_create(link->resource[0]->start, link->irq, 7, &link->dev);
+ 	if (!sh) {
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index b92570a7c309d..98981a61b0122 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3000,7 +3000,7 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ {
+ 	u32 *list;
+ 	int i;
+-	int status = 0, rc;
++	int status;
+ 	u32 *pbl;
+ 	dma_addr_t page;
+ 	int num_pages;
+@@ -3012,7 +3012,7 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ 	 */
+ 	if (!qedf->num_queues) {
+ 		QEDF_ERR(&(qedf->dbg_ctx), "No MSI-X vectors available!\n");
+-		return 1;
++		return -ENOMEM;
+ 	}
+ 
+ 	/*
+@@ -3020,7 +3020,7 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ 	 * addresses of our queues
+ 	 */
+ 	if (!qedf->p_cpuq) {
+-		status = 1;
++		status = -EINVAL;
+ 		QEDF_ERR(&qedf->dbg_ctx, "p_cpuq is NULL.\n");
+ 		goto mem_alloc_failure;
+ 	}
+@@ -3036,8 +3036,8 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ 		   "qedf->global_queues=%p.\n", qedf->global_queues);
+ 
+ 	/* Allocate DMA coherent buffers for BDQ */
+-	rc = qedf_alloc_bdq(qedf);
+-	if (rc) {
++	status = qedf_alloc_bdq(qedf);
++	if (status) {
+ 		QEDF_ERR(&qedf->dbg_ctx, "Unable to allocate bdq.\n");
+ 		goto mem_alloc_failure;
+ 	}
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index edf9154327048..99e1a323807d1 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -1621,7 +1621,7 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
+ {
+ 	u32 *list;
+ 	int i;
+-	int status = 0, rc;
++	int status;
+ 	u32 *pbl;
+ 	dma_addr_t page;
+ 	int num_pages;
+@@ -1632,14 +1632,14 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
+ 	 */
+ 	if (!qedi->num_queues) {
+ 		QEDI_ERR(&qedi->dbg_ctx, "No MSI-X vectors available!\n");
+-		return 1;
++		return -ENOMEM;
+ 	}
+ 
+ 	/* Make sure we allocated the PBL that will contain the physical
+ 	 * addresses of our queues
+ 	 */
+ 	if (!qedi->p_cpuq) {
+-		status = 1;
++		status = -EINVAL;
+ 		goto mem_alloc_failure;
+ 	}
+ 
+@@ -1654,13 +1654,13 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi)
+ 		  "qedi->global_queues=%p.\n", qedi->global_queues);
+ 
+ 	/* Allocate DMA coherent buffers for BDQ */
+-	rc = qedi_alloc_bdq(qedi);
+-	if (rc)
++	status = qedi_alloc_bdq(qedi);
++	if (status)
+ 		goto mem_alloc_failure;
+ 
+ 	/* Allocate DMA coherent buffers for NVM_ISCSI_CFG */
+-	rc = qedi_alloc_nvm_iscsi_cfg(qedi);
+-	if (rc)
++	status = qedi_alloc_nvm_iscsi_cfg(qedi);
++	if (status)
+ 		goto mem_alloc_failure;
+ 
+ 	/* Allocate a CQ and an associated PBL for each MSI-X
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 0cacb667a88b9..adc9129972116 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -91,8 +91,9 @@ static int qla_nvme_alloc_queue(struct nvme_fc_local_port *lport,
+ 	struct qla_hw_data *ha;
+ 	struct qla_qpair *qpair;
+ 
+-	if (!qidx)
+-		qidx++;
++	/* Map admin queue and 1st IO queue to index 0 */
++	if (qidx)
++		qidx--;
+ 
+ 	vha = (struct scsi_qla_host *)lport->private;
+ 	ha = vha->hw;
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 4eab564ea6a05..df4199fd44d6e 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -14,6 +14,7 @@
+ #include <linux/slab.h>
+ #include <linux/blk-mq-pci.h>
+ #include <linux/refcount.h>
++#include <linux/crash_dump.h>
+ 
+ #include <scsi/scsi_tcq.h>
+ #include <scsi/scsicam.h>
+@@ -2818,6 +2819,11 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ 			return ret;
+ 	}
+ 
++	if (is_kdump_kernel()) {
++		ql2xmqsupport = 0;
++		ql2xallocfwdump = 0;
++	}
++
+ 	/* This may fail but that's ok */
+ 	pci_enable_pcie_error_reporting(pdev);
+ 
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 5db16509b6e1c..f573517e8f6e4 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -1322,6 +1322,7 @@ static int pqi_get_raid_map(struct pqi_ctrl_info *ctrl_info,
+ 				"requested %u bytes, received %u bytes\n",
+ 				raid_map_size,
+ 				get_unaligned_le32(&raid_map->structure_size));
++			rc = -EINVAL;
+ 			goto error;
+ 		}
+ 	}
+diff --git a/drivers/scsi/ufs/ufs-exynos.c b/drivers/scsi/ufs/ufs-exynos.c
+index 70647eacf1953..3e5690c45e63b 100644
+--- a/drivers/scsi/ufs/ufs-exynos.c
++++ b/drivers/scsi/ufs/ufs-exynos.c
+@@ -259,7 +259,7 @@ static int exynos_ufs_get_clk_info(struct exynos_ufs *ufs)
+ 	struct ufs_hba *hba = ufs->hba;
+ 	struct list_head *head = &hba->clk_list_head;
+ 	struct ufs_clk_info *clki;
+-	u32 pclk_rate;
++	unsigned long pclk_rate;
+ 	u32 f_min, f_max;
+ 	u8 div = 0;
+ 	int ret = 0;
+@@ -298,7 +298,7 @@ static int exynos_ufs_get_clk_info(struct exynos_ufs *ufs)
+ 	}
+ 
+ 	if (unlikely(pclk_rate < f_min || pclk_rate > f_max)) {
+-		dev_err(hba->dev, "not available pclk range %d\n", pclk_rate);
++		dev_err(hba->dev, "not available pclk range %lu\n", pclk_rate);
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+diff --git a/drivers/scsi/ufs/ufs-exynos.h b/drivers/scsi/ufs/ufs-exynos.h
+index 06ee565f7eb02..a5804e8eb3586 100644
+--- a/drivers/scsi/ufs/ufs-exynos.h
++++ b/drivers/scsi/ufs/ufs-exynos.h
+@@ -184,7 +184,7 @@ struct exynos_ufs {
+ 	u32 pclk_div;
+ 	u32 pclk_avail_min;
+ 	u32 pclk_avail_max;
+-	u32 mclk_rate;
++	unsigned long mclk_rate;
+ 	int avail_ln_rx;
+ 	int avail_ln_tx;
+ 	int rx_sel_idx;
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 72fd41bfbd54b..90837e54c2fea 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -3326,9 +3326,11 @@ int ufshcd_read_desc_param(struct ufs_hba *hba,
+ 
+ 	if (is_kmalloc) {
+ 		/* Make sure we don't copy more data than available */
+-		if (param_offset + param_size > buff_len)
+-			param_size = buff_len - param_offset;
+-		memcpy(param_read_buf, &desc_buf[param_offset], param_size);
++		if (param_offset >= buff_len)
++			ret = -EINVAL;
++		else
++			memcpy(param_read_buf, &desc_buf[param_offset],
++			       min_t(u32, param_size, buff_len - param_offset));
+ 	}
+ out:
+ 	if (is_kmalloc)
+diff --git a/drivers/soc/aspeed/aspeed-lpc-ctrl.c b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
+index c557ffd0992c7..55e46fa6cf424 100644
+--- a/drivers/soc/aspeed/aspeed-lpc-ctrl.c
++++ b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
+@@ -51,7 +51,7 @@ static int aspeed_lpc_ctrl_mmap(struct file *file, struct vm_area_struct *vma)
+ 	unsigned long vsize = vma->vm_end - vma->vm_start;
+ 	pgprot_t prot = vma->vm_page_prot;
+ 
+-	if (vma->vm_pgoff + vsize > lpc_ctrl->mem_base + lpc_ctrl->mem_size)
++	if (vma->vm_pgoff + vma_pages(vma) > lpc_ctrl->mem_size >> PAGE_SHIFT)
+ 		return -EINVAL;
+ 
+ 	/* ast2400/2500 AHB accesses are not cache coherent */
+diff --git a/drivers/soc/aspeed/aspeed-p2a-ctrl.c b/drivers/soc/aspeed/aspeed-p2a-ctrl.c
+index b60fbeaffcbd0..20b5fb2a207cc 100644
+--- a/drivers/soc/aspeed/aspeed-p2a-ctrl.c
++++ b/drivers/soc/aspeed/aspeed-p2a-ctrl.c
+@@ -110,7 +110,7 @@ static int aspeed_p2a_mmap(struct file *file, struct vm_area_struct *vma)
+ 	vsize = vma->vm_end - vma->vm_start;
+ 	prot = vma->vm_page_prot;
+ 
+-	if (vma->vm_pgoff + vsize > ctrl->mem_base + ctrl->mem_size)
++	if (vma->vm_pgoff + vma_pages(vma) > ctrl->mem_size >> PAGE_SHIFT)
+ 		return -EINVAL;
+ 
+ 	/* ast2400/2500 AHB accesses are not cache coherent */
+diff --git a/drivers/soc/mediatek/mtk-mmsys.h b/drivers/soc/mediatek/mtk-mmsys.h
+index 5f3e2bf0c40bc..9e2b81bd38db1 100644
+--- a/drivers/soc/mediatek/mtk-mmsys.h
++++ b/drivers/soc/mediatek/mtk-mmsys.h
+@@ -262,6 +262,10 @@ static const struct mtk_mmsys_routes mmsys_default_routing_table[] = {
+ 		DDP_COMPONENT_RDMA2, DDP_COMPONENT_DSI3,
+ 		DISP_REG_CONFIG_DSIO_SEL_IN, DSI3_SEL_IN_MASK,
+ 		DSI3_SEL_IN_RDMA2
++	}, {
++		DDP_COMPONENT_UFOE, DDP_COMPONENT_DSI0,
++		DISP_REG_CONFIG_DISP_UFOE_MOUT_EN, UFOE_MOUT_EN_DSI0,
++		UFOE_MOUT_EN_DSI0
+ 	}
+ };
+ 
+diff --git a/drivers/soc/qcom/qcom_aoss.c b/drivers/soc/qcom/qcom_aoss.c
+index 934fcc4d2b057..7b6b94332510a 100644
+--- a/drivers/soc/qcom/qcom_aoss.c
++++ b/drivers/soc/qcom/qcom_aoss.c
+@@ -476,12 +476,12 @@ static int qmp_cooling_device_add(struct qmp *qmp,
+ static int qmp_cooling_devices_register(struct qmp *qmp)
+ {
+ 	struct device_node *np, *child;
+-	int count = QMP_NUM_COOLING_RESOURCES;
++	int count = 0;
+ 	int ret;
+ 
+ 	np = qmp->dev->of_node;
+ 
+-	qmp->cooling_devs = devm_kcalloc(qmp->dev, count,
++	qmp->cooling_devs = devm_kcalloc(qmp->dev, QMP_NUM_COOLING_RESOURCES,
+ 					 sizeof(*qmp->cooling_devs),
+ 					 GFP_KERNEL);
+ 
+@@ -497,12 +497,16 @@ static int qmp_cooling_devices_register(struct qmp *qmp)
+ 			goto unroll;
+ 	}
+ 
++	if (!count)
++		devm_kfree(qmp->dev, qmp->cooling_devs);
++
+ 	return 0;
+ 
+ unroll:
+ 	while (--count >= 0)
+ 		thermal_cooling_device_unregister
+ 			(qmp->cooling_devs[count].cdev);
++	devm_kfree(qmp->dev, qmp->cooling_devs);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index fd95f94630b1c..c03d51ad40bf5 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -537,12 +537,14 @@ static int intel_link_power_down(struct sdw_intel *sdw)
+ 
+ 	mutex_lock(sdw->link_res->shim_lock);
+ 
+-	intel_shim_master_ip_to_glue(sdw);
+-
+ 	if (!(*shim_mask & BIT(link_id)))
+ 		dev_err(sdw->cdns.dev,
+ 			"%s: Unbalanced power-up/down calls\n", __func__);
+ 
++	sdw->cdns.link_up = false;
++
++	intel_shim_master_ip_to_glue(sdw);
++
+ 	*shim_mask &= ~BIT(link_id);
+ 
+ 	if (!*shim_mask) {
+@@ -559,18 +561,19 @@ static int intel_link_power_down(struct sdw_intel *sdw)
+ 		link_control &=  spa_mask;
+ 
+ 		ret = intel_clear_bit(shim, SDW_SHIM_LCTL, link_control, cpa_mask);
++		if (ret < 0) {
++			dev_err(sdw->cdns.dev, "%s: could not power down link\n", __func__);
++
++			/*
++			 * we leave the sdw->cdns.link_up flag as false since we've disabled
++			 * the link at this point and cannot handle interrupts any longer.
++			 */
++		}
+ 	}
+ 
+ 	mutex_unlock(sdw->link_res->shim_lock);
+ 
+-	if (ret < 0) {
+-		dev_err(sdw->cdns.dev, "%s: could not power down link\n", __func__);
+-
+-		return ret;
+-	}
+-
+-	sdw->cdns.link_up = false;
+-	return 0;
++	return ret;
+ }
+ 
+ static void intel_shim_sync_arm(struct sdw_intel *sdw)
+diff --git a/drivers/spi/spi-fsi.c b/drivers/spi/spi-fsi.c
+index 87f8829c39952..829770b8ec74c 100644
+--- a/drivers/spi/spi-fsi.c
++++ b/drivers/spi/spi-fsi.c
+@@ -25,16 +25,11 @@
+ 
+ #define SPI_FSI_BASE			0x70000
+ #define SPI_FSI_INIT_TIMEOUT_MS		1000
+-#define SPI_FSI_MAX_XFR_SIZE		2048
+-#define SPI_FSI_MAX_XFR_SIZE_RESTRICTED	8
++#define SPI_FSI_MAX_RX_SIZE		8
++#define SPI_FSI_MAX_TX_SIZE		40
+ 
+ #define SPI_FSI_ERROR			0x0
+ #define SPI_FSI_COUNTER_CFG		0x1
+-#define  SPI_FSI_COUNTER_CFG_LOOPS(x)	 (((u64)(x) & 0xffULL) << 32)
+-#define  SPI_FSI_COUNTER_CFG_N2_RX	 BIT_ULL(8)
+-#define  SPI_FSI_COUNTER_CFG_N2_TX	 BIT_ULL(9)
+-#define  SPI_FSI_COUNTER_CFG_N2_IMPLICIT BIT_ULL(10)
+-#define  SPI_FSI_COUNTER_CFG_N2_RELOAD	 BIT_ULL(11)
+ #define SPI_FSI_CFG1			0x2
+ #define SPI_FSI_CLOCK_CFG		0x3
+ #define  SPI_FSI_CLOCK_CFG_MM_ENABLE	 BIT_ULL(32)
+@@ -76,8 +71,6 @@ struct fsi_spi {
+ 	struct device *dev;	/* SPI controller device */
+ 	struct fsi_device *fsi;	/* FSI2SPI CFAM engine device */
+ 	u32 base;
+-	size_t max_xfr_size;
+-	bool restricted;
+ };
+ 
+ struct fsi_spi_sequence {
+@@ -241,7 +234,7 @@ static int fsi_spi_reset(struct fsi_spi *ctx)
+ 	return fsi_spi_write_reg(ctx, SPI_FSI_STATUS, 0ULL);
+ }
+ 
+-static int fsi_spi_sequence_add(struct fsi_spi_sequence *seq, u8 val)
++static void fsi_spi_sequence_add(struct fsi_spi_sequence *seq, u8 val)
+ {
+ 	/*
+ 	 * Add the next byte of instruction to the 8-byte sequence register.
+@@ -251,8 +244,6 @@ static int fsi_spi_sequence_add(struct fsi_spi_sequence *seq, u8 val)
+ 	 */
+ 	seq->data |= (u64)val << seq->bit;
+ 	seq->bit -= 8;
+-
+-	return ((64 - seq->bit) / 8) - 2;
+ }
+ 
+ static void fsi_spi_sequence_init(struct fsi_spi_sequence *seq)
+@@ -261,71 +252,11 @@ static void fsi_spi_sequence_init(struct fsi_spi_sequence *seq)
+ 	seq->data = 0ULL;
+ }
+ 
+-static int fsi_spi_sequence_transfer(struct fsi_spi *ctx,
+-				     struct fsi_spi_sequence *seq,
+-				     struct spi_transfer *transfer)
+-{
+-	int loops;
+-	int idx;
+-	int rc;
+-	u8 val = 0;
+-	u8 len = min(transfer->len, 8U);
+-	u8 rem = transfer->len % len;
+-
+-	loops = transfer->len / len;
+-
+-	if (transfer->tx_buf) {
+-		val = SPI_FSI_SEQUENCE_SHIFT_OUT(len);
+-		idx = fsi_spi_sequence_add(seq, val);
+-
+-		if (rem)
+-			rem = SPI_FSI_SEQUENCE_SHIFT_OUT(rem);
+-	} else if (transfer->rx_buf) {
+-		val = SPI_FSI_SEQUENCE_SHIFT_IN(len);
+-		idx = fsi_spi_sequence_add(seq, val);
+-
+-		if (rem)
+-			rem = SPI_FSI_SEQUENCE_SHIFT_IN(rem);
+-	} else {
+-		return -EINVAL;
+-	}
+-
+-	if (ctx->restricted && loops > 1) {
+-		dev_warn(ctx->dev,
+-			 "Transfer too large; no branches permitted.\n");
+-		return -EINVAL;
+-	}
+-
+-	if (loops > 1) {
+-		u64 cfg = SPI_FSI_COUNTER_CFG_LOOPS(loops - 1);
+-
+-		fsi_spi_sequence_add(seq, SPI_FSI_SEQUENCE_BRANCH(idx));
+-
+-		if (transfer->rx_buf)
+-			cfg |= SPI_FSI_COUNTER_CFG_N2_RX |
+-				SPI_FSI_COUNTER_CFG_N2_TX |
+-				SPI_FSI_COUNTER_CFG_N2_IMPLICIT |
+-				SPI_FSI_COUNTER_CFG_N2_RELOAD;
+-
+-		rc = fsi_spi_write_reg(ctx, SPI_FSI_COUNTER_CFG, cfg);
+-		if (rc)
+-			return rc;
+-	} else {
+-		fsi_spi_write_reg(ctx, SPI_FSI_COUNTER_CFG, 0ULL);
+-	}
+-
+-	if (rem)
+-		fsi_spi_sequence_add(seq, rem);
+-
+-	return 0;
+-}
+-
+ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ 				 struct spi_transfer *transfer)
+ {
+ 	int rc = 0;
+ 	u64 status = 0ULL;
+-	u64 cfg = 0ULL;
+ 
+ 	if (transfer->tx_buf) {
+ 		int nb;
+@@ -363,16 +294,6 @@ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ 		u64 in = 0ULL;
+ 		u8 *rx = transfer->rx_buf;
+ 
+-		rc = fsi_spi_read_reg(ctx, SPI_FSI_COUNTER_CFG, &cfg);
+-		if (rc)
+-			return rc;
+-
+-		if (cfg & SPI_FSI_COUNTER_CFG_N2_IMPLICIT) {
+-			rc = fsi_spi_write_reg(ctx, SPI_FSI_DATA_TX, 0);
+-			if (rc)
+-				return rc;
+-		}
+-
+ 		while (transfer->len > recv) {
+ 			do {
+ 				rc = fsi_spi_read_reg(ctx, SPI_FSI_STATUS,
+@@ -439,6 +360,10 @@ static int fsi_spi_transfer_init(struct fsi_spi *ctx)
+ 		}
+ 	} while (seq_state && (seq_state != SPI_FSI_STATUS_SEQ_STATE_IDLE));
+ 
++	rc = fsi_spi_write_reg(ctx, SPI_FSI_COUNTER_CFG, 0ULL);
++	if (rc)
++		return rc;
++
+ 	rc = fsi_spi_read_reg(ctx, SPI_FSI_CLOCK_CFG, &clock_cfg);
+ 	if (rc)
+ 		return rc;
+@@ -459,6 +384,7 @@ static int fsi_spi_transfer_one_message(struct spi_controller *ctlr,
+ {
+ 	int rc;
+ 	u8 seq_slave = SPI_FSI_SEQUENCE_SEL_SLAVE(mesg->spi->chip_select + 1);
++	unsigned int len;
+ 	struct spi_transfer *transfer;
+ 	struct fsi_spi *ctx = spi_controller_get_devdata(ctlr);
+ 
+@@ -471,8 +397,7 @@ static int fsi_spi_transfer_one_message(struct spi_controller *ctlr,
+ 		struct spi_transfer *next = NULL;
+ 
+ 		/* Sequencer must do shift out (tx) first. */
+-		if (!transfer->tx_buf ||
+-		    transfer->len > (ctx->max_xfr_size + 8)) {
++		if (!transfer->tx_buf || transfer->len > SPI_FSI_MAX_TX_SIZE) {
+ 			rc = -EINVAL;
+ 			goto error;
+ 		}
+@@ -486,9 +411,13 @@ static int fsi_spi_transfer_one_message(struct spi_controller *ctlr,
+ 		fsi_spi_sequence_init(&seq);
+ 		fsi_spi_sequence_add(&seq, seq_slave);
+ 
+-		rc = fsi_spi_sequence_transfer(ctx, &seq, transfer);
+-		if (rc)
+-			goto error;
++		len = transfer->len;
++		while (len > 8) {
++			fsi_spi_sequence_add(&seq,
++					     SPI_FSI_SEQUENCE_SHIFT_OUT(8));
++			len -= 8;
++		}
++		fsi_spi_sequence_add(&seq, SPI_FSI_SEQUENCE_SHIFT_OUT(len));
+ 
+ 		if (!list_is_last(&transfer->transfer_list,
+ 				  &mesg->transfers)) {
+@@ -496,7 +425,9 @@ static int fsi_spi_transfer_one_message(struct spi_controller *ctlr,
+ 
+ 			/* Sequencer can only do shift in (rx) after tx. */
+ 			if (next->rx_buf) {
+-				if (next->len > ctx->max_xfr_size) {
++				u8 shift;
++
++				if (next->len > SPI_FSI_MAX_RX_SIZE) {
+ 					rc = -EINVAL;
+ 					goto error;
+ 				}
+@@ -504,10 +435,8 @@ static int fsi_spi_transfer_one_message(struct spi_controller *ctlr,
+ 				dev_dbg(ctx->dev, "Sequence rx of %d bytes.\n",
+ 					next->len);
+ 
+-				rc = fsi_spi_sequence_transfer(ctx, &seq,
+-							       next);
+-				if (rc)
+-					goto error;
++				shift = SPI_FSI_SEQUENCE_SHIFT_IN(next->len);
++				fsi_spi_sequence_add(&seq, shift);
+ 			} else {
+ 				next = NULL;
+ 			}
+@@ -541,9 +470,7 @@ error:
+ 
+ static size_t fsi_spi_max_transfer_size(struct spi_device *spi)
+ {
+-	struct fsi_spi *ctx = spi_controller_get_devdata(spi->controller);
+-
+-	return ctx->max_xfr_size;
++	return SPI_FSI_MAX_RX_SIZE;
+ }
+ 
+ static int fsi_spi_probe(struct device *dev)
+@@ -582,14 +509,6 @@ static int fsi_spi_probe(struct device *dev)
+ 		ctx->fsi = fsi;
+ 		ctx->base = base + SPI_FSI_BASE;
+ 
+-		if (of_device_is_compatible(np, "ibm,fsi2spi-restricted")) {
+-			ctx->restricted = true;
+-			ctx->max_xfr_size = SPI_FSI_MAX_XFR_SIZE_RESTRICTED;
+-		} else {
+-			ctx->restricted = false;
+-			ctx->max_xfr_size = SPI_FSI_MAX_XFR_SIZE;
+-		}
+-
+ 		rc = devm_spi_register_controller(dev, ctlr);
+ 		if (rc)
+ 			spi_controller_put(ctlr);
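
The fsi2spi hunks above replace the fsi_spi_sequence_transfer() calls with
inline sequencing that splits each transfer into fixed 8-byte SHIFT_OUT
sequencer ops plus one trailing op for the remainder. A minimal userspace
sketch of the same chunking (all names hypothetical):

#include <stdio.h>

static void shift_out(unsigned int nbytes)
{
	/* stands in for fsi_spi_sequence_add(&seq, SPI_FSI_SEQUENCE_SHIFT_OUT(n)) */
	printf("SHIFT_OUT %u\n", nbytes);
}

int main(void)
{
	unsigned int len = 20;	/* transfer->len */

	/* full 8-byte ops first, then a final op of 1..8 bytes */
	while (len > 8) {
		shift_out(8);
		len -= 8;
	}
	shift_out(len);		/* prints 8, 8, 4 for len == 20 */
	return 0;
}

Note the loop condition is "> 8", not ">= 8", so a length that is a multiple
of 8 still ends with a full-size final op rather than a zero-length one.
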
+diff --git a/drivers/staging/board/board.c b/drivers/staging/board/board.c
+index cb6feb34dd401..f980af0373452 100644
+--- a/drivers/staging/board/board.c
++++ b/drivers/staging/board/board.c
+@@ -136,6 +136,7 @@ int __init board_staging_register_clock(const struct board_staging_clk *bsc)
+ static int board_staging_add_dev_domain(struct platform_device *pdev,
+ 					const char *domain)
+ {
++	struct device *dev = &pdev->dev;
+ 	struct of_phandle_args pd_args;
+ 	struct device_node *np;
+ 
+@@ -148,7 +149,11 @@ static int board_staging_add_dev_domain(struct platform_device *pdev,
+ 	pd_args.np = np;
+ 	pd_args.args_count = 0;
+ 
+-	return of_genpd_add_device(&pd_args, &pdev->dev);
++	/* Initialization similar to device_pm_init_common() */
++	spin_lock_init(&dev->power.lock);
++	dev->power.early_init = true;
++
++	return of_genpd_add_device(&pd_args, dev);
+ }
+ #else
+ static inline int board_staging_add_dev_domain(struct platform_device *pdev,
+diff --git a/drivers/staging/hikey9xx/hisilicon,hi6421-spmi-pmic.yaml b/drivers/staging/hikey9xx/hisilicon,hi6421-spmi-pmic.yaml
+index 3b23ad56b31ac..ef664b4458fb4 100644
+--- a/drivers/staging/hikey9xx/hisilicon,hi6421-spmi-pmic.yaml
++++ b/drivers/staging/hikey9xx/hisilicon,hi6421-spmi-pmic.yaml
+@@ -42,6 +42,8 @@ properties:
+   regulators:
+     type: object
+ 
++    additionalProperties: false
++
+     properties:
+       '#address-cells':
+         const: 1
+@@ -50,11 +52,13 @@ properties:
+         const: 0
+ 
+     patternProperties:
+-      '^ldo[0-9]+@[0-9a-f]$':
++      '^(ldo|LDO)[0-9]+$':
+         type: object
+ 
+         $ref: "/schemas/regulator/regulator.yaml#"
+ 
++        unevaluatedProperties: false
++
+ required:
+   - compatible
+   - reg
+diff --git a/drivers/staging/ks7010/ks7010_sdio.c b/drivers/staging/ks7010/ks7010_sdio.c
+index cbc0032c16045..98d759e7cc957 100644
+--- a/drivers/staging/ks7010/ks7010_sdio.c
++++ b/drivers/staging/ks7010/ks7010_sdio.c
+@@ -939,9 +939,9 @@ static void ks7010_private_init(struct ks_wlan_private *priv,
+ 	memset(&priv->wstats, 0, sizeof(priv->wstats));
+ 
+ 	/* sleep mode */
++	atomic_set(&priv->sleepstatus.status, 0);
+ 	atomic_set(&priv->sleepstatus.doze_request, 0);
+ 	atomic_set(&priv->sleepstatus.wakeup_request, 0);
+-	atomic_set(&priv->sleepstatus.wakeup_request, 0);
+ 
+ 	trx_device_init(priv);
+ 	hostif_init(priv);
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_v4l2.c b/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
+index 0295e2e32d797..fa1bd99cd6f17 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_v4l2.c
+@@ -1763,7 +1763,8 @@ static int atomisp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
+ 	if (err < 0)
+ 		goto register_entities_fail;
+ 	/* init atomisp wdts */
+-	if (init_atomisp_wdts(isp) != 0)
++	err = init_atomisp_wdts(isp);
++	if (err != 0)
+ 		goto wdt_work_queue_fail;
+ 
+ 	/* save the iunit context only once after all the values are init'ed. */
+@@ -1815,6 +1816,7 @@ request_irq_fail:
+ 	hmm_cleanup();
+ 	hmm_pool_unregister(HMM_POOL_TYPE_RESERVED);
+ hmm_pool_fail:
++	pm_runtime_get_noresume(&pdev->dev);
+ 	destroy_workqueue(isp->wdt_work_queue);
+ wdt_work_queue_fail:
+ 	atomisp_acc_cleanup(isp);
+diff --git a/drivers/staging/media/hantro/hantro_g1_vp8_dec.c b/drivers/staging/media/hantro/hantro_g1_vp8_dec.c
+index 57002ba701768..3cd90637ac63e 100644
+--- a/drivers/staging/media/hantro/hantro_g1_vp8_dec.c
++++ b/drivers/staging/media/hantro/hantro_g1_vp8_dec.c
+@@ -376,12 +376,17 @@ static void cfg_ref(struct hantro_ctx *ctx,
+ 	vb2_dst = hantro_get_dst_buf(ctx);
+ 
+ 	ref = hantro_get_ref(ctx, hdr->last_frame_ts);
+-	if (!ref)
++	if (!ref) {
++		vpu_debug(0, "failed to find last frame ts=%llu\n",
++			  hdr->last_frame_ts);
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
++	}
+ 	vdpu_write_relaxed(vpu, ref, G1_REG_ADDR_REF(0));
+ 
+ 	ref = hantro_get_ref(ctx, hdr->golden_frame_ts);
+-	WARN_ON(!ref && hdr->golden_frame_ts);
++	if (!ref && hdr->golden_frame_ts)
++		vpu_debug(0, "failed to find golden frame ts=%llu\n",
++			  hdr->golden_frame_ts);
+ 	if (!ref)
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
+ 	if (hdr->flags & V4L2_VP8_FRAME_FLAG_SIGN_BIAS_GOLDEN)
+@@ -389,7 +394,9 @@ static void cfg_ref(struct hantro_ctx *ctx,
+ 	vdpu_write_relaxed(vpu, ref, G1_REG_ADDR_REF(4));
+ 
+ 	ref = hantro_get_ref(ctx, hdr->alt_frame_ts);
+-	WARN_ON(!ref && hdr->alt_frame_ts);
++	if (!ref && hdr->alt_frame_ts)
++		vpu_debug(0, "failed to find alt frame ts=%llu\n",
++			  hdr->alt_frame_ts);
+ 	if (!ref)
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
+ 	if (hdr->flags & V4L2_VP8_FRAME_FLAG_SIGN_BIAS_ALT)
+diff --git a/drivers/staging/media/hantro/rk3399_vpu_hw_vp8_dec.c b/drivers/staging/media/hantro/rk3399_vpu_hw_vp8_dec.c
+index 8661a3cc1e6b5..3616192016053 100644
+--- a/drivers/staging/media/hantro/rk3399_vpu_hw_vp8_dec.c
++++ b/drivers/staging/media/hantro/rk3399_vpu_hw_vp8_dec.c
+@@ -453,12 +453,17 @@ static void cfg_ref(struct hantro_ctx *ctx,
+ 	vb2_dst = hantro_get_dst_buf(ctx);
+ 
+ 	ref = hantro_get_ref(ctx, hdr->last_frame_ts);
+-	if (!ref)
++	if (!ref) {
++		vpu_debug(0, "failed to find last frame ts=%llu\n",
++			  hdr->last_frame_ts);
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
++	}
+ 	vdpu_write_relaxed(vpu, ref, VDPU_REG_VP8_ADDR_REF0);
+ 
+ 	ref = hantro_get_ref(ctx, hdr->golden_frame_ts);
+-	WARN_ON(!ref && hdr->golden_frame_ts);
++	if (!ref && hdr->golden_frame_ts)
++		vpu_debug(0, "failed to find golden frame ts=%llu\n",
++			  hdr->golden_frame_ts);
+ 	if (!ref)
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
+ 	if (hdr->flags & V4L2_VP8_FRAME_FLAG_SIGN_BIAS_GOLDEN)
+@@ -466,7 +471,9 @@ static void cfg_ref(struct hantro_ctx *ctx,
+ 	vdpu_write_relaxed(vpu, ref, VDPU_REG_VP8_ADDR_REF2_5(2));
+ 
+ 	ref = hantro_get_ref(ctx, hdr->alt_frame_ts);
+-	WARN_ON(!ref && hdr->alt_frame_ts);
++	if (!ref && hdr->alt_frame_ts)
++		vpu_debug(0, "failed to find alt frame ts=%llu\n",
++			  hdr->alt_frame_ts);
+ 	if (!ref)
+ 		ref = vb2_dma_contig_plane_dma_addr(&vb2_dst->vb2_buf, 0);
+ 	if (hdr->flags & V4L2_VP8_FRAME_FLAG_SIGN_BIAS_ALT)
+diff --git a/drivers/staging/media/imx/imx7-media-csi.c b/drivers/staging/media/imx/imx7-media-csi.c
+index f85a2f5f1413b..ad1bca3fe0471 100644
+--- a/drivers/staging/media/imx/imx7-media-csi.c
++++ b/drivers/staging/media/imx/imx7-media-csi.c
+@@ -361,6 +361,7 @@ static void imx7_csi_dma_unsetup_vb2_buf(struct imx7_csi *csi,
+ 
+ 			vb->timestamp = ktime_get_ns();
+ 			vb2_buffer_done(vb, return_status);
++			csi->active_vb2_buf[i] = NULL;
+ 		}
+ 	}
+ }
+@@ -386,9 +387,10 @@ static int imx7_csi_dma_setup(struct imx7_csi *csi)
+ 	return 0;
+ }
+ 
+-static void imx7_csi_dma_cleanup(struct imx7_csi *csi)
++static void imx7_csi_dma_cleanup(struct imx7_csi *csi,
++				 enum vb2_buffer_state return_status)
+ {
+-	imx7_csi_dma_unsetup_vb2_buf(csi, VB2_BUF_STATE_ERROR);
++	imx7_csi_dma_unsetup_vb2_buf(csi, return_status);
+ 	imx_media_free_dma_buf(csi->dev, &csi->underrun_buf);
+ }
+ 
+@@ -537,9 +539,10 @@ static int imx7_csi_init(struct imx7_csi *csi)
+ 	return 0;
+ }
+ 
+-static void imx7_csi_deinit(struct imx7_csi *csi)
++static void imx7_csi_deinit(struct imx7_csi *csi,
++			    enum vb2_buffer_state return_status)
+ {
+-	imx7_csi_dma_cleanup(csi);
++	imx7_csi_dma_cleanup(csi, return_status);
+ 	imx7_csi_init_default(csi);
+ 	imx7_csi_dmareq_rff_disable(csi);
+ 	clk_disable_unprepare(csi->mclk);
+@@ -702,7 +705,7 @@ static int imx7_csi_s_stream(struct v4l2_subdev *sd, int enable)
+ 
+ 		ret = v4l2_subdev_call(csi->src_sd, video, s_stream, 1);
+ 		if (ret < 0) {
+-			imx7_csi_deinit(csi);
++			imx7_csi_deinit(csi, VB2_BUF_STATE_QUEUED);
+ 			goto out_unlock;
+ 		}
+ 
+@@ -712,7 +715,7 @@ static int imx7_csi_s_stream(struct v4l2_subdev *sd, int enable)
+ 
+ 		v4l2_subdev_call(csi->src_sd, video, s_stream, 0);
+ 
+-		imx7_csi_deinit(csi);
++		imx7_csi_deinit(csi, VB2_BUF_STATE_ERROR);
+ 	}
+ 
+ 	csi->is_streaming = !!enable;
+diff --git a/drivers/staging/rtl8723bs/hal/hal_com_phycfg.c b/drivers/staging/rtl8723bs/hal/hal_com_phycfg.c
+index 94d11689b4ac6..33ff80da32775 100644
+--- a/drivers/staging/rtl8723bs/hal/hal_com_phycfg.c
++++ b/drivers/staging/rtl8723bs/hal/hal_com_phycfg.c
+@@ -707,7 +707,7 @@ static void PHY_StoreTxPowerByRateNew(
+ 	if (RfPath > ODM_RF_PATH_D)
+ 		return;
+ 
+-	if (TxNum > ODM_RF_PATH_D)
++	if (TxNum > RF_MAX_TX_NUM)
+ 		return;
+ 
+ 	for (i = 0; i < rateNum; ++i) {
+diff --git a/drivers/staging/rts5208/rtsx_scsi.c b/drivers/staging/rts5208/rtsx_scsi.c
+index 1deb74112ad43..11d9d9155eef2 100644
+--- a/drivers/staging/rts5208/rtsx_scsi.c
++++ b/drivers/staging/rts5208/rtsx_scsi.c
+@@ -2802,10 +2802,10 @@ static int get_ms_information(struct scsi_cmnd *srb, struct rtsx_chip *chip)
+ 	}
+ 
+ 	if (dev_info_id == 0x15) {
+-		buf_len = 0x3A;
++		buf_len = 0x3C;
+ 		data_len = 0x3A;
+ 	} else {
+-		buf_len = 0x6A;
++		buf_len = 0x6C;
+ 		data_len = 0x6A;
+ 	}
+ 
+@@ -2855,11 +2855,7 @@ static int get_ms_information(struct scsi_cmnd *srb, struct rtsx_chip *chip)
+ 	}
+ 
+ 	rtsx_stor_set_xfer_buf(buf, buf_len, srb);
+-
+-	if (dev_info_id == 0x15)
+-		scsi_set_resid(srb, scsi_bufflen(srb) - 0x3C);
+-	else
+-		scsi_set_resid(srb, scsi_bufflen(srb) - 0x6C);
++	scsi_set_resid(srb, scsi_bufflen(srb) - buf_len);
+ 
+ 	kfree(buf);
+ 	return STATUS_SUCCESS;
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index a82032c081e83..03229350ea731 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -2308,7 +2308,7 @@ static void tb_switch_default_link_ports(struct tb_switch *sw)
+ {
+ 	int i;
+ 
+-	for (i = 1; i <= sw->config.max_port_number; i += 2) {
++	for (i = 1; i <= sw->config.max_port_number; i++) {
+ 		struct tb_port *port = &sw->ports[i];
+ 		struct tb_port *subordinate;
+ 
+diff --git a/drivers/tty/hvc/hvsi.c b/drivers/tty/hvc/hvsi.c
+index e8c58f9bd2632..d6afaae1729aa 100644
+--- a/drivers/tty/hvc/hvsi.c
++++ b/drivers/tty/hvc/hvsi.c
+@@ -1038,7 +1038,7 @@ static const struct tty_operations hvsi_ops = {
+ 
+ static int __init hvsi_init(void)
+ {
+-	int i;
++	int i, ret;
+ 
+ 	hvsi_driver = alloc_tty_driver(hvsi_count);
+ 	if (!hvsi_driver)
+@@ -1069,12 +1069,25 @@ static int __init hvsi_init(void)
+ 	}
+ 	hvsi_wait = wait_for_state; /* irqs active now */
+ 
+-	if (tty_register_driver(hvsi_driver))
+-		panic("Couldn't register hvsi console driver\n");
++	ret = tty_register_driver(hvsi_driver);
++	if (ret) {
++		pr_err("Couldn't register hvsi console driver\n");
++		goto err_free_irq;
++	}
+ 
+ 	printk(KERN_DEBUG "HVSI: registered %i devices\n", hvsi_count);
+ 
+ 	return 0;
++err_free_irq:
++	hvsi_wait = poll_for_state;
++	for (i = 0; i < hvsi_count; i++) {
++		struct hvsi_struct *hp = &hvsi_ports[i];
++
++		free_irq(hp->virq, hp);
++	}
++	tty_driver_kref_put(hvsi_driver);
++
++	return ret;
+ }
+ device_initcall(hvsi_init);
+ 
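
The hvsi_init() change above converts a panic() into an error return with a
proper unwind: the IRQs requested earlier are freed and the driver reference
dropped, in reverse order of acquisition. A compact sketch of that goto
pattern (stub functions, hypothetical names):

#include <stdio.h>

static int step(const char *name, int fail)
{
	printf("%s\n", name);
	return fail ? -1 : 0;
}

static int init(void)
{
	int ret;

	ret = step("request irqs", 0);
	if (ret)
		return ret;

	ret = step("register driver", 1);	/* simulate the failure */
	if (ret)
		goto err_free_irq;
	return 0;

err_free_irq:
	step("free irqs", 0);		/* mirrors the free_irq() loop */
	step("put driver ref", 0);	/* mirrors tty_driver_kref_put() */
	return ret;
}

int main(void)
{
	return init() ? 1 : 0;
}
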
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 79418d4beb48f..b6c731a267d26 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -617,7 +617,7 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 	struct uart_port *port = dev_id;
+ 	struct omap8250_priv *priv = port->private_data;
+ 	struct uart_8250_port *up = up_to_u8250p(port);
+-	unsigned int iir;
++	unsigned int iir, lsr;
+ 	int ret;
+ 
+ #ifdef CONFIG_SERIAL_8250_DMA
+@@ -628,6 +628,7 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ #endif
+ 
+ 	serial8250_rpm_get(up);
++	lsr = serial_port_in(port, UART_LSR);
+ 	iir = serial_port_in(port, UART_IIR);
+ 	ret = serial8250_handle_irq(port, iir);
+ 
+@@ -642,6 +643,24 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 		serial_port_in(port, UART_RX);
+ 	}
+ 
++	/* Stop processing interrupts on input overrun */
++	if ((lsr & UART_LSR_OE) && up->overrun_backoff_time_ms > 0) {
++		unsigned long delay;
++
++		up->ier = port->serial_in(port, UART_IER);
++		if (up->ier & (UART_IER_RLSI | UART_IER_RDI)) {
++			port->ops->stop_rx(port);
++		} else {
++			/* Keep restarting the timer until
++			 * the input overrun subsides.
++			 */
++			cancel_delayed_work(&up->overrun_backoff);
++		}
++
++		delay = msecs_to_jiffies(up->overrun_backoff_time_ms);
++		schedule_delayed_work(&up->overrun_backoff, delay);
++	}
++
+ 	serial8250_rpm_put(up);
+ 
+ 	return IRQ_RETVAL(ret);
+@@ -1353,6 +1372,10 @@ static int omap8250_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
++	if (of_property_read_u32(np, "overrun-throttle-ms",
++				 &up.overrun_backoff_time_ms) != 0)
++		up.overrun_backoff_time_ms = 0;
++
+ 	priv->wakeirq = irq_of_parse_and_map(np, 1);
+ 
+ 	pdata = of_device_get_match_data(&pdev->dev);
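
The 8250_omap hunks above sample UART_LSR before the IIR read and, on an
input overrun, either quiesce the receiver (if RX interrupts were enabled)
or push out the pending backoff work, then (re)arm that work after the
DT-supplied overrun-throttle-ms delay. The decision logic, sketched with
stubbed helpers (hypothetical names, assuming the backoff work re-enables
RX when it eventually runs):

#include <stdbool.h>
#include <stdio.h>

static void stop_rx(void)                     { puts("stop rx"); }
static void cancel_backoff(void)              { puts("cancel pending backoff"); }
static void schedule_backoff(unsigned int ms) { printf("re-arm backoff, %u ms\n", ms); }

static void handle_overrun(bool rx_irq_enabled, unsigned int backoff_ms)
{
	if (!backoff_ms)
		return;			/* overrun-throttle-ms absent: feature off */

	if (rx_irq_enabled)
		stop_rx();		/* first overrun: silence the receiver */
	else
		cancel_backoff();	/* still overrunning: extend the deadline */

	schedule_backoff(backoff_ms);
}

int main(void)
{
	handle_overrun(true, 100);	/* initial overrun */
	handle_overrun(false, 100);	/* overrun persists */
	return 0;
}
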
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 1934940b96170..2ad136dcfcc8f 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -87,7 +87,7 @@ static void moan_device(const char *str, struct pci_dev *dev)
+ 
+ static int
+ setup_port(struct serial_private *priv, struct uart_8250_port *port,
+-	   int bar, int offset, int regshift)
++	   u8 bar, unsigned int offset, int regshift)
+ {
+ 	struct pci_dev *dev = priv->dev;
+ 
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 9422284bb3f33..81bafcf77bb2e 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -122,7 +122,8 @@ static const struct serial8250_config uart_config[] = {
+ 		.name		= "16C950/954",
+ 		.fifo_size	= 128,
+ 		.tx_loadsz	= 128,
+-		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
++		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01,
++		.rxtrig_bytes	= {16, 32, 112, 120},
+ 		/* UART_CAP_EFR breaks billionon CF bluetooth card. */
+ 		.flags		= UART_CAP_FIFO | UART_CAP_SLEEP,
+ 	},
+diff --git a/drivers/tty/serial/jsm/jsm_neo.c b/drivers/tty/serial/jsm/jsm_neo.c
+index bf0e2a4cb0cef..c6f927a76c3be 100644
+--- a/drivers/tty/serial/jsm/jsm_neo.c
++++ b/drivers/tty/serial/jsm/jsm_neo.c
+@@ -815,7 +815,9 @@ static void neo_parse_isr(struct jsm_board *brd, u32 port)
+ 		/* Parse any modem signal changes */
+ 		jsm_dbg(INTR, &ch->ch_bd->pci_dev,
+ 			"MOD_STAT: sending to parse_modem_sigs\n");
++		spin_lock_irqsave(&ch->uart_port.lock, lock_flags);
+ 		neo_parse_modem(ch, readb(&ch->ch_neo_uart->msr));
++		spin_unlock_irqrestore(&ch->uart_port.lock, lock_flags);
+ 	}
+ }
+ 
+diff --git a/drivers/tty/serial/jsm/jsm_tty.c b/drivers/tty/serial/jsm/jsm_tty.c
+index 8e42a7682c63d..d74cbbbf33c62 100644
+--- a/drivers/tty/serial/jsm/jsm_tty.c
++++ b/drivers/tty/serial/jsm/jsm_tty.c
+@@ -187,6 +187,7 @@ static void jsm_tty_break(struct uart_port *port, int break_state)
+ 
+ static int jsm_tty_open(struct uart_port *port)
+ {
++	unsigned long lock_flags;
+ 	struct jsm_board *brd;
+ 	struct jsm_channel *channel =
+ 		container_of(port, struct jsm_channel, uart_port);
+@@ -240,6 +241,7 @@ static int jsm_tty_open(struct uart_port *port)
+ 	channel->ch_cached_lsr = 0;
+ 	channel->ch_stops_sent = 0;
+ 
++	spin_lock_irqsave(&port->lock, lock_flags);
+ 	termios = &port->state->port.tty->termios;
+ 	channel->ch_c_cflag	= termios->c_cflag;
+ 	channel->ch_c_iflag	= termios->c_iflag;
+@@ -259,6 +261,7 @@ static int jsm_tty_open(struct uart_port *port)
+ 	jsm_carrier(channel);
+ 
+ 	channel->ch_open_count++;
++	spin_unlock_irqrestore(&port->lock, lock_flags);
+ 
+ 	jsm_dbg(OPEN, &channel->ch_bd->pci_dev, "finish\n");
+ 	return 0;
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 2d5487bf68559..a2e62f372e10e 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -1760,6 +1760,10 @@ static irqreturn_t sci_br_interrupt(int irq, void *ptr)
+ 
+ 	/* Handle BREAKs */
+ 	sci_handle_breaks(port);
++
++	/* drop invalid character received before break was detected */
++	serial_port_in(port, SCxRDR);
++
+ 	sci_clear_SCxSR(port, SCxSR_BREAK_CLEAR(port));
+ 
+ 	return IRQ_HANDLED;
+@@ -1839,7 +1843,8 @@ static irqreturn_t sci_mpxed_interrupt(int irq, void *ptr)
+ 		ret = sci_er_interrupt(irq, ptr);
+ 
+ 	/* Break Interrupt */
+-	if ((ssr_status & SCxSR_BRK(port)) && err_enabled)
++	if (s->irqs[SCIx_ERI_IRQ] != s->irqs[SCIx_BRI_IRQ] &&
++	    (ssr_status & SCxSR_BRK(port)) && err_enabled)
+ 		ret = sci_br_interrupt(irq, ptr);
+ 
+ 	/* Overrun Interrupt */
+diff --git a/drivers/usb/chipidea/host.c b/drivers/usb/chipidea/host.c
+index e86d13c04bdbe..bdc3885c0d493 100644
+--- a/drivers/usb/chipidea/host.c
++++ b/drivers/usb/chipidea/host.c
+@@ -240,15 +240,18 @@ static int ci_ehci_hub_control(
+ )
+ {
+ 	struct ehci_hcd	*ehci = hcd_to_ehci(hcd);
++	unsigned int	ports = HCS_N_PORTS(ehci->hcs_params);
+ 	u32 __iomem	*status_reg;
+-	u32		temp;
++	u32		temp, port_index;
+ 	unsigned long	flags;
+ 	int		retval = 0;
+ 	bool		done = false;
+ 	struct device *dev = hcd->self.controller;
+ 	struct ci_hdrc *ci = dev_get_drvdata(dev);
+ 
+-	status_reg = &ehci->regs->port_status[(wIndex & 0xff) - 1];
++	port_index = wIndex & 0xff;
++	port_index -= (port_index > 0);
++	status_reg = &ehci->regs->port_status[port_index];
+ 
+ 	spin_lock_irqsave(&ehci->lock, flags);
+ 
+@@ -260,6 +263,11 @@ static int ci_ehci_hub_control(
+ 	}
+ 
+ 	if (typeReq == SetPortFeature && wValue == USB_PORT_FEAT_SUSPEND) {
++		if (!wIndex || wIndex > ports) {
++			retval = -EPIPE;
++			goto done;
++		}
++
+ 		temp = ehci_readl(ehci, status_reg);
+ 		if ((temp & PORT_PE) == 0 || (temp & PORT_RESET) != 0) {
+ 			retval = -EPIPE;
+@@ -288,7 +296,7 @@ static int ci_ehci_hub_control(
+ 			ehci_writel(ehci, temp, status_reg);
+ 		}
+ 
+-		set_bit((wIndex & 0xff) - 1, &ehci->suspended_ports);
++		set_bit(port_index, &ehci->suspended_ports);
+ 		goto done;
+ 	}
+ 
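
The ci_ehci_hub_control() change above handles the fact that hub requests
number ports from 1 while the port_status[] array is 0-based; clamping at
zero keeps a malformed wIndex of 0 from indexing port_status[-1], and the
added range check rejects out-of-range values in the suspend path before
use. The conversion in isolation:

#include <stdio.h>

static unsigned int port_index(unsigned int wIndex)
{
	unsigned int idx = wIndex & 0xff;

	idx -= (idx > 0);	/* 0 -> 0, 1 -> 0, 2 -> 1, ... */
	return idx;
}

int main(void)
{
	for (unsigned int w = 0; w <= 3; w++)
		printf("wIndex %u -> array index %u\n", w, port_index(w));
	return 0;
}
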
+diff --git a/drivers/usb/dwc3/dwc3-imx8mp.c b/drivers/usb/dwc3/dwc3-imx8mp.c
+index 756faa46d33a7..d328d20abfbc4 100644
+--- a/drivers/usb/dwc3/dwc3-imx8mp.c
++++ b/drivers/usb/dwc3/dwc3-imx8mp.c
+@@ -152,13 +152,6 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+ 	}
+ 	dwc3_imx->irq = irq;
+ 
+-	err = devm_request_threaded_irq(dev, irq, NULL, dwc3_imx8mp_interrupt,
+-					IRQF_ONESHOT, dev_name(dev), dwc3_imx);
+-	if (err) {
+-		dev_err(dev, "failed to request IRQ #%d --> %d\n", irq, err);
+-		goto disable_clks;
+-	}
+-
+ 	pm_runtime_set_active(dev);
+ 	pm_runtime_enable(dev);
+ 	err = pm_runtime_get_sync(dev);
+@@ -186,6 +179,13 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+ 	}
+ 	of_node_put(dwc3_np);
+ 
++	err = devm_request_threaded_irq(dev, irq, NULL, dwc3_imx8mp_interrupt,
++					IRQF_ONESHOT, dev_name(dev), dwc3_imx);
++	if (err) {
++		dev_err(dev, "failed to request IRQ #%d --> %d\n", irq, err);
++		goto depopulate;
++	}
++
+ 	device_set_wakeup_capable(dev, true);
+ 	pm_runtime_put(dev);
+ 
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 72a9797dbbae0..504c1cbc255d1 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -482,7 +482,7 @@ static u8 encode_bMaxPower(enum usb_device_speed speed,
+ {
+ 	unsigned val;
+ 
+-	if (c->MaxPower)
++	if (c->MaxPower || (c->bmAttributes & USB_CONFIG_ATT_SELFPOWER))
+ 		val = c->MaxPower;
+ 	else
+ 		val = CONFIG_USB_GADGET_VBUS_DRAW;
+@@ -936,7 +936,11 @@ static int set_config(struct usb_composite_dev *cdev,
+ 	}
+ 
+ 	/* when we return, be sure our power usage is valid */
+-	power = c->MaxPower ? c->MaxPower : CONFIG_USB_GADGET_VBUS_DRAW;
++	if (c->MaxPower || (c->bmAttributes & USB_CONFIG_ATT_SELFPOWER))
++		power = c->MaxPower;
++	else
++		power = CONFIG_USB_GADGET_VBUS_DRAW;
++
+ 	if (gadget->speed < USB_SPEED_SUPER)
+ 		power = min(power, 500U);
+ 	else
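
For the composite.c change above, a worked example of the units involved
(hedged: per the USB specs, bMaxPower is expressed in 2 mA units below
SuperSpeed, capped at 500 mA, and in 8 mA units at SuperSpeed, capped at
900 mA; the function below is an illustrative stand-in, not the driver's
exact encode_bMaxPower()):

#include <stdio.h>

static unsigned int encode_max_power(unsigned int mA, int superspeed)
{
	if (!superspeed) {
		if (mA > 500)
			mA = 500;
		return (mA + 1) / 2;	/* 2 mA units */
	}
	if (mA > 900)
		mA = 900;
	return (mA + 7) / 8;		/* 8 mA units */
}

int main(void)
{
	printf("100 mA @ high speed -> 0x%02x\n", encode_max_power(100, 0)); /* 0x32 */
	printf("900 mA @ SuperSpeed -> 0x%02x\n", encode_max_power(900, 1)); /* 0x71 */
	return 0;
}

The fix itself is orthogonal to the encoding: a self-powered configuration
with MaxPower set to 0 now reports 0 instead of the VBUS_DRAW default.
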
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index d1d044d9f8594..85a3f6d4b5af3 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -492,8 +492,9 @@ static netdev_tx_t eth_start_xmit(struct sk_buff *skb,
+ 	}
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+ 
+-	if (skb && !in) {
+-		dev_kfree_skb_any(skb);
++	if (!in) {
++		if (skb)
++			dev_kfree_skb_any(skb);
+ 		return NETDEV_TX_OK;
+ 	}
+ 
+diff --git a/drivers/usb/host/ehci-mv.c b/drivers/usb/host/ehci-mv.c
+index cffdc8d01b2a8..8fd27249ad257 100644
+--- a/drivers/usb/host/ehci-mv.c
++++ b/drivers/usb/host/ehci-mv.c
+@@ -42,26 +42,25 @@ struct ehci_hcd_mv {
+ 	int (*set_vbus)(unsigned int vbus);
+ };
+ 
+-static void ehci_clock_enable(struct ehci_hcd_mv *ehci_mv)
++static int mv_ehci_enable(struct ehci_hcd_mv *ehci_mv)
+ {
+-	clk_prepare_enable(ehci_mv->clk);
+-}
++	int retval;
+ 
+-static void ehci_clock_disable(struct ehci_hcd_mv *ehci_mv)
+-{
+-	clk_disable_unprepare(ehci_mv->clk);
+-}
++	retval = clk_prepare_enable(ehci_mv->clk);
++	if (retval)
++		return retval;
+ 
+-static int mv_ehci_enable(struct ehci_hcd_mv *ehci_mv)
+-{
+-	ehci_clock_enable(ehci_mv);
+-	return phy_init(ehci_mv->phy);
++	retval = phy_init(ehci_mv->phy);
++	if (retval)
++		clk_disable_unprepare(ehci_mv->clk);
++
++	return retval;
+ }
+ 
+ static void mv_ehci_disable(struct ehci_hcd_mv *ehci_mv)
+ {
+ 	phy_exit(ehci_mv->phy);
+-	ehci_clock_disable(ehci_mv);
++	clk_disable_unprepare(ehci_mv->clk);
+ }
+ 
+ static int mv_ehci_reset(struct usb_hcd *hcd)
+diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c
+index 9c2eda0918e13..670a2fecc9c77 100644
+--- a/drivers/usb/host/fotg210-hcd.c
++++ b/drivers/usb/host/fotg210-hcd.c
+@@ -2509,11 +2509,6 @@ retry_xacterr:
+ 	return count;
+ }
+ 
+-/* high bandwidth multiplier, as encoded in highspeed endpoint descriptors */
+-#define hb_mult(wMaxPacketSize) (1 + (((wMaxPacketSize) >> 11) & 0x03))
+-/* ... and packet size, for any kind of endpoint descriptor */
+-#define max_packet(wMaxPacketSize) ((wMaxPacketSize) & 0x07ff)
+-
+ /* reverse of qh_urb_transaction:  free a list of TDs.
+  * used for cleanup after errors, before HC sees an URB's TDs.
+  */
+@@ -2599,7 +2594,7 @@ static struct list_head *qh_urb_transaction(struct fotg210_hcd *fotg210,
+ 		token |= (1 /* "in" */ << 8);
+ 	/* else it's already initted to "out" pid (0 << 8) */
+ 
+-	maxpacket = max_packet(usb_maxpacket(urb->dev, urb->pipe, !is_input));
++	maxpacket = usb_maxpacket(urb->dev, urb->pipe, !is_input);
+ 
+ 	/*
+ 	 * buffer gets wrapped in one or more qtds;
+@@ -2713,9 +2708,11 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 		gfp_t flags)
+ {
+ 	struct fotg210_qh *qh = fotg210_qh_alloc(fotg210, flags);
++	struct usb_host_endpoint *ep;
+ 	u32 info1 = 0, info2 = 0;
+ 	int is_input, type;
+ 	int maxp = 0;
++	int mult;
+ 	struct usb_tt *tt = urb->dev->tt;
+ 	struct fotg210_qh_hw *hw;
+ 
+@@ -2730,14 +2727,15 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 
+ 	is_input = usb_pipein(urb->pipe);
+ 	type = usb_pipetype(urb->pipe);
+-	maxp = usb_maxpacket(urb->dev, urb->pipe, !is_input);
++	ep = usb_pipe_endpoint(urb->dev, urb->pipe);
++	maxp = usb_endpoint_maxp(&ep->desc);
++	mult = usb_endpoint_maxp_mult(&ep->desc);
+ 
+ 	/* 1024 byte maxpacket is a hardware ceiling.  High bandwidth
+ 	 * acts like up to 3KB, but is built from smaller packets.
+ 	 */
+-	if (max_packet(maxp) > 1024) {
+-		fotg210_dbg(fotg210, "bogus qh maxpacket %d\n",
+-				max_packet(maxp));
++	if (maxp > 1024) {
++		fotg210_dbg(fotg210, "bogus qh maxpacket %d\n", maxp);
+ 		goto done;
+ 	}
+ 
+@@ -2751,8 +2749,7 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 	 */
+ 	if (type == PIPE_INTERRUPT) {
+ 		qh->usecs = NS_TO_US(usb_calc_bus_time(USB_SPEED_HIGH,
+-				is_input, 0,
+-				hb_mult(maxp) * max_packet(maxp)));
++				is_input, 0, mult * maxp));
+ 		qh->start = NO_FRAME;
+ 
+ 		if (urb->dev->speed == USB_SPEED_HIGH) {
+@@ -2789,7 +2786,7 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 			think_time = tt ? tt->think_time : 0;
+ 			qh->tt_usecs = NS_TO_US(think_time +
+ 					usb_calc_bus_time(urb->dev->speed,
+-					is_input, 0, max_packet(maxp)));
++					is_input, 0, maxp));
+ 			qh->period = urb->interval;
+ 			if (qh->period > fotg210->periodic_size) {
+ 				qh->period = fotg210->periodic_size;
+@@ -2852,11 +2849,11 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb,
+ 			 * to help them do so.  So now people expect to use
+ 			 * such nonconformant devices with Linux too; sigh.
+ 			 */
+-			info1 |= max_packet(maxp) << 16;
++			info1 |= maxp << 16;
+ 			info2 |= (FOTG210_TUNE_MULT_HS << 30);
+ 		} else {		/* PIPE_INTERRUPT */
+-			info1 |= max_packet(maxp) << 16;
+-			info2 |= hb_mult(maxp) << 30;
++			info1 |= maxp << 16;
++			info2 |= mult << 30;
+ 		}
+ 		break;
+ 	default:
+@@ -3926,6 +3923,7 @@ static void iso_stream_init(struct fotg210_hcd *fotg210,
+ 	int is_input;
+ 	long bandwidth;
+ 	unsigned multi;
++	struct usb_host_endpoint *ep;
+ 
+ 	/*
+ 	 * this might be a "high bandwidth" highspeed endpoint,
+@@ -3933,14 +3931,14 @@ static void iso_stream_init(struct fotg210_hcd *fotg210,
+ 	 */
+ 	epnum = usb_pipeendpoint(pipe);
+ 	is_input = usb_pipein(pipe) ? USB_DIR_IN : 0;
+-	maxp = usb_maxpacket(dev, pipe, !is_input);
++	ep = usb_pipe_endpoint(dev, pipe);
++	maxp = usb_endpoint_maxp(&ep->desc);
+ 	if (is_input)
+ 		buf1 = (1 << 11);
+ 	else
+ 		buf1 = 0;
+ 
+-	maxp = max_packet(maxp);
+-	multi = hb_mult(maxp);
++	multi = usb_endpoint_maxp_mult(&ep->desc);
+ 	buf1 |= maxp;
+ 	maxp *= multi;
+ 
+@@ -4461,13 +4459,12 @@ static bool itd_complete(struct fotg210_hcd *fotg210, struct fotg210_itd *itd)
+ 
+ 			/* HC need not update length with this error */
+ 			if (!(t & FOTG210_ISOC_BABBLE)) {
+-				desc->actual_length =
+-					fotg210_itdlen(urb, desc, t);
++				desc->actual_length = FOTG210_ITD_LENGTH(t);
+ 				urb->actual_length += desc->actual_length;
+ 			}
+ 		} else if (likely((t & FOTG210_ISOC_ACTIVE) == 0)) {
+ 			desc->status = 0;
+-			desc->actual_length = fotg210_itdlen(urb, desc, t);
++			desc->actual_length = FOTG210_ITD_LENGTH(t);
+ 			urb->actual_length += desc->actual_length;
+ 		} else {
+ 			/* URB was too late */
+diff --git a/drivers/usb/host/fotg210.h b/drivers/usb/host/fotg210.h
+index 6cee40ec65b41..67f59517ebade 100644
+--- a/drivers/usb/host/fotg210.h
++++ b/drivers/usb/host/fotg210.h
+@@ -686,11 +686,6 @@ static inline unsigned fotg210_read_frame_index(struct fotg210_hcd *fotg210)
+ 	return fotg210_readl(fotg210, &fotg210->regs->frame_index);
+ }
+ 
+-#define fotg210_itdlen(urb, desc, t) ({			\
+-	usb_pipein((urb)->pipe) ?				\
+-	(desc)->length - FOTG210_ITD_LENGTH(t) :			\
+-	FOTG210_ITD_LENGTH(t);					\
+-})
+ /*-------------------------------------------------------------------------*/
+ 
+ #endif /* __LINUX_FOTG210_H */
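
The fotg210 hunks above drop the driver's private hb_mult()/max_packet()
macros in favour of reading the endpoint descriptor directly; the removed
macros document the wMaxPacketSize layout exactly, so a worked decode:

#include <stdio.h>

int main(void)
{
	unsigned int w = 0x1400;	/* sample high-bandwidth isoc descriptor value */
	unsigned int maxp = w & 0x7ff;			/* bits 10:0, usb_endpoint_maxp()       */
	unsigned int mult = 1 + ((w >> 11) & 0x3);	/* bits 12:11, usb_endpoint_maxp_mult() */

	printf("maxp=%u mult=%u -> %u bytes per microframe\n",
	       maxp, mult, maxp * mult);	/* 1024 * 3 = 3072 */
	return 0;
}
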
+diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
+index b2058b3bc834c..86e5710a5307d 100644
+--- a/drivers/usb/host/xhci-mtk.c
++++ b/drivers/usb/host/xhci-mtk.c
+@@ -571,7 +571,7 @@ disable_ldos:
+ 	xhci_mtk_ldos_disable(mtk);
+ 
+ disable_pm:
+-	pm_runtime_put_sync_autosuspend(dev);
++	pm_runtime_put_noidle(dev);
+ 	pm_runtime_disable(dev);
+ 	return ret;
+ }
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 9248ce8d09a4a..cb21add9d7bee 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -4705,19 +4705,19 @@ static u16 xhci_calculate_u1_timeout(struct xhci_hcd *xhci,
+ {
+ 	unsigned long long timeout_ns;
+ 
+-	if (xhci->quirks & XHCI_INTEL_HOST)
+-		timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc);
+-	else
+-		timeout_ns = udev->u1_params.sel;
+-
+ 	/* Prevent U1 if service interval is shorter than U1 exit latency */
+ 	if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) {
+-		if (xhci_service_interval_to_ns(desc) <= timeout_ns) {
++		if (xhci_service_interval_to_ns(desc) <= udev->u1_params.mel) {
+ 			dev_dbg(&udev->dev, "Disable U1, ESIT shorter than exit latency\n");
+ 			return USB3_LPM_DISABLED;
+ 		}
+ 	}
+ 
++	if (xhci->quirks & XHCI_INTEL_HOST)
++		timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc);
++	else
++		timeout_ns = udev->u1_params.sel;
++
+ 	/* The U1 timeout is encoded in 1us intervals.
+ 	 * Don't return a timeout of zero, because that's USB3_LPM_DISABLED.
+ 	 */
+@@ -4769,19 +4769,19 @@ static u16 xhci_calculate_u2_timeout(struct xhci_hcd *xhci,
+ {
+ 	unsigned long long timeout_ns;
+ 
+-	if (xhci->quirks & XHCI_INTEL_HOST)
+-		timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc);
+-	else
+-		timeout_ns = udev->u2_params.sel;
+-
+ 	/* Prevent U2 if service interval is shorter than U2 exit latency */
+ 	if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) {
+-		if (xhci_service_interval_to_ns(desc) <= timeout_ns) {
++		if (xhci_service_interval_to_ns(desc) <= udev->u2_params.mel) {
+ 			dev_dbg(&udev->dev, "Disable U2, ESIT shorter than exit latency\n");
+ 			return USB3_LPM_DISABLED;
+ 		}
+ 	}
+ 
++	if (xhci->quirks & XHCI_INTEL_HOST)
++		timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc);
++	else
++		timeout_ns = udev->u2_params.sel;
++
+ 	/* The U2 timeout is encoded in 256us intervals */
+ 	timeout_ns = DIV_ROUND_UP_ULL(timeout_ns, 256 * 1000);
+ 	/* If the necessary timeout value is bigger than what we can set in the
+diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c
+index 5892f3ce0cdc8..ce9fc46c92661 100644
+--- a/drivers/usb/musb/musb_dsps.c
++++ b/drivers/usb/musb/musb_dsps.c
+@@ -890,23 +890,22 @@ static int dsps_probe(struct platform_device *pdev)
+ 	if (!glue->usbss_base)
+ 		return -ENXIO;
+ 
+-	if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) {
+-		ret = dsps_setup_optional_vbus_irq(pdev, glue);
+-		if (ret)
+-			goto err_iounmap;
+-	}
+-
+ 	platform_set_drvdata(pdev, glue);
+ 	pm_runtime_enable(&pdev->dev);
+ 	ret = dsps_create_musb_pdev(glue, pdev);
+ 	if (ret)
+ 		goto err;
+ 
++	if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) {
++		ret = dsps_setup_optional_vbus_irq(pdev, glue);
++		if (ret)
++			goto err;
++	}
++
+ 	return 0;
+ 
+ err:
+ 	pm_runtime_disable(&pdev->dev);
+-err_iounmap:
+ 	iounmap(glue->usbss_base);
+ 	return ret;
+ }
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index 4ba6bcdaa8e9d..b07b2925ff78b 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -455,8 +455,14 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			vhci_hcd->port_status[rhport] &= ~(1 << USB_PORT_FEAT_RESET);
+ 			vhci_hcd->re_timeout = 0;
+ 
++			/*
++			 * A few drivers do usb reset during probe when
++			 * the device could be in VDEV_ST_USED state
++			 */
+ 			if (vhci_hcd->vdev[rhport].ud.status ==
+-			    VDEV_ST_NOTASSIGNED) {
++				VDEV_ST_NOTASSIGNED ||
++			    vhci_hcd->vdev[rhport].ud.status ==
++				VDEV_ST_USED) {
+ 				usbip_dbg_vhci_rh(
+ 					" enable rhport %d (status %u)\n",
+ 					rhport,
+@@ -957,8 +963,32 @@ static void vhci_device_unlink_cleanup(struct vhci_device *vdev)
+ 	spin_lock(&vdev->priv_lock);
+ 
+ 	list_for_each_entry_safe(unlink, tmp, &vdev->unlink_tx, list) {
++		struct urb *urb;
++
++		/* give back urb of unsent unlink request */
+ 		pr_info("unlink cleanup tx %lu\n", unlink->unlink_seqnum);
++
++		urb = pickup_urb_and_free_priv(vdev, unlink->unlink_seqnum);
++		if (!urb) {
++			list_del(&unlink->list);
++			kfree(unlink);
++			continue;
++		}
++
++		urb->status = -ENODEV;
++
++		usb_hcd_unlink_urb_from_ep(hcd, urb);
++
+ 		list_del(&unlink->list);
++
++		spin_unlock(&vdev->priv_lock);
++		spin_unlock_irqrestore(&vhci->lock, flags);
++
++		usb_hcd_giveback_urb(hcd, urb, urb->status);
++
++		spin_lock_irqsave(&vhci->lock, flags);
++		spin_lock(&vdev->priv_lock);
++
+ 		kfree(unlink);
+ 	}
+ 
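
In the vhci unlink cleanup above, usb_hcd_giveback_urb() can re-enter the
HCD, so the URB is unlinked and the entry deleted while the locks are still
held, the giveback runs unlocked, and both locks are re-taken before the
list_for_each_entry_safe() iteration continues. The shape of that pattern,
with the locks stubbed to keep the sketch self-contained:

#include <stdio.h>

static void lock(void)        { puts("  lock"); }
static void unlock(void)      { puts("  unlock"); }
static void giveback(int urb) { printf("  giveback urb %d\n", urb); }

int main(void)
{
	int urbs[] = { 1, 2 };

	lock();
	for (unsigned int i = 0; i < 2; i++) {
		/* detach the node here, still under the lock */
		unlock();
		giveback(urbs[i]);	/* callback must not run locked */
		lock();
	}
	unlock();
	return 0;
}
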
+diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
+index 67d0bf4efa160..e44bf736e2b22 100644
+--- a/drivers/vfio/Kconfig
++++ b/drivers/vfio/Kconfig
+@@ -29,7 +29,7 @@ menuconfig VFIO
+ 
+ 	  If you don't know what to do here, say N.
+ 
+-menuconfig VFIO_NOIOMMU
++config VFIO_NOIOMMU
+ 	bool "VFIO No-IOMMU support"
+ 	depends on VFIO
+ 	help
+diff --git a/drivers/video/fbdev/asiliantfb.c b/drivers/video/fbdev/asiliantfb.c
+index 3e006da477523..84c56f525889f 100644
+--- a/drivers/video/fbdev/asiliantfb.c
++++ b/drivers/video/fbdev/asiliantfb.c
+@@ -227,6 +227,9 @@ static int asiliantfb_check_var(struct fb_var_screeninfo *var,
+ {
+ 	unsigned long Ftarget, ratio, remainder;
+ 
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	ratio = 1000000 / var->pixclock;
+ 	remainder = 1000000 % var->pixclock;
+ 	Ftarget = 1000000 * ratio + (1000000 * remainder) / var->pixclock;
+diff --git a/drivers/video/fbdev/kyro/fbdev.c b/drivers/video/fbdev/kyro/fbdev.c
+index 8fbde92ae8b9c..25801e8e3f74a 100644
+--- a/drivers/video/fbdev/kyro/fbdev.c
++++ b/drivers/video/fbdev/kyro/fbdev.c
+@@ -372,6 +372,11 @@ static int kyro_dev_overlay_viewport_set(u32 x, u32 y, u32 ulWidth, u32 ulHeight
+ 		/* probably haven't called CreateOverlay yet */
+ 		return -EINVAL;
+ 
++	if (ulWidth == 0 || ulWidth == 0xffffffff ||
++	    ulHeight == 0 || ulHeight == 0xffffffff ||
++	    (x < 2 && ulWidth + 2 == 0))
++		return -EINVAL;
++
+ 	/* Stop Ramdac Output */
+ 	DisableRamdacOutput(deviceInfo.pSTGReg);
+ 
+@@ -394,6 +399,9 @@ static int kyrofb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ {
+ 	struct kyrofb_info *par = info->par;
+ 
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	if (var->bits_per_pixel != 16 && var->bits_per_pixel != 32) {
+ 		printk(KERN_WARNING "kyrofb: depth not supported: %u\n", var->bits_per_pixel);
+ 		return -EINVAL;
+diff --git a/drivers/video/fbdev/riva/fbdev.c b/drivers/video/fbdev/riva/fbdev.c
+index 55554b0433cb4..84d5e23ad7d38 100644
+--- a/drivers/video/fbdev/riva/fbdev.c
++++ b/drivers/video/fbdev/riva/fbdev.c
+@@ -1084,6 +1084,9 @@ static int rivafb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+ 	int mode_valid = 0;
+ 	
+ 	NVTRACE_ENTER();
++	if (!var->pixclock)
++		return -EINVAL;
++
+ 	switch (var->bits_per_pixel) {
+ 	case 1 ... 8:
+ 		var->red.offset = var->green.offset = var->blue.offset = 0;
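
All three fbdev check_var() fixes above guard the same hazard: pixclock is
a user-supplied pixel period in picoseconds and is divided by before being
validated, so a value of 0 oopses the kernel. The asiliantfb arithmetic
with a sample value, standalone:

#include <stdio.h>

int main(void)
{
	unsigned long long pixclock = 39722;	/* ps per pixel, ~25.175 MHz VGA */
	unsigned long long ratio, remainder, Ftarget;

	if (!pixclock)
		return 1;	/* the new -EINVAL path */

	ratio = 1000000 / pixclock;
	remainder = 1000000 % pixclock;
	Ftarget = 1000000 * ratio + (1000000 * remainder) / pixclock;

	printf("Ftarget = %llu Hz\n", Ftarget);	/* 25174966, i.e. ~25.175 MHz */
	return 0;
}

The split into ratio and remainder keeps the intermediate products in range
instead of computing 10^12 / pixclock directly in one step.
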
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index cf53713f8aa01..a29b5ffca99b7 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1550,7 +1550,7 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ 				bg->start, div64_u64(bg->used * 100, bg->length));
+ 		trace_btrfs_reclaim_block_group(bg);
+ 		ret = btrfs_relocate_chunk(fs_info, bg->start);
+-		if (ret)
++		if (ret && ret != -EAGAIN)
+ 			btrfs_err(fs_info, "error relocating chunk %llu",
+ 				  bg->start);
+ 
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 8d386a5587ee9..51395cfb75742 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3329,6 +3329,30 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 	 */
+ 	fs_info->compress_type = BTRFS_COMPRESS_ZLIB;
+ 
++	/*
++	 * Flag our filesystem as having big metadata blocks if they are bigger
++	 * than the page size.
++	 */
++	if (btrfs_super_nodesize(disk_super) > PAGE_SIZE) {
++		if (!(features & BTRFS_FEATURE_INCOMPAT_BIG_METADATA))
++			btrfs_info(fs_info,
++				"flagging fs with big metadata feature");
++		features |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA;
++	}
++
++	/* Set up fs_info before parsing mount options */
++	nodesize = btrfs_super_nodesize(disk_super);
++	sectorsize = btrfs_super_sectorsize(disk_super);
++	stripesize = sectorsize;
++	fs_info->dirty_metadata_batch = nodesize * (1 + ilog2(nr_cpu_ids));
++	fs_info->delalloc_batch = sectorsize * 512 * (1 + ilog2(nr_cpu_ids));
++
++	fs_info->nodesize = nodesize;
++	fs_info->sectorsize = sectorsize;
++	fs_info->sectorsize_bits = ilog2(sectorsize);
++	fs_info->csums_per_leaf = BTRFS_MAX_ITEM_SIZE(fs_info) / fs_info->csum_size;
++	fs_info->stripesize = stripesize;
++
+ 	ret = btrfs_parse_options(fs_info, options, sb->s_flags);
+ 	if (ret) {
+ 		err = ret;
+@@ -3355,30 +3379,6 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 	if (features & BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA)
+ 		btrfs_info(fs_info, "has skinny extents");
+ 
+-	/*
+-	 * flag our filesystem as having big metadata blocks if
+-	 * they are bigger than the page size
+-	 */
+-	if (btrfs_super_nodesize(disk_super) > PAGE_SIZE) {
+-		if (!(features & BTRFS_FEATURE_INCOMPAT_BIG_METADATA))
+-			btrfs_info(fs_info,
+-				"flagging fs with big metadata feature");
+-		features |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA;
+-	}
+-
+-	nodesize = btrfs_super_nodesize(disk_super);
+-	sectorsize = btrfs_super_sectorsize(disk_super);
+-	stripesize = sectorsize;
+-	fs_info->dirty_metadata_batch = nodesize * (1 + ilog2(nr_cpu_ids));
+-	fs_info->delalloc_batch = sectorsize * 512 * (1 + ilog2(nr_cpu_ids));
+-
+-	/* Cache block sizes */
+-	fs_info->nodesize = nodesize;
+-	fs_info->sectorsize = sectorsize;
+-	fs_info->sectorsize_bits = ilog2(sectorsize);
+-	fs_info->csums_per_leaf = BTRFS_MAX_ITEM_SIZE(fs_info) / fs_info->csum_size;
+-	fs_info->stripesize = stripesize;
+-
+ 	/*
+ 	 * mixed block groups end up with duplicate but slightly offset
+ 	 * extent buffers for the same range.  It leads to corruptions
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index 4806295116d88..c3510c8fdaf8c 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -2652,8 +2652,11 @@ int btrfs_remove_free_space(struct btrfs_block_group *block_group,
+ 		 * btrfs_pin_extent_for_log_replay() when replaying the log.
+ 		 * Advance the pointer not to overwrite the tree-log nodes.
+ 		 */
+-		if (block_group->alloc_offset < offset + bytes)
+-			block_group->alloc_offset = offset + bytes;
++		if (block_group->start + block_group->alloc_offset <
++		    offset + bytes) {
++			block_group->alloc_offset =
++				offset + bytes - block_group->start;
++		}
+ 		return 0;
+ 	}
+ 
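
The free-space-cache fix above corrects a mixed-base comparison:
alloc_offset is relative to block_group->start, while offset is an absolute
logical address. In numbers (hypothetical values):

#include <stdio.h>

int main(void)
{
	unsigned long long start  = 1ULL << 30;		/* block_group->start: 1 GiB   */
	unsigned long long offset = start + (4 << 20);	/* pinned extent, absolute     */
	unsigned long long bytes  = 1 << 20;
	unsigned long long alloc_offset = 2 << 20;	/* relative: 2 MiB into the BG */

	/* corrected check: compare absolute to absolute, store relative */
	if (start + alloc_offset < offset + bytes)
		alloc_offset = offset + bytes - start;

	printf("alloc_offset = %llu MiB\n", alloc_offset >> 20);	/* 5 */
	return 0;
}

The old form compared the 2 MiB relative offset against the ~1 GiB absolute
end of the pinned range, which always matched and then stored an absolute
address into the relative field.
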
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 6328deb27d126..044300db5e228 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1248,11 +1248,6 @@ static noinline void async_cow_submit(struct btrfs_work *work)
+ 	nr_pages = (async_chunk->end - async_chunk->start + PAGE_SIZE) >>
+ 		PAGE_SHIFT;
+ 
+-	/* atomic_sub_return implies a barrier */
+-	if (atomic_sub_return(nr_pages, &fs_info->async_delalloc_pages) <
+-	    5 * SZ_1M)
+-		cond_wake_up_nomb(&fs_info->async_submit_wait);
+-
+ 	/*
+ 	 * ->inode could be NULL if async_chunk_start has failed to compress,
+ 	 * in which case we don't have anything to submit, yet we need to
+@@ -1261,6 +1256,11 @@ static noinline void async_cow_submit(struct btrfs_work *work)
+ 	 */
+ 	if (async_chunk->inode)
+ 		submit_compressed_extents(async_chunk);
++
++	/* atomic_sub_return implies a barrier */
++	if (atomic_sub_return(nr_pages, &fs_info->async_delalloc_pages) <
++	    5 * SZ_1M)
++		cond_wake_up_nomb(&fs_info->async_submit_wait);
+ }
+ 
+ static noinline void async_cow_free(struct btrfs_work *work)
+@@ -5064,15 +5064,13 @@ static int maybe_insert_hole(struct btrfs_root *root, struct btrfs_inode *inode,
+ 	int ret;
+ 
+ 	/*
+-	 * Still need to make sure the inode looks like it's been updated so
+-	 * that any holes get logged if we fsync.
++	 * If NO_HOLES is enabled, we don't need to do anything.
++	 * Later, up in the call chain, either btrfs_set_inode_last_sub_trans()
++	 * or btrfs_update_inode() will be called, which guarantee that the next
++	 * fsync will know this inode was changed and needs to be logged.
+ 	 */
+-	if (btrfs_fs_incompat(fs_info, NO_HOLES)) {
+-		inode->last_trans = fs_info->generation;
+-		inode->last_sub_trans = root->log_transid;
+-		inode->last_log_commit = root->last_log_commit;
++	if (btrfs_fs_incompat(fs_info, NO_HOLES))
+ 		return 0;
+-	}
+ 
+ 	/*
+ 	 * 1 - for the one we're dropping
+@@ -9774,10 +9772,6 @@ static int start_delalloc_inodes(struct btrfs_root *root,
+ 					 &work->work);
+ 		} else {
+ 			ret = sync_inode(inode, wbc);
+-			if (!ret &&
+-			    test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
+-				     &BTRFS_I(inode)->runtime_flags))
+-				ret = sync_inode(inode, wbc);
+ 			btrfs_add_delayed_iput(inode);
+ 			if (ret || wbc->nr_to_write <= 0)
+ 				goto out;
+diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
+index 6c413bb451a3d..8326f4bee89ff 100644
+--- a/fs/btrfs/ordered-data.c
++++ b/fs/btrfs/ordered-data.c
+@@ -917,6 +917,7 @@ static int clone_ordered_extent(struct btrfs_ordered_extent *ordered, u64 pos,
+ 				u64 len)
+ {
+ 	struct inode *inode = ordered->inode;
++	struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info;
+ 	u64 file_offset = ordered->file_offset + pos;
+ 	u64 disk_bytenr = ordered->disk_bytenr + pos;
+ 	u64 num_bytes = len;
+@@ -934,6 +935,13 @@ static int clone_ordered_extent(struct btrfs_ordered_extent *ordered, u64 pos,
+ 	else
+ 		type = __ffs(flags_masked);
+ 
++	/*
++	 * The splitting extent is already counted and will be added again
++	 * in btrfs_add_ordered_extent_*(). Subtract num_bytes to avoid
++	 * double counting.
++	 */
++	percpu_counter_add_batch(&fs_info->ordered_bytes, -num_bytes,
++				 fs_info->delalloc_batch);
+ 	if (test_bit(BTRFS_ORDERED_COMPRESSED, &ordered->flags)) {
+ 		WARN_ON_ONCE(1);
+ 		ret = btrfs_add_ordered_extent_compress(BTRFS_I(inode),
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index 2dc674b7c3b14..088220539788c 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -539,9 +539,49 @@ static void shrink_delalloc(struct btrfs_fs_info *fs_info,
+ 	while ((delalloc_bytes || ordered_bytes) && loops < 3) {
+ 		u64 temp = min(delalloc_bytes, to_reclaim) >> PAGE_SHIFT;
+ 		long nr_pages = min_t(u64, temp, LONG_MAX);
++		int async_pages;
+ 
+ 		btrfs_start_delalloc_roots(fs_info, nr_pages, true);
+ 
++		/*
++		 * We need to make sure any outstanding async pages are now
++		 * processed before we continue.  This is because things like
++		 * sync_inode() try to be smart and skip writing if the inode is
++		 * marked clean.  We don't use filemap_fwrite for flushing
++		 * because we want to control how many pages we write out at a
++		 * time, thus this is the only safe way to make sure we've
++		 * waited for outstanding compressed workers to have started
++		 * their jobs and thus have ordered extents set up properly.
++		 *
++		 * This exists because we do not want to wait for each
++		 * individual inode to finish its async work, we simply want to
++		 * start the IO on everybody, and then come back here and wait
++		 * for all of the async work to catch up.  Once we're done with
++		 * that we know we'll have ordered extents for everything and we
++		 * can decide if we wait for that or not.
++		 *
++		 * If we choose to replace this in the future, make absolutely
++		 * sure that the proper waiting is being done in the async case,
++		 * as there have been bugs in that area before.
++		 */
++		async_pages = atomic_read(&fs_info->async_delalloc_pages);
++		if (!async_pages)
++			goto skip_async;
++
++		/*
++		 * We don't want to wait forever, if we wrote less pages in this
++		 * loop than we have outstanding, only wait for that number of
++		 * pages, otherwise we can wait for all async pages to finish
++		 * before continuing.
++		 */
++		if (async_pages > nr_pages)
++			async_pages -= nr_pages;
++		else
++			async_pages = 0;
++		wait_event(fs_info->async_submit_wait,
++			   atomic_read(&fs_info->async_delalloc_pages) <=
++			   async_pages);
++skip_async:
+ 		loops++;
+ 		if (wait_ordered && !trans) {
+ 			btrfs_wait_ordered_roots(fs_info, items, 0, (u64)-1);
+@@ -793,7 +833,7 @@ static bool need_preemptive_reclaim(struct btrfs_fs_info *fs_info,
+ 				    struct btrfs_space_info *space_info)
+ {
+ 	u64 ordered, delalloc;
+-	u64 thresh = div_factor_fine(space_info->total_bytes, 98);
++	u64 thresh = div_factor_fine(space_info->total_bytes, 90);
+ 	u64 used;
+ 
+ 	/* If we're just plain full then async reclaim just slows us down. */
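
The shrink_delalloc() logic above caps how long the flusher waits: having
just kicked off nr_pages of writeback, it waits only until the async page
count drops by that much, unless fewer were outstanding to begin with. The
target computation in isolation:

#include <stdio.h>

static long wait_target(long async_pages, long nr_pages)
{
	return async_pages > nr_pages ? async_pages - nr_pages : 0;
}

int main(void)
{
	printf("%ld\n", wait_target(1000, 256));	/* wait until <= 744 remain */
	printf("%ld\n", wait_target(100, 256));		/* wait for a full drain    */
	return 0;
}
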
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 24555cc1f42d5..9e9ab41df7da6 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -753,7 +753,9 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans,
+ 			 */
+ 			ret = btrfs_lookup_data_extent(fs_info, ins.objectid,
+ 						ins.offset);
+-			if (ret == 0) {
++			if (ret < 0) {
++				goto out;
++			} else if (ret == 0) {
+ 				btrfs_init_generic_ref(&ref,
+ 						BTRFS_ADD_DELAYED_REF,
+ 						ins.objectid, ins.offset, 0);
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 7d5875824be7a..dc1f31cf3d4a5 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1130,6 +1130,9 @@ static void btrfs_close_one_device(struct btrfs_device *device)
+ 		fs_devices->rw_devices--;
+ 	}
+ 
++	if (device->devid == BTRFS_DEV_REPLACE_DEVID)
++		clear_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state);
++
+ 	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state))
+ 		fs_devices->missing_devices--;
+ 
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 805c656a2e72a..f602e51c00065 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1756,6 +1756,9 @@ struct ceph_cap_flush *ceph_alloc_cap_flush(void)
+ 	struct ceph_cap_flush *cf;
+ 
+ 	cf = kmem_cache_alloc(ceph_cap_flush_cachep, GFP_KERNEL);
++	if (!cf)
++		return NULL;
++
+ 	cf->is_capsnap = false;
+ 	return cf;
+ }
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index a92a1fb7cb526..4c22f73b31232 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -889,7 +889,7 @@ sess_alloc_buffer(struct sess_data *sess_data, int wct)
+ 	return 0;
+ 
+ out_free_smb_buf:
+-	kfree(smb_buf);
++	cifs_small_buf_release(smb_buf);
+ 	sess_data->iov[0].iov_base = NULL;
+ 	sess_data->iov[0].iov_len = 0;
+ 	sess_data->buf0_type = CIFS_NO_BUFFER;
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index f795049e63d55..6c208108d69c1 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -444,7 +444,7 @@ static int f2fs_set_meta_page_dirty(struct page *page)
+ 	if (!PageDirty(page)) {
+ 		__set_page_dirty_nobuffers(page);
+ 		inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_META);
+-		f2fs_set_page_private(page, 0);
++		set_page_private_reference(page);
+ 		return 1;
+ 	}
+ 	return 0;
+@@ -1018,7 +1018,7 @@ void f2fs_update_dirty_page(struct inode *inode, struct page *page)
+ 	inode_inc_dirty_pages(inode);
+ 	spin_unlock(&sbi->inode_lock[type]);
+ 
+-	f2fs_set_page_private(page, 0);
++	set_page_private_reference(page);
+ }
+ 
+ void f2fs_remove_dirty_inode(struct inode *inode)
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 925a5ca3744a9..14e6a78503f18 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -12,9 +12,11 @@
+ #include <linux/lzo.h>
+ #include <linux/lz4.h>
+ #include <linux/zstd.h>
++#include <linux/pagevec.h>
+ 
+ #include "f2fs.h"
+ #include "node.h"
++#include "segment.h"
+ #include <trace/events/f2fs.h>
+ 
+ static struct kmem_cache *cic_entry_slab;
+@@ -74,7 +76,7 @@ bool f2fs_is_compressed_page(struct page *page)
+ 		return false;
+ 	if (!page_private(page))
+ 		return false;
+-	if (IS_ATOMIC_WRITTEN_PAGE(page) || IS_DUMMY_WRITTEN_PAGE(page))
++	if (page_private_nonpointer(page))
+ 		return false;
+ 
+ 	f2fs_bug_on(F2FS_M_SB(page->mapping),
+@@ -85,8 +87,7 @@ bool f2fs_is_compressed_page(struct page *page)
+ static void f2fs_set_compressed_page(struct page *page,
+ 		struct inode *inode, pgoff_t index, void *data)
+ {
+-	SetPagePrivate(page);
+-	set_page_private(page, (unsigned long)data);
++	attach_page_private(page, (void *)data);
+ 
+ 	/* i_crypto_info and iv index */
+ 	page->index = index;
+@@ -589,8 +590,7 @@ static void f2fs_compress_free_page(struct page *page)
+ {
+ 	if (!page)
+ 		return;
+-	set_page_private(page, (unsigned long)NULL);
+-	ClearPagePrivate(page);
++	detach_page_private(page);
+ 	page->mapping = NULL;
+ 	unlock_page(page);
+ 	mempool_free(page, compress_page_pool);
+@@ -738,7 +738,7 @@ out:
+ 	return ret;
+ }
+ 
+-static void f2fs_decompress_cluster(struct decompress_io_ctx *dic)
++void f2fs_decompress_cluster(struct decompress_io_ctx *dic)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(dic->inode);
+ 	struct f2fs_inode_info *fi = F2FS_I(dic->inode);
+@@ -837,7 +837,8 @@ out_end_io:
+  * page being waited on in the cluster, and if so, it decompresses the cluster
+  * (or in the case of a failure, cleans up without actually decompressing).
+  */
+-void f2fs_end_read_compressed_page(struct page *page, bool failed)
++void f2fs_end_read_compressed_page(struct page *page, bool failed,
++						block_t blkaddr)
+ {
+ 	struct decompress_io_ctx *dic =
+ 			(struct decompress_io_ctx *)page_private(page);
+@@ -847,6 +848,9 @@ void f2fs_end_read_compressed_page(struct page *page, bool failed)
+ 
+ 	if (failed)
+ 		WRITE_ONCE(dic->failed, true);
++	else if (blkaddr)
++		f2fs_cache_compressed_page(sbi, page,
++					dic->inode->i_ino, blkaddr);
+ 
+ 	if (atomic_dec_and_test(&dic->remaining_pages))
+ 		f2fs_decompress_cluster(dic);
+@@ -1359,12 +1363,6 @@ out_destroy_crypt:
+ 
+ 	for (--i; i >= 0; i--)
+ 		fscrypt_finalize_bounce_page(&cc->cpages[i]);
+-	for (i = 0; i < cc->nr_cpages; i++) {
+-		if (!cc->cpages[i])
+-			continue;
+-		f2fs_compress_free_page(cc->cpages[i]);
+-		cc->cpages[i] = NULL;
+-	}
+ out_put_cic:
+ 	kmem_cache_free(cic_entry_slab, cic);
+ out_put_dnode:
+@@ -1375,6 +1373,12 @@ out_unlock_op:
+ 	else
+ 		f2fs_unlock_op(sbi);
+ out_free:
++	for (i = 0; i < cc->nr_cpages; i++) {
++		if (!cc->cpages[i])
++			continue;
++		f2fs_compress_free_page(cc->cpages[i]);
++		cc->cpages[i] = NULL;
++	}
+ 	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
+ 	cc->cpages = NULL;
+ 	return -EAGAIN;
+@@ -1399,7 +1403,7 @@ void f2fs_compress_write_end_io(struct bio *bio, struct page *page)
+ 
+ 	for (i = 0; i < cic->nr_rpages; i++) {
+ 		WARN_ON(!cic->rpages[i]);
+-		clear_cold_data(cic->rpages[i]);
++		clear_page_private_gcing(cic->rpages[i]);
+ 		end_page_writeback(cic->rpages[i]);
+ 	}
+ 
+@@ -1685,6 +1689,164 @@ void f2fs_put_page_dic(struct page *page)
+ 	f2fs_put_dic(dic);
+ }
+ 
++const struct address_space_operations f2fs_compress_aops = {
++	.releasepage = f2fs_release_page,
++	.invalidatepage = f2fs_invalidate_page,
++};
++
++struct address_space *COMPRESS_MAPPING(struct f2fs_sb_info *sbi)
++{
++	return sbi->compress_inode->i_mapping;
++}
++
++void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi, block_t blkaddr)
++{
++	if (!sbi->compress_inode)
++		return;
++	invalidate_mapping_pages(COMPRESS_MAPPING(sbi), blkaddr, blkaddr);
++}
++
++void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
++						nid_t ino, block_t blkaddr)
++{
++	struct page *cpage;
++	int ret;
++
++	if (!test_opt(sbi, COMPRESS_CACHE))
++		return;
++
++	if (!f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC_ENHANCE_READ))
++		return;
++
++	if (!f2fs_available_free_memory(sbi, COMPRESS_PAGE))
++		return;
++
++	cpage = find_get_page(COMPRESS_MAPPING(sbi), blkaddr);
++	if (cpage) {
++		f2fs_put_page(cpage, 0);
++		return;
++	}
++
++	cpage = alloc_page(__GFP_NOWARN | __GFP_IO);
++	if (!cpage)
++		return;
++
++	ret = add_to_page_cache_lru(cpage, COMPRESS_MAPPING(sbi),
++						blkaddr, GFP_NOFS);
++	if (ret) {
++		f2fs_put_page(cpage, 0);
++		return;
++	}
++
++	set_page_private_data(cpage, ino);
++
++	if (!f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC_ENHANCE_READ))
++		goto out;
++
++	memcpy(page_address(cpage), page_address(page), PAGE_SIZE);
++	SetPageUptodate(cpage);
++out:
++	f2fs_put_page(cpage, 1);
++}
++
++bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
++								block_t blkaddr)
++{
++	struct page *cpage;
++	bool hitted = false;
++
++	if (!test_opt(sbi, COMPRESS_CACHE))
++		return false;
++
++	cpage = f2fs_pagecache_get_page(COMPRESS_MAPPING(sbi),
++				blkaddr, FGP_LOCK | FGP_NOWAIT, GFP_NOFS);
++	if (cpage) {
++		if (PageUptodate(cpage)) {
++			atomic_inc(&sbi->compress_page_hit);
++			memcpy(page_address(page),
++				page_address(cpage), PAGE_SIZE);
++			hitted = true;
++		}
++		f2fs_put_page(cpage, 1);
++	}
++
++	return hitted;
++}
++
++void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino)
++{
++	struct address_space *mapping = sbi->compress_inode->i_mapping;
++	struct pagevec pvec;
++	pgoff_t index = 0;
++	pgoff_t end = MAX_BLKADDR(sbi);
++
++	if (!mapping->nrpages)
++		return;
++
++	pagevec_init(&pvec);
++
++	do {
++		unsigned int nr_pages;
++		int i;
++
++		nr_pages = pagevec_lookup_range(&pvec, mapping,
++						&index, end - 1);
++		if (!nr_pages)
++			break;
++
++		for (i = 0; i < nr_pages; i++) {
++			struct page *page = pvec.pages[i];
++
++			if (page->index > end)
++				break;
++
++			lock_page(page);
++			if (page->mapping != mapping) {
++				unlock_page(page);
++				continue;
++			}
++
++			if (ino != get_page_private_data(page)) {
++				unlock_page(page);
++				continue;
++			}
++
++			generic_error_remove_page(mapping, page);
++			unlock_page(page);
++		}
++		pagevec_release(&pvec);
++		cond_resched();
++	} while (index < end);
++}
++
++int f2fs_init_compress_inode(struct f2fs_sb_info *sbi)
++{
++	struct inode *inode;
++
++	if (!test_opt(sbi, COMPRESS_CACHE))
++		return 0;
++
++	inode = f2fs_iget(sbi->sb, F2FS_COMPRESS_INO(sbi));
++	if (IS_ERR(inode))
++		return PTR_ERR(inode);
++	sbi->compress_inode = inode;
++
++	sbi->compress_percent = COMPRESS_PERCENT;
++	sbi->compress_watermark = COMPRESS_WATERMARK;
++
++	atomic_set(&sbi->compress_page_hit, 0);
++
++	return 0;
++}
++
++void f2fs_destroy_compress_inode(struct f2fs_sb_info *sbi)
++{
++	if (!sbi->compress_inode)
++		return;
++	iput(sbi->compress_inode);
++	sbi->compress_inode = NULL;
++}
++
+ int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi)
+ {
+ 	dev_t dev = sbi->sb->s_bdev->bd_dev;
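
f2fs_cache_compressed_page() above follows a common page-cache caching
shape: cheap bail-outs first (option off, bad blkaddr, memory pressure), a
lookup to skip blocks already cached, then allocate, insert, and fill,
treating every failure as a silent no-op since the cache is best effort. A
single-threaded userspace analogue with an array standing in for the
address space (all names hypothetical):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NR_BLOCKS 8
static char *cache[NR_BLOCKS];

static void cache_put(unsigned int blk, const char *data)
{
	char *page;

	if (blk >= NR_BLOCKS)
		return;			/* f2fs_is_valid_blkaddr() stand-in    */
	if (cache[blk])
		return;			/* find_get_page() hit: already cached */

	page = malloc(64);
	if (!page)
		return;			/* best effort: failure is not an error */

	cache[blk] = page;		/* add_to_page_cache_lru() equivalent */
	strncpy(page, data, 63);
	page[63] = '\0';		/* filled and "uptodate" */
}

int main(void)
{
	cache_put(3, "compressed cluster");
	cache_put(3, "duplicate insert is a no-op");
	printf("block 3: %s\n", cache[3]);

	for (int i = 0; i < NR_BLOCKS; i++)
		free(cache[i]);
	return 0;
}
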
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index e2d0c7d9673e0..198e5ad7c98b5 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -58,18 +58,19 @@ static bool __is_cp_guaranteed(struct page *page)
+ 	if (!mapping)
+ 		return false;
+ 
+-	if (f2fs_is_compressed_page(page))
+-		return false;
+-
+ 	inode = mapping->host;
+ 	sbi = F2FS_I_SB(inode);
+ 
+ 	if (inode->i_ino == F2FS_META_INO(sbi) ||
+ 			inode->i_ino == F2FS_NODE_INO(sbi) ||
+-			S_ISDIR(inode->i_mode) ||
+-			(S_ISREG(inode->i_mode) &&
++			S_ISDIR(inode->i_mode))
++		return true;
++
++	if (f2fs_is_compressed_page(page))
++		return false;
++	if ((S_ISREG(inode->i_mode) &&
+ 			(f2fs_is_atomic_file(inode) || IS_NOQUOTA(inode))) ||
+-			is_cold_data(page))
++			page_private_gcing(page))
+ 		return true;
+ 	return false;
+ }
+@@ -131,7 +132,7 @@ static void f2fs_finish_read_bio(struct bio *bio)
+ 
+ 		if (f2fs_is_compressed_page(page)) {
+ 			if (bio->bi_status)
+-				f2fs_end_read_compressed_page(page, true);
++				f2fs_end_read_compressed_page(page, true, 0);
+ 			f2fs_put_page_dic(page);
+ 			continue;
+ 		}
+@@ -227,15 +228,19 @@ static void f2fs_handle_step_decompress(struct bio_post_read_ctx *ctx)
+ 	struct bio_vec *bv;
+ 	struct bvec_iter_all iter_all;
+ 	bool all_compressed = true;
++	block_t blkaddr = SECTOR_TO_BLOCK(ctx->bio->bi_iter.bi_sector);
+ 
+ 	bio_for_each_segment_all(bv, ctx->bio, iter_all) {
+ 		struct page *page = bv->bv_page;
+ 
+ 		/* PG_error was set if decryption failed. */
+ 		if (f2fs_is_compressed_page(page))
+-			f2fs_end_read_compressed_page(page, PageError(page));
++			f2fs_end_read_compressed_page(page, PageError(page),
++						blkaddr);
+ 		else
+ 			all_compressed = false;
++
++		blkaddr++;
+ 	}
+ 
+ 	/*
+@@ -299,9 +304,8 @@ static void f2fs_write_end_io(struct bio *bio)
+ 		struct page *page = bvec->bv_page;
+ 		enum count_type type = WB_DATA_TYPE(page);
+ 
+-		if (IS_DUMMY_WRITTEN_PAGE(page)) {
+-			set_page_private(page, (unsigned long)NULL);
+-			ClearPagePrivate(page);
++		if (page_private_dummy(page)) {
++			clear_page_private_dummy(page);
+ 			unlock_page(page);
+ 			mempool_free(page, sbi->write_io_dummy);
+ 
+@@ -331,7 +335,7 @@ static void f2fs_write_end_io(struct bio *bio)
+ 		dec_page_count(sbi, type);
+ 		if (f2fs_in_warm_node_list(sbi, page))
+ 			f2fs_del_fsync_node_entry(sbi, page);
+-		clear_cold_data(page);
++		clear_page_private_gcing(page);
+ 		end_page_writeback(page);
+ 	}
+ 	if (!get_pages(sbi, F2FS_WB_CP_DATA) &&
+@@ -455,10 +459,11 @@ static inline void __submit_bio(struct f2fs_sb_info *sbi,
+ 					      GFP_NOIO | __GFP_NOFAIL);
+ 			f2fs_bug_on(sbi, !page);
+ 
+-			zero_user_segment(page, 0, PAGE_SIZE);
+-			SetPagePrivate(page);
+-			set_page_private(page, DUMMY_WRITTEN_PAGE);
+ 			lock_page(page);
++
++			zero_user_segment(page, 0, PAGE_SIZE);
++			set_page_private_dummy(page);
++
+ 			if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE)
+ 				f2fs_bug_on(sbi, 1);
+ 		}
+@@ -1351,9 +1356,11 @@ alloc:
+ 	old_blkaddr = dn->data_blkaddr;
+ 	f2fs_allocate_data_block(sbi, NULL, old_blkaddr, &dn->data_blkaddr,
+ 				&sum, seg_type, NULL);
+-	if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO)
++	if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO) {
+ 		invalidate_mapping_pages(META_MAPPING(sbi),
+ 					old_blkaddr, old_blkaddr);
++		f2fs_invalidate_compress_page(sbi, old_blkaddr);
++	}
+ 	f2fs_update_data_blkaddr(dn, dn->data_blkaddr);
+ 
+ 	/*
+@@ -1483,7 +1490,21 @@ next_dnode:
+ 	if (err) {
+ 		if (flag == F2FS_GET_BLOCK_BMAP)
+ 			map->m_pblk = 0;
++
+ 		if (err == -ENOENT) {
++			/*
++			 * There is one exceptional case: read_node_page()
++			 * may return -ENOENT because the filesystem has been
++			 * shut down or hit cp_error, so force the error
++			 * number to -EIO in that case.
++			 */
++			if (map->m_may_create &&
++				(is_sbi_flag_set(sbi, SBI_IS_SHUTDOWN) ||
++				f2fs_cp_error(sbi))) {
++				err = -EIO;
++				goto unlock_out;
++			}
++
+ 			err = 0;
+ 			if (map->m_next_pgofs)
+ 				*map->m_next_pgofs =
+@@ -2130,6 +2151,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ 			continue;
+ 		}
+ 		unlock_page(page);
++		if (for_write)
++			put_page(page);
+ 		cc->rpages[i] = NULL;
+ 		cc->nr_rpages--;
+ 	}
+@@ -2173,7 +2196,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ 		goto out_put_dnode;
+ 	}
+ 
+-	for (i = 0; i < dic->nr_cpages; i++) {
++	for (i = 0; i < cc->nr_cpages; i++) {
+ 		struct page *page = dic->cpages[i];
+ 		block_t blkaddr;
+ 		struct bio_post_read_ctx *ctx;
+@@ -2181,6 +2204,14 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ 		blkaddr = data_blkaddr(dn.inode, dn.node_page,
+ 						dn.ofs_in_node + i + 1);
+ 
++		f2fs_wait_on_block_writeback(inode, blkaddr);
++
++		if (f2fs_load_compressed_page(sbi, page, blkaddr)) {
++			if (atomic_dec_and_test(&dic->remaining_pages))
++				f2fs_decompress_cluster(dic);
++			continue;
++		}
++
+ 		if (bio && (!page_is_mergeable(sbi, bio,
+ 					*last_block_in_bio, blkaddr) ||
+ 		    !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) {
+@@ -2202,8 +2233,6 @@ submit_and_realloc:
+ 			}
+ 		}
+ 
+-		f2fs_wait_on_block_writeback(inode, blkaddr);
+-
+ 		if (bio_add_page(bio, page, blocksize, 0) < blocksize)
+ 			goto submit_and_realloc;
+ 
+@@ -2482,9 +2511,9 @@ bool f2fs_should_update_outplace(struct inode *inode, struct f2fs_io_info *fio)
+ 	if (f2fs_is_atomic_file(inode))
+ 		return true;
+ 	if (fio) {
+-		if (is_cold_data(fio->page))
++		if (page_private_gcing(fio->page))
+ 			return true;
+-		if (IS_ATOMIC_WRITTEN_PAGE(fio->page))
++		if (page_private_dummy(fio->page))
+ 			return true;
+ 		if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED) &&
+ 			f2fs_is_checkpointed_data(sbi, fio->old_blkaddr)))
+@@ -2540,7 +2569,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
+ 	/* This page is already truncated */
+ 	if (fio->old_blkaddr == NULL_ADDR) {
+ 		ClearPageUptodate(page);
+-		clear_cold_data(page);
++		clear_page_private_gcing(page);
+ 		goto out_writepage;
+ 	}
+ got_it:
+@@ -2750,7 +2779,7 @@ out:
+ 	inode_dec_dirty_pages(inode);
+ 	if (err) {
+ 		ClearPageUptodate(page);
+-		clear_cold_data(page);
++		clear_page_private_gcing(page);
+ 	}
+ 
+ 	if (wbc->for_reclaim) {
+@@ -3224,7 +3253,7 @@ restart:
+ 			f2fs_do_read_inline_data(page, ipage);
+ 			set_inode_flag(inode, FI_DATA_EXIST);
+ 			if (inode->i_nlink)
+-				set_inline_node(ipage);
++				set_page_private_inline(ipage);
+ 		} else {
+ 			err = f2fs_convert_inline_page(&dn, page);
+ 			if (err)
+@@ -3615,12 +3644,20 @@ void f2fs_invalidate_page(struct page *page, unsigned int offset,
+ 		}
+ 	}
+ 
+-	clear_cold_data(page);
++	clear_page_private_gcing(page);
+ 
+-	if (IS_ATOMIC_WRITTEN_PAGE(page))
++	if (test_opt(sbi, COMPRESS_CACHE)) {
++		if (f2fs_compressed_file(inode))
++			f2fs_invalidate_compress_pages(sbi, inode->i_ino);
++		if (inode->i_ino == F2FS_COMPRESS_INO(sbi))
++			clear_page_private_data(page);
++	}
++
++	if (page_private_atomic(page))
+ 		return f2fs_drop_inmem_page(inode, page);
+ 
+-	f2fs_clear_page_private(page);
++	detach_page_private(page);
++	set_page_private(page, 0);
+ }
+ 
+ int f2fs_release_page(struct page *page, gfp_t wait)
+@@ -3630,11 +3667,23 @@ int f2fs_release_page(struct page *page, gfp_t wait)
+ 		return 0;
+ 
+ 	/* This is atomic written page, keep Private */
+-	if (IS_ATOMIC_WRITTEN_PAGE(page))
++	if (page_private_atomic(page))
+ 		return 0;
+ 
+-	clear_cold_data(page);
+-	f2fs_clear_page_private(page);
++	if (test_opt(F2FS_P_SB(page), COMPRESS_CACHE)) {
++		struct f2fs_sb_info *sbi = F2FS_P_SB(page);
++		struct inode *inode = page->mapping->host;
++
++		if (f2fs_compressed_file(inode))
++			f2fs_invalidate_compress_pages(sbi, inode->i_ino);
++		if (inode->i_ino == F2FS_COMPRESS_INO(sbi))
++			clear_page_private_data(page);
++	}
++
++	clear_page_private_gcing(page);
++
++	detach_page_private(page);
++	set_page_private(page, 0);
+ 	return 1;
+ }
+ 
+@@ -3650,7 +3699,7 @@ static int f2fs_set_data_page_dirty(struct page *page)
+ 		return __set_page_dirty_nobuffers(page);
+ 
+ 	if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) {
+-		if (!IS_ATOMIC_WRITTEN_PAGE(page)) {
++		if (!page_private_atomic(page)) {
+ 			f2fs_register_inmem_page(inode, page);
+ 			return 1;
+ 		}
+@@ -3742,7 +3791,7 @@ int f2fs_migrate_page(struct address_space *mapping,
+ {
+ 	int rc, extra_count;
+ 	struct f2fs_inode_info *fi = F2FS_I(mapping->host);
+-	bool atomic_written = IS_ATOMIC_WRITTEN_PAGE(page);
++	bool atomic_written = page_private_atomic(page);
+ 
+ 	BUG_ON(PageWriteback(page));
+ 
+@@ -3778,8 +3827,13 @@ int f2fs_migrate_page(struct address_space *mapping,
+ 	}
+ 
+ 	if (PagePrivate(page)) {
+-		f2fs_set_page_private(newpage, page_private(page));
+-		f2fs_clear_page_private(page);
++		set_page_private(newpage, page_private(page));
++		SetPagePrivate(newpage);
++		get_page(newpage);
++
++		set_page_private(page, 0);
++		ClearPagePrivate(page);
++		put_page(page);
+ 	}
+ 
+ 	if (mode != MIGRATE_SYNC_NO_COPY)
+diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
+index c03949a7ccff5..833325038ef31 100644
+--- a/fs/f2fs/debug.c
++++ b/fs/f2fs/debug.c
+@@ -152,6 +152,12 @@ static void update_general_status(struct f2fs_sb_info *sbi)
+ 		si->node_pages = NODE_MAPPING(sbi)->nrpages;
+ 	if (sbi->meta_inode)
+ 		si->meta_pages = META_MAPPING(sbi)->nrpages;
++#ifdef CONFIG_F2FS_FS_COMPRESSION
++	if (sbi->compress_inode) {
++		si->compress_pages = COMPRESS_MAPPING(sbi)->nrpages;
++		si->compress_page_hit = atomic_read(&sbi->compress_page_hit);
++	}
++#endif
+ 	si->nats = NM_I(sbi)->nat_cnt[TOTAL_NAT];
+ 	si->dirty_nats = NM_I(sbi)->nat_cnt[DIRTY_NAT];
+ 	si->sits = MAIN_SEGS(sbi);
+@@ -309,6 +315,12 @@ get_cache:
+ 
+ 		si->page_mem += (unsigned long long)npages << PAGE_SHIFT;
+ 	}
++#ifdef CONFIG_F2FS_FS_COMPRESSION
++	if (sbi->compress_inode) {
++		unsigned npages = COMPRESS_MAPPING(sbi)->nrpages;
++		si->page_mem += (unsigned long long)npages << PAGE_SHIFT;
++	}
++#endif
+ }
+ 
+ static int stat_show(struct seq_file *s, void *v)
+@@ -476,6 +488,7 @@ static int stat_show(struct seq_file *s, void *v)
+ 			"volatile IO: %4d (Max. %4d)\n",
+ 			   si->inmem_pages, si->aw_cnt, si->max_aw_cnt,
+ 			   si->vw_cnt, si->max_vw_cnt);
++		seq_printf(s, "  - compress: %4d, hit:%8d\n", si->compress_pages, si->compress_page_hit);
+ 		seq_printf(s, "  - nodes: %4d in %4d\n",
+ 			   si->ndirty_node, si->node_pages);
+ 		seq_printf(s, "  - dents: %4d in dirs:%4d (%4d)\n",
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index dc7ce79672b8d..c821015c0a469 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -929,11 +929,15 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
+ 		!f2fs_truncate_hole(dir, page->index, page->index + 1)) {
+ 		f2fs_clear_page_cache_dirty_tag(page);
+ 		clear_page_dirty_for_io(page);
+-		f2fs_clear_page_private(page);
+ 		ClearPageUptodate(page);
+-		clear_cold_data(page);
++
++		clear_page_private_gcing(page);
++
+ 		inode_dec_dirty_pages(dir);
+ 		f2fs_remove_dirty_inode(dir);
++
++		detach_page_private(page);
++		set_page_private(page, 0);
+ 	}
+ 	f2fs_put_page(page, 1);
+ 
+@@ -991,6 +995,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(d->inode);
+ 	struct blk_plug plug;
+ 	bool readdir_ra = sbi->readdir_ra == 1;
++	bool found_valid_dirent = false;
+ 	int err = 0;
+ 
+ 	bit_pos = ((unsigned long)ctx->pos % d->max);
+@@ -1005,13 +1010,15 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ 
+ 		de = &d->dentry[bit_pos];
+ 		if (de->name_len == 0) {
++			if (found_valid_dirent || !bit_pos) {
++				printk_ratelimited(
++					"%sF2FS-fs (%s): invalid namelen(0), ino:%u, run fsck to fix.",
++					KERN_WARNING, sbi->sb->s_id,
++					le32_to_cpu(de->ino));
++				set_sbi_flag(sbi, SBI_NEED_FSCK);
++			}
+ 			bit_pos++;
+ 			ctx->pos = start_pos + bit_pos;
+-			printk_ratelimited(
+-				"%sF2FS-fs (%s): invalid namelen(0), ino:%u, run fsck to fix.",
+-				KERN_WARNING, sbi->sb->s_id,
+-				le32_to_cpu(de->ino));
+-			set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 			continue;
+ 		}
+ 
+@@ -1054,6 +1061,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ 			f2fs_ra_node_page(sbi, le32_to_cpu(de->ino));
+ 
+ 		ctx->pos = start_pos + bit_pos;
++		found_valid_dirent = true;
+ 	}
+ out:
+ 	if (readdir_ra)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index a5de48e768d7b..395f18e90a8f6 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -43,6 +43,7 @@ enum {
+ 	FAULT_KVMALLOC,
+ 	FAULT_PAGE_ALLOC,
+ 	FAULT_PAGE_GET,
++	FAULT_ALLOC_BIO,	/* obsolete: bio_alloc() will never fail */
+ 	FAULT_ALLOC_NID,
+ 	FAULT_ORPHAN,
+ 	FAULT_BLOCK,
+@@ -98,6 +99,7 @@ extern const char *f2fs_fault_name[FAULT_MAX];
+ #define F2FS_MOUNT_ATGC			0x08000000
+ #define F2FS_MOUNT_MERGE_CHECKPOINT	0x10000000
+ #define	F2FS_MOUNT_GC_MERGE		0x20000000
++#define F2FS_MOUNT_COMPRESS_CACHE	0x40000000
+ 
+ #define F2FS_OPTION(sbi)	((sbi)->mount_opt)
+ #define clear_opt(sbi, option)	(F2FS_OPTION(sbi).opt &= ~F2FS_MOUNT_##option)
+@@ -1291,17 +1293,116 @@ enum {
+ 				 */
+ };
+ 
++static inline int f2fs_test_bit(unsigned int nr, char *addr);
++static inline void f2fs_set_bit(unsigned int nr, char *addr);
++static inline void f2fs_clear_bit(unsigned int nr, char *addr);
++
+ /*
+- * this value is set in page as a private data which indicate that
+- * the page is atomically written, and it is in inmem_pages list.
++ * Layout of f2fs page.private:
++ *
++ * Layout A: lowest bit should be 1
++ * | bit0 = 1 | bit1 | bit2 | ... | bit MAX | private data .... |
++ * bit 0	PAGE_PRIVATE_NOT_POINTER
++ * bit 1	PAGE_PRIVATE_ATOMIC_WRITE
++ * bit 2	PAGE_PRIVATE_DUMMY_WRITE
++ * bit 3	PAGE_PRIVATE_ONGOING_MIGRATION
++ * bit 4	PAGE_PRIVATE_INLINE_INODE
++ * bit 5	PAGE_PRIVATE_REF_RESOURCE
++ * bit 6-	f2fs private data
++ *
++ * Layout B: lowest bit should be 0
++ * page.private is a wrapped pointer.
+  */
+-#define ATOMIC_WRITTEN_PAGE		((unsigned long)-1)
+-#define DUMMY_WRITTEN_PAGE		((unsigned long)-2)
++enum {
++	PAGE_PRIVATE_NOT_POINTER,		/* private contains non-pointer data */
++	PAGE_PRIVATE_ATOMIC_WRITE,		/* data page from atomic write path */
++	PAGE_PRIVATE_DUMMY_WRITE,		/* data page for padding aligned IO */
++	PAGE_PRIVATE_ONGOING_MIGRATION,		/* data page which is on-going migrating */
++	PAGE_PRIVATE_INLINE_INODE,		/* inode page contains inline data */
++	PAGE_PRIVATE_REF_RESOURCE,		/* dirty page has referenced resources */
++	PAGE_PRIVATE_MAX
++};
++
++#define PAGE_PRIVATE_GET_FUNC(name, flagname) \
++static inline bool page_private_##name(struct page *page) \
++{ \
++	return test_bit(PAGE_PRIVATE_NOT_POINTER, &page_private(page)) && \
++		test_bit(PAGE_PRIVATE_##flagname, &page_private(page)); \
++}
++
++#define PAGE_PRIVATE_SET_FUNC(name, flagname) \
++static inline void set_page_private_##name(struct page *page) \
++{ \
++	if (!PagePrivate(page)) { \
++		get_page(page); \
++		SetPagePrivate(page); \
++	} \
++	set_bit(PAGE_PRIVATE_NOT_POINTER, &page_private(page)); \
++	set_bit(PAGE_PRIVATE_##flagname, &page_private(page)); \
++}
++
++#define PAGE_PRIVATE_CLEAR_FUNC(name, flagname) \
++static inline void clear_page_private_##name(struct page *page) \
++{ \
++	clear_bit(PAGE_PRIVATE_##flagname, &page_private(page)); \
++	if (page_private(page) == 1 << PAGE_PRIVATE_NOT_POINTER) { \
++		set_page_private(page, 0); \
++		if (PagePrivate(page)) { \
++			ClearPagePrivate(page); \
++			put_page(page); \
++		}\
++	} \
++}
++
++PAGE_PRIVATE_GET_FUNC(nonpointer, NOT_POINTER);
++PAGE_PRIVATE_GET_FUNC(reference, REF_RESOURCE);
++PAGE_PRIVATE_GET_FUNC(inline, INLINE_INODE);
++PAGE_PRIVATE_GET_FUNC(gcing, ONGOING_MIGRATION);
++PAGE_PRIVATE_GET_FUNC(atomic, ATOMIC_WRITE);
++PAGE_PRIVATE_GET_FUNC(dummy, DUMMY_WRITE);
+ 
+-#define IS_ATOMIC_WRITTEN_PAGE(page)			\
+-		(page_private(page) == ATOMIC_WRITTEN_PAGE)
+-#define IS_DUMMY_WRITTEN_PAGE(page)			\
+-		(page_private(page) == DUMMY_WRITTEN_PAGE)
++PAGE_PRIVATE_SET_FUNC(reference, REF_RESOURCE);
++PAGE_PRIVATE_SET_FUNC(inline, INLINE_INODE);
++PAGE_PRIVATE_SET_FUNC(gcing, ONGOING_MIGRATION);
++PAGE_PRIVATE_SET_FUNC(atomic, ATOMIC_WRITE);
++PAGE_PRIVATE_SET_FUNC(dummy, DUMMY_WRITE);
++
++PAGE_PRIVATE_CLEAR_FUNC(reference, REF_RESOURCE);
++PAGE_PRIVATE_CLEAR_FUNC(inline, INLINE_INODE);
++PAGE_PRIVATE_CLEAR_FUNC(gcing, ONGOING_MIGRATION);
++PAGE_PRIVATE_CLEAR_FUNC(atomic, ATOMIC_WRITE);
++PAGE_PRIVATE_CLEAR_FUNC(dummy, DUMMY_WRITE);
++
++static inline unsigned long get_page_private_data(struct page *page)
++{
++	unsigned long data = page_private(page);
++
++	if (!test_bit(PAGE_PRIVATE_NOT_POINTER, &data))
++		return 0;
++	return data >> PAGE_PRIVATE_MAX;
++}
++
++static inline void set_page_private_data(struct page *page, unsigned long data)
++{
++	if (!PagePrivate(page)) {
++		get_page(page);
++		SetPagePrivate(page);
++	}
++	set_bit(PAGE_PRIVATE_NOT_POINTER, &page_private(page));
++	page_private(page) |= data << PAGE_PRIVATE_MAX;
++}
++
++static inline void clear_page_private_data(struct page *page)
++{
++	page_private(page) &= (1 << PAGE_PRIVATE_MAX) - 1;
++	if (page_private(page) == 1 << PAGE_PRIVATE_NOT_POINTER) {
++		set_page_private(page, 0);
++		if (PagePrivate(page)) {
++			ClearPagePrivate(page);
++			put_page(page);
++		}
++	}
++}
+ 
+ /* For compression */
+ enum compress_algorithm_type {
+@@ -1317,6 +1418,9 @@ enum compress_flag {
+ 	COMPRESS_MAX_FLAG,
+ };
+ 
++#define	COMPRESS_WATERMARK			20
++#define	COMPRESS_PERCENT			20
++
+ #define COMPRESS_DATA_RESERVED_SIZE		4
+ struct compress_data {
+ 	__le32 clen;			/* compressed data size */
+@@ -1626,6 +1730,12 @@ struct f2fs_sb_info {
+ 	u64 compr_written_block;
+ 	u64 compr_saved_block;
+ 	u32 compr_new_inode;
++
++	/* For compressed block cache */
++	struct inode *compress_inode;		/* cache compressed blocks */
++	unsigned int compress_percent;		/* cache page percentage */
++	unsigned int compress_watermark;	/* cache page watermark */
++	atomic_t compress_page_hit;		/* cache hit count */
+ #endif
+ };
+ 
+@@ -3169,20 +3279,6 @@ static inline bool __is_valid_data_blkaddr(block_t blkaddr)
+ 	return true;
+ }
+ 
+-static inline void f2fs_set_page_private(struct page *page,
+-						unsigned long data)
+-{
+-	if (PagePrivate(page))
+-		return;
+-
+-	attach_page_private(page, (void *)data);
+-}
+-
+-static inline void f2fs_clear_page_private(struct page *page)
+-{
+-	detach_page_private(page);
+-}
+-
+ /*
+  * file.c
+  */
+@@ -3606,7 +3702,8 @@ struct f2fs_stat_info {
+ 	unsigned int bimodal, avg_vblocks;
+ 	int util_free, util_valid, util_invalid;
+ 	int rsvd_segs, overp_segs;
+-	int dirty_count, node_pages, meta_pages;
++	int dirty_count, node_pages, meta_pages, compress_pages;
++	int compress_page_hit;
+ 	int prefree_count, call_count, cp_count, bg_cp_count;
+ 	int tot_segs, node_segs, data_segs, free_segs, free_secs;
+ 	int bg_node_segs, bg_data_segs;
+@@ -3942,7 +4039,9 @@ void f2fs_compress_write_end_io(struct bio *bio, struct page *page);
+ bool f2fs_is_compress_backend_ready(struct inode *inode);
+ int f2fs_init_compress_mempool(void);
+ void f2fs_destroy_compress_mempool(void);
+-void f2fs_end_read_compressed_page(struct page *page, bool failed);
++void f2fs_decompress_cluster(struct decompress_io_ctx *dic);
++void f2fs_end_read_compressed_page(struct page *page, bool failed,
++							block_t blkaddr);
+ bool f2fs_cluster_is_empty(struct compress_ctx *cc);
+ bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index);
+ void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page);
+@@ -3960,10 +4059,19 @@ void f2fs_put_page_dic(struct page *page);
+ int f2fs_init_compress_ctx(struct compress_ctx *cc);
+ void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse);
+ void f2fs_init_compress_info(struct f2fs_sb_info *sbi);
++int f2fs_init_compress_inode(struct f2fs_sb_info *sbi);
++void f2fs_destroy_compress_inode(struct f2fs_sb_info *sbi);
+ int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi);
+ void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi);
+ int __init f2fs_init_compress_cache(void);
+ void f2fs_destroy_compress_cache(void);
++struct address_space *COMPRESS_MAPPING(struct f2fs_sb_info *sbi);
++void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi, block_t blkaddr);
++void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
++						nid_t ino, block_t blkaddr);
++bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
++								block_t blkaddr);
++void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino);
+ #define inc_compr_inode_stat(inode)					\
+ 	do {								\
+ 		struct f2fs_sb_info *sbi = F2FS_I_SB(inode);		\
+@@ -3992,7 +4100,9 @@ static inline struct page *f2fs_compress_control_page(struct page *page)
+ }
+ static inline int f2fs_init_compress_mempool(void) { return 0; }
+ static inline void f2fs_destroy_compress_mempool(void) { }
+-static inline void f2fs_end_read_compressed_page(struct page *page, bool failed)
++static inline void f2fs_decompress_cluster(struct decompress_io_ctx *dic) { }
++static inline void f2fs_end_read_compressed_page(struct page *page,
++						bool failed, block_t blkaddr)
+ {
+ 	WARN_ON_ONCE(1);
+ }
+@@ -4000,10 +4110,20 @@ static inline void f2fs_put_page_dic(struct page *page)
+ {
+ 	WARN_ON_ONCE(1);
+ }
++static inline int f2fs_init_compress_inode(struct f2fs_sb_info *sbi) { return 0; }
++static inline void f2fs_destroy_compress_inode(struct f2fs_sb_info *sbi) { }
+ static inline int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi) { return 0; }
+ static inline void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi) { }
+ static inline int __init f2fs_init_compress_cache(void) { return 0; }
+ static inline void f2fs_destroy_compress_cache(void) { }
++static inline void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi,
++				block_t blkaddr) { }
++static inline void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi,
++				struct page *page, nid_t ino, block_t blkaddr) { }
++static inline bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi,
++				struct page *page, block_t blkaddr) { return false; }
++static inline void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi,
++							nid_t ino) { }
+ #define inc_compr_inode_stat(inode)		do { } while (0)
+ #endif
+ 
+@@ -4020,7 +4140,8 @@ static inline void set_compress_context(struct inode *inode)
+ 				1 << COMPRESS_CHKSUM : 0;
+ 	F2FS_I(inode)->i_cluster_size =
+ 			1 << F2FS_I(inode)->i_log_cluster_size;
+-	if (F2FS_I(inode)->i_compress_algorithm == COMPRESS_LZ4 &&
++	if ((F2FS_I(inode)->i_compress_algorithm == COMPRESS_LZ4 ||
++		F2FS_I(inode)->i_compress_algorithm == COMPRESS_ZSTD) &&
+ 			F2FS_OPTION(sbi).compress_level)
+ 		F2FS_I(inode)->i_compress_flag |=
+ 				F2FS_OPTION(sbi).compress_level <<
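[Editorial note, not part of the patch] Each PAGE_PRIVATE_*_FUNC macro
above stamps out one accessor per flag. Expanding
PAGE_PRIVATE_GET_FUNC(gcing, ONGOING_MIGRATION), for example, yields the
helper that replaces is_cold_data() throughout this patch:

	static inline bool page_private_gcing(struct page *page)
	{
		return test_bit(PAGE_PRIVATE_NOT_POINTER, &page_private(page)) &&
			test_bit(PAGE_PRIVATE_ONGOING_MIGRATION, &page_private(page));
	}

The set/clear variants also take or drop a page reference together with
PagePrivate, mirroring what attach_page_private()/detach_page_private()
used to do for these pages.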
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index fb27d49e4da72..3a11e81fdf659 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1086,7 +1086,6 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 		}
+ 
+ 		if (pg_start < pg_end) {
+-			struct address_space *mapping = inode->i_mapping;
+ 			loff_t blk_start, blk_end;
+ 			struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 
+@@ -1098,8 +1097,7 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 			down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 			down_write(&F2FS_I(inode)->i_mmap_sem);
+ 
+-			truncate_inode_pages_range(mapping, blk_start,
+-					blk_end - 1);
++			truncate_pagecache_range(inode, blk_start, blk_end - 1);
+ 
+ 			f2fs_lock_op(sbi);
+ 			ret = f2fs_truncate_hole(inode, pg_start, pg_end);
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index ab63951c08cbc..1c05e156d9d50 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1261,6 +1261,7 @@ static int move_data_block(struct inode *inode, block_t bidx,
+ 	f2fs_put_page(mpage, 1);
+ 	invalidate_mapping_pages(META_MAPPING(fio.sbi),
+ 				fio.old_blkaddr, fio.old_blkaddr);
++	f2fs_invalidate_compress_page(fio.sbi, fio.old_blkaddr);
+ 
+ 	set_page_dirty(fio.encrypted_page);
+ 	if (clear_page_dirty_for_io(fio.encrypted_page))
+@@ -1336,7 +1337,7 @@ static int move_data_page(struct inode *inode, block_t bidx, int gc_type,
+ 			goto out;
+ 		}
+ 		set_page_dirty(page);
+-		set_cold_data(page);
++		set_page_private_gcing(page);
+ 	} else {
+ 		struct f2fs_io_info fio = {
+ 			.sbi = F2FS_I_SB(inode),
+@@ -1362,11 +1363,11 @@ retry:
+ 			f2fs_remove_dirty_inode(inode);
+ 		}
+ 
+-		set_cold_data(page);
++		set_page_private_gcing(page);
+ 
+ 		err = f2fs_do_write_data_page(&fio);
+ 		if (err) {
+-			clear_cold_data(page);
++			clear_page_private_gcing(page);
+ 			if (err == -ENOMEM) {
+ 				congestion_wait(BLK_RW_ASYNC,
+ 						DEFAULT_IO_TIMEOUT);
+@@ -1496,8 +1497,10 @@ next_step:
+ 			int err;
+ 
+ 			if (S_ISREG(inode->i_mode)) {
+-				if (!down_write_trylock(&fi->i_gc_rwsem[READ]))
++				if (!down_write_trylock(&fi->i_gc_rwsem[READ])) {
++					sbi->skipped_gc_rwsem++;
+ 					continue;
++				}
+ 				if (!down_write_trylock(
+ 						&fi->i_gc_rwsem[WRITE])) {
+ 					sbi->skipped_gc_rwsem++;
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 92652ca7a7c8b..56a20d5c15dad 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -173,7 +173,7 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page)
+ 
+ 	/* clear inline data and flag after data writeback */
+ 	f2fs_truncate_inline_inode(dn->inode, dn->inode_page, 0);
+-	clear_inline_node(dn->inode_page);
++	clear_page_private_inline(dn->inode_page);
+ clear_out:
+ 	stat_dec_inline_inode(dn->inode);
+ 	clear_inode_flag(dn->inode, FI_INLINE_DATA);
+@@ -255,7 +255,7 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
+ 	set_inode_flag(inode, FI_APPEND_WRITE);
+ 	set_inode_flag(inode, FI_DATA_EXIST);
+ 
+-	clear_inline_node(dn.inode_page);
++	clear_page_private_inline(dn.inode_page);
+ 	f2fs_put_dnode(&dn);
+ 	return 0;
+ }
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index b401f08569f70..9141147b5bb00 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -18,6 +18,10 @@
+ 
+ #include <trace/events/f2fs.h>
+ 
++#ifdef CONFIG_F2FS_FS_COMPRESSION
++extern const struct address_space_operations f2fs_compress_aops;
++#endif
++
+ void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync)
+ {
+ 	if (is_inode_flag_set(inode, FI_NEW_INODE))
+@@ -494,6 +498,11 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino)
+ 	if (ino == F2FS_NODE_INO(sbi) || ino == F2FS_META_INO(sbi))
+ 		goto make_now;
+ 
++#ifdef CONFIG_F2FS_FS_COMPRESSION
++	if (ino == F2FS_COMPRESS_INO(sbi))
++		goto make_now;
++#endif
++
+ 	ret = do_read_inode(inode);
+ 	if (ret)
+ 		goto bad_inode;
+@@ -504,6 +513,12 @@ make_now:
+ 	} else if (ino == F2FS_META_INO(sbi)) {
+ 		inode->i_mapping->a_ops = &f2fs_meta_aops;
+ 		mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
++	} else if (ino == F2FS_COMPRESS_INO(sbi)) {
++#ifdef CONFIG_F2FS_FS_COMPRESSION
++		inode->i_mapping->a_ops = &f2fs_compress_aops;
++#endif
++		mapping_set_gfp_mask(inode->i_mapping,
++			GFP_NOFS | __GFP_HIGHMEM | __GFP_MOVABLE);
+ 	} else if (S_ISREG(inode->i_mode)) {
+ 		inode->i_op = &f2fs_file_inode_operations;
+ 		inode->i_fop = &f2fs_file_operations;
+@@ -646,7 +661,7 @@ void f2fs_update_inode(struct inode *inode, struct page *node_page)
+ 
+ 	/* deleted inode */
+ 	if (inode->i_nlink == 0)
+-		clear_inline_node(node_page);
++		clear_page_private_inline(node_page);
+ 
+ 	F2FS_I(inode)->i_disk_time[0] = inode->i_atime;
+ 	F2FS_I(inode)->i_disk_time[1] = inode->i_ctime;
+@@ -723,8 +738,12 @@ void f2fs_evict_inode(struct inode *inode)
+ 	trace_f2fs_evict_inode(inode);
+ 	truncate_inode_pages_final(&inode->i_data);
+ 
++	if (test_opt(sbi, COMPRESS_CACHE) && f2fs_compressed_file(inode))
++		f2fs_invalidate_compress_pages(sbi, inode->i_ino);
++
+ 	if (inode->i_ino == F2FS_NODE_INO(sbi) ||
+-			inode->i_ino == F2FS_META_INO(sbi))
++			inode->i_ino == F2FS_META_INO(sbi) ||
++			inode->i_ino == F2FS_COMPRESS_INO(sbi))
+ 		goto out_clear;
+ 
+ 	f2fs_bug_on(sbi, get_dirty_pages(inode));
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index e67ce5f13b98e..dd611efa8aa4a 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -97,6 +97,20 @@ bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type)
+ 		mem_size = (atomic_read(&dcc->discard_cmd_cnt) *
+ 				sizeof(struct discard_cmd)) >> PAGE_SHIFT;
+ 		res = mem_size < (avail_ram * nm_i->ram_thresh / 100);
++	} else if (type == COMPRESS_PAGE) {
++#ifdef CONFIG_F2FS_FS_COMPRESSION
++		unsigned long free_ram = val.freeram;
++
++		/*
++		 * If free memory is lower than the watermark or the cached
++		 * page count exceeds the threshold, deny caching the
++		 * compressed page.
++		 */
++		res = (free_ram > avail_ram * sbi->compress_watermark / 100) &&
++			(COMPRESS_MAPPING(sbi)->nrpages <
++			 free_ram * sbi->compress_percent / 100);
++#else
++		res = false;
++#endif
+ 	} else {
+ 		if (!sbi->sb->s_bdi->wb.dirty_exceeded)
+ 			return true;
+@@ -1860,8 +1874,8 @@ continue_unlock:
+ 			}
+ 
+ 			/* flush inline_data, if it's async context. */
+-			if (is_inline_node(page)) {
+-				clear_inline_node(page);
++			if (page_private_inline(page)) {
++				clear_page_private_inline(page);
+ 				unlock_page(page);
+ 				flush_inline_data(sbi, ino_of_node(page));
+ 				continue;
+@@ -1941,8 +1955,8 @@ continue_unlock:
+ 				goto write_node;
+ 
+ 			/* flush inline_data */
+-			if (is_inline_node(page)) {
+-				clear_inline_node(page);
++			if (page_private_inline(page)) {
++				clear_page_private_inline(page);
+ 				unlock_page(page);
+ 				flush_inline_data(sbi, ino_of_node(page));
+ 				goto lock_node;
+@@ -2096,7 +2110,7 @@ static int f2fs_set_node_page_dirty(struct page *page)
+ 	if (!PageDirty(page)) {
+ 		__set_page_dirty_nobuffers(page);
+ 		inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_NODES);
+-		f2fs_set_page_private(page, 0);
++		set_page_private_reference(page);
+ 		return 1;
+ 	}
+ 	return 0;
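[Editorial note, not part of the patch] To make the new COMPRESS_PAGE
branch concrete, here is the admission check with the defaults from
f2fs.h (COMPRESS_WATERMARK = 20, COMPRESS_PERCENT = 20) plugged in. The
memory sizes are illustrative only; freeram and avail_ram are counted in
pages:

	/* Example with 4 KiB pages, avail_ram = 2,097,152 pages (8 GiB)
	 * and free_ram = 524,288 pages (2 GiB):
	 *
	 *   watermark: 524,288 > 2,097,152 * 20 / 100 = 419,430  -> pass
	 *   capacity:  nrpages < 524,288 * 20 / 100 = 104,857    -> ~410 MiB cap
	 *
	 * Caching is denied once free memory drops to 20% of available
	 * RAM, or once the cache already holds 20% of free RAM. */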
+diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
+index 7a45c0f106295..84d45385d1f20 100644
+--- a/fs/f2fs/node.h
++++ b/fs/f2fs/node.h
+@@ -148,6 +148,7 @@ enum mem_type {
+ 	EXTENT_CACHE,	/* indicates extent cache */
+ 	INMEM_PAGES,	/* indicates inmemory pages */
+ 	DISCARD_CACHE,	/* indicates memory of cached discard cmds */
++	COMPRESS_PAGE,	/* indicates memory of cached compressed pages */
+ 	BASE_CHECK,	/* check kernel status */
+ };
+ 
+@@ -389,20 +390,6 @@ static inline nid_t get_nid(struct page *p, int off, bool i)
+  *  - Mark cold node blocks in their node footer
+  *  - Mark cold data pages in page cache
+  */
+-static inline int is_cold_data(struct page *page)
+-{
+-	return PageChecked(page);
+-}
+-
+-static inline void set_cold_data(struct page *page)
+-{
+-	SetPageChecked(page);
+-}
+-
+-static inline void clear_cold_data(struct page *page)
+-{
+-	ClearPageChecked(page);
+-}
+ 
+ static inline int is_node(struct page *page, int type)
+ {
+@@ -414,21 +401,6 @@ static inline int is_node(struct page *page, int type)
+ #define is_fsync_dnode(page)	is_node(page, FSYNC_BIT_SHIFT)
+ #define is_dent_dnode(page)	is_node(page, DENT_BIT_SHIFT)
+ 
+-static inline int is_inline_node(struct page *page)
+-{
+-	return PageChecked(page);
+-}
+-
+-static inline void set_inline_node(struct page *page)
+-{
+-	SetPageChecked(page);
+-}
+-
+-static inline void clear_inline_node(struct page *page)
+-{
+-	ClearPageChecked(page);
+-}
+-
+ static inline void set_cold_node(struct page *page, bool is_dir)
+ {
+ 	struct f2fs_node *rn = F2FS_NODE(page);
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 51dc79fad4fe2..406a6b2447822 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -186,10 +186,7 @@ void f2fs_register_inmem_page(struct inode *inode, struct page *page)
+ {
+ 	struct inmem_pages *new;
+ 
+-	if (PagePrivate(page))
+-		set_page_private(page, (unsigned long)ATOMIC_WRITTEN_PAGE);
+-	else
+-		f2fs_set_page_private(page, ATOMIC_WRITTEN_PAGE);
++	set_page_private_atomic(page);
+ 
+ 	new = f2fs_kmem_cache_alloc(inmem_entry_slab, GFP_NOFS);
+ 
+@@ -272,9 +269,10 @@ next:
+ 		/* we don't need to invalidate this in the successful status */
+ 		if (drop || recover) {
+ 			ClearPageUptodate(page);
+-			clear_cold_data(page);
++			clear_page_private_gcing(page);
+ 		}
+-		f2fs_clear_page_private(page);
++		detach_page_private(page);
++		set_page_private(page, 0);
+ 		f2fs_put_page(page, 1);
+ 
+ 		list_del(&cur->list);
+@@ -357,7 +355,7 @@ void f2fs_drop_inmem_page(struct inode *inode, struct page *page)
+ 	struct list_head *head = &fi->inmem_pages;
+ 	struct inmem_pages *cur = NULL;
+ 
+-	f2fs_bug_on(sbi, !IS_ATOMIC_WRITTEN_PAGE(page));
++	f2fs_bug_on(sbi, !page_private_atomic(page));
+ 
+ 	mutex_lock(&fi->inmem_lock);
+ 	list_for_each_entry(cur, head, list) {
+@@ -373,9 +371,12 @@ void f2fs_drop_inmem_page(struct inode *inode, struct page *page)
+ 	kmem_cache_free(inmem_entry_slab, cur);
+ 
+ 	ClearPageUptodate(page);
+-	f2fs_clear_page_private(page);
++	clear_page_private_atomic(page);
+ 	f2fs_put_page(page, 0);
+ 
++	detach_page_private(page);
++	set_page_private(page, 0);
++
+ 	trace_f2fs_commit_inmem_page(page, INMEM_INVALIDATE);
+ }
+ 
+@@ -2321,6 +2322,7 @@ void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr)
+ 		return;
+ 
+ 	invalidate_mapping_pages(META_MAPPING(sbi), addr, addr);
++	f2fs_invalidate_compress_page(sbi, addr);
+ 
+ 	/* add it into sit main buffer */
+ 	down_write(&sit_i->sentry_lock);
+@@ -3289,7 +3291,7 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
+ 	if (fio->type == DATA) {
+ 		struct inode *inode = fio->page->mapping->host;
+ 
+-		if (is_cold_data(fio->page)) {
++		if (page_private_gcing(fio->page)) {
+ 			if (fio->sbi->am.atgc_enabled &&
+ 				(fio->io_type == FS_DATA_IO) &&
+ 				(fio->sbi->gc_mode != GC_URGENT_HIGH))
+@@ -3468,9 +3470,11 @@ static void do_write_page(struct f2fs_summary *sum, struct f2fs_io_info *fio)
+ reallocate:
+ 	f2fs_allocate_data_block(fio->sbi, fio->page, fio->old_blkaddr,
+ 			&fio->new_blkaddr, sum, type, fio);
+-	if (GET_SEGNO(fio->sbi, fio->old_blkaddr) != NULL_SEGNO)
++	if (GET_SEGNO(fio->sbi, fio->old_blkaddr) != NULL_SEGNO) {
+ 		invalidate_mapping_pages(META_MAPPING(fio->sbi),
+ 					fio->old_blkaddr, fio->old_blkaddr);
++		f2fs_invalidate_compress_page(fio->sbi, fio->old_blkaddr);
++	}
+ 
+ 	/* writeout dirty page into bdev */
+ 	f2fs_submit_page_write(fio);
+@@ -3660,6 +3664,7 @@ void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ 	if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO) {
+ 		invalidate_mapping_pages(META_MAPPING(sbi),
+ 					old_blkaddr, old_blkaddr);
++		f2fs_invalidate_compress_page(sbi, old_blkaddr);
+ 		if (!from_gc)
+ 			update_segment_mtime(sbi, old_blkaddr, 0);
+ 		update_sit_entry(sbi, old_blkaddr, -1);
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 8553e8e5de0da..d61f7fcdc66b3 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -150,6 +150,7 @@ enum {
+ 	Opt_compress_extension,
+ 	Opt_compress_chksum,
+ 	Opt_compress_mode,
++	Opt_compress_cache,
+ 	Opt_atgc,
+ 	Opt_gc_merge,
+ 	Opt_nogc_merge,
+@@ -224,6 +225,7 @@ static match_table_t f2fs_tokens = {
+ 	{Opt_compress_extension, "compress_extension=%s"},
+ 	{Opt_compress_chksum, "compress_chksum"},
+ 	{Opt_compress_mode, "compress_mode=%s"},
++	{Opt_compress_cache, "compress_cache"},
+ 	{Opt_atgc, "atgc"},
+ 	{Opt_gc_merge, "gc_merge"},
+ 	{Opt_nogc_merge, "nogc_merge"},
+@@ -1066,12 +1068,16 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
+ 			}
+ 			kfree(name);
+ 			break;
++		case Opt_compress_cache:
++			set_opt(sbi, COMPRESS_CACHE);
++			break;
+ #else
+ 		case Opt_compress_algorithm:
+ 		case Opt_compress_log_size:
+ 		case Opt_compress_extension:
+ 		case Opt_compress_chksum:
+ 		case Opt_compress_mode:
++		case Opt_compress_cache:
+ 			f2fs_info(sbi, "compression options not supported");
+ 			break;
+ #endif
+@@ -1403,6 +1409,8 @@ static void f2fs_put_super(struct super_block *sb)
+ 
+ 	f2fs_bug_on(sbi, sbi->fsync_node_num);
+ 
++	f2fs_destroy_compress_inode(sbi);
++
+ 	iput(sbi->node_inode);
+ 	sbi->node_inode = NULL;
+ 
+@@ -1672,6 +1680,9 @@ static inline void f2fs_show_compress_options(struct seq_file *seq,
+ 		seq_printf(seq, ",compress_mode=%s", "fs");
+ 	else if (F2FS_OPTION(sbi).compress_mode == COMPR_MODE_USER)
+ 		seq_printf(seq, ",compress_mode=%s", "user");
++
++	if (test_opt(sbi, COMPRESS_CACHE))
++		seq_puts(seq, ",compress_cache");
+ }
+ #endif
+ 
+@@ -1955,10 +1966,10 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ 	bool need_restart_ckpt = false, need_stop_ckpt = false;
+ 	bool need_restart_flush = false, need_stop_flush = false;
+ 	bool no_extent_cache = !test_opt(sbi, EXTENT_CACHE);
+-	bool disable_checkpoint = test_opt(sbi, DISABLE_CHECKPOINT);
++	bool enable_checkpoint = !test_opt(sbi, DISABLE_CHECKPOINT);
+ 	bool no_io_align = !F2FS_IO_ALIGNED(sbi);
+ 	bool no_atgc = !test_opt(sbi, ATGC);
+-	bool checkpoint_changed;
++	bool no_compress_cache = !test_opt(sbi, COMPRESS_CACHE);
+ #ifdef CONFIG_QUOTA
+ 	int i, j;
+ #endif
+@@ -2003,8 +2014,6 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ 	err = parse_options(sb, data, true);
+ 	if (err)
+ 		goto restore_opts;
+-	checkpoint_changed =
+-			disable_checkpoint != test_opt(sbi, DISABLE_CHECKPOINT);
+ 
+ 	/*
+ 	 * Previous and new state of filesystem is RO,
+@@ -2050,6 +2059,12 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ 		goto restore_opts;
+ 	}
+ 
++	if (no_compress_cache == !!test_opt(sbi, COMPRESS_CACHE)) {
++		err = -EINVAL;
++		f2fs_warn(sbi, "switching compress_cache option is not allowed");
++		goto restore_opts;
++	}
++
+ 	if ((*flags & SB_RDONLY) && test_opt(sbi, DISABLE_CHECKPOINT)) {
+ 		err = -EINVAL;
+ 		f2fs_warn(sbi, "disabling checkpoint not compatible with read-only");
+@@ -2115,7 +2130,7 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ 		need_stop_flush = true;
+ 	}
+ 
+-	if (checkpoint_changed) {
++	if (enable_checkpoint == !!test_opt(sbi, DISABLE_CHECKPOINT)) {
+ 		if (test_opt(sbi, DISABLE_CHECKPOINT)) {
+ 			err = f2fs_disable_checkpoint(sbi);
+ 			if (err)
+@@ -2399,6 +2414,33 @@ static int f2fs_enable_quotas(struct super_block *sb)
+ 	return 0;
+ }
+ 
++static int f2fs_quota_sync_file(struct f2fs_sb_info *sbi, int type)
++{
++	struct quota_info *dqopt = sb_dqopt(sbi->sb);
++	struct address_space *mapping = dqopt->files[type]->i_mapping;
++	int ret = 0;
++
++	ret = dquot_writeback_dquots(sbi->sb, type);
++	if (ret)
++		goto out;
++
++	ret = filemap_fdatawrite(mapping);
++	if (ret)
++		goto out;
++
++	/* if we are using journalled quota */
++	if (is_journalled_quota(sbi))
++		goto out;
++
++	ret = filemap_fdatawait(mapping);
++
++	truncate_inode_pages(&dqopt->files[type]->i_data, 0);
++out:
++	if (ret)
++		set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
++	return ret;
++}
++
+ int f2fs_quota_sync(struct super_block *sb, int type)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+@@ -2406,57 +2448,42 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ 	int cnt;
+ 	int ret;
+ 
+-	/*
+-	 * do_quotactl
+-	 *  f2fs_quota_sync
+-	 *  down_read(quota_sem)
+-	 *  dquot_writeback_dquots()
+-	 *  f2fs_dquot_commit
+-	 *                            block_operation
+-	 *                            down_read(quota_sem)
+-	 */
+-	f2fs_lock_op(sbi);
+-
+-	down_read(&sbi->quota_sem);
+-	ret = dquot_writeback_dquots(sb, type);
+-	if (ret)
+-		goto out;
+-
+ 	/*
+ 	 * Now when everything is written we can discard the pagecache so
+ 	 * that userspace sees the changes.
+ 	 */
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+-		struct address_space *mapping;
+ 
+ 		if (type != -1 && cnt != type)
+ 			continue;
+-		if (!sb_has_quota_active(sb, cnt))
+-			continue;
+ 
+-		mapping = dqopt->files[cnt]->i_mapping;
++		if (!sb_has_quota_active(sb, type))
++			return 0;
+ 
+-		ret = filemap_fdatawrite(mapping);
+-		if (ret)
+-			goto out;
++		inode_lock(dqopt->files[cnt]);
+ 
+-		/* if we are using journalled quota */
+-		if (is_journalled_quota(sbi))
+-			continue;
++		/*
++		 * do_quotactl
++		 *  f2fs_quota_sync
++		 *  down_read(quota_sem)
++		 *  dquot_writeback_dquots()
++		 *  f2fs_dquot_commit
++		 *			      block_operation
++		 *			      down_read(quota_sem)
++		 */
++		f2fs_lock_op(sbi);
++		down_read(&sbi->quota_sem);
+ 
+-		ret = filemap_fdatawait(mapping);
+-		if (ret)
+-			set_sbi_flag(F2FS_SB(sb), SBI_QUOTA_NEED_REPAIR);
++		ret = f2fs_quota_sync_file(sbi, cnt);
++
++		up_read(&sbi->quota_sem);
++		f2fs_unlock_op(sbi);
+ 
+-		inode_lock(dqopt->files[cnt]);
+-		truncate_inode_pages(&dqopt->files[cnt]->i_data, 0);
+ 		inode_unlock(dqopt->files[cnt]);
++
++		if (ret)
++			break;
+ 	}
+-out:
+-	if (ret)
+-		set_sbi_flag(F2FS_SB(sb), SBI_QUOTA_NEED_REPAIR);
+-	up_read(&sbi->quota_sem);
+-	f2fs_unlock_op(sbi);
+ 	return ret;
+ }
+ 
+@@ -3089,11 +3116,13 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
+ 		return -EFSCORRUPTED;
+ 	}
+ 
+-	if (le32_to_cpu(raw_super->cp_payload) >
+-				(blocks_per_seg - F2FS_CP_PACKS)) {
+-		f2fs_info(sbi, "Insane cp_payload (%u > %u)",
++	if (le32_to_cpu(raw_super->cp_payload) >=
++				(blocks_per_seg - F2FS_CP_PACKS -
++				NR_CURSEG_PERSIST_TYPE)) {
++		f2fs_info(sbi, "Insane cp_payload (%u >= %u)",
+ 			  le32_to_cpu(raw_super->cp_payload),
+-			  blocks_per_seg - F2FS_CP_PACKS);
++			  blocks_per_seg - F2FS_CP_PACKS -
++			  NR_CURSEG_PERSIST_TYPE);
+ 		return -EFSCORRUPTED;
+ 	}
+ 
+@@ -3129,6 +3158,7 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 	unsigned int cp_pack_start_sum, cp_payload;
+ 	block_t user_block_count, valid_user_blocks;
+ 	block_t avail_node_count, valid_node_count;
++	unsigned int nat_blocks, nat_bits_bytes, nat_bits_blocks;
+ 	int i, j;
+ 
+ 	total = le32_to_cpu(raw_super->segment_count);
+@@ -3249,6 +3279,17 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 		return 1;
+ 	}
+ 
++	nat_blocks = nat_segs << log_blocks_per_seg;
++	nat_bits_bytes = nat_blocks / BITS_PER_BYTE;
++	nat_bits_blocks = F2FS_BLK_ALIGN((nat_bits_bytes << 1) + 8);
++	if (__is_set_ckpt_flags(ckpt, CP_NAT_BITS_FLAG) &&
++		(cp_payload + F2FS_CP_PACKS +
++		NR_CURSEG_PERSIST_TYPE + nat_bits_blocks >= blocks_per_seg)) {
+		f2fs_warn(sbi, "Insane cp_payload: %u, nat_bits_blocks: %u",
++			  cp_payload, nat_bits_blocks);
++		return -EFSCORRUPTED;
++	}
++
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+ 		f2fs_err(sbi, "A bug case: need to run fsck");
+ 		return 1;
+@@ -3949,10 +3990,14 @@ try_onemore:
+ 		goto free_node_inode;
+ 	}
+ 
+-	err = f2fs_register_sysfs(sbi);
++	err = f2fs_init_compress_inode(sbi);
+ 	if (err)
+ 		goto free_root_inode;
+ 
++	err = f2fs_register_sysfs(sbi);
++	if (err)
++		goto free_compress_inode;
++
+ #ifdef CONFIG_QUOTA
+ 	/* Enable quota usage during mount */
+ 	if (f2fs_sb_has_quota_ino(sbi) && !f2fs_readonly(sb)) {
+@@ -4093,6 +4138,8 @@ free_meta:
+ 	/* evict some inodes being cached by GC */
+ 	evict_inodes(sb);
+ 	f2fs_unregister_sysfs(sbi);
++free_compress_inode:
++	f2fs_destroy_compress_inode(sbi);
+ free_root_inode:
+ 	dput(sb->s_root);
+ 	sb->s_root = NULL;
+@@ -4171,6 +4218,15 @@ static void kill_f2fs_super(struct super_block *sb)
+ 		f2fs_stop_gc_thread(sbi);
+ 		f2fs_stop_discard_thread(sbi);
+ 
++#ifdef CONFIG_F2FS_FS_COMPRESSION
++		/*
++		 * Truncate now so that the later evict_inode() can bypass
++		 * checking and invalidating the compress inode cache.
++		 */
++		if (test_opt(sbi, COMPRESS_CACHE))
++			truncate_inode_pages_final(COMPRESS_MAPPING(sbi));
++#endif
++
+ 		if (is_sbi_flag_set(sbi, SBI_IS_DIRTY) ||
+ 				!is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) {
+ 			struct cp_control cpc = {
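[Editorial note, not part of the patch] The quota-sync rework above
inverts the old locking: instead of taking f2fs_lock_op() and quota_sem
once around the whole loop, each quota file is handled with its inode
lock outermost, which avoids the lock_op-vs-quota_sem inversion described
in the (relocated) comment. The resulting nesting, summarised from the
code above:

	/* for each quota type cnt selected by @type: */
	inode_lock(dqopt->files[cnt]);
	f2fs_lock_op(sbi);
	down_read(&sbi->quota_sem);
	ret = f2fs_quota_sync_file(sbi, cnt);	/* writeback + wait + truncate */
	up_read(&sbi->quota_sem);
	f2fs_unlock_op(sbi);
	inode_unlock(dqopt->files[cnt]);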
+diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
+index 751bc5b1cddf9..6104f627cc712 100644
+--- a/fs/fscache/cookie.c
++++ b/fs/fscache/cookie.c
+@@ -74,10 +74,8 @@ void fscache_free_cookie(struct fscache_cookie *cookie)
+ static int fscache_set_key(struct fscache_cookie *cookie,
+ 			   const void *index_key, size_t index_key_len)
+ {
+-	unsigned long long h;
+ 	u32 *buf;
+ 	int bufs;
+-	int i;
+ 
+ 	bufs = DIV_ROUND_UP(index_key_len, sizeof(*buf));
+ 
+@@ -91,17 +89,7 @@ static int fscache_set_key(struct fscache_cookie *cookie,
+ 	}
+ 
+ 	memcpy(buf, index_key, index_key_len);
+-
+-	/* Calculate a hash and combine this with the length in the first word
+-	 * or first half word
+-	 */
+-	h = (unsigned long)cookie->parent;
+-	h += index_key_len + cookie->type;
+-
+-	for (i = 0; i < bufs; i++)
+-		h += buf[i];
+-
+-	cookie->key_hash = h ^ (h >> 32);
++	cookie->key_hash = fscache_hash(0, buf, bufs);
+ 	return 0;
+ }
+ 
+diff --git a/fs/fscache/internal.h b/fs/fscache/internal.h
+index c483863b740ad..aee639d980bad 100644
+--- a/fs/fscache/internal.h
++++ b/fs/fscache/internal.h
+@@ -97,6 +97,8 @@ extern struct workqueue_struct *fscache_object_wq;
+ extern struct workqueue_struct *fscache_op_wq;
+ DECLARE_PER_CPU(wait_queue_head_t, fscache_object_cong_wait);
+ 
++extern unsigned int fscache_hash(unsigned int salt, unsigned int *data, unsigned int n);
++
+ static inline bool fscache_object_congested(void)
+ {
+ 	return workqueue_congested(WORK_CPU_UNBOUND, fscache_object_wq);
+diff --git a/fs/fscache/main.c b/fs/fscache/main.c
+index c1e6cc9091aac..4207f98e405fd 100644
+--- a/fs/fscache/main.c
++++ b/fs/fscache/main.c
+@@ -93,6 +93,45 @@ static struct ctl_table fscache_sysctls_root[] = {
+ };
+ #endif
+ 
++/*
++ * Mixing scores (in bits) for (7,20):
++ * Input delta: 1-bit      2-bit
++ * 1 round:     330.3     9201.6
++ * 2 rounds:   1246.4    25475.4
++ * 3 rounds:   1907.1    31295.1
++ * 4 rounds:   2042.3    31718.6
++ * Perfect:    2048      31744
++ *            (32*64)   (32*31/2 * 64)
++ */
++#define HASH_MIX(x, y, a)	\
++	(	x ^= (a),	\
++	y ^= x,	x = rol32(x, 7),\
++	x += y,	y = rol32(y,20),\
++	y *= 9			)
++
++static inline unsigned int fold_hash(unsigned long x, unsigned long y)
++{
++	/* Use arch-optimized multiply if one exists */
++	return __hash_32(y ^ __hash_32(x));
++}
++
++/*
++ * Generate a hash.  This is derived from full_name_hash(), but we want to be
++ * sure it is arch-independent and that it doesn't change, as bits of the
++ * computed hash value might appear on disk.  The caller also guarantees that
++ * the hashed data will be a series of aligned 32-bit words.
++ */
++unsigned int fscache_hash(unsigned int salt, unsigned int *data, unsigned int n)
++{
++	unsigned int a, x = 0, y = salt;
++
++	for (; n; n--) {
++		a = *data++;
++		HASH_MIX(x, y, a);
++	}
++	return fold_hash(x, y);
++}
++
+ /*
+  * initialise the fs caching module
+  */
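[Editorial note, not part of the patch] The only caller converted in this
patch is fscache_set_key() in cookie.c above, which pads the index key
into aligned 32-bit words and hashes it with a zero salt:

	bufs = DIV_ROUND_UP(index_key_len, sizeof(*buf));
	memcpy(buf, index_key, index_key_len);
	cookie->key_hash = fscache_hash(0, buf, bufs);

The old sum was seeded with the parent cookie pointer, so it was neither
arch-independent nor stable across boots, which matters if the hash ever
lands on disk.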
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index 54d3fbeb3002f..384565d63eea8 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -610,16 +610,13 @@ static int freeze_go_xmote_bh(struct gfs2_glock *gl)
+ 		j_gl->gl_ops->go_inval(j_gl, DIO_METADATA);
+ 
+ 		error = gfs2_find_jhead(sdp->sd_jdesc, &head, false);
+-		if (error)
+-			gfs2_consist(sdp);
+-		if (!(head.lh_flags & GFS2_LOG_HEAD_UNMOUNT))
+-			gfs2_consist(sdp);
+-
+-		/*  Initialize some head of the log stuff  */
+-		if (!gfs2_withdrawn(sdp)) {
+-			sdp->sd_log_sequence = head.lh_sequence + 1;
+-			gfs2_log_pointers_init(sdp, head.lh_blkno);
+-		}
++		if (gfs2_assert_withdraw_delayed(sdp, !error))
++			return error;
++		if (gfs2_assert_withdraw_delayed(sdp, head.lh_flags &
++						 GFS2_LOG_HEAD_UNMOUNT))
++			return -EIO;
++		sdp->sd_log_sequence = head.lh_sequence + 1;
++		gfs2_log_pointers_init(sdp, head.lh_blkno);
+ 	}
+ 	return 0;
+ }
+diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c
+index dac040162ecc1..50578f881e6de 100644
+--- a/fs/gfs2/lock_dlm.c
++++ b/fs/gfs2/lock_dlm.c
+@@ -299,6 +299,11 @@ static void gdlm_put_lock(struct gfs2_glock *gl)
+ 	gfs2_sbstats_inc(gl, GFS2_LKS_DCOUNT);
+ 	gfs2_update_request_times(gl);
+ 
++	/* don't want to call dlm if we've unmounted the lock protocol */
++	if (test_bit(DFL_UNMOUNT, &ls->ls_recover_flags)) {
++		gfs2_glock_free(gl);
++		return;
++	}
+ 	/* don't want to skip dlm_unlock writing the lvb when lock has one */
+ 
+ 	if (test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags) &&
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index c7171d9758968..6612d0aa497ef 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -237,9 +237,9 @@ static bool io_wqe_activate_free_worker(struct io_wqe *wqe)
+  * We need a worker. If we find a free one, we're good. If not, and we're
+  * below the max number of workers, create one.
+  */
+-static void io_wqe_wake_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
++static void io_wqe_create_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
+ {
+-	bool ret;
++	bool do_create = false, first = false;
+ 
+ 	/*
+ 	 * Most likely an attempt to queue unbounded work on an io_wq that
+@@ -248,25 +248,18 @@ static void io_wqe_wake_worker(struct io_wqe *wqe, struct io_wqe_acct *acct)
+ 	if (unlikely(!acct->max_workers))
+ 		pr_warn_once("io-wq is not configured for unbound workers");
+ 
+-	rcu_read_lock();
+-	ret = io_wqe_activate_free_worker(wqe);
+-	rcu_read_unlock();
+-
+-	if (!ret) {
+-		bool do_create = false, first = false;
+-
+-		raw_spin_lock_irq(&wqe->lock);
+-		if (acct->nr_workers < acct->max_workers) {
+-			atomic_inc(&acct->nr_running);
+-			atomic_inc(&wqe->wq->worker_refs);
+-			if (!acct->nr_workers)
+-				first = true;
+-			acct->nr_workers++;
+-			do_create = true;
+-		}
+-		raw_spin_unlock_irq(&wqe->lock);
+-		if (do_create)
+-			create_io_worker(wqe->wq, wqe, acct->index, first);
++	raw_spin_lock_irq(&wqe->lock);
++	if (acct->nr_workers < acct->max_workers) {
++		if (!acct->nr_workers)
++			first = true;
++		acct->nr_workers++;
++		do_create = true;
++	}
++	raw_spin_unlock_irq(&wqe->lock);
++	if (do_create) {
++		atomic_inc(&acct->nr_running);
++		atomic_inc(&wqe->wq->worker_refs);
++		create_io_worker(wqe->wq, wqe, acct->index, first);
+ 	}
+ }
+ 
+@@ -798,7 +791,8 @@ append:
+ static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
+ {
+ 	struct io_wqe_acct *acct = io_work_get_acct(wqe, work);
+-	int work_flags;
++	unsigned work_flags = work->flags;
++	bool do_create;
+ 	unsigned long flags;
+ 
+ 	/*
+@@ -811,15 +805,19 @@ static void io_wqe_enqueue(struct io_wqe *wqe, struct io_wq_work *work)
+ 		return;
+ 	}
+ 
+-	work_flags = work->flags;
+ 	raw_spin_lock_irqsave(&wqe->lock, flags);
+ 	io_wqe_insert_work(wqe, work);
+ 	wqe->flags &= ~IO_WQE_FLAG_STALLED;
++
++	rcu_read_lock();
++	do_create = !io_wqe_activate_free_worker(wqe);
++	rcu_read_unlock();
++
+ 	raw_spin_unlock_irqrestore(&wqe->lock, flags);
+ 
+-	if ((work_flags & IO_WQ_WORK_CONCURRENT) ||
+-	    !atomic_read(&acct->nr_running))
+-		io_wqe_wake_worker(wqe, acct);
++	if (do_create && ((work_flags & IO_WQ_WORK_CONCURRENT) ||
++	    !atomic_read(&acct->nr_running)))
++		io_wqe_create_worker(wqe, acct);
+ }
+ 
+ void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work)
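[Editorial note, not part of the patch] The io-wq hunks above move the
"activate or create" decision into the enqueue path: the free-worker scan
now runs under wqe->lock while the work is inserted, and the nr_running
and worker_refs counters are only bumped once a worker will really be
created. Condensed control flow of the new io_wqe_enqueue(), quoting the
diff:

	raw_spin_lock_irqsave(&wqe->lock, flags);
	io_wqe_insert_work(wqe, work);
	wqe->flags &= ~IO_WQE_FLAG_STALLED;
	rcu_read_lock();
	do_create = !io_wqe_activate_free_worker(wqe);	/* nobody woken? */
	rcu_read_unlock();
	raw_spin_unlock_irqrestore(&wqe->lock, flags);

	if (do_create && ((work_flags & IO_WQ_WORK_CONCURRENT) ||
	    !atomic_read(&acct->nr_running)))
		io_wqe_create_worker(wqe, acct);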
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 58ae2eab99efa..925f7f27af1ae 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1307,6 +1307,8 @@ static void io_kill_timeout(struct io_kiocb *req, int status)
+ 	struct io_timeout_data *io = req->async_data;
+ 
+ 	if (hrtimer_try_to_cancel(&io->timer) != -1) {
++		if (status)
++			req_set_fail_links(req);
+ 		atomic_set(&req->ctx->cq_timeouts,
+ 			atomic_read(&req->ctx->cq_timeouts) + 1);
+ 		list_del_init(&req->timeout.list);
+@@ -3474,7 +3476,7 @@ static int io_renameat_prep(struct io_kiocb *req,
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index)
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
+ 		return -EBADF;
+@@ -3525,7 +3527,8 @@ static int io_unlinkat_prep(struct io_kiocb *req,
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index)
++	if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
+ 		return -EBADF;
+@@ -3571,8 +3574,8 @@ static int io_shutdown_prep(struct io_kiocb *req,
+ #if defined(CONFIG_NET)
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->off || sqe->addr || sqe->rw_flags ||
+-	    sqe->buf_index)
++	if (unlikely(sqe->ioprio || sqe->off || sqe->addr || sqe->rw_flags ||
++		     sqe->buf_index || sqe->splice_fd_in))
+ 		return -EINVAL;
+ 
+ 	req->shutdown.how = READ_ONCE(sqe->len);
+@@ -3720,7 +3723,8 @@ static int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
++	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
++		     sqe->splice_fd_in))
+ 		return -EINVAL;
+ 
+ 	req->sync.flags = READ_ONCE(sqe->fsync_flags);
+@@ -3753,7 +3757,8 @@ static int io_fsync(struct io_kiocb *req, unsigned int issue_flags)
+ static int io_fallocate_prep(struct io_kiocb *req,
+ 			     const struct io_uring_sqe *sqe)
+ {
+-	if (sqe->ioprio || sqe->buf_index || sqe->rw_flags)
++	if (sqe->ioprio || sqe->buf_index || sqe->rw_flags ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+@@ -3784,7 +3789,7 @@ static int __io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
+ 	const char __user *fname;
+ 	int ret;
+ 
+-	if (unlikely(sqe->ioprio || sqe->buf_index))
++	if (unlikely(sqe->ioprio || sqe->buf_index || sqe->splice_fd_in))
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
+ 		return -EBADF;
+@@ -3909,7 +3914,8 @@ static int io_remove_buffers_prep(struct io_kiocb *req,
+ 	struct io_provide_buf *p = &req->pbuf;
+ 	u64 tmp;
+ 
+-	if (sqe->ioprio || sqe->rw_flags || sqe->addr || sqe->len || sqe->off)
++	if (sqe->ioprio || sqe->rw_flags || sqe->addr || sqe->len || sqe->off ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	tmp = READ_ONCE(sqe->fd);
+@@ -3980,7 +3986,7 @@ static int io_provide_buffers_prep(struct io_kiocb *req,
+ 	struct io_provide_buf *p = &req->pbuf;
+ 	u64 tmp;
+ 
+-	if (sqe->ioprio || sqe->rw_flags)
++	if (sqe->ioprio || sqe->rw_flags || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	tmp = READ_ONCE(sqe->fd);
+@@ -4067,7 +4073,7 @@ static int io_epoll_ctl_prep(struct io_kiocb *req,
+ 			     const struct io_uring_sqe *sqe)
+ {
+ #if defined(CONFIG_EPOLL)
+-	if (sqe->ioprio || sqe->buf_index)
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+@@ -4113,7 +4119,7 @@ static int io_epoll_ctl(struct io_kiocb *req, unsigned int issue_flags)
+ static int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ #if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
+-	if (sqe->ioprio || sqe->buf_index || sqe->off)
++	if (sqe->ioprio || sqe->buf_index || sqe->off || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+@@ -4148,7 +4154,7 @@ static int io_madvise(struct io_kiocb *req, unsigned int issue_flags)
+ 
+ static int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+-	if (sqe->ioprio || sqe->buf_index || sqe->addr)
++	if (sqe->ioprio || sqe->buf_index || sqe->addr || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+@@ -4186,7 +4192,7 @@ static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index)
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (req->flags & REQ_F_FIXED_FILE)
+ 		return -EBADF;
+@@ -4222,7 +4228,7 @@ static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+ 	if (sqe->ioprio || sqe->off || sqe->addr || sqe->len ||
+-	    sqe->rw_flags || sqe->buf_index)
++	    sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (req->flags & REQ_F_FIXED_FILE)
+ 		return -EBADF;
+@@ -4283,7 +4289,8 @@ static int io_sfr_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
++	if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
++		     sqe->splice_fd_in))
+ 		return -EINVAL;
+ 
+ 	req->sync.off = READ_ONCE(sqe->off);
+@@ -4710,7 +4717,7 @@ static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->len || sqe->buf_index)
++	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
+@@ -4758,7 +4765,8 @@ static int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags)
++	if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	conn->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
+@@ -5368,7 +5376,7 @@ static int io_poll_update_prep(struct io_kiocb *req,
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index)
++	if (sqe->ioprio || sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	flags = READ_ONCE(sqe->len);
+ 	if (flags & ~(IORING_POLL_UPDATE_EVENTS | IORING_POLL_UPDATE_USER_DATA |
+@@ -5603,7 +5611,7 @@ static int io_timeout_remove_prep(struct io_kiocb *req,
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index || sqe->len)
++	if (sqe->ioprio || sqe->buf_index || sqe->len || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	tr->addr = READ_ONCE(sqe->addr);
+@@ -5662,7 +5670,8 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ 
+ 	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->buf_index || sqe->len != 1)
++	if (sqe->ioprio || sqe->buf_index || sqe->len != 1 ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (off && is_timeout_link)
+ 		return -EINVAL;
+@@ -5811,7 +5820,8 @@ static int io_async_cancel_prep(struct io_kiocb *req,
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags)
++	if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags ||
++	    sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	req->cancel.addr = READ_ONCE(sqe->addr);
+@@ -5868,7 +5878,7 @@ static int io_rsrc_update_prep(struct io_kiocb *req,
+ {
+ 	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
+ 		return -EINVAL;
+-	if (sqe->ioprio || sqe->rw_flags)
++	if (sqe->ioprio || sqe->rw_flags || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 
+ 	req->rsrc_update.offset = READ_ONCE(sqe->off);
+@@ -6281,6 +6291,7 @@ static void io_wq_submit_work(struct io_wq_work *work)
+ 	if (timeout)
+ 		io_queue_linked_timeout(timeout);
+ 
++	/* either cancelled or io-wq is dying, so don't touch tctx->iowq */
+ 	if (work->flags & IO_WQ_WORK_CANCEL)
+ 		ret = -ECANCELED;
+ 
+@@ -7195,11 +7206,11 @@ static struct io_rsrc_data *io_rsrc_data_alloc(struct io_ring_ctx *ctx,
+ {
+ 	struct io_rsrc_data *data;
+ 
+-	data = kzalloc(sizeof(*data), GFP_KERNEL);
++	data = kzalloc(sizeof(*data), GFP_KERNEL_ACCOUNT);
+ 	if (!data)
+ 		return NULL;
+ 
+-	data->tags = kvcalloc(nr, sizeof(*data->tags), GFP_KERNEL);
++	data->tags = kvcalloc(nr, sizeof(*data->tags), GFP_KERNEL_ACCOUNT);
+ 	if (!data->tags) {
+ 		kfree(data);
+ 		return NULL;
+@@ -7477,7 +7488,7 @@ static bool io_alloc_file_tables(struct io_file_table *table, unsigned nr_files)
+ {
+ 	unsigned i, nr_tables = DIV_ROUND_UP(nr_files, IORING_MAX_FILES_TABLE);
+ 
+-	table->files = kcalloc(nr_tables, sizeof(*table->files), GFP_KERNEL);
++	table->files = kcalloc(nr_tables, sizeof(*table->files), GFP_KERNEL_ACCOUNT);
+ 	if (!table->files)
+ 		return false;
+ 
+@@ -7485,7 +7496,7 @@ static bool io_alloc_file_tables(struct io_file_table *table, unsigned nr_files)
+ 		unsigned int this_files = min(nr_files, IORING_MAX_FILES_TABLE);
+ 
+ 		table->files[i] = kcalloc(this_files, sizeof(*table->files[i]),
+-					GFP_KERNEL);
++					GFP_KERNEL_ACCOUNT);
+ 		if (!table->files[i])
+ 			break;
+ 		nr_files -= this_files;
+@@ -9090,8 +9101,8 @@ static void io_uring_clean_tctx(struct io_uring_task *tctx)
+ 		 * Must be after io_uring_del_task_file() (removes nodes under
+ 		 * uring_lock) to avoid race with io_uring_try_cancel_iowq().
+ 		 */
+-		tctx->io_wq = NULL;
+ 		io_wq_put_and_exit(wq);
++		tctx->io_wq = NULL;
+ 	}
+ }
+ 
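The io_uring hunks above also switch several long-lived allocations from GFP_KERNEL to GFP_KERNEL_ACCOUNT. Since userspace decides how many rsrc nodes and file tables get allocated, charging them to the caller's memory cgroup keeps an unprivileged task from growing kernel memory beyond its limits. A minimal sketch of the pattern (hypothetical helper, not part of the patch):

	#include <linux/slab.h>

	/*
	 * GFP_KERNEL_ACCOUNT is GFP_KERNEL | __GFP_ACCOUNT: the allocation
	 * is charged to the current task's memory cgroup, so user-triggered
	 * kernel allocations count against that cgroup's memory limit.
	 */
	static void *alloc_charged(size_t size)
	{
		return kzalloc(size, GFP_KERNEL_ACCOUNT);
	}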
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index 9023717c5188b..35839acd0004a 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1045,7 +1045,7 @@ iomap_finish_page_writeback(struct inode *inode, struct page *page,
+ 
+ 	if (error) {
+ 		SetPageError(page);
+-		mapping_set_error(inode->i_mapping, -EIO);
++		mapping_set_error(inode->i_mapping, error);
+ 	}
+ 
+ 	WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop);
+diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
+index 498cb70c2c0d0..273a81971ed57 100644
+--- a/fs/lockd/svclock.c
++++ b/fs/lockd/svclock.c
+@@ -395,28 +395,10 @@ nlmsvc_release_lockowner(struct nlm_lock *lock)
+ 		nlmsvc_put_lockowner(lock->fl.fl_owner);
+ }
+ 
+-static void nlmsvc_locks_copy_lock(struct file_lock *new, struct file_lock *fl)
+-{
+-	struct nlm_lockowner *nlm_lo = (struct nlm_lockowner *)fl->fl_owner;
+-	new->fl_owner = nlmsvc_get_lockowner(nlm_lo);
+-}
+-
+-static void nlmsvc_locks_release_private(struct file_lock *fl)
+-{
+-	nlmsvc_put_lockowner((struct nlm_lockowner *)fl->fl_owner);
+-}
+-
+-static const struct file_lock_operations nlmsvc_lock_ops = {
+-	.fl_copy_lock = nlmsvc_locks_copy_lock,
+-	.fl_release_private = nlmsvc_locks_release_private,
+-};
+-
+ void nlmsvc_locks_init_private(struct file_lock *fl, struct nlm_host *host,
+ 						pid_t pid)
+ {
+ 	fl->fl_owner = nlmsvc_find_lockowner(host, pid);
+-	if (fl->fl_owner != NULL)
+-		fl->fl_ops = &nlmsvc_lock_ops;
+ }
+ 
+ /*
+@@ -788,9 +770,21 @@ nlmsvc_notify_blocked(struct file_lock *fl)
+ 	printk(KERN_WARNING "lockd: notification for unknown block!\n");
+ }
+ 
++static fl_owner_t nlmsvc_get_owner(fl_owner_t owner)
++{
++	return nlmsvc_get_lockowner(owner);
++}
++
++static void nlmsvc_put_owner(fl_owner_t owner)
++{
++	nlmsvc_put_lockowner(owner);
++}
++
+ const struct lock_manager_operations nlmsvc_lock_operations = {
+ 	.lm_notify = nlmsvc_notify_blocked,
+ 	.lm_grant = nlmsvc_grant_deferred,
++	.lm_get_owner = nlmsvc_get_owner,
++	.lm_put_owner = nlmsvc_put_owner,
+ };
+ 
+ /*
+diff --git a/fs/nfs/export.c b/fs/nfs/export.c
+index 37a1a88df7717..d772c20bbfd15 100644
+--- a/fs/nfs/export.c
++++ b/fs/nfs/export.c
+@@ -180,5 +180,5 @@ const struct export_operations nfs_export_ops = {
+ 	.fetch_iversion = nfs_fetch_iversion,
+ 	.flags = EXPORT_OP_NOWCC|EXPORT_OP_NOSUBTREECHK|
+ 		EXPORT_OP_CLOSE_BEFORE_UNLINK|EXPORT_OP_REMOTE_FS|
+-		EXPORT_OP_NOATOMIC_ATTR,
++		EXPORT_OP_NOATOMIC_ATTR|EXPORT_OP_SYNC_LOCKS,
+ };
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index be960e47d7f61..28350d62b9bd1 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -335,7 +335,7 @@ static bool pnfs_seqid_is_newer(u32 s1, u32 s2)
+ 
+ static void pnfs_barrier_update(struct pnfs_layout_hdr *lo, u32 newseq)
+ {
+-	if (pnfs_seqid_is_newer(newseq, lo->plh_barrier))
++	if (pnfs_seqid_is_newer(newseq, lo->plh_barrier) || !lo->plh_barrier)
+ 		lo->plh_barrier = newseq;
+ }
+ 
+@@ -347,11 +347,15 @@ pnfs_set_plh_return_info(struct pnfs_layout_hdr *lo, enum pnfs_iomode iomode,
+ 		iomode = IOMODE_ANY;
+ 	lo->plh_return_iomode = iomode;
+ 	set_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags);
+-	if (seq != 0) {
+-		WARN_ON_ONCE(lo->plh_return_seq != 0 && lo->plh_return_seq != seq);
++	/*
++	 * We must set lo->plh_return_seq to avoid livelocks with
++	 * pnfs_layout_need_return()
++	 */
++	if (seq == 0)
++		seq = be32_to_cpu(lo->plh_stateid.seqid);
++	if (!lo->plh_return_seq || pnfs_seqid_is_newer(seq, lo->plh_return_seq))
+ 		lo->plh_return_seq = seq;
+-		pnfs_barrier_update(lo, seq);
+-	}
++	pnfs_barrier_update(lo, seq);
+ }
+ 
+ static void
+@@ -1000,7 +1004,7 @@ pnfs_layout_stateid_blocked(const struct pnfs_layout_hdr *lo,
+ {
+ 	u32 seqid = be32_to_cpu(stateid->seqid);
+ 
+-	return !pnfs_seqid_is_newer(seqid, lo->plh_barrier) && lo->plh_barrier;
++	return lo->plh_barrier && pnfs_seqid_is_newer(lo->plh_barrier, seqid);
+ }
+ 
+ /* lget is set to 1 if called from inside send_layoutget call chain */
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index ab81e8ae32659..42c42ee3f00a2 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -6727,6 +6727,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	struct nfsd4_blocked_lock *nbl = NULL;
+ 	struct file_lock *file_lock = NULL;
+ 	struct file_lock *conflock = NULL;
++	struct super_block *sb;
+ 	__be32 status = 0;
+ 	int lkflg;
+ 	int err;
+@@ -6748,6 +6749,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		dprintk("NFSD: nfsd4_lock: permission denied!\n");
+ 		return status;
+ 	}
++	sb = cstate->current_fh.fh_dentry->d_sb;
+ 
+ 	if (lock->lk_is_new) {
+ 		if (nfsd4_has_session(cstate))
+@@ -6796,7 +6798,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 	fp = lock_stp->st_stid.sc_file;
+ 	switch (lock->lk_type) {
+ 		case NFS4_READW_LT:
+-			if (nfsd4_has_session(cstate))
++			if (nfsd4_has_session(cstate) &&
++			    !(sb->s_export_op->flags & EXPORT_OP_SYNC_LOCKS))
+ 				fl_flags |= FL_SLEEP;
+ 			fallthrough;
+ 		case NFS4_READ_LT:
+@@ -6808,7 +6811,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 			fl_type = F_RDLCK;
+ 			break;
+ 		case NFS4_WRITEW_LT:
+-			if (nfsd4_has_session(cstate))
++			if (nfsd4_has_session(cstate) &&
++			    !(sb->s_export_op->flags & EXPORT_OP_SYNC_LOCKS))
+ 				fl_flags |= FL_SLEEP;
+ 			fallthrough;
+ 		case NFS4_WRITE_LT:
+@@ -6928,8 +6932,7 @@ out:
+ /*
+  * The NFSv4 spec allows a client to do a LOCKT without holding an OPEN,
+  * so we do a temporary open here just to get an open file to pass to
+- * vfs_test_lock.  (Arguably perhaps test_lock should be done with an
+- * inode operation.)
++ * vfs_test_lock.
+  */
+ static __be32 nfsd_test_lock(struct svc_rqst *rqstp, struct svc_fh *fhp, struct file_lock *lock)
+ {
+@@ -6944,7 +6947,9 @@ static __be32 nfsd_test_lock(struct svc_rqst *rqstp, struct svc_fh *fhp, struct
+ 							NFSD_MAY_READ));
+ 	if (err)
+ 		goto out;
++	lock->fl_file = nf->nf_file;
+ 	err = nfserrno(vfs_test_lock(nf->nf_file, lock));
++	lock->fl_file = NULL;
+ out:
+ 	fh_unlock(fhp);
+ 	nfsd_file_put(nf);
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 93efe7048a771..7c1850adec288 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -542,8 +542,10 @@ static int ovl_create_over_whiteout(struct dentry *dentry, struct inode *inode,
+ 			goto out_cleanup;
+ 	}
+ 	err = ovl_instantiate(dentry, inode, newdentry, hardlink);
+-	if (err)
+-		goto out_cleanup;
++	if (err) {
++		ovl_cleanup(udir, newdentry);
++		dput(newdentry);
++	}
+ out_dput:
+ 	dput(upper);
+ out_unlock:
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 8fc8bbf9635b6..e55f861d23fda 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -33,11 +33,6 @@ int sysctl_unprivileged_userfaultfd __read_mostly;
+ 
+ static struct kmem_cache *userfaultfd_ctx_cachep __read_mostly;
+ 
+-enum userfaultfd_state {
+-	UFFD_STATE_WAIT_API,
+-	UFFD_STATE_RUNNING,
+-};
+-
+ /*
+  * Start with fault_pending_wqh and fault_wqh so they're more likely
+  * to be in the same cacheline.
+@@ -69,8 +64,6 @@ struct userfaultfd_ctx {
+ 	unsigned int flags;
+ 	/* features requested from the userspace */
+ 	unsigned int features;
+-	/* state machine */
+-	enum userfaultfd_state state;
+ 	/* released */
+ 	bool released;
+ 	/* memory mappings are changing because of non-cooperative event */
+@@ -104,6 +97,14 @@ struct userfaultfd_wake_range {
+ 	unsigned long len;
+ };
+ 
++/* internal indication that UFFD_API ioctl was successfully executed */
++#define UFFD_FEATURE_INITIALIZED		(1u << 31)
++
++static bool userfaultfd_is_initialized(struct userfaultfd_ctx *ctx)
++{
++	return ctx->features & UFFD_FEATURE_INITIALIZED;
++}
++
+ static int userfaultfd_wake_function(wait_queue_entry_t *wq, unsigned mode,
+ 				     int wake_flags, void *key)
+ {
+@@ -666,7 +667,6 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
+ 
+ 		refcount_set(&ctx->refcount, 1);
+ 		ctx->flags = octx->flags;
+-		ctx->state = UFFD_STATE_RUNNING;
+ 		ctx->features = octx->features;
+ 		ctx->released = false;
+ 		ctx->mmap_changing = false;
+@@ -943,38 +943,33 @@ static __poll_t userfaultfd_poll(struct file *file, poll_table *wait)
+ 
+ 	poll_wait(file, &ctx->fd_wqh, wait);
+ 
+-	switch (ctx->state) {
+-	case UFFD_STATE_WAIT_API:
++	if (!userfaultfd_is_initialized(ctx))
+ 		return EPOLLERR;
+-	case UFFD_STATE_RUNNING:
+-		/*
+-		 * poll() never guarantees that read won't block.
+-		 * userfaults can be waken before they're read().
+-		 */
+-		if (unlikely(!(file->f_flags & O_NONBLOCK)))
+-			return EPOLLERR;
+-		/*
+-		 * lockless access to see if there are pending faults
+-		 * __pollwait last action is the add_wait_queue but
+-		 * the spin_unlock would allow the waitqueue_active to
+-		 * pass above the actual list_add inside
+-		 * add_wait_queue critical section. So use a full
+-		 * memory barrier to serialize the list_add write of
+-		 * add_wait_queue() with the waitqueue_active read
+-		 * below.
+-		 */
+-		ret = 0;
+-		smp_mb();
+-		if (waitqueue_active(&ctx->fault_pending_wqh))
+-			ret = EPOLLIN;
+-		else if (waitqueue_active(&ctx->event_wqh))
+-			ret = EPOLLIN;
+ 
+-		return ret;
+-	default:
+-		WARN_ON_ONCE(1);
++	/*
++	 * poll() never guarantees that read won't block.
++	 * userfaults can be woken before they're read().
++	 */
++	if (unlikely(!(file->f_flags & O_NONBLOCK)))
+ 		return EPOLLERR;
+-	}
++	/*
++	 * lockless access to see if there are pending faults
++	 * __pollwait last action is the add_wait_queue but
++	 * the spin_unlock would allow the waitqueue_active to
++	 * pass above the actual list_add inside
++	 * add_wait_queue critical section. So use a full
++	 * memory barrier to serialize the list_add write of
++	 * add_wait_queue() with the waitqueue_active read
++	 * below.
++	 */
++	ret = 0;
++	smp_mb();
++	if (waitqueue_active(&ctx->fault_pending_wqh))
++		ret = EPOLLIN;
++	else if (waitqueue_active(&ctx->event_wqh))
++		ret = EPOLLIN;
++
++	return ret;
+ }
+ 
+ static const struct file_operations userfaultfd_fops;
+@@ -1169,7 +1164,7 @@ static ssize_t userfaultfd_read(struct file *file, char __user *buf,
+ 	int no_wait = file->f_flags & O_NONBLOCK;
+ 	struct inode *inode = file_inode(file);
+ 
+-	if (ctx->state == UFFD_STATE_WAIT_API)
++	if (!userfaultfd_is_initialized(ctx))
+ 		return -EINVAL;
+ 
+ 	for (;;) {
+@@ -1905,9 +1900,10 @@ out:
+ static inline unsigned int uffd_ctx_features(__u64 user_features)
+ {
+ 	/*
+-	 * For the current set of features the bits just coincide
++	 * For the current set of features the bits just coincide. Set
++	 * UFFD_FEATURE_INITIALIZED to mark the features as enabled.
+ 	 */
+-	return (unsigned int)user_features;
++	return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED;
+ }
+ 
+ /*
+@@ -1920,12 +1916,10 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
+ {
+ 	struct uffdio_api uffdio_api;
+ 	void __user *buf = (void __user *)arg;
++	unsigned int ctx_features;
+ 	int ret;
+ 	__u64 features;
+ 
+-	ret = -EINVAL;
+-	if (ctx->state != UFFD_STATE_WAIT_API)
+-		goto out;
+ 	ret = -EFAULT;
+ 	if (copy_from_user(&uffdio_api, buf, sizeof(uffdio_api)))
+ 		goto out;
+@@ -1945,9 +1939,13 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
+ 	ret = -EFAULT;
+ 	if (copy_to_user(buf, &uffdio_api, sizeof(uffdio_api)))
+ 		goto out;
+-	ctx->state = UFFD_STATE_RUNNING;
++
+ 	/* only enable the requested features for this uffd context */
+-	ctx->features = uffd_ctx_features(features);
++	ctx_features = uffd_ctx_features(features);
++	ret = -EINVAL;
++	if (cmpxchg(&ctx->features, 0, ctx_features) != 0)
++		goto err_out;
++
+ 	ret = 0;
+ out:
+ 	return ret;
+@@ -1964,7 +1962,7 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd,
+ 	int ret = -EINVAL;
+ 	struct userfaultfd_ctx *ctx = file->private_data;
+ 
+-	if (cmd != UFFDIO_API && ctx->state == UFFD_STATE_WAIT_API)
++	if (cmd != UFFDIO_API && !userfaultfd_is_initialized(ctx))
+ 		return -EINVAL;
+ 
+ 	switch(cmd) {
+@@ -2078,7 +2076,6 @@ SYSCALL_DEFINE1(userfaultfd, int, flags)
+ 	refcount_set(&ctx->refcount, 1);
+ 	ctx->flags = flags;
+ 	ctx->features = 0;
+-	ctx->state = UFFD_STATE_WAIT_API;
+ 	ctx->released = false;
+ 	ctx->mmap_changing = false;
+ 	ctx->mm = current->mm;
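The userfaultfd rework above drops the two-value state enum in favour of an internal feature bit, UFFD_FEATURE_INITIALIZED, set atomically in userfaultfd_api(). Because cmpxchg() only succeeds while ctx->features is still 0, the first UFFDIO_API call wins and any repeated or concurrent call fails with -EINVAL, with no separate state field or extra locking. A hedged sketch of the idiom (simplified, error paths elided):

	/* the 0 -> features transition can happen exactly once per context */
	static int uffd_mark_initialized(struct userfaultfd_ctx *ctx,
					 unsigned int features)
	{
		if (cmpxchg(&ctx->features, 0,
			    features | UFFD_FEATURE_INITIALIZED) != 0)
			return -EINVAL;	/* already initialized */
		return 0;
	}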
+diff --git a/include/crypto/public_key.h b/include/crypto/public_key.h
+index 47accec68cb0f..f603325c0c30d 100644
+--- a/include/crypto/public_key.h
++++ b/include/crypto/public_key.h
+@@ -38,9 +38,9 @@ extern void public_key_free(struct public_key *key);
+ struct public_key_signature {
+ 	struct asymmetric_key_id *auth_ids[2];
+ 	u8 *s;			/* Signature */
+-	u32 s_size;		/* Number of bytes in signature */
+ 	u8 *digest;
+-	u8 digest_size;		/* Number of bytes in digest */
++	u32 s_size;		/* Number of bytes in signature */
++	u32 digest_size;	/* Number of bytes in digest */
+ 	const char *pkey_algo;
+ 	const char *hash_algo;
+ 	const char *encoding;
+diff --git a/include/drm/drm_auth.h b/include/drm/drm_auth.h
+index 6bf8b2b789919..f99d3417f3042 100644
+--- a/include/drm/drm_auth.h
++++ b/include/drm/drm_auth.h
+@@ -107,6 +107,7 @@ struct drm_master {
+ };
+ 
+ struct drm_master *drm_master_get(struct drm_master *master);
++struct drm_master *drm_file_get_master(struct drm_file *file_priv);
+ void drm_master_put(struct drm_master **master);
+ bool drm_is_current_master(struct drm_file *fpriv);
+ 
+diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
+index b81b3bfb08c8d..726cfe0ff5f5c 100644
+--- a/include/drm/drm_file.h
++++ b/include/drm/drm_file.h
+@@ -226,15 +226,27 @@ struct drm_file {
+ 	/**
+ 	 * @master:
+ 	 *
+-	 * Master this node is currently associated with. Only relevant if
+-	 * drm_is_primary_client() returns true. Note that this only
+-	 * matches &drm_device.master if the master is the currently active one.
++	 * Master this node is currently associated with. Protected by struct
++	 * &drm_device.master_mutex, and serialized by @master_lookup_lock.
++	 *
++	 * Only relevant if drm_is_primary_client() returns true. Note that
++	 * this only matches &drm_device.master if the master is the currently
++	 * active one.
++	 *
++	 * When dereferencing this pointer, either hold struct
++	 * &drm_device.master_mutex for the duration of the pointer's use, or
++	 * use drm_file_get_master() if struct &drm_device.master_mutex is not
++	 * currently held and there is no other need to hold it. This prevents
++	 * @master from being freed during use.
+ 	 *
+ 	 * See also @authentication and @is_master and the :ref:`section on
+ 	 * primary nodes and authentication <drm_primary_node>`.
+ 	 */
+ 	struct drm_master *master;
+ 
++	/** @master_lock: Serializes @master. */
++	spinlock_t master_lookup_lock;
++
+ 	/** @pid: Process that opened this file. */
+ 	struct pid *pid;
+ 
+diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
+index e030f7510cd3a..b21a553e2e062 100644
+--- a/include/linux/ethtool.h
++++ b/include/linux/ethtool.h
+@@ -17,8 +17,6 @@
+ #include <linux/compat.h>
+ #include <uapi/linux/ethtool.h>
+ 
+-#ifdef CONFIG_COMPAT
+-
+ struct compat_ethtool_rx_flow_spec {
+ 	u32		flow_type;
+ 	union ethtool_flow_union h_u;
+@@ -38,8 +36,6 @@ struct compat_ethtool_rxnfc {
+ 	u32				rule_locs[];
+ };
+ 
+-#endif /* CONFIG_COMPAT */
+-
+ #include <linux/rculist.h>
+ 
+ /**
+diff --git a/include/linux/exportfs.h b/include/linux/exportfs.h
+index fe848901fcc3a..3260fe7148462 100644
+--- a/include/linux/exportfs.h
++++ b/include/linux/exportfs.h
+@@ -221,6 +221,8 @@ struct export_operations {
+ #define EXPORT_OP_NOATOMIC_ATTR		(0x10) /* Filesystem cannot supply
+ 						  atomic attribute updates
+ 						*/
++#define EXPORT_OP_SYNC_LOCKS		(0x20) /* Filesystem can't do
++						  asynchronous blocking locks */
+ 	unsigned long	flags;
+ };
+ 
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index 5487a80617a30..0021ea8f7c3bd 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -34,6 +34,7 @@
+ #define F2FS_ROOT_INO(sbi)	((sbi)->root_ino_num)
+ #define F2FS_NODE_INO(sbi)	((sbi)->node_ino_num)
+ #define F2FS_META_INO(sbi)	((sbi)->meta_ino_num)
++#define F2FS_COMPRESS_INO(sbi)	(NM_I(sbi)->max_nid)
+ 
+ #define F2FS_MAX_QUOTAS		3
+ 
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 28a110ec2a0d5..0b9a894c20c85 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -835,6 +835,11 @@ static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
+ 
+ void hugetlb_report_usage(struct seq_file *m, struct mm_struct *mm);
+ 
++static inline void hugetlb_count_init(struct mm_struct *mm)
++{
++	atomic_long_set(&mm->hugetlb_usage, 0);
++}
++
+ static inline void hugetlb_count_add(long l, struct mm_struct *mm)
+ {
+ 	atomic_long_add(l, &mm->hugetlb_usage);
+@@ -1019,6 +1024,10 @@ static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
+ 	return &mm->page_table_lock;
+ }
+ 
++static inline void hugetlb_count_init(struct mm_struct *mm)
++{
++}
++
+ static inline void hugetlb_report_usage(struct seq_file *f, struct mm_struct *m)
+ {
+ }
+diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
+index 0bff345c4bc68..171bf1be40115 100644
+--- a/include/linux/hugetlb_cgroup.h
++++ b/include/linux/hugetlb_cgroup.h
+@@ -118,6 +118,13 @@ static inline void hugetlb_cgroup_put_rsvd_cgroup(struct hugetlb_cgroup *h_cg)
+ 	css_put(&h_cg->css);
+ }
+ 
++static inline void resv_map_dup_hugetlb_cgroup_uncharge_info(
++						struct resv_map *resv_map)
++{
++	if (resv_map->css)
++		css_get(resv_map->css);
++}
++
+ extern int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
+ 					struct hugetlb_cgroup **ptr);
+ extern int hugetlb_cgroup_charge_cgroup_rsvd(int idx, unsigned long nr_pages,
+@@ -196,6 +203,11 @@ static inline void hugetlb_cgroup_put_rsvd_cgroup(struct hugetlb_cgroup *h_cg)
+ {
+ }
+ 
++static inline void resv_map_dup_hugetlb_cgroup_uncharge_info(
++						struct resv_map *resv_map)
++{
++}
++
+ static inline int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
+ 					       struct hugetlb_cgroup **ptr)
+ {
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index 03faf20a6817e..cf2dafe3ce608 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -124,9 +124,9 @@
+ #define DMAR_MTRR_PHYSMASK8_REG 0x208
+ #define DMAR_MTRR_PHYSBASE9_REG 0x210
+ #define DMAR_MTRR_PHYSMASK9_REG 0x218
+-#define DMAR_VCCAP_REG		0xe00 /* Virtual command capability register */
+-#define DMAR_VCMD_REG		0xe10 /* Virtual command register */
+-#define DMAR_VCRSP_REG		0xe20 /* Virtual command response register */
++#define DMAR_VCCAP_REG		0xe30 /* Virtual command capability register */
++#define DMAR_VCMD_REG		0xe00 /* Virtual command register */
++#define DMAR_VCRSP_REG		0xe10 /* Virtual command response register */
+ 
+ #define DMAR_IQER_REG_IQEI(reg)		FIELD_GET(GENMASK_ULL(3, 0), reg)
+ #define DMAR_IQER_REG_ITESID(reg)	FIELD_GET(GENMASK_ULL(47, 32), reg)
+diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
+index 28f32fd00fe9f..57a8aa463ccb8 100644
+--- a/include/linux/memory_hotplug.h
++++ b/include/linux/memory_hotplug.h
+@@ -366,8 +366,8 @@ extern void sparse_remove_section(struct mem_section *ms,
+ 		unsigned long map_offset, struct vmem_altmap *altmap);
+ extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
+ 					  unsigned long pnum);
+-extern struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
+-		unsigned long nr_pages);
++extern struct zone *zone_for_pfn_range(int online_type, int nid,
++		unsigned long start_pfn, unsigned long nr_pages);
+ extern int arch_create_linear_mapping(int nid, u64 start, u64 size,
+ 				      struct mhp_params *params);
+ void arch_remove_linear_mapping(u64 start, u64 size);
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 1199ffd305d1a..fcd8ec0b7408e 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -167,7 +167,7 @@ void synchronize_rcu_tasks(void);
+ # define synchronize_rcu_tasks synchronize_rcu
+ # endif
+ 
+-# ifdef CONFIG_TASKS_RCU_TRACE
++# ifdef CONFIG_TASKS_TRACE_RCU
+ # define rcu_tasks_trace_qs(t)						\
+ 	do {								\
+ 		if (!likely(READ_ONCE((t)->trc_reader_checked)) &&	\
+diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
+index d1672de9ca89e..87b325aec5085 100644
+--- a/include/linux/rtmutex.h
++++ b/include/linux/rtmutex.h
+@@ -52,17 +52,22 @@ do { \
+ } while (0)
+ 
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+-#define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname) \
+-	, .dep_map = { .name = #mutexname }
++#define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)	\
++	.dep_map = {					\
++		.name = #mutexname,			\
++		.wait_type_inner = LD_WAIT_SLEEP,	\
++	}
+ #else
+ #define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)
+ #endif
+ 
+-#define __RT_MUTEX_INITIALIZER(mutexname) \
+-	{ .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(mutexname.wait_lock) \
+-	, .waiters = RB_ROOT_CACHED \
+-	, .owner = NULL \
+-	__DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)}
++#define __RT_MUTEX_INITIALIZER(mutexname)				\
++{									\
++	.wait_lock = __RAW_SPIN_LOCK_UNLOCKED(mutexname.wait_lock),	\
++	.waiters = RB_ROOT_CACHED,					\
++	.owner = NULL,							\
++	__DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)			\
++}
+ 
+ #define DEFINE_RT_MUTEX(mutexname) \
+ 	struct rt_mutex mutexname = __RT_MUTEX_INITIALIZER(mutexname)
+diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
+index 61b622e334ee5..52e5b11e4b6a9 100644
+--- a/include/linux/sunrpc/xprt.h
++++ b/include/linux/sunrpc/xprt.h
+@@ -422,6 +422,7 @@ void			xprt_unlock_connect(struct rpc_xprt *, void *);
+ #define XPRT_CONGESTED		(9)
+ #define XPRT_CWND_WAIT		(10)
+ #define XPRT_WRITE_SPACE	(11)
++#define XPRT_SND_IS_COOKIE	(12)
+ 
+ static inline void xprt_set_connected(struct rpc_xprt *xprt)
+ {
+diff --git a/include/linux/sunrpc/xprtsock.h b/include/linux/sunrpc/xprtsock.h
+index 3c1423ee74b4e..8c2a712cb2420 100644
+--- a/include/linux/sunrpc/xprtsock.h
++++ b/include/linux/sunrpc/xprtsock.h
+@@ -10,6 +10,7 @@
+ 
+ int		init_socket_xprt(void);
+ void		cleanup_socket_xprt(void);
++unsigned short	get_srcport(struct rpc_xprt *);
+ 
+ #define RPC_MIN_RESVPORT	(1U)
+ #define RPC_MAX_RESVPORT	(65535U)
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 34a92d5ed12b5..22a60291f2037 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1411,6 +1411,10 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ 				!hci_dev_test_flag(dev, HCI_AUTO_OFF))
+ #define bredr_sc_enabled(dev)  (lmp_sc_capable(dev) && \
+ 				hci_dev_test_flag(dev, HCI_SC_ENABLED))
++#define rpa_valid(dev)         (bacmp(&dev->rpa, BDADDR_ANY) && \
++				!hci_dev_test_flag(dev, HCI_RPA_EXPIRED))
++#define adv_rpa_valid(adv)     (bacmp(&adv->random_addr, BDADDR_ANY) && \
++				!adv->rpa_expired)
+ 
+ #define scan_1m(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_1M) || \
+ 		      ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_1M))
+diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
+index dc5c1e69cd9f2..26788244d75bb 100644
+--- a/include/net/flow_offload.h
++++ b/include/net/flow_offload.h
+@@ -451,6 +451,7 @@ struct flow_block_offload {
+ 	struct list_head *driver_block_list;
+ 	struct netlink_ext_ack *extack;
+ 	struct Qdisc *sch;
++	struct list_head *cb_list_head;
+ };
+ 
+ enum tc_setup_type;
+diff --git a/include/uapi/linux/serial_reg.h b/include/uapi/linux/serial_reg.h
+index be07b5470f4bb..f51bc8f368134 100644
+--- a/include/uapi/linux/serial_reg.h
++++ b/include/uapi/linux/serial_reg.h
+@@ -62,6 +62,7 @@
+  * ST16C654:	 8  16  56  60		 8  16  32  56	PORT_16654
+  * TI16C750:	 1  16  32  56		xx  xx  xx  xx	PORT_16750
+  * TI16C752:	 8  16  56  60		 8  16  32  56
++ * OX16C950:	16  32 112 120		16  32  64 112	PORT_16C950
+  * Tegra:	 1   4   8  14		16   8   4   1	PORT_TEGRA
+  */
+ #define UART_FCR_R_TRIG_00	0x00
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index 14de1271463fd..4457545299177 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -794,7 +794,7 @@ static int dump_show(struct seq_file *seq, void *v)
+ }
+ DEFINE_SHOW_ATTRIBUTE(dump);
+ 
+-static void dma_debug_fs_init(void)
++static int __init dma_debug_fs_init(void)
+ {
+ 	struct dentry *dentry = debugfs_create_dir("dma-api", NULL);
+ 
+@@ -807,7 +807,10 @@ static void dma_debug_fs_init(void)
+ 	debugfs_create_u32("nr_total_entries", 0444, dentry, &nr_total_entries);
+ 	debugfs_create_file("driver_filter", 0644, dentry, NULL, &filter_fops);
+ 	debugfs_create_file("dump", 0444, dentry, NULL, &dump_fops);
++
++	return 0;
+ }
++core_initcall_sync(dma_debug_fs_init);
+ 
+ static int device_dma_allocations(struct device *dev, struct dma_debug_entry **out_entry)
+ {
+@@ -892,8 +895,6 @@ static int dma_debug_init(void)
+ 		spin_lock_init(&dma_entry_hash[i].lock);
+ 	}
+ 
+-	dma_debug_fs_init();
+-
+ 	nr_pages = DIV_ROUND_UP(nr_prealloc_entries, DMA_DEBUG_DYNAMIC_ENTRIES);
+ 	for (i = 0; i < nr_pages; ++i)
+ 		dma_debug_create_entries(GFP_KERNEL);
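The dma-debug change above registers the debugfs files from a dedicated core_initcall_sync() instead of from dma_debug_init(), decoupling the two init paths so the files are created exactly once at a well-defined point in boot. A hedged sketch of the initcall pattern (hypothetical names):

	static int __init example_debugfs_init(void)
	{
		debugfs_create_dir("example", NULL);
		return 0;
	}
	/* runs once during boot, after all core_initcall() callbacks */
	core_initcall_sync(example_debugfs_init);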
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 567fee3405003..268f1e7321cb3 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -1045,6 +1045,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
+ 	mm->pmd_huge_pte = NULL;
+ #endif
+ 	mm_init_uprobes_state(mm);
++	hugetlb_count_init(mm);
+ 
+ 	if (current->mm) {
+ 		mm->flags = current->mm->flags & MMF_INIT_MASK;
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index 3c20afbc19e13..ae5afba2162b7 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -1556,7 +1556,7 @@ void __sched __rt_mutex_init(struct rt_mutex *lock, const char *name,
+ 		     struct lock_class_key *key)
+ {
+ 	debug_check_no_locks_freed((void *)lock, sizeof(*lock));
+-	lockdep_init_map(&lock->dep_map, name, key, 0);
++	lockdep_init_map_wait(&lock->dep_map, name, key, 0, LD_WAIT_SLEEP);
+ 
+ 	__rt_mutex_basic_init(lock);
+ }
+diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c
+index ca43239a255ad..cb5a25a8a0cc7 100644
+--- a/kernel/pid_namespace.c
++++ b/kernel/pid_namespace.c
+@@ -51,7 +51,8 @@ static struct kmem_cache *create_pid_cachep(unsigned int level)
+ 	mutex_lock(&pid_caches_mutex);
+ 	/* Name collision forces to do allocation under mutex. */
+ 	if (!*pkc)
+-		*pkc = kmem_cache_create(name, len, 0, SLAB_HWCACHE_ALIGN, 0);
++		*pkc = kmem_cache_create(name, len, 0,
++					 SLAB_HWCACHE_ALIGN | SLAB_ACCOUNT, 0);
+ 	mutex_unlock(&pid_caches_mutex);
+ 	/* current can fail, but someone else can succeed. */
+ 	return READ_ONCE(*pkc);
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 421c35571797e..d6731384dd47b 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -2545,6 +2545,7 @@ void console_unlock(void)
+ 	bool do_cond_resched, retry;
+ 	struct printk_info info;
+ 	struct printk_record r;
++	u64 __maybe_unused next_seq;
+ 
+ 	if (console_suspended) {
+ 		up_console_sem();
+@@ -2654,8 +2655,10 @@ skip:
+ 			cond_resched();
+ 	}
+ 
+-	console_locked = 0;
++	/* Get a consistent value of the next-to-be-used sequence number. */
++	next_seq = console_seq;
+ 
++	console_locked = 0;
+ 	up_console_sem();
+ 
+ 	/*
+@@ -2664,7 +2667,7 @@ skip:
+ 	 * there's a new owner and the console_unlock() from them will do the
+ 	 * flush, no worries.
+ 	 */
+-	retry = prb_read_valid(prb, console_seq, NULL);
++	retry = prb_read_valid(prb, next_seq, NULL);
+ 	printk_safe_exit_irqrestore(flags);
+ 
+ 	if (retry && console_trylock())
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index ad0156b869371..d149050515355 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -2995,17 +2995,17 @@ static void noinstr rcu_dynticks_task_exit(void)
+ /* Turn on heavyweight RCU tasks trace readers on idle/user entry. */
+ static void rcu_dynticks_task_trace_enter(void)
+ {
+-#ifdef CONFIG_TASKS_RCU_TRACE
++#ifdef CONFIG_TASKS_TRACE_RCU
+ 	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
+ 		current->trc_reader_special.b.need_mb = true;
+-#endif /* #ifdef CONFIG_TASKS_RCU_TRACE */
++#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
+ }
+ 
+ /* Turn off heavyweight RCU tasks trace readers on idle/user exit. */
+ static void rcu_dynticks_task_trace_exit(void)
+ {
+-#ifdef CONFIG_TASKS_RCU_TRACE
++#ifdef CONFIG_TASKS_TRACE_RCU
+ 	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
+ 		current->trc_reader_special.b.need_mb = false;
+-#endif /* #ifdef CONFIG_TASKS_RCU_TRACE */
++#endif /* #ifdef CONFIG_TASKS_TRACE_RCU */
+ }
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index f148eacda55a9..542c2d03dab65 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -5902,6 +5902,13 @@ static void __init wq_numa_init(void)
+ 		return;
+ 	}
+ 
++	for_each_possible_cpu(cpu) {
++		if (WARN_ON(cpu_to_node(cpu) == NUMA_NO_NODE)) {
++			pr_warn("workqueue: NUMA node mapping not available for cpu%d, disabling NUMA support\n", cpu);
++			return;
++		}
++	}
++
+ 	wq_update_unbound_numa_attrs_buf = alloc_workqueue_attrs();
+ 	BUG_ON(!wq_update_unbound_numa_attrs_buf);
+ 
+@@ -5919,11 +5926,6 @@ static void __init wq_numa_init(void)
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		node = cpu_to_node(cpu);
+-		if (WARN_ON(node == NUMA_NO_NODE)) {
+-			pr_warn("workqueue: NUMA node mapping not available for cpu%d, disabling NUMA support\n", cpu);
+-			/* happens iff arch is bonkers, let's just proceed */
+-			return;
+-		}
+ 		cpumask_set_cpu(cpu, tbl[node]);
+ 	}
+ 
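The workqueue hunks above hoist the NUMA_NO_NODE check ahead of the allocations of wq_update_unbound_numa_attrs_buf and tbl[], so the early return on a bogus CPU-to-node mapping can no longer leak those allocations. The shape of the fix is the usual validate-before-allocate pattern; a hedged sketch (hypothetical, simplified):

	/* do every check that can bail out before allocating anything */
	for_each_possible_cpu(cpu) {
		if (WARN_ON(cpu_to_node(cpu) == NUMA_NO_NODE))
			return;			/* nothing to free yet */
	}

	tbl = kcalloc(nr_node_ids, sizeof(tbl[0]), GFP_KERNEL);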
+diff --git a/lib/test_bpf.c b/lib/test_bpf.c
+index 4dc4dcbecd129..acf825d816714 100644
+--- a/lib/test_bpf.c
++++ b/lib/test_bpf.c
+@@ -4286,8 +4286,8 @@ static struct bpf_test tests[] = {
+ 		.u.insns_int = {
+ 			BPF_LD_IMM64(R0, 0),
+ 			BPF_LD_IMM64(R1, 0xffffffffffffffffLL),
+-			BPF_STX_MEM(BPF_W, R10, R1, -40),
+-			BPF_LDX_MEM(BPF_W, R0, R10, -40),
++			BPF_STX_MEM(BPF_DW, R10, R1, -40),
++			BPF_LDX_MEM(BPF_DW, R0, R10, -40),
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		INTERNAL,
+@@ -6659,7 +6659,14 @@ static int run_one(const struct bpf_prog *fp, struct bpf_test *test)
+ 		u64 duration;
+ 		u32 ret;
+ 
+-		if (test->test[i].data_size == 0 &&
++		/*
++		 * NOTE: Several sub-tests may be present, in which case
++		 * a zero {data_size, result} tuple indicates the end of
++		 * the sub-test array. The first test is always run,
++		 * even if both data_size and result happen to be zero.
++		 */
++		if (i > 0 &&
++		    test->test[i].data_size == 0 &&
+ 		    test->test[i].result == 0)
+ 			break;
+ 
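The test_bpf fix above matters because the test was meant to exercise double-word stores: it loads a 64-bit pattern into R1 but stored and reloaded it with word-sized (BPF_W) accesses, so only the low 32 bits ever round-tripped and the double-word path went untested. The truncation is easy to see in plain C (illustrative only):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t v = 0xffffffffffffffffULL;
		uint32_t w = (uint32_t)v;	/* what a BPF_W store keeps */

		/* prints ffffffff, not ffffffffffffffff */
		printf("%llx\n", (unsigned long long)w);
		return 0;
	}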
+diff --git a/lib/test_stackinit.c b/lib/test_stackinit.c
+index f93b1e145ada7..16b1d3a3a4975 100644
+--- a/lib/test_stackinit.c
++++ b/lib/test_stackinit.c
+@@ -67,10 +67,10 @@ static bool range_contains(char *haystack_start, size_t haystack_size,
+ #define INIT_STRUCT_none		/**/
+ #define INIT_STRUCT_zero		= { }
+ #define INIT_STRUCT_static_partial	= { .two = 0, }
+-#define INIT_STRUCT_static_all		= { .one = arg->one,		\
+-					    .two = arg->two,		\
+-					    .three = arg->three,	\
+-					    .four = arg->four,		\
++#define INIT_STRUCT_static_all		= { .one = 0,			\
++					    .two = 0,			\
++					    .three = 0,			\
++					    .four = 0,			\
+ 					}
+ #define INIT_STRUCT_dynamic_partial	= { .two = arg->two, }
+ #define INIT_STRUCT_dynamic_all		= { .one = arg->one,		\
+@@ -84,8 +84,7 @@ static bool range_contains(char *haystack_start, size_t haystack_size,
+ 					var.one = 0;			\
+ 					var.two = 0;			\
+ 					var.three = 0;			\
+-					memset(&var.four, 0,		\
+-					       sizeof(var.four))
++					var.four = 0
+ 
+ /*
+  * @name: unique string name for the test
+@@ -210,18 +209,13 @@ struct test_small_hole {
+ 	unsigned long four;
+ };
+ 
+-/* Try to trigger unhandled padding in a structure. */
+-struct test_aligned {
+-	u32 internal1;
+-	u64 internal2;
+-} __aligned(64);
+-
++/* Trigger unhandled padding in a structure. */
+ struct test_big_hole {
+ 	u8 one;
+ 	u8 two;
+ 	u8 three;
+ 	/* 61 byte padding hole here. */
+-	struct test_aligned four;
++	u8 four __aligned(64);
+ } __aligned(64);
+ 
+ struct test_trailing_hole {
+diff --git a/mm/hmm.c b/mm/hmm.c
+index 943cb2ba44423..fb617054f9631 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -291,10 +291,13 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
+ 		goto fault;
+ 
+ 	/*
++	 * Bypass devmap ptes (such as DAX pages) when all requested pfn
++	 * flags (pfn_req_flags) are fulfilled.
+ 	 * Since each architecture defines a struct page for the zero page, just
+ 	 * fall through and treat it like a normal page.
+ 	 */
+-	if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
++	if (pte_special(pte) && !pte_devmap(pte) &&
++	    !is_zero_pfn(pte_pfn(pte))) {
+ 		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
+ 			pte_unmap(ptep);
+ 			return -EFAULT;
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 6ad419e7e0a4c..dcc6ded8ff221 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3840,8 +3840,10 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
+ 	 * after this open call completes.  It is therefore safe to take a
+ 	 * new reference here without additional locking.
+ 	 */
+-	if (resv && is_vma_resv_set(vma, HPAGE_RESV_OWNER))
++	if (resv && is_vma_resv_set(vma, HPAGE_RESV_OWNER)) {
++		resv_map_dup_hugetlb_cgroup_uncharge_info(resv);
+ 		kref_get(&resv->refs);
++	}
+ }
+ 
+ static void hugetlb_vm_op_close(struct vm_area_struct *vma)
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 1bf3f86812509..8b2bec1963b48 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -834,8 +834,8 @@ static inline struct zone *default_zone_for_pfn(int nid, unsigned long start_pfn
+ 	return movable_node_enabled ? movable_zone : kernel_zone;
+ }
+ 
+-struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
+-		unsigned long nr_pages)
++struct zone *zone_for_pfn_range(int online_type, int nid,
++		unsigned long start_pfn, unsigned long nr_pages)
+ {
+ 	if (online_type == MMOP_ONLINE_KERNEL)
+ 		return default_kernel_zone_for_pfn(nid, start_pfn, nr_pages);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index f62d81f61b56b..6b06e472a07d8 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2576,7 +2576,7 @@ out:
+ 			cgroup_size = max(cgroup_size, protection);
+ 
+ 			scan = lruvec_size - lruvec_size * protection /
+-				cgroup_size;
++				(cgroup_size + 1);
+ 
+ 			/*
+ 			 * Minimally target SWAP_CLUSTER_MAX pages to keep
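In the vmscan hunk above, cgroup_size was already clamped with max(cgroup_size, protection), but both operands can be zero for a cgroup with no protection configured and nothing charged, so the unpatched expression divides by zero. Adding 1 keeps the divisor positive while shifting the scan target negligibly. A standalone illustration with hypothetical values:

	#include <stdio.h>

	int main(void)
	{
		unsigned long lruvec_size = 64, protection = 0, cgroup_size = 0;

		/* the old form, .../cgroup_size, would fault here */
		unsigned long scan = lruvec_size -
			lruvec_size * protection / (cgroup_size + 1);

		printf("scan = %lu\n", scan);	/* 64: nothing is protected */
		return 0;
	}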
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index f4fea28e05da6..3ec1a51a6944e 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -138,7 +138,7 @@ static bool p9_xen_write_todo(struct xen_9pfs_dataring *ring, RING_IDX size)
+ 
+ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
+ {
+-	struct xen_9pfs_front_priv *priv = NULL;
++	struct xen_9pfs_front_priv *priv;
+ 	RING_IDX cons, prod, masked_cons, masked_prod;
+ 	unsigned long flags;
+ 	u32 size = p9_req->tc.size;
+@@ -151,7 +151,7 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
+ 			break;
+ 	}
+ 	read_unlock(&xen_9pfs_lock);
+-	if (!priv || priv->client != client)
++	if (list_entry_is_head(priv, &xen_9pfs_devs, list))
+ 		return -EINVAL;
+ 
+ 	num = p9_req->tc.tag % priv->num_rings;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 62c99e015609d..89f37f26f2535 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -40,6 +40,8 @@
+ #define ZERO_KEY "\x00\x00\x00\x00\x00\x00\x00\x00" \
+ 		 "\x00\x00\x00\x00\x00\x00\x00\x00"
+ 
++#define secs_to_jiffies(_secs) msecs_to_jiffies((_secs) * 1000)
++
+ /* Handle HCI Event packets */
+ 
+ static void hci_cc_inquiry_cancel(struct hci_dev *hdev, struct sk_buff *skb,
+@@ -1171,6 +1173,12 @@ static void hci_cc_le_set_random_addr(struct hci_dev *hdev, struct sk_buff *skb)
+ 
+ 	bacpy(&hdev->random_addr, sent);
+ 
++	if (!bacmp(&hdev->rpa, sent)) {
++		hci_dev_clear_flag(hdev, HCI_RPA_EXPIRED);
++		queue_delayed_work(hdev->workqueue, &hdev->rpa_expired,
++				   secs_to_jiffies(hdev->rpa_timeout));
++	}
++
+ 	hci_dev_unlock(hdev);
+ }
+ 
+@@ -1201,24 +1209,30 @@ static void hci_cc_le_set_adv_set_random_addr(struct hci_dev *hdev,
+ {
+ 	__u8 status = *((__u8 *) skb->data);
+ 	struct hci_cp_le_set_adv_set_rand_addr *cp;
+-	struct adv_info *adv_instance;
++	struct adv_info *adv;
+ 
+ 	if (status)
+ 		return;
+ 
+ 	cp = hci_sent_cmd_data(hdev, HCI_OP_LE_SET_ADV_SET_RAND_ADDR);
+-	if (!cp)
++	/* Update only the adv instance, since handle 0x00 shall be using
++	 * HCI_OP_LE_SET_RANDOM_ADDR, which allows both extended and
++	 * non-extended advertising.
++	 */
++	if (!cp || !cp->handle)
+ 		return;
+ 
+ 	hci_dev_lock(hdev);
+ 
+-	if (!cp->handle) {
+-		/* Store in hdev for instance 0 (Set adv and Directed advs) */
+-		bacpy(&hdev->random_addr, &cp->bdaddr);
+-	} else {
+-		adv_instance = hci_find_adv_instance(hdev, cp->handle);
+-		if (adv_instance)
+-			bacpy(&adv_instance->random_addr, &cp->bdaddr);
++	adv = hci_find_adv_instance(hdev, cp->handle);
++	if (adv) {
++		bacpy(&adv->random_addr, &cp->bdaddr);
++		if (!bacmp(&hdev->rpa, &cp->bdaddr)) {
++			adv->rpa_expired = false;
++			queue_delayed_work(hdev->workqueue,
++					   &adv->rpa_expired_cb,
++					   secs_to_jiffies(hdev->rpa_timeout));
++		}
+ 	}
+ 
+ 	hci_dev_unlock(hdev);
+@@ -4373,6 +4387,21 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev,
+ 
+ 	switch (ev->status) {
+ 	case 0x00:
++		/* The synchronous connection complete event should only be
++		 * sent once per new connection. Receiving a successful
++		 * complete event when the connection status is already
++		 * BT_CONNECTED means that the device is misbehaving and sent
++		 * multiple complete event packets for the same new connection.
++		 *
++		 * Registering the device more than once can corrupt kernel
++		 * memory, hence upon detecting this invalid event, we report
++		 * an error and ignore the packet.
++		 */
++		if (conn->state == BT_CONNECTED) {
++			bt_dev_err(hdev, "Ignoring connect complete event for existing connection");
++			goto unlock;
++		}
++
+ 		conn->handle = __le16_to_cpu(ev->handle);
+ 		conn->state  = BT_CONNECTED;
+ 		conn->type   = ev->link_type;
+@@ -5095,9 +5124,64 @@ static void hci_disconn_phylink_complete_evt(struct hci_dev *hdev,
+ }
+ #endif
+ 
++static void le_conn_update_addr(struct hci_conn *conn, bdaddr_t *bdaddr,
++				u8 bdaddr_type, bdaddr_t *local_rpa)
++{
++	if (conn->out) {
++		conn->dst_type = bdaddr_type;
++		conn->resp_addr_type = bdaddr_type;
++		bacpy(&conn->resp_addr, bdaddr);
++
++		/* If the controller has set a Local RPA then it must be
++		 * used instead of hdev->rpa.
++		 */
++		if (local_rpa && bacmp(local_rpa, BDADDR_ANY)) {
++			conn->init_addr_type = ADDR_LE_DEV_RANDOM;
++			bacpy(&conn->init_addr, local_rpa);
++		} else if (hci_dev_test_flag(conn->hdev, HCI_PRIVACY)) {
++			conn->init_addr_type = ADDR_LE_DEV_RANDOM;
++			bacpy(&conn->init_addr, &conn->hdev->rpa);
++		} else {
++			hci_copy_identity_address(conn->hdev, &conn->init_addr,
++						  &conn->init_addr_type);
++		}
++	} else {
++		conn->resp_addr_type = conn->hdev->adv_addr_type;
++		/* If the controller has set a Local RPA then it must be
++		 * used instead of hdev->rpa.
++		 */
++		if (local_rpa && bacmp(local_rpa, BDADDR_ANY)) {
++			conn->resp_addr_type = ADDR_LE_DEV_RANDOM;
++			bacpy(&conn->resp_addr, local_rpa);
++		} else if (conn->hdev->adv_addr_type == ADDR_LE_DEV_RANDOM) {
++			/* In case of ext adv, resp_addr will be updated in
++			 * Adv Terminated event.
++			 */
++			if (!ext_adv_capable(conn->hdev))
++				bacpy(&conn->resp_addr,
++				      &conn->hdev->random_addr);
++		} else {
++			bacpy(&conn->resp_addr, &conn->hdev->bdaddr);
++		}
++
++		conn->init_addr_type = bdaddr_type;
++		bacpy(&conn->init_addr, bdaddr);
++
++		/* For incoming connections, set the default minimum
++		 * and maximum connection interval. They will be used
++		 * to check if the parameters are in range and if not
++		 * trigger the connection update procedure.
++		 */
++		conn->le_conn_min_interval = conn->hdev->le_conn_min_interval;
++		conn->le_conn_max_interval = conn->hdev->le_conn_max_interval;
++	}
++}
++
+ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+-			bdaddr_t *bdaddr, u8 bdaddr_type, u8 role, u16 handle,
+-			u16 interval, u16 latency, u16 supervision_timeout)
++				 bdaddr_t *bdaddr, u8 bdaddr_type,
++				 bdaddr_t *local_rpa, u8 role, u16 handle,
++				 u16 interval, u16 latency,
++				 u16 supervision_timeout)
+ {
+ 	struct hci_conn_params *params;
+ 	struct hci_conn *conn;
+@@ -5145,32 +5229,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ 		cancel_delayed_work(&conn->le_conn_timeout);
+ 	}
+ 
+-	if (!conn->out) {
+-		/* Set the responder (our side) address type based on
+-		 * the advertising address type.
+-		 */
+-		conn->resp_addr_type = hdev->adv_addr_type;
+-		if (hdev->adv_addr_type == ADDR_LE_DEV_RANDOM) {
+-			/* In case of ext adv, resp_addr will be updated in
+-			 * Adv Terminated event.
+-			 */
+-			if (!ext_adv_capable(hdev))
+-				bacpy(&conn->resp_addr, &hdev->random_addr);
+-		} else {
+-			bacpy(&conn->resp_addr, &hdev->bdaddr);
+-		}
+-
+-		conn->init_addr_type = bdaddr_type;
+-		bacpy(&conn->init_addr, bdaddr);
+-
+-		/* For incoming connections, set the default minimum
+-		 * and maximum connection interval. They will be used
+-		 * to check if the parameters are in range and if not
+-		 * trigger the connection update procedure.
+-		 */
+-		conn->le_conn_min_interval = hdev->le_conn_min_interval;
+-		conn->le_conn_max_interval = hdev->le_conn_max_interval;
+-	}
++	le_conn_update_addr(conn, bdaddr, bdaddr_type, local_rpa);
+ 
+ 	/* Lookup the identity address from the stored connection
+ 	 * address and address type.
+@@ -5264,7 +5323,7 @@ static void hci_le_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+ 
+ 	le_conn_complete_evt(hdev, ev->status, &ev->bdaddr, ev->bdaddr_type,
+-			     ev->role, le16_to_cpu(ev->handle),
++			     NULL, ev->role, le16_to_cpu(ev->handle),
+ 			     le16_to_cpu(ev->interval),
+ 			     le16_to_cpu(ev->latency),
+ 			     le16_to_cpu(ev->supervision_timeout));
+@@ -5278,7 +5337,7 @@ static void hci_le_enh_conn_complete_evt(struct hci_dev *hdev,
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+ 
+ 	le_conn_complete_evt(hdev, ev->status, &ev->bdaddr, ev->bdaddr_type,
+-			     ev->role, le16_to_cpu(ev->handle),
++			     &ev->local_rpa, ev->role, le16_to_cpu(ev->handle),
+ 			     le16_to_cpu(ev->interval),
+ 			     le16_to_cpu(ev->latency),
+ 			     le16_to_cpu(ev->supervision_timeout));
+@@ -5314,7 +5373,8 @@ static void hci_le_ext_adv_term_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 	if (conn) {
+ 		struct adv_info *adv_instance;
+ 
+-		if (hdev->adv_addr_type != ADDR_LE_DEV_RANDOM)
++		if (hdev->adv_addr_type != ADDR_LE_DEV_RANDOM ||
++		    bacmp(&conn->resp_addr, BDADDR_ANY))
+ 			return;
+ 
+ 		if (!ev->handle) {
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index b069f640394d0..477519ab63b83 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -2053,8 +2053,6 @@ int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
+ 	 * current RPA has expired then generate a new one.
+ 	 */
+ 	if (use_rpa) {
+-		int to;
+-
+ 		/* If Controller supports LL Privacy use own address type is
+ 		 * 0x03
+ 		 */
+@@ -2065,14 +2063,10 @@ int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
+ 			*own_addr_type = ADDR_LE_DEV_RANDOM;
+ 
+ 		if (adv_instance) {
+-			if (!adv_instance->rpa_expired &&
+-			    !bacmp(&adv_instance->random_addr, &hdev->rpa))
++			if (adv_rpa_valid(adv_instance))
+ 				return 0;
+-
+-			adv_instance->rpa_expired = false;
+ 		} else {
+-			if (!hci_dev_test_and_clear_flag(hdev, HCI_RPA_EXPIRED) &&
+-			    !bacmp(&hdev->random_addr, &hdev->rpa))
++			if (rpa_valid(hdev))
+ 				return 0;
+ 		}
+ 
+@@ -2084,14 +2078,6 @@ int hci_get_random_address(struct hci_dev *hdev, bool require_privacy,
+ 
+ 		bacpy(rand_addr, &hdev->rpa);
+ 
+-		to = msecs_to_jiffies(hdev->rpa_timeout * 1000);
+-		if (adv_instance)
+-			queue_delayed_work(hdev->workqueue,
+-					   &adv_instance->rpa_expired_cb, to);
+-		else
+-			queue_delayed_work(hdev->workqueue,
+-					   &hdev->rpa_expired, to);
+-
+ 		return 0;
+ 	}
+ 
+@@ -2134,6 +2120,30 @@ void __hci_req_clear_ext_adv_sets(struct hci_request *req)
+ 	hci_req_add(req, HCI_OP_LE_CLEAR_ADV_SETS, 0, NULL);
+ }
+ 
++static void set_random_addr(struct hci_request *req, bdaddr_t *rpa)
++{
++	struct hci_dev *hdev = req->hdev;
++
++	/* If we're advertising or initiating an LE connection we can't
++	 * go ahead and change the random address at this time. This is
++	 * because the eventual initiator address used for the
++	 * subsequently created connection will be undefined (some
++	 * controllers use the new address and others the one we had
++	 * when the operation started).
++	 *
++	 * In this kind of scenario skip the update and let the random
++	 * address be updated at the next cycle.
++	 */
++	if (hci_dev_test_flag(hdev, HCI_LE_ADV) ||
++	    hci_lookup_le_connect(hdev)) {
++		bt_dev_dbg(hdev, "Deferring random address update");
++		hci_dev_set_flag(hdev, HCI_RPA_EXPIRED);
++		return;
++	}
++
++	hci_req_add(req, HCI_OP_LE_SET_RANDOM_ADDR, 6, rpa);
++}
++
+ int __hci_req_setup_ext_adv_instance(struct hci_request *req, u8 instance)
+ {
+ 	struct hci_cp_le_set_ext_adv_params cp;
+@@ -2236,6 +2246,13 @@ int __hci_req_setup_ext_adv_instance(struct hci_request *req, u8 instance)
+ 		} else {
+ 			if (!bacmp(&random_addr, &hdev->random_addr))
+ 				return 0;
++			/* Instance 0x00 doesn't have an adv_info; instead it
++			 * uses hdev->random_addr to track its address, so
++			 * whenever that needs to be updated this also sets the
++			 * random address, since hdev->random_addr is shared
++			 * with the scan state machine.
++			 */
++			set_random_addr(req, &random_addr);
+ 		}
+ 
+ 		memset(&cp, 0, sizeof(cp));
+@@ -2493,30 +2510,6 @@ void hci_req_clear_adv_instance(struct hci_dev *hdev, struct sock *sk,
+ 						false);
+ }
+ 
+-static void set_random_addr(struct hci_request *req, bdaddr_t *rpa)
+-{
+-	struct hci_dev *hdev = req->hdev;
+-
+-	/* If we're advertising or initiating an LE connection we can't
+-	 * go ahead and change the random address at this time. This is
+-	 * because the eventual initiator address used for the
+-	 * subsequently created connection will be undefined (some
+-	 * controllers use the new address and others the one we had
+-	 * when the operation started).
+-	 *
+-	 * In this kind of scenario skip the update and let the random
+-	 * address be updated at the next cycle.
+-	 */
+-	if (hci_dev_test_flag(hdev, HCI_LE_ADV) ||
+-	    hci_lookup_le_connect(hdev)) {
+-		bt_dev_dbg(hdev, "Deferring random address update");
+-		hci_dev_set_flag(hdev, HCI_RPA_EXPIRED);
+-		return;
+-	}
+-
+-	hci_req_add(req, HCI_OP_LE_SET_RANDOM_ADDR, 6, rpa);
+-}
+-
+ int hci_update_random_address(struct hci_request *req, bool require_privacy,
+ 			      bool use_rpa, u8 *own_addr_type)
+ {
+@@ -2528,8 +2521,6 @@ int hci_update_random_address(struct hci_request *req, bool require_privacy,
+ 	 * the current RPA in use, then generate a new one.
+ 	 */
+ 	if (use_rpa) {
+-		int to;
+-
+ 		/* If Controller supports LL Privacy use own address type is
+ 		 * 0x03
+ 		 */
+@@ -2539,8 +2530,7 @@ int hci_update_random_address(struct hci_request *req, bool require_privacy,
+ 		else
+ 			*own_addr_type = ADDR_LE_DEV_RANDOM;
+ 
+-		if (!hci_dev_test_and_clear_flag(hdev, HCI_RPA_EXPIRED) &&
+-		    !bacmp(&hdev->random_addr, &hdev->rpa))
++		if (rpa_valid(hdev))
+ 			return 0;
+ 
+ 		err = smp_generate_rpa(hdev, hdev->irk, &hdev->rpa);
+@@ -2551,9 +2541,6 @@ int hci_update_random_address(struct hci_request *req, bool require_privacy,
+ 
+ 		set_random_addr(req, &hdev->rpa);
+ 
+-		to = msecs_to_jiffies(hdev->rpa_timeout * 1000);
+-		queue_delayed_work(hdev->workqueue, &hdev->rpa_expired, to);
+-
+ 		return 0;
+ 	}
+ 
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 9769a7ceb6898..bd0d616dbc372 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -48,6 +48,8 @@ struct sco_conn {
+ 	spinlock_t	lock;
+ 	struct sock	*sk;
+ 
++	struct delayed_work	timeout_work;
++
+ 	unsigned int    mtu;
+ };
+ 
+@@ -74,9 +76,20 @@ struct sco_pinfo {
+ #define SCO_CONN_TIMEOUT	(HZ * 40)
+ #define SCO_DISCONN_TIMEOUT	(HZ * 2)
+ 
+-static void sco_sock_timeout(struct timer_list *t)
++static void sco_sock_timeout(struct work_struct *work)
+ {
+-	struct sock *sk = from_timer(sk, t, sk_timer);
++	struct sco_conn *conn = container_of(work, struct sco_conn,
++					     timeout_work.work);
++	struct sock *sk;
++
++	sco_conn_lock(conn);
++	sk = conn->sk;
++	if (sk)
++		sock_hold(sk);
++	sco_conn_unlock(conn);
++
++	if (!sk)
++		return;
+ 
+ 	BT_DBG("sock %p state %d", sk, sk->sk_state);
+ 
+@@ -90,14 +103,21 @@ static void sco_sock_timeout(struct timer_list *t)
+ 
+ static void sco_sock_set_timer(struct sock *sk, long timeout)
+ {
++	if (!sco_pi(sk)->conn)
++		return;
++
+ 	BT_DBG("sock %p state %d timeout %ld", sk, sk->sk_state, timeout);
+-	sk_reset_timer(sk, &sk->sk_timer, jiffies + timeout);
++	cancel_delayed_work(&sco_pi(sk)->conn->timeout_work);
++	schedule_delayed_work(&sco_pi(sk)->conn->timeout_work, timeout);
+ }
+ 
+ static void sco_sock_clear_timer(struct sock *sk)
+ {
++	if (!sco_pi(sk)->conn)
++		return;
++
+ 	BT_DBG("sock %p state %d", sk, sk->sk_state);
+-	sk_stop_timer(sk, &sk->sk_timer);
++	cancel_delayed_work(&sco_pi(sk)->conn->timeout_work);
+ }
+ 
+ /* ---- SCO connections ---- */
+@@ -177,6 +197,9 @@ static void sco_conn_del(struct hci_conn *hcon, int err)
+ 		sco_chan_del(sk, err);
+ 		bh_unlock_sock(sk);
+ 		sock_put(sk);
++
++		/* Ensure no more work items will run before freeing conn. */
++		cancel_delayed_work_sync(&conn->timeout_work);
+ 	}
+ 
+ 	hcon->sco_data = NULL;
+@@ -191,6 +214,8 @@ static void __sco_chan_add(struct sco_conn *conn, struct sock *sk,
+ 	sco_pi(sk)->conn = conn;
+ 	conn->sk = sk;
+ 
++	INIT_DELAYED_WORK(&conn->timeout_work, sco_sock_timeout);
++
+ 	if (parent)
+ 		bt_accept_enqueue(parent, sk, true);
+ }
+@@ -210,44 +235,32 @@ static int sco_chan_add(struct sco_conn *conn, struct sock *sk,
+ 	return err;
+ }
+ 
+-static int sco_connect(struct sock *sk)
++static int sco_connect(struct hci_dev *hdev, struct sock *sk)
+ {
+ 	struct sco_conn *conn;
+ 	struct hci_conn *hcon;
+-	struct hci_dev  *hdev;
+ 	int err, type;
+ 
+ 	BT_DBG("%pMR -> %pMR", &sco_pi(sk)->src, &sco_pi(sk)->dst);
+ 
+-	hdev = hci_get_route(&sco_pi(sk)->dst, &sco_pi(sk)->src, BDADDR_BREDR);
+-	if (!hdev)
+-		return -EHOSTUNREACH;
+-
+-	hci_dev_lock(hdev);
+-
+ 	if (lmp_esco_capable(hdev) && !disable_esco)
+ 		type = ESCO_LINK;
+ 	else
+ 		type = SCO_LINK;
+ 
+ 	if (sco_pi(sk)->setting == BT_VOICE_TRANSPARENT &&
+-	    (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev))) {
+-		err = -EOPNOTSUPP;
+-		goto done;
+-	}
++	    (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev)))
++		return -EOPNOTSUPP;
+ 
+ 	hcon = hci_connect_sco(hdev, type, &sco_pi(sk)->dst,
+ 			       sco_pi(sk)->setting);
+-	if (IS_ERR(hcon)) {
+-		err = PTR_ERR(hcon);
+-		goto done;
+-	}
++	if (IS_ERR(hcon))
++		return PTR_ERR(hcon);
+ 
+ 	conn = sco_conn_add(hcon);
+ 	if (!conn) {
+ 		hci_conn_drop(hcon);
+-		err = -ENOMEM;
+-		goto done;
++		return -ENOMEM;
+ 	}
+ 
+ 	/* Update source addr of the socket */
+@@ -255,7 +268,7 @@ static int sco_connect(struct sock *sk)
+ 
+ 	err = sco_chan_add(conn, sk, NULL);
+ 	if (err)
+-		goto done;
++		return err;
+ 
+ 	if (hcon->state == BT_CONNECTED) {
+ 		sco_sock_clear_timer(sk);
+@@ -265,9 +278,6 @@ static int sco_connect(struct sock *sk)
+ 		sco_sock_set_timer(sk, sk->sk_sndtimeo);
+ 	}
+ 
+-done:
+-	hci_dev_unlock(hdev);
+-	hci_dev_put(hdev);
+ 	return err;
+ }
+ 
+@@ -496,8 +506,6 @@ static struct sock *sco_sock_alloc(struct net *net, struct socket *sock,
+ 
+ 	sco_pi(sk)->setting = BT_VOICE_CVSD_16BIT;
+ 
+-	timer_setup(&sk->sk_timer, sco_sock_timeout, 0);
+-
+ 	bt_sock_link(&sco_sk_list, sk);
+ 	return sk;
+ }
+@@ -562,6 +570,7 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
+ {
+ 	struct sockaddr_sco *sa = (struct sockaddr_sco *) addr;
+ 	struct sock *sk = sock->sk;
++	struct hci_dev  *hdev;
+ 	int err;
+ 
+ 	BT_DBG("sk %p", sk);
+@@ -576,12 +585,19 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
+ 	if (sk->sk_type != SOCK_SEQPACKET)
+ 		return -EINVAL;
+ 
++	hdev = hci_get_route(&sa->sco_bdaddr, &sco_pi(sk)->src, BDADDR_BREDR);
++	if (!hdev)
++		return -EHOSTUNREACH;
++	hci_dev_lock(hdev);
++
+ 	lock_sock(sk);
+ 
+ 	/* Set destination address and psm */
+ 	bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr);
+ 
+-	err = sco_connect(sk);
++	err = sco_connect(hdev, sk);
++	hci_dev_unlock(hdev);
++	hci_dev_put(hdev);
+ 	if (err)
+ 		goto done;
+ 
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 3ed7c98a98e1d..6076c75706d00 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1056,8 +1056,10 @@ proto_again:
+ 							      FLOW_DISSECTOR_KEY_IPV4_ADDRS,
+ 							      target_container);
+ 
+-			memcpy(&key_addrs->v4addrs, &iph->saddr,
+-			       sizeof(key_addrs->v4addrs));
++			memcpy(&key_addrs->v4addrs.src, &iph->saddr,
++			       sizeof(key_addrs->v4addrs.src));
++			memcpy(&key_addrs->v4addrs.dst, &iph->daddr,
++			       sizeof(key_addrs->v4addrs.dst));
+ 			key_control->addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
+ 		}
+ 
+@@ -1101,8 +1103,10 @@ proto_again:
+ 							      FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+ 							      target_container);
+ 
+-			memcpy(&key_addrs->v6addrs, &iph->saddr,
+-			       sizeof(key_addrs->v6addrs));
++			memcpy(&key_addrs->v6addrs.src, &iph->saddr,
++			       sizeof(key_addrs->v6addrs.src));
++			memcpy(&key_addrs->v6addrs.dst, &iph->daddr,
++			       sizeof(key_addrs->v6addrs.dst));
+ 			key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+ 		}
+ 
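The flow_dissector change above replaces a single memcpy() that started at &iph->saddr and spanned both addresses, which was only correct while saddr and daddr stayed adjacent at the end of struct iphdr, and which fortified memcpy() bounds checking flags as an over-read of the saddr member. Copying each field separately encodes no layout assumption. A self-contained sketch of the two styles (hypothetical struct names):

	#include <stdint.h>
	#include <string.h>

	struct hdr  { uint32_t saddr, daddr; };	/* adjacent, like iphdr */
	struct pair { uint32_t src, dst; };

	static void copy_addrs(struct pair *p, const struct hdr *h)
	{
		/*
		 * Old style: one copy sized for the whole pair but starting
		 * at a single member -- it silently depends on saddr and
		 * daddr being adjacent, and reads past the saddr field:
		 *	memcpy(p, &h->saddr, sizeof(*p));
		 */

		/* new style: one copy per field, no layout assumption */
		memcpy(&p->src, &h->saddr, sizeof(p->src));
		memcpy(&p->dst, &h->daddr, sizeof(p->dst));
	}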
+diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
+index 715b67f6c62f3..e3f0d59068117 100644
+--- a/net/core/flow_offload.c
++++ b/net/core/flow_offload.c
+@@ -321,6 +321,7 @@ EXPORT_SYMBOL(flow_block_cb_setup_simple);
+ static DEFINE_MUTEX(flow_indr_block_lock);
+ static LIST_HEAD(flow_block_indr_list);
+ static LIST_HEAD(flow_block_indr_dev_list);
++static LIST_HEAD(flow_indir_dev_list);
+ 
+ struct flow_indr_dev {
+ 	struct list_head		list;
+@@ -346,6 +347,33 @@ static struct flow_indr_dev *flow_indr_dev_alloc(flow_indr_block_bind_cb_t *cb,
+ 	return indr_dev;
+ }
+ 
++struct flow_indir_dev_info {
++	void *data;
++	struct net_device *dev;
++	struct Qdisc *sch;
++	enum tc_setup_type type;
++	void (*cleanup)(struct flow_block_cb *block_cb);
++	struct list_head list;
++	enum flow_block_command command;
++	enum flow_block_binder_type binder_type;
++	struct list_head *cb_list;
++};
++
++static void existing_qdiscs_register(flow_indr_block_bind_cb_t *cb, void *cb_priv)
++{
++	struct flow_block_offload bo;
++	struct flow_indir_dev_info *cur;
++
++	list_for_each_entry(cur, &flow_indir_dev_list, list) {
++		memset(&bo, 0, sizeof(bo));
++		bo.command = cur->command;
++		bo.binder_type = cur->binder_type;
++		INIT_LIST_HEAD(&bo.cb_list);
++		cb(cur->dev, cur->sch, cb_priv, cur->type, &bo, cur->data, cur->cleanup);
++		list_splice(&bo.cb_list, cur->cb_list);
++	}
++}
++
+ int flow_indr_dev_register(flow_indr_block_bind_cb_t *cb, void *cb_priv)
+ {
+ 	struct flow_indr_dev *indr_dev;
+@@ -367,6 +395,7 @@ int flow_indr_dev_register(flow_indr_block_bind_cb_t *cb, void *cb_priv)
+ 	}
+ 
+ 	list_add(&indr_dev->list, &flow_block_indr_dev_list);
++	existing_qdiscs_register(cb, cb_priv);
+ 	mutex_unlock(&flow_indr_block_lock);
+ 
+ 	return 0;
+@@ -463,7 +492,59 @@ out:
+ }
+ EXPORT_SYMBOL(flow_indr_block_cb_alloc);
+ 
+-int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch,
++static struct flow_indir_dev_info *find_indir_dev(void *data)
++{
++	struct flow_indir_dev_info *cur;
++
++	list_for_each_entry(cur, &flow_indir_dev_list, list) {
++		if (cur->data == data)
++			return cur;
++	}
++	return NULL;
++}
++
++static int indir_dev_add(void *data, struct net_device *dev, struct Qdisc *sch,
++			 enum tc_setup_type type, void (*cleanup)(struct flow_block_cb *block_cb),
++			 struct flow_block_offload *bo)
++{
++	struct flow_indir_dev_info *info;
++
++	info = find_indir_dev(data);
++	if (info)
++		return -EEXIST;
++
++	info = kzalloc(sizeof(*info), GFP_KERNEL);
++	if (!info)
++		return -ENOMEM;
++
++	info->data = data;
++	info->dev = dev;
++	info->sch = sch;
++	info->type = type;
++	info->cleanup = cleanup;
++	info->command = bo->command;
++	info->binder_type = bo->binder_type;
++	info->cb_list = bo->cb_list_head;
++
++	list_add(&info->list, &flow_indir_dev_list);
++	return 0;
++}
++
++static int indir_dev_remove(void *data)
++{
++	struct flow_indir_dev_info *info;
++
++	info = find_indir_dev(data);
++	if (!info)
++		return -ENOENT;
++
++	list_del(&info->list);
++
++	kfree(info);
++	return 0;
++}
++
++int flow_indr_dev_setup_offload(struct net_device *dev,	struct Qdisc *sch,
+ 				enum tc_setup_type type, void *data,
+ 				struct flow_block_offload *bo,
+ 				void (*cleanup)(struct flow_block_cb *block_cb))
+@@ -471,6 +552,12 @@ int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch,
+ 	struct flow_indr_dev *this;
+ 
+ 	mutex_lock(&flow_indr_block_lock);
++
++	if (bo->command == FLOW_BLOCK_BIND)
++		indir_dev_add(data, dev, sch, type, cleanup, bo);
++	else if (bo->command == FLOW_BLOCK_UNBIND)
++		indir_dev_remove(data);
++
+ 	list_for_each_entry(this, &flow_block_indr_dev_list, list)
+ 		this->cb(dev, sch, this->cb_priv, type, bo, data, cleanup);
+ 
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index baa5d10043cb0..6134b180f59f8 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -7,6 +7,7 @@
+  * the information ethtool needs.
+  */
+ 
++#include <linux/compat.h>
+ #include <linux/module.h>
+ #include <linux/types.h>
+ #include <linux/capability.h>
+@@ -807,6 +808,120 @@ out:
+ 	return ret;
+ }
+ 
++static noinline_for_stack int
++ethtool_rxnfc_copy_from_compat(struct ethtool_rxnfc *rxnfc,
++			       const struct compat_ethtool_rxnfc __user *useraddr,
++			       size_t size)
++{
++	struct compat_ethtool_rxnfc crxnfc = {};
++
++	/* We expect there to be holes between fs.m_ext and
++	 * fs.ring_cookie and at the end of fs, but nowhere else.
++	 * On non-x86, no conversion should be needed.
++	 */
++	BUILD_BUG_ON(!IS_ENABLED(CONFIG_X86_64) &&
++		     sizeof(struct compat_ethtool_rxnfc) !=
++		     sizeof(struct ethtool_rxnfc));
++	BUILD_BUG_ON(offsetof(struct compat_ethtool_rxnfc, fs.m_ext) +
++		     sizeof(useraddr->fs.m_ext) !=
++		     offsetof(struct ethtool_rxnfc, fs.m_ext) +
++		     sizeof(rxnfc->fs.m_ext));
++	BUILD_BUG_ON(offsetof(struct compat_ethtool_rxnfc, fs.location) -
++		     offsetof(struct compat_ethtool_rxnfc, fs.ring_cookie) !=
++		     offsetof(struct ethtool_rxnfc, fs.location) -
++		     offsetof(struct ethtool_rxnfc, fs.ring_cookie));
++
++	if (copy_from_user(&crxnfc, useraddr, min(size, sizeof(crxnfc))))
++		return -EFAULT;
++
++	*rxnfc = (struct ethtool_rxnfc) {
++		.cmd		= crxnfc.cmd,
++		.flow_type	= crxnfc.flow_type,
++		.data		= crxnfc.data,
++		.fs		= {
++			.flow_type	= crxnfc.fs.flow_type,
++			.h_u		= crxnfc.fs.h_u,
++			.h_ext		= crxnfc.fs.h_ext,
++			.m_u		= crxnfc.fs.m_u,
++			.m_ext		= crxnfc.fs.m_ext,
++			.ring_cookie	= crxnfc.fs.ring_cookie,
++			.location	= crxnfc.fs.location,
++		},
++		.rule_cnt	= crxnfc.rule_cnt,
++	};
++
++	return 0;
++}
++
++static int ethtool_rxnfc_copy_from_user(struct ethtool_rxnfc *rxnfc,
++					const void __user *useraddr,
++					size_t size)
++{
++	if (compat_need_64bit_alignment_fixup())
++		return ethtool_rxnfc_copy_from_compat(rxnfc, useraddr, size);
++
++	if (copy_from_user(rxnfc, useraddr, size))
++		return -EFAULT;
++
++	return 0;
++}
++
++static int ethtool_rxnfc_copy_to_compat(void __user *useraddr,
++					const struct ethtool_rxnfc *rxnfc,
++					size_t size, const u32 *rule_buf)
++{
++	struct compat_ethtool_rxnfc crxnfc;
++
++	memset(&crxnfc, 0, sizeof(crxnfc));
++	crxnfc = (struct compat_ethtool_rxnfc) {
++		.cmd		= rxnfc->cmd,
++		.flow_type	= rxnfc->flow_type,
++		.data		= rxnfc->data,
++		.fs		= {
++			.flow_type	= rxnfc->fs.flow_type,
++			.h_u		= rxnfc->fs.h_u,
++			.h_ext		= rxnfc->fs.h_ext,
++			.m_u		= rxnfc->fs.m_u,
++			.m_ext		= rxnfc->fs.m_ext,
++			.ring_cookie	= rxnfc->fs.ring_cookie,
++			.location	= rxnfc->fs.location,
++		},
++		.rule_cnt	= rxnfc->rule_cnt,
++	};
++
++	if (copy_to_user(useraddr, &crxnfc, min(size, sizeof(crxnfc))))
++		return -EFAULT;
++
++	return 0;
++}
++
++static int ethtool_rxnfc_copy_to_user(void __user *useraddr,
++				      const struct ethtool_rxnfc *rxnfc,
++				      size_t size, const u32 *rule_buf)
++{
++	int ret;
++
++	if (compat_need_64bit_alignment_fixup()) {
++		ret = ethtool_rxnfc_copy_to_compat(useraddr, rxnfc, size,
++						   rule_buf);
++		useraddr += offsetof(struct compat_ethtool_rxnfc, rule_locs);
++	} else {
++		ret = copy_to_user(useraddr, &rxnfc, size);
++		useraddr += offsetof(struct ethtool_rxnfc, rule_locs);
++	}
++
++	if (ret)
++		return -EFAULT;
++
++	if (rule_buf) {
++		if (copy_to_user(useraddr, rule_buf,
++				 rxnfc->rule_cnt * sizeof(u32)))
++			return -EFAULT;
++	}
++
++	return 0;
++}
++
+ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ 						u32 cmd, void __user *useraddr)
+ {
+@@ -825,7 +940,7 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ 		info_size = (offsetof(struct ethtool_rxnfc, data) +
+ 			     sizeof(info.data));
+ 
+-	if (copy_from_user(&info, useraddr, info_size))
++	if (ethtool_rxnfc_copy_from_user(&info, useraddr, info_size))
+ 		return -EFAULT;
+ 
+ 	rc = dev->ethtool_ops->set_rxnfc(dev, &info);
+@@ -833,7 +948,7 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ 		return rc;
+ 
+ 	if (cmd == ETHTOOL_SRXCLSRLINS &&
+-	    copy_to_user(useraddr, &info, info_size))
++	    ethtool_rxnfc_copy_to_user(useraddr, &info, info_size, NULL))
+ 		return -EFAULT;
+ 
+ 	return 0;
+@@ -859,7 +974,7 @@ static noinline_for_stack int ethtool_get_rxnfc(struct net_device *dev,
+ 		info_size = (offsetof(struct ethtool_rxnfc, data) +
+ 			     sizeof(info.data));
+ 
+-	if (copy_from_user(&info, useraddr, info_size))
++	if (ethtool_rxnfc_copy_from_user(&info, useraddr, info_size))
+ 		return -EFAULT;
+ 
+ 	/* If FLOW_RSS was requested then user-space must be using the
+@@ -867,7 +982,7 @@ static noinline_for_stack int ethtool_get_rxnfc(struct net_device *dev,
+ 	 */
+ 	if (cmd == ETHTOOL_GRXFH && info.flow_type & FLOW_RSS) {
+ 		info_size = sizeof(info);
+-		if (copy_from_user(&info, useraddr, info_size))
++		if (ethtool_rxnfc_copy_from_user(&info, useraddr, info_size))
+ 			return -EFAULT;
+ 		/* Since malicious users may modify the original data,
+ 		 * we need to check whether FLOW_RSS is still requested.
+@@ -893,18 +1008,7 @@ static noinline_for_stack int ethtool_get_rxnfc(struct net_device *dev,
+ 	if (ret < 0)
+ 		goto err_out;
+ 
+-	ret = -EFAULT;
+-	if (copy_to_user(useraddr, &info, info_size))
+-		goto err_out;
+-
+-	if (rule_buf) {
+-		useraddr += offsetof(struct ethtool_rxnfc, rule_locs);
+-		if (copy_to_user(useraddr, rule_buf,
+-				 info.rule_cnt * sizeof(u32)))
+-			goto err_out;
+-	}
+-	ret = 0;
+-
++	ret = ethtool_rxnfc_copy_to_user(useraddr, &info, info_size, rule_buf);
+ err_out:
+ 	kfree(rule_buf);
+ 
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 8d8a8da3ae7e0..a202dcec0dc27 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -446,8 +446,9 @@ static void ip_copy_addrs(struct iphdr *iph, const struct flowi4 *fl4)
+ {
+ 	BUILD_BUG_ON(offsetof(typeof(*fl4), daddr) !=
+ 		     offsetof(typeof(*fl4), saddr) + sizeof(fl4->saddr));
+-	memcpy(&iph->saddr, &fl4->saddr,
+-	       sizeof(fl4->saddr) + sizeof(fl4->daddr));
++
++	iph->saddr = fl4->saddr;
++	iph->daddr = fl4->daddr;
+ }
+ 
+ /* Note: skb->sk can be different from sk, in case of tunnels */
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index d49709ba8e165..1071119843843 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -379,8 +379,7 @@ struct sock *tcp_try_fastopen(struct sock *sk, struct sk_buff *skb,
+ 		return NULL;
+ 	}
+ 
+-	if (syn_data &&
+-	    tcp_fastopen_no_cookie(sk, dst, TFO_SERVER_COOKIE_NOT_REQD))
++	if (tcp_fastopen_no_cookie(sk, dst, TFO_SERVER_COOKIE_NOT_REQD))
+ 		goto fastopen;
+ 
+ 	if (foc->len == 0) {
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 137fa4c50e07a..8df7ab34911c7 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -1985,9 +1985,16 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
+ 
+ 		netdev_set_default_ethtool_ops(ndev, &ieee80211_ethtool_ops);
+ 
+-		/* MTU range: 256 - 2304 */
++		/* MTU range is normally 256 - 2304, where the upper limit is
++		 * the maximum MSDU size. Monitor interfaces send and receive
++		 * MPDU and A-MSDU frames which may be much larger so we do
++		 * not impose an upper limit in that case.
++		 */
+ 		ndev->min_mtu = 256;
+-		ndev->max_mtu = local->hw.max_mtu;
++		if (type == NL80211_IFTYPE_MONITOR)
++			ndev->max_mtu = 0;
++		else
++			ndev->max_mtu = local->hw.max_mtu;
+ 
+ 		ret = cfg80211_register_netdevice(ndev);
+ 		if (ret) {
+diff --git a/net/netfilter/nf_flow_table_offload.c b/net/netfilter/nf_flow_table_offload.c
+index 528b2f1726844..0587f071e5040 100644
+--- a/net/netfilter/nf_flow_table_offload.c
++++ b/net/netfilter/nf_flow_table_offload.c
+@@ -1097,6 +1097,7 @@ static void nf_flow_table_block_offload_init(struct flow_block_offload *bo,
+ 	bo->command	= cmd;
+ 	bo->binder_type	= FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
+ 	bo->extack	= extack;
++	bo->cb_list_head = &flowtable->flow_block.cb_list;
+ 	INIT_LIST_HEAD(&bo->cb_list);
+ }
+ 
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index b58d73a965232..9656c16462222 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -353,6 +353,7 @@ static void nft_flow_block_offload_init(struct flow_block_offload *bo,
+ 	bo->command	= cmd;
+ 	bo->binder_type	= FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
+ 	bo->extack	= extack;
++	bo->cb_list_head = &basechain->flow_block.cb_list;
+ 	INIT_LIST_HEAD(&bo->cb_list);
+ }
+ 
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index 5415ab14400d7..31e6da30da5f0 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -680,14 +680,12 @@ static int nfnl_compat_get_rcu(struct sk_buff *skb,
+ 		goto out_put;
+ 	}
+ 
+-	ret = netlink_unicast(info->sk, skb2, NETLINK_CB(skb).portid,
+-			      MSG_DONTWAIT);
+-	if (ret > 0)
+-		ret = 0;
++	ret = nfnetlink_unicast(skb2, info->net, NETLINK_CB(skb).portid);
+ out_put:
+ 	rcu_read_lock();
+ 	module_put(THIS_MODULE);
+-	return ret == -EAGAIN ? -ENOBUFS : ret;
++
++	return ret;
+ }
+ 
+ static const struct nla_policy nfnl_compat_policy_get[NFTA_COMPAT_MAX+1] = {
+diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c
+index 50f40943c8153..f3f1df1b0f8e2 100644
+--- a/net/netlabel/netlabel_cipso_v4.c
++++ b/net/netlabel/netlabel_cipso_v4.c
+@@ -144,8 +144,8 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
+ 		return -ENOMEM;
+ 	doi_def->map.std = kzalloc(sizeof(*doi_def->map.std), GFP_KERNEL);
+ 	if (doi_def->map.std == NULL) {
+-		ret_val = -ENOMEM;
+-		goto add_std_failure;
++		kfree(doi_def);
++		return -ENOMEM;
+ 	}
+ 	doi_def->type = CIPSO_V4_MAP_TRANS;
+ 
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 6133e412b948c..b9ed16ff09c12 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -2545,13 +2545,15 @@ int nlmsg_notify(struct sock *sk, struct sk_buff *skb, u32 portid,
+ 		/* errors reported via destination sk->sk_err, but propagate
+ 		 * delivery errors if NETLINK_BROADCAST_ERROR flag is set */
+ 		err = nlmsg_multicast(sk, skb, exclude_portid, group, flags);
++		if (err == -ESRCH)
++			err = 0;
+ 	}
+ 
+ 	if (report) {
+ 		int err2;
+ 
+ 		err2 = nlmsg_unicast(sk, skb, portid);
+-		if (!err || err == -ESRCH)
++		if (!err)
+ 			err = err2;
+ 	}
+ 
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index e3e79e9bd7067..9b276d14be4c4 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -634,6 +634,7 @@ static void tcf_block_offload_init(struct flow_block_offload *bo,
+ 	bo->block_shared = shared;
+ 	bo->extack = extack;
+ 	bo->sch = sch;
++	bo->cb_list_head = &flow_block->cb_list;
+ 	INIT_LIST_HEAD(&bo->cb_list);
+ }
+ 
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 5c91df52b8c2c..b0d6385fff9e3 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -1547,7 +1547,9 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
+ 	taprio_set_picos_per_byte(dev, q);
+ 
+ 	if (mqprio) {
+-		netdev_set_num_tc(dev, mqprio->num_tc);
++		err = netdev_set_num_tc(dev, mqprio->num_tc);
++		if (err)
++			goto free_sched;
+ 		for (i = 0; i < mqprio->num_tc; i++)
+ 			netdev_set_tc_queue(dev, i,
+ 					    mqprio->count[i],
+diff --git a/net/socket.c b/net/socket.c
+index 877f1fb61719a..caac290ba7ec1 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -3099,128 +3099,6 @@ static int compat_dev_ifconf(struct net *net, struct compat_ifconf __user *uifc3
+ 	return 0;
+ }
+ 
+-static int ethtool_ioctl(struct net *net, struct compat_ifreq __user *ifr32)
+-{
+-	struct compat_ethtool_rxnfc __user *compat_rxnfc;
+-	bool convert_in = false, convert_out = false;
+-	size_t buf_size = 0;
+-	struct ethtool_rxnfc __user *rxnfc = NULL;
+-	struct ifreq ifr;
+-	u32 rule_cnt = 0, actual_rule_cnt;
+-	u32 ethcmd;
+-	u32 data;
+-	int ret;
+-
+-	if (get_user(data, &ifr32->ifr_ifru.ifru_data))
+-		return -EFAULT;
+-
+-	compat_rxnfc = compat_ptr(data);
+-
+-	if (get_user(ethcmd, &compat_rxnfc->cmd))
+-		return -EFAULT;
+-
+-	/* Most ethtool structures are defined without padding.
+-	 * Unfortunately struct ethtool_rxnfc is an exception.
+-	 */
+-	switch (ethcmd) {
+-	default:
+-		break;
+-	case ETHTOOL_GRXCLSRLALL:
+-		/* Buffer size is variable */
+-		if (get_user(rule_cnt, &compat_rxnfc->rule_cnt))
+-			return -EFAULT;
+-		if (rule_cnt > KMALLOC_MAX_SIZE / sizeof(u32))
+-			return -ENOMEM;
+-		buf_size += rule_cnt * sizeof(u32);
+-		fallthrough;
+-	case ETHTOOL_GRXRINGS:
+-	case ETHTOOL_GRXCLSRLCNT:
+-	case ETHTOOL_GRXCLSRULE:
+-	case ETHTOOL_SRXCLSRLINS:
+-		convert_out = true;
+-		fallthrough;
+-	case ETHTOOL_SRXCLSRLDEL:
+-		buf_size += sizeof(struct ethtool_rxnfc);
+-		convert_in = true;
+-		rxnfc = compat_alloc_user_space(buf_size);
+-		break;
+-	}
+-
+-	if (copy_from_user(&ifr.ifr_name, &ifr32->ifr_name, IFNAMSIZ))
+-		return -EFAULT;
+-
+-	ifr.ifr_data = convert_in ? rxnfc : (void __user *)compat_rxnfc;
+-
+-	if (convert_in) {
+-		/* We expect there to be holes between fs.m_ext and
+-		 * fs.ring_cookie and at the end of fs, but nowhere else.
+-		 */
+-		BUILD_BUG_ON(offsetof(struct compat_ethtool_rxnfc, fs.m_ext) +
+-			     sizeof(compat_rxnfc->fs.m_ext) !=
+-			     offsetof(struct ethtool_rxnfc, fs.m_ext) +
+-			     sizeof(rxnfc->fs.m_ext));
+-		BUILD_BUG_ON(
+-			offsetof(struct compat_ethtool_rxnfc, fs.location) -
+-			offsetof(struct compat_ethtool_rxnfc, fs.ring_cookie) !=
+-			offsetof(struct ethtool_rxnfc, fs.location) -
+-			offsetof(struct ethtool_rxnfc, fs.ring_cookie));
+-
+-		if (copy_in_user(rxnfc, compat_rxnfc,
+-				 (void __user *)(&rxnfc->fs.m_ext + 1) -
+-				 (void __user *)rxnfc) ||
+-		    copy_in_user(&rxnfc->fs.ring_cookie,
+-				 &compat_rxnfc->fs.ring_cookie,
+-				 (void __user *)(&rxnfc->fs.location + 1) -
+-				 (void __user *)&rxnfc->fs.ring_cookie))
+-			return -EFAULT;
+-		if (ethcmd == ETHTOOL_GRXCLSRLALL) {
+-			if (put_user(rule_cnt, &rxnfc->rule_cnt))
+-				return -EFAULT;
+-		} else if (copy_in_user(&rxnfc->rule_cnt,
+-					&compat_rxnfc->rule_cnt,
+-					sizeof(rxnfc->rule_cnt)))
+-			return -EFAULT;
+-	}
+-
+-	ret = dev_ioctl(net, SIOCETHTOOL, &ifr, NULL);
+-	if (ret)
+-		return ret;
+-
+-	if (convert_out) {
+-		if (copy_in_user(compat_rxnfc, rxnfc,
+-				 (const void __user *)(&rxnfc->fs.m_ext + 1) -
+-				 (const void __user *)rxnfc) ||
+-		    copy_in_user(&compat_rxnfc->fs.ring_cookie,
+-				 &rxnfc->fs.ring_cookie,
+-				 (const void __user *)(&rxnfc->fs.location + 1) -
+-				 (const void __user *)&rxnfc->fs.ring_cookie) ||
+-		    copy_in_user(&compat_rxnfc->rule_cnt, &rxnfc->rule_cnt,
+-				 sizeof(rxnfc->rule_cnt)))
+-			return -EFAULT;
+-
+-		if (ethcmd == ETHTOOL_GRXCLSRLALL) {
+-			/* As an optimisation, we only copy the actual
+-			 * number of rules that the underlying
+-			 * function returned.  Since Mallory might
+-			 * change the rule count in user memory, we
+-			 * check that it is less than the rule count
+-			 * originally given (as the user buffer size),
+-			 * which has been range-checked.
+-			 */
+-			if (get_user(actual_rule_cnt, &rxnfc->rule_cnt))
+-				return -EFAULT;
+-			if (actual_rule_cnt < rule_cnt)
+-				rule_cnt = actual_rule_cnt;
+-			if (copy_in_user(&compat_rxnfc->rule_locs[0],
+-					 &rxnfc->rule_locs[0],
+-					 rule_cnt * sizeof(u32)))
+-				return -EFAULT;
+-		}
+-	}
+-
+-	return 0;
+-}
+-
+ static int compat_siocwandev(struct net *net, struct compat_ifreq __user *uifr32)
+ {
+ 	compat_uptr_t uptr32;
+@@ -3377,8 +3255,6 @@ static int compat_sock_ioctl_trans(struct file *file, struct socket *sock,
+ 		return old_bridge_ioctl(argp);
+ 	case SIOCGIFCONF:
+ 		return compat_dev_ifconf(net, argp);
+-	case SIOCETHTOOL:
+-		return ethtool_ioctl(net, argp);
+ 	case SIOCWANDEV:
+ 		return compat_siocwandev(net, argp);
+ 	case SIOCGIFMAP:
+@@ -3391,6 +3267,7 @@ static int compat_sock_ioctl_trans(struct file *file, struct socket *sock,
+ 		return sock->ops->gettstamp(sock, argp, cmd == SIOCGSTAMP_OLD,
+ 					    !COMPAT_USE_64BIT_TIME);
+ 
++	case SIOCETHTOOL:
+ 	case SIOCBONDSLAVEINFOQUERY:
+ 	case SIOCBONDINFOQUERY:
+ 	case SIOCSHWTSTAMP:
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index 6dff64374bfe1..e22f2d65457da 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -1980,7 +1980,7 @@ gss_svc_init_net(struct net *net)
+ 		goto out2;
+ 	return 0;
+ out2:
+-	destroy_use_gss_proxy_proc_entry(net);
++	rsi_cache_destroy_net(net);
+ out1:
+ 	rsc_cache_destroy_net(net);
+ 	return rv;
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 3509a7f139b98..8d3983c8b4d60 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -774,9 +774,9 @@ void xprt_force_disconnect(struct rpc_xprt *xprt)
+ 	/* Try to schedule an autoclose RPC call */
+ 	if (test_and_set_bit(XPRT_LOCKED, &xprt->state) == 0)
+ 		queue_work(xprtiod_workqueue, &xprt->task_cleanup);
+-	else if (xprt->snd_task)
++	else if (xprt->snd_task && !test_bit(XPRT_SND_IS_COOKIE, &xprt->state))
+ 		rpc_wake_up_queued_task_set_status(&xprt->pending,
+-				xprt->snd_task, -ENOTCONN);
++						   xprt->snd_task, -ENOTCONN);
+ 	spin_unlock(&xprt->transport_lock);
+ }
+ EXPORT_SYMBOL_GPL(xprt_force_disconnect);
+@@ -865,12 +865,14 @@ bool xprt_lock_connect(struct rpc_xprt *xprt,
+ 		goto out;
+ 	if (xprt->snd_task != task)
+ 		goto out;
++	set_bit(XPRT_SND_IS_COOKIE, &xprt->state);
+ 	xprt->snd_task = cookie;
+ 	ret = true;
+ out:
+ 	spin_unlock(&xprt->transport_lock);
+ 	return ret;
+ }
++EXPORT_SYMBOL_GPL(xprt_lock_connect);
+ 
+ void xprt_unlock_connect(struct rpc_xprt *xprt, void *cookie)
+ {
+@@ -880,12 +882,14 @@ void xprt_unlock_connect(struct rpc_xprt *xprt, void *cookie)
+ 	if (!test_bit(XPRT_LOCKED, &xprt->state))
+ 		goto out;
+ 	xprt->snd_task =NULL;
++	clear_bit(XPRT_SND_IS_COOKIE, &xprt->state);
+ 	xprt->ops->release_xprt(xprt, NULL);
+ 	xprt_schedule_autodisconnect(xprt);
+ out:
+ 	spin_unlock(&xprt->transport_lock);
+ 	wake_up_bit(&xprt->state, XPRT_LOCKED);
+ }
++EXPORT_SYMBOL_GPL(xprt_unlock_connect);
+ 
+ /**
+  * xprt_connect - schedule a transport connect operation
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 19a49d26b1e41..d2052f06acfae 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -249,12 +249,9 @@ xprt_rdma_connect_worker(struct work_struct *work)
+ 					   xprt->stat.connect_start;
+ 		xprt_set_connected(xprt);
+ 		rc = -EAGAIN;
+-	} else {
+-		/* Force a call to xprt_rdma_close to clean up */
+-		spin_lock(&xprt->transport_lock);
+-		set_bit(XPRT_CLOSE_WAIT, &xprt->state);
+-		spin_unlock(&xprt->transport_lock);
+-	}
++	} else
++		rpcrdma_xprt_disconnect(r_xprt);
++	xprt_unlock_connect(xprt, r_xprt);
+ 	xprt_wake_pending_tasks(xprt, rc);
+ }
+ 
+@@ -487,6 +484,8 @@ xprt_rdma_connect(struct rpc_xprt *xprt, struct rpc_task *task)
+ 	struct rpcrdma_ep *ep = r_xprt->rx_ep;
+ 	unsigned long delay;
+ 
++	WARN_ON_ONCE(!xprt_lock_connect(xprt, task, r_xprt));
++
+ 	delay = 0;
+ 	if (ep && ep->re_connect_status != 0) {
+ 		delay = xprt_reconnect_delay(xprt);
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 649c23518ec04..5a11e318a0d99 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -1416,11 +1416,6 @@ void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, int needed, bool temp)
+ 
+ 	rc = ib_post_recv(ep->re_id->qp, wr,
+ 			  (const struct ib_recv_wr **)&bad_wr);
+-	if (atomic_dec_return(&ep->re_receiving) > 0)
+-		complete(&ep->re_done);
+-
+-out:
+-	trace_xprtrdma_post_recvs(r_xprt, count, rc);
+ 	if (rc) {
+ 		for (wr = bad_wr; wr;) {
+ 			struct rpcrdma_rep *rep;
+@@ -1431,6 +1426,11 @@ out:
+ 			--count;
+ 		}
+ 	}
++	if (atomic_dec_return(&ep->re_receiving) > 0)
++		complete(&ep->re_done);
++
++out:
++	trace_xprtrdma_post_recvs(r_xprt, count, rc);
+ 	ep->re_receive_count += count;
+ 	return;
+ }
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 3228b7a1836aa..b836b4c322fc8 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1648,6 +1648,13 @@ static int xs_get_srcport(struct sock_xprt *transport)
+ 	return port;
+ }
+ 
++unsigned short get_srcport(struct rpc_xprt *xprt)
++{
++	struct sock_xprt *sock = container_of(xprt, struct sock_xprt, xprt);
++	return xs_sock_getport(sock->sock);
++}
++EXPORT_SYMBOL(get_srcport);
++
+ static unsigned short xs_next_srcport(struct sock_xprt *transport, unsigned short port)
+ {
+ 	if (transport->srcport != 0)
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index a0dce194a404a..5d036b9c15d27 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1905,6 +1905,7 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 	bool connected = !tipc_sk_type_connectionless(sk);
+ 	struct tipc_sock *tsk = tipc_sk(sk);
+ 	int rc, err, hlen, dlen, copy;
++	struct tipc_skb_cb *skb_cb;
+ 	struct sk_buff_head xmitq;
+ 	struct tipc_msg *hdr;
+ 	struct sk_buff *skb;
+@@ -1928,6 +1929,7 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 		if (unlikely(rc))
+ 			goto exit;
+ 		skb = skb_peek(&sk->sk_receive_queue);
++		skb_cb = TIPC_SKB_CB(skb);
+ 		hdr = buf_msg(skb);
+ 		dlen = msg_data_sz(hdr);
+ 		hlen = msg_hdr_sz(hdr);
+@@ -1947,18 +1949,33 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 
+ 	/* Capture data if non-error msg, otherwise just set return value */
+ 	if (likely(!err)) {
+-		copy = min_t(int, dlen, buflen);
+-		if (unlikely(copy != dlen))
+-			m->msg_flags |= MSG_TRUNC;
+-		rc = skb_copy_datagram_msg(skb, hlen, m, copy);
++		int offset = skb_cb->bytes_read;
++
++		copy = min_t(int, dlen - offset, buflen);
++		rc = skb_copy_datagram_msg(skb, hlen + offset, m, copy);
++		if (unlikely(rc))
++			goto exit;
++		if (unlikely(offset + copy < dlen)) {
++			if (flags & MSG_EOR) {
++				if (!(flags & MSG_PEEK))
++					skb_cb->bytes_read = offset + copy;
++			} else {
++				m->msg_flags |= MSG_TRUNC;
++				skb_cb->bytes_read = 0;
++			}
++		} else {
++			if (flags & MSG_EOR)
++				m->msg_flags |= MSG_EOR;
++			skb_cb->bytes_read = 0;
++		}
+ 	} else {
+ 		copy = 0;
+ 		rc = 0;
+-		if (err != TIPC_CONN_SHUTDOWN && connected && !m->msg_control)
++		if (err != TIPC_CONN_SHUTDOWN && connected && !m->msg_control) {
+ 			rc = -ECONNRESET;
++			goto exit;
++		}
+ 	}
+-	if (unlikely(rc))
+-		goto exit;
+ 
+ 	/* Mark message as group event if applicable */
+ 	if (unlikely(grp_evt)) {
+@@ -1981,9 +1998,10 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
+ 		tipc_node_distr_xmit(sock_net(sk), &xmitq);
+ 	}
+ 
+-	tsk_advance_rx_queue(sk);
++	if (!skb_cb->bytes_read)
++		tsk_advance_rx_queue(sk);
+ 
+-	if (likely(!connected))
++	if (likely(!connected) || skb_cb->bytes_read)
+ 		goto exit;
+ 
+ 	/* Send connection flow control advertisement when applicable */
+diff --git a/samples/bpf/test_override_return.sh b/samples/bpf/test_override_return.sh
+index e68b9ee6814b8..35db26f736b9d 100755
+--- a/samples/bpf/test_override_return.sh
++++ b/samples/bpf/test_override_return.sh
+@@ -1,5 +1,6 @@
+ #!/bin/bash
+ 
++rm -r tmpmnt
+ rm -f testfile.img
+ dd if=/dev/zero of=testfile.img bs=1M seek=1000 count=1
+ DEVICE=$(losetup --show -f testfile.img)
+diff --git a/samples/bpf/tracex7_user.c b/samples/bpf/tracex7_user.c
+index fdcd6580dd736..8be7ce18d3ba0 100644
+--- a/samples/bpf/tracex7_user.c
++++ b/samples/bpf/tracex7_user.c
+@@ -14,6 +14,11 @@ int main(int argc, char **argv)
+ 	int ret = 0;
+ 	FILE *f;
+ 
++	if (!argv[1]) {
++		fprintf(stderr, "ERROR: Run with the btrfs device argument!\n");
++		return 0;
++	}
++
+ 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
+ 	obj = bpf_object__open_file(filename, NULL);
+ 	if (libbpf_get_error(obj)) {
+diff --git a/samples/pktgen/pktgen_sample03_burst_single_flow.sh b/samples/pktgen/pktgen_sample03_burst_single_flow.sh
+index 5adcf954de731..c12198d5bbe57 100755
+--- a/samples/pktgen/pktgen_sample03_burst_single_flow.sh
++++ b/samples/pktgen/pktgen_sample03_burst_single_flow.sh
+@@ -83,7 +83,7 @@ for ((thread = $F_THREAD; thread <= $L_THREAD; thread++)); do
+ done
+ 
+ # Run if user hits control-c
+-function control_c() {
++function print_result() {
+     # Print results
+     for ((thread = $F_THREAD; thread <= $L_THREAD; thread++)); do
+ 	dev=${DEV}@${thread}
+@@ -92,11 +92,13 @@ function control_c() {
+     done
+ }
+ # trap keyboard interrupt (Ctrl-C)
+-trap control_c SIGINT
++trap true SIGINT
+ 
+ if [ -z "$APPEND" ]; then
+     echo "Running... ctrl^C to stop" >&2
+     pg_ctrl "start"
++
++    print_result
+ else
+     echo "Append mode: config done. Do more or use 'pg_ctrl start' to run"
+ fi
+diff --git a/scripts/gen_ksymdeps.sh b/scripts/gen_ksymdeps.sh
+index 1324986e1362c..725e8c9c1b53f 100755
+--- a/scripts/gen_ksymdeps.sh
++++ b/scripts/gen_ksymdeps.sh
+@@ -4,7 +4,13 @@
+ set -e
+ 
+ # List of exported symbols
+-ksyms=$($NM $1 | sed -n 's/.*__ksym_marker_\(.*\)/\1/p' | tr A-Z a-z)
++#
++# If the object has no symbol, $NM warns 'no symbols'.
++# Suppress the stderr.
++# TODO:
++#   Use -q instead of 2>/dev/null when we upgrade the minimum version of
++#   binutils to 2.37, llvm to 13.0.0.
++ksyms=$($NM $1 2>/dev/null | sed -n 's/.*__ksym_marker_\(.*\)/\1/p' | tr A-Z a-z)
+ 
+ if [ -z "$ksyms" ]; then
+ 	exit 0
+diff --git a/scripts/subarch.include b/scripts/subarch.include
+index 650682821126c..776849a3c500f 100644
+--- a/scripts/subarch.include
++++ b/scripts/subarch.include
+@@ -7,7 +7,7 @@
+ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
+ 				  -e s/sun4u/sparc64/ \
+ 				  -e s/arm.*/arm/ -e s/sa110/arm/ \
+-				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
++				  -e s/s390x/s390/ \
+ 				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
+ 				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
+ 				  -e s/riscv.*/riscv/)
+diff --git a/security/smack/smack_access.c b/security/smack/smack_access.c
+index 7eabb448acab4..169929c6c4eb3 100644
+--- a/security/smack/smack_access.c
++++ b/security/smack/smack_access.c
+@@ -81,23 +81,22 @@ int log_policy = SMACK_AUDIT_DENIED;
+ int smk_access_entry(char *subject_label, char *object_label,
+ 			struct list_head *rule_list)
+ {
+-	int may = -ENOENT;
+ 	struct smack_rule *srp;
+ 
+ 	list_for_each_entry_rcu(srp, rule_list, list) {
+ 		if (srp->smk_object->smk_known == object_label &&
+ 		    srp->smk_subject->smk_known == subject_label) {
+-			may = srp->smk_access;
+-			break;
++			int may = srp->smk_access;
++			/*
++			 * MAY_WRITE implies MAY_LOCK.
++			 */
++			if ((may & MAY_WRITE) == MAY_WRITE)
++				may |= MAY_LOCK;
++			return may;
+ 		}
+ 	}
+ 
+-	/*
+-	 * MAY_WRITE implies MAY_LOCK.
+-	 */
+-	if ((may & MAY_WRITE) == MAY_WRITE)
+-		may |= MAY_LOCK;
+-	return may;
++	return -ENOENT;
+ }
+ 
+ /**
+diff --git a/sound/soc/atmel/Kconfig b/sound/soc/atmel/Kconfig
+index ec04e3386bc0e..8617793ed9557 100644
+--- a/sound/soc/atmel/Kconfig
++++ b/sound/soc/atmel/Kconfig
+@@ -11,7 +11,6 @@ if SND_ATMEL_SOC
+ 
+ config SND_ATMEL_SOC_PDC
+ 	bool
+-	depends on HAS_DMA
+ 
+ config SND_ATMEL_SOC_DMA
+ 	bool
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 22dbd9d93c1ef..4bddb969176fe 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -290,9 +290,6 @@ static const struct snd_soc_dapm_widget byt_rt5640_widgets[] = {
+ static const struct snd_soc_dapm_route byt_rt5640_audio_map[] = {
+ 	{"Headphone", NULL, "Platform Clock"},
+ 	{"Headset Mic", NULL, "Platform Clock"},
+-	{"Internal Mic", NULL, "Platform Clock"},
+-	{"Speaker", NULL, "Platform Clock"},
+-
+ 	{"Headset Mic", NULL, "MICBIAS1"},
+ 	{"IN2P", NULL, "Headset Mic"},
+ 	{"Headphone", NULL, "HPOL"},
+@@ -300,19 +297,23 @@ static const struct snd_soc_dapm_route byt_rt5640_audio_map[] = {
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_intmic_dmic1_map[] = {
++	{"Internal Mic", NULL, "Platform Clock"},
+ 	{"DMIC1", NULL, "Internal Mic"},
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_intmic_dmic2_map[] = {
++	{"Internal Mic", NULL, "Platform Clock"},
+ 	{"DMIC2", NULL, "Internal Mic"},
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_intmic_in1_map[] = {
++	{"Internal Mic", NULL, "Platform Clock"},
+ 	{"Internal Mic", NULL, "MICBIAS1"},
+ 	{"IN1P", NULL, "Internal Mic"},
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_intmic_in3_map[] = {
++	{"Internal Mic", NULL, "Platform Clock"},
+ 	{"Internal Mic", NULL, "MICBIAS1"},
+ 	{"IN3P", NULL, "Internal Mic"},
+ };
+@@ -354,6 +355,7 @@ static const struct snd_soc_dapm_route byt_rt5640_ssp0_aif2_map[] = {
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_stereo_spk_map[] = {
++	{"Speaker", NULL, "Platform Clock"},
+ 	{"Speaker", NULL, "SPOLP"},
+ 	{"Speaker", NULL, "SPOLN"},
+ 	{"Speaker", NULL, "SPORP"},
+@@ -361,6 +363,7 @@ static const struct snd_soc_dapm_route byt_rt5640_stereo_spk_map[] = {
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5640_mono_spk_map[] = {
++	{"Speaker", NULL, "Platform Clock"},
+ 	{"Speaker", NULL, "SPOLP"},
+ 	{"Speaker", NULL, "SPOLN"},
+ };
+diff --git a/sound/soc/intel/boards/sof_pcm512x.c b/sound/soc/intel/boards/sof_pcm512x.c
+index 8620d4f38493a..335c212c1961a 100644
+--- a/sound/soc/intel/boards/sof_pcm512x.c
++++ b/sound/soc/intel/boards/sof_pcm512x.c
+@@ -26,11 +26,16 @@
+ 
+ #define SOF_PCM512X_SSP_CODEC(quirk)		((quirk) & GENMASK(3, 0))
+ #define SOF_PCM512X_SSP_CODEC_MASK			(GENMASK(3, 0))
++#define SOF_PCM512X_ENABLE_SSP_CAPTURE		BIT(4)
++#define SOF_PCM512X_ENABLE_DMIC			BIT(5)
+ 
+ #define IDISP_CODEC_MASK	0x4
+ 
+ /* Default: SSP5 */
+-static unsigned long sof_pcm512x_quirk = SOF_PCM512X_SSP_CODEC(5);
++static unsigned long sof_pcm512x_quirk =
++	SOF_PCM512X_SSP_CODEC(5) |
++	SOF_PCM512X_ENABLE_SSP_CAPTURE |
++	SOF_PCM512X_ENABLE_DMIC;
+ 
+ static bool is_legacy_cpu;
+ 
+@@ -245,8 +250,9 @@ static struct snd_soc_dai_link *sof_card_dai_links_create(struct device *dev,
+ 	links[id].dpcm_playback = 1;
+ 	/*
+ 	 * capture only supported with specific versions of the Hifiberry DAC+
+-	 * links[id].dpcm_capture = 1;
+ 	 */
++	if (sof_pcm512x_quirk & SOF_PCM512X_ENABLE_SSP_CAPTURE)
++		links[id].dpcm_capture = 1;
+ 	links[id].no_pcm = 1;
+ 	links[id].cpus = &cpus[id];
+ 	links[id].num_cpus = 1;
+@@ -381,6 +387,9 @@ static int sof_audio_probe(struct platform_device *pdev)
+ 
+ 	ssp_codec = sof_pcm512x_quirk & SOF_PCM512X_SSP_CODEC_MASK;
+ 
++	if (!(sof_pcm512x_quirk & SOF_PCM512X_ENABLE_DMIC))
++		dmic_be_num = 0;
++
+ 	/* compute number of dai links */
+ 	sof_audio_card_pcm512x.num_links = 1 + dmic_be_num + hdmi_num;
+ 
+diff --git a/sound/soc/intel/skylake/skl-messages.c b/sound/soc/intel/skylake/skl-messages.c
+index 476ef1897961d..79c6cf2c14bfb 100644
+--- a/sound/soc/intel/skylake/skl-messages.c
++++ b/sound/soc/intel/skylake/skl-messages.c
+@@ -802,9 +802,12 @@ static u16 skl_get_module_param_size(struct skl_dev *skl,
+ 
+ 	case SKL_MODULE_TYPE_BASE_OUTFMT:
+ 	case SKL_MODULE_TYPE_MIC_SELECT:
+-	case SKL_MODULE_TYPE_KPB:
+ 		return sizeof(struct skl_base_outfmt_cfg);
+ 
++	case SKL_MODULE_TYPE_MIXER:
++	case SKL_MODULE_TYPE_KPB:
++		return sizeof(struct skl_base_cfg);
++
+ 	default:
+ 		/*
+ 		 * return only base cfg when no specific module type is
+@@ -857,10 +860,14 @@ static int skl_set_module_format(struct skl_dev *skl,
+ 
+ 	case SKL_MODULE_TYPE_BASE_OUTFMT:
+ 	case SKL_MODULE_TYPE_MIC_SELECT:
+-	case SKL_MODULE_TYPE_KPB:
+ 		skl_set_base_outfmt_format(skl, module_config, *param_data);
+ 		break;
+ 
++	case SKL_MODULE_TYPE_MIXER:
++	case SKL_MODULE_TYPE_KPB:
++		skl_set_base_module_format(skl, module_config, *param_data);
++		break;
++
+ 	default:
+ 		skl_set_base_module_format(skl, module_config, *param_data);
+ 		break;
+diff --git a/sound/soc/intel/skylake/skl-pcm.c b/sound/soc/intel/skylake/skl-pcm.c
+index b1ca64d2f7ea6..031d5dc7e6601 100644
+--- a/sound/soc/intel/skylake/skl-pcm.c
++++ b/sound/soc/intel/skylake/skl-pcm.c
+@@ -1317,21 +1317,6 @@ static int skl_get_module_info(struct skl_dev *skl,
+ 		return -EIO;
+ 	}
+ 
+-	list_for_each_entry(module, &skl->uuid_list, list) {
+-		if (guid_equal(uuid_mod, &module->uuid)) {
+-			mconfig->id.module_id = module->id;
+-			if (mconfig->module)
+-				mconfig->module->loadable = module->is_loadable;
+-			ret = 0;
+-			break;
+-		}
+-	}
+-
+-	if (ret)
+-		return ret;
+-
+-	uuid_mod = &module->uuid;
+-	ret = -EIO;
+ 	for (i = 0; i < skl->nr_modules; i++) {
+ 		skl_module = skl->modules[i];
+ 		uuid_tplg = &skl_module->uuid;
+@@ -1341,10 +1326,18 @@ static int skl_get_module_info(struct skl_dev *skl,
+ 			break;
+ 		}
+ 	}
++
+ 	if (skl->nr_modules && ret)
+ 		return ret;
+ 
++	ret = -EIO;
+ 	list_for_each_entry(module, &skl->uuid_list, list) {
++		if (guid_equal(uuid_mod, &module->uuid)) {
++			mconfig->id.module_id = module->id;
++			mconfig->module->loadable = module->is_loadable;
++			ret = 0;
++		}
++
+ 		for (i = 0; i < MAX_IN_QUEUE; i++) {
+ 			pin_id = &mconfig->m_in_pin[i].id;
+ 			if (guid_equal(&pin_id->mod_uuid, &module->uuid))
+@@ -1358,7 +1351,7 @@ static int skl_get_module_info(struct skl_dev *skl,
+ 		}
+ 	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int skl_populate_modules(struct skl_dev *skl)
+diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c
+index 0740764e7f71f..ac9980ed266e2 100644
+--- a/sound/soc/rockchip/rockchip_i2s.c
++++ b/sound/soc/rockchip/rockchip_i2s.c
+@@ -186,7 +186,9 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ {
+ 	struct rk_i2s_dev *i2s = to_info(cpu_dai);
+ 	unsigned int mask = 0, val = 0;
++	int ret = 0;
+ 
++	pm_runtime_get_sync(cpu_dai->dev);
+ 	mask = I2S_CKR_MSS_MASK;
+ 	switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
+ 	case SND_SOC_DAIFMT_CBS_CFS:
+@@ -199,7 +201,8 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 		i2s->is_master_mode = false;
+ 		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm_put;
+ 	}
+ 
+ 	regmap_update_bits(i2s->regmap, I2S_CKR, mask, val);
+@@ -213,7 +216,8 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 		val = I2S_CKR_CKP_POS;
+ 		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm_put;
+ 	}
+ 
+ 	regmap_update_bits(i2s->regmap, I2S_CKR, mask, val);
+@@ -229,14 +233,15 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 	case SND_SOC_DAIFMT_I2S:
+ 		val = I2S_TXCR_IBM_NORMAL;
+ 		break;
+-	case SND_SOC_DAIFMT_DSP_A: /* PCM no delay mode */
+-		val = I2S_TXCR_TFS_PCM;
+-		break;
+-	case SND_SOC_DAIFMT_DSP_B: /* PCM delay 1 mode */
++	case SND_SOC_DAIFMT_DSP_A: /* PCM delay 1 bit mode */
+ 		val = I2S_TXCR_TFS_PCM | I2S_TXCR_PBM_MODE(1);
+ 		break;
++	case SND_SOC_DAIFMT_DSP_B: /* PCM no delay mode */
++		val = I2S_TXCR_TFS_PCM;
++		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm_put;
+ 	}
+ 
+ 	regmap_update_bits(i2s->regmap, I2S_TXCR, mask, val);
+@@ -252,19 +257,23 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai,
+ 	case SND_SOC_DAIFMT_I2S:
+ 		val = I2S_RXCR_IBM_NORMAL;
+ 		break;
+-	case SND_SOC_DAIFMT_DSP_A: /* PCM no delay mode */
+-		val = I2S_RXCR_TFS_PCM;
+-		break;
+-	case SND_SOC_DAIFMT_DSP_B: /* PCM delay 1 mode */
++	case SND_SOC_DAIFMT_DSP_A: /* PCM delay 1 bit mode */
+ 		val = I2S_RXCR_TFS_PCM | I2S_RXCR_PBM_MODE(1);
+ 		break;
++	case SND_SOC_DAIFMT_DSP_B: /* PCM no delay mode */
++		val = I2S_RXCR_TFS_PCM;
++		break;
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_pm_put;
+ 	}
+ 
+ 	regmap_update_bits(i2s->regmap, I2S_RXCR, mask, val);
+ 
+-	return 0;
++err_pm_put:
++	pm_runtime_put(cpu_dai->dev);
++
++	return ret;
+ }
+ 
+ static int rockchip_i2s_hw_params(struct snd_pcm_substream *substream,
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index d1c570ca21ea7..b944f56a469a6 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -2001,6 +2001,8 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
+ 	struct snd_soc_pcm_runtime *be;
+ 	struct snd_soc_dpcm *dpcm;
+ 	int ret = 0;
++	unsigned long flags;
++	enum snd_soc_dpcm_state state;
+ 
+ 	for_each_dpcm_be(fe, stream, dpcm) {
+ 		struct snd_pcm_substream *be_substream;
+@@ -2017,76 +2019,141 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
+ 
+ 		switch (cmd) {
+ 		case SNDRV_PCM_TRIGGER_START:
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
+ 			if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_PREPARE) &&
+ 			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) &&
+-			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
++			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED)) {
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				continue;
++			}
++			state = be->dpcm[stream].state;
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 			ret = soc_pcm_trigger(be_substream, cmd);
+-			if (ret)
++			if (ret) {
++				spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++				be->dpcm[stream].state = state;
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				goto end;
++			}
+ 
+-			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_RESUME:
+-			if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND))
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND) {
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				continue;
++			}
++
++			state = be->dpcm[stream].state;
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 			ret = soc_pcm_trigger(be_substream, cmd);
+-			if (ret)
++			if (ret) {
++				spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++				be->dpcm[stream].state = state;
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				goto end;
++			}
+ 
+-			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+-			if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED) {
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				continue;
++			}
++
++			state = be->dpcm[stream].state;
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 			ret = soc_pcm_trigger(be_substream, cmd);
+-			if (ret)
++			if (ret) {
++				spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++				be->dpcm[stream].state = state;
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				goto end;
++			}
+ 
+-			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_STOP:
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
+ 			if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_START) &&
+-			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
++			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED)) {
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				continue;
++			}
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 			if (!snd_soc_dpcm_can_be_free_stop(fe, be, stream))
+ 				continue;
+ 
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			state = be->dpcm[stream].state;
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_STOP;
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
++
+ 			ret = soc_pcm_trigger(be_substream, cmd);
+-			if (ret)
++			if (ret) {
++				spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++				be->dpcm[stream].state = state;
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				goto end;
++			}
+ 
+-			be->dpcm[stream].state = SND_SOC_DPCM_STATE_STOP;
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_SUSPEND:
+-			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START)
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START) {
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				continue;
++			}
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 			if (!snd_soc_dpcm_can_be_free_stop(fe, be, stream))
+ 				continue;
+ 
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			state = be->dpcm[stream].state;
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_STOP;
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
++
+ 			ret = soc_pcm_trigger(be_substream, cmd);
+-			if (ret)
++			if (ret) {
++				spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++				be->dpcm[stream].state = state;
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				goto end;
++			}
+ 
+-			be->dpcm[stream].state = SND_SOC_DPCM_STATE_SUSPEND;
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+-			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START)
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START) {
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				continue;
++			}
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 			if (!snd_soc_dpcm_can_be_free_stop(fe, be, stream))
+ 				continue;
+ 
++			spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++			state = be->dpcm[stream].state;
++			be->dpcm[stream].state = SND_SOC_DPCM_STATE_PAUSED;
++			spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
++
+ 			ret = soc_pcm_trigger(be_substream, cmd);
+-			if (ret)
++			if (ret) {
++				spin_lock_irqsave(&fe->card->dpcm_lock, flags);
++				be->dpcm[stream].state = state;
++				spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 				goto end;
++			}
+ 
+-			be->dpcm[stream].state = SND_SOC_DPCM_STATE_PAUSED;
+ 			break;
+ 		}
+ 	}
+diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
+index b94220306d1a8..41d7cb1321981 100644
+--- a/sound/soc/ti/davinci-mcasp.c
++++ b/sound/soc/ti/davinci-mcasp.c
+@@ -83,6 +83,8 @@ struct davinci_mcasp {
+ 	struct snd_pcm_substream *substreams[2];
+ 	unsigned int dai_fmt;
+ 
++	u32 iec958_status;
++
+ 	/* Audio can not be enabled due to missing parameter(s) */
+ 	bool	missing_audio_param;
+ 
+@@ -757,6 +759,9 @@ static int davinci_mcasp_set_tdm_slot(struct snd_soc_dai *dai,
+ {
+ 	struct davinci_mcasp *mcasp = snd_soc_dai_get_drvdata(dai);
+ 
++	if (mcasp->op_mode == DAVINCI_MCASP_DIT_MODE)
++		return 0;
++
+ 	dev_dbg(mcasp->dev,
+ 		 "%s() tx_mask 0x%08x rx_mask 0x%08x slots %d width %d\n",
+ 		 __func__, tx_mask, rx_mask, slots, slot_width);
+@@ -827,6 +832,20 @@ static int davinci_config_channel_size(struct davinci_mcasp *mcasp,
+ 		mcasp_mod_bits(mcasp, DAVINCI_MCASP_RXFMT_REG, RXROT(rx_rotate),
+ 			       RXROT(7));
+ 		mcasp_set_reg(mcasp, DAVINCI_MCASP_RXMASK_REG, mask);
++	} else {
++		/*
++		 * according to the TRM it should be TXROT=0, this one works:
++		 * 16 bit to 23-8 (TXROT=6, rotate 24 bits)
++		 * 24 bit to 23-0 (TXROT=0, rotate 0 bits)
++		 *
++		 * TXROT = 0 only works with 24bit samples
++		 */
++		tx_rotate = (sample_width / 4 + 2) & 0x7;
++
++		mcasp_mod_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXROT(tx_rotate),
++			       TXROT(7));
++		mcasp_mod_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXSSZ(15),
++			       TXSSZ(0x0F));
+ 	}
+ 
+ 	mcasp_set_reg(mcasp, DAVINCI_MCASP_TXMASK_REG, mask);
+@@ -842,10 +861,16 @@ static int mcasp_common_hw_param(struct davinci_mcasp *mcasp, int stream,
+ 	u8 tx_ser = 0;
+ 	u8 rx_ser = 0;
+ 	u8 slots = mcasp->tdm_slots;
+-	u8 max_active_serializers = (channels + slots - 1) / slots;
+-	u8 max_rx_serializers, max_tx_serializers;
++	u8 max_active_serializers, max_rx_serializers, max_tx_serializers;
+ 	int active_serializers, numevt;
+ 	u32 reg;
++
++	/* In DIT mode we only allow maximum of one serializers for now */
++	if (mcasp->op_mode == DAVINCI_MCASP_DIT_MODE)
++		max_active_serializers = 1;
++	else
++		max_active_serializers = (channels + slots - 1) / slots;
++
+ 	/* Default configuration */
+ 	if (mcasp->version < MCASP_VERSION_3)
+ 		mcasp_set_bits(mcasp, DAVINCI_MCASP_PWREMUMGT_REG, MCASP_SOFT);
+@@ -1031,16 +1056,18 @@ static int mcasp_i2s_hw_param(struct davinci_mcasp *mcasp, int stream,
+ static int mcasp_dit_hw_param(struct davinci_mcasp *mcasp,
+ 			      unsigned int rate)
+ {
+-	u32 cs_value = 0;
+-	u8 *cs_bytes = (u8*) &cs_value;
++	u8 *cs_bytes = (u8 *)&mcasp->iec958_status;
+ 
+-	/* Set the TX format : 24 bit right rotation, 32 bit slot, Pad 0
+-	   and LSB first */
+-	mcasp_set_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXROT(6) | TXSSZ(15));
++	if (!mcasp->dat_port)
++		mcasp_set_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXSEL);
++	else
++		mcasp_clr_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXSEL);
+ 
+ 	/* Set TX frame synch : DIT Mode, 1 bit width, internal, rising edge */
+ 	mcasp_set_reg(mcasp, DAVINCI_MCASP_TXFMCTL_REG, AFSXE | FSXMOD(0x180));
+ 
++	mcasp_set_reg(mcasp, DAVINCI_MCASP_TXMASK_REG, 0xFFFF);
++
+ 	/* Set the TX tdm : for all the slots */
+ 	mcasp_set_reg(mcasp, DAVINCI_MCASP_TXTDM_REG, 0xFFFFFFFF);
+ 
+@@ -1049,16 +1076,8 @@ static int mcasp_dit_hw_param(struct davinci_mcasp *mcasp,
+ 
+ 	mcasp_clr_bits(mcasp, DAVINCI_MCASP_XEVTCTL_REG, TXDATADMADIS);
+ 
+-	/* Only 44100 and 48000 are valid, both have the same setting */
+-	mcasp_set_bits(mcasp, DAVINCI_MCASP_AHCLKXCTL_REG, AHCLKXDIV(3));
+-
+-	/* Enable the DIT */
+-	mcasp_set_bits(mcasp, DAVINCI_MCASP_TXDITCTL_REG, DITEN);
+-
+ 	/* Set S/PDIF channel status bits */
+-	cs_bytes[0] = IEC958_AES0_CON_NOT_COPYRIGHT;
+-	cs_bytes[1] = IEC958_AES1_CON_PCM_CODER;
+-
++	cs_bytes[3] &= ~IEC958_AES3_CON_FS;
+ 	switch (rate) {
+ 	case 22050:
+ 		cs_bytes[3] |= IEC958_AES3_CON_FS_22050;
+@@ -1088,12 +1107,15 @@ static int mcasp_dit_hw_param(struct davinci_mcasp *mcasp,
+ 		cs_bytes[3] |= IEC958_AES3_CON_FS_192000;
+ 		break;
+ 	default:
+-		printk(KERN_WARNING "unsupported sampling rate: %d\n", rate);
++		dev_err(mcasp->dev, "unsupported sampling rate: %d\n", rate);
+ 		return -EINVAL;
+ 	}
+ 
+-	mcasp_set_reg(mcasp, DAVINCI_MCASP_DITCSRA_REG, cs_value);
+-	mcasp_set_reg(mcasp, DAVINCI_MCASP_DITCSRB_REG, cs_value);
++	mcasp_set_reg(mcasp, DAVINCI_MCASP_DITCSRA_REG, mcasp->iec958_status);
++	mcasp_set_reg(mcasp, DAVINCI_MCASP_DITCSRB_REG, mcasp->iec958_status);
++
++	/* Enable the DIT */
++	mcasp_set_bits(mcasp, DAVINCI_MCASP_TXDITCTL_REG, DITEN);
+ 
+ 	return 0;
+ }
+@@ -1237,12 +1259,18 @@ static int davinci_mcasp_hw_params(struct snd_pcm_substream *substream,
+ 		int slots = mcasp->tdm_slots;
+ 		int rate = params_rate(params);
+ 		int sbits = params_width(params);
++		unsigned int bclk_target;
+ 
+ 		if (mcasp->slot_width)
+ 			sbits = mcasp->slot_width;
+ 
++		if (mcasp->op_mode == DAVINCI_MCASP_IIS_MODE)
++			bclk_target = rate * sbits * slots;
++		else
++			bclk_target = rate * 128;
++
+ 		davinci_mcasp_calc_clk_div(mcasp, mcasp->sysclk_freq,
+-					   rate * sbits * slots, true);
++					   bclk_target, true);
+ 	}
+ 
+ 	ret = mcasp_common_hw_param(mcasp, substream->stream,
+@@ -1598,6 +1626,77 @@ static const struct snd_soc_dai_ops davinci_mcasp_dai_ops = {
+ 	.set_tdm_slot	= davinci_mcasp_set_tdm_slot,
+ };
+ 
++static int davinci_mcasp_iec958_info(struct snd_kcontrol *kcontrol,
++				     struct snd_ctl_elem_info *uinfo)
++{
++	uinfo->type = SNDRV_CTL_ELEM_TYPE_IEC958;
++	uinfo->count = 1;
++
++	return 0;
++}
++
++static int davinci_mcasp_iec958_get(struct snd_kcontrol *kcontrol,
++				    struct snd_ctl_elem_value *uctl)
++{
++	struct snd_soc_dai *cpu_dai = snd_kcontrol_chip(kcontrol);
++	struct davinci_mcasp *mcasp = snd_soc_dai_get_drvdata(cpu_dai);
++
++	memcpy(uctl->value.iec958.status, &mcasp->iec958_status,
++	       sizeof(mcasp->iec958_status));
++
++	return 0;
++}
++
++static int davinci_mcasp_iec958_put(struct snd_kcontrol *kcontrol,
++				    struct snd_ctl_elem_value *uctl)
++{
++	struct snd_soc_dai *cpu_dai = snd_kcontrol_chip(kcontrol);
++	struct davinci_mcasp *mcasp = snd_soc_dai_get_drvdata(cpu_dai);
++
++	memcpy(&mcasp->iec958_status, uctl->value.iec958.status,
++	       sizeof(mcasp->iec958_status));
++
++	return 0;
++}
++
++static int davinci_mcasp_iec958_con_mask_get(struct snd_kcontrol *kcontrol,
++					     struct snd_ctl_elem_value *ucontrol)
++{
++	struct snd_soc_dai *cpu_dai = snd_kcontrol_chip(kcontrol);
++	struct davinci_mcasp *mcasp = snd_soc_dai_get_drvdata(cpu_dai);
++
++	memset(ucontrol->value.iec958.status, 0xff, sizeof(mcasp->iec958_status));
++	return 0;
++}
++
++static const struct snd_kcontrol_new davinci_mcasp_iec958_ctls[] = {
++	{
++		.access = (SNDRV_CTL_ELEM_ACCESS_READWRITE |
++			   SNDRV_CTL_ELEM_ACCESS_VOLATILE),
++		.iface = SNDRV_CTL_ELEM_IFACE_PCM,
++		.name = SNDRV_CTL_NAME_IEC958("", PLAYBACK, DEFAULT),
++		.info = davinci_mcasp_iec958_info,
++		.get = davinci_mcasp_iec958_get,
++		.put = davinci_mcasp_iec958_put,
++	}, {
++		.access = SNDRV_CTL_ELEM_ACCESS_READ,
++		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
++		.name = SNDRV_CTL_NAME_IEC958("", PLAYBACK, CON_MASK),
++		.info = davinci_mcasp_iec958_info,
++		.get = davinci_mcasp_iec958_con_mask_get,
++	},
++};
++
++static void davinci_mcasp_init_iec958_status(struct davinci_mcasp *mcasp)
++{
++	unsigned char *cs = (u8 *)&mcasp->iec958_status;
++
++	cs[0] = IEC958_AES0_CON_NOT_COPYRIGHT | IEC958_AES0_CON_EMPHASIS_NONE;
++	cs[1] = IEC958_AES1_CON_PCM_CODER;
++	cs[2] = IEC958_AES2_CON_SOURCE_UNSPEC | IEC958_AES2_CON_CHANNEL_UNSPEC;
++	cs[3] = IEC958_AES3_CON_CLOCK_1000PPM;
++}
++
+ static int davinci_mcasp_dai_probe(struct snd_soc_dai *dai)
+ {
+ 	struct davinci_mcasp *mcasp = snd_soc_dai_get_drvdata(dai);
+@@ -1605,6 +1704,12 @@ static int davinci_mcasp_dai_probe(struct snd_soc_dai *dai)
+ 	dai->playback_dma_data = &mcasp->dma_data[SNDRV_PCM_STREAM_PLAYBACK];
+ 	dai->capture_dma_data = &mcasp->dma_data[SNDRV_PCM_STREAM_CAPTURE];
+ 
++	if (mcasp->op_mode == DAVINCI_MCASP_DIT_MODE) {
++		davinci_mcasp_init_iec958_status(mcasp);
++		snd_soc_add_dai_controls(dai, davinci_mcasp_iec958_ctls,
++					 ARRAY_SIZE(davinci_mcasp_iec958_ctls));
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1651,7 +1756,8 @@ static struct snd_soc_dai_driver davinci_mcasp_dai[] = {
+ 			.channels_min	= 1,
+ 			.channels_max	= 384,
+ 			.rates		= DAVINCI_MCASP_RATES,
+-			.formats	= DAVINCI_MCASP_PCM_FMTS,
++			.formats	= SNDRV_PCM_FMTBIT_S16_LE |
++					  SNDRV_PCM_FMTBIT_S24_LE,
+ 		},
+ 		.ops 		= &davinci_mcasp_dai_ops,
+ 	},
+@@ -1871,6 +1977,8 @@ out:
+ 		} else {
+ 			mcasp->tdm_slots = pdata->tdm_slots;
+ 		}
++	} else {
++		mcasp->tdm_slots = 32;
+ 	}
+ 
+ 	mcasp->num_serializer = pdata->num_serializer;
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index f6ebda75b0306..533512d933c6c 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -3844,6 +3844,42 @@ static int bpf_map_find_btf_info(struct bpf_object *obj, struct bpf_map *map)
+ 	return 0;
+ }
+ 
++static int bpf_get_map_info_from_fdinfo(int fd, struct bpf_map_info *info)
++{
++	char file[PATH_MAX], buff[4096];
++	FILE *fp;
++	__u32 val;
++	int err;
++
++	snprintf(file, sizeof(file), "/proc/%d/fdinfo/%d", getpid(), fd);
++	memset(info, 0, sizeof(*info));
++
++	fp = fopen(file, "r");
++	if (!fp) {
++		err = -errno;
++		pr_warn("failed to open %s: %d. No procfs support?\n", file,
++			err);
++		return err;
++	}
++
++	while (fgets(buff, sizeof(buff), fp)) {
++		if (sscanf(buff, "map_type:\t%u", &val) == 1)
++			info->type = val;
++		else if (sscanf(buff, "key_size:\t%u", &val) == 1)
++			info->key_size = val;
++		else if (sscanf(buff, "value_size:\t%u", &val) == 1)
++			info->value_size = val;
++		else if (sscanf(buff, "max_entries:\t%u", &val) == 1)
++			info->max_entries = val;
++		else if (sscanf(buff, "map_flags:\t%i", &val) == 1)
++			info->map_flags = val;
++	}
++
++	fclose(fp);
++
++	return 0;
++}
++
+ int bpf_map__reuse_fd(struct bpf_map *map, int fd)
+ {
+ 	struct bpf_map_info info = {};
+@@ -3852,6 +3888,8 @@ int bpf_map__reuse_fd(struct bpf_map *map, int fd)
+ 	char *new_name;
+ 
+ 	err = bpf_obj_get_info_by_fd(fd, &info, &len);
++	if (err && errno == EINVAL)
++		err = bpf_get_map_info_from_fdinfo(fd, &info);
+ 	if (err)
+ 		return err;
+ 
+@@ -4318,12 +4356,16 @@ static bool map_is_reuse_compat(const struct bpf_map *map, int map_fd)
+ 	struct bpf_map_info map_info = {};
+ 	char msg[STRERR_BUFSIZE];
+ 	__u32 map_info_len;
++	int err;
+ 
+ 	map_info_len = sizeof(map_info);
+ 
+-	if (bpf_obj_get_info_by_fd(map_fd, &map_info, &map_info_len)) {
+-		pr_warn("failed to get map info for map FD %d: %s\n",
+-			map_fd, libbpf_strerror_r(errno, msg, sizeof(msg)));
++	err = bpf_obj_get_info_by_fd(map_fd, &map_info, &map_info_len);
++	if (err && errno == EINVAL)
++		err = bpf_get_map_info_from_fdinfo(map_fd, &map_info);
++	if (err) {
++		pr_warn("failed to get map info for map FD %d: %s\n", map_fd,
++			libbpf_strerror_r(errno, msg, sizeof(msg)));
+ 		return false;
+ 	}
+ 
+@@ -4528,10 +4570,13 @@ bpf_object__create_maps(struct bpf_object *obj)
+ 	char *cp, errmsg[STRERR_BUFSIZE];
+ 	unsigned int i, j;
+ 	int err;
++	bool retried;
+ 
+ 	for (i = 0; i < obj->nr_maps; i++) {
+ 		map = &obj->maps[i];
+ 
++		retried = false;
++retry:
+ 		if (map->pin_path) {
+ 			err = bpf_object__reuse_map(map);
+ 			if (err) {
+@@ -4539,6 +4584,12 @@ bpf_object__create_maps(struct bpf_object *obj)
+ 					map->name);
+ 				goto err_out;
+ 			}
++			if (retried && map->fd < 0) {
++				pr_warn("map '%s': cannot find pinned map\n",
++					map->name);
++				err = -ENOENT;
++				goto err_out;
++			}
+ 		}
+ 
+ 		if (map->fd >= 0) {
+@@ -4572,9 +4623,13 @@ bpf_object__create_maps(struct bpf_object *obj)
+ 		if (map->pin_path && !map->pinned) {
+ 			err = bpf_map__pin(map, NULL);
+ 			if (err) {
++				zclose(map->fd);
++				if (!retried && err == -EEXIST) {
++					retried = true;
++					goto retry;
++				}
+ 				pr_warn("map '%s': failed to auto-pin at '%s': %d\n",
+ 					map->name, map->pin_path, err);
+-				zclose(map->fd);
+ 				goto err_out;
+ 			}
+ 		}
+diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
+index f50ac31920d13..0328a1e08f659 100644
+--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
++++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
+@@ -298,7 +298,7 @@ int mte_default_setup(void)
+ 	int ret;
+ 
+ 	if (!(hwcaps2 & HWCAP2_MTE)) {
+-		ksft_print_msg("FAIL: MTE features unavailable\n");
++		ksft_print_msg("SKIP: MTE features unavailable\n");
+ 		return KSFT_SKIP;
+ 	}
+ 	/* Get current mte mode */
+diff --git a/tools/testing/selftests/arm64/pauth/pac.c b/tools/testing/selftests/arm64/pauth/pac.c
+index 592fe538506e3..b743daa772f55 100644
+--- a/tools/testing/selftests/arm64/pauth/pac.c
++++ b/tools/testing/selftests/arm64/pauth/pac.c
+@@ -25,13 +25,15 @@
+ do { \
+ 	unsigned long hwcaps = getauxval(AT_HWCAP); \
+ 	/* data key instructions are not in NOP space. This prevents a SIGILL */ \
+-	ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled"); \
++	if (!(hwcaps & HWCAP_PACA))					\
++		SKIP(return, "PAUTH not enabled"); \
+ } while (0)
+ #define ASSERT_GENERIC_PAUTH_ENABLED() \
+ do { \
+ 	unsigned long hwcaps = getauxval(AT_HWCAP); \
+ 	/* generic key instructions are not in NOP space. This prevents a SIGILL */ \
+-	ASSERT_NE(0, hwcaps & HWCAP_PACG) TH_LOG("Generic PAUTH not enabled"); \
++	if (!(hwcaps & HWCAP_PACG)) \
++		SKIP(return, "Generic PAUTH not enabled");	\
+ } while (0)
+ 
+ void sign_specific(struct signatures *sign, size_t val)
+@@ -256,7 +258,7 @@ TEST(single_thread_different_keys)
+ 	unsigned long hwcaps = getauxval(AT_HWCAP);
+ 
+ 	/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
+-	ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
++	ASSERT_PAUTH_ENABLED();
+ 	if (!(hwcaps & HWCAP_PACG)) {
+ 		TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
+ 		nkeys = NKEYS - 1;
+@@ -299,7 +301,7 @@ TEST(exec_changed_keys)
+ 	unsigned long hwcaps = getauxval(AT_HWCAP);
+ 
+ 	/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
+-	ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
++	ASSERT_PAUTH_ENABLED();
+ 	if (!(hwcaps & HWCAP_PACG)) {
+ 		TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
+ 		nkeys = NKEYS - 1;
+diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
+index 7043e6ded0e60..75b72c751772b 100644
+--- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
++++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
+@@ -1,5 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <test_progs.h>
++#include <sys/time.h>
++#include <sys/resource.h>
+ #include "test_send_signal_kern.skel.h"
+ 
+ static volatile int sigusr1_received = 0;
+@@ -41,12 +43,23 @@ static void test_send_signal_common(struct perf_event_attr *attr,
+ 	}
+ 
+ 	if (pid == 0) {
++		int old_prio;
++
+ 		/* install signal handler and notify parent */
+ 		signal(SIGUSR1, sigusr1_handler);
+ 
+ 		close(pipe_c2p[0]); /* close read */
+ 		close(pipe_p2c[1]); /* close write */
+ 
++		/* Boost to a high priority so we have a better chance
++		 * that, if an interrupt happens, the underlying task
++		 * is this process.
++		 */
++		errno = 0;
++		old_prio = getpriority(PRIO_PROCESS, 0);
++		ASSERT_OK(errno, "getpriority");
++		ASSERT_OK(setpriority(PRIO_PROCESS, 0, -20), "setpriority");
++
+ 		/* notify parent signal handler is installed */
+ 		CHECK(write(pipe_c2p[1], buf, 1) != 1, "pipe_write", "err %d\n", -errno);
+ 
+@@ -62,6 +75,9 @@ static void test_send_signal_common(struct perf_event_attr *attr,
+ 		/* wait for parent notification and exit */
+ 		CHECK(read(pipe_p2c[0], buf, 1) != 1, "pipe_read", "err %d\n", -errno);
+ 
++		/* restore the old priority */
++		ASSERT_OK(setpriority(PRIO_PROCESS, 0, old_prio), "setpriority");
++
+ 		close(pipe_c2p[1]);
+ 		close(pipe_p2c[0]);
+ 		exit(0);
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c b/tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c
+index ec281b0363b82..86f97681ad898 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c
+@@ -195,8 +195,10 @@ static void run_test(int cgroup_fd)
+ 
+ 	pthread_mutex_lock(&server_started_mtx);
+ 	if (CHECK_FAIL(pthread_create(&tid, NULL, server_thread,
+-				      (void *)&server_fd)))
++				      (void *)&server_fd))) {
++		pthread_mutex_unlock(&server_started_mtx);
+ 		goto close_server_fd;
++	}
+ 	pthread_cond_wait(&server_started, &server_started_mtx);
+ 	pthread_mutex_unlock(&server_started_mtx);
+ 
+diff --git a/tools/testing/selftests/bpf/progs/xdp_tx.c b/tools/testing/selftests/bpf/progs/xdp_tx.c
+index 94e6c2b281cb6..5f725c720e008 100644
+--- a/tools/testing/selftests/bpf/progs/xdp_tx.c
++++ b/tools/testing/selftests/bpf/progs/xdp_tx.c
+@@ -3,7 +3,7 @@
+ #include <linux/bpf.h>
+ #include <bpf/bpf_helpers.h>
+ 
+-SEC("tx")
++SEC("xdp")
+ int xdp_tx(struct xdp_md *xdp)
+ {
+ 	return XDP_TX;
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index aa38dc4a5e85f..90f38c6528a1a 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -968,7 +968,7 @@ static void test_sockmap(unsigned int tasks, void *data)
+ 
+ 		FD_ZERO(&w);
+ 		FD_SET(sfd[3], &w);
+-		to.tv_sec = 1;
++		to.tv_sec = 30;
+ 		to.tv_usec = 0;
+ 		s = select(sfd[3] + 1, &w, NULL, NULL, &to);
+ 		if (s == -1) {
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index 6396932b97e29..9ed13187136c1 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -148,18 +148,18 @@ void test__end_subtest()
+ 	struct prog_test_def *test = env.test;
+ 	int sub_error_cnt = test->error_cnt - test->old_error_cnt;
+ 
+-	if (sub_error_cnt)
+-		env.fail_cnt++;
+-	else if (test->skip_cnt == 0)
+-		env.sub_succ_cnt++;
+-	skip_account();
+-
+ 	dump_test_log(test, sub_error_cnt);
+ 
+ 	fprintf(env.stdout, "#%d/%d %s:%s\n",
+ 	       test->test_num, test->subtest_num, test->subtest_name,
+ 	       sub_error_cnt ? "FAIL" : (test->skip_cnt ? "SKIP" : "OK"));
+ 
++	if (sub_error_cnt)
++		env.fail_cnt++;
++	else if (test->skip_cnt == 0)
++		env.sub_succ_cnt++;
++	skip_account();
++
+ 	free(test->subtest_name);
+ 	test->subtest_name = NULL;
+ }
+@@ -783,17 +783,18 @@ int main(int argc, char **argv)
+ 			test__end_subtest();
+ 
+ 		test->tested = true;
+-		if (test->error_cnt)
+-			env.fail_cnt++;
+-		else
+-			env.succ_cnt++;
+-		skip_account();
+ 
+ 		dump_test_log(test, test->error_cnt);
+ 
+ 		fprintf(env.stdout, "#%d %s:%s\n",
+ 			test->test_num, test->test_name,
+-			test->error_cnt ? "FAIL" : "OK");
++			test->error_cnt ? "FAIL" : (test->skip_cnt ? "SKIP" : "OK"));
++
++		if (test->error_cnt)
++			env.fail_cnt++;
++		else
++			env.succ_cnt++;
++		skip_account();
+ 
+ 		reset_affinity();
+ 		restore_netns();
+diff --git a/tools/testing/selftests/bpf/test_xdp_veth.sh b/tools/testing/selftests/bpf/test_xdp_veth.sh
+index ba8ffcdaac302..995278e684b6e 100755
+--- a/tools/testing/selftests/bpf/test_xdp_veth.sh
++++ b/tools/testing/selftests/bpf/test_xdp_veth.sh
+@@ -108,7 +108,7 @@ ip link set dev veth2 xdp pinned $BPF_DIR/progs/redirect_map_1
+ ip link set dev veth3 xdp pinned $BPF_DIR/progs/redirect_map_2
+ 
+ ip -n ns1 link set dev veth11 xdp obj xdp_dummy.o sec xdp_dummy
+-ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec tx
++ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec xdp
+ ip -n ns3 link set dev veth33 xdp obj xdp_dummy.o sec xdp_dummy
+ 
+ trap cleanup EXIT
+diff --git a/tools/testing/selftests/firmware/fw_namespace.c b/tools/testing/selftests/firmware/fw_namespace.c
+index 0e393cb5f42de..4c6f0cd83c5b0 100644
+--- a/tools/testing/selftests/firmware/fw_namespace.c
++++ b/tools/testing/selftests/firmware/fw_namespace.c
+@@ -129,7 +129,8 @@ int main(int argc, char **argv)
+ 		die("mounting tmpfs to /lib/firmware failed\n");
+ 
+ 	sys_path = argv[1];
+-	asprintf(&fw_path, "/lib/firmware/%s", fw_name);
++	if (asprintf(&fw_path, "/lib/firmware/%s", fw_name) < 0)
++		die("error: failed to build full fw_path\n");
+ 
+ 	setup_fw(fw_path);
+ 
+diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
+index a6fac927ee82f..0cee6b067a374 100644
+--- a/tools/testing/selftests/ftrace/test.d/functions
++++ b/tools/testing/selftests/ftrace/test.d/functions
+@@ -115,7 +115,7 @@ check_requires() { # Check required files and tracers
+                 echo "Required tracer $t is not configured."
+                 exit_unsupported
+             fi
+-        elif [ $r != $i ]; then
++        elif [ "$r" != "$i" ]; then
+             if ! grep -Fq "$r" README ; then
+                 echo "Required feature pattern \"$r\" is not in README."
+                 exit_unsupported
+diff --git a/tools/testing/selftests/nci/nci_dev.c b/tools/testing/selftests/nci/nci_dev.c
+index 57b505cb15618..acd4125ff39fe 100644
+--- a/tools/testing/selftests/nci/nci_dev.c
++++ b/tools/testing/selftests/nci/nci_dev.c
+@@ -110,11 +110,11 @@ static int send_cmd_mt_nla(int sd, __u16 nlmsg_type, __u32 nlmsg_pid,
+ 		na->nla_type = nla_type[cnt];
+ 		na->nla_len = nla_len[cnt] + NLA_HDRLEN;
+ 
+-		if (nla_len > 0)
++		if (nla_len[cnt] > 0)
+ 			memcpy(NLA_DATA(na), nla_data[cnt], nla_len[cnt]);
+ 
+-		msg.n.nlmsg_len += NLMSG_ALIGN(na->nla_len);
+-		prv_len = na->nla_len;
++		prv_len = NLA_ALIGN(nla_len[cnt]) + NLA_HDRLEN;
++		msg.n.nlmsg_len += prv_len;
+ 	}
+ 
+ 	buf = (char *)&msg;
+diff --git a/tools/thermal/tmon/Makefile b/tools/thermal/tmon/Makefile
+index 9db867df76794..610334f86f631 100644
+--- a/tools/thermal/tmon/Makefile
++++ b/tools/thermal/tmon/Makefile
+@@ -10,7 +10,7 @@ override CFLAGS+= $(call cc-option,-O3,-O1) ${WARNFLAGS}
+ # Add "-fstack-protector" only if toolchain supports it.
+ override CFLAGS+= $(call cc-option,-fstack-protector-strong)
+ CC?= $(CROSS_COMPILE)gcc
+-PKG_CONFIG?= pkg-config
++PKG_CONFIG?= $(CROSS_COMPILE)pkg-config
+ 
+ override CFLAGS+=-D VERSION=\"$(VERSION)\"
+ LDFLAGS+=
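
The most substantial change in the patch above is the libbpf fallback:
when bpf_obj_get_info_by_fd() fails with EINVAL (an older kernel
without BPF_OBJ_GET_INFO_BY_FD), the new bpf_get_map_info_from_fdinfo()
recovers the map attributes by parsing /proc/<pid>/fdinfo/<fd> instead.
A self-contained sketch of the same technique follows; the struct and
helper names here are illustrative only, not libbpf API, and error
handling is abbreviated:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

struct map_info {
	unsigned int type, key_size, value_size, max_entries, map_flags;
};

static int map_info_from_fdinfo(int fd, struct map_info *info)
{
	char path[PATH_MAX], line[256];
	unsigned int val;
	FILE *fp;

	snprintf(path, sizeof(path), "/proc/%d/fdinfo/%d", getpid(), fd);
	memset(info, 0, sizeof(*info));

	fp = fopen(path, "r");
	if (!fp)
		return -1;	/* no procfs, or fd is not open */

	/* each fdinfo line is "<name>:\t<value>"; pick out the map fields */
	while (fgets(line, sizeof(line), fp)) {
		if (sscanf(line, "map_type:\t%u", &val) == 1)
			info->type = val;
		else if (sscanf(line, "key_size:\t%u", &val) == 1)
			info->key_size = val;
		else if (sscanf(line, "value_size:\t%u", &val) == 1)
			info->value_size = val;
		else if (sscanf(line, "max_entries:\t%u", &val) == 1)
			info->max_entries = val;
		else if (sscanf(line, "map_flags:\t%i", &val) == 1)
			info->map_flags = val;
	}
	fclose(fp);
	return 0;
}

int main(int argc, char **argv)
{
	struct map_info info;
	int fd = argc > 1 ? atoi(argv[1]) : 3;	/* fd of a BPF map */

	if (map_info_from_fdinfo(fd, &info))
		return 1;
	printf("type=%u key=%u value=%u max_entries=%u flags=%#x\n",
	       info.type, info.key_size, info.value_size,
	       info.max_entries, info.map_flags);
	return 0;
}

The field names (map_type, key_size, value_size, max_entries,
map_flags) match what the kernel prints into fdinfo for BPF map file
descriptors, which is why the patch can rely on them as a fallback.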


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [gentoo-commits] proj/linux-patches:5.13 commit in: /
@ 2021-09-20 22:01 Mike Pagano
  0 siblings, 0 replies; 42+ messages in thread
From: Mike Pagano @ 2021-09-20 22:01 UTC (permalink / raw)
  To: gentoo-commits

commit:     56f1783c70dc1caea90b3de9433112cf163d133e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Sep 20 21:57:57 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Sep 20 22:01:34 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=56f1783c

Move USER_NS to GENTOO_LINUX_PORTAGE

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index d2175f0..74e80d3 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -65,6 +65,7 @@
 +	select NET_NS
 +	select PID_NS
 +	select SYSVIPC
++	select USER_NS
 +	select UTS_NS
 +
 +	help
@@ -145,7 +146,6 @@
 +	select TIMERFD
 +	select TMPFS_POSIX_ACL
 +	select TMPFS_XATTR
-+	select USER_NS
 +
 +	select ANON_INODES
 +	select BLOCK
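
In effect, after this one-line move the GENTOO_LINUX_PORTAGE option
selects USER_NS alongside NET_NS, PID_NS, SYSVIPC and UTS_NS, and the
option it previously lived under (its name is not visible in these
hunks) no longer does. To verify at runtime that a booted kernel was
actually built with user-namespace support, a minimal probe like the
following works (a sketch assuming a glibc system; not part of the
patch):

#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
	/* unshare(CLONE_NEWUSER) succeeds only if CONFIG_USER_NS=y */
	if (unshare(CLONE_NEWUSER) == 0) {
		puts("user namespaces available");
		return 0;
	}
	/* EINVAL usually means the kernel lacks CONFIG_USER_NS;
	 * EPERM can mean a sysctl or LSM policy forbids it instead. */
	printf("unshare(CLONE_NEWUSER) failed: errno=%d\n", errno);
	return 1;
}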


^ permalink raw reply related	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2021-09-20 22:01 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-08 13:55 [gentoo-commits] proj/linux-patches:5.13 commit in: / Alice Ferrazzi
  -- strict thread matches above, loose matches on Subject: below --
2021-09-20 22:01 Mike Pagano
2021-09-18 16:08 Mike Pagano
2021-09-17 12:49 Mike Pagano
2021-09-17 12:42 Mike Pagano
2021-09-16 11:02 Mike Pagano
2021-09-15 11:59 Mike Pagano
2021-09-12 14:37 Mike Pagano
2021-09-03 11:50 Mike Pagano
2021-09-03 11:19 Mike Pagano
2021-08-29 14:48 Mike Pagano
2021-08-26 14:33 Mike Pagano
2021-08-25 16:23 Mike Pagano
2021-08-24 20:00 Mike Pagano
2021-08-24 19:56 Mike Pagano
2021-08-21 14:27 Mike Pagano
2021-08-18 22:42 Mike Pagano
2021-08-18 12:45 Mike Pagano
2021-08-15 20:04 Mike Pagano
2021-08-13 14:30 Mike Pagano
2021-08-12 11:54 Mike Pagano
2021-08-10 12:13 Mike Pagano
2021-08-10 12:13 Mike Pagano
2021-08-08 13:35 Mike Pagano
2021-08-04 11:50 Mike Pagano
2021-08-03 11:03 Mike Pagano
2021-08-02 22:34 Mike Pagano
2021-07-31 10:28 Alice Ferrazzi
2021-07-28 13:23 Mike Pagano
2021-07-25 17:25 Mike Pagano
2021-07-20 15:51 Alice Ferrazzi
2021-07-19 11:15 Mike Pagano
2021-07-14 16:16 Mike Pagano
2021-07-13 12:35 Mike Pagano
2021-07-12 11:36 Mike Pagano
2021-07-07 13:26 Mike Pagano
2021-07-07 13:11 Mike Pagano
2021-07-05 14:11 Mike Pagano
2021-07-04 15:43 Mike Pagano
2021-07-01 14:29 Mike Pagano
2021-06-13 20:14 Mike Pagano
2021-05-25 17:49 Mike Pagano

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox